CN111696044B - Large-scene dynamic visual observation method and device - Google Patents

Large-scene dynamic visual observation method and device

Info

Publication number
CN111696044B
Authority
CN
China
Prior art keywords
event
view
feature point
stream data
image
Prior art date
Legal status
Active
Application number
CN202010548500.4A
Other languages
Chinese (zh)
Other versions
CN111696044A (en)
Inventor
高跃
李思奇
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010548500.4A
Publication of CN111696044A
Application granted
Publication of CN111696044B
Legal status: Active

Classifications

    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06F 17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/20221 — Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Multimedia (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a large-scene dynamic visual observation method comprising the following steps: a multi-view event data acquisition step, in which multi-view event stream data are acquired with an event camera array; a multi-view event image conversion step, in which the event stream data are converted into event count images; an event image feature point detection step, in which feature point detection and feature vector extraction are performed on the event count images; an event image feature point matching step, in which the matching relation between feature points is computed; a spatial coordinate transformation matrix calculation step, in which spatial coordinate transformation matrices are computed from the feature point matching relation; and a multi-view event stream data fusion step, in which spatial coordinate transformation is applied to the multi-view event stream data and the transformed data are stitched to obtain large-scene dynamic observation event stream data. On the basis of event-camera dynamic scene observation, the invention obtains a large-scene dynamic observation effect with a wider field of view.

Description

Large-scene dynamic visual observation method and device
Technical Field
The invention relates to the field of computer vision and computational photography, in particular to a large scene dynamic vision observation method and device.
Background
The event camera is a bio-inspired sensor whose working principle differs greatly from that of a conventional camera. Unlike a conventional camera, which captures the absolute light intensity of a scene at a fixed frame rate, an event camera outputs data if and only if the scene light intensity changes; this output is referred to as an event stream. Compared with a conventional camera, an event camera offers high dynamic range, high temporal resolution, and freedom from motion blur.
As a new type of vision sensor, the event camera outputs data in a form completely different from that of a conventional camera, so the various algorithms developed for conventional cameras and images cannot be applied directly. A conventional camera samples the light intensity of a scene at a fixed rate (the frame rate) and outputs picture data at that rate. An event camera has no notion of frame rate: each pixel works asynchronously and outputs an event whenever it detects a change in light intensity. Each event is a quadruple (x, y, t, p) comprising the pixel coordinates (x, y), a timestamp t, and an event polarity p, where p = -1 indicates that the light intensity at the pixel decreased and p = +1 indicates that it increased. The events output by all pixels are collected into an event list, which constitutes the event stream data output by the camera. Fig. 1 shows an example of 20 seconds of video data from a conventional camera and the corresponding event stream data output by an event camera. For these reasons, the algorithms and methods used with conventional cameras and conventional image processing cannot be applied directly to event cameras and event data.
A single conventional camera has a small field of view; this is addressed by using a multi-view camera array to acquire multi-view images and applying image registration and stitching techniques to achieve wide-field observation. A single event camera likewise has a small field of view, but because it outputs event stream data, which differ greatly from conventional camera output, existing image registration and stitching methods cannot be used directly. In other words, a method for the registration and fusion of multi-view event stream data is currently lacking.
Disclosure of Invention
To address the current lack of a method for the registration and fusion of multi-view event stream data, the invention provides a large-scene dynamic visual observation method and device that register and fuse the event stream data acquired by a multi-view event camera array, thereby obtaining event stream data covering a larger field of view.
The invention provides a large scene dynamic visual observation method which is characterized by comprising the following steps:
step 1, acquiring multi-view event data, and acquiring multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronous event camera array;
step 2, converting the multi-view event images, processing multi-view event stream data, converting the event stream data of each view into an event counting image of two channels, and counting the number of positive and negative events occurring at each pixel point of the event counting image;
step 3, event image feature point detection, namely performing feature point detection and feature description on the multi-view event counting image by using a Speeded Up Robust Features (SURF) algorithm, and acquiring a feature point set of the event counting image of each view and feature description of each feature point;
step 4, matching event image feature points, namely taking event stream data acquired by an event camera at a first view angle as a reference view angle and taking the other view angles as non-reference view angles, and sequentially performing feature matching on a feature point set of each non-reference view angle and a reference view angle feature point set to acquire a matching relation;
step 5, calculating a space coordinate transformation matrix, namely calculating the space coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the mutually matched feature point coordinates;
and 6, fusing multi-view event stream data, performing space coordinate transformation on the multi-view event stream data by using the space coordinate transformation matrix, splicing the transformed event stream data, and acquiring large-scene dynamic observation event stream data.
The invention also provides a large-scene dynamic visual observation device, comprising: a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image feature point detection unit, an event image feature point matching unit, a spatial coordinate transformation matrix calculation unit, and a multi-view event stream data fusion unit; the device is characterized in that:
a multi-view event data acquisition unit that acquires multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronized event camera array;
the multi-view event image conversion unit is used for processing multi-view event stream data, each multi-view event stream data is converted into an event counting image of two channels, and each pixel point of the event counting image counts the number of positive and negative events occurring at the pixel point;
the event image feature point detection unit performs feature point detection and feature description on the multi-view event count images using the Speeded-Up Robust Features (SURF) algorithm, and obtains the feature point set of each view's event count image together with the feature description of each feature point;
the event image feature point matching unit is used for taking event stream data acquired by the event camera at a first visual angle as a reference visual angle and taking the other visual angles as non-reference visual angles, and sequentially performing feature matching on a feature point set of each non-reference visual angle and a reference visual angle feature point set to acquire a matching relation;
the spatial coordinate transformation matrix calculation unit is used for calculating a spatial coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the feature point coordinates matched with each other;
and the multi-view event stream data fusion unit is used for carrying out space coordinate transformation on the multi-view event stream data by utilizing the space coordinate transformation matrix, splicing the transformed event stream data and acquiring the large-scene dynamic observation event stream data.
The invention has the following beneficial effects: it addresses the current lack of a method for the registration and fusion of multi-view event stream data. Starting from the small-field-of-view event stream data acquired by a multi-view event camera array, the large-scene dynamic visual observation method registers and fuses the multi-view event stream data, thereby achieving dynamic observation of a scene over a larger field of view.
Drawings
Fig. 1 is a schematic diagram of 20 seconds of video data obtained by a conventional camera and the corresponding event stream data output by an event camera.
FIG. 2 is a schematic flow chart of a large scene dynamic visual observation method of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to fig. 2.
As shown in fig. 2, the embodiment provides a large scene dynamic visual observation method for realizing registration and fusion of event stream data acquired by a multi-view event camera, including the following steps:
step 1, obtaining multi-view event stream data, wherein the multi-view event stream data is obtained by a synchronous event camera array;
in the step, an event camera array is built by using the existing event cameras with small viewing fields, and each camera is synchronously triggered to acquire multi-view event stream data. Event cameras are a new type of biomimetic camera that outputs data if and only if the scene light intensity changes, such output data being referred to as an event stream. The event stream data output by the event camera may be represented in the form of formula (1):
ε = {e_i} = {(x_i, y_i, t_i, p_i)}, i = 1, 2, ..., M    (1)

where ε is the event stream set, i indexes the events in the stream, x_i is the spatial abscissa of the ith event, y_i is its spatial ordinate, t_i is its timestamp, and p_i is its polarity: p_i = +1 indicates that the light intensity at the pixel increased, and p_i = -1 indicates that it decreased.
Assuming that the event camera array includes N event cameras in total, numbered 1 to N, the multi-view event stream data can be recorded as:
E = {ε_1, ε_2, ..., ε_N}    (2)

where ε_i represents the event stream data output by the ith event camera.
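For illustration only (not part of the claimed method), a compact in-memory representation of the per-view streams ε_i and of E is sketched below in Python; the structured-array layout and the sample values are assumptions made purely for this example.

    import numpy as np

    # Assumed layout: one record (x, y, t, p) per event, matching formula (1).
    event_dtype = np.dtype([('x', np.int32), ('y', np.int32),
                            ('t', np.float64), ('p', np.int8)])

    # Tiny synthetic per-view streams forming E = {eps_1, ..., eps_N} (here N = 2).
    eps_1 = np.array([(10, 20, 0.001, +1), (11, 20, 0.002, -1)], dtype=event_dtype)
    eps_2 = np.array([(300, 45, 0.001, +1)], dtype=event_dtype)
    E = [eps_1, eps_2]  # multi-view event stream data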
Step 2, converting the multi-view event images, processing multi-view event stream data, converting the event stream data of each view into an event counting image of two channels, and counting the number of positive and negative events occurring at each pixel point of the event counting image;
in this step, each pixel point of the event counting image counts the number of positive and negative events occurring at the pixel point, and a specific conversion formula is as follows:
I(x, y, p) = Σ_{i=1}^{M} δ(x - x_i) · δ(y - y_i) · δ(p - p_i)

where I is the event count image (one channel per polarity), M is the total number of events in the event stream data, and δ is the unit pulse function.
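As an illustrative sketch (not part of the claims), the conversion above can be written as follows, assuming the structured-array representation from the earlier example and a known sensor resolution; the function name events_to_count_image is a hypothetical placeholder.

    import numpy as np

    def events_to_count_image(events, height, width):
        """Two-channel event count image: channel 0 counts positive events,
        channel 1 counts negative events, accumulated per pixel."""
        img = np.zeros((2, height, width), dtype=np.int32)
        pos = events['p'] > 0
        # np.add.at accumulates correctly even when a pixel fires several times
        np.add.at(img[0], (events['y'][pos], events['x'][pos]), 1)
        np.add.at(img[1], (events['y'][~pos], events['x'][~pos]), 1)
        return img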
Step 3, event image feature point detection is completed: feature point detection and feature description are performed on the multi-view event count images using the Speeded-Up Robust Features (SURF) algorithm, and the feature point set of each view's event count image and its corresponding feature descriptions are obtained;
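For illustration, SURF detection on a count image might look like the sketch below using OpenCV. SURF lives in the opencv-contrib xfeatures2d module and may be unavailable in some builds; collapsing the two polarity channels into a single 8-bit image before detection is an assumption of this sketch, not something the patent prescribes.

    import cv2
    import numpy as np

    def detect_surf_features(count_image):
        """Detect SURF keypoints and descriptors on a two-channel event count image."""
        # Collapse the polarity channels and rescale to 8-bit for the detector
        total = count_image.sum(axis=0).astype(np.float32)
        gray = cv2.normalize(total, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        keypoints, descriptors = surf.detectAndCompute(gray, None)
        return keypoints, descriptors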
step 4, matching event image feature points, taking event stream data acquired by an event camera at a first view angle as a reference view angle, taking the other view angles as non-reference view angles, and performing feature matching on the feature point set of each non-reference view angle and the reference view angle feature point set in sequence to acquire a matching relation;
in this step, the feature point matching is obtained by using the following algorithm:
Figure BDA0002541593530000052
Figure BDA0002541593530000053
wherein, A and B are respectively a characteristic point set extracted from the event counting image of a reference visual angle a and any non-reference visual angle B, AiRepresents the ith feature point in the feature point set A,
Figure BDA0002541593530000054
a feature vector representing the feature point, BjRepresents the jth feature point in the feature point set B,
Figure BDA0002541593530000055
a feature vector representing the feature point, for the feature point A in the feature point set AiThe matched characteristic points are characteristic vectors B in the characteristic point set Bj
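A minimal sketch of this nearest-neighbour matching with OpenCV's brute-force matcher is given below; the Lowe-style ratio test is an added robustness heuristic assumed for the example, not a step stated in the patent.

    import cv2

    def match_features(ref, other, ratio=0.75):
        """ref, other: (keypoints, descriptors) from the reference view and a
        non-reference view. Returns matched coordinate pairs ((x_a, y_a), (x_b, y_b))."""
        kp_a, des_a = ref
        kp_b, des_b = other
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        knn = matcher.knnMatch(des_a, des_b, k=2)   # two nearest neighbours per query
        pairs = []
        for m, n in knn:
            if m.distance < ratio * n.distance:      # ratio test (assumption)
                pairs.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
        return pairs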
Step 5, the spatial coordinate transformation matrix is calculated: the spatial coordinate transformation matrix of each non-reference view relative to the reference view is computed from the coordinates of the mutually matched feature points.
In this step, the spatial coordinate transformation matrix is obtained by the following algorithm:
H_{ab} = argmin_H Σ_{i=1}^{n} || P_{A_i} - H · P_{B_i} ||^2

where P_{A_i} is the coordinate of the ith feature point A_i in the feature point set A extracted from the event count image of the reference view a, P_{B_i} is the coordinate of the feature point B_i in the feature point set B, extracted from the event count image of any non-reference view b, that is matched with A_i, n is the total number of matched feature point pairs between the feature point set A and the feature point set B, and H_{ab} is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
Step 6, multi-view event stream data fusion is completed: the spatial coordinate transformation matrices are used to apply spatial coordinate transformation to the multi-view event stream data, the transformed event stream data are stitched, and large-scene dynamic observation event stream data are obtained.
In this step, the spatial coordinate transformation matrices obtained in step 5 are used to transform the spatial coordinates of every event in the event stream data of the non-reference views; the coordinate-transformed multi-view event stream data are merged, repeated events are removed, and the large-scene dynamic observation event stream data are obtained by the following algorithm:
ε = ε_1 ∪ ε′_2 ∪ ... ∪ ε′_N

ε′_j = {(x′_i^j, y′_i^j, t_i^j, p_i^j)}

(x′_i^j, y′_i^j, 1)^T = H_{1j} · (x_i^j, y_i^j, 1)^T

where ε is the stitched large-scene dynamic observation event stream, ε_1 is the event stream data collected by the event camera of the reference view, ε′_j is the event stream data obtained by applying the spatial coordinate transformation matrix operation to the event stream data collected by the event camera of the jth view, N is the number of event cameras in the multi-view event camera array, H_{1j} is the spatial coordinate transformation matrix of the jth non-reference view relative to the reference view, x_i^j and y_i^j are the original spatial abscissa and ordinate of the ith event in the event data collected by the event camera of the jth view, t_i^j is the timestamp of that event, p_i^j is its polarity, and x′_i^j and y′_i^j are its abscissa and ordinate after the spatial coordinate transformation matrix operation.
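A sketch of the fusion step under the same representation assumptions: each H_{1j} is applied to the non-reference events in homogeneous coordinates, the result is rounded back to pixel positions, and the streams are concatenated with duplicate events removed.

    import numpy as np

    def fuse_event_streams(event_streams, homographies):
        """event_streams[0] is the reference view; homographies[j] holds H_1j
        (None for the reference view). Returns the merged large-scene event stream."""
        fused = [event_streams[0]]
        for events, H in zip(event_streams[1:], homographies[1:]):
            pts = np.stack([events['x'], events['y'], np.ones(len(events))])
            warped = H @ pts
            warped /= warped[2]                       # normalize homogeneous coordinates
            out = events.copy()
            out['x'] = np.rint(warped[0]).astype(np.int32)
            out['y'] = np.rint(warped[1]).astype(np.int32)
            fused.append(out)
        merged = np.unique(np.concatenate(fused))     # concatenate and drop repeated events
        return np.sort(merged, order='t')             # order the fused stream by timestamp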
The embodiment also provides a large scene dynamic visual observation device, which includes: the system comprises a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image characteristic point detection unit, an event image characteristic point matching unit, a spatial coordinate transformation matrix calculation unit and a multi-view event stream data fusion unit;
a multi-view event data acquisition unit that acquires multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronized event camera array;
the multi-view event image conversion unit is used for processing multi-view event stream data, each multi-view event stream data is converted into an event counting image of two channels, and each pixel point of the event counting image counts the number of positive and negative events occurring at the pixel point;
the event image feature point detection unit performs feature point detection and feature description on the multi-view event count images using the Speeded-Up Robust Features (SURF) algorithm, and obtains the feature point set of each view's event count image together with the feature description of each feature point;
the event image feature point matching unit is used for taking event stream data acquired by the event camera at a first visual angle as a reference visual angle and taking the other visual angles as non-reference visual angles, and sequentially performing feature matching on a feature point set of each non-reference visual angle and a reference visual angle feature point set to acquire a matching relation;
the spatial coordinate transformation matrix calculation unit is used for calculating a spatial coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the feature point coordinates matched with each other;
and the multi-view event stream data fusion unit is used for carrying out space coordinate transformation on the multi-view event stream data by utilizing the space coordinate transformation matrix, splicing the transformed event stream data and acquiring the large-scene dynamic observation event stream data.
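For illustration only, the six units can be read as a simple software pipeline; the sketch below composes the hypothetical per-step functions from the method description into one class and makes no claim about the actual implementation of the device.

    class LargeSceneObserver:
        """Illustrative composition of the six units; the per-step functions are
        the hypothetical sketches shown earlier in the description."""

        def __init__(self, height, width):
            self.height, self.width = height, width  # sensor resolution of each camera

        def run(self, event_streams):
            # Unit 1 output: event_streams, already acquired by the synchronized array
            images = [events_to_count_image(es, self.height, self.width)
                      for es in event_streams]                          # unit 2
            features = [detect_surf_features(img) for img in images]    # unit 3
            homographies = [None]
            for feat in features[1:]:
                pairs = match_features(features[0], feat)               # unit 4
                homographies.append(estimate_homography(pairs))         # unit 5
            return fuse_event_streams(event_streams, homographies)      # unit 6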
The large-scene dynamic visual observation device described in this embodiment is used to execute the large-scene dynamic visual observation method above, with each of the related algorithms running in the corresponding unit of the device.
Although the present invention has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and not restrictive of the application of the present invention. The scope of the invention is defined by the appended claims and may include various modifications, adaptations and equivalents of the invention without departing from its scope and spirit.

Claims (10)

1. A large scene dynamic visual observation method is characterized by comprising the following steps:
step 1, acquiring multi-view event data, and acquiring multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronous event camera array;
step 2, converting the multi-view event images, processing multi-view event stream data, converting the event stream data of each view into an event counting image of two channels, and counting the number of positive and negative events occurring at each pixel point of the event counting image;
step 3, detecting event image feature points, namely performing feature point detection and feature description on the multi-view event counting image by using the Speeded-Up Robust Features (SURF) algorithm, and acquiring a feature point set of the event counting image of each view and a feature description of each feature point;
step 4, matching event image feature points, namely taking event stream data acquired by an event camera at a first view angle as a reference view angle and taking the other view angles as non-reference view angles, and sequentially performing feature matching on a feature point set of each non-reference view angle and a reference view angle feature point set to acquire a matching relation;
step 5, calculating a space coordinate transformation matrix, namely calculating the space coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the mutually matched feature point coordinates;
and 6, fusing multi-view event stream data, performing space coordinate transformation on the multi-view event stream data by using the space coordinate transformation matrix, splicing the transformed event stream data, and acquiring large-scene dynamic observation event stream data.
2. The large scene dynamic visual observation method of claim 1, wherein in step 2, the event count image specific conversion formula is as follows:
I(x, y, p) = Σ_{i=1}^{N} δ(x - x_i) · δ(y - y_i) · δ(p - p_i)

wherein I is the event count image, x_i is the spatial abscissa of the ith event, y_i is the spatial ordinate of the ith event, p_i is the polarity of the ith event, N is the total number of events in the event stream data, and δ is the unit pulse function.
3. The large-scene dynamic visual observation method according to claim 1, wherein in step 4, the feature point matching relationship is obtained by using the following algorithm:
B_{j*}, with j* = argmin_j || v_{A_i} - v_{B_j} ||_2

wherein A and B are the feature point sets extracted from the event count images of the reference view a and of any non-reference view b, respectively, A_i represents the ith feature point in the feature point set A and v_{A_i} its feature vector, and B_j represents the jth feature point in the feature point set B and v_{B_j} its feature vector; for the feature point A_i in the feature point set A, the matched feature point is the feature point B_{j*} in the feature point set B.
4. The large-scene dynamic visual observation method of claim 1, wherein in step 5, the spatial coordinate transformation matrix is obtained by the following algorithm:
H_{ab} = argmin_H Σ_{i=1}^{n} || P_{A_i} - H · P_{B_i} ||^2

wherein P_{A_i} is the coordinate of the ith feature point A_i in the feature point set A extracted from the event count image of the reference view a, P_{B_i} is the coordinate of the feature point B_i in the feature point set B, extracted from the event count image of any non-reference view b, that is matched with A_i, n is the total number of matched feature point pairs between the feature point set A and the feature point set B, and H_{ab} is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
5. The large-scene dynamic visual observation method of claim 1, wherein in step 6, the large-scene dynamic observation event stream data is obtained by the following algorithm:
ε=ε1Uε′2U…Uε′N
Figure FDA0003604147940000028
Figure FDA0003604147940000031
wherein epsilon is the spliced large fieldScene dynamic observation of event stream, epsilon1Event stream data, ε ', collected by an event camera for a reference view'jEvent stream data obtained by performing space coordinate transformation matrix operation on event stream data acquired by the event camera at the jth view angle, wherein N is the number of event cameras in the multi-view-angle event camera array, and H1jSpatial coordinate transformation matrix for jth non-reference view relative to reference view, xi jRaw spatial abscissa, y, of ith event in event data collected for event camera of jth view anglei jRaw spatial ordinate, t, of the ith event in the event data collected by the event camera for the jth view anglei jTimestamp, p, triggered by the ith event in event data collected by an event camera for the jth viewi jThe polarity of the ith event in the event data collected by the event camera for the jth view angle,
Figure FDA0003604147940000032
obtaining an abscissa of an ith event in event data collected by an event camera at a jth view angle after spatial coordinate transformation matrix operation,
Figure FDA0003604147940000033
and obtaining a vertical coordinate after the ith event in the event data collected by the event camera at the jth visual angle is subjected to space coordinate transformation matrix operation.
6. A large-scene dynamic visual observation apparatus comprising: a multi-view event data acquisition unit, a multi-view event image conversion unit, an event image feature point detection unit, an event image feature point matching unit, a spatial coordinate transformation matrix calculation unit, and a multi-view event stream data fusion unit; the apparatus is characterized in that:
a multi-view event data acquisition unit that acquires multi-view event stream data, wherein the multi-view event stream data is acquired by a synchronized event camera array;
the multi-view event image conversion unit is used for processing multi-view event stream data, converting the multi-view event stream data into a two-channel event counting image, and counting the number of positive and negative events occurring at each pixel point by each pixel point of the event counting image;
the event image feature point detection unit is used for carrying out feature point detection and feature description on the multi-view event counting image by utilizing the Speeded-Up Robust Features (SURF) algorithm and acquiring a feature point set of the event counting image of each view and a feature description of each feature point;
the event image feature point matching unit is used for taking event stream data acquired by the event camera at a first visual angle as a reference visual angle and taking the other visual angles as non-reference visual angles, and sequentially performing feature matching on a feature point set of each non-reference visual angle and a reference visual angle feature point set to acquire a matching relation;
the spatial coordinate transformation matrix calculation unit is used for calculating a spatial coordinate transformation matrix of each non-reference visual angle relative to the reference visual angle according to the feature point coordinates matched with each other;
and the multi-view event stream data fusion unit is used for carrying out space coordinate transformation on the multi-view event stream data by utilizing the space coordinate transformation matrix, splicing the transformed event stream data and acquiring the large-scene dynamic observation event stream data.
7. The large-scene dynamic visual observation device of claim 6, wherein the event count image is obtained using the following algorithm:
I(x, y, p) = Σ_{i=1}^{N} δ(x - x_i) · δ(y - y_i) · δ(p - p_i)

wherein I is the event count image, x_i is the spatial abscissa of the ith event, y_i is the spatial ordinate of the ith event, p_i is the polarity of the ith event, N is the total number of events in the event stream data, and δ is the unit pulse function.
8. The large-scene dynamic visual observation device of claim 6, wherein the feature point matching relationship is obtained using the following algorithm:
B_{j*}, with j* = argmin_j || v_{A_i} - v_{B_j} ||_2

wherein A and B are the feature point sets extracted from the event count images of the reference view a and of any non-reference view b, respectively, A_i represents the ith feature point in the feature point set A and v_{A_i} its feature vector, and B_j represents the jth feature point in the feature point set B and v_{B_j} its feature vector; for the feature point A_i in the feature point set A, the matched feature point is the feature point B_{j*} in the feature point set B.
9. The large scene dynamic visual observation apparatus of claim 6, wherein the spatial coordinate transformation matrix is obtained by the following algorithm:
H_{ab} = argmin_H Σ_{i=1}^{n} || P_{A_i} - H · P_{B_i} ||^2

wherein P_{A_i} is the coordinate of the ith feature point A_i in the feature point set A extracted from the event count image of the reference view a, P_{B_i} is the coordinate of the feature point B_i in the feature point set B, extracted from the event count image of any non-reference view b, that is matched with A_i, n is the total number of matched feature point pairs between the feature point set A and the feature point set B, and H_{ab} is the spatial coordinate transformation matrix of the non-reference view b relative to the reference view a.
10. The large-scene dynamic visual observation device of claim 6, wherein the large-scene dynamic observation event stream data is obtained by the following algorithm:
ε=ε1Uε′2U…Uε′N
Figure FDA0003604147940000054
Figure FDA0003604147940000055
wherein epsilon is a spliced large scene dynamic observation event stream, epsilon1Event stream data, ε ', collected by an event camera for a reference view'jEvent stream data obtained by performing space coordinate transformation matrix operation on event stream data acquired by the event camera at the jth view angle, wherein N is the number of event cameras in the multi-view-angle event camera array, and H1jSpatial coordinate transformation matrix for jth non-reference view relative to reference view, xi jRaw spatial abscissa, y, of ith event in event data collected for event camera of jth view anglei jRaw spatial ordinate, t, of the ith event in the event data collected by the event camera for the jth view anglei jTimestamp, p, triggered by the ith event in event data collected by an event camera for the jth viewi jThe polarity of the ith event in the event data collected by the event camera for the jth view angle,
Figure FDA0003604147940000061
obtaining an abscissa of an ith event in event data collected by an event camera at a jth view angle after spatial coordinate transformation matrix operation,
Figure FDA0003604147940000062
and obtaining a vertical coordinate after the ith event in the event data collected by the event camera at the jth visual angle is subjected to space coordinate transformation matrix operation.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010548500.4A CN111696044B (en) 2020-06-16 2020-06-16 Large-scene dynamic visual observation method and device

Publications (2)

Publication Number Publication Date
CN111696044A CN111696044A (en) 2020-09-22
CN111696044B (en) 2022-06-10

Family

ID=72481264

Country Status (1)

Country Link
CN (1) CN111696044B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800860B (en) * 2021-01-08 2023-10-17 中电海康集团有限公司 High-speed object scattering detection method and system with coordination of event camera and visual camera
CN112966556B (en) * 2021-02-02 2022-06-10 豪威芯仑传感器(上海)有限公司 Moving object detection method and system
CN113220251B (en) * 2021-05-18 2024-04-09 北京达佳互联信息技术有限公司 Object display method, device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578376A (en) * 2017-08-29 2018-01-12 北京邮电大学 The fork division of distinguished point based cluster four and the image split-joint method of local transformation matrix
WO2019221580A1 (en) * 2018-05-18 2019-11-21 Samsung Electronics Co., Ltd. Cmos-assisted inside-out dynamic vision sensor tracking for low power mobile platforms
CN109829853A (en) * 2019-01-18 2019-05-31 电子科技大学 A kind of unmanned plane image split-joint method
CN111126125A (en) * 2019-10-15 2020-05-08 平安科技(深圳)有限公司 Method, device and equipment for extracting target text in certificate and readable storage medium
CN110727700A (en) * 2019-10-22 2020-01-24 中信银行股份有限公司 Method and system for integrating multi-source streaming data into transaction type streaming data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ding Hui et al., "Image registration algorithm fusing GMS and VCS+GC-RANSAC," Journal of Computer Applications (计算机应用), 2020, No. 4. *
Liu Zhaoxia et al., "Application of graph structures to feature point matching in aerial remote sensing images," Computer Engineering and Applications (计算机工程与应用), 2018, No. 1. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant