CN105160703A - Optical flow computation method using time domain visual sensor - Google Patents

Optical flow computation method using time domain visual sensor

Info

Publication number
CN105160703A
Authority
CN
China
Prior art keywords: time, optical flow, formula, pixel, gradient
Prior art date
Legal status: Granted
Application number
CN201510525146.2A
Other languages
Chinese (zh)
Other versions
CN105160703B (en)
Inventor
胡燕翔 (Hu Yanxiang)
Current Assignee
Tianjin Normal University
Original Assignee
Tianjin Normal University
Application filed by Tianjin Normal University filed Critical Tianjin Normal University
Priority to CN201510525146.2A priority Critical patent/CN105160703B/en
Publication of CN105160703A publication Critical patent/CN105160703A/en
Application granted granted Critical
Publication of CN105160703B publication Critical patent/CN105160703B/en
Current legal status: Expired - Fee Related


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an optical flow computation method using a time-domain visual sensor and gives concrete implementation steps. Unlike current methods that compute optical flow from frame-sequence images (video), the disclosed method uses a visual information acquisition device, the time-domain visual sensor, to compute the optical flow of the field of view. Because each pixel autonomously performs change detection and asynchronous output, the motion and changes of objects in the scene can be perceived in real time, so the continuity assumptions of "constant brightness and constant local velocity" in differential optical flow computation are well satisfied; the real-time performance of the optical flow computation is greatly improved while its precision is significantly improved. The method is therefore well suited to optical flow computation on high-speed moving objects and to their subsequent tracking and speed measurement.

Description

An optical flow computation method using a time-domain visual sensor
Technical field
The present invention relates to several technical fields, including computer vision, image processing, and image sensor design; in particular, it is a method for optical flow computation using a time-domain visual sensor.
Background technology
Optical flow
Optical flow (also called image flow) refers to the apparent motion produced when moving targets in three-dimensional space are imaged on the two-dimensional image plane; this two-dimensional projection manifests as a "flow" of image brightness, hence the name optical flow. The purpose of studying the optical flow field is to approximate the three-dimensional motion field, which cannot be obtained directly from the image sequence.
Optical flow analysis is one of the important research directions of video analysis; it enables effective moving-object detection, tracking, and segmentation. It is widely applied in fields such as robotics, the military, aerospace, industry and transportation, medicine, and meteorology. Machine vision systems perform various vision-based operations by applying optical flow analysis to a scene, for example autonomous robot navigation and obstacle avoidance, automatic landing and path planning of unmanned spacecraft, and precise missile guidance and target selection. Current optical flow research mainly concentrates on implementing algorithms on special-purpose hardware platforms and on the design of new algorithms.
Gibson, Wallach, and others first proposed in the 1950s the hypothesis that spatial three-dimensional motion and structural parameters can be recovered from the two-dimensional optical flow field. In 1981, Horn and Schunck proposed the first practical and effective optical flow computation method, which became the cornerstone of optical flow algorithm development. Research on optical flow subsequently became a focus of the computer vision field and produced a large body of results. These methods can be divided into several broad classes: differential methods, matching methods, energy-based methods, phase-based methods, and neurodynamic methods. Among them, the differential methods offer good overall performance: the computational load is relatively small and the results are good, so they are widely used in practice. They compute the velocity of each pixel from the temporal and spatial derivatives (gradient functions) of the gray values of a time-varying image (video). The differential methods mainly include: 1. the Horn-Schunck global smoothness method; 2. the Lucas-Kanade local smoothness method; 3. specific algorithms such as Nagel's oriented smoothness method.
Differential optical flow is based on the brightness constancy assumption. This assumption holds that objects move relatively continuously in space, so the image projected onto the retina (or image sensor) changes continuously during the motion. Concretely: for a target moving through a group of consecutive two-dimensional images, the pixels along its motion trajectory have the same gray value in every frame. Let I(x, y, t) be the gray value of pixel (x, y) in the image at time t, where (x, y) is the image of the object point (X, Y, Z) at time t. At time t+Δt the object point has moved to (X+ΔX, Y+ΔY, Z+ΔZ), its image point has moved to (x+Δx, y+Δy), and the gray value there is I(x+Δx, y+Δy, t+Δt). When the time increment is very small, the brightness constancy assumption states that the pixel gray value remains unchanged:

$$I(x+\Delta x,\ y+\Delta y,\ t+\Delta t) = I(x,\ y,\ t)$$
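The source leaves the intermediate step implicit; the standard derivation (a first-order Taylor expansion, as used throughout differential optical flow) connects brightness constancy to the constraint equation below:

$$I(x+\Delta x,\ y+\Delta y,\ t+\Delta t) \approx I(x, y, t) + \frac{\partial I}{\partial x}\Delta x + \frac{\partial I}{\partial y}\Delta y + \frac{\partial I}{\partial t}\Delta t$$

Subtracting I(x, y, t) from both sides and dividing by Δt gives a linear relation in the image-plane velocities.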
Writing u = Δx/Δt and v = Δy/Δt for the velocity components of the image point (x, y) in the x and y directions at time t, the linear equation in the optical flow (u, v) is:

$$I_x u + I_y v + I_t = 0$$
The above formula is the optical flow constraint equation, where $I_x$ and $I_y$ are the spatial gradients of brightness at point (x, y) and $I_t$ is the time gradient at that point; all three values can be obtained from successive image frames. Because the equation contains the two unknowns u and v, it is ill-posed (no unique solution exists). This is a consequence of projecting three-dimensional motion in space onto a two-dimensional plane; therefore, to solve for u and v at each pixel, additional constraints must be introduced, such as the global smoothness assumption proposed by Horn-Schunck or the local smoothness assumption proposed by Lucas-Kanade.
According to the local smoothness assumption proposed by Lucas-Kanade, the pixels in a small region centered at (x, y) share the same velocity (u, v), that is:

$$I_{x_i} u + I_{y_i} v = -I_{t_i}, \qquad i = 1, 2, \dots, m$$
In the above formula, $I_1, I_2, \dots, I_m$ are the neighboring pixels within a small region (usually a small window is taken). Because m > 2, the system is over-determined and the optical flow (u, v) can be solved by the least mean square error (LMSE) method.
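As an aid the source does not spell out, the LMSE solution can be written in matrix form: stacking the m constraints as $A\,[u\ v]^{\top} = b$, with row i of A equal to $(I_{x_i}, I_{y_i})$ and $b_i = -I_{t_i}$, the least-squares estimate follows from the normal equations:

$$\begin{bmatrix} u \\ v \end{bmatrix} = (A^{\top}A)^{-1} A^{\top} b, \qquad A^{\top}A = \begin{bmatrix} \sum_i I_{x_i}^2 & \sum_i I_{x_i} I_{y_i} \\ \sum_i I_{x_i} I_{y_i} & \sum_i I_{y_i}^2 \end{bmatrix}$$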
In frame-based optical flow analysis methods, $I_x$, $I_y$, and $I_t$ at pixel (i, j) can be computed with formulas (6)–(8), in which the parameters have the following meanings: F(x, y, t) denotes the brightness at point (x, y) of the video frame at time t, and the pixel layout is shown in Figure 3. The least error over the small region Ω can be computed with the following formula:
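The error formula itself appears only as an image in the source; a plausible reconstruction, consistent with the Gaussian weighting described for Fig. 4 below, is:

$$E(u, v) = \sum_{(i,j) \in \Omega} W(i, j)\,\bigl(I_x(i,j)\,u + I_y(i,j)\,v + I_t(i,j)\bigr)^2$$

where W(i, j) is a two-dimensional Gaussian weight centered on the region.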
The distribution of the two-dimensional Gaussian function is shown in Fig. 4.
Using the above formulas, the LMSE method computes the optical flow (u, v) that minimizes the optical flow error over the small region Ω.
Although research on differential optical flow techniques has produced a large body of results and seen extensive use in practical engineering, it still faces the following difficulties:
(1) The brightness constancy assumption is inappropriate for most natural video, particularly when occlusions are present in the image or the motion speed is high;
(2) When occlusions are present, the velocity field can change abruptly, and the various smoothness constraints can distort object shapes;
(3) Differential optical flow presupposes that the image is continuously differentiable; large spatial gray-level gradients in the image can severely degrade the precision of the optical flow analysis.
From the above analysis, the image capture frame rate is a key factor affecting optical flow accuracy. Because the capture speed of common cameras is 30–60 frames per second, optical flow analysis of high-speed moving objects clearly conflicts with the continuity assumption. If the capture speed were fast enough (the interval between frames small enough), the above problems could be effectively solved; at the same time, however, a high frame rate makes the computational load soar, so real-time optical flow analysis becomes difficult to achieve.
Vision sensors
According to the "light-to-electricity conversion" principle used for imaging, the image sensor chips in current use (CCD and CMOS) all operate in a "frame sampling" mode:
(1) After a reset, all pixels start exposure (collecting photo-generated charge) simultaneously and stop once the set exposure time is reached;
(2) The charge collected by each pixel is read out in turn and converted to a voltage;
(3) The voltage is digitized by analog-to-digital conversion and then output and stored. The digital value is the brightness of that point, and the two-dimensional matrix formed by all pixel brightness values is the captured image.
In machine vision systems built on such "frame sampling" image sensor cameras, the capture speed of the image sequence (video) is generally 30–60 frames per second; a computer then runs image processing algorithms to extract targets and perform discrimination and analysis.
The "frame sampling" imaging mode has the following shortcomings:
(1) Background data redundancy. Adjacent frames contain a large amount of redundant background information: unchanged background regions are repeatedly sampled and read out, placing immense pressure on the system's processing and storage capacity. The higher the capture speed, the greater the transmission, storage, and processing pressure;
(2) High response latency. Changes in the scene cannot be perceived and output by the image sensor immediately; they are perceived and output only at the rhythm of the "frame". This latency is very unfavorable for recognizing and tracking high-speed moving objects: the faster the motion, the more pronounced the discontinuity and error of the detection results.
In recent years, following the "change sampling" principle of biological vision, researchers have used very-large-scale integration (VLSI) technology to design "vision sensors" (Vision Sensor, VS) with a novel mode of operation. Their principles include:
(1) A biological vision system does not form images "frame" by "frame"; retinal photoreceptor cells are sensitive only to changes, and these changes are delivered to the visual cortex of the brain as nerve impulses for processing;
(2) Mimicking the imaging mechanism of biological vision, a VS pixel samples and outputs only the "activity events" (Activity Event, AE) in the scene. By their nature, AEs fall into two broad classes: spatial changes (the brightness relationship between a pixel and its neighbors changes) and temporal changes (the brightness of the pixel itself changes). A vision sensor sensitive to temporal changes is called a temporal vision sensor (Temporal Vision Sensor, TVS);
(3) Each pixel in a TVS independently detects whether the light intensity it experiences has changed. Concretely, each pixel periodically measures the change of its photocurrent per unit time; when that change exceeds a set threshold within the unit time, the light intensity at the point is deemed to have changed, so each AE indicates a fixed amount of light intensity change. The AEs produced by the pixels are output asynchronously over a serial bus, independently of one another;
(4) AEs are usually expressed with the method called "address event representation" (Address-Event-Representation, AER), i.e. AE = (x, y, P), where (x, y) is the row/column address of the pixel in the pixel array and P denotes the attribute of the change (e.g. "1" for a light intensity increase, "0" for a decrease);
(5) The back-end system attaches a timestamp T to each AE output by the TVS, indicating the exact output time of that AE, i.e. AE = (x, y, P, T).
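As an illustrative sketch (not part of the patent), the AER representation described above maps naturally onto a small data structure; the threshold value, field types, and the software-level event detector below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class AE:
    """Address-event representation: pixel address, change attribute, timestamp."""
    x: int    # column address in the pixel array
    y: int    # row address in the pixel array
    p: int    # attribute: 1 = light intensity increase, 0 = decrease
    t: float  # timestamp attached by the back-end system (seconds)

def detect_event(prev_current, curr_current, x, y, t, threshold=0.1):
    """Emit one AE when the per-pixel photocurrent change exceeds the threshold.

    A real TVS does this in analog circuitry inside each pixel; `threshold`
    here is a hypothetical value chosen only to make the sketch concrete.
    """
    delta = curr_current - prev_current
    if abs(delta) > threshold:
        return AE(x=x, y=y, p=1 if delta > 0 else 0, t=t)
    return None
```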
From the above introduction, compared with traditional "frame sampling" image sensors, the most outstanding advantages of a TVS are:
(1) Small output data volume and no redundant information. The output contains only the change information in the scene; the data volume is generally 5–10% of that of the "frame sampling" mode;
(2) High real-time performance. A pixel can perceive a brightness change and output it immediately; the "change-to-output" delay can be reduced to the microsecond level, equivalent to a capture speed of several thousand to several tens of thousands of frames. Fig. 1 compares the capture results of a common "frame sampling" image sensor and a temporal vision sensor.
Summary of the invention
The present invention proposes a differential optical flow computation method using a temporal vision sensor (TVS). It uses the TVS as the visual input source and, based on the differential optical flow analysis principles of "brightness constancy" and "constant local velocity", uses the sequence of activity events (AE) representing brightness changes to compute the spatial and temporal gradients of the illumination change, then carries out the optical flow computation and analysis by the least squares method. To realize this aim, the invention discloses the following technical content:
An optical flow computation method using a temporal vision sensor, characterized in that, on the basis of the local smoothness assumption proposed by Lucas-Kanade, the AE sequence output by the TVS is used to complete the computation of the spatial and temporal gradients, comprising:
(1) Computing the spatial gradient:
$I_x$ and $I_y$ are the spatial gradients (changes) of brightness at pixel (x, y). The spatial gradient is computed from the difference between the AE counts accumulated by each pixel and by its neighboring pixels over a past period Δt. Taking into account the detected line width, the TVS noise characteristics, and the need for real-time computation, the spatial gradient formulas are:
In the above formulas, AE(x, y, t) denotes the AE produced at pixel (x, y) at time t, and Δt is the counting interval set by the algorithm according to the motion speed of the target; the recommended value is 50–200 μs, which in frame-sampling terms corresponds to a temporal resolution of 20000–5000 frames per second;
(2) Computing the time gradient:
The time gradient represents the rate of change of the light intensity experienced by a pixel; its formula is:
The above formula obtains the time gradient at point (x, y) as the ratio of the total number of AEs produced by pixel (x, y) within the time interval Δt to Δt.
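A minimal sketch of this gradient computation, assuming the AE type from the earlier sketch. The exact formulas (1)–(3) appear only as images in the source, so the central-difference neighbor offsets and polarity-signed counting below are assumptions consistent with the surrounding text:

```python
def gradients_from_events(events, x, y, t_now, dt=100e-6):
    """Estimate I_x, I_y, I_t at pixel (x, y) from AEs within the last dt.

    `events` is an iterable of AE records (see the AE sketch above).
    Counts are signed by attribute (+1 for increase, -1 for decrease),
    matching the text's point that one-directional AE counts encode
    brightness differences. dt defaults to 100 us, inside the
    recommended 50-200 us interval.
    """
    def signed_count(px, py):
        return sum((1 if e.p == 1 else -1)
                   for e in events
                   if e.x == px and e.y == py and t_now - dt <= e.t <= t_now)

    # Spatial gradients: difference of accumulated AE counts between a
    # pixel's neighbors over dt (central differences assumed here).
    i_x = (signed_count(x + 1, y) - signed_count(x - 1, y)) / 2.0
    i_y = (signed_count(x, y + 1) - signed_count(x, y - 1)) / 2.0

    # Time gradient: the pixel's own total AE count over dt divided by dt,
    # as described for formula (3).
    i_t = signed_count(x, y) / dt
    return i_x, i_y, i_t
```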
(3) The TVS-based optical flow algorithm
The differential optical flow constraint equation expressed with the AE sequence output by the TVS is:
The algorithm continuously reads in the AEs produced by the TVS. Whenever an AE is read in, the algorithm uses its address to compute, according to formulas (1)–(3), the spatial and time gradients at that point and at every point in the small region around it, and then solves for the optical flow by the LMSE method under the local smoothness assumption;
Algorithm flow:
1) Initialize the algorithm: set up for each pixel a time-ordered AE queue and an optical flow queue; define the size of the small region (n × n) and the computation interval Δt; set the current time T = 0;
2) Read in AE(x, y, t); update the current time T = t; update the AE list by address;
3) If T < Δt, return to (2); otherwise go to (4);
4) Compute, according to formulas (1)–(3), the spatial and time gradients of every point in the small region Ω centered at (x, y);
5) Use the following formula
and the LMSE method to compute the optical flow (u, v) at point (x, y) at the current time, and update the optical flow queue of point (x, y);
6) Return to (2) and repeat the above process;
7) The optical flow of each point in the captured scene at each moment is recorded in that point's optical flow queue.
Unlike current methods that compute optical flow from "frame-sequence images (video)", the optical flow computation method disclosed by the invention uses the output of a novel visual information acquisition device, the temporal vision sensor (Temporal Vision Sensor, TVS), to compute the optical flow of the field of view. Because this acquisition device samples only the light intensity changes in the captured scene, its output contains no static background, which greatly reduces redundant information and output data volume and in turn significantly reduces the running time and resource requirements of the back-end processing algorithms. In addition, because each pixel independently detects changes and outputs asynchronously, the motion and changes of targets in the scene can be perceived in real time, which satisfies well the continuity assumptions of "constant brightness and constant local velocity" in differential optical flow computation; the precision of the optical flow computation is noticeably improved while its real-time performance is greatly improved. The method is therefore very suitable for optical flow computation on high-speed moving objects and for their subsequent tracking and speed measurement.
The optical flow computation method proposed by the invention is based on the "brightness constancy" and "constant local velocity" principles: it uses a TVS as the visual input source, uses the sequence of activity events (AE) representing brightness changes to compute the spatial and temporal gradients of the illumination change, and completes the optical flow computation by the least squares method with an error function based on a two-dimensional Gaussian function.
Compared with the prior art, the optical flow computation method using a temporal vision sensor disclosed by the invention has the following beneficial effects:
The invention takes the TVS as input source and applies the differential optical flow analysis method to the AE sequence it produces. Because the TVS adopts change sampling, asynchronous output, and address-event representation as its imaging principles, it has the distinctive advantages of extremely low data redundancy, high real-time performance, and high temporal resolution, making it very suitable for optical flow analysis describing moving targets:
(1) Because the sampling principle of "change sampling + asynchronous pixel output" is adopted, changes in the scene can be perceived and output with microsecond-level latency, equivalent to several thousand to several tens of thousands of frames per second under frame sampling. Such high temporal resolution lets the "brightness constancy" and "local smoothness" assumptions of differential optical flow analysis be well satisfied, yielding higher computational accuracy;
(2) The data output of a TVS is usually only 5–10% of that of a "frame sampling" image sensor, so the computational load is greatly reduced and low-cost real-time optical flow analysis becomes feasible.
Description of the drawings
Fig. 1 compares the capture results of a common "frame sampling" image sensor and a temporal vision sensor (TVS) facing the same scene. Frame imaging records and outputs the light intensity of every point in the scene, whereas the TVS samples and outputs only the changes; it therefore records the running state (position versus time) of the moving target (a human body) in the scene, while the unchanged background information is ignored;
Fig. 2 is a schematic diagram of three-dimensional motion projected onto the two-dimensional imaging plane. A spatial point is at position (X1, Y1, Z1) in the (three-dimensional) world coordinate system at time t, and its corresponding position in the two-dimensional imaging coordinate system is (x1, y1). After a time Δt, the point moves to (X1+ΔX, Y1+ΔY, Z1+ΔZ), and the corresponding imaging position is (x1+Δx, y1+Δy). The optical flow of this point is then (u, v) = (Δx/Δt, Δy/Δt);
Fig. 3 shows the two-dimensional coordinates used for the spatial gradient computation. The top-left pixel coordinate is (1, 1). When computing the horizontal gradient of pixel (i, j), its left and right neighbors in the same row are used; when computing its vertical gradient, its upper and lower neighbors in the same column are used;
Fig. 4 shows the two-dimensional Gaussian function used as the spatial weight distribution in the squared-error computation; pixels farther from the center pixel (i, j) carry a smaller proportion of the total error;
Fig. 5 is the flow chart of the optical flow computation proposed by the invention; see the algorithm description.
Embodiment
For simplicity and clarity, descriptions of well-known techniques are omitted below where appropriate, so that unnecessary detail does not obscure the description of the present technical scheme. The invention is further described below in conjunction with a preferred embodiment.
Embodiment 1
The differential optical flow computation method of the present invention using a TVS: on the basis of the local smoothness assumption proposed by Lucas-Kanade, the AE sequence output by the TVS is used to complete the computation of the spatial and temporal gradients.
Spatial gradient
$I_x$ and $I_y$ are the spatial gradients (changes) of brightness at pixel (x, y). Frame-based methods represent the change of light intensity with the brightness difference between pixel (x, y) and its surrounding neighbors. The present invention instead computes the spatial gradient from the difference between the AE counts accumulated by each pixel and by its neighboring pixels over a past period Δt. Taking into account the detected line width, the TVS noise characteristics, and the need for real-time computation, the spatial gradient formulas are:
In the above formulas, AE(x, y, t) denotes the AE produced at pixel (x, y) at time t, and Δt is the counting interval set by the algorithm according to the motion speed of the target; the recommended value is 50–200 μs (in frame-sampling terms, a temporal resolution equivalent to 20000–5000 frames per second). The rationale for these formulas is:
(1) A TVS pixel emits an AE whenever it detects, within the given time, a photocurrent change exceeding the preset threshold, so every AE represents the same change amplitude;
(2) Object motion causes the light intensity at each pixel to change. Under the brightness constancy premise, a point of higher light intensity produces more AEs when it moves from one pixel to another, so the absolute brightness difference between two points can be represented by the difference of the one-directional (increase or decrease) AE event counts they produce;
(3) Because the interval between successive AEs produced by a TVS pixel is usually tens of nanoseconds, Δt can be kept below a few hundred microseconds, so the change sampling caused by motion has very high continuity. Compared with the frame sampling mode, whose sampling interval is generally tens of milliseconds, this better satisfies the theoretical premises of differential optical flow analysis.
Time gradient
The time gradient represents the rate of change of the light intensity experienced by a pixel. The present invention measures the time gradient by the rate at which a pixel produces AEs per unit time. In principle the effect is the same as using the brightness difference between successive frames, but because the AE sampling interval is very short, the computational accuracy and continuity of the time gradient are greatly improved. The formula is:
The above formula obtains the time gradient at point (x, y) as the ratio of the total number of AEs produced by pixel (x, y) within the time interval Δt to Δt.
TVS-based optical flow algorithm
The differential optical flow constraint equation expressed with the AE sequence output by the TVS is:
The algorithm continuously reads in the AEs produced by the TVS. Whenever an AE is read in, the algorithm uses its address to compute, according to formulas (1)–(3), the spatial and time gradients at that point and at every point in the small region around it, and then solves for the optical flow by the LMSE method under the local smoothness assumption.
Algorithm flow:
(1) Initialize the algorithm: set up for each pixel a time-ordered AE queue and an optical flow queue; define the size of the small region (n × n) and the computation interval Δt; set the current time T = 0;
(2) Read in AE(x, y, t); update the current time T = t; update the AE list by address;
(3) If T < Δt, return to (2); otherwise go to (4);
(4) Compute, according to formulas (1)–(3), the spatial and time gradients of every point in the small region Ω centered at (x, y);
(5) Use the following formula
and the LMSE method to compute the optical flow (u, v) at point (x, y) at the current time, and update the optical flow queue of point (x, y);
(6) Return to (2) and repeat the above process;
(7) The optical flow of each point in the captured scene at each moment is recorded in that point's optical flow queue. The algorithm flow chart is shown in Fig. 5; a sketch of the loop in code follows below.
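A compact, runnable sketch of this event loop, reusing the AE type and gradients_from_events() helper from the earlier sketches. The 5 × 5 window default, the Gaussian weighting width, and the assumption that events arrive in timestamp order are illustrative choices, not the patent's reference implementation:

```python
import numpy as np
from collections import deque

def lmse_flow(events, x, y, t_now, n=5, dt=100e-6, sigma=1.0):
    """Solve the local LMSE system for the optical flow (u, v) at (x, y).

    Builds one constraint I_x*u + I_y*v = -I_t per pixel of the n x n
    region Omega and solves the Gaussian-weighted least-squares problem.
    """
    half = n // 2
    rows, rhs, weights = [], [], []
    for j in range(y - half, y + half + 1):
        for i in range(x - half, x + half + 1):
            ix, iy, it = gradients_from_events(events, i, j, t_now, dt)
            rows.append([ix, iy])
            rhs.append(-it)
            # 2-D Gaussian weight: far pixels contribute less (cf. Fig. 4).
            weights.append(np.exp(-((i - x) ** 2 + (j - y) ** 2) / (2 * sigma ** 2)))
    A = np.asarray(rows, dtype=float)
    b = np.asarray(rhs, dtype=float)
    sw = np.sqrt(np.asarray(weights))          # scale rows by sqrt(weight)
    uv, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return uv[0], uv[1]                        # (u, v)

def run(event_stream, dt=100e-6):
    """Event loop: read AEs, keep a sliding window, update per-pixel flow queues."""
    history = deque()        # AEs no older than dt
    flow_queues = {}         # (x, y) -> list of (t, u, v)
    for ae in event_stream:  # AEs assumed to arrive in timestamp order
        history.append(ae)
        while history and ae.t - history[0].t > dt:
            history.popleft()
        if ae.t < dt:        # step (3): wait until one full interval elapsed
            continue
        u, v = lmse_flow(history, ae.x, ae.y, ae.t, dt=dt)
        flow_queues.setdefault((ae.x, ae.y), []).append((ae.t, u, v))
    return flow_queues
```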

Claims (1)

1. An optical flow computation method using a temporal vision sensor, characterized in that, on the basis of the local smoothness assumption proposed by Lucas-Kanade, the AE sequence output by the TVS is used to complete the computation of the spatial and temporal gradients, comprising:
(1) Computing the spatial gradient:
$I_x$ and $I_y$ are the spatial gradients (changes) of brightness at pixel (x, y). The spatial gradient is computed from the difference between the AE counts accumulated by each pixel and by its neighboring pixels over a past period Δt. Taking into account the detected line width, the TVS noise characteristics, and the need for real-time computation, the spatial gradient formulas are:
(formula 1)
(formula 2)
In the above formulas, AE(x, y, t) denotes the AE produced at pixel (x, y) at time t, and Δt is the counting interval set by the algorithm according to the motion speed of the target; the recommended value is 50–200 μs, which in frame-sampling terms corresponds to a temporal resolution of 20000–5000 frames per second;
(2) Computing the time gradient:
The time gradient represents the rate of change of the light intensity experienced by a pixel; its formula is:
(formula 3)
The above formula obtains the time gradient at point (x, y) as the ratio of the total number of AEs produced by pixel (x, y) within the time interval Δt to Δt;
(3) The TVS-based optical flow algorithm
The differential optical flow constraint equation expressed with the AE sequence output by the TVS is:
(formula 4)
The algorithm continuously reads in the AEs produced by the TVS. Whenever an AE is read in, the algorithm uses its address to compute, according to formulas (1)–(3), the spatial and time gradients at that point and at every point in the small region around it, and then solves for the optical flow by the LMSE method under the local smoothness assumption;
Algorithm flow:
1) Initialize the algorithm: set up for each pixel a time-ordered AE queue and an optical flow queue; define the size of the small region and the computation interval Δt; set the current time T = 0;
2) Read in AE(x, y, t); update the current time T = t; update the AE list by address;
3) If T < Δt, return to (2); otherwise go to (4);
4) Compute, according to formulas (1)–(3), the spatial and time gradients of every point in the small region Ω centered at (x, y);
5) Use the following formula
(formula 5)
together with the error calculation function defined by formulas (6)–(7):
(formula 6)
(formula 7)
and the LMSE method to compute the optical flow (u, v) at point (x, y) at the current time, and update the optical flow queue of point (x, y);
6) Return to (2) and repeat the above process;
7) The optical flow of each point in the captured scene at each moment is recorded in that point's optical flow queue.
CN201510525146.2A 2015-08-25 2015-08-25 A kind of optical flow computation method using time-domain visual sensor Expired - Fee Related CN105160703B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510525146.2A CN105160703B (en) 2015-08-25 2015-08-25 A kind of optical flow computation method using time-domain visual sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510525146.2A CN105160703B (en) 2015-08-25 2015-08-25 A kind of optical flow computation method using time-domain visual sensor

Publications (2)

Publication Number Publication Date
CN105160703A true CN105160703A (en) 2015-12-16
CN105160703B CN105160703B (en) 2018-10-19

Family

ID=54801545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510525146.2A Expired - Fee Related CN105160703B (en) 2015-08-25 2015-08-25 A kind of optical flow computation method using time-domain visual sensor

Country Status (1)

Country Link
CN (1) CN105160703B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957060A (en) * 2016-04-22 2016-09-21 天津师范大学 Method for dividing TVS events into clusters based on optical flow analysis
CN106127800A (en) * 2016-06-14 2016-11-16 天津大学 Real-time many object tracking methods based on AER imageing sensor and device
CN107220942A (en) * 2016-03-22 2017-09-29 三星电子株式会社 Method and apparatus for the graphical representation and processing of dynamic visual sensor
CN107764271A (en) * 2017-11-15 2018-03-06 华南理工大学 A kind of photopic vision dynamic positioning method and system based on light stream
CN108288289A (en) * 2018-03-07 2018-07-17 华南理工大学 A kind of LED visible detection methods and its system for visible light-seeking
CN108574793A (en) * 2017-03-08 2018-09-25 三星电子株式会社 It is configured as regenerating the image processing equipment of timestamp and the electronic equipment including it
CN108961318A (en) * 2018-05-04 2018-12-07 上海芯仑光电科技有限公司 A kind of data processing method and calculate equipment
CN105719290B (en) * 2016-01-20 2019-02-05 天津师范大学 A kind of binocular solid Matching Method of Depth using time-domain visual sensor
CN109461173A (en) * 2018-10-25 2019-03-12 天津师范大学 A kind of Fast Corner Detection method for the processing of time-domain visual sensor signal
CN109509213A (en) * 2018-10-25 2019-03-22 天津师范大学 A kind of Harris angular-point detection method applied to asynchronous time domain visual sensor
CN109785365A (en) * 2019-01-17 2019-05-21 西安电子科技大学 Address events drive the real-time modeling method method of unstructured signal
CN110692083A (en) * 2017-05-29 2020-01-14 苏黎世大学 Block-matched optical flow and stereo vision for dynamic vision sensors
CN111951558A (en) * 2020-08-21 2020-11-17 齐鲁工业大学 Machine vision system and method applied to traffic early warning robot
CN112435279A (en) * 2019-08-26 2021-03-02 天津大学青岛海洋技术研究院 Optical flow conversion method based on bionic pulse type high-speed camera
WO2022257035A1 (en) * 2021-06-09 2022-12-15 Nvidia Corporation Computing motion of pixels among images


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6480615B1 (en) * 1999-06-15 2002-11-12 University Of Washington Motion estimation within a sequence of data frames using optical flow with adaptive gradients
CN104205169A (en) * 2011-12-21 2014-12-10 皮埃尔和玛利居里大学(巴黎第六大学) Method of estimating optical flow on the basis of an asynchronous light sensor
CN103516946A (en) * 2012-06-19 2014-01-15 三星电子株式会社 Event-based image processing apparatus and method

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719290B (en) * 2016-01-20 2019-02-05 天津师范大学 A kind of binocular solid Matching Method of Depth using time-domain visual sensor
CN107220942A (en) * 2016-03-22 2017-09-29 三星电子株式会社 Method and apparatus for the graphical representation and processing of dynamic visual sensor
CN107220942B (en) * 2016-03-22 2022-03-29 三星电子株式会社 Method and apparatus for image representation and processing of dynamic vision sensors
CN105957060A (en) * 2016-04-22 2016-09-21 天津师范大学 Method for dividing TVS events into clusters based on optical flow analysis
CN106127800A (en) * 2016-06-14 2016-11-16 天津大学 Real-time many object tracking methods based on AER imageing sensor and device
US11202025B2 (en) 2017-03-08 2021-12-14 Samsung Electronics Co., Ltd. Image processing device configured to regenerate timestamp and electronic device including the same
CN108574793A (en) * 2017-03-08 2018-09-25 三星电子株式会社 It is configured as regenerating the image processing equipment of timestamp and the electronic equipment including it
CN108574793B (en) * 2017-03-08 2022-05-10 三星电子株式会社 Image processing apparatus configured to regenerate time stamp and electronic apparatus including the same
US11575849B2 (en) 2017-03-08 2023-02-07 Samsung Electronics Co., Ltd. Image processing device configured to regenerate timestamp and electronic device including the same
CN110692083A (en) * 2017-05-29 2020-01-14 苏黎世大学 Block-matched optical flow and stereo vision for dynamic vision sensors
CN110692083B (en) * 2017-05-29 2024-01-05 苏黎世大学 Block-matched optical flow and stereoscopic vision for dynamic vision sensor
CN107764271A (en) * 2017-11-15 2018-03-06 华南理工大学 A kind of photopic vision dynamic positioning method and system based on light stream
CN107764271B (en) * 2017-11-15 2023-09-26 华南理工大学 Visible light visual dynamic positioning method and system based on optical flow
CN108288289B (en) * 2018-03-07 2023-07-18 华南理工大学 LED visual detection method and system for visible light positioning
CN108288289A (en) * 2018-03-07 2018-07-17 华南理工大学 A kind of LED visible detection methods and its system for visible light-seeking
CN108961318A (en) * 2018-05-04 2018-12-07 上海芯仑光电科技有限公司 A kind of data processing method and calculate equipment
CN109461173B (en) * 2018-10-25 2022-03-04 天津师范大学 Rapid corner detection method for time domain vision sensor signal processing
CN109509213A (en) * 2018-10-25 2019-03-22 天津师范大学 A kind of Harris angular-point detection method applied to asynchronous time domain visual sensor
CN109461173A (en) * 2018-10-25 2019-03-12 天津师范大学 A kind of Fast Corner Detection method for the processing of time-domain visual sensor signal
CN109785365B (en) * 2019-01-17 2021-05-04 西安电子科技大学 Real-time target tracking method of address event driven unstructured signal
CN109785365A (en) * 2019-01-17 2019-05-21 西安电子科技大学 Address events drive the real-time modeling method method of unstructured signal
CN112435279A (en) * 2019-08-26 2021-03-02 天津大学青岛海洋技术研究院 Optical flow conversion method based on bionic pulse type high-speed camera
CN112435279B (en) * 2019-08-26 2022-10-11 天津大学青岛海洋技术研究院 Optical flow conversion method based on bionic pulse type high-speed camera
CN111951558A (en) * 2020-08-21 2020-11-17 齐鲁工业大学 Machine vision system and method applied to traffic early warning robot
WO2022257035A1 (en) * 2021-06-09 2022-12-15 Nvidia Corporation Computing motion of pixels among images

Also Published As

Publication number Publication date
CN105160703B (en) 2018-10-19

Similar Documents

Publication Publication Date Title
CN105160703A (en) Optical flow computation method using time domain visual sensor
US10260862B2 (en) Pose estimation using sensors
Rebecq et al. Evo: A geometric approach to event-based 6-dof parallel tracking and mapping in real time
CN106780620B (en) Table tennis motion trail identification, positioning and tracking system and method
CN103854283B (en) A kind of mobile augmented reality Tracing Registration method based on on-line study
KR102595604B1 (en) Method and apparatus of detecting object using event-based sensor
CN107357286A (en) Vision positioning guider and its method
CN113674416B (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN104408725A (en) Target recapture system and method based on TLD optimization algorithm
CN104766342A (en) Moving target tracking system and speed measuring method based on temporal vision sensor
CN110390685B (en) Feature point tracking method based on event camera
WO2023221524A1 (en) Human movement intelligent measurement and digital training system
CN110139031B (en) Video anti-shake system based on inertial sensing and working method thereof
CN112833892B (en) Semantic mapping method based on track alignment
Bashirov et al. Real-time rgbd-based extended body pose estimation
Li et al. A binocular MSCKF-based visual inertial odometry system using LK optical flow
CN105303518A (en) Region feature based video inter-frame splicing method
CN105957060B (en) A kind of TVS event cluster-dividing method based on optical flow analysis
Sokolova et al. Human identification by gait from event-based camera
Liu et al. An attention fusion network for event-based vehicle object detection
Liu et al. Accurate real-time ball trajectory estimation with onboard stereo camera system for humanoid ping-pong robot
Zhou et al. MH pose: 3D human pose estimation based on high-quality heatmap
CN109785365A (en) Address events drive the real-time modeling method method of unstructured signal
CN105203045A (en) System and method for detecting product shape integrity based on asynchronous time domain vision sensor
CN114548224A (en) 2D human body pose generation method and device for strong interaction human body motion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181019

Termination date: 20190825