WO2002093931A2 - Method and device for determining movement in temporally successive digital images - Google Patents
- Publication number
- WO2002093931A2 (PCT/DE2002/001585)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- contour
- image
- determined
- movement
- pixel
- Prior art date
Links
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/537—Motion estimation other than block-based
- H04N19/54—Motion estimation other than block-based using feature points or meshes
Definitions
- The invention relates to a method and a device for determining movement in at least two temporally successive digital images, to a computer-readable storage medium and to a computer program element.
- The determination of movement in temporally successive images, i.e. motion estimation, provides essential information for determining the content of digital images.
- The determination of the movement of perceived objects is carried out as an early processing step in human sensory perception.
- The limitation to individual pixels when determining the image movement means that only a small field of pixels, with the coding information associated with those pixels, is taken into account; the individual pixels are sparsely distributed over the entire image.
- Coding information is further understood to mean brightness information (luminance information) and/or color information (chrominance information), which is in each case assigned to one or more pixels.
- The selected pixels which are taken into account in the course of the movement determination in temporally successive digital images are defined via gray-value corners, that is to say via pixels located in a corner region of abrupt transitions in the luminance values assigned to the respective pixels.
- Gray-value corners, however, are not necessarily object-specific. This is especially true at object boundaries, because there the gray-value corners are determined by the gray-value course of both background and object.
- Since the background need not be uniform in the image, the temporal assignment of the gray-value corners leads to incorrect movement information.
- [4] and [5] describe methods for determining a contour with contour picture elements in a digital picture with picture elements to which coding information is assigned.
- A distance transformation is also known from [6] as a morphological operation for determining minimum distances from the points of a considered local environment to a contour with contour image points. [7] and [8] describe two alternative implementations of the distance transformation from [6].
- [10] also describes a method for segmenting an image sequence, in which contour information is determined from segmentation information of objects which have already been segmented. The calculation of movement information is based on the object-related contour information.
- The method described in [11] for determining the movement of objects in a sequence of digitized images uses a statistical model with two components: a static component (describing the background) and a moving component (describing moving objects).
- The invention is based on the problem of specifying a simplified, and therefore faster and less expensive, determination of motion in a sequence of temporally successive images.
- Pixels are present in the digital images, each of which is assigned coding information.
- Using the coding information, at least one contour with a multiplicity of contour image points located on the contour is determined in a first image.
- For the contour image points located on the determined contour of the first image, the movement is determined with respect to a reference contour, with reference contour image points, contained in a second image.
- A device for determining the movement in at least two chronologically successive digital images has a processor which is set up in such a way that the method steps described above can be carried out.
- A program is stored on a computer-readable storage medium which, after it has been loaded into a memory of a computer, enables the computer to carry out the method steps described above for determining the movement in at least two temporally successive digital images.
- A computer program element, after it has been loaded into a memory of a computer and run by the computer, carries out the method steps described above for determining the movement in at least two temporally successive digital images.
- The movement in a digital image is determined on the basis of the contour determined, that is to say extracted, from that image, with respect to a reference contour in a temporally preceding or temporally following image.
- The contour information is determined directly from the coding information assigned to the pixels.
- A contour is understood to mean a coherent, that is to say contiguous, sequence of contour image points that are spatially adjacent in an image.
- Pixels are contiguous, and therefore form a contour, if they are arranged directly adjacent to one another on the image grid, that is to say in the digital image.
- Motion determination in a sequence of digital images allows the determined movement information to be stabilized over time and also allows even small movements in the sequence of temporally successive digital images to be determined and detected. This is made possible in particular by the fact that determining the contours, and taking them into account in the movement determination, usually performs a temporal integration over a clearly defined area: the area formed between the two considered contours by the displacement of contours in temporally successive digital images.
- Real time is not a precisely defined performance concept.
- Here, real time is understood to mean a processing time of essentially less than 40 ms.
- A time interval of 40 ms corresponds to the time offset between two successive single images of an analog video sequence (25 images per second).
- At least one reference contour with a plurality of reference contour pixels located on it is determined using the coding information in the second image.
- The movement determination can be carried out both with respect to a temporally preceding image and with respect to a temporally following image, i.e. both a movement prediction and a retrospective movement determination are possible.
- The second image, serving as the reference image with the reference contour, can therefore be either the temporally preceding or the temporally following image relative to the first image with the extracted contour.
- For the reference contour and the reference contour image points located on it, a minimum distance image can be determined by means of a morphological operation: clearly, a field of values with which the minimum distance of a pixel in the minimum distance image to a reference contour pixel is specified.
- A distance transformation can be used as the morphological operation; it has been found that the distance transformation described in [6] is particularly suitable and leads to very good results.
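To illustrate the idea of a minimum distance image, the following Python sketch computes it by brute force; the function name and image representation are illustrative assumptions, and the distance transformation of [6] achieves the same result far more efficiently:

```python
# Sketch: build a minimum-distance image for a reference contour.
# Brute force, O(W*H*len(contour)); the morphological distance
# transformations of [6]-[8] are far more efficient.
import math

def min_distance_image(width, height, ref_contour):
    """ref_contour: list of (x, y) reference contour pixels."""
    dist = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dist[y][x] = min(math.hypot(x - cx, y - cy)
                             for (cx, cy) in ref_contour)
    return dist

# Usage: a short diagonal reference contour in a 5x5 image.
d = min_distance_image(5, 5, [(0, 0), (1, 1), (2, 2)])
print(d[0][0])  # 0.0, this pixel lies on the contour
print(d[4][4])  # distance to the nearest contour pixel (2, 2), about 2.83
```

Reading such a field once per reference contour is what later allows the translation search to merely look up, rather than recompute, distances.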
- The minimum distance between a pixel [x, y] and a reference contour pixel on the reference contour in the second image is specified; here [x, y] denotes a pixel in the distance image and v′(l, t) a reference contour pixel in the second image.
- Contour direction denotes the direction in which the change in contrast runs along a contour.
- The invention is particularly suitable for the detection of moving objects in scenarios in which it is important to distinguish a large number of moving objects from one another and from non-moving objects.
- A very suitable area of application is, in particular, traffic monitoring, or the determination of movement in scenes recorded by a digital camera installed in a moving vehicle.
- FIG. 1 is a block diagram in which the individual method steps of an exemplary embodiment of the invention are shown;
- FIG. 2 shows a flow chart in which the method steps for determining the image movement according to an exemplary embodiment of the invention are shown in detail;
- Figure 3 is an illustration of a distance image with a reference contour, with contour lines assigned to the reference contour, and with a contour for which the movement is determined;
- Figures 4a to 4c show results of the contour-based motion determination according to the invention for different scenes.
- A digital camera is installed on a vehicle and records a recording area in the direction of travel of the moving vehicle.
- A sequence of digital images is thus generated by means of the digital camera, each digital image comprising a multiplicity of picture elements; the coding information according to this embodiment is the brightness values assigned to the pixels.
- Contour extraction is carried out for each digital image, hereinafter referred to as the first digital image according to this exemplary embodiment, with the brightness values I(x, y, t).
- Contours are determined for the first image (block 101 in block diagram 100 in FIG. 1). This is done by detecting edges in the digital image; edges mark jumps in contrast in the course of the brightness information in the digital image.
- Contiguous chains of contour points, that is to say contiguous edges, i.e. coherent sequences of locally directly adjacent edge points, are referred to as contours.
- The contour extraction is carried out using the method described in [4], or alternatively the method described in [5].
- FIG. 2 shows the step of contour extraction 101 for a digital image 201 in detail in a flow chart 200.
- Gradient filtering (step 202) and then gradient-based line thinning (step 203) are carried out for the digital image 201 according to the flow diagram 200 shown in FIG. 2.
- The determined edge image points e(x, y, t) of the detected lines in the digital image 201 are linked to one another, and a contour, in general a plurality of N contours, is determined in each digital image 201 (step 205).
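The linking of edge points into contiguous contours (step 205) can be sketched as grouping edge pixels that are directly adjacent under 8-neighbour connectivity; this is a hypothetical illustration, not the specific method of [4] or [5]:

```python
# Sketch: link edge pixels into contours, i.e. connected groups
# under 8-neighbour adjacency (hypothetical illustration of step 205).
def link_contours(edge_points):
    """edge_points: set of (x, y) edge pixels; returns list of contours."""
    remaining = set(edge_points)
    contours = []
    while remaining:
        seed = remaining.pop()
        contour, stack = [seed], [seed]
        while stack:
            x, y = stack.pop()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:  # directly adjacent edge pixel
                        remaining.remove(n)
                        contour.append(n)
                        stack.append(n)
        contours.append(contour)
    return contours

# Two separate edge chains yield N = 2 contours.
edges = {(0, 0), (1, 1), (2, 2), (10, 0), (11, 0)}
print(len(link_contours(edges)))  # 2
```

Each resulting group is one contour in the sense defined above: a coherent sequence of directly adjacent edge pixels.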
- The contour assignment is carried out in a further step (step 102 in FIG. 1).
- A corresponding point in a reference contour in a temporally preceding digital image, that is to say a reference contour image point, is determined for each determined contour point of a contour v(t).
- The reference contour pixel is located on a reference contour in the preceding image; v(t − 1) thus denotes the contour structure determined in the previous time step.
- The movement of each contour pixel of the contour is expressed via a displacement vector, also called a translation vector.
- The assumption is used that the change in contours over time is approximately described by a translation.
- The shift is optimal when the sum of the minimum distances between the points of the contour environment under consideration and the reference contour image points of the reference contour v(t − 1) becomes minimal.
- A morphological operation, according to this exemplary embodiment the distance transformation described in [6], is used to determine the minimum distances.
- FIG. 3 shows the principle of assignment for two contours that follow one another in time, that is to say for contours from two temporally successive images, for which a movement determination is carried out.
- FIG. 3 shows a distance image 300 with a reference contour 301, as well as with contour lines 302, which are formed by means of the distance transformation described in [6].
- Contour lines are understood to mean those lines in the distance image which have a constant minimum distance from the reference contour, i.e. from a reference contour image point on the reference contour.
- The distance transformation is carried out for each reference contour that is taken into account when determining the movement.
- For the motion determination, the minimum distance image 300 is shifted with respect to the contour 303 for which the motion is to be determined.
- The minimum distance to the reference contour v′(l, t − 1) can then be read off at each contour point from the contour lines 302.
- This principle of contour assignment has the advantage over a direct comparison of contours that errors in contour detection have less influence on the quality of motion detection. This is particularly important when contours are incomplete or interrupted in their course.
- The contour assignment 102 is explained in more detail below.
- In the block of the contour assignment 102, the actual movement along the contours is calculated.
- The image representation is furthermore converted into a data structure which enables direct access to contours as chains of contour points.
- The contour index n is a natural number in the range between 1 and N, where N denotes the number of contours contained in the data structure.
- The temporal assignment of the contours follows in a further step.
- An optimal assignment is determined for each contour point.
- v(k) denotes a contour pixel in the first image, and k a contour pixel index for uniquely identifying a contour pixel in the first image.
- The difference surface between two contours, that is, between the contour v(k, t) and the reference contour v(l, t − 1), is approximated by means of the sum of the minimum distances.
- τ_i denotes an optimal translation.
- The minimum distances are determined very efficiently.
- Any image value, i.e. any minimum distance value, can be read directly from the distance image.
- The distance image 300 contains the information of the minimum distance, that is to say the minimum distance value D_v(l)(x, y, t − 1), of a pixel to the reference contour.
- The distance transformation is applied to each reference contour point and to the corresponding image point in the distance image D_v(l)(x, y, t − 1) 300 in accordance with the following rule:
- v′(l) denotes a reference contour pixel in the second image.
- Rule (1) can be converted into the following rule:
- The energies are determined via the distance image 300: the contour, for which the movement is to be determined taking the reference contour into account, is shifted over the distance image 300, that is to say over the minimum distance values D_v(l)(x, y, t − 1), and for each shift, that is to say translation, the distance values are read from the distance image and summed up.
- The minimum distance is therefore only calculated once, when the distance image 300 is generated.
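The energy minimization over candidate translations can be sketched as follows; this is a hypothetical brute-force illustration in which the precomputed distance image is merely sampled at each shifted contour position, the distance values are summed, and out-of-image positions are penalized with MAX_VALUE:

```python
# Sketch: choose the translation minimizing the summed minimum
# distances read from the precomputed distance image (hypothetical;
# the search radius and image layout are illustrative assumptions).
MAX_VALUE = 10**9  # energy for contour points shifted off the image

def best_translation(contour, dist_image, search_radius=2):
    h, w = len(dist_image), len(dist_image[0])
    best, best_energy = (0, 0), float("inf")
    for tx in range(-search_radius, search_radius + 1):
        for ty in range(-search_radius, search_radius + 1):
            energy = 0
            for (x, y) in contour:
                sx, sy = x + tx, y + ty
                if 0 <= sx < w and 0 <= sy < h:
                    energy += dist_image[sy][sx]  # read, not recompute
                else:
                    energy += MAX_VALUE
            if energy < best_energy:
                best, best_energy = (tx, ty), energy
    return best

# Distance image of a vertical reference contour at x = 3 (5x5 image):
dist = [[abs(x - 3) for x in range(5)] for _ in range(5)]
# A contour currently at x = 1 is best explained by a shift of +2 in x.
print(best_translation([(1, 0), (1, 1), (1, 2)], dist))  # (2, 0)
```

The inner loop only reads values; the costly minimum-distance computation happens once, when the distance image is generated.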
- An energy value is only evaluated if the shifted contour point lies within the distance image, that is to say points to a reference contour image point; otherwise the respective energy value is set to a maximum, predetermined value (MAX_VALUE).
- A new, stabilized movement is calculated for the individual contour pixels (step 207).
- τ_L(t) denotes the L past translations that are known from the preceding reference contour image points.
- The new movement is then determined, for example, by averaging. In other words, there is temporal feedback when determining the respective translations.
- The following process steps are carried out for the averaging.
- In the first step, the mean shift is determined:
- m_i(t) = mean{ τ_i^1(t − 1), ..., τ_i^L(t − 1) }. (7)
- The movement can be determined by recursively filtering the determined translation vectors in accordance with the following rule:
- m_i(t) = m_i(t − 1) + ε · (τ_i(t − 1) − m_i(t − 1)), where ε denotes a filter gain.
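A recursive filter of this kind, i.e. exponential smoothing of the per-contour translation estimates, can be sketched as follows; the gain value and the tuple representation are illustrative assumptions:

```python
# Sketch: recursive (exponentially weighted) filtering of 2-D
# translation vectors; epsilon is an assumed, illustrative gain.
def filter_translation(m_prev, tau, epsilon=0.5):
    """m(t) = m(t-1) + epsilon * (tau(t-1) - m(t-1)), per component."""
    return tuple(m + epsilon * (t - m) for m, t in zip(m_prev, tau))

# A noisy sequence of measured translations settles toward (2, 0).
m = (0.0, 0.0)
for tau in [(2, 0), (3, -1), (1, 1), (2, 0)]:
    m = filter_translation(m, tau)
print(m)  # (1.75, 0.125), a smoothed estimate near (2, 0)
```

Small gains suppress measurement noise more strongly but react more slowly to genuine changes in motion, which is the usual trade-off of such temporal feedback.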
- The contour is transferred into the distance image, so that the value at each pixel in the distance image corresponds to the minimum distance to a contour pixel according to the following rule:
- The distance transformation can be implemented in accordance with the methods described in [7] and [8].
- The representation serves in particular to illustrate the numerical effort involved in implementing the method described above.
- The calculation of the distance values is therefore a relatively complex numerical operation.
- A local mask is iteratively pushed over the distance image.
- The new distance value in the distance image, relative to the reference contour, is calculated according to the following rule:
- D(x, y) denotes the inverted distance image.
- In the initialization, the contour pixels are assigned the image value corresponding to the distance value "0", and all remaining image values are set to a constant value larger than the maximum distance to be expected.
- The local mask is denoted by mask(u, v).
- The mask values correspond to the local distances of the pixels at the respective mask positions to the mask center.
- The optimal local distance values are determined for different mask sizes, so that the resulting distance values deviate as little as possible from the true Euclidean distance.
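Such a mask-based computation can be sketched as a classical two-pass chamfer distance transform; the 3×3 mask with integer weights 3 and 4 is a common illustrative choice and not necessarily the mask used in [7] or [8]:

```python
# Sketch: two-pass chamfer distance transform with a 3x3 mask.
# Weights 3 (edge neighbour) and 4 (diagonal) approximate three times
# the Euclidean distance; an illustrative, common choice.
INF = 10**9  # "larger than the maximum distance to be expected"

def chamfer(width, height, contour):
    d = [[0 if (x, y) in contour else INF
          for x in range(width)] for y in range(height)]
    fwd = [(-1, 0, 3), (0, -1, 3), (-1, -1, 4), (1, -1, 4)]
    # Forward pass: top-left to bottom-right.
    for y in range(height):
        for x in range(width):
            for dx, dy, w in fwd:
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    d[y][x] = min(d[y][x], d[ny][nx] + w)
    # Backward pass: bottom-right to top-left, mask mirrored.
    for y in range(height - 1, -1, -1):
        for x in range(width - 1, -1, -1):
            for dx, dy, w in fwd:
                nx, ny = x - dx, y - dy
                if 0 <= nx < width and 0 <= ny < height:
                    d[y][x] = min(d[y][x], d[ny][nx] + w)
    return d

d = chamfer(5, 5, {(2, 2)})           # a single contour pixel
print(d[2][2], d[2][3], d[3][3])      # 0 3 4 (distances scaled by ~3)
```

Two raster sweeps replace a per-pixel search over all contour points, which is what makes the mask-based implementation cheap.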
- The temporally fed-back motion determination 103 then takes place, which uses the result of the contour assignment over several successive images.
- The contour assignment 102 is used to stabilize the movement of the contours over time.
- A motion vector is specified for each contour pixel.
- M(k, t) denotes the set of all motion vectors at the contour pixels at a time t.
- The object contours can thus be tracked from the entry of an object into a monitored area, that is to say a recording area recorded by a digital camera, up to its exit from that area.
- For an automatic acquisition of traffic data, the dwell times of vehicles in the recording area can be determined directly and used, for example, in traffic jam forecasting or in the context of collision avoidance.
- The contour assignment is initially based only on the evaluation of the distance values, and thus on the shape of the contour itself.
- A further criterion is added to the contour shape, namely the contour direction, which specifies in which direction the contrast jump takes place at the respective contour.
- The contour direction is determined automatically with the contour generation. With a white lane marking, for example, the gray value changes from dark to light and back to dark again; the left and right edges of the respective contour then run parallel, but their contour directions are opposite.
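The opposite contour directions at the two edges of a white lane marking can be illustrated by the sign of the luminance gradient across each edge; the profile values and threshold below are illustrative assumptions:

```python
# Sketch: contour direction as the sign of the contrast jump.
# A bright lane marking on a dark road: dark -> light -> dark.
row = [20, 20, 20, 200, 200, 200, 20, 20, 20]  # luminance profile

# Horizontal gradient by central differences.
grad = [(row[i + 1] - row[i - 1]) / 2 for i in range(1, len(row) - 1)]

# Edges = positions of large |gradient|; direction = gradient sign.
edges = [(i + 1, 1 if g > 0 else -1)
         for i, g in enumerate(grad) if abs(g) > 50]
print(edges)  # left edge dark->light (+1), right edge light->dark (-1)
```

The two edges thus receive opposite direction labels even though their shapes are identical, which is exactly the extra discriminative criterion described above.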
- The invention thus clearly provides a very advantageous compromise between data reduction and preservation of the essential image information in a sequence of digital images.
- Based on the use of contours for motion determination, the invention provides a correlation-based approach that is considerably more robust than the known pixel-based methods, particularly with regard to segmentation errors.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02740306A EP1396153A2 (de) | 2001-05-14 | 2002-05-02 | Verfahren und vorrichtung zum ermitteln von bewegung in zeitlich aufeinander folgenden digitalen bildern |
US10/476,690 US20050008073A1 (en) | 2001-05-14 | 2002-05-02 | Method and device for determining movement in at least two successive digital images, computer readable storage medium and computer program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE10123365.5 | 2001-05-14 | ||
DE10123365A DE10123365A1 (de) | 2001-05-14 | 2001-05-14 | Verfahren und Vorrichtung zum Ermitteln von Bewegung in mindestens zwei zeitlich aufeinander folgenden digitalen Bildern, Computerlesbares Speichermedium und Computerprogramm-Element |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2002093931A2 true WO2002093931A2 (de) | 2002-11-21 |
WO2002093931A3 WO2002093931A3 (de) | 2003-05-08 |
Family
ID=7684709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/DE2002/001585 WO2002093931A2 (de) | 2001-05-14 | 2002-05-02 | Verfahren und vorrichtung zum ermitteln von bewegung in zeitlich aufeinander folgenden digitalen bildern |
Country Status (4)
Country | Link |
---|---|
US (1) | US20050008073A1 (de) |
EP (1) | EP1396153A2 (de) |
DE (1) | DE10123365A1 (de) |
WO (1) | WO2002093931A2 (de) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE10258794A1 (de) * | 2002-12-16 | 2004-06-24 | Ibeo Automobile Sensor Gmbh | Verfahren zur Erkennung und Verfolgung von Objekten |
US8837669B2 (en) | 2003-04-25 | 2014-09-16 | Rapiscan Systems, Inc. | X-ray scanning system |
US7949101B2 (en) | 2005-12-16 | 2011-05-24 | Rapiscan Systems, Inc. | X-ray scanners and X-ray sources therefor |
US8451974B2 (en) | 2003-04-25 | 2013-05-28 | Rapiscan Systems, Inc. | X-ray tomographic inspection system for the identification of specific target items |
GB0525593D0 (en) | 2005-12-16 | 2006-01-25 | Cxr Ltd | X-ray tomography inspection systems |
US8223919B2 (en) | 2003-04-25 | 2012-07-17 | Rapiscan Systems, Inc. | X-ray tomographic inspection systems for the identification of specific target items |
US9113839B2 (en) | 2003-04-25 | 2015-08-25 | Rapiscon Systems, Inc. | X-ray inspection system and method |
US8243876B2 (en) | 2003-04-25 | 2012-08-14 | Rapiscan Systems, Inc. | X-ray scanners |
CN109584156B (zh) * | 2018-10-18 | 2022-01-28 | 中国科学院自动化研究所 | 显微序列图像拼接方法及装置 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5764283A (en) * | 1995-12-29 | 1998-06-09 | Lucent Technologies Inc. | Method and apparatus for tracking moving objects in real time using contours of the objects and feature paths |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR19990008977A (ko) * | 1997-07-05 | 1999-02-05 | 배순훈 | 윤곽선 부호화 방법 |
KR100301113B1 (ko) * | 1998-08-05 | 2001-09-06 | 오길록 | 윤곽선 추적에 의한 동영상 객체 분할 방법 |
- 2001
  - 2001-05-14 DE DE10123365A patent/DE10123365A1/de not_active Ceased
- 2002
  - 2002-05-02 US US10/476,690 patent/US20050008073A1/en not_active Abandoned
  - 2002-05-02 EP EP02740306A patent/EP1396153A2/de not_active Withdrawn
  - 2002-05-02 WO PCT/DE2002/001585 patent/WO2002093931A2/de not_active Application Discontinuation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5764283A (en) * | 1995-12-29 | 1998-06-09 | Lucent Technologies Inc. | Method and apparatus for tracking moving objects in real time using contours of the objects and feature paths |
Non-Patent Citations (1)
Title |
---|
DE ASSIS ZAMPIROLLI F ET AL: "Classification of the distance transformation algorithms under the mathematical morphology approach" COMPUTER GRAPHICS AND IMAGE PROCESSING, 2000. PROCEEDINGS XIII BRAZILIAN SYMPOSIUM ON GRAMADO, BRAZIL 17-20 OCT. 2000, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 17. Oktober 2000 (2000-10-17), Seiten 292-299, XP010521957 ISBN: 0-7695-0878-2 * |
Also Published As
Publication number | Publication date |
---|---|
US20050008073A1 (en) | 2005-01-13 |
EP1396153A2 (de) | 2004-03-10 |
DE10123365A1 (de) | 2002-11-28 |
WO2002093931A3 (de) | 2003-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2467828B1 (de) | Verfahren und system zur automatischen objekterkennung und anschliessenden objektverfolgung nach massgabe der objektform | |
DE69635980T2 (de) | Verfahren und vorrichtung zur detektierung von objektbewegung in einer bilderfolge | |
DE60038158T2 (de) | Zielerfassungsvorrichtung und verfahren zur schätzung des azimuts von radarzielen mittels radon transformation | |
EP1444654A2 (de) | Quantitative analyse, visualisierung und bewegungskorrektur in dynamischen prozessen | |
DE60111851T2 (de) | Videobildsegmentierungsverfahren unter verwendung von elementären objekten | |
DE112009003144B4 (de) | Verfahren und Vorrichtung zum Feststellen eines Hindernisses in einem Bild | |
EP0385384B1 (de) | Verfahren zur Detektion bewegter Objekte in digitaler Bildfolge | |
DE60224853T2 (de) | Verfahren und Vorrichtung zur Verarbeitung von Fahrzeugbildern | |
DE112018000332T5 (de) | Dichtes visuelles slam mit probabilistic-surfel-map | |
DE102017220307B4 (de) | Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen | |
DE102006035637A1 (de) | Verfahren zum Erfassen und Verfolgen von deformierbaren Objekten | |
DE112011103690T5 (de) | Erkennung und Verfolgung sich bewegender Objekte | |
EP3980968B1 (de) | Detektion, 3d-rekonstruktion und nachverfolgung von mehreren relativ zueinander bewegten starren objekten | |
DE102006050364A1 (de) | Verfahren zum Detektieren und Verfolgen deformierbarer Objekte unter Verwendung eines adaptiven zeitvariierenden autoregressiven Modells | |
WO2002093931A2 (de) | Verfahren und vorrichtung zum ermitteln von bewegung in zeitlich aufeinander folgenden digitalen bildern | |
EP0414113A2 (de) | Verfahren zur Bewegungskompensation in einem Bewegtbildcoder oder -decoder | |
DE102019105293A1 (de) | Schätzung der Bewegung einer Bildposition | |
DE102004026782A1 (de) | Verfahren und Vorrichtung zur rechnergestützten Bewegungsschätzung in mindestens zwei zeitlich aufeinander folgenden digitalen Bildern, computerlesbares Speichermedium und Computerprogramm-Element | |
DE4314483A1 (de) | Überwachungssystem | |
DE102021201124A1 (de) | Trainieren von bildklassifizierernetzen | |
EP3811336B1 (de) | Verfahren zur bestimmung von zueinander korrespondierenden bildpunkten, soc zur durchführung des verfahrens, kamerasystem mit dem soc, steuergerät und fahrzeug | |
DE60310766T2 (de) | Beleuchtungsunabhängige gesichtserkennung | |
DE102017112333A1 (de) | Verbesserung eines pyramidalen Optical-Flow-Trackers | |
WO2003025843A2 (de) | Modellbasierte objektklassifikation und zielerkennung | |
DE102021128523A1 (de) | Hierarchische bildzerlegung zur defekterkennung |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A2; Designated state(s): JP KR US |
| AL | Designated countries for regional patents | Kind code of ref document: A2; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| WWE | Wipo information: entry into national phase | Ref document number: 2002740306; Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 2002740306; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 10476690; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: JP |
| WWW | Wipo information: withdrawn in national office | Country of ref document: JP |
| WWW | Wipo information: withdrawn in national office | Ref document number: 2002740306; Country of ref document: EP |