EP2215601A2 - Model-based 3D position determination of an object recorded with a calibrated camera for surveillance purposes from a single perspective, the 3D position of the object being determined as the intersection of the line of sight with the scene model - Google Patents
- Publication number
- EP2215601A2 (application EP08804765A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- surveillance
- model
- camera
- point
- processing module
- Prior art date
- 2007-11-26
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30221—Sports video; Sports image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- An image processing module for estimating an object position of a surveillance object, a method for determining an object position of a surveillance object, and a computer program
- The invention relates to an image processing module for estimating an object position of a surveillance object, or subareas thereof, in a surveillance area, for a surveillance system for monitoring at least this surveillance area with a surveillance camera. The module has a model input interface for receiving a model or submodel of the surveillance area, a camera input interface for receiving a camera model of the surveillance camera, and an object input interface for receiving an object point of the surveillance object, the object point being determined on the basis of one or more pixels of the surveillance object in a surveillance image recorded with the surveillance camera. The image processing module is designed to determine the object position of the surveillance object, or subareas thereof, by combining the model, the camera model and the object point.
- The invention also relates to a corresponding method for determining the object position of a surveillance object, and to a computer program.
- Video surveillance systems are often used to monitor large, winding or otherwise complex surveillance areas with the help of surveillance cameras.
- The image streams recorded with the surveillance cameras are usually brought together in a monitoring center or the like, where they are evaluated either automatically or by monitoring personnel.
- The number of image streams that must be checked by security personnel is steadily increasing. Beyond a certain number of video monitors displaying the image data streams, it must be assumed that the attention the personnel can devote to each individual video monitor is noticeably reduced. If the number of video monitors is increased further, it can become unmanageable, so that sufficient monitoring quality may no longer be ensured.
- The rectangles are "set up" by pivoting them in the 2-D model of the football field by 90° about the side of each rectangle facing the surveillance camera.
- The proportions of the rectangle are distorted according to perspective, so that the 2-D model of the football field initially shows perspectively approximated images of the football players. In further steps this operation is carried out for the other football players, yielding views of each football player from different perspectives. These perspective views are merged into a model of the football player, so that a 3-D representation of the football players is formed.
- The proposed image processing module is preferably implemented as part of a monitoring system that is suitable and/or designed to monitor at least one surveillance area, the surveillance cameras being aimed and/or directed at relevant regions in the surveillance area.
- A further disclosed subject of the invention is therefore such a monitoring system with the image processing module.
- The surveillance area can be, for example, the interior of a building complex, a public space, a hospital, etc.
- The image processing module is configured to estimate an object position of a surveillance object, or portions thereof, in the surveillance area.
- An estimate in this context means a determination of the object position, where an arbitrary definition of the object position of the object (for example a foot point or a centroid) can be used.
- The surveillance object can be any moving, quasi-stationary or stationary object and is preferably characterized in that it is not itself represented in the model described below.
- The image processing module has a model input interface for receiving a model or a submodel of the surveillance area; model and submodel are collectively referred to below as the model.
- The model depicts static, quasi-static and/or non-static elements of the surveillance area.
- For example, the model may include the floor plans of a building, furniture, desks or the like. Quasi-static elements arise, for example, when monitoring a parking lot: parked cars usually remain unmoved for a longer dwell time. Furthermore, the model can also include non-static elements, such as passing trains, escalators, elevators, paternoster lifts or the like.
- An object input interface serves to transfer an object point of the surveillance object.
- The object point is determined on the basis of one or more pixels of the surveillance object in a surveillance image recorded with the surveillance camera.
- For example, the surveillance object is detected in the surveillance image, the detected image of the object is enclosed with a rectangle or polygon, and, for example, a foot point of the rectangle is passed as the object point.
- Alternatively, a centroid of the surveillance object or the like can be passed as the object point.
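- As an illustrative sketch only (not part of the original disclosure), the two object-point conventions just named could look as follows in Python; the (x, y, w, h) bounding-box format and the function names are assumptions:

```python
# Hypothetical sketch: deriving an object point from a detection.
# The (x, y, w, h) box format (top-left corner, width, height, image
# coordinates) is an assumption, not fixed by the patent text.

def foot_point(x: float, y: float, w: float, h: float) -> tuple[float, float]:
    """Midpoint of the bottom edge of an axis-aligned bounding box."""
    return (x + w / 2.0, y + h)

def centroid(pixels: list[tuple[float, float]]) -> tuple[float, float]:
    """Alternative object point: centroid of the object's pixels."""
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)
```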
- The image processing module comprises a camera input interface for receiving a camera model of the surveillance camera.
- The camera model includes in particular the position and/or orientation as well as the imaging and/or projection properties of the surveillance camera in world coordinates and/or model coordinates, so that the surveillance camera and its detection area can be integrated into the model.
- For example, the camera model is implemented as a transformation matrix that allows pixels in the image plane of the surveillance camera to be transformed into the model.
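- A minimal sketch of such a camera model, assuming a standard pinhole parameterization with intrinsic matrix K and pose R, t (the patent does not prescribe a specific parameterization; all names are illustrative):

```python
import numpy as np

# Hypothetical pinhole camera model: maps a pixel to a viewing half-line
# (projection centre + unit direction) in model/world coordinates.

class CameraModel:
    def __init__(self, K: np.ndarray, R: np.ndarray, t: np.ndarray):
        self.K = K  # 3x3 intrinsic matrix
        self.R = R  # 3x3 rotation, world -> camera
        self.t = t  # 3-vector translation, world -> camera

    @property
    def center(self) -> np.ndarray:
        """Projection centre in world coordinates: C = -R^T t."""
        return -self.R.T @ self.t

    def pixel_ray(self, u: float, v: float) -> tuple[np.ndarray, np.ndarray]:
        """Half-line (origin, unit direction) through pixel (u, v)."""
        d_cam = np.linalg.inv(self.K) @ np.array([u, v, 1.0])
        d_world = self.R.T @ d_cam
        return self.center, d_world / np.linalg.norm(d_world)
```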
- The model input interface and the camera input interface may also be formed as a common interface; the model then includes the camera model.
- The image processing module is designed, in terms of program and/or circuit technology, to determine the object position of the surveillance object, or subareas thereof, in the surveillance area by combining the model, the camera model and the object point.
- Preferably, the position of the object point in the model is determined first, i.e. the position of the object point in model coordinates, and in a further step this position in model coordinates is converted into a position in world coordinates in the surveillance area.
- The model is designed as a 3-D model, so that the object position can be determined as a 3-D object position.
- The 3-D model is distinguished from a 2-D model in particular in that it contains at least one element lying outside a common plane.
- The 3-D model thus has spatial elements such as 3-D triangular meshes, elevation maps, or even simple shapes or bodies such as spheres, cuboids or the like for modeling real elements in the surveillance area.
- The model should come as close as possible to the spatial conditions of the real scene.
- The model is also called a collision model.
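- A minimal sketch of such a collision model, under the simplifying assumption that every element (mesh, elevation map, simple body) is reduced to triangles; the class and its layout are illustrative, not taken from the patent:

```python
import numpy as np

# Hypothetical collision model: a flat list of triangles in world
# coordinates. Each triangle is a 3x3 array, one vertex per row.

class CollisionModel:
    def __init__(self):
        self.triangles: list[np.ndarray] = []

    def add_triangle(self, a, b, c) -> None:
        """Add one triangle given its three 3-D vertices."""
        self.triangles.append(np.array([a, b, c], dtype=float))
```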
- One idea of the invention is that surveillance objects must often be located not only in flat or level surveillance areas, but also in surveillance areas with varying heights. To return to the example of the football field, a grandstand or the like can no longer be sensibly handled by the known system.
- The embodiment according to the invention is advantageous if the 3-D model, for example, depicts a complete building and object positions are to be determined on different floors or intermediate levels. Because a "true" 3-D model is used and an object point of the surveillance object is mapped into the model, geometrically complex surveillance areas can easily be incorporated into the object position determination.
- The at least one surveillance camera has a detection area, that is to say a spatial region within the surveillance area that is observed with the surveillance camera, the surveillance area and/or the model extending farther or wider than the detection area.
- The detection area of the camera thus forms only a subarea of the surveillance area, or covers only a subarea of the model.
- Alternatively, the monitoring system has at least two surveillance cameras with overlapping and/or non-overlapping detection areas, the surveillance area and/or the model extending exactly over the detection areas, or further than the detection areas, of the at least two surveillance cameras. Both refinements aim at a model that is considerably more extensive than the detection area of a single surveillance camera, in order, as already explained, to be able to monitor complex surveillance areas effectively.
- The image processing module is designed such that the object point, which lies in the image plane or an equivalent plane of the surveillance camera and is given in image coordinates, is mapped on the basis of the camera model to an imaging point of the 3-D model.
- The imaging point in the 3-D model lies on a boundary surface of an element represented in the 3-D model.
- The imaging point determined in this way is interpreted as the 3-D object position and output, for example, via a further interface.
- The image processing module is designed such that the object point in the image plane is imaged onto a projection point in the model, the projection point lying on a projection plane of the surveillance camera arranged in the correct position in the model.
- The projection plane is defined as the plane through which rays pass that start from a projection center of the surveillance camera, run through one or more projection points, and meet the corresponding imaging point in the model.
- The mapping of the object point to the projection point is completely determined by the camera model.
- The image processing module is further developed such that at least one intersection of the half-line with one or more elements of the 3-D model is sought.
- In particular, a point on the boundary surface, i.e. for example a puncture point, is sought as the intersection. Since the half-line may pierce several elements of the 3-D model and/or pierce both a front face and a back face of an element of the 3-D model, it is further preferred that the intersection with the smallest distance to the projection center and/or to the projection plane be chosen as the imaging point.
- This intersection point is interpreted as the imaging point and initially represents the 3-D object position of the object point, and thus also of the object, in model coordinates, which can easily be converted into world coordinates.
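- The intersection search can be sketched with the classical Möller-Trumbore ray-triangle test; this is one possible realization under the triangle-reduction assumption above, not the patent's prescribed algorithm:

```python
import numpy as np

# Hypothetical sketch: intersect the half-line with every triangle of
# the collision model and keep the hit closest to the projection centre.

def ray_triangle(orig, d, tri, eps=1e-9):
    """Moeller-Trumbore: distance along the ray to `tri`, or None."""
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    s = orig - tri[0]
    u = (s @ p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (d @ q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) / det
    return t if t > eps else None   # keep only hits on the half-line

def imaging_point(orig, d, model):
    """Closest intersection of the half-line with the model, or None."""
    best = None
    for tri in model.triangles:
        t = ray_triangle(orig, d, tri)
        if t is not None and (best is None or t < best):
            best = t
    return None if best is None else orig + best * d
```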
- Optionally, a memory device is implemented in which the set of imaging points for a plurality of pixels and/or all pixels in the detection area of the surveillance camera is stored.
- In this way, the corresponding imaging point for an object point can be found in a technically simple manner in monitoring mode, without, for example, having to carry out the test for multiple intersections.
- The data content of the memory device can be stored in an initialization phase; alternatively, the content of the memory device can be updated regularly, irregularly and/or event-controlled.
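- A hedged sketch of such a memory device as a per-pixel lookup table, reusing the assumed CameraModel and imaging_point helpers from the sketches above; resolution and storage layout are assumptions:

```python
import numpy as np

# Hypothetical initialization phase: precompute the imaging point for
# every pixel, so monitoring mode only needs a table lookup.

def build_lookup(cam, model, width: int, height: int) -> np.ndarray:
    """(height, width, 3) table of imaging points; NaN marks pixels
    whose viewing ray does not hit the model."""
    table = np.full((height, width, 3), np.nan)
    for v in range(height):
        for u in range(width):
            orig, d = cam.pixel_ray(u + 0.5, v + 0.5)  # pixel centre
            hit = imaging_point(orig, d, model)
            if hit is not None:
                table[v, u] = hit
    return table
```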
- A further subject of the invention is a method for determining a 3-D object position of a surveillance object detected by a surveillance camera in a surveillance area, which is preferably carried out using the image processing module just described or as claimed in one of the preceding claims.
- In a first step, an object point of the surveillance object is determined in the image plane of the surveillance camera. For this purpose, for example, as already stated, a rectangle is placed around the detected surveillance object and the base point of the rectangle is selected as the object point.
- The object point is then mapped, on the basis of the camera model of the surveillance camera, onto an imaging point of a 3-D model or 3-D submodel; both are also referred to collectively below as the 3-D model.
- The mapping can take place, for example, by forming a transformation matrix on the basis of the camera model which transforms the video image of the surveillance camera, or sections thereof, from the image plane onto a projection surface of the 3-D model, the position and size of the transformed video image being chosen such that the pixels of the video image are arranged as correctly positioned projection points in the 3-D model.
- The object point is mapped onto a projection point in the projection plane by means of the transformation matrix.
- A half-line starting at the projection center of the surveillance camera and passing through the projection point is then intersected with the model, yielding an imaging point formed as an intersection of the half-line with the 3-D model.
- The 3-D object position of the imaging point is then interpreted as the 3-D object position of the object point and/or of the surveillance object.
- Optionally, some or all object points of the surveillance object can also be projected into the 3-D model in order to obtain more detailed 3-D position information about the surveillance object.
- If several intersections exist, the intersection with the smallest distance to the projection center is selected as the imaging point.
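- Putting the assumed sketches together, an end-to-end position query for one detection might look as follows; cam and model stand for a CameraModel and a CollisionModel built beforehand, and all names remain hypothetical:

```python
# Hypothetical end-to-end use of the sketches above: bounding box in,
# 3-D object position in model/world coordinates out.

u, v = foot_point(x=412, y=188, w=36, h=96)   # object point in the image
orig, d = cam.pixel_ray(u, v)                 # half-line from the camera
pos_3d = imaging_point(orig, d, model)        # closest model intersection
if pos_3d is not None:
    print("3-D object position:", pos_3d)
```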
- Another possible embodiment is that some, a plurality and/or all pixels in the image plane of the surveillance camera are mapped to imaging points of the 3-D model, so that these are available in the manner of a look-up table. This step may be carried out periodically, irregularly and/or event-driven, and/or depending on updates of the 3-D model.
- A further subject of the invention is a computer program having the features of claim 12.
- FIG. 1 is a block diagram of a monitoring system having an image processing module as an embodiment of the invention
- FIG. 2 is a schematic representation of a 3-D model for illustrating the method according to the invention.
- FIG. 1 shows a schematic block diagram of a monitoring system 1, which is connected and/or connectable to a plurality of surveillance cameras 2.
- The surveillance cameras 2 are distributed over a surveillance area (not shown), the detection areas of the surveillance cameras 2 partially overlapping; however, detection areas are also provided that have no intersection with the detection area of any other surveillance camera 2.
- The monitoring system 1 has an image processing module 3, which is designed to determine a 3-D object position of a surveillance object in the surveillance area on the basis of input data defined in more detail below.
- This image processing module 3 is used whenever it is of interest to obtain the 3-D object position in world coordinates for a surveillance object detected by the monitoring system 1.
- The surveillance object is detected in a video image of one of the surveillance cameras 2 and marked, for example, by a "bounding box", i.e. a rectangle overlaid on the video image at the data level.
- A base point of the bounding box, i.e. a point lying in the middle of the edge of the rectangle that, interpreted in perspective, should represent the underside of the surveillance object, is defined as the object point.
- The operation of object point extraction is performed in a detection device 4, from which the object point is transferred to an evaluation device 5, which is designed to determine the 3-D object position of this object point, as explained below.
- The evaluation device 5 receives from a first data memory 6 a model of the surveillance area.
- The model is designed as a 3-D model and thus contains elements defined outside a common plane.
- The spatial extent of the model can be limited to the detection area of a single surveillance camera 2, be larger than this detection area, span at least the detection areas of two surveillance cameras 2, or cover a plurality of detection areas of surveillance cameras 2.
- The model should come as close to reality as possible and has, for example, spatial elements of the kind described above (3-D meshes, elevation maps, simple bodies).
- The evaluation device 5 receives from a second data memory 7 a camera model of the surveillance camera 2 with which the surveillance object is detected or which provides the object point.
- The camera model includes data on the position and orientation of the surveillance camera 2 in model or world coordinates and on its optical imaging properties, so that, for example, the detection area of the surveillance camera 2 is modeled in the 3-D model.
- On this basis, the 3-D object position of the object point can now be calculated in model or world coordinates. This calculation is explained with reference to the schematic representation of a 3-D model 8 in the following figure.
- The 3-D model 8 is shown in simplified form in FIG. 2 with two elements, namely a cuboid 9 and a modeled surveillance camera 10.
- The 3-D model is designed as a wireframe model.
- The cuboid 9 represents any real object in the surveillance area, such as a desk or the like. A plurality of such bodies can thus form the 3-D model.
- The modeled surveillance camera 10 comprises a projection center 11 and a projection plane 12.
- The projection center 11 and the projection plane 12 are arranged such that a pixel of the real surveillance camera 2 corresponding to the modeled camera 10 is imaged, in the correct position, as a projection point on the projection plane 12, the projection point lying on a half-line that begins at the projection center 11 and extends through the imaging point corresponding to the pixel.
- The object point is entered as a projection point 14 in the correct position in the model 8.
- The half-line 13 is formed by the projection center 11 and the projection point 14, so that the direction of the half-line 13 is defined by these input data.
- In the example shown there are two points of intersection, namely the imaging point 15 and a further intersection point 16. If there are several intersections 15, 16, the evaluation device 5 checks which intersection point 15, 16 has the smallest distance to the projection center 11, since this point must represent the upper or visible side of the cuboid 9, i.e. the side of the 3-D model 8 facing the modeled surveillance camera 10.
- The 3-D position data of the imaging point 15 are subsequently output as the 3-D object position by the evaluation device 5 or the image processing module 3.
- The illustrated method allows position determination in rooms and at different heights. It is particularly noteworthy that walls or similar visual collision objects are taken into account.
- The image processing module has another input interface to a third data memory 17, in which an assignment table is stored that contains, for each pixel of the surveillance cameras 2, a correspondingly calculated 3-D object position.
- The third data memory 17 is refreshed, for example, in an initialization phase, but also when the model, the camera model, etc. are updated.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102007056835A DE102007056835A1 (de) | 2007-11-26 | 2007-11-26 | Bildverarbeitunsmodul zur Schätzung einer Objektposition eines Überwachungsobjekts, Verfahren zur Bestimmung einer Objektposition eines Überwachungsobjekts sowie Computerprogramm |
PCT/EP2008/062883 WO2009068336A2 (de) | 2007-11-26 | 2008-09-26 | Bildverarbeitungsmodul zur schätzung einer objektposition eines überwachungsobjekts, verfahren zur bestimmung einer objektposition eines überwachungsobjekts sowie computerprogramm |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2215601A2 true EP2215601A2 (de) | 2010-08-11 |
Family
ID=40577061
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP08804765A Ceased EP2215601A2 (de) | 2007-11-26 | 2008-09-26 | Modell-basierte 3d positionsbestimmung eines objektes welches mit einer kalibrierten kamera zu überwachungszwecken aus einer einzigen perspektive aufgenommen wird wobei die 3d position des objektes als schnittpunkt der sichtgeraden mit dem szenenmodell bestimmt wird |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP2215601A2 (de) |
DE (1) | DE102007056835A1 (de) |
WO (1) | WO2009068336A2 (de) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102010003336A1 (de) | 2010-03-26 | 2011-09-29 | Robert Bosch Gmbh | Verfahren zur Visualisierung von Aktivitätsschwerpunkten in Überwachungsszenen |
CN112804481B (zh) * | 2020-12-29 | 2022-08-16 | 杭州海康威视系统技术有限公司 | 监控点位置的确定方法、装置及计算机存储介质 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9719694D0 (en) | 1997-09-16 | 1997-11-19 | Canon Kk | Image processing apparatus |
WO2004042662A1 (en) * | 2002-10-15 | 2004-05-21 | University Of Southern California | Augmented virtual environments |
- 2007
  - 2007-11-26: DE application DE102007056835A filed (published as DE102007056835A1), status: not active, withdrawn
- 2008
  - 2008-09-26: WO application PCT/EP2008/062883 filed (published as WO2009068336A2), status: active, application filing
  - 2008-09-26: EP application EP08804765A filed (published as EP2215601A2), status: not active, ceased
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2009068336A2 * |
Also Published As
Publication number | Publication date |
---|---|
DE102007056835A1 (de) | 2009-05-28 |
WO2009068336A3 (de) | 2010-01-14 |
WO2009068336A2 (de) | 2009-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE60133386T2 (de) | Vorrichtung und verfahren zur anzeige eines ziels mittels bildverarbeitung ohne drei dimensionales modellieren | |
DE112005000929B4 (de) | Automatisches Abbildungsverfahren und Vorrichtung | |
DE602006000627T2 (de) | Dreidimensionales Messverfahren und dreidimensionale Messvorrichtung | |
DE102005021735B4 (de) | Videoüberwachungssystem | |
WO2008083869A1 (de) | Verfahren, vorrichtung und computerprogramm zur selbstkalibrierung einer überwachungskamera | |
DE102011084554A1 (de) | Verfahren zur Darstellung eines Fahrzeugumfeldes | |
DE112016001150T5 (de) | Schätzung extrinsischer kameraparameter anhand von bildlinien | |
EP2553660B1 (de) | Verfahren zur visualisierung von aktivitätsschwerpunkten in überwachungsszenen | |
DE112016006262T5 (de) | Dreidimensionaler Scanner und Verarbeitungsverfahren zur Messunterstützung für diesen | |
DE112017001545T5 (de) | Virtuelles überlagerungssystem und verfahren für verdeckte objekte | |
DE102017121694A1 (de) | Vorrichtung und Verfahren zum Erzeugen dreidimensionaler Daten und Überwachungssystem mit Vorrichtung zur Erzeugung dreidimensionaler Daten | |
DE102016124978A1 (de) | Virtuelle Repräsentation einer Umgebung eines Kraftfahrzeugs in einem Fahrerassistenzsystem mit mehreren Projektionsflächen | |
DE102016124747A1 (de) | Erkennen eines erhabenen Objekts anhand perspektivischer Bilder | |
DE102020127000A1 (de) | Erzeugung von zusammengesetzten bildern unter verwendung von zwischenbildflächen | |
DE102009026091A1 (de) | Verfahren und System zur Überwachung eines dreidimensionalen Raumbereichs mit mehreren Kameras | |
WO2009068336A2 (de) | Bildverarbeitungsmodul zur schätzung einer objektposition eines überwachungsobjekts, verfahren zur bestimmung einer objektposition eines überwachungsobjekts sowie computerprogramm | |
DE102018118422A1 (de) | Verfahren und system zur darstellung von daten von einer videokamera | |
WO2006094637A1 (de) | Verfahren zum vergleich eines realen gegenstandes mit einem digitalen modell | |
DE112022002520T5 (de) | Verfahren zur automatischen Kalibrierung von Kameras und Erstellung von Karten | |
EP2940624B1 (de) | Dreidimensionales virtuelles Modell einer Umgebung für Anwendungen zur Positionsbestimmung | |
DE102012010799B4 (de) | Verfahren zur räumlichen Visualisierung von virtuellen Objekten | |
DE102019201600A1 (de) | Verfahren zum Erzeugen eines virtuellen, dreidimensionalen Modells eines realen Objekts, System zum Erzeugen eines virtuellen, dreidimensionalen Modells, Computerprogrammprodukt und Datenträger | |
DE102019102561A1 (de) | Verfahren zum Erkennen einer Pflastermarkierung | |
EP1434184B1 (de) | Steuerung eines Multikamera-Systems | |
DE102012025463A1 (de) | Verfahren zum Bestimmen eines Bewegungsparameters eines Kraftfahrzeugs durch Auffinden von invarianten Bildregionen in Bildern einer Kamera des Kraftfahrzeugs, Kamerasystem und Kraftfahrzeug |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
| AX | Request for extension of the european patent | Extension state: AL BA MK RS |
| 17P | Request for examination filed | Effective date: 20100714 |
| RBV | Designated contracting states (corrected) | Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR |
| DAX | Request for extension of the european patent (deleted) | |
| 17Q | First examination report despatched | Effective date: 20130104 |
| REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
| 18R | Application refused | Effective date: 20180215 |