EP1854083A1 - Camera servant a poursuivre des objets - Google Patents
Camera servant a poursuivre des objetsInfo
- Publication number
- EP1854083A1 (application EP06707263A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- camera
- tracking
- unit
- image
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19606—Discriminating between target movement or movement in an area of interest and other non-significative movements, e.g. target movements induced by camera shake or movements of pets, falling leaves, rotating fan
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
- G08B13/19654—Details concerning communication with a camera
- G08B13/19656—Network used to communicate with a camera, e.g. WAN, LAN, Internet
- G08B13/19663—Surveillance related processing done local to the camera
Definitions
- the invention relates to a camera for tracking objects with an image sensor unit for generating image data and to a processing unit for processing the image data transferred from the image sensor unit to the processing unit.
- the invention also relates to a multi-camera system having at least two cameras and to a method for processing image data in a camera for tracking objects.
- Tracking applications based on a network of distributed cameras are becoming increasingly popular in the field of security technology for monitoring airports, train stations, museums or public places, as well as in the field of industrial image processing in production lines and vision-guided robots.
- Traditional centralized approaches have many disadvantages here.
- Today's systems typically transmit the complete raw image stream of the camera sensor via expensive and distance-limited connections to a central computer, where all of it then has to be processed.
- the cameras are thus typically regarded only as simple sensors and the processing takes place only after elaborate transmission of the raw video stream. This concept quickly reaches its limits in multi-camera systems and cameras with high resolutions and / or frame rates.
- The invention is thus based on the problem of providing object tracking by cameras which is able to work with multiple cameras and bandwidth-limited networks.
- For this purpose, a camera for tracking objects is provided with an image sensor unit for generating image data and a processing unit for processing the image data transferred from the image sensor unit to the processing unit.
- The processing unit has an ROI (Region of Interest) selection unit for selecting image areas of interest for object tracking and a tracking unit for determining tracking data of objects to be tracked from the image data.
- the processing of the image data thus already takes place in the camera, so that it is not necessary to transmit the complete, raw video stream in full resolution to an external processing unit. Instead, only the resulting tracking data is transmitted.
- the image data to be processed is already severely limited in its amount, so that the processing of the data can be done in real time, which is of great importance in tracking applications. Since only the resulting data has to be transmitted by the camera, the use of standard network connections becomes possible in the first place.
- no external computer is required to calculate the tracking data, as this is already done inside the camera. An optionally existing central computer can then be used for higher-level tasks.
- the tracking data can be output at a signal output of the camera, the tracking data having a significantly reduced amount of data compared with the quantity of image data generated by the image sensor unit, in particular reduced by a factor of about 1000.
- On the one hand, the selection of image areas which are of interest for object tracking and, on the other hand, the calculation of the tracking data within the camera contribute to this considerable reduction of the amount of data to be transmitted according to the invention.
- A camera image stream in VGA resolution already requires about a third of the 100 Mbps standard Ethernet bandwidth; this figure assumes transmission of the raw Bayer-mosaic data, otherwise triple the bandwidth is needed.
- A reduction to a few hundred kilobits per second is made possible, since only the results are transmitted. Since, according to the invention, the raw video stream is no longer limited by the bandwidth of the connection to the outside, sensors with very high spatial and temporal resolution can be used in the camera according to the invention.
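- As an illustration of the orders of magnitude involved, the following back-of-the-envelope calculation (not part of the patent; the frame rate of 25 fps, 8 bits per Bayer pixel and a tracking-result rate of 200 kbit/s are assumptions) compares the raw video stream with the transmitted tracking results:

```python
# Back-of-the-envelope bandwidth estimate (illustrative only; the frame
# rate, bits per pixel and tracking-result rate are assumptions).
width, height = 640, 480          # VGA resolution
fps = 25                          # assumed frame rate
bits_per_pixel_bayer = 8          # raw Bayer-mosaic data, one value per pixel

raw_bayer_mbps = width * height * fps * bits_per_pixel_bayer / 1e6
raw_rgb_mbps = raw_bayer_mbps * 3  # full-color data needs roughly triple the bandwidth

tracking_kbps = 200               # assumed size of the transmitted tracking results

print(f"raw Bayer stream : {raw_bayer_mbps:6.1f} Mbit/s")
print(f"raw RGB stream   : {raw_rgb_mbps:6.1f} Mbit/s")
print(f"tracking results : {tracking_kbps / 1000:6.1f} Mbit/s "
      f"(reduction by a factor of ~{raw_bayer_mbps * 1000 / tracking_kbps:.0f})")
```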
- The tracking data are provided in particular in the form of an approximated probability density function.
- the probability density function is approximated by a plurality of nodes.
- Only the target data of interest for a tracking application, such as position and speed of an object to be tracked, are calculated and then output by the camera.
- By approximating the probability density function with a plurality of support points, whose position and number may be adaptively changed, a significant reduction of the computational effort to be performed is achieved. Nevertheless, it has been shown that a precision sufficient for tracking applications can be achieved.
- parallel processing means are provided in the processing unit for the parallel processing of the interpolation points of the probability density function and data dependent thereon.
- The tracking unit implements a so-called particle filter, in which a probability density function p(Xt | Zt) is approximated, where Xt denotes the state at time t and Zt all measurements up to and including time t.
- In the sampling step, the probability density function is sampled and new support points for approximating the state vector Xt are thus determined.
- In the prediction step, the new state vector Xt of an object to be tracked is determined by means of the old measurements Zt-1 and the old state vector Xt-1, taking into account a stored motion model; in the measuring step the new state vector Xt is then weighted taking into account a new measurement.
- In the approximation step, the approximation of the probability density function p(Xt | Zt) by a plurality of support points takes place.
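- The predict–measure–approximate cycle described above can be illustrated by the following minimal particle filter sketch; the one-dimensional constant-velocity motion model, the Gaussian measurement likelihood and all parameters are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # number of support points (particles)

# state X = (position, velocity); start with a broad initial distribution
particles = rng.normal([0.0, 0.0], [10.0, 1.0], size=(N, 2))
weights = np.full(N, 1.0 / N)

def predict(particles, dt=1.0, noise=(0.5, 0.1)):
    """Prediction step: apply a constant-velocity motion model plus noise."""
    particles = particles.copy()
    particles[:, 0] += particles[:, 1] * dt
    particles += rng.normal(0.0, noise, size=particles.shape)
    return particles

def measure(particles, z, sigma=1.0):
    """Measuring step: weight each support point by the likelihood of z."""
    w = np.exp(-0.5 * ((particles[:, 0] - z) / sigma) ** 2)
    return w / w.sum()                      # normalize all weights

def resample(particles, weights):
    """Approximation step: redistribute support points according to weight."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

for z in [1.0, 2.1, 3.2, 4.0]:              # a few synthetic measurements
    particles = predict(particles)
    weights = measure(particles, z)
    estimate = np.average(particles[:, 0], weights=weights)  # expected value
    particles = resample(particles, weights)
    print(f"measurement {z:4.1f} -> estimated position {estimate:5.2f}")
```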
- the tracking unit transmits tracking data of objects to be tracked, in particular a prediction comparison object, to the ROI selection unit in order to select the image areas of interest for the processing as a function of the tracking data.
- By selecting the image areas of interest on the basis of tracking data, it can be ensured with high probability that only relevant image areas are evaluated. For example, it is possible to use the tracking data to calculate back to a comparison object of the object to be tracked, and it is then decided on the basis of this comparison object which image areas from the current camera image should be selected. In the case of an object to be tracked which moves at a constant speed, the comparison object would thus correspond to its image in the last camera shot, only shifted in position.
- the prediction comparison object is generated by means of a stored parametric model which is adaptively adaptable.
- the image data of the image area selected by the ROI selection unit is converted into a color histogram in the processing unit and the tracking unit determines the tracking data on the basis of the color histogram.
- a color histogram has advantages in terms of robustness of the processing algorithms in terms of rotations, partial occlusion and deformation.
- The color histogram can be formed in the HSV color space (hue-saturation-value), in the RGB color space (red-green-blue) or in the CMY color space (cyan-magenta-yellow).
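- As a sketch of how the filter step might convert a selected image area into a color histogram, the following example builds a normalized hue–saturation histogram with OpenCV; the use of OpenCV, the bin counts and the ROI format are assumptions for illustration only:

```python
import cv2
import numpy as np

def roi_to_hsv_histogram(image_bgr, roi, bins=(16, 16)):
    """Convert the ROI of a BGR image into a normalized H-S color histogram.

    roi is (x, y, w, h); the bin counts are illustrative assumptions.
    """
    x, y, w, h = roi
    patch = image_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# usage with a synthetic image and a hypothetical ROI
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
hist = roi_to_hsv_histogram(frame, roi=(100, 100, 64, 64))
print(hist.shape)   # (256,) = 16 x 16 bins over hue and saturation
```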
- the ROI selection unit controls the image sensor unit as a function of the tracking data in such a way that only those image data are transferred from the image sensor unit to the processing unit which correspond to the image areas selected by the ROI selection unit.
- The bandwidth from the sensor to the processing hardware can be significantly reduced by transferring to the processing only that combination of image areas which is actually required for processing the tracking algorithm. This happens regardless of the physical resolution of the sensor. These regions of interest are generated dynamically from frame to frame and transmitted to the sensor. Of course, the sensor must allow such direct access to image areas, but this is the case with today's CMOS sensors.
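- As an illustration of such ROI-driven readout, the following sketch (not part of the patent) shows how predicted object positions could be turned into rectangular readout windows; the window size, image size and input format are hypothetical:

```python
def select_rois(predicted_states, half_size=(32, 32), image_size=(640, 480)):
    """Derive one rectangular ROI per predicted object position.

    predicted_states: iterable of (x, y) positions in pixels (hypothetical input).
    Returns clipped (x, y, w, h) rectangles the sensor is asked to read out.
    """
    hw, hh = half_size
    width, height = image_size
    rois = []
    for x, y in predicted_states:
        x0 = max(0, int(x) - hw)
        y0 = max(0, int(y) - hh)
        x1 = min(width, int(x) + hw)
        y1 = min(height, int(y) + hh)
        rois.append((x0, y0, x1 - x0, y1 - y0))
    return rois

# Example: two predicted object positions lead to two small readout windows,
# so only a fraction of the full frame is transferred to the processing unit.
print(select_rois([(120.3, 80.7), (610.0, 470.0)]))
```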
- the image sensor unit and the processing unit are integrated in a common housing.
- the processing unit has a network unit.
- the camera according to the invention can thereby be integrated into a network, for example a wireless network, without problems. That this is possible at all, is due to the very low bandwidth in the invention, which is required for a transmission of the results calculated in the camera to the outside.
- A control unit and setting means are provided in order to change setting parameters of the camera, in particular alignment, image detail and magnification, as a function of the tracking data. Since the camera calculates the tracking data itself, a control unit in the camera can also carry out the repositioning of the camera. It is essential that no signal transmission to the outside is required for this purpose. A failure of a network to which the camera is connected therefore does not interrupt the tracking. Even if there is no longer any connection from the camera to a central evaluation station, the camera continues to follow the object, so that the impression of continuous monitoring is maintained and, as soon as the connection is established again, the monitoring can continue seamlessly.
- the problem underlying the invention is also solved by a method for processing image data in a camera for tracking objects, in which the following steps are provided:
- With the method according to the invention it is possible to transmit only the result data of an object tracking from the camera to the outside, so that the externally required transmission bandwidth is already substantially reduced.
- Only those image data are selected for the processing which are likely to contain information about objects to be tracked, for example by means of a feedback of the tracking data to a selection unit. This creates the opportunity to realize an object tracking by means of cameras even with high spatial and temporal resolution in real time.
- The step of selecting regions of the image data includes driving the image sensor unit in such a way that only those image data are transferred from the image sensor unit to the processing unit for which there is an increased probability that they contain information about objects to be tracked.
- the amount of image data to be transmitted by the image sensor unit can be significantly reduced.
- the step of generating tracking data comprises approximating a probability density function by means of a plurality of interpolation points.
- circuits for processing the individual support points in hardware or software can be executed in parallel, so that a very fast generation of the tracking data is possible.
- the step of generating tracking data includes the generation of image data of a comparison object based on a probability density function of the objects to be tracked and at least one stored parametric model of the objects to be tracked.
- The calculated tracking results can be converted back into image data, and these image data of a comparison object can then be compared with the current camera image in order to assess the quality of the tracking results and adjust them if necessary.
- the image data of the comparison object can be used to select only those image data by means of the selection unit, which essentially correspond to the image detail of the comparison object.
- a multi-camera system having at least two cameras according to the invention, in which each camera has a network unit and the at least two cameras are connected to one another via a network, in particular Ethernet or WLAN.
- multi-camera systems with the cameras according to the invention can be realized on the basis of standard network applications. This is also possible with wireless network connections, for example.
- the communication over the network can of course be bidirectional.
- the cameras can not only output the result data, but also receive information about objects to be tracked or control signals for setting and aligning the camera optics via the network.
- the processing unit of at least one of the cameras is designed to process tracking data of another camera.
- a central processing unit is provided in the network for evaluating the tracking data transmitted by the at least two cameras.
- Using the tracking data, evaluations can be made.
- typical motion sequences can be used for object recognition or to recognize emergency situations.
- FIG. 1 is a schematic representation of a camera according to the invention for object tracking
- FIG. 2 shows a schematic representation of a multi-camera system according to the invention
- FIG. 3 shows a block diagram of a preferred embodiment of the camera according to the invention
- FIG. 4 shows a schematic representation of a multi-camera system according to the invention in an application for beach monitoring
- FIG. 5 is a schematic representation of another embodiment of a camera according to the invention.
- FIG. 6 shows a schematic representation of a multi-camera system according to the invention
- FIG. 7 is a schematic representation to illustrate the method according to the invention
- FIG. 10 shows representations of a probability density function of a tracked object according to the method according to the invention.
- FIG. 1 shows a camera 10 according to the invention for object tracking, which has an image sensor unit 12 and a processing unit 14 in a common housing.
- the image sensor unit 12 is designed, for example, as a CMOS sensor and supplies image data to the processing unit 14.
- tracking data are generated which characterize an object to be tracked, at least in terms of position and speed and also, for example, in terms of shape, color and the like.
- the processing unit 14 has a so-called tracking unit in which the tracking data are generated.
- the processing unit 14 has a region of interest (ROI) selection unit, with which the image sensor unit 12 can be controlled in such a way that only the image areas that are of interest for the object tracking are transferred to the processing unit 14.
- The ROI selection unit also selects the image areas taking into account the tracking data. From the image sensor unit 12 to the processing unit 14, only those image areas are transmitted for which there is a high probability that they can provide information about the object to be tracked.
- the combination of a ROI selection method and the generation of the tracking data within the camera 10 itself enables the result output of the camera 10, symbolized by a double arrow 16, to require only a very small bandwidth and that this result transmission can take place over a standard network.
- the generation of the tracking data within the camera 10 can be done so fast that real-time applications can be realized.
- the structure of the camera 10 will be explained in more detail below.
- FIG. 2 shows a multi-camera system with several cameras 10a, 10b, 10c according to the invention.
- Each of the cameras 10a, 10b and 10c is constructed identically to the camera 10 of FIG. 1.
- the cameras 10a, 10b, 10c are connected to each other via a network 18.
- a data exchange with the network 18 can be bidirectional, so that tracking data of an object to be tracked can be passed from the camera 10a to the camera 10b, for example, when the object to be tracked leaves the detection area of the camera 10a.
- The tracking data can also be transferred from the camera 10a to the camera 10c, and depending on which detection area an object to be tracked moves into, the camera recognizing the object to be tracked can then output further tracking results.
- The image sensor unit 12 generates image data and supplies it to the processing unit 14, the processing unit 14 being indicated in FIG. 3 merely by a dashed outline.
- The image data from the image sensor unit 12 are first transferred to an ROI selection unit 20, which initially only loops the image data through or buffers it in a cache, so that double or multiple transmission of overlapping image areas is avoided.
- the task of the ROI selection unit 20 is to control the image sensor unit 12 so that only the image areas of interest for further processing are forwarded. How the ROI unit 20 determines these image areas of interest will be explained below. If the ROI unit 20 does not fulfill a buffering function, the image sensor unit 12 can also pass on the image data while bypassing the ROI unit 20.
- At 22, image data of image areas are thus provided for which there is a high probability that they contain information about the objects to be tracked.
- This image data is passed to a filter 24 which is optional and which then provides the filtered data at 26.
- the filter 24 can, for example, convert the image data from 22 into a color histogram in the HSV color space (Hue-Saturation Value).
- the filter 24 can also implement a color histogram in the RGB color space (red-green-blue).
- the implementation in color histograms has the advantage that the robustness of the subsequent evaluation is significantly increased, for example, against rotations and / or changes in shape of an object to be tracked.
- the filtered image data 26 are then fed to a comparison unit 28, in which a comparison measurement is performed and the image data 26 corresponding to the object to be tracked are compared with similarly prepared data of a comparison object.
- the resulting weights of all nodes must then be normalized.
- The comparison unit 28 then outputs an approximated probability density function 30, which simultaneously represents the central output of the camera 10.
- the probability density function 30, which is efficiently approximated by means of several nodes, represents the result of the tracking unit and only requires a small bandwidth for transmission over a network.
- the approximated probability density function 30 may then be output via a network I / O unit 32 and supplied to further units that perform further processing based on this result.
- In a unit 34, a maximum likelihood state, i.e. the state in which the probability density function is maximal, can be calculated. In the present approximation by support points, this means that the support point with the highest weight is used. Furthermore, an expected value can be calculated in the unit 34.
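- The following small sketch illustrates how the maximum-likelihood state (the support point with the highest weight) and the expected value could be extracted from a weighted support-point approximation; the data and names are purely illustrative:

```python
import numpy as np

def map_and_mean(particles, weights):
    """Return the highest-weighted support point and the weighted mean state."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize, as required above
    map_state = particles[int(np.argmax(weights))]
    mean_state = np.average(particles, axis=0, weights=weights)
    return map_state, mean_state

particles = np.array([[1.0, 0.2], [1.5, 0.1], [5.0, -0.3]])  # (position, velocity)
weights = [0.2, 0.7, 0.1]
print(map_and_mean(particles, weights))
```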
- the unit 34 may also output the result of its evaluation via the network I / O unit 32 to a network.
- a control unit 36 uses the probability density function 30 for control applications. For this purpose, the control unit 36 generates control signals for a so-called pan-tilt unit, on which the camera 10 is mounted. By means of this pan-tilt unit, the camera 10 can be tracked to an object to be tracked. Alternatively, the control signals of the control unit 36 may also be output to a robot controller or CNC machine controller.
- Further units 38 which use the probability density function 30 for further processing, generate, for example, commands for passing persons / objects into a multi-camera system when a person traverses the field of view from one camera to the next.
- The initialization of a target object is basically done by presenting it in front of the camera and training.
- the units 34, 36 and 38 may output their respective results via the network I / O unit to a network or, if there is no network, to a signal line.
- the probability density function 30 is also supplied to a so-called update unit 40, in which a time index of the probability density function being calculated is reduced by one in order to classify the probability density function just calculated no longer as the current value but as the most recent old value.
- the update unit 40 is thus the first station of a feedback loop within the tracking unit 21.
- In this feedback loop, on the one hand, a prediction is made as to how the probability density function is likely to appear at the next time step, and based on this prediction a comparison object is again generated which, as already described, is then compared in the comparison unit 28 with the currently detected object.
- a weighting of the individual nodes is made and based on this weighting, it is decided whether a redistribution of the support points for the next pass of the loop is required.
- This probability density function from 42 is linked for prediction to a motion model 44, which in the illustrated embodiment is also in the form of a probability density function.
- The linking of the motion model 44 with the probability density function from 42 takes place in a prediction unit 46.
- a convolution of the motion model is performed with the probability density function, as set forth in the equation found below the unit 46.
- a new interpolation point distribution is generated on the basis of the weighting of the interpolation points, with interpolation points of high weight receiving a number of successors corresponding to the weighting in the last iteration, but all of them are initially arranged at the same position.
- The position of the new support points is scattered after applying the motion model. The motion model is to be applied only once per new support point; only then is the position scattered. Support points with low weighting receive no successor.
- a new probability density function is output at 48, which correspondingly represents a predicted position based on the knowledge previously available.
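- The redistribution described above can be sketched as follows: each support point receives a number of successors proportional to its weight, the motion model is applied once per successor and only then is the position scattered; the constant-shift motion model and the noise level are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def redistribute(particles, weights, motion=np.array([2.0, 0.0]), scatter=0.5):
    """Give each support point round(N * weight) successors, move them once
    according to a (here: constant-shift) motion model, then scatter them."""
    n = len(particles)
    counts = np.round(np.asarray(weights) * n).astype(int)       # successors per point
    successors = np.repeat(particles, counts, axis=0)            # low weights -> no successor
    successors = successors + motion                             # apply motion model once
    successors += rng.normal(0.0, scatter, size=successors.shape)  # scatter positions
    return successors

pts = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 0.5]])
w = [0.6, 0.35, 0.05]
print(redistribute(pts, w))
```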
- The prediction of the probability density function from 48 is linked in a rendering unit 50 to a parametric model from 52.
- the rendering step in render unit 50 generates the image data of a comparison object. In the simplest case of an object moving linearly at a constant speed, the image data of the comparison object would thus correspond to the object displaced by a certain distance.
- the parametric model from 52 can be adapted depending on external circumstances. This is of importance, for example, when objects with complex geometry are to be traced, whose shape may even change, whose projection changes as a function of a rotational position or with changing illumination.
- an adaptation is only carried out if it is very likely that it is also the object to be tracked, which has now changed its appearance. For example, the environment of a support point of the probability density function with the relatively highest weighting may not be used for adaptation at each step. If, in fact, the object to be tracked is no longer located in the viewed image section, an adaptation then carried out would result in the parametric model being changed in such a way that recognition of the object to be tracked is not possible.
- A remedy can be provided, for example, by additionally testing the environment of the support point with the relatively highest weight for its absolute weighting; only above a defined weighting, i.e. if it can be assumed with great certainty that it is indeed the object to be tracked, is the environment of this support point used for adaptation.
- An image region (ROI) of the target object can serve as the model.
- The model 52 can also be a so-called AAM implementation (Active Appearance Model); this non-rigid and optionally textured model is advantageous in particular in the case of changes in shape. A three-dimensional AAM is also possible.
- the filter 24 can be completely eliminated. It is also possible to use a contour-based method as a model, where the state determines the shape of the contour, for example with splines.
- image data of a comparison object is thus available at 54.
- These image data of the comparison object at 54 will now be compared with the currently recorded image data at 22.
- These image data from 54 are subjected to the same filtering as the image data from 22, so that a filter unit 56 identical to the filter unit 24 is correspondingly provided, after which the filtered image data of the comparison object are present.
- The image data of the object to be tracked currently recorded by the image sensor unit 12 and the image data of the comparison object are then compared with one another in the comparison unit 28.
- the comparison measurement corresponds to a weighting of the new state Xt according to the new measurement Zt.
- the probability density function 30 results as a result of the comparison measurement in the comparison unit 28.
- the image data of the comparison object is also supplied to the ROI selection unit 20 at 54.
- The ROI unit 20 controls the image sensor unit 12 to request only those regions of interest corresponding to the image regions of the image data of the comparison object from 54.
- The ROI selector 20 implements a caching method for overlapping ROIs of the same iteration, so that overlapping areas of different image regions of interest need only be transferred once.
- For each support point, only that image region (ROI) is determined which is actually needed to evaluate this state, that is, the hypothesis which manifests itself in the comparison object. Technically, this is done for each sample Xt(i).
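- The caching of overlapping ROIs can be sketched, for example, by merging the per-support-point regions into a single readout mask, so that overlapping pixels are requested from the sensor only once; the mask-based representation is an illustrative choice, not taken from the patent:

```python
import numpy as np

def merge_rois(rois, image_size=(480, 640)):
    """Merge per-support-point ROIs into one boolean readout mask.

    rois: list of (x, y, w, h). Overlapping pixels appear only once in the mask,
    so they are transferred from the sensor only a single time.
    """
    mask = np.zeros(image_size, dtype=bool)
    for x, y, w, h in rois:
        mask[y:y + h, x:x + w] = True
    return mask

rois = [(10, 10, 40, 40), (30, 30, 40, 40)]      # two overlapping regions
mask = merge_rois(rois)
print(mask.sum(), "pixels requested instead of", sum(w * h for _, _, w, h in rois))
```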
- The camera according to the invention and the method implemented are highly suitable for parallel processing. Only for determining the probability density function 30, or more precisely the approximation of the probability density function by multiple support points, do all support points have to be merged and normalized.
- the other explained calculation steps can be carried out separately for each support point and, for example, can also be implemented in parallel hardware.
- the camera according to the invention and the method according to the invention are therefore particularly suitable for real-time applications.
- the invention can also be applied to cameras with more than one sensor element.
- A stereo camera is possible, or even the combination of a conventional image sensor and a thermal image sensor. Such a combination is of particular interest for surveillance applications. Fusion of the results from the two different sensors would then be performed, for example, in the unit 38 of FIG. 3.
- FIG. 4 shows a multicamera system according to the invention in a schematic representation in a possible application scenario.
- Today, lifeguards monitor swimming areas by the sea or at a lake in order to save injured or exhausted swimmers from drowning.
- a bathing section is monitored by a multi-camera system with cameras 60a, 60b, 60c, 60d and 60e.
- the cameras 60a, 60b, 60c, 60d and 60e are interconnected by means of a wireless network, not shown.
- the cameras are mounted on a pier 62 and on rescue towers 64, 66.
- By means of a suitable monitoring algorithm, for example implemented in the unit 38 of FIG. 3, it can be monitored whether a critical situation exists, for example whether a swimmer 68 is in trouble.
- Using the multi-camera system, it is possible to display the results of all cameras 60a, 60b, 60c, 60d and 60e on an external device with low computing power, for example a so-called PDA (personal digital assistant).
- a communication between the lifeguards can take place via the same network.
- a surfer 70 whose surfboard has a networkable display unit could be informed about the danger situation.
- The cameras 60a, 60b, 60c, 60d and 60e can also be realigned, programmed, configured and parameterized via the network.
- the cameras 60a, 60b, 60c, 60d, and 60e may also be connected to a non-local network, such as the Internet.
- the camera is part of a modern mobile phone.
- The mobile phone has other sensors, such as inertial and position sensors.
- the mobile phone also has a computing unit in which a localization algorithm is implemented. For example, entering an airport, a three-dimensional map of the airport is transmitted to the mobile phone along with additional symbolic aspects, such as terminal names, restaurants, and the like.
- The state Xt of the overall system designates the position within the building in this embodiment. When walking around with the appropriately equipped mobile phone, image sequences are continuously recorded. The probabilistic tracking method then allows a current position to crystallize from these measurements, which can then be output, for example on the 3D map.
- FIG. 5 shows a further embodiment of a camera 71 according to the invention.
- This panoramic mirror 74 is spaced from the image sensor unit 72 and allows an omnidirectional view for the tracking, that is, it can be tracked in all directions simultaneously.
- the captured image regions are to be warped accordingly using known calibration techniques.
- With the camera according to the invention and the method according to the invention, it is thus now possible to automatically track a person within a camera view by means of tracking methods and thus to output only the position of the person instead of the live video stream.
- With the camera according to the invention, only a very low bandwidth requirement is imposed on a data connection from the camera to the outside, and it is thereby possible without any problems to perform monitoring tasks within a network of cameras.
- any decentralized architecture and a virtually unlimited expandability of the network with cameras is possible.
- the invention it is possible to integrate the information of several inventive so-called smart cameras and then to visualize them in a common model, in particular a three-dimensional world model.
- This makes it possible for the path to be visualized in a 3D model - decoupled from the respective cameras, ie across camera views.
- The angle of view onto the person can be freely selected, for example "flying along" with the person.
- The use of three-dimensional models for visualizing monitoring results according to the invention therefore makes it possible to use less abstract representations than known visualizations.
- The invention makes it possible to provide the monitoring results, visualized in a common coordinate system, at any location of a network and thus to have them available in ubiquitous form. The tracking results are transferred into common coordinate systems and embedded in a three-dimensional world model.
- the reference numeral 80 shows the outline of a building entrance in which a total of six smart cameras 82, 84, 86, 88, 90 and 92 according to the invention are positioned. All the cameras 82 to 92 are connected to a visualization unit 94, which may be designed, for example, as a portable visualization client in the network. In the visualization unit 94, the monitoring results, for example the results of a person tracking, are embedded in a three-dimensional model.
- The connections of the cameras 82 to 92 with the visualization unit 94 are only indicated schematically; any type of network connection in any configuration and topology can be set up, for example as a bus connection or, alternatively, as wireless network connections.
- illustrations of the viewing angle of the individual cameras 82 to 92 in the form of a respective snapshot are also included in FIG. 6.
- FIG. 7 schematically illustrates the steps that are carried out in the visualization according to the invention.
- the smart cameras 82 to 92 each output a probability density function approximated by interpolation points.
- This probability density function can be output in spatial coordinates. In the example shown in Fig. 7, the probability density function is output via two-dimensional coordinates x, y.
- the output probability density function can then be represented, for example, three-dimensionally, with a ground plane representing the coordinate plane x, y and the value of the probability density function being plotted upward from this ground plane.
- This three-dimensional representation is designated by reference numeral 96 in FIG.
- The reference numeral 98 in the illustration of FIG. 7 denotes a plan view of the illustration 96.
- The values of the probability density function can then be represented, for example, in color-coded form.
- A three-dimensional model of the environment or of a building to be monitored is recorded or read in, for example in the form of a CAD file (computer-aided design).
- the smart cameras are or will be installed in a suitable location in the building and added to a network.
- the smart cameras must then be calibrated relative to the three-dimensional model.
- the three-dimensional model is georeferenced and after calibration, the outputs of the smart cameras are georeferenced with it.
- A person runs into the field of view of a smart camera, is automatically detected by the smart camera, recorded as a new target object and tracked with the particle filter method already described.
- The visualization of the tracking then takes place in the three-dimensional model, wherein different display modes can be provided: for example, following along with a single person, from the perspective of individual cameras, or by graphical visualization of the previous path of a person.
- the representation of a person or an object in the three-dimensional model takes place by means of a generic three-dimensional person model.
- the person's current appearance can be mapped as a texture to the three-dimensional person model or represented as a sprite, ie as a graphic object superimposed on the visualization model.
- A user with his network-capable visualization client, for example a PDA/smartphone, can himself move in the field of vision of one or more smart cameras and thereby simultaneously enter into the tracking, in other words be tracked by the smart cameras himself.
- After visualizing the monitoring results on his PDA/smartphone, the user can thus directly see his own position and thereby perform a self-localization.
- a navigation system can be operated for such a user, which, in contrast to GPS (Global Positioning System), also operates with high precision within a building.
- services can be offered, such as route guidance to a specific office, even across floors, or in an airport terminal.
- Visualization on the mobile device also makes it easier for the user to find his way around.
- friends or buddies can be visualized in the three-dimensional model. If the user himself is in the field of view of the smart cameras, this is particularly interesting, because he then sees directly on his mobile device, who is in his vicinity, or where his friends are currently.
- This can be used, for example, in singles contact services, where, if the coincidence of common preferences or the like has been established, the position of the potential partner can be released from the network for the other party so that both can see each other on their mobile terminals, and optionally also be guided to each other by a routing function. This is possible, for example, in a nightclub or a hotel complex, but not limited in range.
- More advanced requests may be implemented, such as "what happened?". An answer could be that a new person has joined or that a person is entering a safety-critical area in the airport. Another request may be "where?".
- Such a request can be answered by specifying a three-dimensional position and systems based thereon can be used, for example, to answer the question of where an abandoned suitcase is located in an airport.
- the output of the respective tracking position is no longer in coordinates of the image plane of the respective camera, but using the calibration in a global coordinate system, for example in a georeferenced global world coordinate system (WCS).
- For example, stereo cameras can be used which spatially capture a certain angle of view and can thereby output the three-dimensional position of a person.
- an average person height can be assumed, and the height in camera pixels can be used to infer the true height of the person using the camera calibration. In this way, an approximate distance to the camera can be calculated. If several cameras overlap with respect to their field of view, a distance measurement to the camera or cameras is possible even without the assumption of an average person height. In this way, the two-dimensional image plane of a smart camera can be extended to a world coordinate system.
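- The height-based distance estimate mentioned above follows from the pinhole-camera relationship, as the following sketch shows; the focal length and the assumed average person height are illustrative values that would normally come from the camera calibration:

```python
def estimate_distance(person_height_px, focal_length_px=800.0,
                      assumed_person_height_m=1.75):
    """Pinhole-camera estimate: distance = f * real_height / image_height.

    focal_length_px and assumed_person_height_m are illustrative assumptions;
    in practice they come from the camera calibration.
    """
    return focal_length_px * assumed_person_height_m / person_height_px

for h_px in (350, 175, 70):
    print(f"person {h_px:3d} px tall -> approx. {estimate_distance(h_px):4.1f} m away")
```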
- an Internet-based world-wide representation can be used, in which georeferenced contents can be embedded.
- An example of this is the "Google Earth" visualization accessible via the Internet, in which, for example, three-dimensional models of buildings can be embedded. Such a world-wide representation can also be used to visualize the tracking results of the decentralized smart camera network; the positions of people in this presentation are indicated by green dots, where the extent of the dots indicates a confidence of how reliably a person has been localized.
- A simplification arises from the fact that, when the camera is permanently mounted, a background model can be acquired, in which the recorded scene is present without moving objects, for example without persons.
- the smart camera builds a background model from this scene in which, for example, a running average is formed over several temporally successive images in order to eliminate the noise.
- the background model may be calculated using thresholds of temporal change.
- The smart camera thus has a background model available, so that segmentation can be realized in operation by difference-formation methods and optionally additionally by known erosion and dilation methods. This segmentation contains precisely the moving objects and can be used for the tracking process as a region of interest (ROI). A person to be tracked can only be located in these segmented areas.
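- A minimal sketch of such a running-average background model with difference-based segmentation and erosion/dilation might look as follows; the update rate, threshold, kernel size and the use of OpenCV are assumptions:

```python
import cv2
import numpy as np

class BackgroundModel:
    """Running-average background with difference-based foreground segmentation."""

    def __init__(self, first_frame_gray, alpha=0.05, threshold=25):
        self.background = first_frame_gray.astype(np.float32)
        self.alpha = alpha            # update rate of the running average (assumed)
        self.threshold = threshold    # difference threshold in gray levels (assumed)

    def segment(self, frame_gray):
        frame = frame_gray.astype(np.float32)
        diff = cv2.absdiff(frame, self.background)
        mask = (diff > self.threshold).astype(np.uint8) * 255
        kernel = np.ones((3, 3), np.uint8)
        mask = cv2.erode(mask, kernel)     # remove isolated noise pixels
        mask = cv2.dilate(mask, kernel)    # close small holes in moving objects
        # update the running average so slow changes migrate into the background
        cv2.accumulateWeighted(frame, self.background, self.alpha)
        return mask                        # usable as region of interest for tracking

# usage with synthetic frames
frame0 = np.zeros((120, 160), np.uint8)
model = BackgroundModel(frame0)
frame1 = frame0.copy(); frame1[40:80, 60:100] = 200   # a "moving object"
print(model.segment(frame1).sum() // 255, "foreground pixels")
```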
- This segmented area, which is potentially non-contiguous, is a superset of the actual tracking region, since several people can be in the picture at the same time. In this way, the computational load required in the smart camera can be reduced, because only those areas determined by the segmentation in which a person to be tracked can be at all are further processed.
- an automatic initialization to movement is also made possible. This can simplify the tracking of multiple objects or multiple people.
- the initialization responds to motion relative to the background model.
- In order to track new objects very quickly, additional support points can preferably be placed at positions in the image where people can enter or leave the field of vision. Incidentally, this is not necessarily the edge of the picture. For example, if the camera is mounted above a corridor, the entrance area could also be in the center of the image.
- Such positions, at which additional support points are provided, can be specified in advance or set up adaptively, for example by being learned through sufficiently long training.
- the visualization takes place in a three-dimensional and preferably georeferenced visualization model.
- the smart cameras continue to work in their respective image plane and a conversion into world coordinates is then carried out taking into account a camera calibration.
- several cameras can be used together to determine the position of a person or an object in the room by means of known stereo methods.
- A so-called decentralized tracking can be performed by running a separate particle filter in each smart camera. If there is a moving object in the field of view of a smart camera, a particle filter runs for this object. If two moving objects move within the field of view of the smart camera, then two particle filters are set up accordingly.
- the integration of the results of the tracking into a uniform three-dimensional model then takes place only at the level of the tracking results.
- the tracking results of all cameras are drawn into the three-dimensional model.
- This is done by transmitting the tracking results in the network, in particular to the visualization unit 94, followed by the visualization there. In the simplest case, passing the tracking results between the smart cameras can be done such that, if two cameras provide very similar coordinates in the three-dimensional model, these two results are unified into one moving object.
- A state X here consists of the position of the person or of the object directly in world coordinates; this state X is held by the visualization unit 94, and each support point over this state X can be understood as a position hypothesis in world coordinates.
- Each smart camera then receives these coordinates from the visualization unit 94 to perform its own measurement. The joint processing of position hypothesis and measurement result is thereby already carried out at the measurement level, correspondingly in the smart camera itself.
- the visualization unit 94 has tasks of a central processing unit.
- the tracking results are in any case decoupled from the respective smart cameras.
- If a person simply leaves the image plane of a first camera and enters the image plane of a second camera, the handover is thus done implicitly, since the calculation is carried out directly in world coordinates.
- the calibration of the cameras in global, in particular georeferenced, coordinates can be carried out using standard methods, but a so-called analysis-by-synthesis approach can also be used.
- the three-dimensional visualization model is used as the calibration object and the camera parameters are iteratively changed until selected points of the image plane of the camera coincide with the corresponding points of the three-dimensional visualization model, ie until the real camera view coincides optimally with the view of the visualization model.
- a smart camera can also be provided with one or more angle sensors in order to obtain information about the respective viewing direction of the camera.
- the position of the camera can also be determined by known surveying techniques relative to the environment, since the environment exists as a 3D model, so that the position relative to this model is known.
- The tracking, that is, the following of a moving object or a person, can be carried out on different time scales.
- The time scale τ indicates the duration until the next evaluation of a current sensor image, specified in units of frames of the sensor.
- a new sensor image basically has an effect on the weight of a support point relative to other support points and, if appropriate, on the adaptation, if adaptive methods are provided.
- Each object to be tracked or each person to be tracked can be tracked on different time scales at the same time.
- the object to be tracked can thus be viewed over the full probability density function over time.
- the time scale can also be covered by interpolation points.
- The basis for the application of different time scales is the assumption that an object to be tracked either behaves largely according to the movement model and changes its appearance at a different speed, or analogously behaves according to the appearance model and deviates from the movement model, but that both do not happen at the same time. Both alternative assumptions are monitored and tracked by the time scales, and the right one then crystallizes out.
- The so-called Markov assumption states that the current state depends only on the immediately preceding state.
- A time scale with τ > 1 is realized in that, in an iteration in which no new sensor image is to be processed, the time-consuming measuring step is omitted. Instead, the object is only predicted according to the motion model and optionally the appearance model. Since, at a given time scale, it is known in advance when a measurement is to take place, the motion model and the optional appearance model can, owing to the deterministic nature of all iterations that contain no measurement, be applied in a single step for efficiency reasons. In the illustration of FIG. 8, all iterations that contain no measurement can be recognized by the fact that no vertical line is drawn at these iterations in the different time scales of FIG. 8.
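- The scheduling of such a time scale can be sketched as follows: on a scale with τ > 1 the measuring step is executed only in every τ-th iteration, while the prediction runs in every iteration; the predict and measure functions are placeholders for the steps described in the text:

```python
def run_timescale(tau, iterations, predict, measure):
    """Run one tracking time scale: measure only every tau-th frame.

    predict and measure are placeholders for the motion-model prediction and
    the sensor-image measurement step described in the text.
    """
    for t in range(iterations):
        predict(t)
        if t % tau == 0:          # only here a new sensor image is evaluated
            measure(t)

# Example: three scales running side by side over the same 8 frames.
for tau in (1, 2, 4):
    steps = []
    run_timescale(tau, 8,
                  predict=lambda t: steps.append(f"p{t}"),
                  measure=lambda t: steps.append(f"M{t}"))
    print(f"tau={tau}: {' '.join(steps)}")
```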
- The computational effort for the above-described extension to multiple time scales is almost twice as high with the scheme described above as without this extension.
- the use of multiple time scales can also be used as a control entity for occlusion of objects to be detected.
- Multiple time scales can also be used with moving cameras, where segmentation is not directly applicable.
- The use of multiple time scales can also help with existing segmentation methods, since these only segment moving objects against the background, but do not distinguish between moving objects or people.
- The appearance is defined as a part of the state, according to Xnew = (X, A), i.e. the new state consists of the previous state X and the appearance A.
- The already described particle filter method does not need to be changed.
- Analogous to the movement model, there is an appearance model that predicts the new appearance from the old one.
- A low-dimensional parameterization can be achieved by an analytic appearance model that uses an analytical model of the whole distribution instead of sampling the appearances directly with its own support points. There are two options for this:
- a spline is generated in image coordinates, which is superimposed over the sensor image.
- the difference of this contour estimate to the current sensor image is calculated. For example, as shown in FIG. 9, in particular at regular intervals along the contour, points are considered at which the distance to the next edge in the sensor image is calculated perpendicular to the contour.
- These vertical lines drawn along the contour in FIG. 9 have a definable maximum length up to which an edge is sought. If no edge has been found up to this maximum length, this maximum length is assumed, thus limiting the difference upwards and limiting the search range.
- The sum or the squared sum of these differences is inserted into the Gaussian function described above and thus leads to a one-dimensional difference value for this support point.
- The region of interest then need only consist of the superposition of these perpendicular lines, and only this superposition of the lines has to be transmitted by the smart camera or the sensor. For all support points together, therefore, only the overlay of all these perpendicular lines has to be requested from the smart camera.
- The illustration of FIG. 9 shows in the upper left image the contour resulting from a support point X and the points spaced along this contour. In FIG. 9, top right, the addressed perpendicular lines are drawn at all points. In FIG. 9, bottom left, the contour can be seen together with the perpendicular lines, and in FIG. 9, bottom right, only the perpendicular lines are shown, which are ultimately to be requested as ROI from the sensor.
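- The perpendicular edge-distance measurement along a contour can be sketched as follows; the maximum search length, the Gaussian width and the synthetic edge image are assumptions used only to make the example runnable:

```python
import numpy as np

def contour_weight(contour_pts, normals, edge_map, max_len=10, sigma=5.0):
    """Weight one contour hypothesis against an edge image.

    contour_pts: (N, 2) integer pixel positions along the contour.
    normals:     (N, 2) unit normals of the contour at those points.
    edge_map:    2-D boolean array, True where an edge was detected.
    """
    h, w = edge_map.shape
    dists = []
    for (x, y), (nx, ny) in zip(contour_pts, normals):
        d = max_len                                   # default if no edge is found
        for step in range(1, max_len + 1):            # search along +/- the normal
            for s in (step, -step):
                px, py = int(round(x + s * nx)), int(round(y + s * ny))
                if 0 <= px < w and 0 <= py < h and edge_map[py, px]:
                    d = min(d, step)
        dists.append(d)
    sq_sum = float(np.sum(np.square(dists)))
    return np.exp(-sq_sum / (2.0 * sigma ** 2))       # Gaussian -> scalar weight

edges = np.zeros((50, 50), bool); edges[:, 25] = True        # a vertical edge
pts = np.array([[23, y] for y in range(10, 40, 5)])           # contour near the edge
nrm = np.tile([1.0, 0.0], (len(pts), 1))                      # normals point in +x
print(contour_weight(pts, nrm, edges))
```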
- Contour-based methods can also be combined with the histogram-based methods.
- a support point X then consists of the concatenation of both state variables.
- the state X can also include its speed in terms of direction and magnitude, and possibly also the angular orientation of the object.
- the state then contains the coding of the contour, as described, for example, the control points of a spline.
- the visualization of the monitoring result by visualization of the probability density function of a person over time t is shown by way of example.
- Such visualization is generated by volume rendering methods and traces the trajectory of a tracked person, with different gray or color codes representing the probabilities of residence along the path.
- An application of the invention is, for example, the detection of abandoned suitcases, for example in railway stations or airports. Fixed cameras are used here and, as already described, multiple time scales. Objects that have been added on a certain time scale are to be recognized. As with a bandpass filter, this filters out objects that change too quickly, such as people walking around or image noise. Likewise, excessively low frequencies are filtered out, i.e. the background or sufficiently slow changes of the background.
- The detection of abandoned suitcases in an airport can be combined in a particularly advantageous manner with the monitoring of persons, since it is of particular interest to track the person who has deposited the suitcase, both before depositing it and afterwards.
- The system can track all recognizable persons in the field of view of the cameras. It should be noted that these persons do not necessarily have to be displayed to the user. For example, if one of the tracked persons sets down a suitcase, the system can promptly present this to the user by showing the suitcase and the path of the associated person who has potentially deposited that suitcase. Both the path before depositing and the path afterwards are then shown, since all persons in the field of view were tracked as a precaution. The user can thereby be shown only the important information, without being flooded with information of no interest for the application.
- The user can thus immediately clarify the "what?" question, namely an abandoned suitcase, and clearly follow the "where?" question in the three-dimensional visualization model.
- The security staff at the airport can view this visualization, embedded in a three-dimensional model, on a mobile visualization client and, since they are also tracked by the system and thus localized, a route planning to the target person or suitcase can be calculated. This route planning is continuously updated, since the movement of the tracked target person thus flows in in real time. Further aspects and features of the invention become apparent from the following scientific paper, which also describes realized examples.
- This article presents a network-ready intelligent camera ("smart camera") for probabilistic tracking of objects. It is capable of tracking objects in real time and demonstrates an approach that is very sparing with transmission bandwidth, since the camera only has to transmit the tracking results, which are at a higher level of abstraction.
- Object tracking plays a central role in many applications, in particular in robotics (for example RoboCup robot football), surveillance technology (person tracking), human-machine interfaces, motion capture, augmented reality and 3D television.
- Particle filters have become established as an important class of methods for object tracking [1, 2, 3].
- the visual modalities used include shape [3], color [4, 5, 6, 7], or a combination of modalities [8, 9].
- the Particle Filtering procedure is described in Section 2.
- an approach based on color histograms is used, which has been specially adapted to the requirements of an embedded implementation inside the camera.
- the architecture of the smart camera is described in Section 3, followed by a discussion of various advantages of the proposed approach. Experimental results are presented in Section 4, followed by a summary.
- Particle filters can handle multiple simultaneous hypotheses and nonlinear systems. Following the notation of Isard and Blake [3], $Z_t$ denotes all measurements $\{z_1, \ldots, z_t\}$ up to time t, and $X_t$ denotes the state vector at time t of dimension k (position, velocity etc. of the target object). Particle filters use Bayes' theorem to compute the a posteriori probability density function (pdf) at each time step from all available information.
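- The equations referred to below as (1) to (3) are not reproduced in this text extract; the following block gives the standard Bayesian tracking recursion in this notation as a hedged reconstruction (the weight symbol $\pi_t^{(j)}$ and the support-point symbol $s_t^{(j)}$ are assumptions, not the patent's exact typography):

```latex
% Standard particle-filter recursion (reconstruction, not the original typeset equations).
\begin{align}
  p(X_t \mid Z_t) &\propto p(z_t \mid X_t)\, p(X_t \mid Z_{t-1}) \tag{1}\\
  p(X_t \mid Z_{t-1}) &= \int p(X_t \mid X_{t-1})\, p(X_{t-1} \mid Z_{t-1})\, \mathrm{d}X_{t-1} \tag{2}\\
  \pi_t^{(j)} &\propto p\!\left(z_t \mid X_t = s_t^{(j)}\right) \tag{3}
\end{align}
```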
- the new state $X_t$ is weighted as a function of the new measurement $z_t$ (i.e. depending on the new camera sensor image).
- the measurement step (3) complements the prediction step (2); together they implement Bayes' theorem (1).
- Each support point $s_j$ induces a region of interest (ROI) $P_j$ around its local position in image space.
- the size of the image region $(H_x, H_y)$ is user-defined.
- a weighting is applied depending on the local distance to the center of the image region. The following weighting function is used here:
- $\mathrm{Histo}_x(b) = f \sum_{w \in W} k(w)\,\delta[I(w) - b]$
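- A minimal sketch of such a kernel-weighted histogram; the reading of $W$ as the ROI around the support point, $k$ as an Epanechnikov-like kernel, $I(w)$ as the bin index of pixel $w$ and $f$ as a normalization factor are assumptions drawn from the context, not statements of the patent:

```python
import numpy as np

def weighted_histogram(bin_idx, n_bins):
    """Histo_x(b): every pixel w of the ROI contributes its kernel weight k(w)
    to the bin I(w) it falls into; f normalizes the result to sum 1.
    bin_idx holds the precomputed, non-negative bin index I(w) per ROI pixel."""
    h, w = bin_idx.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Epanechnikov-like kernel: weight decreases with distance to the ROI centre.
    r2 = ((ys - cy) / max(h / 2.0, 1.0)) ** 2 + ((xs - cx) / max(w / 2.0, 1.0)) ** 2
    k = np.clip(1.0 - r2, 0.0, None)
    hist = np.bincount(bin_idx.ravel(), weights=k.ravel(), minlength=n_bins)
    return hist / max(hist.sum(), 1e-12)
```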
- the histogram of the target object is compared with the histogram of each support point: for this purpose the Bhattacharyya similarity [4] is used here, separately in the HS histogram and in the V histogram.
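- For illustration, the Bhattacharyya coefficient and a possible HS/V mixing are sketched below; the mixing factor is an illustrative assumption, not the value used in the camera:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def support_point_similarity(target_hs, target_v, roi_hs, roi_v, mix=0.7):
    """Compare HS and V histograms separately and mix the two similarities."""
    return mix * bhattacharyya(target_hs, roi_hs) + (1.0 - mix) * bhattacharyya(target_v, roi_v)
```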
- a mVBlueLYNX 420CX camera from Matrix Vision [10] is used as a base, as shown in Fig. 2.
- the camera contains a sensor, an FPGA, a processor and an Ethernet network interface. More specifically, it incorporates a progressive-scan color CCD with a Bayer color mosaic.
- a Xilinx Spartan-II FPGA is used for low-level processing. The camera also includes a 200 MHz Motorola PowerPC processor with MMU and FPU running embedded Linux, connected to 32 MB SDRAM and 36 MB flash memory.
- the camera also includes a 100 Mbps Ethernet interface, used on the one hand for updates in the field ("field upgradability") and on the other hand for transmitting the object-tracking results to the outside.
- the camera is not only intended as a prototype under laboratory conditions; it was also developed to cope with harsh industrial environments.
3.2 Camera Tracking Architecture
- Fig. 3 shows the architecture of the smart camera.
- the output of the smart camera is transmitted via Ethernet using sockets. On the PC side, this data can then be visualized in real time and stored on data carriers for later evaluation.
- the implementation allows the particle filter to be parameterized over a wide range. This includes the number of support points N, the size of the image region (ROI) $(H_x, H_y)$, the number of bins in the histogram (in H, S, V), the mixing factor between the hue-saturation part ($p_{HS}$) and the value part ($p_V$), the variance vector for diffusion in the motion model, the variance for the Bhattacharyya weighting, and the combination of the motion models.
- the camera is initialized with a cube object. For this it is trained by presenting the object in front of the camera; it stores the associated color distribution as a reference of the target object.
- the tracking performance was very satisfactory: the camera can track the target object robustly over time at a frame rate of 15 fps and a sensor resolution of 640x480 pixels.
- the method works directly on the raw pixels, which are still color-filtered by the Bayer mosaic: instead of first performing an expensive Bayer demosaicing color conversion and then ultimately only computing a histogram over the result, which contains no local information anyway, each Bayer neighborhood of four pixels is interpreted as one RGB pixel.
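- A minimal sketch of this shortcut, assuming an RGGB Bayer layout (the actual sensor layout may differ); each 2x2 block becomes one RGB pixel and the two green samples are averaged:

```python
import numpy as np

def bayer_blocks_to_rgb(raw):
    """Interpret each 2x2 Bayer neighbourhood as one RGB pixel instead of
    demosaicing; the result is a half-resolution RGB image that can feed the
    color histograms directly."""
    raw = raw[: raw.shape[0] // 2 * 2, : raw.shape[1] // 2 * 2].astype(np.float32)
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```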
- FIG. 5 illustrates the circular motion within the cube sequence in detail.
- screenshots of the current positions of the support points together with their weights are shown at different times. It should be noted that the static mounting of the camera has not been exploited here, so the performance presented is achieved without any background segmentation as preprocessing.
- This article featured a smart camera for real-time object tracking.
- Based on particle filters over HSV color distributions, it provides robust tracking performance because it can handle multiple hypotheses simultaneously. Nevertheless, its bandwidth requirement is very low, since only the tracking results, which are at a higher level of abstraction, have to be transmitted.
- Future work includes automatically adapting the appearance model of the target object at runtime, in order to further increase the robustness of object tracking under lighting changes, as well as building a multi-camera system in order to demonstrate the communication between cameras at this higher level of abstraction (for example, as the basis for tracking people in a surveillance application).
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Closed-Circuit Television Systems (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
- Automatic Focus Adjustment (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102005009626A DE102005009626A1 (de) | 2005-02-24 | 2005-02-24 | Kamera zum Verfolgen von Objekten |
PCT/EP2006/001727 WO2006089776A1 (fr) | 2005-02-24 | 2006-02-24 | Camera servant a poursuivre des objets |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1854083A1 true EP1854083A1 (fr) | 2007-11-14 |
EP1854083B1 EP1854083B1 (fr) | 2011-01-26 |
Family
ID=36589246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06707263A Not-in-force EP1854083B1 (fr) | 2005-02-24 | 2006-02-24 | Camera servant a poursuivre des objets |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1854083B1 (fr) |
AT (1) | ATE497230T1 (fr) |
DE (2) | DE102005009626A1 (fr) |
WO (1) | WO2006089776A1 (fr) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006060612B4 (de) * | 2006-12-21 | 2008-08-28 | Eads Deutschland Gmbh | Verfahren zur Überwachung von Zielobjekten und Multispektralkamera dazu |
DE102007033391A1 (de) * | 2007-07-18 | 2009-01-22 | Robert Bosch Gmbh | Informationsvorrichtung, Verfahren zur Information und/oder Navigation von einer Person sowie Computerprogramm |
US8428310B2 (en) * | 2008-02-28 | 2013-04-23 | Adt Services Gmbh | Pattern classification system and method for collective learning |
DE102008038527A1 (de) | 2008-08-20 | 2010-02-25 | Eads Deutschland Gmbh | Verfahren zur Auswertung von Bildern mit einer Multispektralkamera oder einem SAR-Radar sowie Verfahren zur Fusionierung von Bildern einer Stereo-Multispektralkamera und eines SAR Gerätes |
DE102009009533B4 (de) * | 2009-02-18 | 2016-09-15 | Leuze Electronic Gmbh & Co. Kg | Bildverarbeitender Sensor |
DE102010032496A1 (de) * | 2010-07-28 | 2012-02-02 | Ids Imaging Development Systems Gmbh | Überwachungskamera mit einem Positionssensor |
DE102010046220A1 (de) * | 2010-09-21 | 2012-03-22 | Hella Kgaa Hueck & Co. | Verfahren zum Konfigurieren eines Überwachungssystems und konfigurierbares Überwachungssystem |
DE102011010334B4 (de) | 2011-02-04 | 2014-08-28 | Eads Deutschland Gmbh | Kamerasystem und Verfahren zur Beobachtung von Objekten in großer Entfernung, insbesondere zur Überwachung von Zielobjekten bei Nacht, Dunst, Staub oder Regen |
DE102011106810B4 (de) * | 2011-07-07 | 2016-08-11 | Testo Ag | Wärmebildkamera und Verfahren zur Bildanalyse und/oder Bildbearbeitung eines IR-Bildes mit einer Wärmebildkamera |
DE102011082052B4 (de) * | 2011-09-02 | 2015-05-28 | Deere & Company | Anordnung und Verfahren zur selbsttätigen Überladung von Erntegut von einer Erntemaschine auf ein Transportfahrzeug |
DE102012002321B4 (de) | 2012-02-06 | 2022-04-28 | Airbus Defence and Space GmbH | Verfahren zur Erkennung eines vorgegebenen Musters in einem Bilddatensatz |
EP3136367B1 (fr) | 2015-08-31 | 2022-12-07 | Continental Autonomous Mobility Germany GmbH | Dispositif de camera de vehicule et procede de detection d'une zone environnante arriere d'un vehicule automobile |
DE102016224573A1 (de) | 2016-12-09 | 2018-06-14 | Conti Temic Microelectronic Gmbh | Radarsystem mit dynamischer Objekterfassung in einem Fahrzeug. |
CN107135377A (zh) * | 2017-05-27 | 2017-09-05 | 深圳市景阳科技股份有限公司 | 监控自动跟踪方法及装置 |
WO2020243436A1 (fr) * | 2019-05-30 | 2020-12-03 | Infinity Collar Llc | Système pour fournir une limite virtuelle portable dynamique |
US11022972B2 (en) * | 2019-07-31 | 2021-06-01 | Bell Textron Inc. | Navigation system with camera assist |
CN112788227B (zh) * | 2019-11-07 | 2022-06-14 | 富泰华工业(深圳)有限公司 | 目标追踪拍摄方法、装置、计算机装置及存储介质 |
DE102019135211A1 (de) * | 2019-12-19 | 2021-06-24 | Sensific GmbH | Verfahren und Vorrichtung zur Nachverfolgung von Objekten |
DE102020109763A1 (de) | 2020-04-08 | 2021-10-14 | Valeo Schalter Und Sensoren Gmbh | Computerbasiertes System und Verfahren zur Objektverfolgung |
US11610080B2 (en) | 2020-04-21 | 2023-03-21 | Toyota Research Institute, Inc. | Object detection improvement based on autonomously selected training samples |
US11620966B2 (en) * | 2020-08-26 | 2023-04-04 | Htc Corporation | Multimedia system, driving method thereof, and non-transitory computer-readable storage medium |
CN113222464A (zh) * | 2021-05-31 | 2021-08-06 | 华诺智能(深圳)有限公司 | 一种车间人员行为分析管控系统与管控方法 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0878965A3 (fr) * | 1997-05-14 | 2000-01-12 | Hitachi Denshi Kabushiki Kaisha | Méthode pour la poursuite d'un objet entrant et appareil pour la poursuite et la surveillance d'un tel objet |
US6091771A (en) * | 1997-08-01 | 2000-07-18 | Wells Fargo Alarm Services, Inc. | Workstation for video security system |
-
2005
- 2005-02-24 DE DE102005009626A patent/DE102005009626A1/de not_active Withdrawn
-
2006
- 2006-02-24 DE DE502006008806T patent/DE502006008806D1/de active Active
- 2006-02-24 AT AT06707263T patent/ATE497230T1/de active
- 2006-02-24 WO PCT/EP2006/001727 patent/WO2006089776A1/fr active Application Filing
- 2006-02-24 EP EP06707263A patent/EP1854083B1/fr not_active Not-in-force
Non-Patent Citations (1)
Title |
---|
See references of WO2006089776A1 * |
Also Published As
Publication number | Publication date |
---|---|
EP1854083B1 (fr) | 2011-01-26 |
DE102005009626A1 (de) | 2006-08-31 |
DE502006008806D1 (de) | 2011-03-10 |
WO2006089776A1 (fr) | 2006-08-31 |
ATE497230T1 (de) | 2011-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1854083B1 (fr) | Camera servant a poursuivre des objets | |
US11733370B2 (en) | Building radar-camera surveillance system | |
US20220343138A1 (en) | Analysis of objects of interest in sensor data using deep neural networks | |
Liu et al. | Intelligent video systems and analytics: A survey | |
US9396399B1 (en) | Unusual event detection in wide-angle video (based on moving object trajectories) | |
US8599266B2 (en) | Digital processing of video images | |
EP1589484B1 (fr) | Procédé pour la détection et/ou le suivi d'objets | |
DE69635980T2 (de) | Verfahren und vorrichtung zur detektierung von objektbewegung in einer bilderfolge | |
WO2006133474A1 (fr) | Procede et unite d'evaluation d'images pour analyse de scene | |
DE102014105351A1 (de) | Detektion von menschen aus mehreren ansichten unter verwendung einer teilumfassenden suche | |
DE102005008131A1 (de) | Objektdetektion auf Bildpunktebene in digitalen Bildsequenzen | |
WO1992002894A1 (fr) | Procede d'analyse de sequences chronologiques d'images numeriques | |
DE112020001255T5 (de) | Tiefes neurales netzwerk mit niedrigem leistungsverbrauch zur gleichzeitigen objekterkennung und semantischen segmentation in bildern auf einem mobilen rechengerät | |
Lim et al. | Scalable image-based multi-camera visual surveillance system | |
Lalonde et al. | A system to automatically track humans and vehicles with a PTZ camera | |
CN105809108A (zh) | 基于分布式视觉的行人定位方法和系统 | |
DE102005055879A1 (de) | Flugverkehr-Leiteinrichtung | |
EP2064684A1 (fr) | Procede pour faire fonctionner au moins une camera | |
Rabie et al. | Mobile vision-based vehicle tracking and traffic control | |
WO2023186350A1 (fr) | Aéronef sans pilote pour détection de zone optique | |
DE102022131567A1 (de) | Verfahren und System zur Ermittlung eines Grundrisses | |
Haritaoglu et al. | Active outdoor surveillance | |
DE112021004501T5 (de) | Modellierung der fahrzeugumgebung mit einer kamera | |
Cutler et al. | Monitoring human and vehicle activities using airborne video | |
DE19827835B4 (de) | Bildübertragungsverfahren und -vorrichtung |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20070918 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
17Q | First examination report despatched |
Effective date: 20071219 |
|
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REF | Corresponds to: |
Ref document number: 502006008806 Country of ref document: DE Date of ref document: 20110310 Kind code of ref document: P |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 502006008806 Country of ref document: DE Effective date: 20110310 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: VDEP Effective date: 20110126 |
|
LTIE | Lt: invalidation of european patent or patent extension |
Effective date: 20110126 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110507 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110526 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110526 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110427 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
BERE | Be: lapsed |
Owner name: UNIVERSITAT TUBINGEN Effective date: 20110228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110426 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110228 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110228 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110228 Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110228 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20110426 |
|
26N | No opposition filed |
Effective date: 20111027 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 502006008806 Country of ref document: DE Effective date: 20111027 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110426 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20120210 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110328 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R082 Ref document number: 502006008806 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R081 Ref document number: 502006008806 Country of ref document: DE Owner name: FLECK, SVEN, DR., DE Free format text: FORMER OWNER: UNIVERSITAET TUEBINGEN, 72074 TUEBINGEN, DE Effective date: 20120725 Ref country code: DE Ref legal event code: R081 Ref document number: 502006008806 Country of ref document: DE Owner name: FLECK, SVEN, DE Free format text: FORMER OWNER: UNIVERSITAET TUEBINGEN, 72074 TUEBINGEN, DE Effective date: 20120725 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MM01 Ref document number: 497230 Country of ref document: AT Kind code of ref document: T Effective date: 20110224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20110224 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20110126 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20140709 Year of fee payment: 9 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 502006008806 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20151007 |