EP2786564A1 - Method and device for monitoring a surveillance area - Google Patents

Method and device for monitoring a surveillance area

Info

Publication number
EP2786564A1
Authority
EP
European Patent Office
Prior art keywords
monitored
objects
image sensors
image sensor
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12794848.7A
Other languages
German (de)
English (en)
Inventor
Markus HERRLI ANDEREGG
Jonas HAGEN
David Studer
Martin Wüthrich
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xovis AG
Original Assignee
Xovis AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xovis AG filed Critical Xovis AG
Priority to EP12794848.7A priority Critical patent/EP2786564A1/fr
Publication of EP2786564A1 publication Critical patent/EP2786564A1/fr
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19641Multiple cameras having overlapping views on a single scene
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19639Details of the system layout
    • G08B13/19645Multiple cameras, each having view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position

Definitions

  • the invention relates to a method and a device for monitoring a surveillance area with at least two image sensors.
  • a subarea of the surveillance area is monitored by each of the image sensors in that each image sensor detects and localizes the objects to be monitored within the subarea monitored by it and outputs data about the detected objects.
  • the image sensors are arranged and aligned such that the monitored subareas overlap and that each object to be monitored, which is located in the surveillance area, is always detected by at least one image sensor. From the data of the image sensors, a totality of the objects to be monitored in the monitored area is determined.
  • State of the art
  • the pictures taken by the cameras are rectified for the combination into the superordinate overall picture.
  • Foreground objects, such as moving people, are modeled as pixel collections and tracked in their movement.
  • An important feature used to identify individuals is a color signature.
  • the disadvantage of this approach is that in a first step, the images of the cameras must be put together. If the cameras are video cameras, the image sequences of the different cameras must be synchronized in order to be able to assemble the corresponding sequences of the superordinate images. This leads to an increase in the cost of installing the cameras.
  • Other types of modeling can also be used. Such an example is described in US 2008/118106 A1 to Kilambi et al.
  • groups of people are modeled as elliptical cylinders. These cylinders are used for subsequent calculations, in particular for determining the number of persons in the group.
  • the object of the invention is to provide a method of the technical field mentioned above and a corresponding device which allow an improved determination of the entirety of the monitored objects in the surveillance area from the data of the image sensors.
  • the solution of the problem is defined by the features of claim 1.
  • the objects to be monitored in overlapping partial areas, which are detected by more than one image sensor are assigned to one another by means of an evaluation of their agreement in order to determine the entirety of the objects to be monitored in the monitored area.
  • the objects to be monitored in overlapping partial areas, which are detected by more than one image sensor can be assigned to each other by means of a calculation unit based on the data of the image sensors in order to determine the entirety of the objects to be monitored in the monitored area.
  • the calculation unit may be formed as a separate unit or may be integrated into one of the at least two image sensors.
  • the image sensors used for this can be any image sensor type. For example, they may be cameras that only occasionally capture an image. In this case, the detection of an image can each be triggered by a motion sensor. But there is also the possibility that the image sensors capture images at regular intervals. There can be temporal distances of any length between the individual pictures. These distances can also be arbitrarily short. In the latter case, the image sensors may be, for example, cameras which record film sequences.
  • the image sensors can be cameras that record optical images in the visible range. It can also be any other camera type.
  • the image sensors may also be infrared cameras, ultraviolet cameras, or other cameras available per se that capture images of electromagnetic radiation of any other wavelength or range of wavelengths.
  • the image sensors may, for example, also be cameras which record sound or ultrasound images. But there is also the possibility that the image sensors are laser sensors.
  • regardless of the type of image sensors, there is a possibility that some or all of the image sensors include two sensors. With such an image sensor, detection of the three-dimensional space may be possible.
  • such an image sensor can be an image sensor with two cameras arranged next to one another.
  • the invention can be realized expressly without any detection of stereo images.
  • the image sensors are sensors which enable detection of the three-dimensional space.
  • they can be 3D laser sensors which scan the room and can also detect a distance of the objects to be monitored from the respective 3D laser sensor.
  • the objects to be monitored may be persons. In this case, for example, a distinction can be made between adult persons and children. Furthermore, the objects to be monitored may, for example, also be animals. It is also possible to differentiate between different animal sizes. However, there is also the possibility that the objects to be monitored are vehicles or other moving objects. It can be distinguished, for example, between trucks, cars and motorcycles.
  • different image sensors can be used, which are optimized for the corresponding application. In this case, for example, the light conditions can be considered for a choice of image sensors.
  • the entirety of the objects to be monitored determined from the data of the image sensors relates to the objects to be monitored, which are located in the entire surveillance area.
  • this entirety of the objects to be monitored may comprise only a number of the total detected objects or an identification for each object to be monitored.
  • there is also the possibility that the entirety of the objects to be monitored includes further data for each of the objects.
  • the further data may include the positions of the objects.
  • the further data capture a course of movement of the objects. If the totality of the objects to be monitored, together with the identification, comprises further data, then these further data can be listed, for example, as a list. The individual objects can be identified based on their position in the list, which eliminates the need for separate identification of the objects.
  • the evaluation of the match used for the assignment of the objects to be monitored in overlapping subareas can take place in different ways. For example, it can be a positive rating, in which a match of objects that is more probable is given a larger evaluation value, while a match that is less probable is given a smaller evaluation value. There is also the possibility that the individual evaluation values are normalized to probabilities. Alternatively, the match evaluation may be a negative rating (cost rating), in which a match that is more probable is given a smaller evaluation value, while a match that is less probable is given a larger evaluation value. As a further variant, there is also the possibility that a certain evaluation value represents the highest probability of a match, with evaluation values lying above and below it representing a smaller probability of a match. Values lying above or below this value can then, for example, also convey information about the evaluation type or the evaluation criteria used.
  • the solution of the invention has the advantage that the reliability of the assignment of the objects is increased by the evaluation of the agreement. Accordingly, an improved monitoring of the surveillance area is ensured.
  • the method and the device are suitable for person tracking and for counting people and function across sensors or across cameras.
  • each object to be monitored which is located in the surveillance area is always completely detected by at least one image sensor.
  • This has the advantage that the reliability of the assignment is increased because the assignment of the objects detected by different image sensors takes place only for completely detected objects.
  • the objects to be monitored are not always completely covered by at least one image sensor. This can be advantageous in order to also detect edge areas of the monitoring area and to reduce a number of the required image sensors.
  • a matrix is preferably created for the assignment of the objects to be monitored in overlapping subareas, whose elements p_ij contain the evaluation of how well an object with identification i detected by one image sensor matches an object with identification j detected by another image sensor.
  • the indices i and j run over all objects to be monitored, which are located in the subarea that is monitored by the corresponding image sensor.
  • the indices i and j run only over those objects which are located in an overlapping region of the two subregions and are detected by the corresponding image sensor.
  • the created matrix can also be more than two-dimensional.
  • the matrix may comprise the same number of dimensions as the number of overlapping subareas. Accordingly, the number of indices of the matrix elements must be adjusted. The indices can, in turn, run over all the objects to be monitored in the corresponding subarea or only over those objects that lie within an overlap area of the subareas.
  • preferably, for each possible combination of assignments, a sum of the evaluations of the objects assigned to one another in this combination is formed.
  • the assignment of the objects to be monitored preferably takes place by a choice of the combination, which results in an extreme of this sum.
  • the sum of the reviews will be maximized. This is advantageous when a large score value represents a high probability of a match, while a small score value represents a small probability of a match.
  • the sum can also be minimized. This is advantageous when a small score value represents a high probability of a match, while a large score value represents a small probability of a match.
  • Both maximizing the sum and minimizing the sum have the advantage that an optimal allocation of the objects to be monitored can be achieved in a simple manner.
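A minimal sketch of the assignment step described in the bullets above, assuming the pairwise evaluations are similarity scores already computed in a common coordinate system. The scoring function `match_score` is a placeholder invented for this example, and SciPy's `linear_sum_assignment` is used as an off-the-shelf implementation of the Kuhn-Munkres (Hungarian) method mentioned later in this document; the patent does not prescribe this particular code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_score(obj_a, obj_b):
    """Placeholder pairwise evaluation: the closer the two reported positions,
    the larger the score (i.e. the more probable the match). Any of the criteria
    discussed in this document (ray distance, height plausibility, model
    similarity, movement) could be combined here instead."""
    pos_a = np.array([obj_a["x"], obj_a["y"]])
    pos_b = np.array([obj_b["x"], obj_b["y"]])
    return float(np.exp(-np.linalg.norm(pos_a - pos_b)))

def assign(objects_sensor1, objects_sensor2):
    """Build the evaluation matrix p[i, j] and choose the combination of
    assignments that maximizes the sum of the selected evaluations."""
    p = np.array([[match_score(a, b) for b in objects_sensor2]
                  for a in objects_sensor1])
    rows, cols = linear_sum_assignment(p, maximize=True)  # Hungarian method
    return list(zip(rows.tolist(), cols.tolist())), p

# Hypothetical detections reported by two sensors in the surveillance-area coordinates.
s1 = [{"id": 0, "x": 1.0, "y": 2.0}, {"id": 1, "x": 4.0, "y": 0.5}]
s2 = [{"id": 7, "x": 4.1, "y": 0.4}, {"id": 8, "x": 1.1, "y": 2.1}]
pairs, p = assign(s1, s2)
print(pairs)  # e.g. [(0, 1), (1, 0)]: s1[0] is assigned to s2[1], s1[1] to s2[0]
```

If the evaluations are costs rather than similarities (the negative-rating variant described above), the same call with `maximize=False` minimizes the sum instead.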
  • if a certain evaluation value represents a maximum probability of a match, while evaluation values above and below it represent a smaller probability of a match, differences between the evaluation values and this certain value can be formed, and the assignment of the objects to be monitored can be achieved by selecting the combination which yields an extreme of the sum of these differences.
  • instead of a simple sum, for example, a sum of function values of the evaluation values or of the differences can be formed.
  • the evaluation values or the differences can be squared before the summation, or the square root of the evaluation values or differences can be taken in each case.
  • there is also the possibility that the summands are calculated from the evaluation values or differences by means of any other formula or function.
  • the use of such a formula or function may be advantageous since a more stable and reliable algorithm for the assignment of the objects to be monitored can be achieved.
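As a small, hedged illustration of the preceding bullets, such elementwise transformations of the evaluation values could look as follows; the example matrix and the reference value `best` are invented for the sketch.

```python
import numpy as np

p = np.array([[0.9, 0.1],
              [0.2, 0.7]])       # hypothetical evaluation values
best = 1.0                       # value taken to represent the best possible match

squared_diffs = (best - p) ** 2  # square the differences before summation
rooted_scores = np.sqrt(p)       # or take the square root of the evaluation values
# Any other formula or function can be applied elementwise in the same way
# before the assignment algorithm sums the selected entries.
```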
  • the assignment of the objects to be monitored can also be carried out differently than by the formation of a sum of the evaluation values.
  • the objects to be monitored can also be detected by only one image sensor.
  • the fact that an object to be monitored is detected by only one image sensor can be due, for example, to the object being located at a position monitored by only one image sensor. However, it may also be due to the fact that, while the object is located in an overlap region of two or more subareas, it is nevertheless detected by only one image sensor. In both cases, taking into account that the objects to be monitored can also be detected by only one image sensor has the advantage that an improved assignment of the objects to be monitored is achieved.
  • if a matrix is created for the assignment of the objects to be monitored in overlapping subareas, such a co-consideration can be implemented, for example, by adding to the matrix a column or a row whose elements contain the evaluation that an object matches no object to be monitored in the subarea represented by the columns or rows.
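A sketch of this padding, under the assumptions (made only for this example) that the evaluations are similarity scores and that a constant `no_match` value stands in for the "matches no object in the other subarea" evaluation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_with_no_match(p, no_match=0.05):
    """Pad the m x n evaluation matrix with one extra row and one extra column
    whose elements hold the 'no counterpart' evaluation, then solve the
    assignment; objects assigned to the padded row/column remain unmatched.
    A single dummy row/column lets at most one object per sensor go unmatched
    in one solve (as in the 2 x 3 example later in this document); further
    dummy rows/columns can be appended in the same way if needed."""
    m, n = p.shape
    padded = np.full((m + 1, n + 1), no_match)
    padded[:m, :n] = p
    rows, cols = linear_sum_assignment(padded, maximize=True)
    matches = [(i, j) for i, j in zip(rows, cols) if i < m and j < n]
    unmatched_1 = [i for i, j in zip(rows, cols) if i < m and j == n]
    unmatched_2 = [j for i, j in zip(rows, cols) if i == m and j < n]
    return matches, unmatched_1, unmatched_2
```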
  • alternatively, there is also the possibility that each object is detected by at least two image sensors.
  • This variant can be advantageous if an assignment takes place only for objects to be monitored, which are located in an overlapping area of two or more subareas. This allows the assignment to be made faster.
  • unassigned objects are included as separate objects in the entirety of the objects to be monitored.
  • This has the advantage that even non-assignable or only with low probability attributable objects are included in the entirety of the objects to be monitored. This allows a more complete and better determination of the entirety of the objects to be monitored. Alternatively, however, there is also the possibility that unassigned objects are not included in the entirety of the objects to be monitored.
  • the data output by the image sensors is output anonymously.
  • the data can be further processed to determine the entirety of the objects to be monitored, whereby data protection regulations can be adhered to without special precautions having to be taken to specifically protect the further data processing.
  • the anonymization consists, in particular, in the fact that the image sensors do not output any data which could directly or indirectly permit a conclusion on the identity of monitored persons.
  • in particular, not only are the captured image data not output, but also no information about recorded colors and/or body dimensions of the monitored persons.
  • each detected object is represented by a parameterized model.
  • a parameterized model can, for example, consist solely of a position of the detected object or of a ray which, starting from the corresponding image sensor, passes through the respective detected object.
  • anonymized data about the objects to be monitored can be output by parameterizing the models.
  • there is also the possibility that a parameterized model is more complex. For example, a size, a shape or an orientation of the objects to be monitored can be characterized by the parameters. This has the advantage that, despite the output of such parameters, the data on the detected objects can be anonymized.
  • the detected objects are not represented by a parameterized model.
  • if each detected object is represented by a parameterized model, a value calculated from a minimum distance between a ray from one image sensor through the center of gravity of the parameterized model detected by it and a ray from another image sensor through the center of gravity of the parameterized model detected by it is preferably taken into account for the evaluation of the match.
  • the center of gravity of the parameterized model is a reference point of the respective object to be monitored, which in the present document is also referred to as the center of gravity of an object. The use of this term does not mean that it is the physical center of gravity of the object. Although the point may be at the physical center of gravity of an object, it may be located elsewhere in or around the object.
  • the exact arrangement of such a center of gravity can be determinable, for example, on the basis of the parameters output by the corresponding image sensor and the parametric model used.
  • the center of gravity can, however, also be determinable, for example, directly in the corresponding image sensor on the basis of the parametric model used, with the image sensors each outputting only data to the beam starting from the respective image sensor through the center of gravity of the corresponding object.
  • the arrangement of the center of gravity in the parameterized model or in the object to be monitored depends on the model.
  • the center of gravity can be determined differently based on the parameters. For example, it may be a geometric center of the parameterized model. It can also be another point in or around the parameterized model.
  • the center of gravity may be, for example, a center of the ellipse.
  • if the object to be monitored is a person, it may, for example, also be the position of the feet or the head of the person to be monitored, which is determined on the basis of the parameters of the parameterized model.
  • each detected object is represented by a parameterized model
  • the value of the height can be adapted dynamically to the last determined value.
  • the value of the height is in each case dynamically adapted to an average of the previously determined values of the height.
  • if each detected object is represented by a parameterized model, it is preferably taken into account for the evaluation of the match whether the center point of a shortest connecting line between the ray from the one image sensor through the center of gravity of the parameterized model detected by it and the ray from the other image sensor through the center of gravity of the parameterized model detected by it lies at a height that is reasonable for the center of gravity of the objects to be monitored.
  • a fixed range for a reasonable height is given.
  • an area around a stored and dynamically adjusted value of the height of the center of gravity of the corresponding object is used for the determination of a meaningful height.
  • the use of a fixed range for a meaningful height also has the advantage that the objects to be monitored can be selected according to the height of their center of gravity.
  • there is also the possibility that no selection of the objects to be monitored takes place according to the height of their center of gravity. For example, this may be the case when the objects to be monitored include both adults and children of different ages and, where appropriate, animals such as dogs or cats.
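The two geometric criteria above (the minimum distance between the two center-of-gravity rays, and whether the midpoint of their shortest connecting line lies at a plausible height) can be computed with the standard closest-point formulas for two lines. The sketch below assumes each ray is given by the sensor position and a direction in a common 3D coordinate system with a vertical z axis; the 0.8-1.0 m band is taken from the adult-person example given later in this document, and everything else is illustrative.

```python
import numpy as np

def closest_points_on_rays(p1, d1, p2, d2, eps=1e-9):
    """Closest points C1, C2 on the lines P1 + t*d1 and P2 + s*d2."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < eps:                  # (nearly) parallel rays
        t, s = 0.0, (e / c if c > eps else 0.0)
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    return p1 + t * d1, p2 + s * d2

def ray_match_features(p1, d1, p2, d2, height_band=(0.8, 1.0)):
    """Minimum distance between the two rays and whether the midpoint of their
    shortest connecting line lies at a plausible center-of-gravity height."""
    c1, c2 = closest_points_on_rays(p1, d1, p2, d2)
    min_dist = float(np.linalg.norm(c1 - c2))
    mid_height = float((c1[2] + c2[2]) / 2.0)  # z axis assumed vertical
    plausible = height_band[0] <= mid_height <= height_band[1]
    return min_dist, mid_height, plausible
```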
  • each detected object is represented by a parameterized model
  • a similarity of the parameterized models of the objects to be monitored for the evaluation of the match is preferably taken into account.
  • the parameters that are output from the different image sensors to the modeled objects are directly compared.
  • a position of the models output by the image sensors is compared in space, wherein an orientation and positioning of the image sensors are taken into account.
  • the position of the respective object may be determined by the intersection of the ray from the corresponding image sensor through the center of gravity of the object with a plane at the height of the center of gravity of the corresponding object above the floor of the corresponding subarea, with the positions of the objects thus determined from the data output by the various image sensors being compared.
  • the detected objects are modeled with a three-dimensional model such as, for example, an ellipsoid, a cylinder or a rectangular block.
  • there is also the possibility that the data output by the image sensors do not contain any parameters of this three-dimensional model, but instead include parameters of a two-dimensional model representing a projection of the three-dimensional model onto a two-dimensional surface, the two-dimensional surface corresponding to a modeling of the monitored subareas.
  • the data output by the image sensors may be, for example, parameters for an ellipse, a rectangle or another geometric shape. If, in this example, the similarity of the parameterized models of the objects to be monitored is taken into account for the evaluation of the conformity, then for example an orientation and a positioning of the image sensors as well as the fact that the parameters output are parameters of a model can be taken into account, which corresponds to a projection. As an alternative, there is also the possibility that a similarity of the parameterized models of the objects to be monitored is not taken into account for the evaluation of the match.
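One concrete way to realize the position comparison described in the bullets above is to intersect each sensor's center-of-gravity ray with a horizontal plane at an assumed height of the center of gravity and to compare the resulting positions in the plane of the surveillance area. The following sketch is only an illustration of that idea; the assumed height, the distance scale and the function names are inventions of this example.

```python
import numpy as np

def ground_position(sensor_pos, ray_dir, centroid_height):
    """Intersect the ray from the sensor through the model's center of gravity
    with the horizontal plane z = centroid_height; return the (x, y) position."""
    p, d = np.asarray(sensor_pos, float), np.asarray(ray_dir, float)
    if abs(d[2]) < 1e-9:
        return None                       # ray is parallel to the plane
    t = (centroid_height - p[2]) / d[2]
    return (p + t * d)[:2]

def position_similarity(pos_a, pos_b, scale=0.5):
    """Map the distance between the two reconstructed positions to a value in
    (0, 1]; `scale` (in metres) is an arbitrary example parameter."""
    return float(np.exp(-np.linalg.norm(pos_a - pos_b) / scale))
```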
  • speeds and directions of movement of the objects to be monitored, as detected by the image sensors, are taken into account for the evaluation of the match. This has the advantage that objects which, because of their different movements, obviously do not match can be evaluated accordingly on the basis of their movement behavior and kept from being assigned to one another.
  • the assignments of the objects to be monitored which have been made earlier are taken into account for the evaluation of the match.
  • This has the advantage that a consistency of the assignments of the objects to be monitored can be achieved over a period of time.
  • the image sensors provide data on the detected objects in short time intervals. For example, this may be the case when the image sensors are cameras that record film sequences.
  • one or more other evaluation criteria such as, for example, a color of the objects to be monitored or another identification of the objects detected by the image sensors, to be used as an evaluation criterion for the evaluation of the match.
  • evaluation criteria may be used together with the above evaluation criteria to evaluate the match.
  • different scoring criteria or different combinations of scoring criteria are used to match the objects in different regions within the surveillance area. This can be advantageous, for example, if the lighting conditions within the monitoring area are very different. In this case, it may be useful, for example, to weight the speed of the objects to be monitored more than a size of the objects in a rather dark region, since a size estimation of the objects is less accurate due to the lighting conditions.
  • the surveillance area is modeled as a two-dimensional area with a two-dimensional coordinate system, wherein each of the objects to be monitored of the entirety of the objects to be monitored is characterized by data relating to this coordinate system.
  • the characterizing data may contain, for example, an identification of the corresponding object.
  • the data may also include, for example, information about a position, a speed, a course of movement or the like of the corresponding object. This has the advantage that a spatial assignment of the individual objects is made possible.
  • the surveillance area is modeled as a three-dimensional space with a three-dimensional coordinate system, wherein each of the objects of the entirety of the objects to be monitored is characterized by data relating to this coordinate system.
  • the characterizing data may include, for example, an identification of the corresponding object.
  • the data may also include, for example, information about a position, a speed, a course of movement or the like of the corresponding object.
  • the surveillance area is modeled as a one-dimensional space or as a line with a one-dimensional coordinate system, wherein each of the objects of the totality of the objects to be monitored is characterized by data relating to this coordinate system.
  • the surveillance area has an elongated shape and objects can only move along this elongate shape.
  • a surveillance area may be a street, a corridor, a roller conveyor, or any other laterally limited, elongated space.
  • a street, a corridor or a conveyor belt can also be modeled as a two- or three-dimensional space. The latter can be advantageous if, for example, vehicles on the road or persons or animals in the aisle or on the roller conveyor can overtake or cross one another.
  • the surveillance area is modeled as a one-, two- or three-dimensional space with an additional time dimension.
  • the surveillance area is not modeled as one-, two-, or three-dimensional space.
  • Such an alternative may be advantageous if, for example, only the number of objects to be monitored in the surveillance area is to be determined. In this case, no data on the location of the objects is needed, which also requires less computational capacity.
  • the monitoring area can also be modeled only by a time dimension in order to determine a temporal change of the number of objects to be monitored in the monitored area.
  • each partial area monitored by an image sensor is modeled as a two-dimensional area having a two-dimensional coordinate system, wherein the data output by each image sensor relating to the detected objects relate to the coordinate system of the partial area monitored by this image sensor.
  • the data output by the image sensors may contain, for example, an identification of the corresponding object.
  • the data may also include information about a position, a speed, a course of movement or the like of the corresponding object.
  • the output data may also include information about a size or other characteristics of the object. This has the advantage that a spatial assignment of the individual objects is made possible and that, if appropriate, certain features of the objects to be monitored can be detected.
  • each subarea monitored by an image sensor may be modeled as a three-dimensional space with a three-dimensional coordinate system, wherein the data output by each image sensor relating to the detected objects relate to the coordinate system of the subarea monitored by this image sensor.
  • the data output by the image sensors may contain, for example, an identification of the corresponding object.
  • the data may also include information about a position, a speed, a course of movement or the like of the corresponding object.
  • the output data may also include information about a size or other characteristics of the object.
  • This variant also has the advantage that a spatial assignment of the individual objects is made possible and that, if appropriate, certain features of the objects to be monitored can be detected.
  • each subarea monitored by an image sensor is modeled as a one-dimensional space or as a line with a one-dimensional coordinate system, with the data output by each image sensor relating to the detected objects referring to the coordinate system of the subarea monitored by this image sensor.
  • a subarea may be a roadway, a hallway, a conveyor belt, or any other laterally limited, elongated space.
  • a street or a roller conveyor can also be modeled as a two- or three-dimensional space. This can be advantageous if, for example, vehicles on the road or persons or animals in the corridor or on the roller conveyor can overtake or cross each other.
  • each subarea monitored by an image sensor is modeled as a one-, two- or three-dimensional space with an additional time dimension.
  • the data output by the image sensors to each of the detected objects to be monitored includes an identification and X and Y coordinates.
  • the data output by the image sensors also includes a velocity vector for each of the detected objects to be monitored.
  • the data output by the image sensors comprise different data for each of the detected objects to be monitored.
  • the objects detected by the image sensors are advantageously modeled as ellipsoids whose projection onto the two-dimensional surface of the corresponding subarea yields an ellipse, the data output by the image sensors for each of the detected objects to be monitored comprising an identification, X and Y coordinates of the center of the ellipse, the sizes of the major axes of the ellipse, an orientation angle of the ellipse, and a velocity vector.
  • This has the advantage that the data output by the image sensors are adequately anonymized.
  • the data output by the image sensors enables the objects to be distinguished, as well as positioning and monitoring the speed of the objects.
  • these output data have the advantage that they allow an estimation of a size and a position of the objects to be monitored in the room.
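As an illustration of what such an anonymized per-object record could look like (field names and units are assumptions of this sketch, not a format defined by the patent):

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Anonymized data a sensor could output per detected object: no image data,
    colors or body measurements, only the parameters of the projected ellipse and
    a velocity vector in the coordinate system of the monitored subarea."""
    object_id: int        # identification within this sensor's subarea
    x: float              # X coordinate of the ellipse center (m)
    y: float              # Y coordinate of the ellipse center (m)
    major_axis: float     # size of the major ellipse axis (m)
    minor_axis: float     # size of the minor ellipse axis (m)
    orientation: float    # orientation angle of the ellipse (rad)
    vx: float             # velocity vector, X component (m/s)
    vy: float             # velocity vector, Y component (m/s)

sample = DetectedObject(object_id=3, x=2.4, y=1.1, major_axis=0.6,
                        minor_axis=0.3, orientation=0.35, vx=0.8, vy=-0.1)
```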
  • the objects detected by the image sensors are modeled differently.
  • the objects can be modeled directly as ellipses.
  • the objects are not modeled as ellipsoids but, for example, as cylinders or rectangular blocks.
  • the objects can also be directly modeled as projections of such a cylinder or as a projection of such a rectangular block or directly as a rectangle.
  • the data output by the image sensors may include corresponding parameters of the modeling of the objects used instead of the major axes of the ellipses and the orientation angle of the ellipse.
  • these parameters may be a height and a radius and an orientation angle of a cylinder.
  • it may also be, for example, the side lengths and the orientation angle of a rectangle.
  • the objects detected by the image sensors are modeled as ellipsoids, wherein the data output by the image sensors for each of the detected objects to be monitored comprise an identification, X, Y and Z coordinates of a center of the ellipsoid, the sizes of the major axes of the ellipsoid, two orientation angles of the ellipsoid and a velocity vector.
  • the data output by the image sensors are anonymized.
  • the data output by the image sensors enable identification of the objects as well as positioning and monitoring of the speed of the objects.
  • these output data have the advantage that they allow an estimation of a size and a position of the objects to be monitored in the room.
  • the objects detected by the image sensors are modeled differently.
  • the objects can, instead of as ellipsoids, also be modeled, for example, as cylinders or rectangular blocks.
  • the data output by the image sensors may include, instead of the major axes of the ellipsoid and the two angles of orientation of the ellipsoid, corresponding parameters of the modeling of the objects used.
  • these parameters may be a height and a radius, as well as two orientation angles of a cylinder.
  • it may also be, for example, the side lengths and two orientation angles of a rectangular block.
  • the objects detected by the image sensors are modeled differently and that the data output by the image sensors comprise different data to the detected objects.
  • the data may include only one position of the detected objects.
  • the data include information such as colors or identification codes of the detected objects.
  • the coordinate systems to which the data output by the image sensors relate are advantageously rectified.
  • the coordinate systems are preferably rectified by straightening out curvatures caused by the lens (objective).
  • the coordinate systems to which the data output by the image sensors relate can also be rectified by adapting them to the coordinate system to which the data output by one of the image sensors relate. Both have the advantage that the coordinate systems to which the data output by the image sensors relate can be compared more easily with each other.
  • if the data output by the image sensors relate to a two-dimensional coordinate system and the surveillance area is modeled by a two-dimensional coordinate system, the data output by the image sensors are preferably transferred to the two-dimensional coordinate system of the surveillance area.
  • if the data output by the image sensors relate to a three-dimensional coordinate system and the surveillance area is modeled by a three-dimensional coordinate system, the data output by the image sensors are preferably transferred to the three-dimensional coordinate system of the surveillance area.
  • if the data output by the image sensors relate to a one-dimensional coordinate system and the surveillance area is modeled by a one-dimensional coordinate system, the data output by the image sensors are preferably transferred to the one-dimensional coordinate system of the surveillance area.
  • This transfer can be done, for example, by a conversion in which an alignment and positioning of the different image sensors in the coordinate system of the surveillance area are taken into account. Accordingly, the coordinate systems to which the data output by the image sensors relate may, for example, be rotated. In addition, for example, length units of the coordinate systems to which the data output by the image sensors relate can be converted to the coordinate system of the surveillance area.
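A minimal sketch of such a conversion for the two-dimensional case, assuming each sensor's stored pose is described by a rotation angle, a translation and a unit scale factor relating its subarea coordinate system to the coordinate system of the surveillance area; the pose values in the usage line are invented for the example.

```python
import numpy as np

def to_area_coordinates(point_xy, angle_rad, translation_xy, scale=1.0):
    """Convert a point from a sensor's two-dimensional subarea coordinate system
    into the two-dimensional coordinate system of the surveillance area:
    convert length units, rotate by the sensor's stored orientation, then
    translate by its stored position."""
    rot = np.array([[np.cos(angle_rad), -np.sin(angle_rad)],
                    [np.sin(angle_rad),  np.cos(angle_rad)]])
    return rot @ (scale * np.asarray(point_xy, float)) + np.asarray(translation_xy, float)

# e.g. a sensor rotated by 90 degrees and mounted at (5.0, 2.0) in area coordinates
p_area = to_area_coordinates([1.2, 0.4], np.pi / 2, [5.0, 2.0])
```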
  • if the data output by the image sensors relate to a two- or three-dimensional coordinate system and the surveillance area is modeled by a one-dimensional coordinate system, the data output by the image sensors are preferably transferred to the one-dimensional coordinate system of the surveillance area.
  • if the data output by the image sensors relate to a three-dimensional coordinate system and the surveillance area is modeled by a one- or two-dimensional coordinate system, the data output by the image sensors are preferably transferred to the one- or two-dimensional coordinate system of the surveillance area.
  • when transferring the data to the coordinate system of the surveillance area, the data may be projected onto the coordinate system of the surveillance area. If, for example, the data contain parameters of a modeling of the objects to be monitored, then the model can be projected onto the coordinate system. This means that if the data output by the image sensors contain parameters of an ellipsoid, this ellipsoid is, for example, projected onto a two-dimensional coordinate system of the surveillance area.
  • if the data output by the image sensors contain, for example, parameters of an ellipsoid or an ellipse and the surveillance area is modeled by a one-dimensional coordinate system, then, for example, a longitudinal extension of the ellipsoid or of the ellipse along the one-dimensional coordinate system can be taken into account in the transfer of the data.
  • if the data output by the image sensors relate to a coordinate system with fewer dimensions than the coordinate system of the surveillance area, the data output by the image sensors are preferably transferred to the coordinate system of the surveillance area by taking into account a spatial positioning and alignment of the coordinate systems of the image sensors in the coordinate system of the surveillance area.
  • if the data output by the image sensors relate to a coordinate system of the respective image sensor and the surveillance area is modeled by a two-dimensional coordinate system, a position and an orientation of each image sensor are preferably stored and taken into account for the transfer of the data output by the image sensors to the two-dimensional coordinate system of the surveillance area.
  • if the data output by the image sensors relate to a coordinate system of the respective image sensor and the surveillance area is modeled by a one- or three-dimensional coordinate system, a position and an orientation of each image sensor are preferably stored and taken into account for the transfer of the data output by the image sensors to the one- or three-dimensional coordinate system of the surveillance area. In both cases mentioned above, this has the advantage that an optimal transfer of the data to the coordinate system of the surveillance area is achieved.
  • if the data output by the image sensors relate to a coordinate system of the respective image sensor and the surveillance area is modeled by a coordinate system, a position and an orientation of the coordinate systems of the image sensors can also be determined in each case on the basis of features in the image data of the image sensors, this position and orientation being taken into account for the transfer of the data output by the image sensors to the coordinate system of the surveillance area.
  • the image sensors are mounted overhead. This means that the image sensors are aligned substantially vertically downwards and accordingly capture images of the events below the image sensors. The image sensors are thus expediently arranged above the objects to be monitored.
  • the overhead installation of the image sensors has the advantage that the subregions monitored by the image sensors are monitored from above.
  • the subareas are essentially horizontal surfaces which may have bulges and inclinations, but which are monitored from a position substantially perpendicular to their surface. Accordingly, the objects to be monitored move on a surface which is substantially parallel to the image plane of the image sensors. This enables optimal detection of positions and speeds of the objects to be monitored. In addition, this has the advantage that, from the perspective of the image sensors, the objects to be monitored only in very few cases move behind one another and can hide each other. If the image sensors are mounted overhead and the objects to be monitored are persons, there is a danger of persons hiding each other only, for example, in marginal areas of the monitored subareas or when an adult person bends over a child. Otherwise, the persons can be optimally monitored by this arrangement of the image sensors.
  • the image sensors are not mounted overhead.
  • they can be aligned obliquely downwards or horizontally laterally.
  • such an arrangement of the image sensors may also be advantageous. This may be the case, for example, when monitoring a conveyor belt on which there are persons or animals that cannot overtake each other. However, this can also be the case, for example, when objects transported on a conveyor belt are to be monitored.
  • Fig. 1 is a schematic representation of a device according to the invention for monitoring a surveillance area.
  • Fig. 3 is a further schematic representation of the device according to the invention.
  • Fig. 4 is a further schematic representation of the device according to the invention.
  • FIG. 1 shows a schematic representation of a device 1 according to the invention for monitoring a monitoring area 2.
  • This device 1 comprises a first image sensor 3 and a second image sensor 4. Both image sensors 3, 4 each comprise a camera which can record film sequences.
  • the first image sensor 3 monitors a first subarea 5 of the surveillance area 2, while the second image sensor 4 monitors a second subarea 6 of the surveillance area 2.
  • the two subregions 5, 6 overlap in an overlapping area 7 and together cover the entire surveillance area 2.
  • the device 1 comprises a calculation unit 8.
  • This calculation unit 8 can be, for example, a server or else a computer.
  • the two image sensors 3 and 4 are connected to the calculation unit 8 and output data to the calculation unit 8.
  • the calculation unit 8 can also be integrated into one of the image sensors 3, 4.
  • the image sensors 3, 4 can detect objects to be monitored within the subarea 5, 6 monitored by them.
  • the objects to be monitored may be persons. It can also be animals, vehicles or objects.
  • Figures 2a, 2b and 2c each show a schematic representation for illustrating the detection of an object to be monitored by an image sensor.
  • in the following, the first image sensor 3 and the first subarea 5 stand for the image sensor shown and the subarea shown.
  • however, this description applies equally to the second image sensor 4 described above and the second subarea 6 described above.
  • FIG. 2 a shows the first image sensor 3 and the first subregion 5 monitored by it.
  • the first image sensor 3 comprises a processing unit (not shown) which processes the image data recorded by the first image sensor 3. For this purpose, it identifies objects to be monitored, which are thereby detected by the image sensor 3, and outputs data on the detected objects to be monitored. In the present case, therefore, the data output is data relating to the person 9, which is located in the first subarea 5.
  • the objects to be detected are modeled as ellipsoids in three-dimensional space. As shown in Figure 2b, therefore, the person 9 is modeled by an ellipsoid 10.
  • the image data acquired by the first image sensor 3 are equalized in the processing unit in a first step and the rectified image data are provided with a two-dimensional coordinate system which extends over the monitored first subregion 5.
  • the objects to be monitored in the image data are identified by applying a known method for object identification.
  • this method can be a method in which a still image (without any object in the monitored first subarea 5) is subtracted from the image data.
  • an ellipse 11 is placed on the detected objects.
  • lengths of the two main axes of the ellipse 11, a position of the ellipse 11 and an angle between one of the two main axes of the ellipse 11 and a coordinate axis of the first subarea 5 are fitted to the corresponding detected object.
  • These data on the detected objects are output by the processing unit and the first image sensor 3 to the computing unit 8 (see FIG. 1) where they are further processed.
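The detection steps described above can be sketched with off-the-shelf OpenCV calls (assuming OpenCV 4.x return conventions). This is a generic background-subtraction and ellipse-fitting illustration, not the processing actually implemented in the image sensors; the threshold and the minimum contour area are arbitrary example values.

```python
import cv2

def detect_ellipses(frame_gray, background_gray, thresh=30, min_area=500):
    """Subtract a still background image, binarize the difference and fit an
    ellipse to each sufficiently large foreground contour. Returns a list of
    ((cx, cy), (axis1, axis2), angle_deg) tuples, one per detected object."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ellipses = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area or len(cnt) < 5:
            continue                      # cv2.fitEllipse needs at least 5 points
        ellipses.append(cv2.fitEllipse(cnt))
    return ellipses
```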
  • FIG. 3 shows, as already shown in FIG. 1, a schematic representation of the device 1 according to the invention for monitoring the monitoring area 2.
  • a first of these two persons 9.1 is also located in the overlap area 7. It is detected by both the first and the second image sensor 3, 4.
  • the second person 9.2 is located only in the second subarea 6 and is only detected by the second image sensor 4.
  • the rays 12.1, 12.2 are shown, which run from the respective image sensor 3, 4 through the center of gravity or center of the modeled ellipsoid and the corresponding modeled ellipse 11.1, 11.2.
  • the two ellipses 11.1, 11.2 are shown, which are detected by the two image sensors 3, 4 for the first person 9.1.
  • the parameters of the two ellipses 11.1, 11.2 are output from the image sensors 3, 4 to the calculation unit 8.
  • the ray 12.3 is shown, which extends from the second image sensor 4 through the center of gravity or center of the modeled ellipsoid and the corresponding modeled ellipse 11.3.
  • the ellipse 11.3 is shown, which is detected by the second image sensor 4 for the second person 9.2.
  • the parameters of this ellipse 11.3 are output from the second image sensor 4 to the calculation unit 8.
  • a totality of the objects to be monitored is determined by the calculation unit 8.
  • the data output by the two image sensors 3, 4 is converted to a two-dimensional coordinate system which extends over the monitoring area 2.
  • a matrix is created on the basis of the converted data.
  • the elements of this matrix contain evaluations for the probability that an object detected by the first image sensor 3 coincides with one of the second image sensor 4.
  • these scores are normalized to probabilities, i.e. to one.
  • the ratings could also be normalized to another value.
  • the most probable allocation of the detected objects is determined and the entirety of the objects to be monitored is determined. In this case, the most probable assignment is determined by the calculation unit 8 using the Hungarian method, which is also referred to as the Kuhn-Munkres algorithm.
  • a size of the matrix created by the calculation unit depends on the number of objects detected by the two image sensors 3, 4. Because an object detected by one image sensor 3, 4 may not be detected by the other image sensor 4, 3, the matrix contains a number of rows equal to the number of objects detected by the one image sensor 3, 4 plus one. For the same reason, the matrix contains a number of columns equal to the number of objects detected by the other image sensor 4, 3 plus one. In the present case, the first image sensor 3 detects only one object to be monitored, namely the first person 9.1. Therefore, the matrix determined by the calculation unit 8 has two rows. By contrast, the second image sensor 4 detects two objects to be monitored, the first and the second person 9.1, 9.2. Accordingly, the matrix determined by the calculation unit 8 has three columns. Thus, the matrix determined by the calculation unit 8 has the form:
  • the first line of this matrix relates to assessments that the first person 9.1 detected by the first image sensor 3 coincides with one person or no person detected by the second image sensor 4.
  • the second row of the matrix refers to assessments that no person detected by the first image sensor 3 agrees with one person or with no person detected by the second image sensor 4.
  • the first column of the matrix relates to assessments that the first person 9.1 detected by the second image sensor 4 coincides with one person or no person detected by the first image sensor 3.
  • the second column of the matrix relates to evaluations that the second person 9.2 detected by the second image sensor 4 agrees with one person or with no person detected by the first image sensor 3.
  • the third column relates to assessments that no person detected by the second image sensor 4 coincides with one person or with no person detected by the first image sensor 3.
  • the element containing the evaluation that no person detected by the first image sensor 3 coincides with no person detected by the second image sensor 4 can be set to a fixed value. In the following, this element is set to zero. It could also be set to any other value.
  • both the positions and the orientations of the two image sensors 3, 4 are stored. Therefore, based on the data output by the first image sensor 3, the calculation unit 8 can determine whether an object detected by the first image sensor 3 is only in the first subregion 5, or whether it is also in the second subregion 6 and thus in the overlap region 7.
  • the calculation unit 8 can therefore determine whether an object detected by the second image sensor 4 is only in the second subarea 6, or if it is also in the first subarea 5 and thus in the overlap region 7.
  • the first person 9.1 is in the overlap area 7, while the second person 9.2 is located only in the second subarea 6. Accordingly, the calculation unit 8 sets the score p_12 to zero and the score p_22 to 1:
  • the elements p_11, p_21 and p_13 of the matrix are determined by the calculation unit 8 by the application of boundary conditions as well as certain evaluation criteria.
  • this means that the matrix has the form:
  • the matrix set up by the calculation unit 8 looks different.
  • the matrix created by the calculation unit 8 looks as follows:
  • the values of the elements p_11 and p_12 are determined by the calculation unit 8 according to the evaluation criteria explained below.
  • FIG. 4 shows a schematic representation of the device 1 according to the invention for monitoring the monitoring area 2.
  • no persons are shown.
  • the two ellipses 11.1, 11.2 in the overlap area 7 are shown, together with the rays 12.1, 12.2 emanating from the image sensors 3, 4, whose data according to FIG. 3 for the detected first person 9.1 (not shown here) are output by the image sensors 3, 4 to the calculation unit 8.
  • FIG. 4 serves to illustrate the determination of the elements of the matrix.
  • the elements of the matrix contain the evaluations for the probability that an object detected by the first image sensor 3 coincides with an object detected by the second image sensor 4.
  • different evaluation criteria can be used.
  • a minimum distance 13 between the rays 12.1, 12.2 emanating from the image sensors 3, 4 can be used as an evaluation criterion.
  • a similarity of the ellipse sizes can be used.
  • the ellipses are projections of an ellipsoid. Accordingly, a distance in the plane of the monitoring area 2 to the respective image sensor 3, 4 can be taken into account for the determination of the ellipse sizes.
  • a measure can be used which rates a meaningful height of the midpoint 14 of the line of the minimum distance 13 between the beams 12.1, 12.2 emanating from the image sensors 3, 4.
  • the sensible height can be adapted to the objects to be monitored. For adult persons, the reasonable height can be 80-100 cm, depending on the assumed person size.
  • the matrix elements can each be formed from a sum of the different evaluation values.
  • the different evaluation values of the different evaluation criteria can also be weighted differently.
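A sketch of how such a weighted sum of evaluation values could form one matrix element, reusing the geometric features from the ray sketch earlier in this document; the weights, the distance scale and the size-similarity term are assumptions of the example, and further criteria (movement behavior, earlier assignments, region-dependent weights) could be added as additional terms in the same way.

```python
import numpy as np

def matrix_element(min_dist, mid_height, size_a=None, size_b=None,
                   height_band=(0.8, 1.0), weights=(0.5, 0.3, 0.2), dist_scale=0.3):
    """Combine several evaluation criteria into a single score in [0, 1]."""
    w_dist, w_height, w_size = weights
    dist_term = np.exp(-min_dist / dist_scale)     # closer rays -> more probable match
    height_term = 1.0 if height_band[0] <= mid_height <= height_band[1] else 0.0
    if size_a is not None and size_b is not None:  # similarity of the ellipse sizes
        size_term = min(size_a, size_b) / max(size_a, size_b)
    else:
        size_term = 0.0
    return float(w_dist * dist_term + w_height * height_term + w_size * size_term)
```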
  • the assignment of the objects to be monitored on the basis of the matrix by the calculation unit 8 takes place in a manner known per se, using the Hungarian method, which is also referred to as the Kuhn-Munkres algorithm. From this assignment, the totality of the objects to be monitored is determined by the calculation unit 8.
  • the entirety of the objects to be monitored can be output by the calculation unit 8 again.
  • the monitored objects can be displayed as points in the monitoring area 2 on a screen (not shown).
  • the entirety of the objects to be monitored is output only in the form of numbers.
  • the calculation unit 8 may be connected to a further computer, which evaluates the time profile of the number of objects to be monitored.
  • this additional computer can also, for example, record the movements of the objects and output congestion warnings if too many objects accumulate in one area of the surveillance area.
  • the device 1 described above is not the only embodiment according to the invention.
  • Various modifications of the device 1 are possible.
  • the objects to be detected are not modeled by the image sensors as ellipsoids but differently.
  • there is also the possibility that the data output by the image sensors contain only the positions of the detected objects or only parameters for rays starting from the respective image sensor to the respective detected object.
  • the data acquisition of the image sensors essentially takes place in three steps as described above in connection with FIG. 2c. Of the data thus acquired, for example, only the position of the ellipse center for a detected object may be output by the processing unit and the corresponding image sensor to the calculation unit.
  • the image sensors in each case can output only data on a beam starting from the corresponding image sensor through the ellipse center.
  • another form is fitted to the detected object.
  • it can be a circle or a rectangle.
  • the value of the height of the center of the line of the minimum distance can be stored directly as the height of the center of gravity.
  • a mean value of the previously determined values of the height of the center of gravity is stored. The latter allows the consideration of several determinations of the height of the center of gravity, allowing a more precise determination of the height of the center of gravity. This leads to a more accurate position determination of the objects to be monitored, since the rays emanating from an image sensor and extending to the detected object can run in a very inclined manner when an object is located in the edge region of a subregion. Accordingly, even small deviations in the height of the center of gravity can lead to considerable positional deviations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Alarm Systems (AREA)

Abstract

The present invention relates to a method and a device (1) for monitoring a surveillance area (2), the device comprising at least two image sensors (3, 4). A subregion (5, 6) of the surveillance area (2) is monitored by each of the image sensors (3, 4); the objects (9.1, 9.2) to be monitored by each image sensor (3, 4) within the subregion (5, 6) in which they are located are detected, and data on the objects (9.1, 9.2) detected by each image sensor (3, 4) are output. The image sensors (3, 4) are arranged and oriented such that the subregions (5, 6) overlap and each monitored object (9.1, 9.2) located in the surveillance area (2) is always detected by at least one image sensor (3, 4). A set of data on the objects to be monitored in the surveillance area (2) is determined from the data of the image sensors (3, 4); to determine this set of data, the objects to be monitored (9.1, 9.2) in overlapping subregions (5, 6) that are detected by more than one image sensor (3, 4) can be assigned to one another by a calculation unit (8) on the basis of the data of the image sensors (3, 4), by evaluating their mutual agreement. The method and the device are suitable for people tracking and people counting and operate on a per-sensor or per-camera basis.
EP12794848.7A 2011-11-29 2012-11-23 Procédé et dispositif destinés à surveiller une zone de surveillance Withdrawn EP2786564A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP12794848.7A EP2786564A1 (fr) 2011-11-29 2012-11-23 Procédé et dispositif destinés à surveiller une zone de surveillance

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP11405363.0A EP2600611A1 (fr) 2011-11-29 2011-11-29 Procédé et dispositif de surveillance d'une zone de surveillance
PCT/CH2012/000261 WO2013078568A1 (fr) 2011-11-29 2012-11-23 Procédé et dispositif destinés à surveiller une zone de surveillance
EP12794848.7A EP2786564A1 (fr) 2011-11-29 2012-11-23 Procédé et dispositif destinés à surveiller une zone de surveillance

Publications (1)

Publication Number Publication Date
EP2786564A1 true EP2786564A1 (fr) 2014-10-08

Family

ID=47278628

Family Applications (2)

Application Number Title Priority Date Filing Date
EP11405363.0A Withdrawn EP2600611A1 (fr) 2011-11-29 2011-11-29 Procédé et dispositif de surveillance d'une zone de surveillance
EP12794848.7A Withdrawn EP2786564A1 (fr) 2011-11-29 2012-11-23 Procédé et dispositif destinés à surveiller une zone de surveillance

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP11405363.0A Withdrawn EP2600611A1 (fr) 2011-11-29 2011-11-29 Procédé et dispositif de surveillance d'une zone de surveillance

Country Status (3)

Country Link
US (1) US9854210B2 (fr)
EP (2) EP2600611A1 (fr)
WO (1) WO2013078568A1 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013209484A1 (de) * 2013-05-22 2014-11-27 Hella Kgaa Hueck & Co. Verfahren zur Ermittlung eines Flächenbelegungsgrads oder eines Volumenbelegungsgrads
KR101557376B1 (ko) * 2014-02-24 2015-10-05 에스케이 텔레콤주식회사 사람 계수 방법 및 그를 위한 장치
KR102101438B1 (ko) * 2015-01-29 2020-04-20 한국전자통신연구원 연속 시점 전환 서비스에서 객체의 위치 및 크기를 유지하기 위한 다중 카메라 제어 장치 및 방법
WO2017060083A1 (fr) * 2015-10-06 2017-04-13 Philips Lighting Holding B.V. Système de comptage de personnes et d'éclairage intégré
KR102076531B1 (ko) * 2015-10-27 2020-02-12 한국전자통신연구원 멀티 센서 기반 위치 추적 시스템 및 방법
EP3368953B1 (fr) * 2015-10-30 2019-09-25 Signify Holding B.V. Mise en service d'un système de capteur
US10388027B2 (en) * 2016-06-01 2019-08-20 Kyocera Corporation Detection method, display apparatus, and detection system
US10929561B2 (en) * 2017-11-06 2021-02-23 Microsoft Technology Licensing, Llc Removing personally identifiable data before transmission from a device
US10776672B2 (en) 2018-04-25 2020-09-15 Avigilon Corporation Sensor fusion for monitoring an object-of-interest in a region
EP3564900B1 (fr) * 2018-05-03 2020-04-01 Axis AB Procédé, dispositif et système pour un degré de flou devant être appliqué à des données d'image dans une zone de confidentialité d'une image
WO2020119924A1 (fr) 2018-12-14 2020-06-18 Xovis Ag Procédé et agencement pour déterminer un groupe de personnes à considérer
CN110503028B (zh) * 2019-08-21 2023-12-15 腾讯科技(深圳)有限公司 确定区域中对象的分布的传感器、系统、方法和介质
DE102019214198A1 (de) * 2019-09-18 2021-03-18 Robert Bosch Gmbh Ereignisbasierte Erkennung und Verfolgung von Objekten
CN113705388B (zh) * 2021-08-13 2024-01-12 国网湖南省电力有限公司 基于摄像信息实时定位多人空间位置的方法及系统

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110150327A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Method and apparatus for masking privacy area of image

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090231436A1 (en) 2001-04-19 2009-09-17 Faltesek Anthony E Method and apparatus for tracking with identification
US8547437B2 (en) 2002-11-12 2013-10-01 Sensormatic Electronics, LLC Method and system for tracking and behavioral monitoring of multiple objects moving through multiple fields-of-view
US20050012817A1 (en) * 2003-07-15 2005-01-20 International Business Machines Corporation Selective surveillance system with active sensor management policies
US7286157B2 (en) 2003-09-11 2007-10-23 Intellivid Corporation Computerized method and apparatus for determining field-of-view relationships among multiple image sensors
US7346187B2 (en) 2003-10-10 2008-03-18 Intellivid Corporation Method of counting objects in a monitored environment and apparatus for the same
US7558762B2 (en) * 2004-08-14 2009-07-07 Hrl Laboratories, Llc Multi-view cognitive swarm for object recognition and 3D tracking
US7924311B2 (en) * 2004-12-21 2011-04-12 Panasonic Corporation Camera terminal and monitoring system
DE102005013225A1 (de) * 2005-03-18 2006-09-28 Fluyds Gmbh Objektverfolgungs- und Situationsanalysesystem
US7418113B2 (en) * 2005-04-01 2008-08-26 Porikli Fatih M Tracking objects in low frame rate videos
US7409076B2 (en) 2005-05-27 2008-08-05 International Business Machines Corporation Methods and apparatus for automatically tracking moving entities entering and exiting a specified region
US8116564B2 (en) 2006-11-22 2012-02-14 Regents Of The University Of Minnesota Crowd counting and monitoring
US8253797B1 (en) * 2007-03-05 2012-08-28 PureTech Systems Inc. Camera image georeferencing systems
US8098891B2 (en) 2007-11-29 2012-01-17 Nec Laboratories America, Inc. Efficient multi-hypothesis multi-human 3D tracking in crowded scenes

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110150327A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Method and apparatus for masking privacy area of image

Also Published As

Publication number Publication date
EP2600611A1 (fr) 2013-06-05
WO2013078568A1 (fr) 2013-06-06
US20140327780A1 (en) 2014-11-06
US9854210B2 (en) 2017-12-26
EP2600611A8 (fr) 2013-07-17

Similar Documents

Publication Publication Date Title
WO2013078568A1 (fr) Procédé et dispositif destinés à surveiller une zone de surveillance
DE60308782T2 (de) Vorrichtung und Methode zur Hinderniserkennung
DE102014105351B4 (de) Detektion von menschen aus mehreren ansichten unter verwendung einer teilumfassenden suche
DE102009009815B4 (de) Verfahren und Vorrichtung zur Erkennung von Parklücken
DE10029866B4 (de) Objekterkennungssystem
WO2008083869A1 (fr) Procédé, dispositif et programme informatique pour l'auto-calibrage d'une caméra de surveillance
WO2013029722A2 (fr) Procédé de représentation de l'environnement
DE102004018813A1 (de) Verfahren zur Erkennung und/oder Verfolgung von Objekten
WO2007107315A1 (fr) Détecteur d'objets multi-sensoriel reposant sur des hypothèses et dispositif de suivi d'objets
DE102018133441A1 (de) Verfahren und System zum Bestimmen von Landmarken in einer Umgebung eines Fahrzeugs
DE102012000459A1 (de) Verfahren zur Objektdetektion
DE102018123393A1 (de) Erkennung von Parkflächen
DE102017215079A1 (de) Erfassen von Verkehrsteilnehmern auf einem Verkehrsweg
WO2020178198A1 (fr) Estimation du déplacement d'une position d'image
DE102016201741A1 (de) Verfahren zur Höhenerkennung
DE10148070A1 (de) Verfahren zur Erkennung und Verfolgung von Objekten
DE112019004963T5 (de) Optikbasiertes mehrdimensionales Ziel- und Mehrfachobjekterkennungs- und verfolgungsverfahren
DE102020133506A1 (de) Parkplatzsteuerungssystem, Parkplatzsteuerungsverfahren und Programm
DE10049366A1 (de) Verfahren zum Überwachen eines Sicherheitsbereichs und entsprechendes System
DE102019209473A1 (de) Verfahren und Vorrichtung zur schnellen Erfassung von sich wiederholenden Strukturen in dem Bild einer Straßenszene
WO2019162327A2 (fr) Procédé de calcul d'un éloignement entre un véhicule automobile et un objet
WO2022079162A1 (fr) Système et procédé d'annotation de données radar de voiture
DE102006036345A1 (de) Verfahren zur Lagebestimmung von Objekten im dreidimensionalen Raum
DE102021210256A1 (de) Messsystem und Aufzeichnungsmedium, das darauf ein Messprogramm speichert
DE102019210518A1 (de) Verfahren zum Erkennen eines Objektes in Sensordaten, Fahrerassistenzsystem sowie Computerprogramm

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140616

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20170511

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170922