EP1434184B1 - Commande d'un système multicaméra - Google Patents

Commande d'un système multicaméra

Info

Publication number
EP1434184B1
EP1434184B1 EP03029530A
Authority
EP
European Patent Office
Prior art keywords
camera
line
sight
image
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP03029530A
Other languages
German (de)
English (en)
Other versions
EP1434184A3 (fr)
EP1434184A2 (fr)
Inventor
Markus Dr. Michaelis
Volker Dr. Steinbiss
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Funkwerk Video Systeme GmbH
Original Assignee
Funkwerk Plettac Electronic GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Funkwerk Plettac Electronic GmbH filed Critical Funkwerk Plettac Electronic GmbH
Publication of EP1434184A2 publication Critical patent/EP1434184A2/fr
Publication of EP1434184A3 publication Critical patent/EP1434184A3/fr
Application granted granted Critical
Publication of EP1434184B1 publication Critical patent/EP1434184B1/fr
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00 - Burglar, theft or intruder alarms
    • G08B 13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B 13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B 13/19639 - Details of the system layout
    • G08B 13/19641 - Multiple cameras having overlapping views on a single scene
    • G08B 13/19643 - Multiple cameras having overlapping views on a single scene wherein the cameras play different roles, e.g. different resolution, different camera type, master-slave camera

Definitions

  • the invention relates to a method for controlling a multi-camera system with the features of claim 1 and an apparatus for implementing the method with the features of claim 16.
  • Multicamera systems are well known. Especially in surveillance and alarm systems, there is often a combination of stationary cameras and pan-tilt-zoom cameras. Stationary cameras are cheaper and allow continuous monitoring of a predefined area. A disadvantage of stationary cameras, however, is that an intruder cannot be tracked. If an intruder leaves the predefined surveillance field, tracking becomes impossible and the intruder is often lost. Moreover, with such systems, the coarse resolution of the camera often makes it impossible to identify an intruder or to detect small objects. Therefore, pan-tilt-zoom cameras are used to track and magnify an intruder detected with a stationary camera. The same problem arises in other applications, e.g. in camera systems for traffic observation or for sports events in large halls or stadiums. For a user, manually controlling the pan-tilt-zoom camera so as to capture an object quickly and accurately is generally difficult.
  • From EP-A1-0 529 317 and EP-A1-0 714 081 it is known to hand over moving objects that are being tracked with cameras from one camera to the next.
  • The invention described there concerns an application within closed spaces, for example within a casino. The closest available pan-tilt-zoom camera is selected to track the object when it leaves the surveillance area of the camera currently observing it.
  • The applications deal with interiors in which the area to be monitored is a single plane, the floor plane. A known size (a person) and ground contact are assumed for the objects.
  • US-A-6,359,647 discloses another invention which makes it possible to hand over objects between cameras and their monitored areas.
  • the cameras used here can be both pan-tilt-zoom cameras and stationary cameras.
  • The aim of that invention is to track an object continuously across multiple cameras; zoomed object tracking and the associated problems are not addressed. It is assumed that the objects move on a single plane (the floor plane) and have a known size, or that the distance is determined from the focus setting of the camera lens.
  • From US-A-6,215,519 it is known to use a second camera with higher resolution for tracking moving objects that have been detected by a stationary camera with a large field of view.
  • The stationary camera is assumed there to be an omnidirectional camera with very low resolution, so that the higher resolution of the second camera is merely a normal resolution and accordingly does not require high pointing accuracy.
  • To localize the object with the second camera, it is proposed to place both cameras directly next to one another or to assume that the objects lie on a known ground plane.
  • a video surveillance device has a permanently installed first video camera.
  • The image section captured by the first video camera is divided into different evaluation windows. If a change in the image is detected in one of the evaluation windows, a control device sets the second camera to predefined setting values which represent that evaluation window.
  • the object of the invention is to improve the detection of an object visible in the image of a first camera by means of a second orientable camera.
  • the camera system according to the invention consists of a first camera, also referred to as a master camera, and at least one second camera, also referred to as a slave camera.
  • the second camera is advantageously a pan-tilt-zoom camera.
  • an object detector is present.
  • An object detector refers to any type of suitable device, method or human user that can detect an object of interest in a camera image, e.g. people, cars, faces, moving objects, the position of a vanished object, a cloud of smoke, a shadow, etc. This can be, for example, a human user who detects the object on a monitor image or another suitable representation of the camera image via a suitable input medium, e.g. a mouse. Another example is video motion detectors that detect moving objects. Another possibility is to detect objects based on characteristic features, e.g. color, shape, or face recognition. A user is understood below to mean a person who operates the system.
  • An automatic detector should be understood as meaning detectors which function without human intervention. Semi-automatic combinations are also possible, in which an automatic detector makes detection suggestions that must be verified by a user. Detection should also be understood here as localization, i.e. a detection generally refers to a position determination, in this sense, in a camera image. In the case of a user, this is often done in advantageous embodiments by a mouse click. However, a joystick, a touch-sensitive monitor, or another suitable input medium may also be used.
  • the position can be a point that is assigned to the object or even one or more surfaces.
  • the position is often determined by a pixel.
  • This pixel has a finite extent and may, for example, be represented at a higher resolution by multiple pixels.
  • Depending on the detector, for example a motion detector, or with input via so-called touchscreens, one or more surfaces may be selected as the position of the object. In the following, a point in the image or a pixel is therefore also understood to include one or more surfaces. A point in the scene imaged by the camera can therefore also have an extent.
  • a monitor is to be understood as meaning any suitable device for temporally variable representation of images, text and graphics.
  • These can be, for example, CRT monitors such as PC monitors, surveillance monitors, TVs or the like. They can also be flat screens, LCD screens or other types of displays.
  • A camera image means both an image actually visible to a user and a non-visible representation of the same image as digital data in a computer.
  • If a process part is executed by a user, a monitor image is meant, and the superimposition or marking of lines, points, etc. takes place in the monitor image in a form perceptible to the user.
  • If a process part is executed by an automatic detector, a general image is meant, which is not necessarily visibly displayed.
  • The drawing-in or superimposition then corresponds to the corresponding positions in the image being taken into account by the automatically executed process part concerned.
  • a master camera is a camera in which an object is detected by an object detector, wherein this detection is to be used to detect the same object in further, so-called slave cameras.
  • the master camera is often stationary, but it can also be equipped with a pan-tilt unit.
  • Any camera is suitable as a master camera for which, at any given time, there is a unique association between a point in the camera image and a so-called visual ray.
  • A visual ray belonging to a pixel is the locus of all points in space that are imaged by the camera onto that pixel. For real cameras, these are straight lines for extensionless pixels and cones for extended pixels, i.e. the union of all corresponding straight lines.
  • Unless an extensionless straight line is given as the visual ray, a suitable straight line from this cone is used as a representative, usually the line belonging to the object center or to the object base. This may be the case, for example, when a video motion detector detects an area in the camera image or a user does not enter a pixel-accurate position via a touchscreen.
  • The assignment rule between pixels and visual rays, with the visual rays given in a camera-fixed coordinate system, is referred to as the internal calibration of the camera.
  • The position and orientation of the camera relative to a world coordinate system or to another camera is referred to as the external calibration. A sketch of how both calibrations map a detected pixel to a visual ray follows below.
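As an illustration of how internal and external calibration combine, the sketch below maps a detected pixel to a world-space visual ray under a pinhole-camera assumption. This is a minimal sketch, not the patent's implementation; the names `K`, `R` and `t` are hypothetical stand-ins for the stored calibration data.

```python
import numpy as np

def pixel_to_visual_ray(u, v, K, R, t):
    """Map a pixel (u, v) of the master camera to a visual ray in world
    coordinates, given the internal calibration K (3x3 intrinsics) and the
    external calibration R, t (world-to-camera: x_cam = R @ x_world + t)."""
    # Back-project the pixel to a direction in the camera-fixed frame.
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the direction into world coordinates and normalize it.
    d_world = R.T @ d_cam
    d_world /= np.linalg.norm(d_world)
    # The ray starts at the optical center of the master camera.
    origin = -R.T @ t
    return origin, d_world

# Example with made-up calibration values:
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
origin, direction = pixel_to_visual_ray(406, 240, K, np.eye(3), np.zeros(3))
```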
  • the property of the master camera refers only to the current detection task. In a further detection, the same camera can also perform the function of a slave camera if its field of view is adjustable.
  • the task of the slave camera is to capture the object detected in the master camera.
  • Any camera with an adjustable field of view is suitable as a slave camera.
  • The adjustment of the field of view for the detection of the object by the slave camera is understood to mean the alignment of the so-called camera vector with the object.
  • the camera vector is a predefined directional vector permanently connected to the camera; it is therefore set together with the field of view of the camera.
  • In advantageous embodiments, slave cameras are pan-tilt-zoom cameras (domes or pan-tilt heads).
  • Any other mechanism for adjusting the field of view is also suitable. A zoom feature is not necessary for all applications, but is often beneficial. It is assumed that the slave camera is externally calibrated, i.e. its position and orientation relative to the master camera are known.
  • The adjustment mechanism of the field of view is assumed to be calibrated, i.e. the camera vector can be set to specific directions and the zoom can be selectively controlled.
  • the function of the slave camera is only for this detection process.
  • the same camera can also be a master camera in another acquisition process.
  • a system may consist of two or more cameras, with each camera operating as a master camera and one or more cameras having an adjustable field of view operating as slave cameras for a specific object detection. For the next object detection, these can be other camera pairings or the roles of the last pairing can be reversed.
  • An object can also be detected by several master cameras at the same time or at different times. In the camera image of the master camera, it is also possible to detect a plurality of objects, which are then captured by a plurality of slave cameras, or to detect one object, which is then captured by a plurality of slave cameras. In all cases, the search for the object is restricted to the visual ray of the object given by the respective master camera. For multiple objects, there is a visual ray through each object.
  • This exemplary description does not represent a restriction of the invention to the listed examples. It merely serves to illustrate the invention and particularly advantageous embodiments.
  • FIGS. 3 to 5 allow a quick basic understanding of the invention.
  • FIG. 1 shows the block diagram of the system.
  • a first camera 101 as a master camera and a second camera 102 with an adjustable field of view as a slave camera are shown here.
  • the master camera 101 is a stationary camera.
  • The master camera 101 is connected to a control unit 140, which connects the master camera 101 to a central control unit 142 via analog outputs, in particular e.g. for the video signal, and digital outputs, in particular e.g. for control signals, over networks. If digital cameras or digital image transmission are used, the network for image transmission may be completely or partially digital. If the master camera 101 has an adjustable zoom lens, it is adjusted by the central control unit 142 via the control unit 140.
  • the current value is stored either in the central control unit 142 or queried via the control unit 140.
  • control unit 141 controls and connects the pan-tilt-zoom camera 102.
  • Pan and Tilt can also be set and the current values can be interrogated.
  • the central control unit 142 thus has access to the control, the current settings and the camera images of all participating cameras 101 and 102.
  • the camera images can be displayed on the monitors 144 and 145 connected to the central control unit 142. If automatic detectors are used for the master camera 101 and / or the slave camera 102, the corresponding monitors 144 and 145 can also be connected directly to the control units 140 and 141 or directly to the cameras 101 and 102 or they can be omitted altogether.
  • the central control unit 142 is capable of displaying information such as a mouse pointer, marked dots and visual rays in the displayed images.
  • Graphic controls, such as sliders, can also be superimposed.
  • A user can make the necessary control inputs via an input device 143, which may advantageously be a mouse and/or keyboard and/or slider and/or control panel and/or joystick, etc.
  • The central control unit 142 also stores the data for the internal and external calibration of the cameras 101 and 102. If available, terrain models are also stored, and automatic detectors, such as motion detectors, are installed. All information and modules for one of the cameras 101 or 102 may also be located in the local control units 140 or 141; these modules are advantageously modules for calibration and/or automatic detection.
  • In the central control unit 142, all inputs and information are collected and processed: for an object detected in the master camera 101, the associated visual ray 303 (in FIG. 3) is calculated in world coordinates and, insofar as the visual ray is captured by the slave camera 102, superimposed on the camera image of the slave camera 102 (visual ray 504 in FIG. 5). Further overlays are made and the slave camera 102 is controlled, for example, according to the embodiments described below. A sketch of such a ray overlay follows below.
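As an illustration of the overlay step just described, the sketch below samples the world-space visual ray between the surveillance bounds and projects it into the slave camera's image with a pinhole model. This is an assumption-laden sketch, not the actual code of unit 142; `K`, `R` and `t` denote the slave camera's hypothetical calibration.

```python
import numpy as np

def project_ray_into_slave(origin, direction, d_min, d_max, K, R, t, n=50):
    """Project the ray segment origin + d*direction (d in [d_min, d_max],
    e.g. the surveillance bounds 370/371) into the slave image and return
    the pixel coordinates of the resulting polyline for overlay drawing."""
    ds = np.linspace(d_min, d_max, n)
    pts_world = origin[None, :] + ds[:, None] * direction[None, :]
    pts_cam = (R @ pts_world.T).T + t        # world -> slave camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]     # keep points in front of camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]            # dehomogenize to pixel coords
```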
  • FIG. 2 shows a schematic representation of a camera.
  • Cameras are often modeled as so-called projective or pinhole cameras. All visual rays pass through one point, the so-called optical center 260 of the camera. The optical center 260 and a pixel 206 on the image plane 262 (one pixel on the CCD or CMOS chip in digital cameras) define a visual ray 203, which is the locus of all points in space that are imaged by the camera onto the pixel 206. Many real cameras can be made to match this model accurately by algorithmically correcting distortions in the image. For other cameras, there are similar models.
  • The orientation of a camera with adjustable field of view does not necessarily refer to the middle of the picture. If the camera is not modeled as a projective camera, it may also be useful to define the orientation of the camera by a ray that does not pass through the optical center of the camera, or there may be no unique optical center. Therefore, a general camera vector 261 is defined that describes the orientation of the camera. If the camera is modeled as a projective camera, the camera vector 261 can advantageously be anchored in the optical center 260, as shown in the drawing. The camera vector 261 then corresponds to a specific visual ray, and orienting the camera vector corresponds to orienting the associated pixel onto the object. For slave cameras with zoom capability, this pixel can advantageously be chosen as the zoom center, i.e. the pixel which remains stationary in the image as the zoom changes. Any other pixel, such as the center of the image, is also possible. A sketch of aiming the camera vector at a world point follows below.
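To make the notion of aiming the camera vector concrete, the sketch below converts a world point into pan and tilt angles for the slave camera's pan-tilt head. The frame convention (z up, pan about the vertical axis, zero pan/tilt looking along +x) is an assumption for illustration only.

```python
import numpy as np

def pan_tilt_for_point(p_world, head_position):
    """Pan/tilt angles (radians) that aim the camera vector at p_world,
    assuming a world frame with z pointing up and the head at rest looking
    along +x with zero pan and zero tilt."""
    d = p_world - head_position
    pan = np.arctan2(d[1], d[0])                    # azimuth in ground plane
    tilt = np.arcsin(d[2] / np.linalg.norm(d))      # elevation above horizon
    return pan, tilt

# Example: aim at a point 20 m north-east and 5 m above the head.
print(pan_tilt_for_point(np.array([20.0, 20.0, 5.0]), np.zeros(3)))
```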
  • FIG. 3 shows a schematic plan (top view) of an area to be monitored with a system of two cameras 301 and 302.
  • the camera 301 is the master camera, the camera 302, the slave camera.
  • the two cameras 301 and 302 are eg surveillance cameras and serve for the visual surveillance of the area.
  • In the area are the objects 311 (house), 312 - 315 (trees) and 316 (cars).
  • The master camera 301 monitors, within the field of view indicated by the boundary lines 321, the surveillance area assigned to this camera. Surveillance areas often do not cover the whole field of view. This is indicated by the lines 370 and 371, which here represent, by way of example, the beginning and end of the surveillance area.
  • the imaged scene represents the orientation of the camera at the time of object detection.
  • the slave camera 302 has an adjustable field of view, eg as pan-tilt-zoom camera.
  • Line 307 denotes the visual ray associated with the camera vector of the slave camera (or the straight line defined by the camera vector in the general case). Aligning the slave camera with an object means orienting this line onto the object.
  • FIG. 4 shows a camera image of the master camera 301 from FIG. 3, as it is presented, for example, to a user on a monitor. The image shows the objects 311, 314, 315 and 316 from FIG. 3.
  • Using an input unit, e.g. a mouse, a point 406 is selected in the camera image.
  • This point 406 can be marked by the system on the screen; in FIG. 4 this is done by a black dot.
  • The point 406 indicates an object in the scene, in this case a corner of the house, which is to be captured by the slave camera 302 from FIG. 3.
  • This object corresponds, in the area plan of FIG. 3, to the world point 305 with the associated visual ray 303.
  • The distinction between the world point 305 and the pixel 406, and between the (world) visual ray 303 and its image/projection in a camera image (504 in the camera image of the slave camera in FIG. 5), is essential.
  • The entire visual ray 303 is imaged onto the single pixel 406 in the camera image of the master camera.
  • the detector marks a pixel in the camera image of the master camera and thus a line of sight.
  • the position of the corresponding world point on the visual ray, ie its distance from the master camera, is still unknown.
  • Locating the world point and effectively controlling the slave camera 302 are the subject of the embodiments of the invention described below.
  • The point 406 in FIG. 4, or 305 in FIG. 3, also appears with adapted reference numerals in FIGS. 5 to 7.
  • The slave camera 302 can be arranged close to the master camera 301; with proper alignment, the visual rays of the master camera 301 and the camera vector of the slave camera then run quasi-parallel, and the object distance is irrelevant to the orientation of the slave camera.
  • the detection by the slave camera 302 is relatively easy to solve in this case.
  • A special case also arises if the terrain of the monitored area is known, especially if it has a very simple form such as a floor plane in interior monitoring. In this case, assuming that the object is on the ground, it is relatively easy to calculate the object distance and to direct the slave camera 302 at the object. In more complex terrain, the terrain shape could also be determined by the camera system itself. This is possible, for example, by calculating distance maps by means of known stereo or multi-camera methods.
  • A distance map is a map that stores, for each pixel in the camera image of the master camera, the distance to the scene point represented there. Another option, if the object size is known, is to use it to estimate the object distance and thus align the slave camera 302. In all these cases, however, only low to moderate accuracy is to be expected, because the cameras cannot be in the same place, because object sizes are only vaguely known, inaccurately detected, or variable (different objects, different views), and because terrain shapes and distance maps are normally only vaguely known. The use of terrain shapes or distance maps also requires the assumptions that the objects have ground contact and that the point 406 detected in the camera image of the master camera lies at a known height above the ground. A sketch of the ground-plane special case follows below.
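The ground-plane special case mentioned above amounts to intersecting the visual ray with a horizontal plane. A minimal sketch, assuming a world frame with z up and a detected point at a known height h above the ground:

```python
import numpy as np

def intersect_ray_with_ground(origin, direction, h=0.0):
    """Intersect the ray origin + d*direction with the plane z = h.
    Returns (world point, distance d along the ray) or None if the ray
    runs parallel to the plane or away from it."""
    if abs(direction[2]) < 1e-9:
        return None                 # ray parallel to the ground plane
    d = (h - origin[2]) / direction[2]
    if d <= 0:
        return None                 # intersection lies behind the camera
    return origin + d * direction, d
```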
  • The required accuracy is generally determined by the zoom, i.e. it depends on the camera aperture angle. Assuming that detection by the slave camera 302 requires an accuracy of e.g. 1/10 of the camera aperture angle, this means that for a high zoom with an aperture angle of about 2 degrees, the slave camera 302 must be set with an accuracy of 0.2 degrees. In most cases, this accuracy cannot be achieved with the methods given above.
  • Searching for the object in a way that is effective for the user or for an automatic detector is therefore the subject of the embodiments described below.
  • The point 406 corresponds to the visual ray 303 in FIG. 3.
  • For a user, entering this point means, for example, a mouse click.
  • The object distance can often be restricted to a certain range. Often this follows from the extent of the surveillance area covered by the master camera 301 in FIG. 3, which by way of example lies between the boundary lines 370 and 371 in FIG. 3.
  • The slave camera 302 is then automatically set by the system (pan, tilt and zoom), as shown in FIG. 3, so that its field of view, indicated by the boundary lines 322 in FIG. 3, completely captures the visual ray 303 of the pixel 406 within the surveillance area. If complete capture is not possible, the largest possible part can be captured and a warning displayed. If necessary, the focus can be automatically adjusted so that objects along the visual ray appear as sharp as possible. A sketch of such an automatic setting follows below.
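A sketch of this automatic setting, under the simplifying assumption that the field of view is a cone around the camera vector: aim at the angular bisector of the two segment endpoints (e.g. where the visual ray crosses the bounds 370 and 371) and open the aperture until both endpoints fall inside the view. The function name and the 10% margin are illustrative assumptions.

```python
import numpy as np

def cover_segment(p_near, p_far, cam_pos, margin=1.1):
    """Return (pan, tilt, aperture_angle) in radians so that a camera at
    cam_pos sees the whole segment p_near..p_far (world frame, z up)."""
    d1, d2 = p_near - cam_pos, p_far - cam_pos
    u1, u2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    aim = u1 + u2
    aim /= np.linalg.norm(aim)              # bisector of the two endpoints
    pan = np.arctan2(aim[1], aim[0])
    tilt = np.arcsin(aim[2])
    # Half-angle needed to include the more distant endpoint direction.
    half = max(np.arccos(np.clip(aim @ u, -1.0, 1.0)) for u in (u1, u2))
    return pan, tilt, margin * 2.0 * half   # aperture angle with margin
```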
  • FIG. 5 shows the camera image of the slave camera 302 set in this way, as it is presented to a user on a monitor or to an automatic detector.
  • The visual ray 303 from FIG. 3 is advantageously superimposed (line 504 in FIG. 5) in such a way that the user can orient themselves by it while, depending on the application, the image is not obscured.
  • For example, the visual ray 504 is displayed in color, dashed and/or semi-transparently, etc.
  • The user or an automatic detector now only has to search the image along the visual ray 504 to find the pixel 506, which is the image of the sought world point 305 from FIG. 3 in the camera image of the slave camera. For a user, this means e.g. a second mouse click.
  • the line of sight 504 can be an imaginary line.
  • The detection in the camera image of the slave camera determines the position of the object on the visual ray 303, and the slave camera 302 can be automatically aimed at the object and set to a high zoom. Restricting the search to this line is of course advantageous for an automatic detector, but for a user as well the superimposition of the visual ray 504 and the restriction of the search along the visual ray 504 can be helpful. This is the case, for example, if the object is very low in contrast and the image is of poor quality, or if there are a large number of similar objects, for example a person in a crowd of spectators.
  • A great advantage in any case is that the user or detector has captured the object with two detections (for a user, e.g. two mouse clicks) and two associated movements of the slave camera 302. If the object was detected as an area in the camera image of the master camera, the search area is correspondingly given by the sum of the visual rays.
  • FIG. 6 shows the same surveillance scene as FIG. 3, with a master camera 601 and a slave camera 602.
  • The user is offered a slider on the screen or on the input device 143 in FIG. 1.
  • the goal is again to align the slave camera 602 with the world point 605.
  • the associated pixel 406 is detected in the camera image of the master camera ( FIG. 4 ).
  • The slave camera 602 is already set to a high zoom, which is indicated in the drawing by the boundary lines 622, 623 and 624 of the field of view for three different orientations of the slave camera. This has the advantage, for example, that a high resolution is already available during the search and during the detection of the object by the slave camera 602.
  • the system therefore offers the user the option of moving the slave camera along the line of sight 603 by moving the slider.
  • The search is thus much easier and faster, because only this one degree of freedom has to be operated and searched.
  • The search on the visual ray is automatically restricted to the range determined by the a priori knowledge.
  • the slider can be provided with distance information on the line of sight 603, for example.
  • the slider can also be replaced by other input media with one degree of freedom or input media whose function is limited to one degree of freedom.
  • The normal pan-tilt control of a camera via a joystick or buttons can be switched over for object detection so that one of the two deflection axes, or one pair of keys, moves the slave camera along the visual ray, while the other deflection axis or pair of keys is disabled, controls the zoom of the slave camera, controls the deflection of the slave camera perpendicular to the line, or performs other tasks. A sketch of such a one-degree-of-freedom mapping follows below.
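A sketch of the one-degree-of-freedom mapping: a slider (or one joystick axis) value in [0, 1] selects a distance along the visual ray inside the surveillance bounds, and the slave camera is aimed at the resulting point. All names are illustrative, and the pan/tilt convention is the same assumed world frame (z up) as in the earlier sketches.

```python
import numpy as np

def slider_to_pan_tilt(s, ray_origin, ray_dir, d_min, d_max, slave_pos):
    """Map a slider value s in [0, 1] to the pan/tilt angles (radians)
    that aim the slave camera at the corresponding point on the ray."""
    target = ray_origin + (d_min + s * (d_max - d_min)) * ray_dir
    d = target - slave_pos
    pan = np.arctan2(d[1], d[0])                 # azimuth in ground plane
    tilt = np.arcsin(d[2] / np.linalg.norm(d))   # elevation above horizon
    return pan, tilt
```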
  • FIG. 7 shows representations of camera images taken by the slave camera 602 in FIG. 6 with the orientations 622, 623 and 624 along the visual ray 603.
  • The camera image 722 shows the object 612 from FIG. 6.
  • the camera image 723 shows the objects 613, 614 and 616.
  • The object selected by the point 406 (the corner of the house) is shown in the camera image 724 in FIG. 7 from the perspective of the slave camera 602, zoomed in (point 706).
  • The slave camera 602 can be controlled by the same procedure; the slider is then of course omitted.
  • If the object was detected as an area in the camera image of the master camera (FIG. 4), one of the visual rays, for example the middle one, must be selected in this embodiment.
  • FIG. 8 shows a camera image 833 of the slave camera 602.
  • The slave camera 602 is still aimed too high to capture the visual ray 603, so that only the treetops of the objects 613 and 614 are captured.
  • The arrow 809 indicates the direction in which the visual ray is reached by the shortest path.
  • The information could also be displayed on a control panel, e.g. indicating in which direction a joystick for camera control is to be moved or which of the control keys is to be pressed. If the slave camera is moved according to this information, the camera image 823 is reached.
  • Once the slave camera has captured the visual ray, the camera image 823 is identical to the camera image 723 from FIG. 7.
  • The arrows 804 shown indicate the directions in which the camera can be moved along the visual ray. Instead of the arrows, the visual ray can also be displayed as a line or in another suitable way.
  • FIG. 9 shows a further particularly advantageous embodiment.
  • FIG. 9 shows a scenario with a master camera 901, a slave camera 902 and three persons 982, 983 and 984 located on a line of sight 903 of the master camera 901 at different positions.
  • The zoom is not adjusted during the movement of the slave camera 902, as indicated by the fields of view 922, 923 and 924.
  • The corresponding camera images can be seen in FIG. 10. Due to the different distances from the slave camera 902, the person 982 in the camera image 1022 appears too large (face cut off), while the person 984 in the camera image 1024 is captured too small.
  • the person 983 happens to be at a suitable distance (camera image 1023) for the set zoom.
  • The distance of the slave camera to the visual ray 903 can easily be calculated for each orientation, and the zoom can be adjusted accordingly.
  • The distance information can also be used to control the focus when moving the camera, so that a sharp image is obtained at all times. This has the advantage over a normal autofocus that the latter is, on the one hand, slow and, on the other hand, need not work reliably with a moving camera and/or moving objects. A sketch of distance-driven zoom follows below.
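A sketch of distance-driven zoom: for the slave camera's current viewing direction, find the range at which that direction passes closest to the visual ray, then choose the aperture angle so that an object of assumed size fills a set fraction of the image. Object size and fill fraction are illustrative assumptions, not values the patent prescribes.

```python
import numpy as np

def range_to_ray(cam_pos, view_dir, ray_origin, ray_dir):
    """Range along view_dir at the closest approach between the viewing
    line and the visual ray; None if the lines are (nearly) parallel."""
    w = cam_pos - ray_origin
    a, b, c = view_dir @ view_dir, view_dir @ ray_dir, ray_dir @ ray_dir
    d, e = view_dir @ w, ray_dir @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    return (b * e - c * d) / denom

def aperture_for_size(obj_size, rng, fill=0.5):
    """Aperture angle (radians) so obj_size spans `fill` of the image
    at range rng (a small-angle approximation, adequate at high zoom)."""
    return 2.0 * np.arctan(obj_size / (2.0 * rng)) / fill
```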
  • the possibility of snapshots of the searched object is particularly advantageous for this embodiment.
  • a suspicious person is detected in low resolution.
  • a high-resolution snapshot of the person's face can be made very efficiently with the aid of the slave camera 902.
  • the snapshot can be triggered by a user, by an automatic detector (here a face detector) or semi-automatically.
  • The automatic focusing onto the visual ray can also support the detection of the object in the slave camera.
  • This applies in particular if the arrangement is such that the visual ray, before hitting the selected object, runs far away from other objects (through the air).
  • The objects that project onto the visual ray in the camera image of the slave camera then have the 'wrong' distance from the slave camera and are not in focus.
  • the first sharply imaged object is in this case the searched object.
  • The distance of the object from the slave camera 302, 602 or 902 can easily be calculated by triangulation. If the object has been detected by the slave camera on the visual ray, the distance of the object from the master camera 301, 601 or 901 can also easily be determined by triangulation. This information is generally useful for automatically adjusting zoom and focus and for other tasks. A sketch of this triangulation follows below.
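The triangulation mentioned above can be sketched as the closest approach of two rays: the master's visual ray and the ray through the detection in the slave image. The midpoint of the closest-approach segment estimates the world point, from which both distances follow. Rays are given as (origin, unit direction); the function name is illustrative.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Estimate the world point as the midpoint of the closest approach
    between the rays o1 + s*d1 and o2 + t*d2."""
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    p, q = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    s = (b * q - c * p) / denom
    t = (a * q - b * p) / denom
    return (o1 + s * d1 + o2 + t * d2) / 2.0

# The distances to the master and slave cameras then follow directly,
# e.g. np.linalg.norm(point - o1) and np.linalg.norm(point - o2).
```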
  • the detection or tracking of a moving object in the camera image of the master camera 301, 601 or 901 can also take place continuously.
  • the line of sight 303, 603 or 903 and the control of the slave camera 302, 602 or 902 are then automatically adjusted continuously.
  • the focus is automatically adjusted such that objects are sharply imaged on the line of sight.
  • The searched object is thereby automatically detected: starting from the first camera (01), with the focus automatically set onto the visual ray (03), it is the first sharply imaged object or the most sharply imaged object. A sketch of such a sharpness criterion follows below.
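The sharpness criterion can be sketched with a standard focus measure, the variance of the Laplacian, applied to the search window of each frame while the camera moves along the visual ray with the focus locked to the ray. OpenCV is an illustrative choice here, not something the patent prescribes.

```python
import cv2
import numpy as np

def sharpness(gray_roi):
    """Focus measure: variance of the Laplacian (higher = more in focus)."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

def sharpest_frame(frames):
    """Index of the most sharply imaged frame along the sweep; frames are
    grayscale regions of interest around the visual ray."""
    return int(np.argmax([sharpness(f) for f in frames]))
```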
  • The one-degree-of-freedom controller is implemented as a slider or joystick or in the form of keys, a deflection of the joystick moving the respective camera along the visual ray; the input medium used in normal operation for controlling the second camera (02) can be switched over for the object detection task in such a way that it fulfills the object detection function.
  • The master camera is, as shown in FIG. 3, equipped with a focused light source 350, e.g. a laser.
  • the light source is adjustable in its orientation, the spectrum does not have to be in the visible range.
  • The focused light source 350 is mounted as close as possible to the master camera 301 and can thus be aimed along the visual ray 303 associated with a selected point 406 (FIG. 4).
  • the beam then hits the selected object and illuminates it.
  • If the slave camera 302 is sensitive to the radiation used, this can be exploited to detect the object with the slave camera 302. This applies in particular if the radiation, via its spectrum, its intensity, pulsed operation, or combinations of these or other features, leads to a clearly detectable signal in the camera image of the slave camera.
  • In this way, the detection of the object in the camera image of the slave camera can be facilitated for a detector, in particular if a plurality of similar objects are visible along the displayed visual ray 504.
  • Other objects lie on the visual ray only in the projection into the camera image, such as the objects 712-714 in FIG. 7. They are therefore not marked by the light source.
  • This procedure is general, i.e. it is also applicable in the embodiments described in FIGS. 6 and 9.
  • A special feature arises if the light source is equipped with a distance measurement (laser scanner). In this case, the object distance is known directly and the slave camera can be aimed directly at the object. It is particularly advantageous if the focused light source is coupled to a distance measuring device, in particular in the function of a laser scanner, and the distance information thus obtained is used to direct the slave camera directly at an object on the visual ray at this distance.
  • Another embodiment is shown in FIG. 12.
  • Like FIG. 4, the figure shows a camera image of the master camera 301 from FIG. 3.
  • The superimposed visual ray 1208 is the visual ray 307 from FIG. 3 belonging to the camera vector of the slave camera (or the straight line defined by the camera vector in the general case).
  • This allows a user to instantly recognize the current tilt of the slave camera.
  • The user is often offered site plans of the area on a monitor, in which the cameras are shown. Since the site plans are top views, it is also easy to display the current pan of a pan-tilt-zoom camera by a corresponding symbol. This facilitates the user's orientation when controlling the camera.

Claims (19)

  1. Method for detecting an object in a scene, in particular for the surveillance thereof, with at least one first camera (01) and at least one orientable second camera (02), whereby an image point (06), and thus an associated spatial point (05) in the scene observed by the first camera (01), is selected in the camera image of the first camera (01), characterized in that a visual ray (03) is determined by the selected image point (06), and in that the second camera (02) is directed toward the spatial point (05), an alignment of the second camera (02) along the visual ray (03) taking place.
  2. Method according to claim 1, characterized in that the visual ray (03) is suitably superimposed in the camera image of the second camera (02), for example as a solid line or a dashed line, and/or is displayed semi-transparently or in color.
  3. Method according to claim 1 or 2, characterized in that the visual ray (03) is converted by the control unit (41) of the second camera (02) on the basis of the data supplied by the control unit (40) of the first camera (01), and in that the second camera (02) is directed toward the spatial point (05) on the basis of these data from the control unit (41) of the second camera (02), or in that the visual ray (03) is calculated by the central control unit (42) and superimposed.
  4. Method according to one of claims 1 to 3, characterized in that the visual ray (S) is defined as the straight line defined by the selected image point (06) and the optical center (60) of the first camera (01).
  5. Method according to one of claims 1 to 4, characterized in that the first camera (01) is operated as a master camera and the second camera (02) as a slave camera, the first camera (01) being a stationary camera or a panning camera and/or a tilting camera and/or a zoom camera, and/or in that the first camera (01) is equipped with a fisheye lens, a wide-angle lens, a zoom lens or mirrors, or is designed as a catadioptric camera, and/or in that the second camera (02) is a panning camera and/or a tilting camera and/or a zoom camera, and/or in that the second camera (02) is equipped with a positioning unit which allows the orientation of the field of view of the second camera (02) to be adjusted so that a region of the scene to be monitored can be scanned by the second camera (02).
  6. Method according to one of claims 1 to 5, characterized in that the distance of the spatial point (05) from the first camera (01) and/or the second camera (02) is calculated from the setting data of the first camera (01), the image point (06) and the setting data of the second camera (02).
  7. Method according to one of claims 1 to 6, characterized in that the image point (06) is selected by means of a detector, which may be a user, an automatic detector or a semi-automatic detector, and in that, in the case of an automatic detector, the display on a monitor may be omitted, and/or in that the object detector is designed as a motion detection unit, a color feature recognition unit, or a unit for recognizing persons, motor vehicles or faces, and/or in that the object detector is implemented in the control unit (40) of the first camera (01), in the central control unit (42), or distributed across both control units.
  8. Method according to one of claims 1 to 7, characterized in that the zoom and the orientation, and if necessary also the focus, of the second camera (02) are chosen such that, after the image point (06) has been marked in the camera image of the first camera (01), the associated visual ray (03) is captured approximately in the middle of the camera image of the second camera (02) and, if possible, sharply, and in that the predetermined surveillance area is captured along the entire length of the visual ray, and/or in that a warning signal is generated if the visual ray (03) associated with the selected image point (06) cannot be captured by the second camera (02) over the entire predetermined surveillance area.
  9. Method according to one of claims 1 to 8, characterized in that the visual ray (07) belonging to the camera vector of the second camera (02) is superimposed in the camera image of the scene captured by the first camera (01) in order to indicate the orientation of the second camera (02) and/or to facilitate the control of the second camera (02), and/or in that the point of incidence of the visual ray (07) belonging to the camera vector of the second camera (02) is marked in the camera image of the first camera (01) if the terrain shape of the monitored scene is known.
  10. Method according to one of claims 1 to 9, characterized in that a direction indicator, in particular in the form of an arrow, is superimposed on the camera image of the second camera (02) in order to indicate in which direction the second camera (02) must be moved in order to reach the visual ray (03).
  11. Method according to one of claims 1 to 10, characterized in that, if the visual ray (03) is already located in the camera image of the second camera (02), the visual ray (03) is superimposed in this camera image, in particular in the form of two arrows pointing in opposite directions or of a line, thereby indicating in which direction the second camera (02) is to be moved along the visual ray (03).
  12. Method according to one of claims 1 to 11, characterized in that a controller with a single degree of freedom is provided, which controls the orientation of the second camera (02) along the visual ray (03).
  13. Method according to one of claims 1 to 12, characterized in that an object size is predetermined, and in that the zoom of the second camera (02), when it is moved along the visual ray (03), is automatically adjusted so that objects of this size on the visual ray are captured at the appropriate size in the camera image of the second camera (02), and/or in that the focus is automatically adjusted when the second camera (02) is moved along the visual ray (03) so that objects on the visual ray are imaged sharply.
  14. Method according to one of claims 1 to 13, characterized in that, when the second camera (02) is moved along the visual ray (03), the searched object is thereby automatically detected, it being, starting from the first camera (01) with the zoom automatically focused sharply on the visual ray (03), the first sharply imaged object and/or the most sharply imaged object, and/or in that the controller with a single degree of freedom is designed as a slider or joystick or in the form of keys, a deflection of the joystick moving the respective camera along the visual ray.
  15. Method according to one of claims 1 to 14, characterized in that the input medium used in normal operation to control the second camera (02) can be switched over for the object detection task so as to fulfill the object detection function, and/or in that an object is detected in the second camera (02) in that a focused light source with adjustable orientation is mounted on the first camera (01), which is aimed along the visual ray (03) associated with the image point (06) and by which the object is illuminated and thus marked, and/or in that precise detection is facilitated by the wavelength of the light of the light source and/or by pulsed operation of the light source, and/or in that the focused light source is coupled with a distance measuring device, in particular in the function of a laser scanner, the distance information thus obtained being used to aim the second camera (02) directly at an object on the visual ray (03) at this distance.
  16. Device for detecting an object in a scene, consisting of at least one first camera (01) and at least one orientable second camera (02), a first control unit (40) being assigned to the first camera (01) and a second control unit (41) being assigned to the orientable second camera (02), and an image point (06), and with it an associated spatial point (05) in the scene captured by the first camera (01), being selectable in the camera image of the first camera (01) via a central control unit (42) with associated input units (43), characterized in that the central control unit (42) calculates a visual ray (03) running from the first camera (01) to the spatial point (05), and in that, with the aid of this visual ray (03), the second control unit (41) aligns the second camera (02) and directs it toward the spatial point (05), the second control unit (41) aligning the second camera (02) along the visual ray (03).
  17. Device according to claim 16, characterized in that the second control unit (41) of the second camera (02) superimposes the visual ray (03) in the scene captured by the second camera (02) and displays the visual ray (03) as a solid line or a dashed line and/or semi-transparently or in color, and in that the visual ray (03) is calculated by the second control unit (41) of the second camera (02) on the basis of the data supplied by the first control unit (40) of the first camera (01).
  18. Device according to claim 16 or 17, characterized in that the first camera (01) is a stationary camera and/or an orientable camera, and/or in that the first camera (01) is a master camera and the second camera (02) a slave camera assigned to the master camera, and/or in that the first camera (01) is a panning camera and/or a tilting camera and/or a zoom camera, and/or in that the first camera (01) is equipped with a fisheye lens, a wide-angle lens, a zoom lens or mirrors, or is designed as a catadioptric camera, and/or in that the second camera (02) is a panning camera and/or a tilting camera and/or a zoom camera, and/or in that the second camera (02) has a positioning unit which allows the orientation of the field of view of the second camera (02) to be adjusted so that the second camera (02) can scan a region of the scene to be monitored, and/or in that the central control unit (42) calculates, using the setting data of the first camera (01) and of the second camera (02), the distance of the spatial point (05) from the first camera (01) and/or the second camera (02).
  19. Device according to one of claims 16 to 18, characterized in that the central control unit (42) displays the scene captured by the first camera (01) on a display device (44) assigned to the first camera (01), in that the spatial point (05) is selected by a detector, and/or in that the central control unit (42) superimposes the visual ray (03), on a display device (45) assigned to the second camera (02), in the scene captured by the second camera (02), and/or in that the image point (06) is selected automatically via an object detector, and/or in that the object detector is a motion detection unit, a color feature recognition unit or a face recognition unit, and/or in that the object detector is implemented in the first control unit (40) of the first camera (01).
EP03029530A 2002-12-22 2003-12-20 Commande d'un système multicaméra Expired - Lifetime EP1434184B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE10261146A DE10261146A1 (de) 2002-12-22 2002-12-22 Steuerung eines Multikamera-Systems
DE10261146 2002-12-22

Publications (3)

Publication Number Publication Date
EP1434184A2 EP1434184A2 (fr) 2004-06-30
EP1434184A3 EP1434184A3 (fr) 2004-11-17
EP1434184B1 true EP1434184B1 (fr) 2008-03-05

Family

ID=32404310

Family Applications (1)

Application Number Title Priority Date Filing Date
EP03029530A Expired - Lifetime EP1434184B1 (fr) 2002-12-22 2003-12-20 Commande d'un système multicaméra

Country Status (4)

Country Link
EP (1) EP1434184B1 (fr)
AT (1) ATE388457T1 (fr)
DE (2) DE10261146A1 (fr)
ES (1) ES2301752T3 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004034663A1 (de) * 2004-07-17 2006-02-09 Siemens Ag Folgekamerasteuerung
DE102016119241B4 (de) 2016-10-10 2018-08-09 Markus Blömer Operationsbestimmungsvorrichtung zum Bestimmen einer von einem Gerät durchzuführenden Operation
CN112509257A (zh) * 2020-12-29 2021-03-16 鼎力联合(深圳)高新技术有限公司 一种光束扫描防御系统及其使用方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994017636A1 (fr) * 1993-01-29 1994-08-04 Bell Communications Research, Inc. Systeme de commande de cameras a poursuite automatique
DE19639728C2 (de) * 1996-09-26 1998-12-24 Siemens Ag Video-Überwachungseinrichtung
US6853809B2 (en) * 2001-01-30 2005-02-08 Koninklijke Philips Electronics N.V. Camera system for providing instant switching between wide angle and full resolution views of a subject

Also Published As

Publication number Publication date
EP1434184A3 (fr) 2004-11-17
EP1434184A2 (fr) 2004-06-30
ATE388457T1 (de) 2008-03-15
DE50309311D1 (de) 2008-04-17
ES2301752T3 (es) 2008-07-01
DE10261146A1 (de) 2004-07-01

Similar Documents

Publication Publication Date Title
DE10235888B4 (de) Automatisch kollimierende Vermessungsvorrichtung mit Bildaufnahmevorrichtung
DE112005000929B4 (de) Automatisches Abbildungsverfahren und Vorrichtung
EP2464098B1 (fr) Dispositif de représentation d'environnement ainsi qu'un véhicule doté d'un tel dispositif de représentation d'environnement et procédé de représentation d'une image panoramique
EP0735757B1 (fr) Procédé et appareil pour prendre des vues automatiques du visage d'une personne
EP2880853B1 (fr) Dispositif et procédé destinés à déterminer la situation d'une caméra de prise de vue
EP2044573B1 (fr) Caméra de surveillance, procédé d'étalonnage et d'utilisation de la caméra de surveillance
DE60123534T2 (de) Gerät zum Verfolgen eines sich bewegenden Objekts
EP1583022A2 (fr) Procédé et dispositif d'acquisition de zones d'intérêt d'objets en mouvement
WO2007104367A1 (fr) Système de vidéosurveillance
EP3534210B1 (fr) Unité d'affichage à réglage du foyer
DE102008039130A1 (de) Durch ein neurales Netzwerk gesteuertes automatisches Verfolgungs- und Erkennungssystem und Verfahren
DE10152883A1 (de) Nachführvorrichtung
EP3104330B1 (fr) Procede de suivi d'au moins un objet et procede de remplacement d'au moins un objet par un objet virtuel dans un signal d'image animee enregistre par une camera
DE102016106696A1 (de) Koordinatenmesssystem
DE10226398B4 (de) Verfahren und Vorrichtung zum Erfassen der Lage eines Objekts im Raum
EP1434184B1 (fr) Commande d'un système multicaméra
DE69721520T2 (de) System mit einem Photosensor, insbesonder zur Zeitmessung bei Wettkämpfen, und Einstellverfahren zum Ausrichten eines solchen Systems auf eine Ziellinie
EP2831839B1 (fr) Procédé d'exploitation automatique d'un système de surveillance
EP1912431A2 (fr) Procédé et dispositif destinés à la commande d'une caméra pivotante
EP3833576B1 (fr) Systeme de camera de surveillance
DE19956266A1 (de) Überwachungsanlage
EP3185213B1 (fr) Procédé de réalisation d'une carte bathymétrique à l'aide d'une caméra
DE102019111238A1 (de) Verfahren zur Einstellung und Visualisierung von Parametern zur Fokussierung eines Objektivs auf ein Objekt sowie System zur Durchführung des Verfahrens
DE102019102423A1 (de) Verfahren zur Live-Annotation von Sensordaten
DE102014105011B4 (de) System zur Visualisierung des Sichtfeldes einer optischen Vorrichtung

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

17P Request for examination filed

Effective date: 20041124

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20060703

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REF Corresponds to:

Ref document number: 50309311

Country of ref document: DE

Date of ref document: 20080417

Kind code of ref document: P

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2301752

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080805

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080605

ET Fr: translation filed
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

26N No opposition filed

Effective date: 20081208

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080605

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

BERE Be: lapsed

Owner name: FUNKWERK PLETTAC ELECTRONIC G.M.B.H.

Effective date: 20081231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081220

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080906

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081220

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080305

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080606

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20130117

Year of fee payment: 10

Ref country code: ES

Payment date: 20121226

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20121218

Year of fee payment: 10

REG Reference to a national code

Ref country code: NL

Ref legal event code: V1

Effective date: 20140701

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20140829

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140701

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131231

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20150327

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131221

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 50309311

Country of ref document: DE

Owner name: FUNKWERK VIDEO SYSTEME GMBH, DE

Free format text: FORMER OWNER: FUNKWERK PLETTAC ELECTRONIC GMBH, 90766 FUERTH, DE

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20171117

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20171221

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20171229

Year of fee payment: 15

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 50309311

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20181220

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190702

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181220

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20181220