WO2022022809A1 - Masking device - Google Patents
Masking device
- Publication number
- WO2022022809A1 (PCT/EP2020/071259)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- masking
- area
- surveillance
- images
- module
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19639—Details of the system layout
- G08B13/19652—Systems using zones in a single scene defined for different treatment, e.g. outer zone gives pre-alarm, inner zone gives alarm
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/19686—Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19639—Details of the system layout
- G08B13/19641—Multiple cameras having overlapping views on a single scene
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19678—User interface
- G08B13/1968—Interfaces for setting up or customising the system
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the invention is related to a masking device for a surveillance system wherein the surveillance system comprises at least one surveillance camera.
- the masking device is configured for editing surveillance images to edited images by masking.
- Surveillance systems comprising cameras are widely used, for example in shops, airports, railway stations and also in public places like parks and streets. Protecting each person's privacy is a serious concern of private people, companies and also of governments. For example, when a shop is monitored by a surveillance system, people outside the shop, for example on the street, should not be shown in the monitoring images when the camera inside the room also captures pictures through the window. Therefore, masking of public areas is needed.
- the document DE 10 2008 007 199 A1, which appears to be the closest prior art, discloses a module for masking selected objects in monitoring images.
- the monitoring area is captured in the monitoring images by a camera, wherein a user can select objects in those pictures, wherein the masking module is configured to mask those selected objects.
- the invention discloses a masking device for a surveillance system with the features of claim 1. Furthermore, the invention concerns a surveillance system, a method, a computer program and a data carrier. Preferred and/or advantageous embodiments are shown in the description, the figures and the subclaims.
- the invention concerns a masking device.
- the masking device is for a surveillance system and/or part of the surveillance system.
- the surveillance system comprises at least one surveillance camera, especially more than 10 surveillance cameras.
- the surveillance camera is for example a stereo camera.
- the surveillance camera can be a fixed camera with a fixed view or perspective, or a movable camera with a movable view, for example a pan-tilt-zoom camera.
- the surveillance camera is arranged and/or is configured to monitor a monitoring area.
- the surveillance camera takes images, also called pictures, of the monitoring area, whereby the images can be two-dimensional images or stereo images.
- the surveillance camera is configured to provide surveillance images, wherein the surveillance images show, in particular, the monitoring area as captured by the camera.
- the surveillance images are in particular unmasked.
- the surveillance camera may provide the surveillance images as an image stream, for example a video.
- when the surveillance system comprises more than one surveillance camera, the surveillance images of these cameras are provided to the masking device.
- the masking device is for example coupled with the surveillance camera and/or the surveillance cameras, whereby the surveillance images are provided to the masking device.
- the masking device comprises a selection module and a masking module.
- the selection module and the masking module are forming together a processing module.
- the selection module and/or the masking module may be a software module or a hardware module.
- the selection module and/or the masking module comprises a neural network and/or is adapted for machine learning.
- the selection module and/or the masking module comprises a display for showing pictures.
- the selection module and/or the masking module comprise a touch screen.
- the selection module preferably comprises or is configured as a human-machine interface, especially as a graphical user interface.
- the selection module is configured for selecting at least one masking area.
- the selection of the masking area may be done by a person, referred to as the user.
- the masking area is for example a two-dimensional area, e.g. a rectangle or circle.
- the masking area may be a volume, for example a cone.
- the selecting of the masking area may be done graphically by the user using the selection module and/or may be done alphanumerically.
- the masking area selected by the user is for example an area where objects, especially people, should not be shown in captured images.
- for example, the masking area is the area behind a shop window, outside on the street.
- the masking area is a subsection of the monitoring area. Especially, the masking area is a subsection of the monitoring area which is shown in the surveillance images if there are no people in the scene.
- the masking module is configured to process the surveillance images to edited images.
- the masking device is set between the surveillance camera and a data output for showing the surveillance images on a display, whereby the masking device withholds the raw surveillance images and outputs only edited images.
- the edited images are for example the surveillance images with masked parts.
- the masking module is configured to mask all parts of the surveillance image which are showing and/or belonging to the masking area. For example, all parts outside of the window, especially on the streets, are masked.
- the masking module checks whether the parts of the monitoring area shown in the surveillance images have an intersection with the masking area, whereby those areas with an intersection are masked.
- the masking area is defined and/or selected by a location, form and/or a size in three dimensions.
- the masking area comprises the information about its location, form and/or size in three dimensions.
- a coordinate system defined by the camera, for example based on its viewpoint, or a coordinate system based on the monitoring area is used to define and/or select the masking area's location, form and/or size.
- the masking area has depth information and/or distance information, e.g. the distance to the camera.
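The three-dimensionally defined masking area described above can be sketched as a small data structure. This is a minimal, hypothetical sketch: the class name, the axis-aligned box shape and the metre units are illustrative assumptions, not taken from the patent, which only requires a location, form and/or size in three dimensions.

```python
from dataclasses import dataclass

@dataclass
class MaskingArea:
    """Hypothetical 3-D masking area: a lateral extent plus a depth."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    depth: float  # distance from the camera, e.g. in metres

    def contains(self, x: float, y: float, z: float) -> bool:
        # A point belongs to the masking area only if it lies at or
        # beyond the area's depth, not merely in front of it.
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and z >= self.depth)

# A shop window 3 m from the camera: only points behind it are in the area.
window = MaskingArea(0.0, 2.0, 0.0, 2.0, depth=3.0)
```

The key design point is the `z >= self.depth` condition: it encodes the distance information that a purely image-plane mask lacks.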
- the invention is based on the idea that using a masking area defined in three dimensions can improve the information density which can be taken from a surveillance system, since fewer areas are unnecessarily and/or falsely masked.
- current masking systems allow a user to draw masking areas onto the image or sensor plane, for example as a rectangular shape. Drawn on the image or sensor plane, such a mask corresponds in the 3-D monitoring area to a cone emerging from the camera viewpoint and extending into infinity. This results in the problem that objects and/or people which are inside the cone, especially between the masked plane and the viewpoint, are masked since they seem to be in the masking area.
- the masking area, in contrast, is described in three dimensions, which results in masking only objects and/or areas that are really inside the masking area and not merely in front of it.
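The depth-aware behaviour described above can be sketched per pixel, assuming a per-pixel depth map for the scene is available (for example from a stereo camera). All names and the toy data below are hypothetical.

```python
import numpy as np

def mask_with_depth(image, depth_map, footprint, mask_depth, fill=0):
    """Depth-aware masking sketch: a pixel is masked only if it lies inside
    the drawn 2-D mask footprint AND its measured depth is at or beyond the
    masking area's depth, so objects in front of the mask stay visible."""
    edited = image.copy()
    to_mask = footprint & (depth_map >= mask_depth)
    edited[to_mask] = fill
    return edited

# Toy data: a 4x4 grey image; the right half is the drawn mask footprint.
image = np.full((4, 4), 200, dtype=np.uint8)
depth = np.full((4, 4), 5.0)   # background scene 5 m away
depth[1, 3] = 2.0              # a person 2 m away, in front of the window
footprint = np.zeros((4, 4), dtype=bool)
footprint[:, 2:] = True
edited = mask_with_depth(image, depth, footprint, mask_depth=3.0)
# The person's pixel stays visible; the rest of the footprint is masked.
```

A conventional image-plane mask would blank the whole footprint, person included; the extra depth comparison is exactly what avoids the cone problem described above.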
- the selection module is configured to allow a user to select the selected area in the surveillance image and/or based on the surveillance image.
- the selection of an area as selected area may be based on the edited images.
- the selection module is configured as a human-machine interface, especially as a graphical user interface, whereby the user can select an area or volume as the selected area based on the monitoring area in the images.
- the selection module is configured to specify, define and/or compute the masking area based on the selected area and the depth information and/or distance information.
- the selection module is configured to assign a depth information and/or distance information to the selected area.
- the user can use the selection module to select a rectangular, polygonal or curved area as selected area, for example by drawing the area in a surveillance image.
- the selection module may be configured for selecting a point or an area of the surveillance images as selection base.
- the selection module is for example configured to determine a selected area based on the selection base, for example using an image analysis and/or an image or object classification. For example, the user has selected a pixel or a group of pixels belonging to the same object, wherein the selection module is configured to set the whole object as the selected area.
- the selection module may be configured to specify, define or determine the masking area based on the selected area, especially the selection base, and a depth information and/or a distance information. For example, the user may select a pixel or a group of pixels and set, for example type in or choose, the depth information and/or distance information, wherein the selection module is configured to define the selected area based on these input parameters.
- the selection module is configured to determine the selected area based on an object classification. For example, the user chooses a part of an object, for example part of a street, wherein the selection module is configured to select the whole object, for example the whole street, as the selected area using and/or processing an object classification, for example an object classification based on the surveillance image.
- the object classification is adapted as a machine-learning algorithm and/or uses a neural network.
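The expansion of a single clicked pixel to a whole object can be sketched as follows, under the assumption that some segmentation or classification step (not shown here) has already produced a per-pixel object label map; the function name and toy labels are illustrative.

```python
import numpy as np

def expand_selection(labels, seed):
    """Illustrative stand-in for the object-classification step: given a
    per-pixel object label map (e.g. from a segmentation network) and the
    pixel the user clicked, return a boolean mask of the whole object."""
    return labels == labels[seed]

# Toy label map: 0 = shop interior, 1 = street seen through the window.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
selected = expand_selection(labels, (0, 2))  # user clicks one street pixel
```

With this, the user selects one pixel of the street and the whole street region becomes the selected area, as the embodiment above describes.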
- the selection module is configured to allow a user to enter and/or select the depth information and/or distance information.
- the user may type in the distance and depth information.
- the selection module is configured to allow the user to select and/or enter the depth and/or distance information from a table, sliding bar or graphically. For example, the user knows that the window, which should be set as selected area and/or as masking area, is 3 m away from the camera so that the user can enter and/or select this number as distance and/or depth information.
- the selection module is configured to determine the depth information and/or the distance information based on a stereographic analysis of the image, especially the surveillance image.
- the user has selected an area and/or a selection based object, wherein the selection module is configured to check how far away the object and/or part of the monitoring area belonging to this selection is by using a stereographic analysis.
- the surveillance camera is a stereoscopically adapted camera and provides a pair of pictures usable for stereographic analysis. This embodiment is based on the idea of providing a very easy-to-handle masking device, wherein the user only has to choose an area without knowing how far away this object is from the camera, wherein the selection module is configured to determine the distance or depth information by itself.
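The stereographic analysis can rest on the standard stereo relation depth = focal length × baseline / disparity. This sketch assumes calibrated values for the focal length and stereo baseline and a matched disparity; none of the concrete numbers come from the patent.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Standard stereo relation depth = f * B / d: one way the selection
    module could derive the distance information from a stereo pair
    without the user having to type it in."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm stereo baseline, 20 px disparity.
d = depth_from_disparity(800.0, 0.10, 20.0)  # 4.0 m
```

In practice a stereo-matching step would first produce the disparity for the selected pixels; the formula then turns it into the depth information assigned to the masking area.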
- the masking module is configured to apply an image analysis to the surveillance images.
- the image analysis is configured to obtain image portions.
- the image portions are for example parts of the image that belong to the same object or have the same colour or design.
- the masking module is configured in this embodiment to mask the image portions that are a part of the masking area and/or have an intersection with the masking area.
- the masking module is configured to obtain an image portion depth for each image portion. Especially, the masking module is configured to mask the image portions based on the image portion depth, for example by checking whether the image portion depth is behind and/or greater than the depth of the masking area.
- the image analysis is configured as object detection and/or object classification; for example, the image analysis is configured to obtain image portions which belong to the same object, for example to detect people in the surveillance image.
- an image portion contains all pixels and/or image parts that belong to this and/or one object.
- the masking module is for example configured to obtain, for the image portion which belongs to the and/or one object, the distance of the object from the camera, for example the average distance, and to use and/or set this distance as the image portion depth.
- This embodiment is based on the idea of using the surveillance image to classify and/or detect objects and to determine the distance and/or depth of each object; by comparing the distance of an object with the distance and/or location of the masking area, objects that are nearer to the camera than the masking area are still shown and are not masked.
- the masking module can be configured to obtain the image portion depth based on a stereoscopic image analysis, for example when the surveillance camera is configured as a stereoscopic camera.
- the masking module is configured to obtain the image portion depth based on object tracing and/or tracking. For example, an object which is known to be inside a store and cannot be outside on the street may be traced and/or tracked, whereby, knowing that the object or person has not used the door, this object and/or person must still be in front of the masking area.
- the masking module may be adapted and/or configured to use a physical sensor to obtain the image portion depth, for example a radar sensor or an infrared sensor.
- the physical sensor is preferably part of the surveillance system, for example part of the surveillance camera.
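The per-object embodiment above can be sketched as follows, assuming a per-pixel object label map from the image analysis and a per-pixel depth map from any of the sources just mentioned (stereo, tracking, physical sensor); the function name and toy data are illustrative.

```python
import numpy as np

def mask_objects(image, labels, depth_map, footprint, mask_depth, fill=0):
    """Per-object sketch: each image portion (object) gets one depth, here
    its average measured depth; an object intersecting the mask footprint
    is masked only if it lies at or beyond the masking area's depth."""
    edited = image.copy()
    for obj_id in np.unique(labels):
        portion = labels == obj_id
        if not (portion & footprint).any():
            continue  # no intersection with the masking area
        if depth_map[portion].mean() >= mask_depth:
            edited[portion] = fill
    return edited

# Toy data: object 1 is a person 2 m away, object 2 the street 5 m away.
image = np.full((2, 4), 200, dtype=np.uint8)
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2]])
depth = np.array([[2.0, 2.0, 5.0, 5.0],
                  [2.0, 2.0, 5.0, 5.0]])
footprint = np.ones((2, 4), dtype=bool)
edited = mask_objects(image, labels, depth, footprint, mask_depth=3.0)
# The person stays visible; the street portion is masked.
```

Deciding per object rather than per pixel avoids partially masked people at depth-map noise boundaries, at the cost of needing a reliable segmentation.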
- a further object of the invention is a surveillance system.
- the surveillance system comprises at least one surveillance camera to monitor the monitoring area and to provide surveillance images.
- the surveillance system comprises the masking device as previously described.
- the surveillance camera is connected with the masking device to provide the surveillance images to the masking device.
- the invention concerns a method for masking surveillance images.
- the method contains selecting of a masking area, wherein the masking area is a subsection of the monitoring area and/or a subsection of the surveillance images.
- the method is executed and/or used by the masking device.
- parts of the surveillance images are masked, wherein the parts of the surveillance images which are part of and/or have an intersection with the masking area are masked.
- the masked images are called edited images, whereby the edited images are especially provided to a monitoring centre.
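The method steps above can be sketched as a minimal pipeline in which every surveillance image is edited before it leaves the masking device; the string "frames" and the toy edit function are placeholders for real images and a real masking function.

```python
def process_stream(frames, edit):
    """Minimal sketch of the claimed method: the masking device sits between
    the camera and the monitoring centre, withholds the raw surveillance
    images, and forwards only edited images."""
    for frame in frames:
        yield edit(frame)

# Toy usage: 'editing' blanks the second half of each string frame.
frames = ["abcd", "efgh"]
edited = list(process_stream(frames, lambda f: f[:2] + "##"))
```

The generator form mirrors the stream nature of the surveillance images: the monitoring centre never receives an unedited frame.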
- Another object of the invention is a computer program, wherein the computer program is adapted to run the method, especially when the computer program is run on a computer and/or a masking device.
- Figures 1a and 1b show a surveillance scene and masks
- Figure 1a shows a monitoring area 1 which is monitored by a surveillance camera of the surveillance system.
- the monitoring area 1 is part of a shop, whereby sales objects 2 are presented for sale.
- the surveillance camera 9 is taking surveillance images 3 of the monitoring area 1.
- the shop comprises a window 4 and a door 5, whereby the window 4 and the door 5 are made of glass, and hence public areas 6 outside the shop are also captured in the surveillance images 3. Since the protection of privacy and a person's right to their own image are of general interest, a device for masking the public areas 6, and hence the window 4 and the door 5, is needed.
- Figure 1b shows an edited image 10 based on the surveillance image 3 of figure 1a, whereby the public areas 6 are masked by masks 7a and 7b.
- the masks 7a and 7b are covering the masking areas 8a, 8b.
- the masking areas 8a, 8b are defined in three dimensions, which means that the masking areas 8a, b are defined by an extension in the image plane and a distance to the surveillance camera 9. Therefore, parts of the public areas 6 outside the window and outside the door are covered by the masks 7a, b, whereby parts inside the shop are still shown in the edited image 10.
- Figure 2 shows a top view of the monitoring area 1 of figure la, b.
- the masking areas 8a, b comprise distance information, also called depth information, whereby the distance information describes the distance of the masking areas 8a, b from the camera 9.
- the distance information is for example defined by a coordinate system 11 of the real world.
- the camera 9 is configured and arranged to capture the part of the store which is inside the detection zone 12 of the camera 9.
- without depth information, the parts 13a, b of the monitoring area 1 between the masking areas 8a, b and the camera 9 would also be covered by the masks 7a, b.
- for example, a person 14 in the area 13b would not be shown in the edited images 10 but would be covered by the mask 7b.
- By defining the masking areas 8a, b in three dimensions and applying depth information and/or distance information to them, objects between the camera 9 and the masking areas 8a, b, like the person 14, are not covered by a mask, since the depth information and/or distance information of the object or person 14 differs from that of the masking areas 8a, b.
- Figure 3 shows an example of a masking device 15.
- the masking device 15 is placed between the surveillance system 16, comprising several surveillance cameras 9, and a monitoring centre 17 for displaying and showing the edited images 10.
- the surveillance system 16, especially the surveillance cameras 9, are providing the surveillance images 3 to the masking device 15.
- the masking device 15 comprises a selection module 18 which is configured to select one or more masking areas 8.
- the selection module comprises a graphical user interface, whereby the user can select parts of the monitoring area 1 as masking areas 8.
- the selection module 18 is connected with the masking module 19 for providing the masking areas 8 to the masking module 19.
- the masking module 19 is configured to cover those parts of the surveillance image 3 which show these parts of the monitoring area 1 that are part of the masking area 8.
- the masking module 19 provides the masked images as edited images 10 to the monitoring centre 17. Therefore, surveillance images 3 which contain private areas or public areas, like streets, are not provided to the monitoring centre 17, such that privacy interests and a person's right to their own image are ensured.
Abstract
Masking device (15) for a surveillance system (16), wherein the surveillance system (16) comprises at least one surveillance camera (9), wherein the surveillance camera (9) is arranged and/or configured to monitor a monitoring area (1) and to provide surveillance images (3), with a selection module (18) and a masking module (19), wherein the selection module (18) is configured for selecting a masking area (8, 8a, b), wherein the masking area (8, 8a, b) is a subsection of the monitoring area (1), wherein the masking module (19) is configured to process the surveillance images (3) to edited images (10), wherein areas of the surveillance images (3) showing at least a part of the monitoring area (1) which is part of the masking area (8, 8a, b) are masked, wherein the masking area (8, 8a, b) is defined by a location, a form and/or a size in three dimensions.
Description
Title
Masking device
Description
State-of-the-art
The invention is related to a masking device for a surveillance system wherein the surveillance system comprises at least one surveillance camera. The masking device is configured for editing surveillance images to edited images by masking.
Surveillance systems comprising cameras are widely used, for example in shops, airports, railway stations and also in public places like parks and streets. It is a serious concern of private people, companies and also of governments to protect the privacy of each person. For example, when a shop is monitored by a surveillance system people outside the shop, for example on the street, should not be shown in the monitoring images when the camera inside the room catches also pictures through the window. Therefore masking of public areas is needed.
For example the document DE 10 2008007 199 Al, which seems to be the closest state-of-the-art, discloses a module for masking selected objects in monitoring images. The monitoring area is captured in the monitoring images by a camera, wherein a user can select objects in those pictures, wherein the masking module is configured to mask those selected objects.
Disclosure of the invention
The invention discloses a masking device for a surveillance system with the features of claim 1. Furthermore, the invention concerns a surveillance system, a method, a computer program and a data carrier. Preferred and/or advantageous embodiments are shown in the description, the figures and the subclaims.
The invention concerns a masking device. The masking device is for a surveillance system and/or part of the surveillance system. The surveillance system comprises
at least one surveillance camera, especially more than 10 surveillance cameras. The surveillance camera is for example a stereo camera. The surveillance camera can be a fixed camera with a fixed view or perspective or a movable camera with a movable view, for example pan- tilt-zoom camera. The surveillance camera is arranged and/or is configured to monitor a monitoring area. For example the surveillance camera takes images, also called pictures, of the monitoring area, whereby the images can be two-dimensional images or stereo images. The surveillance camera is configured to provide surveillance images, wherein the surveillance images showing especially the monitoring area as captured by the camera. The surveillance images are in particular unmasked. The surveillance camera may provide the surveillance images as an image stream, for example a video. Especially, when the surveillance system comprises more than one surveillance camera, surveillance images of this cameras are provided to the masking device. The masking device is for example coupled with the surveillance camera and/or the surveillance cameras, whereby the surveillance images are provided to the masking device.
The masking device comprises a selection module and a masking module. For example the selection module and the masking module are forming together a processing module. The selection module and/or the masking module may be a software module or a hardware model. In a preferred embodiment the selection module and/or the masking module comprises a neural network and/or adapted for machine learning. Preferably, the selection module and/or the masking module comprises a display for showing pictures. In particular, the selection module and/or the masking module comprise a touch screen. The selection module is preferably comprising or configured as a human-machine-interface, especially as a graphic user interface.
The selection module is configured for selecting at least one masking area. The selection of the masking area is maybe done by a person, especially called user. The masking area is for example a two-dimensional area, e.g. a rectangle or circle. Furthermore, the masking area is maybe a volume for example a cone. The selecting of the masking area may be done graphically by the user using the selection module and/or maybe done alpha-numerically. The masking area selected by the user is for example an area where objects, especially people,
should not be shown in captured images. For example, the masking area is the area behind a shop window outside on the street. The masking area is a subsection of the monitoring area. Especially, the masking area is a subsection of the monitoring area which is shown in the surveillance images if there are no people in the scene.
The masking module is configured to process the surveillance images to edited images. For example, the masking device is set between the surveillance camera and a data output for showing the surveillance images on a display, whereby the masking device holds back the surveillance images and only allows edited images to be output. The edited images are for example the surveillance images with masked parts. For example, the masking module is configured to mask all parts of the surveillance image which show and/or belong to the masking area. For example, all parts outside of the window, especially on the street, are masked. For example, the masking module checks whether shown parts of the monitoring area in the surveillance images have an intersection with the masking area, whereby those areas with an intersection are masked.
The masking area is defined and/or selected by a location, a form and/or a size in three dimensions. For example, the masking area comprises the information about its location, form and/or size in three dimensions. In particular, a coordinate system defined by the camera, for example based on its viewpoint, or a coordinate system based on the monitoring area is used to define and/or select the masking area's location, form and/or size. Especially, it is assumed that the masking area has a depth information and/or a distance information, e.g. the distance to the camera.
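The definition of a masking area by a location, form and size in three dimensions can be illustrated with a minimal sketch. The axis-aligned box form, all names and all numeric values are illustrative assumptions of this sketch, not part of the application:

```python
from dataclasses import dataclass

# Illustrative sketch only: a masking area given by location, form and
# size in three dimensions, here as an axis-aligned box in a real-world
# coordinate system. All names and values are assumptions.
@dataclass
class MaskingArea3D:
    x_min: float
    x_max: float      # horizontal extent, metres
    y_min: float
    y_max: float      # vertical extent, metres
    z_min: float
    z_max: float      # distance range from the camera, metres

    def contains(self, point) -> bool:
        """True if a 3-D point of the monitoring area lies inside the mask."""
        x, y, z = point
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

# A shop-window mask assumed to start 3 m from the camera and to extend
# outwards indefinitely (everything on the street behind the window):
window_mask = MaskingArea3D(-2.0, 2.0, 0.0, 2.5, 3.0, float("inf"))
```

A point 4 m away falls inside this mask, while a point 1.5 m away, in front of the window, does not.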
The invention is based on the idea that using a masking area defined in three dimensions can improve the information density which can be taken from a surveillance system, since fewer areas are unnecessarily and/or falsely masked. For example, current masking systems allow a user to draw masking areas onto the image or sensor plane, for example a rectangular shape. Drawn on the image or sensor plane, those masks yield in the 3-D monitoring area a cone emerging from the camera viewpoint and extending into infinity. This results in the problem that objects and/or people which are inside of the cone, especially between the masking plane and the viewpoint, are masked since they seem to be in the
masking area. By using this invention the masking area is described in three dimensions, which yields masking of only those objects and/or areas which are really inside the masking area and not merely in front of it.
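The difference between a mask drawn only on the image plane and a mask defined in three dimensions reduces, per pixel, to one additional depth comparison. The following sketch assumes a per-pixel depth value is available; the function and parameter names are illustrative assumptions:

```python
# Illustrative sketch: deciding per pixel whether to mask. A mask drawn
# only on the image plane hides everything along the viewing ray (the
# cone described above); the 3-D mask additionally compares depths.
def should_mask(inside_2d_mask: bool, pixel_depth_m: float,
                mask_depth_m: float) -> bool:
    # Mask only pixels that lie in the drawn shape AND are at or behind
    # the masking area, e.g. the street behind a shop window at 3 m.
    return inside_2d_mask and pixel_depth_m >= mask_depth_m

# A person 1.5 m from the camera, in front of a window mask at 3 m,
# stays visible; the street at 4 m is masked:
person_visible = not should_mask(True, 1.5, 3.0)
street_masked = should_mask(True, 4.0, 3.0)
```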
Preferably, the selection module is configured to allow a user to select an area as selected area in the surveillance image and/or based on the surveillance image. Especially, the selection of an area as selected area may be based on the edited images. For example, the selection module is configured as a human-machine interface, especially as a graphical user interface, whereby the user can select an area or volume as selected area based on the monitoring area in the images. For example, the selection module is configured to specify, define and/or compute the masking area based on the selected area and the depth information and/or distance information. For example, the selection module is configured to assign a depth information and/or distance information to the selected area. For example, the user can use the selection module to select a rectangular, polygonal or curved area as selected area, for example by drawing the area in a surveillance image.
The selection module may be configured for selecting a point or an area of the surveillance images as selection base. The selection module is for example configured to determine a selected area based on the selection base, for example using an image analysis and/or an image or object classification. For example, the user has selected a pixel or a group of pixels which belong to the same object, wherein the selection module is configured to set the whole object as selected area. The selection module may be configured to specify, define or determine the masking area based on the selected area, especially the selection base, and a depth information and/or a distance information. For example, the user may select a pixel or a group of pixels and set, for example type in or choose, the depth information and/or distance information, wherein the selection module is configured to define the selected area based on these input parameters.
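Expanding a selection base, i.e. a single clicked pixel, to a whole object can be sketched as a lookup in a per-pixel label map; the segmentation step producing that map, as well as all names, are assumptions of this sketch:

```python
# Illustrative sketch: the user clicks one pixel (the selection base)
# and the selection module expands it to the whole object. The per-pixel
# label map is assumed to come from an upstream segmentation step.
def expand_selection(labels, seed):
    row, col = seed
    target = labels[row][col]           # object label under the clicked pixel
    return [[label == target for label in r] for r in labels]

# Toy 3x3 label map: object 1 (e.g. the window) occupies the right column.
labels = [[0, 0, 1],
          [0, 0, 1],
          [0, 0, 1]]
selected = expand_selection(labels, (0, 2))  # selects the whole right column
```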
Preferably, the selection module is configured to determine the selected area based on an object classification. For example, the user chooses a part of an object, for example a part of a street, wherein the selection module is configured to select the whole object, for example the whole street, as selected area using and/or processing an object classification, for example an object classification based on the surveillance image. Preferably, the object classification is adapted as a machine learning algorithm and/or uses a neural network.
Preferably, the selection module is configured to allow a user to enter and/or select the depth information and/or distance information. The user may type in the depth and/or distance information. Alternatively, the selection module is configured to allow the user to select and/or enter the depth and/or distance information from a table, via a slider or graphically. For example, the user knows that the window, which should be set as selected area and/or as masking area, is 3 m away from the camera, so that the user can enter and/or select this number as distance and/or depth information.
In a preferred embodiment of the invention the selection module is configured to determine the depth information and/or the distance information based on a stereographic analysis of the image, especially the surveillance image. For example, the user has selected an area and/or an object as selection base, wherein the selection module is configured to check, by using a stereographic analysis, how far away the object and/or the part of the monitoring area belonging to this selection is. For example, the surveillance camera is adapted as a stereoscopic camera and provides a pair of pictures usable for stereographic analysis. This embodiment is based on the idea of providing a masking device that is very easy to handle, wherein the user only has to choose an area without knowing how far away this object is from the camera, since the selection module is configured to determine the distance or depth information by itself.
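The stereographic determination of a distance can be sketched with the standard pinhole stereo relation Z = f·B/d. The focal length, baseline and disparity values below are illustrative assumptions, not values from the application:

```python
# Illustrative sketch of a stereographic depth estimate using the
# standard pinhole stereo relation Z = f * B / d, with focal length f
# in pixels, baseline B in metres and disparity d in pixels.
def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    if disparity_px <= 0:
        raise ValueError("no valid stereo match for this pixel")
    return focal_px * baseline_m / disparity_px

# Assumed values: f = 800 px, baseline B = 0.10 m, disparity d = 20 px.
distance_m = depth_from_disparity(20.0, 800.0, 0.10)  # 4.0 m
```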
Optionally, the masking module is configured to apply an image analysis to the surveillance images. The image analysis is configured to obtain image portions. The image portions are for example parts of the image that belong to the same object or have the same colour or design. The masking module is configured in this embodiment to mask the image portions that are a part of the masking area and/or have an intersection with the masking area.
Especially, the masking module is configured to obtain an image portion depth for each image portion. Especially, the masking module is configured to mask the image portions based on the image portion depth, for example by checking whether the image portion depth is behind and/or bigger than the depth of the masking area.
For example, the image analysis is configured as object detection and/or object classification; for example, the image analysis is configured to obtain image portions which belong to the same object, for example to obtain people in the surveillance image. Preferably, an image portion contains all pixels and/or image parts that belong to this and/or one object. The masking module is for example configured to obtain, for the image portion which belongs to the and/or one object, the distance, for example the average distance, of the object from the camera and to use and/or set this distance as image portion depth. This embodiment is based on the idea of using the surveillance image to classify and/or detect objects and to determine the distance and/or depth of each object, whereby, by comparing the distance of this object with the distance and/or the location of the masking area, objects that are nearer to the camera than the masking area are still shown and are not masked.
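Comparing an object's average distance with the distance of the masking area, as described above, can be sketched as follows; the detection output format and the field names are assumptions of this sketch:

```python
# Illustrative sketch: per-object masking. Each image portion carries an
# average distance from the camera (assumed to come from stereo analysis,
# tracking or a physical sensor); portions at or behind the masking area
# are masked, nearer ones stay visible.
def mask_objects(portions, mask_distance_m):
    edited = []
    for portion in portions:
        portion = dict(portion)
        portion["masked"] = portion["avg_distance_m"] >= mask_distance_m
        edited.append(portion)
    return edited

# A customer in front of the window mask at 3 m and a car behind it:
frame = [{"label": "person", "avg_distance_m": 1.5},
         {"label": "car", "avg_distance_m": 8.0}]
result = mask_objects(frame, mask_distance_m=3.0)
```

Only the car on the street is masked in the edited image; the customer inside the shop remains visible.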
The masking module can be configured to obtain the image portion depth based on a stereoscopic image analysis, for example when the surveillance camera is configured as a stereoscopic camera. Alternatively and/or additionally, the masking module is configured to obtain the image portion depth based on an object tracing and/or tracking. For example, an object which is known to be inside a store and cannot be outside on the street may be traced and/or tracked, whereby, by knowing that the object or person has not used the door, this object and/or person must still be in front of the masking area. Furthermore, the masking module may be adapted and/or configured to use a physical sensor to obtain the image portion depth, for example a radar sensor or an infrared sensor. The physical sensor is preferably part of the surveillance system, for example part of the surveillance camera.
A further object of the invention is a surveillance system. Particularly, the surveillance system comprises at least one surveillance camera to monitor the monitoring area and to provide surveillance images. Furthermore, the surveillance system comprises the masking device as previously described. Especially, the surveillance camera is connected with the masking device to provide the surveillance images to the masking device.
Furthermore, the invention concerns a method for masking surveillance images. The method comprises selecting a masking area, wherein the masking area is a subsection of the monitoring area and/or a subsection of the surveillance images. Particularly, the method is executed and/or used by the masking device. According to the method, parts of the surveillance images are masked, wherein parts of the surveillance images which are part of and/or have an intersection with the masking area are masked. The masked images are called edited images, whereby the edited images are especially provided to a monitoring centre.
Another object of the invention is a computer program, wherein the computer program is adapted to run the method, especially when the computer program is run on a computer and/or a masking device.
Further advantages and/or preferred embodiments are shown in the figures and its description. Thereby,
Figures 1a and 1b show a surveillance scene and masks;
Figure 2 shows the scene and masks of figures 1a, 1b in a top view;
Figure 3 shows a masking device.
Figure 1a shows a monitoring area 1 which is monitored via a surveillance camera of the surveillance system. The monitoring area 1 is part of a shop, whereby sales objects 2 are presented for sale. The surveillance camera 9 takes surveillance images 3 of the monitoring area 1. The shop comprises a window 4 and a door 5, whereby the window 4 and the door 5 are made of glass and hence public areas 6 outside the shop are also captured in the surveillance images 3. Since a general interest is the protection of privacy and a person's right to their own picture, a device for masking the public areas 6, and hence the window 4 and the door 5, is needed.
Figure 1b shows an edited image 10 based on the surveillance image 3 of figure 1a, whereby the public areas 6 are masked by masks 7a and 7b. The masks 7a and 7b are covering the masking areas 8a, 8b. The masking areas 8a, 8b are defined in three dimensions, which means that the masking areas 8a, b are defined by an extension in the image plane and a distance to the surveillance camera 9. Therefore, parts of the public areas 6 outside the window and outside the door are covered by the masks 7a, b, whereby parts inside the shop are still shown in the edited image 10.
Figure 2 shows a top view of the monitoring area 1 of figures 1a, b. For clarity reasons the selling objects 2 are not shown in figure 2. The masking areas 8a, b comprise distance information, also called depth information, whereby the distance information describes the distance of the masking area 8a, b from the camera 9. The distance information is for example defined by a coordinate system 11 of the real world.
The camera 9 is configured and arranged to capture that part of the store which is inside the detection zone 12 of the camera 9. Without defining the masking areas 8a, b in three dimensions, for example by defining them only in the image plane as in the state of the art, parts 13a, b of the monitoring area 1 between the masking areas 8a, b and the camera 9 would also be covered by the masks 7a, b. For example, a person 14 in the area 13b would not be shown in the edited images 10 and would be covered by the mask 7b. By defining the masking areas 8a, b in three dimensions and applying a depth information and/or distance information to them, objects between the masking areas 8a, b and the camera 9, like the person 14, are not covered by a mask since the distance information and/or depth information of the object or person 14 differs from the depth information and/or distance information of the masking area 8a, b.
Figure 3 shows an example of a masking device 15. The masking device 15 is placed between the surveillance system 16, comprising several surveillance cameras 9, and a monitoring centre 17 for displaying and showing the edited images 10. The surveillance system 16, especially the surveillance cameras 9, provides the surveillance images 3 to the masking device 15. The masking device 15 comprises a selection module 18 which is configured to select one or more masking areas 8. For example, the selection module comprises a graphical user interface, whereby the user can select parts of the monitoring area 1 as masking areas 8. The
selection module 18 is connected with the masking module 19 for providing the masking areas 8 to the masking module 19.
The masking module 19 is configured to cover those parts of the surveillance image 3 which show the parts of the monitoring area 1 that are part of the masking area 8. The masking module 19 provides the masked images as edited images 10 to the monitoring centre 17. Therefore, surveillance images 3 which contain private areas or public areas, like streets, are not provided to the monitoring centre 17, such that privacy interests and the right to one's own picture are ensured.
Claims
1. Masking device (15) for a surveillance system (16), wherein the surveillance system (16) comprises at least one surveillance camera (9), wherein the surveillance camera (9) is arranged and/or configured to monitor a monitoring area (1) and to provide surveillance images (3), with a selection module (18) and a masking module (19), wherein the selection module (18) is configured for selecting a masking area (8, 8a, b), wherein the masking area (8, 8a, b) is a subsection of the monitoring area (1), wherein the masking module (19) is configured to process the surveillance images (3) to edited images (10), wherein areas of the surveillance images (3) showing at least a part of the monitoring area (1) which is part of the masking area (8, 8a, b) are masked, wherein the masking area (8, 8a, b) is defined by a location, a form and/or a size in three dimensions.
2. Masking device (15) according to claim 1, wherein the selection module (18) is configured to allow a user to select in the surveillance images (3) an area as selected area, wherein the selection module (18) is configured to specify the masking area (8, 8a, b) based on the selected area and a depth and/or distance information.
3. Masking device (15) according to claim 1 or 2, wherein the selection module (18) is configured to allow a user to select a point or area of the surveillance images (3) as selection base, wherein the selection module (18) is configured to determine a selected area based on the selection base and an image analysis, wherein the selection module (18) is configured to specify the masking area (8, 8a, b) based on the selected area and a depth and/or distance information.
4. Masking device (15) according to claim 3, wherein the selection module (18) is configured to determine the selected area based on an object classification.
5. Masking device (15) according to one of the claims 2 to 4, wherein the selection module (18) is configured for entering and/or selecting the depth and/or distance information by the user.
6. Masking device (15) according to one of the claims 2 to 5, wherein the selection module (18) is configured to determine the depth and/or distance information based on a stereographic analysis of the surveillance images (3).
7. Masking device (15) according to one of the previous claims, wherein the masking module (19) is configured to apply an image analysis to the surveillance images (3) to obtain image portions, wherein the masking module (19) is configured to mask image portions which are part of the masking area (8, 8a, b).
8. Masking device (15) according to claim 7, wherein the masking module (19) is configured to obtain an image portion depth for each image portion and to mask, based on the image portion depth, image portions which are behind the masking area (8, 8a, b).
9. Masking device (15) according to claim 8, wherein the masking module (19) is configured to obtain the image portion depth based on a stereoscopic image analysis, an object tracing and/or a physical sensor of the surveillance system.
10. Surveillance system with at least one surveillance camera and a masking device according to one of the previous claims.
11. Method for masking surveillance images (3), wherein a masking area (8, 8a, b) is selected, wherein the masking area (8, 8a, b) is defined by a location, a form and/or a size in three dimensions, wherein the surveillance images (3) show a monitoring area, wherein the surveillance images (3) are processed to edited images (10) by masking areas of the surveillance images (3) showing at least a part of the monitoring area which is part of the masking area (8, 8a, b).
12. Computer program, wherein the computer program is configured to execute the method according to claim 11.
13. Data carrier, wherein the computer program according to claim 12 is stored on the data carrier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2020/071259 WO2022022809A1 (en) | 2020-07-28 | 2020-07-28 | Masking device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022022809A1 true WO2022022809A1 (en) | 2022-02-03 |
Family
ID=71842689
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024068039A1 (en) * | 2022-09-30 | 2024-04-04 | Verisure Sàrl | Image capture arrangement and method of capturing images |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102008007199A1 (en) | 2008-02-01 | 2009-08-06 | Robert Bosch Gmbh | Masking module for a video surveillance system, method for masking selected objects and computer program |
US20100328460A1 (en) * | 2008-02-01 | 2010-12-30 | Marcel Merkel | Masking module for a video surveillance system, method for masking selected objects, and computer program |
JP2010193227A (en) * | 2009-02-19 | 2010-09-02 | Hitachi Kokusai Electric Inc | Video processing system |
WO2013137534A1 (en) * | 2012-03-12 | 2013-09-19 | Samsung Techwin Co.,Ltd. | System and method for processing image to protect privacy |
EP3300045A1 (en) * | 2016-09-26 | 2018-03-28 | Mobotix AG | System and method for surveilling a scene comprising an allowed region and a restricted region |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20746971 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 20746971 Country of ref document: EP Kind code of ref document: A1 |