US10956752B2 - Camera for monitoring a monitored area and monitoring device, and method for monitoring a monitored area - Google Patents


Info

Publication number
US10956752B2
US10956752B2 US16/463,691 US201716463691A
Authority
US
United States
Prior art keywords
module
person
camera
surveillance
unmasked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/463,691
Other versions
US20190377958A1 (en)
Inventor
Thomas Geiler
Didier Stricker
Oliver Wasenmueller
Jens Ackermann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Assigned to ROBERT BOSCH GMBH reassignment ROBERT BOSCH GMBH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STRICKER, DIDIER, Wasenmueller, Oliver, ACKERMANN, JENS, GEILER, THOMAS
Publication of US20190377958A1 publication Critical patent/US20190377958A1/en
Application granted granted Critical
Publication of US10956752B2 publication Critical patent/US10956752B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G06K9/00771
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454 Content or additional data filtering, e.g. blocking advertisements
    • H04N21/4542 Blocking scenes or portions of the received content, e.g. censoring scenes
    • G06K9/00342
    • G06K9/4652
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19678 User interface
    • G08B13/19686 Interfaces masking personal details for privacy, e.g. blurring faces, vehicle license plates
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N5/23219
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Definitions

  • the present invention relates to a camera for monitoring a monitored area, including a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image.
  • the camera encompasses an evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing image data.
  • the evaluation unit encompasses a check module and a masking module, the check module being configured for detecting a person in the unmasked surveillance image on the basis of the raw data, the masking module being configured for generating a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image.
  • Cameras may be utilized for the monitoring of monitored areas. For example, production facilities are monitored, whereby the employees' right to the protection of their personal privacy may be severely restricted. At the same time, computer vision applications have evolved to the point that camera monitoring represents a cost-effective approach to process monitoring.
  • Video monitoring systems may resort to blocking sensor systems, such as photoelectric barriers and inductive switches, in order to switch off cameras if a person is present in the monitored area.
  • Publication DE 10 2008 007 199 A1, which may be related art, discusses a masking module for a monitoring system, the monitoring system including at least one monitoring camera and being suitable and/or situated for observing monitored areas encompassing moving objects, including a selection unit for selecting objects as selection objects, the masking module being configured for outputting selection objects or portions thereof in a masked manner.
  • Cameras may be utilized for monitoring public and non-public areas, where the safeguarding of personal rights is to be observed.
  • a camera for monitoring a monitored area having the features described herein a monitoring device having the features described herein, and a method for monitoring a monitored area having the features described herein are described.
  • Advantageous specific embodiments of the present invention result from the further descriptions herein, the following description, and the attached figures.
  • a camera for monitoring a monitored area is provided.
  • the camera is, for example, a color camera, a black-and-white camera, an infrared camera, a 3D camera, a PTZ (pan-tilt-zoom) camera, or the like.
  • the camera is, in particular, a video camera and/or a single-shot camera.
  • the camera may be situatable and/or situated in the monitored area.
  • the monitored area is, in particular, an open area or an inner area.
  • the monitored area is a plant, a factory, a building, a production facility, a warehouse, a large hall, an airport, a road, and/or a park.
  • the camera encompasses at least one camera sensor for generating at least one unmasked surveillance image of the monitored area and/or a subarea of the monitored area.
  • the generation of the surveillance image is, in particular, a shot of an image which shows and/or represents the monitored area.
  • the camera sensor encompasses and/or is, in particular, a CCD chip or CMOS chip.
  • the camera sensor may be configured for generating a single image, a few images, and/or a data stream of images, specifically a video sequence, as an unmasked surveillance image.
  • the camera has, in particular, a shooting direction, the camera and the shooting direction being orientable toward the monitored area and/or the subarea.
  • the camera sensor is configured for providing raw data, in particular raw image data.
  • the raw data encompass at least one unmasked surveillance image.
  • the raw image data encompass the data stream of images.
  • the raw data may encompass additional data, the additional data encompassing, for example, a time stamp and/or sensor data of sensors in the camera.
  • the raw data are, in particular, digital and/or analog data.
  • the camera encompasses an evaluation unit, the evaluation unit being integrated into the camera.
  • the evaluation unit is configured, in particular, as a piece of hardware and is, for example, a processor, an electronic module, a computer unit, and/or a microcontroller.
  • the evaluation unit encompasses an input interface for receiving the raw data.
  • the input interface may be a cable interface.
  • the camera sensor encompasses a camera sensor output for providing the raw data, the camera sensor output being exclusively connected to the input interface, for example, with the aid of a cable connection, for the purpose of data transmission.
  • the connection of the camera sensor output and the input interface may be branch-free and/or junction-free, in particular, no further module being situated between the camera sensor output and the input interface.
  • the integrated evaluation unit encompasses an output interface for providing output image data.
  • the output interface is a cable interface or a radio interface, such as a WLAN interface, an infrared interface, or a Bluetooth interface.
  • the output image data may be digital data; alternatively, the output image data are analog data.
  • the integrated evaluation unit encompasses a check module.
  • the check module is configured, in particular, as a hardware component and/or a software component in the integrated evaluation unit.
  • the check module is configured for detecting one person, a few persons, and/or all persons in the unmasked surveillance image and/or in the monitored area on the basis of the raw data. At least one person may be located in the unmasked surveillance image and/or in the monitored area; in particular, at least one person may be temporarily located in the monitored area.
  • the check module may be configured for detecting all persons located in the monitored area in the unmasked surveillance image and/or in the unmasked surveillance images.
  • the check module is configured, in particular, for investigating the raw data based on rules, for example, on predefined parameters and/or characteristics, the parameters being configured for finding persons and/or for differentiating persons from the background.
  • the parameters and/or characteristics encompass, for example, shapes, speeds, and/or patterns.
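As an illustrative aside, such a rule-based check could be sketched as follows; the shape and speed thresholds are hypothetical values chosen for the example, not figures from the patent:

```python
# Hypothetical sketch of a rule-based check: a detected moving blob is
# classified as a person if its shape and speed fall within ranges that
# are typical for people. All thresholds are illustrative.

def is_person(width_px, height_px, speed_m_s):
    """Classify a detected blob using simple shape and speed rules."""
    if width_px <= 0 or height_px <= 0:
        return False
    aspect = height_px / width_px              # people are taller than wide
    plausible_shape = 1.5 <= aspect <= 5.0
    plausible_speed = 0.0 <= speed_m_s <= 4.0  # walking / jogging pace
    return plausible_shape and plausible_speed
```

A tall, slowly moving blob would be classified as a person, while a wide, fast-moving one (e.g. a vehicle) would not.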
  • the integrated evaluation unit encompasses a masking module.
  • the masking module is, in particular, a hardware component and/or a software component in the evaluation unit.
  • the masking module is configured for generating a masked surveillance image.
  • the masking module is configured for creating a plurality of masked surveillance images.
  • the person detected in the unmasked surveillance image and/or all persons detected in the unmasked surveillance image is/are represented in a masked manner in the masked surveillance image.
  • the masked surveillance image is the unmasked surveillance image including an additional anonymization of the persons with the aid of masking.
  • the masking may take place with the aid of coverage, distortion, or another camouflage of the person.
  • the entire body of the person is masked; alternatively, only the head and/or eye area of the person are/is masked.
  • the output image data exclusively encompass masked surveillance images.
  • the masked surveillance images and/or output image data do not encompass any unmasked surveillance images including persons and/or unmasked persons.
  • the output image data need not encompass any personal information regarding the persons in the unmasked monitored area; in particular, the output image data do not encompass any clearly recognizable faces.
  • Unmasked surveillance images without persons are, in particular, masked surveillance images.
  • the camera may encompass a camera interface, the camera interface being connected to the output interface for the purpose of data transmission.
  • the camera interface coincides with the output interface and/or forms the output interface.
  • the output interface may be configured for providing the output image data to a user of the camera.
  • the output image data may encompass, in particular, additional data, such as sensor data of sensors in the camera, time stamps, and/or location information.
  • One consideration of the present invention is that of providing a cost-effective and reliable camera, with the aid of which monitored areas may be monitored without capturing, providing, and/or storing personal data.
  • the software-based protection against attacks by third parties may now also be simpler, since the camera is configured to be unable to output surveillance images including personal information which may be of interest to third parties. For employees and/or persons in the monitored area, better real and/or perceived transparency of the anonymization of the monitoring camera data is achieved.
  • the camera and/or the integrated evaluation unit encompass and/or encompasses a segmentation module.
  • the segmentation module is configured for segmenting the unmasked surveillance image into a person area and a background area based on the raw data.
  • the segmentation module is configured for segmenting the unmasked image into multiple person areas and background areas, for example, in order to determine multiple person areas for multiple persons in a surveillance image.
  • the multiple person areas and/or background areas may be contiguous and/or fragmented.
  • the person area is, in particular, the area occupied by the person in the unmasked and/or masked surveillance image(s).
  • the person area is the head and/or eye area of the person in the masked and/or unmasked surveillance image.
  • the background area is, in particular, the area in the masked and/or unmasked surveillance image(s), which is free of persons. Segmentation is understood to be, for example, the division, for example, the virtual and/or area-based division, of the unmasked surveillance image into two different areas, namely the person areas and the background areas.
  • the embodiment is based on the consideration of providing a camera which may divide a surveillance image into an area including personal data and an area without personal data, for example, based on a set of rules.
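A minimal sketch of such a segmentation, assuming the check module supplies rectangular person regions (the box format below is a hypothetical interface):

```python
import numpy as np

def segment(image_shape, person_boxes):
    """Split an image plane into person area and background area.

    Returns a boolean mask: True inside the person areas, False in the
    background area. `person_boxes` holds (top, left, bottom, right)
    rectangles, as a check module might supply them.
    """
    mask = np.zeros(image_shape[:2], dtype=bool)
    for top, left, bottom, right in person_boxes:
        mask[top:bottom, left:right] = True   # fragmented areas simply union
    return mask
```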
  • the masking module may be for anonymizing the detected person in the masked surveillance image by colorizing and/or distorting the person area.
  • the masking module is configured for masking all detected persons by colorizing and/or distorting the person areas.
  • the masked surveillance image may be the unmasked surveillance image including the colorized person area, the background area remaining unchanged. The colorization may take place having a high contrast or a low contrast with respect to the surroundings of the person area.
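The colorization variant can be sketched as follows; person pixels are replaced by a flat color while the background area remains unchanged (the mask input is assumed to come from a segmentation step):

```python
import numpy as np

def colorize_person_area(image, person_mask, color=(0, 0, 0)):
    """Return a masked surveillance image in which all person pixels are
    replaced by a flat color; background pixels stay unchanged."""
    masked = image.copy()          # the unmasked original is left intact
    masked[person_mask] = color
    return masked
```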
  • the camera in particular the integrated evaluation unit, encompasses an estimation module.
  • the estimation module is configured for determining an estimated background for the person area, the estimated background being determined or estimated based on the raw data. Alternatively and/or additionally, the estimated background for the monitored area is fixedly stored in the estimation module. Moreover, it is possible for the estimated background to be redetermined at any time by the estimation module based on further image and/or sensor material.
  • the estimated background is the real background of the relevant area and/or an approximation of the area concealed by the person and/or the persons in the unmasked surveillance image.
  • the masking module is configured for masking, in the masked surveillance image, the detected person by filling the person area in the unmasked surveillance image with the estimated background.
  • the person area filled with the estimated background may be framed in the masked surveillance image, the framing in the masked surveillance image making it clear where a masked person is located.
  • This embodiment is based on the consideration of creating masked surveillance images and may include inconspicuous masking for persons and of not losing background information.
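One common way to realize such an estimation module, offered here as a sketch rather than the patent's actual method, is a per-pixel temporal median over recent frames: a person who only briefly occludes a pixel is then averaged out, and the person area can be filled with the estimate:

```python
import numpy as np

def estimate_background(recent_frames):
    """Per-pixel temporal median: static background survives, pixels that
    are only briefly occluded by a passing person are averaged out."""
    stack = np.stack(recent_frames)
    return np.median(stack, axis=0).astype(recent_frames[0].dtype)

def fill_with_background(image, person_mask, background):
    """Mask a person by filling the person area with the estimated
    background, leaving the rest of the surveillance image unchanged."""
    masked = image.copy()
    masked[person_mask] = background[person_mask]
    return masked
```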
  • the check module, the masking module, the segmentation module, and/or the evaluation unit are/is configured as a neural network.
  • the neural network is an associative learning network which adapts via feedback on the basis of the offered learning pattern in connection with the expected result.
  • the neural network may act as a non-linear filter, a desirable filter function being obtained by training the network, whereby the available parameters of the network are adjusted.
  • the embodiment is based on the consideration of providing a particularly effective, intelligent, and/or fast option of segmenting, checking, and masking surveillance images.
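As a toy illustration of the described feedback-trained network acting as an adjustable non-linear filter, the following sketch trains a single logistic unit on a synthetic shape feature; the feature choice and data are purely illustrative and not taken from the patent:

```python
import numpy as np

# A single logistic unit whose parameters are adjusted from the mismatch
# between prediction and expected result (the "feedback" in the text).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(x, y, steps=2000, lr=0.5):
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(x @ w + b)
        feedback = p - y                  # offered pattern vs expected result
        w -= lr * (x.T @ feedback) / len(y)
        b -= lr * feedback.mean()
    return w, b

x = np.array([[3.0], [2.5], [0.3], [0.4]])   # blob aspect ratios (synthetic)
y = np.array([1.0, 1.0, 0.0, 0.0])           # 1 = person, 0 = object
w, b = train(x, y)
```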
  • the check module, the masking module, the segmentation module, and/or the evaluation unit are/is and/or encompasses/encompass a field programmable gate array, or FPGA.
  • the check module, the masking module, the segmentation module, and/or the evaluation unit are/is and/or encompass/encompasses another fixedly programmed and/or configured electronics assembly.
  • the FPGA encompasses configurable logic elements, the FPGA being configured for implementing combinational logic composed of various Boolean operations.
  • the detection of the person by the check module is based on fuzzy logic.
  • the check module is configured for applying fuzzy rules for detecting one person and/or multiple persons in the unmasked surveillance images.
  • the embodiment is based on the consideration of providing a camera for anonymization, for which an explicit mathematical model for evaluating the raw data is not required.
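A sketch of a fuzzy-rule check: graded membership functions replace hard thresholds, and a fuzzy AND (minimum) combines them. The membership shapes and cutoffs below are invented for the example:

```python
# Hypothetical fuzzy rules: grade "tall blob" and "person-like speed"
# rather than applying hard thresholds; no explicit mathematical model
# of a person is required.

def ramp(x, lo, hi):
    """Membership value rising linearly from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def person_score(aspect_ratio, speed_m_s):
    tall = ramp(aspect_ratio, 1.0, 2.0)
    person_like_speed = 1.0 - ramp(speed_m_s, 3.0, 8.0)  # fast = unlikely
    return min(tall, person_like_speed)                  # fuzzy AND
```

A score above a chosen cutoff (e.g. 0.5) would then count as a detection.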
  • the check module is configured for carrying out the detection of a person and/or all persons in the monitored area and/or in the unmasked surveillance image on the basis of a detected motion.
  • the motion detection takes place, for example, by evaluating at least two surveillance images of an overlapping and/or identical subarea recorded at different times. Alternatively and/or additionally, the motion detection takes place on the basis of sensor data of sensors integrated in the camera and directed toward the subarea.
  • the check module may be configured for tracking a person based on the detected motion, areas in the masked monitored area, to which the person will presumably move, being represented in a masked manner, for example.
  • the embodiment is based on the consideration of providing a simple way to carry out detection, of masking persons reliably, and of reducing the risk of unmasked person areas, for example, with the aid of directed, expanded masking.
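Motion detection from two surveillance images of the same subarea recorded at different times can be sketched by simple frame differencing (the threshold is an illustrative value):

```python
import numpy as np

def motion_mask(frame_prev, frame_curr, threshold=25):
    """Compare two frames of the same subarea recorded at different
    times; True wherever the brightness changed noticeably."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold
```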
  • the camera and/or the integrated evaluation unit encompass/encompasses a blocking module.
  • the blocking module is configured as hardware and/or software.
  • the blocking module is situated between the masking module and the output interface with respect to data transmission and is connected to the masking module and the output interface for the purpose of data transmission.
  • the blocking module is configured for blocking and/or preventing a transfer of raw data and/or a transfer of unmasked surveillance images to the output image data.
  • the blocking module is configured for investigating the masked surveillance images with respect to persons and/or personal data.
  • the blocking module is configured for blocking and/or aborting the output of output image data upon detection of persons and/or personal data in the masked surveillance images.
  • the embodiment is based on the consideration of providing a camera which particularly reliably prevents the output of surveillance images including personal data.
  • the blocking module is for preventing a data flow from the output interface in the direction of the masking module and/or the camera sensor.
  • the blocking module is a data connection which permits a data flow exclusively from the masking module to the output interface.
  • the blocking module is configured for preventing attacks from the outside and/or attempts to disable the masking module and/or attempts to tap unmasked surveillance images. The embodiment is based on the consideration of providing a security step for the camera so that no unmasked surveillance images are tappable.
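The blocking module's one-way behavior can be sketched as a small wrapper: the output side can only pull already-checked, masked frames, and a frame in which personal data are still detected is never forwarded. Class and method names are hypothetical:

```python
# Sketch of a one-way stage between masking module and output interface:
# frames flow only forward, and there is no call path through which raw
# or unmasked data could travel back toward the camera sensor.

class BlockingModule:
    def __init__(self):
        self._outbox = []

    def push_masked(self, frame, personal_data_detected):
        """Called by the masking module; blocks frames that still
        contain persons and/or personal data."""
        if personal_data_detected:
            return False               # abort the output of this frame
        self._outbox.append(frame)
        return True

    def pull(self):
        """Called by the output interface; masked frames only."""
        return self._outbox.pop(0) if self._outbox else None
```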
  • the camera sensor and the evaluation unit are situated in a shared housing.
  • a plurality of camera sensors is situatable in the shared housing, the plurality of camera sensors being connected to the evaluation unit.
  • Further sensors are situatable in the housing, for example, the sensors providing sensor data which may be portions of the raw data.
  • the camera and/or the housing may be made apparent from the outside in an optical, for example, color-based manner, so that persons in the monitored area recognize the camera as a masking camera. This embodiment is based on the consideration of providing a one-piece camera and/or device which records anonymized and marked surveillance images of a monitored area.
  • a further object of the present invention is a monitoring device for monitoring a monitored area, including a plurality of cameras, the cameras encompassing a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image, including an integrated evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing output image data, the evaluation unit encompassing a check module and a masking module, the check module being configured for detecting a person in the unmasked surveillance image on the basis of the raw data, the masking module being configured for generating a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image, the output image data exclusively encompassing masked surveillance images.
  • the monitoring device is configured, in particular, for monitoring public areas, such as parks or roads, or non-public areas, such as factories.
  • FIG. 1 shows a schematic view of a camera.
  • FIG. 2 shows an unmasked surveillance image.
  • FIG. 3 shows a masked surveillance image.
  • FIG. 1 shows a schematic view of a camera 1 .
  • Camera 1 is a color camera and is configured for the video monitoring of a monitored area 2 .
  • Monitored area 2 is a building area in this case, such as a production line in a factory.
  • Camera 1 includes a housing 3 .
  • Housing 3 forms an encapsulation of the camera components with respect to the surroundings. Housing 3 may include fasteners for fastening camera 1 in monitored area 2 .
  • a camera sensor 4 and an integrated evaluation unit 5 are situated in housing 3 .
  • Camera sensor 4 is a light-sensitive, pixelated semiconductor chip which points with the shooting direction toward monitored area 2 .
  • Camera sensor 4 is configured for recording unmasked surveillance images 6 ( FIG. 2 ) of monitored area 2 .
  • Camera sensor 4 includes a camera sensor output 7 as a data interface, camera sensor output 7 being configured for providing raw data, raw data encompassing unmasked surveillance image 6 .
  • Integrated evaluation unit 5 is, for example, an electronic module and includes an input interface 8 which is connected to camera sensor output 7 , for example, with the aid of a hard-wired line, for the purpose of data transmission. In this way, the raw data and unmasked surveillance image 6 are provided to input interface 8 .
  • Integrated evaluation unit 5 includes a check module 9 .
  • Check module 9 is connected to input interface 8 for the purpose of data transmission and, in this way, receives the raw data.
  • Check module 9 is configured for checking the raw data and/or unmasked surveillance images 6 for persons 10 ( FIG. 2 ) and for detecting found persons 10 as such.
  • Check module 9 analyzes unmasked surveillance images 6 for certain characteristics and evaluates, based on a set of rules, whether something is a person 10 or an object 11 ( FIG. 2 ).
  • Integrated evaluation unit 5 encompasses a segmentation module 12 which is configured for dividing unmasked surveillance image 6 into a background area and a person area 14 ( FIG. 2 ).
  • areas in unmasked surveillance image 6 which encompass a person 10 are defined as a person area 14 and, for example, are represented as a completely covering rectangle or as a surrounding area. Areas without persons 10 are established by segmentation module 12 as the background area.
  • Integrated evaluation unit 5 encompasses a masking module 15 .
  • Masking module 15 is configured for creating a masked surveillance image 16 , masked surveillance image 16 corresponding to unmasked surveillance image 6 including person areas 14 which have been filled with color or provided with a colored background. Person areas 14 provided with a colored background anonymize persons 10 , so that no personal information is contained in masked surveillance image 16 .
  • Masking module 15 is connected to an output interface 17 of integrated evaluation unit 5 for the purpose of data transmission, output image data being provided to output interface 17 by masking module 15 .
  • the output image data encompass masked surveillance image 16 .
  • Unmasked surveillance images 6 without persons 10 and/or without personal information already count as masked surveillance images 16.
  • Camera housing 3 includes a camera interface 18 , the output image data being provided to a user with the aid of camera interface 18 .
  • a blocking module 19 is situated on the data connection between output interface 17 and masking module 15 .
  • Blocking module 19 is configured for fending off hacker attacks from the outside.
  • Blocking module 19 is configured for preventing a data flow from output interface 17 to masking module 15 , so that masking module 15 , for example, may not be disabled and/or so that unmasked surveillance images 6 may not be tapped.
  • FIG. 2 schematically shows an unmasked surveillance image 6 .
  • Unmasked surveillance image 6 includes a plurality of objects 11 a , 11 b and 11 c and two persons 10 a and 10 b .
  • Check module 9 is configured for recognizing persons 10 a and 10 b as persons and differentiating them from objects 11 a , 11 b and 11 c.
  • Segmentation module 12 has divided unmasked surveillance image 6 into a background area and two person areas 14 a and 14 b .
  • Person area 14 a is generated by segmentation module 12 by circumscribing detected person 10 a with a rectangle, so that entire person 10 a is covered by person area 14 a .
  • Person area 14 b is generated by segmentation module 12 by circumscribing the head of detected person 10 b with a rectangle, so that only the head of person 10 b is covered by person area 14 b.
  • FIG. 3 shows masked surveillance image 16 based on unmasked surveillance image 6 .
  • Masked surveillance image 16 shows objects 11 a , 11 b , 11 c and 11 d from unmasked surveillance image 6 .
  • Masked surveillance image 16 does not show person 10 a from unmasked surveillance image 6 .
  • Masked surveillance image 16 also does not show the head of person 10 b from unmasked surveillance image 6 .
  • Integrated evaluation unit 5 of camera 1 includes an estimation module.
  • the estimation module is configured for estimating a background for person area 14 a based on unmasked surveillance image 6 and/or other images of monitored area 2 .
  • the background is, in particular, the area of monitored area 2 that is concealed in unmasked surveillance image 6 by person 10 a .
  • the background for person area 14 a is object 11 a in this case.
  • Masking module 15 is configured for representing person area 14 a in masked surveillance image 16 filled with the background. Person 10 a is therefore represented in a masked manner and is anonymized.
  • Masked person area 14 b shows one further type of masking.
  • Masking module 15 is configured, in this case, for representing person area 14 b in the form of a colored filling. The head of person 10 b is no longer recognizable due to the colored filling in the masked surveillance image and, in this way, person 10 b is anonymized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

A camera (1) for monitoring a monitored area (2), including a camera sensor (4) for generating at least one unmasked surveillance image (6) of the monitored area (2) and/or of a subarea of the monitored area (2), the camera sensor (4) being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image (6), and including an integrated evaluation unit (5), the evaluation unit (5) encompassing an input interface (8) for receiving the raw data and encompassing an output interface (17) for providing output image data.

Description

FIELD OF THE INVENTION
The present invention relates to a camera for monitoring a monitored area, including a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image. Moreover, the camera encompasses an evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing image data. The evaluation unit encompasses a check module and a masking module, the check module being configured for detecting a person in the unmasked surveillance image on the basis of the raw data, the masking module being configured for generating a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image.
BACKGROUND INFORMATION
Cameras may be utilized for monitoring monitored areas. For example, production facilities are monitored, in which case the employees' right to the protection of their personal rights may be severely restricted. At the same time, computer vision applications have advanced to the point where camera monitoring represents a cost-effective approach to process monitoring.
Video monitoring systems may resort to blocking sensor systems, such as photoelectric barriers and inductive switches, in order to switch off cameras if a person is present in the monitored area.
Publication DE 10 2008 007 199 A1, which may be related art, discusses a masking module for a monitoring system, the monitoring system including at least one monitoring camera and being suitable and/or situated for observing monitored areas encompassing moving objects, including a selection unit for selecting objects as selection objects, the masking module being configured for outputting selection objects or portions thereof in a masked manner.
SUMMARY OF THE INVENTION
Cameras may be utilized for monitoring public and non-public areas, where the safeguarding of personal rights is to be observed.
Within the scope of the present invention, a camera for monitoring a monitored area having the features described herein, a monitoring device having the features described herein, and a method for monitoring a monitored area having the features described herein are described. Advantageous specific embodiments of the present invention result from the further descriptions herein, the following description, and the attached figures.
According to the present invention, a camera for monitoring a monitored area is provided. The camera is, for example, a color camera, a black-and-white camera, an infrared camera, a 3D camera, a PTZ (pan-tilt-zoom) camera, or the like. The camera is, in particular, a video camera and/or a single-shot camera. The camera may be situatable and/or situated in the monitored area. The monitored area is, in particular, an open area or an inner area. Specifically, the monitored area is a plant, a factory, a building, a production facility, a warehouse, a large hall, an airport, a road, and/or a park.
The camera encompasses at least one camera sensor for generating at least one unmasked surveillance image of the monitored area and/or a subarea of the monitored area. The generation of the surveillance image is, in particular, a shot of an image which shows and/or represents the monitored area. The camera sensor encompasses and/or is, in particular, a CCD chip or CMOS chip. The camera sensor may be configured for generating a single image, a few images, and/or a data stream of images, specifically a video sequence, as an unmasked surveillance image. The camera has, in particular, a shooting direction, the camera and the shooting direction being orientable toward the monitored area and/or the subarea. The camera sensor is configured for providing raw data, in particular raw image data. The raw data encompass at least one unmasked surveillance image. In particular, the raw image data encompass the data stream of images. The raw data may encompass additional data, the additional data encompassing, for example, a time stamp and/or sensor data of sensors in the camera. The raw data are, in particular, digital and/or analog data.
The camera encompasses an evaluation unit, the evaluation unit being integrated into the camera. The evaluation unit is configured, in particular, as a piece of hardware and is, for example, a processor, an electronic module, a computer unit, and/or a microcontroller. The evaluation unit encompasses an input interface for receiving the raw data. The input interface may be a cable interface. In particular, the camera sensor encompasses a camera sensor output for providing the raw data, the camera sensor output being exclusively connected to the input interface, for example, with the aid of a cable connection, for the purpose of data transmission. The connection of the camera sensor output and the input interface may be branch-free and/or junction-free, in particular, no further module being situated between the camera sensor output and the input interface.
The integrated evaluation unit encompasses an output interface for providing output image data. The output interface is a cable interface or a radio interface, such as a WLAN interface, an infrared interface, or a Bluetooth interface. The output image data may be digital data; alternatively, the output image data are analog data.
The integrated evaluation unit encompasses a check module. The check module is configured, in particular, as a hardware component and/or a software component in the integrated evaluation unit. The check module is configured for detecting one person, a few persons, and/or all persons in the unmasked surveillance image and/or in the monitored area on the basis of the raw data. At least one person may be located in the unmasked surveillance image and/or in the monitored area; in particular, at least one person may be temporarily located in the monitored area. The check module may be configured for detecting all persons located in the monitored area in the unmasked surveillance image and/or in the unmasked surveillance images.
The check module is configured, in particular, for investigating the raw data based on rules, for example, on predefined parameters and/or characteristics, the parameters being configured for finding persons and/or for differentiating between persons and background. The parameters and/or characteristics encompass, for example, shapes, speeds, and/or patterns.
The integrated evaluation unit encompasses a masking module. The masking module is, in particular, a hardware component and/or a software component in the evaluation unit. The masking module is configured for generating a masked surveillance image. Alternatively and/or additionally, the masking module is configured for creating a plurality of masked surveillance images. The person detected in the unmasked surveillance image and/or all persons detected in the unmasked surveillance image is/are represented in a masked manner in the masked surveillance image. For example, the masked surveillance image is the unmasked surveillance image including an additional anonymization of the persons with the aid of masking. The masking may take place with the aid of coverage, distortion, or another camouflage of the person. In particular, for the purpose of anonymization, the entire body of the person is masked; alternatively, only the head and/or eye area of the person are/is masked.
The output image data exclusively encompass masked surveillance images. In particular, the masked surveillance images and/or output image data do not encompass any unmasked surveillance images including persons and/or unmasked persons. The output image data need not encompass any personal information regarding the persons in the unmasked monitored area; in particular, the output image data do not encompass any clearly recognizable faces. Unmasked surveillance images without persons are, in particular, masked surveillance images.
The camera may encompass a camera interface, the camera interface being connected to the output interface for the purpose of data transmission. In particular, the camera interface coincides with the output interface and/or forms the output interface. The output interface may be configured for providing the output image data to a user of the camera. The output image data may encompass, in particular, additional data, such as sensor data of sensors in the camera, time stamps, and/or location information.
One consideration of the present invention is that of providing a cost-effective and reliable camera, with the aid of which monitored areas may be monitored without capturing, providing, and/or storing personal data. The complex checking of an individual case with the aid of extra sensor systems, such as with the aid of motion sensors, becomes superfluous. The software-based protection against attacks by third parties may now also be configured to be simpler, since the camera is configured to be unable to output surveillance images including personal information which may be of interest to third parties. For employees and/or persons in the monitored area, a better real and/or perceived transparency of the anonymization process of the monitoring camera data is achieved.
In a particular embodiment of the present invention, the camera and/or the integrated evaluation unit encompass and/or encompasses a segmentation module. The segmentation module is configured for segmenting the unmasked surveillance image into a person area and a background area based on the raw data. In particular, the segmentation module is configured for segmenting the unmasked image into multiple person areas and background areas, for example, in order to determine multiple person areas for multiple persons in a surveillance image. The multiple person areas and/or background areas may be contiguous and/or fragmented. The person area is, in particular, the area occupied by the person in the unmasked and/or masked surveillance image(s). Alternatively and/or additionally, the person area is the head and/or eye area of the person in the masked and/or unmasked surveillance image. The background area is, in particular, the area in the masked and/or unmasked surveillance image(s), which is free of persons. Segmentation is understood to be, for example, the division, for example, the virtual and/or area-based division, of the unmasked surveillance image into two different areas, namely the person areas and the background areas. The embodiment is based on the consideration of providing a camera which may divide a surveillance image into an area including personal data and an area without personal data, for example, based on a set of rules.
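The segmentation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes person detections are already available as bounding rectangles (the function and parameter names are invented for this sketch) and builds a boolean mask separating person areas from the background area.

```python
import numpy as np

def segment(image, person_boxes):
    """Split an image into person areas and a background area.

    `person_boxes` is a list of (x, y, w, h) rectangles, e.g. as produced
    by some person detector (a hypothetical input here). Returns a boolean
    mask that is True inside person areas and False in the background area.
    """
    person_mask = np.zeros(image.shape[:2], dtype=bool)
    for x, y, w, h in person_boxes:
        person_mask[y:y + h, x:x + w] = True
    return person_mask

# A 10x10 grayscale frame with one detected person at (2, 3), size 4x5.
frame = np.zeros((10, 10), dtype=np.uint8)
mask = segment(frame, [(2, 3, 4, 5)])
```

The person areas may be fragmented (several disjoint rectangles) or contiguous; a boolean mask represents both cases uniformly.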
The masking module may be for anonymizing the detected person in the masked surveillance image by colorizing and/or distorting the person area. In particular, the masking module is configured for masking all detected persons by colorizing and/or distorting the person areas. The masked surveillance image may be the unmasked surveillance image including the colorized person area, the background area remaining unchanged. The colorization may take place with a high contrast or a low contrast with respect to the surroundings of the person area.
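Colorizing a person area amounts to overwriting those pixels with a solid fill while leaving the background area untouched. The following sketch assumes the boxes come from a segmentation step; names and the magenta default color are illustrative, not from the patent.

```python
import numpy as np

def mask_colorize(image, person_boxes, color=(255, 0, 255)):
    """Return a masked copy of `image` in which every person area
    (given as (x, y, w, h) rectangles) is filled with a solid color;
    the background area remains unchanged."""
    masked = image.copy()
    for x, y, w, h in person_boxes:
        masked[y:y + h, x:x + w] = color
    return masked

frame = np.zeros((8, 8, 3), dtype=np.uint8)
out = mask_colorize(frame, [(1, 1, 3, 3)])
```

Because the function copies the image first, the unmasked surveillance image is never modified in place, matching the idea that only masked images ever leave the evaluation unit.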
In one embodiment, the camera, in particular the integrated evaluation unit, encompasses an estimation module. The estimation module is configured for determining an estimated background for the person area, the estimated background may be determined or estimated based on the raw data. Alternatively and/or additionally, the estimated background for the monitored area is fixedly stored in the estimation module. Moreover, it is possible that the estimated background is redetermined at any time with the aid of the estimation module based on further image and/or sensor material. The estimated background is the real background of the relevant area and/or an approximation of the area concealed by the person and/or the persons in the unmasked surveillance image. The masking module is configured for masking, in the masked surveillance image, the detected person by filling the person area in the unmasked surveillance image with the estimated background. The person area filled with the estimated background may be framed in the masked surveillance image, the framing in the masked surveillance image making it clear where a masked person is located. This embodiment is based on the consideration of creating masked surveillance images which include an inconspicuous masking of persons, without losing background information.
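One common way to obtain such an estimated background is a running average over pixels not currently covered by a person; the concealed pixels are then filled from that estimate. This is a sketch under that assumption (the class and its parameters are invented for illustration), not the patent's concrete estimation module.

```python
import numpy as np

class BackgroundEstimator:
    """Maintain a running background estimate by exponential averaging
    over frames, updating only pixels not covered by a person area."""

    def __init__(self, first_frame, alpha=0.1):
        self.background = first_frame.astype(np.float32)
        self.alpha = alpha

    def update(self, frame, person_mask):
        # Only person-free pixels refresh the background estimate.
        free = ~person_mask
        self.background[free] = ((1 - self.alpha) * self.background[free]
                                 + self.alpha * frame[free])

    def fill(self, frame, person_mask):
        # Replace the person area with the estimated background.
        filled = frame.copy()
        filled[person_mask] = self.background[person_mask].astype(frame.dtype)
        return filled

# Static scene at intensity 100; a person then occludes a 2x2 region.
scene = np.full((6, 6), 100, dtype=np.uint8)
est = BackgroundEstimator(scene)
person = np.zeros((6, 6), dtype=bool)
person[2:4, 2:4] = True
frame = scene.copy()
frame[person] = 0          # person pixels differ from the background
est.update(frame, person)
filled = est.fill(frame, person)
```

After filling, the masked image shows the previously learned background where the person stood, so the masking is inconspicuous and no background information is lost.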
It particularly may be provided that the check module, the masking module, the segmentation module, and/or the evaluation unit are/is configured as a neural network. In particular, the neural network is an associative learning network which adapts via feedback on the basis of the offered learning pattern in connection with the expected result. The neural network may act as a non-linear filter, a desirable filter function being obtained by training the network, whereby the available parameters of the network are adjusted. The embodiment is based on the consideration of providing a particularly effective, intelligent, and/or fast option of segmenting, checking, and masking surveillance images.
In one further embodiment of the present invention, the check module, the masking module, the segmentation module, and/or the evaluation unit are/is and/or encompasses/encompass a field programmable gate array, or FPGA. Alternatively, the check module, the masking module, the segmentation module, and/or the evaluation unit are/is and/or encompass/encompasses another fixedly programmed and/or configured electronics assembly. In particular, the FPGA encompasses configurable logic elements, the FPGA being configured for implementing combinational logic between various Boolean operations. The advantage of this embodiment is that of obtaining a particularly secure, fast, and attack-proof camera including an integrated evaluation unit.
In particular, it is provided that the detection of the person by the check module is based on fuzzy logic. For example, the check module is configured for applying fuzzy rules for detecting one person and/or multiple persons in the unmasked surveillance images. The embodiment is based on the consideration of providing a camera for anonymization, for which an explicit mathematical model for evaluating the raw data is not required.
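A fuzzy rule for person detection might combine graded memberships of several characteristics (size, shape, motion) instead of hard thresholds. The following sketch illustrates the idea with a trapezoidal membership function and a fuzzy AND (minimum); all thresholds and feature names are invented for illustration and are not taken from the patent.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises from a to b, is 1 between
    b and c, and falls from c to d; 0 outside [a, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def person_likelihood(height_px, aspect_ratio, speed_px_s):
    """Combine fuzzy memberships with min() as the fuzzy AND.
    The numeric thresholds are placeholders, not calibrated values."""
    tall_enough = trapezoid(height_px, 40, 80, 300, 500)
    upright = trapezoid(aspect_ratio, 1.5, 2.0, 4.0, 6.0)
    walking = trapezoid(speed_px_s, 0, 5, 100, 200)
    return min(tall_enough, upright, walking)

score = person_likelihood(150, 2.5, 30)   # clearly person-like features
```

The output is a degree of membership in [0, 1] rather than a binary decision, which is what makes an explicit mathematical model of "person" unnecessary.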
In one particular embodiment of the present invention, the check module is configured for carrying out the detection of a person and/or all persons in the monitored area and/or in the unmasked surveillance image on the basis of a detected motion. The motion detection takes place, for example, by evaluating at least two surveillance images of an overlapping and/or identical subarea recorded at different times. Alternatively and/or additionally, the motion detection takes place on the basis of sensor data of sensors integrated in the camera and directed toward the subarea. The check module may be configured for tracking a person based on the detected motion, areas in the masked monitored area, to which the person will presumably move, being represented in a masked manner, for example. The embodiment is based on the consideration of providing a simple way to carry out detection, achieving a secure masking of persons, and reducing the risk of unmasked person areas, for example, with the aid of directed expanded masking.
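Evaluating two surveillance images of the same subarea recorded at different times can be done with simple frame differencing. This is a minimal sketch of that idea (threshold value chosen arbitrarily), not the patent's specific motion detector.

```python
import numpy as np

def motion_mask(prev, curr, threshold=25):
    """Per-pixel motion detection by frame differencing: pixels whose
    absolute intensity change exceeds `threshold` count as moving."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

prev = np.zeros((5, 5), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200       # something moved into this 2x2 region
moving = motion_mask(prev, curr)
```

The moving region could then seed a person area, possibly expanded in the direction of motion to mask where the person will presumably move next.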
It particularly may be provided that the camera and/or the integrated evaluation unit encompass/encompasses a blocking module. The blocking module is configured as hardware and/or software. The blocking module is situated between the masking module and the output interface with respect to data transmission and is connected to the masking module and the output interface for the purpose of data transmission. The blocking module is configured for blocking and/or preventing a transfer of raw data and/or a transfer of unmasked surveillance images to the output image data. For example, the blocking module is configured for investigating the masked surveillance images with respect to persons and/or personal data. In particular, the blocking module is configured for blocking and/or aborting the output of output image data upon detection of persons and/or personal data in the masked surveillance images. The embodiment is based on the consideration of providing a camera which particularly reliably prevents the output of surveillance images including personal data.
In one further embodiment of the present invention, the blocking module is for preventing a data flow from the output interface in the direction of the masking module and/or the camera sensor. For example, the blocking module is a data connection which permits a data flow exclusively from the masking module to the output interface. In particular, the blocking module is configured for preventing attacks from the outside and/or attempts to disable the masking module and/or attempts to tap unmasked surveillance images. The embodiment is based on the consideration of providing a security step for the camera so that no unmasked surveillance images are tappable.
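The blocking module's behavior can be pictured as a one-way gate between the masking module and the output interface: frames flow only forward, and any frame in which a re-check still finds a person is dropped. The sketch below is an illustration of that data-flow idea with invented names, not the patent's hardware blocking module.

```python
def blocking_gate(masked_frames, contains_person):
    """One-way gate between masking module and output interface.

    `contains_person` stands in for a (hypothetical) re-check of the
    outgoing image data.  Frames it flags are blocked and never reach
    the output; there is deliberately no return path from the output
    side back toward the masking module or camera sensor."""
    for frame in masked_frames:
        if contains_person(frame):
            continue              # block: frame is discarded, not forwarded
        yield frame

# Usage: the middle "frame" simulates a masking failure that must not leak.
frames = ["masked-1", "leak!", "masked-2"]
passed = list(blocking_gate(frames, lambda f: "leak" in f))
```

Because the gate is a generator consumed only from the output side, nothing downstream can push data back upstream, mirroring the claim that the blocking module prevents a data flow from the output interface toward the masking module.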
It particularly may be provided that the camera sensor and the evaluation unit are situated in a shared housing. In particular, a plurality of camera sensors is situatable in the shared housing, the plurality of camera sensors being connected to the evaluation unit. Further sensors, such as motion sensors, are situatable in the housing, for example, the sensors providing sensor data which may be portions of the raw data. The camera and/or the housing may be made apparent from the outside in an optical, for example, color-based manner, so that persons in the monitored area recognize the camera as a masking camera. This embodiment is based on the consideration of providing a one-piece camera and/or device which records anonymized and marked surveillance images of a monitored area.
A further object of the present invention is a monitoring device for monitoring a monitored area, including a plurality of cameras, the cameras encompassing a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image, including an integrated evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing output image data, the evaluation unit encompassing a check module and a masking module, the check module being configured for detecting a person in the unmasked surveillance image on the basis of the raw data, the masking module being configured for generating a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image, the output image data exclusively encompassing masked surveillance images. The monitoring device is configured, in particular, for monitoring public areas, such as parks or roads, or non-public areas, such as factories.
Further features, advantages, and effects of the present invention result from the following description of an exemplary embodiment of the present invention and from the attached figures.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a schematic view of a camera.
FIG. 2 shows an unmasked surveillance image.
FIG. 3 shows a masked surveillance image.
DETAILED DESCRIPTION
FIG. 1 shows a schematic view of a camera 1. Camera 1 is a color camera and is configured for the video monitoring of a monitored area 2. Monitored area 2 is a building area in this case, such as a production line in a factory. Camera 1 includes a housing 3. Housing 3 forms an encapsulation of the camera components with respect to the surroundings. Housing 3 may include fasteners for fastening camera 1 in monitored area 2.
A camera sensor 4 and an integrated evaluation unit 5 are situated in housing 3. Camera sensor 4 is a light-sensitive, pixelated semiconductor chip which points with the shooting direction toward monitored area 2. Camera sensor 4 is configured for recording unmasked surveillance images 6 (FIG. 2) of monitored area 2. Camera sensor 4 includes a camera sensor output 7 as a data interface, camera sensor output 7 being configured for providing raw data, raw data encompassing unmasked surveillance image 6.
Integrated evaluation unit 5 is, for example, an electronic module and includes an input interface 8 which is connected to camera sensor output 7, for example, with the aid of a hard-wired line, for the purpose of data transmission. In this way, the raw data and unmasked surveillance image 6 are provided to input interface 8.
Integrated evaluation unit 5 includes a check module 9. Check module 9 is connected to input interface 8 for the purpose of data transmission and, in this way, receives the raw data. Check module 9 is configured for checking the raw data and/or unmasked surveillance images 6 for persons 10 (FIG. 2) and for detecting found persons 10 as such. Check module 9 analyzes unmasked surveillance images 6 for certain characteristics and evaluates, based on a set of rules, whether something is a person 10 or an object 11 (FIG. 2).
Integrated evaluation unit 5 encompasses a segmentation module 12 which is configured for dividing unmasked surveillance image 6 into a background area and a person area 14 (FIG. 2). In this case, with the aid of segmentation module 12, areas in unmasked surveillance image 6 which encompass a person 10 are defined as a person area 14 and, for example, are represented as a completely covering rectangle or as a surrounding area. Areas without persons 10 are established by segmentation module 12 as the background area.
Integrated evaluation unit 5 encompasses a masking module 15. Masking module 15 is configured for creating a masked surveillance image 16, masked surveillance image 16 corresponding to unmasked surveillance image 6 including person areas 14 which have been filled with color or provided with a colored background. Person areas 14 provided with a colored background anonymize persons 10, so that no personal information is contained in masked surveillance image 16. Masking module 15 is connected to an output interface 17 of integrated evaluation unit 5 for the purpose of data transmission, output image data being provided to output interface 17 by masking module 15. The output image data encompass masked surveillance image 16. Unmasked surveillance images 6 without persons 10 and/or without personal information are, in general, masked surveillance images 16.
Camera housing 3 includes a camera interface 18, the output image data being provided to a user with the aid of camera interface 18.
A blocking module 19 is situated on the data connection between output interface 17 and masking module 15. Blocking module 19 is configured for fending off hacker attacks from the outside. Blocking module 19 is configured for preventing a data flow from output interface 17 to masking module 15, so that masking module 15, for example, may not be disabled and/or so that unmasked surveillance images 6 may not be tapped.
FIG. 2 schematically shows an unmasked surveillance image 6. Unmasked surveillance image 6 includes a plurality of objects 11 a, 11 b and 11 c and two persons 10 a and 10 b. Check module 9 is configured for recognizing persons 10 a and 10 b as persons and differentiating them from objects 11 a, 11 b and 11 c.
Segmentation module 12 has divided unmasked surveillance image 6 into a background area and two person areas 14 a and 14 b. Person area 14 a is generated by segmentation module 12 by circumscribing detected person 10 a with a rectangle, so that entire person 10 a is covered by person area 14 a. Person area 14 b is generated by segmentation module 12 by circumscribing the head of detected person 10 b with a rectangle, so that only the head of person 10 b is covered by person area 14 b.
FIG. 3 shows masked surveillance image 16 based on unmasked surveillance image 6. Masked surveillance image 16 shows objects 11 a, 11 b, 11 c and 11 d from unmasked surveillance image 6. Masked surveillance image 16 does not show person 10 a from unmasked surveillance image 6. Masked surveillance image 16 also does not show the head of person 10 b from unmasked surveillance image 6.
Integrated evaluation unit 5 of camera 1 includes an estimation module. The estimation module is configured for estimating a background for person area 14 a based on unmasked surveillance image 6 and/or other images of monitored area 2. The background is, in particular, the area of monitored area 2 which is concealed in unmasked surveillance image 6 by person 10 a. The background for person area 14 a is object 11 a in this case. Masking module 15 is configured for representing person area 14 a in masked surveillance image 16 filled with the background. Person 10 a is therefore represented in a masked manner and is anonymized. Masked person area 14 b shows one further type of masking. Masking module 15 is configured, in this case, for representing person area 14 b in the form of a colored filling. The head of person 10 b is no longer recognizable due to the colored filling in the masked surveillance image and, in this way, person 10 b is anonymized.

Claims (15)

What is claimed is:
1. A camera for monitoring a monitored area, comprising:
a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image; and
an evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing output image data;
wherein:
the evaluation unit includes a check module, a masking module, and a blocking module;
the check module is configured to detect a person in the unmasked surveillance image on the basis of the raw data;
the masking module is configured to generate a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image;
the evaluation unit is integrated into the camera;
the output image data exclusively encompasses masked surveillance images; and
the blocking module:
(1) is configured to ensure that the output image data exclusively encompasses the masked surveillance images by detecting a person in the image data being provided towards the output interface and, in response to the detection of the person by the blocking module, blocking output of the image data in which the blocking module has detected the person; and/or
(2) is configured to prevent a data flow in a direction into the evaluation unit via the output interface; and/or
(3) is situated between the masking module and the output interface with respect to data transmission for blocking and/or for preventing a transfer of raw data and/or a transfer of unmasked surveillance images to the output image data.
2. The camera of claim 1, further comprising:
a segmentation module for segmenting the unmasked surveillance image into a person area and a background area based on the raw data.
3. The camera of claim 2, wherein the masking module is for anonymizing the detected person in the masked surveillance image by colorizing the person area in the unmasked surveillance image.
4. The camera of claim 2, further comprising:
an estimation module for determining an estimated background for the person area, wherein the masking module is for masking the detected person in the masked surveillance image by filling the person area in the unmasked surveillance image with the estimated background.
5. The camera of claim 1, wherein the check module, the masking module, the estimation module and/or the integrated evaluation unit are configured as a neural network.
6. The camera of claim 1, wherein the check module, the masking module, the estimation module and/or the integrated evaluation unit are configured as a field programmable gate array.
7. The camera of claim 1, wherein the detection of the person by the check module is based on a motion detection.
8. The camera of claim 1, wherein the blocking module is situated between the masking module and the output interface with respect to data transmission, for blocking and/or for preventing the transfer of raw data and/or the transfer of the unmasked surveillance images to the output image data.
9. The camera of claim 8, wherein the blocking module is for preventing a data flow from the output interface towards the masking module and/or the camera sensor.
10. The camera of claim 1, wherein the camera sensor and the evaluation unit are situated in a shared housing.
11. The camera of claim 1, wherein the blocking module is configured to ensure that the output image data exclusively encompasses the masked surveillance images by detecting the person in the image data being provided towards the output interface and, in response to the detection of the person by the blocking module, blocking output of the image data in which the blocking module has detected the person.
12. The camera of claim 1, wherein the blocking module is configured to prevent the data flow in the direction into the evaluation unit via the output interface.
13. A camera for monitoring a monitored area, comprising:
a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image;
at least one processor integrated into the camera;
an input interface via which the at least one processor can receive the raw data; and
an output interface;
wherein the at least one processor is configured to:
detect a person in the unmasked surveillance image based on the raw data using a fuzzy logic;
generate a masked surveillance image in which the detected person is represented in a masked manner; and
output the masked surveillance image as output image data via the output interface.
14. A monitoring device for monitoring a monitored area, comprising:
a plurality of cameras;
wherein each of the cameras includes:
a camera sensor for generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area, the camera sensor being configured for providing raw data, the raw data encompassing the at least one unmasked surveillance image; and
an evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing output image data;
wherein:
the evaluation unit includes a check module, a masking module, and a blocking module;
the check module is configured to detect a person in the unmasked surveillance image on the basis of the raw data;
the masking module is configured to generate a masked surveillance image, the detected person being represented in a masked manner in the masked surveillance image;
the evaluation unit is integrated into the camera;
the output image data exclusively encompasses masked surveillance images; and
the blocking module:
(1) is configured to ensure that the output image data exclusively encompasses the masked surveillance images by detecting a person in the image data being provided towards the output interface and, in response to the detection of the person by the blocking module, blocking output of the image data in which the blocking module has detected the person; and/or
(2) is configured to prevent a data flow in a direction into the evaluation unit via the output interface; and/or
(3) is situated between the masking module and the output interface with respect to data transmission for blocking and/or for preventing a transfer of raw data and/or a transfer of unmasked surveillance images to the output image data.
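Claim 14 extends the per-camera pipeline to a monitoring device holding a plurality of cameras, each with its own integrated evaluation unit, so the device only ever aggregates already-masked images. A toy sketch of that aggregation, assuming a hypothetical `mask_frame()` standing in for each camera's check/mask/block chain:

```python
# Toy sketch of the claim-14 monitoring device: each camera carries its
# own evaluation unit, so the device aggregates masked images only.
# mask_frame() is a hypothetical stand-in for the check/mask/block chain.

def mask_frame(frame):
    """Per-camera evaluation unit: redact any 'person' cell."""
    return ["###" if cell == "person" else cell for cell in frame]

class Camera:
    def __init__(self, name):
        self.name = name

    def capture_masked(self, raw_frame):
        # Raw data never crosses the camera boundary unmasked.
        return mask_frame(raw_frame)

class MonitoringDevice:
    def __init__(self, cameras):
        self.cameras = cameras

    def collect(self, raw_frames):
        # The device only ever sees each camera's masked output.
        return {cam.name: cam.capture_masked(frame)
                for cam, frame in zip(self.cameras, raw_frames)}

device = MonitoringDevice([Camera("entrance"), Camera("yard")])
views = device.collect([["person", "door"], ["tree", "fence"]])
```

The design point of the claim is that masking happens inside each camera, so no central component of the device ever handles unmasked surveillance images.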
15. A method for monitoring a monitored area, the method comprising:
generating at least one unmasked surveillance image of the monitored area and/or of a subarea of the monitored area with the aid of a camera sensor;
providing raw data with the camera sensor, the raw data encompassing the at least one unmasked surveillance image;
providing an evaluation unit, the evaluation unit encompassing an input interface for receiving the raw data and an output interface for providing output image data, the evaluation unit encompassing a check module, a masking module, and a blocking module;
detecting a person in the unmasked surveillance image based on the raw data with the check module;
generating a masked surveillance image with the masking module, the detected person being represented in a masked manner in the masked surveillance image;
providing output image data which exclusively encompasses masked surveillance images with the evaluation unit, wherein the evaluation unit is integrated into the camera; and
the blocking module:
(1) ensuring that the output image data exclusively encompasses the masked surveillance images by detecting a person in the image data being provided towards the output interface and, in response to the detection of the person by the blocking module, blocking output of the image data in which the blocking module has detected the person; and/or
(2) preventing a data flow in a direction into the evaluation unit via the output interface; and/or
(3) blocking and/or preventing a transfer of raw data and/or a transfer of unmasked surveillance images to the output image data, with the blocking module being situated between the masking module and the output interface with respect to data transmission.
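The method of claim 15 can be read as a three-stage pipeline inside the camera: the check module detects the person, the masking module redacts it, and the blocking module gates the output interface so that no image in which it can still detect a person ever leaves the unit (variant (1) of the claim). A schematic sketch under that reading, with person detection reduced to a hypothetical `contains_person()` predicate:

```python
# Schematic sketch of the claim-15 method. Detection is reduced to a
# hypothetical predicate; the point is the data flow: check -> mask ->
# block, with the blocking module as the last gate before the output
# interface.

MASK_TOKEN = "###"  # stand-in for a pixelated / blacked-out region

def contains_person(frame):
    """Hypothetical check module: a frame 'contains a person' if any
    cell still holds the literal marker 'person'."""
    return any(cell == "person" for cell in frame)

def mask(frame):
    """Masking module: replace detected person cells with the mask."""
    return [MASK_TOKEN if cell == "person" else cell for cell in frame]

def blocking_module(frame):
    """Blocking module, variant (1): re-run detection on the image
    headed for the output interface and block it if a person is
    still visible. Returns the frame, or None if blocked."""
    return None if contains_person(frame) else frame

def evaluation_unit(raw_frame):
    """Input interface -> check -> mask -> block -> output interface."""
    frame = mask(raw_frame) if contains_person(raw_frame) else raw_frame
    return blocking_module(frame)

out = evaluation_unit(["wall", "person", "door"])
leak = blocking_module(["wall", "person", "door"])  # unmasked: blocked
```

Note the redundancy the claim builds in: even if an unmasked frame somehow bypassed the masking module, the blocking module's independent detection pass would refuse to emit it.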
US16/463,691 2016-11-30 2017-11-06 Camera for monitoring a monitored area and monitoring device, and method for monitoring a monitored area Active 2038-01-07 US10956752B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102016223859.0A DE102016223859A1 (en) 2016-11-30 2016-11-30 Camera for monitoring a surveillance area and monitoring device, and method for monitoring a surveillance area
DE102016223859.0 2016-11-30
PCT/EP2017/078286 WO2018099689A1 (en) 2016-11-30 2017-11-06 Camera for monitoring a monitored region, monitoring device, and method for monitoring a monitored region

Publications (2)

Publication Number Publication Date
US20190377958A1 US20190377958A1 (en) 2019-12-12
US10956752B2 true US10956752B2 (en) 2021-03-23

Family

ID=60293948

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/463,691 Active 2038-01-07 US10956752B2 (en) 2016-11-30 2017-11-06 Camera for monitoring a monitored area and monitoring device, and method for monitoring a monitored area

Country Status (6)

Country Link
US (1) US10956752B2 (en)
EP (1) EP3549115B1 (en)
CN (1) CN110036422A (en)
DE (1) DE102016223859A1 (en)
ES (1) ES2901388T3 (en)
WO (1) WO2018099689A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11375160B2 (en) 2018-10-10 2022-06-28 BSH Hausgeräte GmbH Cooking appliance having a camera and method for operating a cooking appliance
US11392719B2 (en) * 2019-03-28 2022-07-19 Samsung Electronics Co., Ltd. Electronic device and method for securing personal information included in image

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3640903B1 (en) * 2018-10-18 2023-12-27 IDEMIA Identity & Security Germany AG Signal dependent video surveillance
US12094250B2 (en) * 2019-03-07 2024-09-17 Nec Corporation Image processing apparatus, control method, and non-transitory storage medium
DE102019205221A1 (en) 2019-04-11 2020-10-15 Robert Bosch Gmbh Anonymization device, monitoring device, method, computer program and storage medium
WO2021141339A1 (en) * 2020-01-09 2021-07-15 씨드로닉스 주식회사 Method and device for monitoring port and ship in consideration of sea level
DE102020203475A1 (en) 2020-03-18 2021-09-23 Robert Bosch Gesellschaft mit beschränkter Haftung Anonymization device, monitoring device, method, computer program and storage medium
DE102020203473A1 (en) 2020-03-18 2021-09-23 Robert Bosch Gesellschaft mit beschränkter Haftung Anonymization device, monitoring device, method, computer program and storage medium
WO2022126250A1 (en) * 2020-12-18 2022-06-23 Raja Tuli Camera supported by solar wafer
US11593520B2 (en) * 2021-04-19 2023-02-28 Western Digital Technologies, Inc. Privacy enforcing memory system
DE102022201311A1 (en) 2022-02-08 2023-08-10 Robert Bosch Gesellschaft mit beschränkter Haftung Monitoring device for a storage area, intralogistics arrangement with the monitoring device and monitoring method
JP2023148898A (en) * 2022-03-30 2023-10-13 東芝ライテック株式会社 information processing system
ES3015549T3 (en) 2022-05-11 2025-05-06 Sick Ag Method and device for anonymizing a captured image in an industrial installation

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005286468A (en) 2004-03-29 2005-10-13 Mitsubishi Electric Corp Surveillance system and camera with masking function, and mask release device used with the camera
US20060064384A1 (en) 2004-09-15 2006-03-23 Sharad Mehrotra Apparatus and method for privacy protection of data collection in pervasive environments
DE102008007199A1 (en) 2008-02-01 2009-08-06 Robert Bosch Gmbh Masking module for a video surveillance system, method for masking selected objects and computer program
CN102067175A (en) 2008-03-31 2011-05-18 Google Inc. Automatic face detection and identity masking in images, and applications thereof
EP2429182A2 (en) 2010-09-13 2012-03-14 Smartspector Artificial Perception Engineering GmbH Method for masking personal information from a camera
US20140023248A1 (en) * 2012-07-20 2014-01-23 Electronics And Telecommunications Research Institute Apparatus and method for protecting privacy information based on face recognition
US20140056518A1 (en) * 2012-08-22 2014-02-27 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US20150222861A1 (en) * 2014-02-05 2015-08-06 Panasonic Intellectual Property Management Co., Ltd. Monitoring apparatus, monitoring system, and monitoring method
CN105354793A (en) 2015-11-25 2016-02-24 Xiaomi Inc. Facial image processing method and device
US20160125246A1 (en) * 2014-10-30 2016-05-05 Kent W. Ryhorchuk System and method for parking and traffic analysis
CN105704440A (en) 2014-11-25 2016-06-22 Honeywell International Inc. System and method of contextual adjustment of video fidelity to protect privacy
US20170011529A1 (en) * 2014-02-14 2017-01-12 Nec Corporation Video analysis system
US20170208243A1 (en) * 2014-07-18 2017-07-20 Artincam Ltd. Automatic image composition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Belbachir, A. N. (Ed.), "Chapter 16: Smart Cameras for Machine Vision," in Smart Cameras, Springer, New York, NY, 2010, pp. 283-303, ISBN 978-1-4419-0953-4, XP008157219.
International Search Report for PCT/EP2017/078286, dated Jan. 18, 2018.

Also Published As

Publication number Publication date
US20190377958A1 (en) 2019-12-12
DE102016223859A1 (en) 2018-05-30
EP3549115B1 (en) 2021-09-29
CN110036422A (en) 2019-07-19
WO2018099689A1 (en) 2018-06-07
ES2901388T3 (en) 2022-03-22
EP3549115A1 (en) 2019-10-09

Similar Documents

Publication Publication Date Title
US10956752B2 (en) Camera for monitoring a monitored area and monitoring device, and method for monitoring a monitored area
CN107273822B (en) A privacy protection method based on surveillance video multi-target tracking and face recognition
Kim et al. Intelligent intrusion detection system featuring a virtual fence, active intruder detection, classification, tracking, and action recognition
AU2009269607B2 (en) Apparatus and method of classifying movement of objects in a monitoring zone
KR101858396B1 (en) Intelligent intrusion detection system
ES2364915T3 (en) Video tripwire
KR101890589B1 (en) System and method of processing image for preserving private life
US20180101960A1 (en) Combination video surveillance system and physical deterrent device
Zaidi et al. Video anomaly detection and classification for human activity recognition
Sharma Human detection and tracking using background subtraction in visual surveillance
Yoon et al. Tracking System for mobile user Based on CCTV
DH et al. Autonomous vehicles camera blinding attack detection using sequence modelling and predictive analytics
Mahajan et al. 3D Object 360-Degree Motion Detection Using Ultra-Frequency PIR Sensor
Nadimi et al. Multistrategy fusion using mixture model for moving object detection
CN106898014A (en) A kind of intrusion detection method based on depth camera
Chattopadhyay Developing an Innovative Framework for Design and Analysis of Privacy Enhancing Video Surveillance
Su et al. Moving object tracking using an adaptive colour filter
Ramli et al. Comparison of human motion detection between thermal and ordinary images
Jones et al. A novel approach for surveillance using visual and thermal images
Sehairi et al. A Real-Time Implementation of Moving Object Action Recognition System Based on Motion Analysis
Chan A robust target tracking algorithm for FLIR imagery
KR101292907B1 (en) Human tracking system and method for privacy masking
Teixeira et al. A rule-based methodology and assessment for context-aware privacy
Becker et al. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams
Srilaya et al. Surveillance using video analytics

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ROBERT BOSCH GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GEILER, THOMAS;STRICKER, DIDIER;WASENMUELLER, OLIVER;AND OTHERS;SIGNING DATES FROM 20190722 TO 20190731;REEL/FRAME:050283/0126

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4