EP4078430A1 - Anonymized multi-sensor people tracking - Google Patents

Anonymized multi-sensor people tracking

Info

Publication number
EP4078430A1
Authority
EP
European Patent Office
Prior art keywords
environment
camera
sensor
least
processing means
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20757618.2A
Other languages
German (de)
French (fr)
Inventor
Raphaël Krings
Pierre-François Crousse
Gautier Krings
Karim Douieb
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jetpack SPRL
Original Assignee
Jetpack SPRL
Application filed by Jetpack SPRL
Publication of EP4078430A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images

Definitions

  • the object of the invention is to detect/track the position of subjects inside an environment with high quality while avoiding collecting and/or storing personal information about the subjects during the process and/or while reducing the computational power needed.
  • the difficulty hence resides in differentiating individuals one from another without referring to privacy-related traits such as face recognition or other techniques of this kind.
  • the invention refers to a system for anonymously detecting a plurality of subjects in an environment, said system comprising: a sensor module comprising at least one sensor for collecting sensor data of the environment; a processing means configured to detect subjects in the environment based on the sensor data of the environment.
  • the invention refers to an environment with the system previously described.
  • the invention refers to a first method for anonymously detecting a plurality of subjects in an environment, said first method comprising the steps of: collecting, with a sensor module comprising at least one sensor, sensor data of the environment; detecting, in a processing means, subjects in the environment based on the sensor data of the environment.
  • the invention refers to a second method for anonymously detecting a plurality of subjects in an environment, said second method comprising the steps of: receiving, in a processing means, sensor data of the environment from a sensor module with at least one sensor; detecting, in the processing means, subjects in the environment based on the sensor data of the environment.
  • the invention refers to a (non-tangible) computer program for anonymously detecting a plurality of subjects in an environment, said computer program being configured to perform the steps of the above-described second method, when executed on a processing means.
  • the invention refers to a (non-tangible) computer program product for anonymously detecting a plurality of subjects in an environment, said computer program product storing instructions configured to perform the steps of the above-described second method, when executed on a processing means.
  • the invention refers to a processing means configured to perform the steps of the above-described second method.
  • the invention refers to a sensor module for anonymously detecting a plurality of subjects in an environment, said sensor module comprising at least one sensor for collecting sensor data of the environment; a processing means configured to detect subjects in the environment based on the sensor data of the environment.
  • the object is solved by a system, environment, first method, second method, computer program, computer program product, processing means and/or sensor module as described above in combination with one or more of the following embodiments:
  • the at least one sensor comprises a low-resolution camera for capturing a low-resolution image of the environment.
  • the sensor data comprises a low-resolution image of the environment.
  • the processing means retrieves a feature of the subject from the low-resolution image of the environment to detect and/or anonymously identify the subject in the environment. This embodiment has the advantage that the low resolution of the images helps significantly to anonymously identify the subjects (i.e. to distinguish them from each other) but avoids that the person or true identity behind the subject can be retrieved from the images. Therefore, the sensor data of the low-resolution camera are not relevant for privacy rules.
  • the at least one sensor comprises an IR camera for capturing an IR image of the environment.
  • the sensor data comprises an IR image of the environment.
  • the processing means retrieves an IR feature of the subject from the IR image of the environment to detect and/or anonymously identify the subject in the environment.
  • the at least one sensor comprises a 3D camera for capturing a 3D image of the environment.
  • the sensor data comprises a 3D image of the environment.
  • the processing means retrieves a feature of the subject from the 3D image of the environment to detect and/or anonymously identify the subject in the environment.
  • the at least one sensor comprises an IR camera and one or both of a 3D camera and a low-resolution camera for capturing a low-resolution image of the environment.
  • the sensor data comprises an IR image of the environment and one or both of a 3D image of the environment and a low-resolution image of the environment.
  • the processing means retrieves at least one first feature of the subject from the IR image and at least a second feature from the low-resolution image and/or the 3D image. The combination of the IR camera with a low-resolution optical camera and/or a 3D camera proved to be very reliable and reduces the processing power; a sketch of such a multi-sensor feature vector follows below.
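As an illustration of this multi-sensor combination, the following sketch shows how per-sensor features could be collected into one anonymous signature per subject. The patent publishes no code; all names and the choice of Python are assumptions made only for illustration.

```python
# Illustrative sketch only: hypothetical names, not taken from the patent.
from dataclasses import dataclass

import numpy as np

@dataclass
class FeatureVector:
    """Anonymous per-subject signature combining features of several sensors."""
    avg_temperature: float   # from the IR image (e.g. degrees Celsius)
    heat_diameter: float     # diameter of the heat blob in the IR image (pixels)
    height: float            # subject height from the 3D image (metres)
    avg_colour: np.ndarray   # mean RGB of the subject in the low-res optical image

    def as_array(self) -> np.ndarray:
        # flatten the signature for distance computations between subjects
        return np.concatenate(
            [[self.avg_temperature, self.heat_diameter, self.height], self.avg_colour]
        )
```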
  • the camera is an optical camera in the spectrum of visible light.
  • the camera is a 3D camera.
  • the camera is an infrared camera.
  • said sensor module further comprises a second camera for capturing second images of the environment.
  • the processing means is configured to detect the subjects in the environment based on the images of the environment captured by the camera of the sensor module and based on the second images of the environment captured by the second camera of the sensor module.
  • the camera and the second camera are a first one and a second one of an optical camera in the spectrum of visible light, a 3D camera and an infrared camera (meaning two different ones of the listed cameras).
  • said sensor module further comprises a third camera for capturing third images of the environment.
  • the processing means is configured to detect the subjects in the environment based on the images of the environment captured by the third camera of the sensor module, wherein the third camera is the third one of the optical camera, the 3D camera and the infrared camera (meaning that the camera, the second camera and the third camera are each a different one of the three listed types of cameras).
  • the camera is a low-resolution camera.
  • the second camera is a low-resolution camera.
  • the third camera is a low-resolution camera.
  • the use of a combination of different types of low-resolution cameras allows a very reliable subject tracking with low-resolution images.
  • the low-resolution images have the advantage of a low power consumption, a low bandwidth for transmission and of not allowing the identification of the subjects. The latter point allows the images to be transmitted, stored and processed without any security measures.
  • the camera is a low-resolution infrared camera and the second camera is a low-resolution 3D camera. This combination proved to be very reliable.
  • the sensor module is realized as a sensor unit comprising a housing, the camera and the second camera and optionally also the third camera.
  • the camera and the second camera and optionally also the third camera are arranged within the housing.
  • the sensor module is configured to be mounted such that the directions of view of the at least two cameras, preferably of the three cameras of the at least one sensor of the sensor module, are arranged vertically facing downwards.
  • a sensor region of the sensor module is a part of the environment covered by all the cameras of the at least one sensor of the sensor module.
  • the sensor region is the part of the environment covered by all the images of the at least two cameras, preferably of the three cameras of the sensor module.
  • the sensor region corresponds thus to the region of the environment where the images of all the cameras of the sensor module overlap. This allows to have, for each subject in the sensor region, features from two or three different images from two or three different cameras to reliably (anonymously) identify the subject.
  • the processing means is configured to detect subjects in the environment based on two or more features retrieved from the image(s) of the at least one sensor, wherein the two or more features comprise at least one feature of each of the at least two sensors of the sensor module.
  • the processing means is configured to detect subjects in the environment based on a first feature retrieved from the images of the camera and based on a second feature retrieved from the images of the second camera.
  • the system can detect features of the subject from at least two independent images retrieved from at least two independent sensors of the sensor module.
  • the system comprises a plurality of further sensor modules.
  • each of the plurality of further sensor modules has the same features as described above for the sensor module.
  • all of the plurality of further sensor modules are identical to the sensor module. This facilitates, for large environments, an easy detection of the subjects in the environment, because the processing of the images of each sensor module and each further sensor module is the same.
  • the sensor module and the plurality of further sensor modules each comprise preferably an interface to send the images from the at least one sensor to the processing means.
  • the sensor module and the further sensor modules are distinct devices from the processing means.
  • the processing means is realised as at least one processing unit, each connecting all or a subset of the sensor module and the further sensor modules.
  • each sensor module is connected via a wired or wireless communication connection to one of the at least one processing units to transfer the images from the sensor modules to the respective connected processing unit.
  • the processing means is configured to associate to each subject detected in the environment an identifier for anonymously identifying the respective subject and to track each subject identified by its associated identifier in the environment.
  • the processing means comprises a first processing means for pre-processing the data received from the at least one sensor of the sensor module and a second processing means, wherein the first processing means is configured to detect subjects in the environment based on the images of the environment captured by the at least one sensor of the sensor module and to determine a pre-processing output with the position of the detected subjects in the environment, preferably with the tracked position and/or the tracking path of the subjects anonymously identified in the environment, wherein the second processing means performs further processing based on the pre-processing output.
  • the system comprising at least two of said sensor modules and at least two of said first processing means, wherein each of the at least two first processing means receives the data of the at least one sensor of at least one sensor module of the at least two sensor modules to determine the pre-processing output, wherein the second processing means comprises a combining module configured to receive the pre-processing output of the at least two first processing means and to combine the pre-processing outputs of the at least two first processing means to a combined output detecting the subjects in the environment, preferably for tracking the subjects anonymously identified in the environment.
  • each first processing means is configured to receive and pre-process the data from the sensor module (S110) of a different sub-region of the environment, wherein the second processing means is configured to combine the pre-processing outputs of the different sub-regions to a combined output of the combined sub-regions of the environment.
  • the first processing means is configured to receive the data from at least two sensor modules.
  • the processing means is configured to detect an event for a certain subject, wherein the event has an associated event location in the environment and an associated event time, wherein the event is associated to the subject based on the event and based on the location of the subjects in the environment at the time associated with the event.
  • the event is associated to the subject based on the subject being located, at the time associated with the event, at the location in the environment being associated with the event.
  • the event has a fixed location in the environment.
  • the data from the database could be anonymous as well but be connected in some way with the subject or the event of the subject.
  • the data could also be non-anonymous so that the itinerary could be re-connected to an identity of the person, e.g. for surveying a certain region of the environment for which there must be an access control.
  • the data from the database could also relate to a group of people, e.g. the persons of a certain flight, an employee, a person with a certain access control level. Therefore, the output of the subject detection or tracking can be used in big data analysis and machine learning. In particular for machine learning, the events provide an important automated feedback for improving the machine learning algorithm.
  • the machine learning could also comprise artificial intelligence or deep learning.
  • Figure 1 shows an exemplary embodiment of a retail shop as an environment.
  • Figure 2 shows a plurality of sensor modules distributed over the environment of Fig. 1.
  • Figure 3 shows the plurality of sensor modules and a plurality of sub- regions distributed over the environment of Fig. 1.
  • Figure 4 shows an exemplary schematic embodiment of the system of the invention.
  • the invention refers to a system, method and/or computer program for detecting a plurality of subjects in an environment, preferably for tracking the plurality of subjects in the environment, preferably for anonymously detecting/tracking the subjects in the environment.
  • Subjects are preferably humans or persons. However, the application could also be applied to different subjects like animals or certain kinds of objects.
  • the system could be configured to distinguish between different types of subjects, e.g. distinguished by their age, e.g. adults and minors, e.g. adults, minors below a certain height or age and minors above a certain height or age.
  • the subjects could also be objects.
  • the environment defines the (complete) area in which the subjects shall be detected (also called area under monitoring).
  • the environment is preferably a closed environment. Examples are retail shops, airports, amusement parks or other types of (public) environments with some sense of traffic in which it is of interest for the authorities managing the environment to understand how the disposition of the place is used by visitors.
  • the environment can refer to the whole airport or only to certain zones of interest of the airport, such as the security gates or the retail area of the airport.
  • the (closed) environment comprises preferably a ceiling. However, it is also possible to have environments without a ceiling, such as certain open-air amusement parks.
  • An example of a retail shop 100 as such an environment is shown in Fig. 1.
  • the environment 100 comprises preferably borders over which subjects cannot move.
  • Such borders comprise preferably the external borders of the environment 100 which enclose the environment.
  • the environment 100 comprises normally a gate zone through which subjects can enter and/or exit the environment 100.
  • a gate zone can be a door, passageway, gate, an elevator, stairs etc.
  • the external borders comprise normally a gate zone 103 through which the subjects can enter and/or exit the environment. However, it is also possible that the external borders are fully closed and there is a gate zone within the environment 100 (not shown in Fig. 1).
  • the gate zones can be for entering and exiting subjects or exclusively for one or the other (in the latter case at least two different gate zones are required).
  • the borders could comprise internal borders like the shelves 102, the cashiers 101, etc.
  • the environment 100 can further define certain functional objects of the environment such as shelves 102, cashiers 101, etc. to define the layout of the shop.
  • the different functional objects could have identifiers which could be used in the analytics processing (see later) and/or for displaying the environment 100 on a display (see later) and/or for defining the borders of the environment 100.
  • the environment 100 could define certain zones of interest, e.g. in Fig. 1 the zone 121 in front of the cashiers 101.
  • all sub-regions 100.n together cover the (entire) environment 100. Neighbouring sub-regions 100.n can overlap with each other.
  • Fig. 4 shows an exemplary embodiment of the system S100 for detecting subjects in the environment.
  • the system S100 comprises a processing means and at least one sensor module S110.
  • the sensor module S110 comprises at least one sensor S112, S113, S114.
  • the at least one sensor S112, S113, S114 is preferably configured to sense sensor data of (a sensor region of) the environment 100 allowing the detection of the position of subjects in the (sensor region of the) environment 100.
  • the sensor region of a sensor module S110 is the part of the environment 100 covered by the sensor module S110, i.e. the part of the environment 100 in which sensor data of subjects in the environment 100 can be collected by this sensor module S110.
  • the sensor region of the sensor module S110 is preferably the part of the environment 100 covered by all sensors S112, S113, S114 of the sensor module S110.
  • the sensor data retrieved by one, some or all of the at least one sensor S112, S113, S114 are preferably image data, for example two-dimensional image data, like image pixel data, or three-dimensional image data, like voxel image data.
  • the at least one sensor S112, S113, S114 captures the sensor data over time. This means normally that the at least one sensor S112, S113, S114 captures, in subsequent time or sample steps (normally with fixed intervals in between), one sensor data set for the respective point in time or sample step.
  • the sensor data set is preferably an image.
  • the at least one sensor S112, S113, S114 thus provides for each sensor a stream of sensor data sets or images (also called video); a minimal data structure for such a stream is sketched below.
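A minimal sketch of what one element of such a stream could look like; the field names are hypothetical assumptions, not taken from the patent.

```python
# Hedged sketch of one sensor data set ("frame") in the stream a sensor
# module emits; all names are illustrative.
from dataclasses import dataclass

import numpy as np

@dataclass
class SensorFrame:
    module_id: str       # which sensor module (e.g. "S110-03") produced the frame
    sensor_type: str     # "optical" | "ir" | "3d"
    timestamp: float     # sample time in seconds (fixed sampling interval)
    image: np.ndarray    # 2D pixel array, or a 2D depth map for the 3D camera

# A "video" is then simply the time-ordered sequence of such frames, sent to
# the processing means immediately after capture (no long-term storage).
```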
  • the sensor module S110 could have a storage for buffering the sensor data (recording them for a short time for technical reasons, not for the purpose of storing them) before sending them to the processing means.
  • the buffering could also comprise the case of storing the sensor data for the time during which a connection to the processing means is interrupted.
  • the sensor data are not stored in the sensor module S110 after the sensor data have been sent to the processing means.
  • the sensor module S110 could also work without a storage for the sensor data.
  • the sensor module S110 sends the sensor data to the processing means immediately after the sensor data have been collected by the at least one sensor S112, S113, S114.
  • the at least one sensor S112, S113, S114 comprises preferably an IR camera S114.
  • the IR camera S114 is configured to capture IR images of the sensor region of the sensor module S110 and/or of the IR camera S114 (over time).
  • the IR camera S114 is preferably a low-resolution camera.
  • the IR camera S114 comprises preferably a digital image sensor, e.g. a CMOS sensor.
  • the resolution of the IR camera S114 and/or of the digital image sensor is defined by the number of pixels of the digital image sensor and/or of the IR camera S114.
  • the at least one sensor S112, S113, S114 comprises preferably a three-dimensional (3D) camera S113.
  • the 3D camera S113 is configured to capture 3D images of the sensor region of the sensor module S110 and/or of the 3D camera S113 (over time).
  • the 3D camera is preferably a time-of-flight camera. However, different 3D cameras can be used.
  • the 3D camera S113 is preferably a low-resolution camera.
  • the sensor module and/or the 3D camera is preferably arranged in the environment such that a voxel recorded with the 3D camera corresponds to a space of the environment larger than 5 mm, preferably than 1 cm, preferably than 3 cm, preferably than 5 cm.
  • a space of the environment of x corresponds preferably to a cube in the environment with the three side lengths of x.
  • the 3D camera S113 comprises preferably a digital image sensor.
  • the resolution of the 3D camera S113 and/or of the digital image sensor is defined by the number of pixels of the digital image sensor and/or of the 3D camera S113.
  • the at least one sensor S112, S113, S114 comprises preferably an optical camera S112.
  • the optical camera S112 is configured to capture optical images of the sensor region of the sensor module S110 and/or of the optical camera S112 (over time).
  • the optical camera S112 captures optical images of the sensor region in the frequency or wavelength spectrum of visible light.
  • the optical camera S112 comprises preferably a digital image sensor, e.g. a CMOS sensor.
  • the resolution of the optical camera S112 and/or of the digital image sensor is defined by the number of pixels of the optical camera S112 and/or of the digital image sensor.
  • the optical camera S112 is a low-resolution camera.
  • the resolution of the low-resolution camera (of the optical camera S112 and/or of the 3D camera S113 and/or of the IR camera S114) is preferably so low that biometric features which allow to identify the identity of a person (not only to distinguish persons from each other) cannot be retrieved from the captured images.
  • the resolution of the low-resolution camera and/or of its image sensor is preferably lower than 0.5 megapixels, preferably lower than 0.4 megapixels, preferably lower than 0.3 megapixels, preferably lower than 0.2 megapixels, preferably lower than 0.1 megapixels, preferably lower than 0.09 megapixels, preferably lower than 0.08 megapixels, preferably lower than 0.07 megapixels, preferably lower than 0.06 megapixels, preferably lower than 0.05 megapixels, preferably lower than 0.04 megapixels, preferably lower than 0.03 megapixels, preferably lower than 0.02 megapixels, preferably lower than 0.01 megapixels (10,000 pixels), preferably lower than 0.005 megapixels (5,000 pixels).
  • the low-resolution optical sensor has a resolution of 32x32 pixels, resulting in 1024 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the IR sensor or the 3D camera.
  • the low-resolution 3D camera S113 has a resolution of 60x80 pixels, resulting in 4800 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the IR sensor or the optical sensor.
  • the low-resolution IR camera S114 has a resolution of 64x64 pixels, resulting in 4096 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the 3D sensor or the optical sensor.
  • the resolution, position and/or orientation of the low-resolution camera(s) and/or of the sensor module S110 is preferably chosen such that the low-resolution camera provides a resolution per square meter of the environment 100 lower than 50,000 pixels per square meter of the environment covered (pixels/sqm), preferably lower than 40,000 pixels/sqm, preferably lower than 30,000 pixels/sqm, preferably lower than 20,000 pixels/sqm, preferably lower than 10,000 pixels/sqm, preferably lower than 5,000 pixels/sqm, preferably lower than 4,000 pixels/sqm, preferably lower than 3,000 pixels/sqm, preferably lower than 2,000 pixels/sqm, preferably lower than 1,000 pixels/sqm. A small helper for checking this constraint is sketched below.
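As a worked check of this constraint, a camera's pixel count can be related to the floor area its sensor region covers. The function name and the example numbers below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: verify the anonymity constraint that resolution density
# per square meter of covered floor area stays below a chosen limit.
def pixels_per_sqm(width_px: int, height_px: int, covered_area_sqm: float) -> float:
    """Resolution density of a camera over the floor area it covers."""
    return (width_px * height_px) / covered_area_sqm

# Example: a 64x64 IR camera covering a 4 m x 4 m sensor region
density = pixels_per_sqm(64, 64, 16.0)   # 256 pixels/sqm
assert density < 1_000                   # below the strictest preferred limit
```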
  • the resolution of the low-resolution camera(s) is fixed, i.e. cannot be changed by the system.
  • the resolution of the low-resolution camera(s) can be configured by the system, preferably in the sensor module S110. This might have the advantage that the resolution can be configured in dependence of the height of installation, the light conditions, etc.
  • Such a low-resolution camera with a configurable resolution could be provided by a digital image processor with a configurable resolution filter or processor which reduces the resolution based on the configuration of the sensor module S110.
  • the configuration could be made by a hardware configuration means in the sensor module or by a software configuration means which could be controlled by the processing means.
  • the low-resolution camera of the sensor module S110 could be configured by the processing means by a calibration procedure which guarantees that no biometric feature of the subjects passing in the sensor region could be extracted. A resolution-reduction filter of the kind sketched below is one possible way to realise this.
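One plausible realisation of such a resolution filter, sketched here under the assumption of simple block averaging; the patent does not specify the algorithm, so this is illustrative only.

```python
# Hedged sketch of a configurable resolution filter: block-averaging
# downsampling that reduces a captured image to a target (low) resolution
# so that biometric detail is destroyed at the source. Assumes the input
# size is an integer multiple of the target size.
import numpy as np

def downsample(image: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    h, w = image.shape
    fh, fw = h // target_h, w // target_w
    # average each (fh x fw) block into a single output pixel
    return image[: target_h * fh, : target_w * fw].reshape(
        target_h, fh, target_w, fw
    ).mean(axis=(1, 3))

# Example: reduce a 640x640 sensor readout to the 32x32 optical resolution
low_res = downsample(np.random.rand(640, 640), 32, 32)
```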
  • the description of the low-resolution camera applies preferably for the optical camera S112, the IR camera S114 and/or the 3D camera S113.
  • the sensor module S110 comprises preferably an interface to send the sensor data from the at least one sensor S112, S113, S114 to the processing means.
  • the sensor module S110 is realised as a sensor unit, i.e. a distinct device from another device comprising (at least part of) the processing means such as a processing unit or a pre-processing unit.
  • the sensor unit or the sensor module S110 is preferably connected to the processing means, the processing unit or the pre-processing unit to send the sensor data, preferably the raw data of the at least one sensor S112, S113, S114.
  • the connection could be via a cable, e.g. Ethernet, LAN, etc., or wireless, e.g. Bluetooth, WLAN, LoRa, WAN, etc.
  • the sensor module S110 is preferably configured to send the sensor data in real time to the processing means.
  • the three presented sensors S112, S113, S114 in the sensor module S110 are advantageous, because they allow to retrieve data about the subjects in the sensor region which allow to distinguish the subjects from each other, i.e. to anonymously identify the subjects, without recording data of the subjects which could be used to identify the subjects.
  • One of those three sensors could be used in the sensor module S110 alone or in combination with other sensors.
  • a combination of at least two of the three described sensors S112, S113, S114 significantly improves the detection quality of the subject, i.e. allows to identify the subjects reliably and anonymously.
  • the combination of the IR camera S114 and one or two of the optical camera and the 3D camera proved to be very reliable. Therefore, notwithstanding the absence of classical identifiers such as face recognition, the combination of the described sensors provides a reliable detection quality.
  • the combination of all three sensors S112, S113, S114 proved to be very reliable.
  • the combination of at least two, preferably three, low-resolution cameras as sensors of the sensor module S110 proved to be very reliable notwithstanding the low-resolution images used.
  • the at least two low-resolution cameras comprise preferably two, preferably three, of a low-resolution IR camera S114, a low-resolution optical camera S112 and a low-resolution 3D camera S113.
  • the at least two low-resolution cameras comprise preferably a low-resolution IR camera S114 and a 3D camera S113, preferably also a low-resolution optical camera S112.
  • the system S100 comprises preferably a plurality of sensor modules S110.
  • the above description of the sensor module S110 applies for all sensor modules S110 of the system S100 or of the plurality of sensor modules S110.
  • all sensor modules of the system S100 are equal.
  • the number of sensor modules S110 required depends on the size of the environment 100.
  • the sensor modules S110 are distributed over the environment 100 to cover the (complete) environment 100 or at least the part of the environment which is of interest for the monitoring.
  • the sensor modules S110 are preferably distributed such that the sensor regions of all sensor modules S110 cover the (complete) environment 100 or at least the part of the environment which is of interest for the monitoring.
  • the sensor regions can overlap with neighbouring sensor regions.
  • the sensor regions of the plurality of sensor modules S110 cover at least 50% of the environment 100, preferably at least 60%, preferably at least 70%, preferably at least 80%, preferably at least 90%, preferably at least 95% of the environment 100 for which subjects shall be tracked.
  • Fig. 2 shows an exemplary embodiment for the distribution of the sensor modules S110 over the environment 100.
  • the sensor modules S110 are preferably mounted such that the directions of view of the at least one camera, preferably of all cameras of the at least one sensor S112, S113, S114 of the sensor module S110, are arranged vertically facing downwards.
  • this is preferably achieved by mounting the at least one sensor module S110 on the ceiling of the environment 100.
  • a non-vertical direction of view is also possible; however, this has the disadvantage that some subjects might be covered by other subjects.
  • the processing means is configured to detect subjects in the environment 100 based on the sensor data captured by the at least one sensor of the at least one sensor module S110 from the sensor regions covering the environment 100.
  • the detection of a subject in the image(s) is preferably based on one or more features retrieved from the image(s).
  • the detection of a subject is based on two or more features retrieved from the image(s) of the at least one sensor S112, S113, S114 (subsequently called feature vector without limitation of the invention).
  • the feature vector comprises preferably one feature of each of the at least two sensors S112, S113, S114 of the sensor module S110.
  • Example features retrieved from the IR images are the average temperature of the subject or the diameter of the heat point representing the subject.
  • Example features retrieved from the 3D images are the height of the subject or the vector describing the contour of the person (as seen from above).
  • Example features retrieved from the optical images are the average colour of the detected subject. Other or additional features may be used for the rough detection of where a subject is in the images; a sketch of such feature extraction follows below.
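A hedged sketch of how the example features above could be computed from the three images of one subject; the helper name and the mask-based segmentation are assumptions made for illustration.

```python
# Illustrative feature extraction for one detected subject. `mask` is a
# boolean array marking the subject's pixels; all names are hypothetical.
import numpy as np

def extract_features(ir: np.ndarray, depth: np.ndarray,
                     rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    avg_temp = ir[mask].mean()                       # average temperature (IR)
    heat_diameter = 2 * np.sqrt(mask.sum() / np.pi)  # diameter of the heat blob
    height = depth[mask].max()                       # subject height (3D, simplified)
    avg_colour = rgb[mask].mean(axis=0)              # average colour (optical)
    return np.concatenate([[avg_temp, heat_diameter, height], avg_colour])
```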
  • the processing means is further configured to anonymously identify the subjects detected in the environment 100 based on the sensor data, in particular based on the at least one feature, preferably on the feature vector, of the subject.
  • the feature vector of a subject provides a kind of signature of the subject which allows to anonymously identify the subject in the environment, i.e. each subject can be distinguished (at any time) from the other subjects in the environment without the need of features which allow to identify the person behind the subject and/or which require, according to some privacy rules, the agreement of the subject.
  • the anonymous identification of the subject is preferably based on a probabilistic approach such that, when one or a few features of the feature vector of the subject change, the subject can still be anonymously identified. This can be realised with a high reliability based on the location of the subject when the one or few features of the feature vector of the subject change; a minimal matching sketch follows below.
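The following sketch illustrates one possible reading of this probabilistic approach: each new observation is assigned to the anonymous track whose last position and signature are jointly closest, so a change in a single feature does not break the identity. The distance weighting is an assumption, not taken from the patent.

```python
# Hedged sketch of position-plus-signature matching; names are illustrative.
import numpy as np

def match_subject(position: np.ndarray, signature: np.ndarray,
                  tracks: dict[int, tuple[np.ndarray, np.ndarray]],
                  w_pos: float = 1.0, w_sig: float = 0.5) -> int:
    """Return the identifier of the best-matching tracked subject.

    `tracks` maps an anonymous id to (last position, last feature vector);
    it must be non-empty.
    """
    def cost(track):
        last_pos, last_sig = track
        return (w_pos * np.linalg.norm(position - last_pos)
                + w_sig * np.linalg.norm(signature - last_sig))
    return min(tracks, key=lambda sid: cost(tracks[sid]))
```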
  • An (anonymous) identifier can be associated to each subject anonymously identified.
  • the processing means is preferably configured to track a (or each) subject identified in the environment.
  • the tracking of the (or each) subject in the environment 100 gives the position of the (or each) subject over time.
  • Such a time-resolved itinerary allows to have the information of the itinerary of each subject in the environment and of the time it spent at certain locations or zones.
  • the processing means is preferably configured to determine the actual position of the subjects in the environment 100 at the actual point of time. Thus, the processing means can determine the position of the subjects in the environment 100 in real time.
  • the processing means could comprise an advanced processing means or engine S133 for computing advanced measures regarding the subjects.
  • the advanced processing means S133 is preferably configured to compute the real-time measures retrieved from the real-time detection or tracking of the subjects in the environment 100.
  • One example of such an advanced subject measure is the number of subjects being, at a certain point of time, e.g. at the actual time, in a zone of interest of the environment 100, e.g. the zone 121 in front of the cashiers 101.
  • Another example of such an advanced subject measure could be the average time the subjects statistically spend at a location, in a certain zone or in the environment for a certain period of time. This can be shown in a map of the environment showing the average time spent at each location as an intensity, as in a heat map; a dwell-time accumulation sketch follows below.
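A minimal sketch of such a dwell-time heat map; the grid size, cell size and sampling interval are illustrative assumptions, and positions are assumed to lie inside the grid.

```python
# Hedged sketch: discretise the environment floor into grid cells and add
# one sampling interval to a cell for every position sample falling in it.
import numpy as np

GRID = (50, 80)   # cells covering the environment floor plan (assumption)
CELL = 0.5        # cell edge length in metres (assumption)
DT = 0.1          # sampling interval in seconds (assumption)

def dwell_heatmap(itineraries: list[list[tuple[float, float]]]) -> np.ndarray:
    heat = np.zeros(GRID)
    for itinerary in itineraries:            # one (x, y) track per subject
        for x, y in itinerary:
            i, j = int(y / CELL), int(x / CELL)
            heat[i, j] += DT                 # time spent in this cell
    return heat / max(len(itineraries), 1)   # average seconds per subject
```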
  • the processing means could comprise an event detector S144.
  • the event detector could detect an event for a certain subject.
  • the event is associated with a location in the environment 100 and a time.
  • the event can be for example a ticket at one of the cashiers 101, and the location of the environment 100 associated with the event can be the location of this cashier 101 in the environment.
  • the event detector S144 could associate the event to the subject based on the event and based on the location of the subjects in the environment at the time associated with the event.
  • the event detector S144 could associate the event to the subject based on the subject being located, at the time associated with the event, at the location in the environment 100 being associated with the event.
  • the event has a fixed location.
  • the fixed event location could be for example the location of the cashier of the environment or of a badge station for opening a door.
  • the subject being, at the time associated with the event, at the cashier 101 which created the ticket must be associated with the ticket.
  • This event detector is very important to get an automatic feedback on the subjects with a higher granularity than yes or no. Such an automated feedback is very important for analysing the results produced in the processing means, e.g. for using the data of the subject detection and/or tracking in machine learning analytics.
  • each event type has a fixed location in the environment.
  • if the first cashier gives out a ticket for an acquisition of a subject at a first time, the ticket is associated to the subject being, at the first time, at the location of the first event type.
  • Another example of an event could be a detector for a badge. The employees need to badge for time-stamping their work or for accessing a certain area. The badging would be the event.
  • the anonymous subject and/or its itinerary can be connected with the non-anonymous person identified with the badge in a database or with an anonymous category of subjects in a database.
  • This allows to identify for example certain categories of subjects which shall not be considered for certain analyses.
  • Another example of an event could be the scan of the boarding pass before boarding the plane and/or before passing the security check in an airport. This allows to connect the itinerary of a person with a certain flight or location of the flight. If the event is the scan of the boarding pass before passing the security check and the data from the database is any identifier of the boarding pass (e.g. ...), the system could calculate, from the corresponding identifier of the boarding pass and the location of the subject connected with the event or boarding pass in the environment, the distance or time for the person to get to a gate. This supports airlines in the decision of closing a gate, i.e. whether it makes sense to wait for a person missing for boarding the plane. In addition, this allows to get for certain applications the identity of a person, even if the sensor module S110 gives no personal identity about the subjects detected.
  • the processing means is configured to receive from an event database an event time and an event location, to detect the subject being at the event time at the event location of the environment and to associate the event with the detected subject being at the event time at the event location of the environment.
  • the processing means is further configured to detect the subject to be associated to the event based on a certain event behaviour.
  • the event behaviour could be a certain minimum waiting time at the event location. This could help to distinguish different subjects being at the event time at the event location; an association sketch follows below.
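A sketch of this event association, assuming an illustrative matching radius and minimum dwell time; the function and the data layout are hypothetical.

```python
# Hedged sketch: associate an event (time + fixed location) with the tracked
# subject that was at that location at that time, requiring a minimum dwell
# time there (the "event behaviour").
import numpy as np

def associate_event(event_time: float, event_pos: np.ndarray,
                    tracks: dict[int, list[tuple[float, np.ndarray]]],
                    radius: float = 1.0, min_dwell: float = 5.0) -> int | None:
    """Return the anonymous id of the subject matching the event, if any."""
    for sid, samples in tracks.items():       # samples: (time, position) pairs
        near = [t for t, pos in samples
                if np.linalg.norm(pos - event_pos) < radius]
        # subject must be near the event location at the event time and must
        # have waited there at least `min_dwell` seconds
        if near and min(near) <= event_time <= max(near) \
                and max(near) - min(near) >= min_dwell:
            return sid
    return None
```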
  • the processing means could comprise an analytics means or engine S141.
  • the analytics means is configured to analyse and/or further process the information about the subjects detected in the environment 100, the information about the position over time of the subjects tracked in the environment 100 and/or the results from the advanced processing means S133.
  • the analytics means S141 is preferably configured to compute measures which need not or cannot be computed in real time.
  • the analytics means S141 could determine, based on a plurality of subjects, based on the event detected for each subject and based on the tracked itinerary of each subject, conclusions about the best location of the environment 100 (e.g. for exposing products) or about a ranking of the locations of the environment 100.
  • the processing means could comprise a notification means or engine S143 for creating notifications based on the results from the detection of subjects or from the tracking of subjects.
  • a notification created based on the results from the detection or tracking of the subjects could be a notification created from the results of the advanced processing means S133 or of the analytics means S141.
  • An example could be that a notification, e.g. an alert, is created when the number of subjects detected in a zone of interest increases above a threshold or the (average) waiting time of the subjects increases above a threshold.
  • the zone of interest could be the zone 121 in front of the cashiers 101 with the waiting zone of the clients. This allows the shop manager to react immediately if the queue or the waiting time for the clients gets too long.
  • the threshold could also be selected depending on the number of open cashiers 101.
  • Another notification could be created if a subject enters a zone of interest in which it is not allowed to enter. There are many more possibilities for notifications; a simple threshold notification is sketched below.
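A minimal sketch of such a queue alert, with the per-cashier threshold as an illustrative assumption; names are hypothetical.

```python
# Hedged sketch: notify when the number of subjects in a zone of interest
# exceeds a threshold that scales with the number of open cashiers.
def queue_alert(subjects_in_zone: int, open_cashiers: int,
                per_cashier_limit: int = 4) -> str | None:
    threshold = open_cashiers * per_cashier_limit
    if subjects_in_zone > threshold:
        return (f"ALERT: {subjects_in_zone} clients waiting in zone 121 "
                f"with only {open_cashiers} cashiers open")
    return None
```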
  • the processing means could comprise an output interface, e.g. a man-machine interface (MMI) S142.
  • the output interface could be a display such that the results computed in the processing means, the analytics means S141, the notification means S143 and/or the advanced processing means S133 can be shown on a display, e.g. the current map of the environment with the detected subjects.
  • the output interface could however also be a signal interface through which the results are given out, e.g. an interface to a network, e.g. the internet.
  • the output interface could further output notifications of the notification means S143.
  • the processing means could further comprise an input interface for receiving requests or other information, e.g. for configuring the system S100 for the environment.
  • the input interface could be the MMI S142.
  • the processing means can be realized by any kind of processing means having the described functionalities.
  • the processing means could be a computer, a specialized chip or a system comprising at least two sub-processing units, as in cloud computing, in distributed processing or in data processing centres.
  • the processing means could be completely or partly in a central processing unit or could be completely or partly distributed over the sensor modules S110 or over intermediate processing units or modules, each responsible for the processing of the sensor data of a subset of sensor modules S110.
  • the processing means comprises at least one first processing module S120 (also called pre-processing module) for pre-processing the sensor data received from a subset of the sensor modules S110 of the system S100 and a second processing module S130, S140 for further processing the pre-processed data of the at least one pre-processing module S120.
  • the sensor region(s) of each subset of sensor modules S110 define a sub-region 100.n of the environment 100, as shown exemplarily in Fig. 3.
  • the subset of sensor modules S110 connected with one of the at least one pre-processing modules S120 comprises at least two sensor modules S110. However, it is also possible that only one sensor module S110 is connected to one pre-processing module S120.
  • the pre-processing module S120 is realised in a pre-processing unit, i.e. a device being distinct from the sensor unit or sensor module S110. This allows to group the pre-processing for a subset of sensor modules S110 in one pre-processing unit S120.
  • the system S100 comprises at least two subsets of sensor modules S110, wherein each subset of sensor modules S110 is connected with a different pre-processing module S120 or pre-processing unit. Even if it is preferred to realise the pre-processing module S120 as a hardware device, i.e. as the pre-processing unit, it would also be possible to realize the pre-processing module S120 as a software module in a central processing unit.
  • each pre-processing module S120 is configured to receive the sensor data of the at least one sensor module S110 of the subset of sensor modules associated/connected with the pre-processing module S120 and is configured to detect subjects in the sub-region 100.n of the pre-processing module S120, to anonymously identify the detected subjects and/or to track the subjects in the sub-region 100.n of the pre-processing module S120 based on the received sensor data.
  • the pre-processing module S120 comprises preferably a subject detection/identification means S124 (also called signature engine) configured to detect the position of each subject in the sub-region 100.n of the pre-processing module S120 and to anonymously identify the detected subjects (for a certain point in time).
  • the signature engine S124 gives out the positions and feature vectors of each subject detected in the sub-region 100.n for the certain point in time. This is preferably done sequentially, preferably periodically, for subsequent points in time.
  • the pre-processing module S120 comprises preferably a tracking means S123 or tracking engine for tracking the subjects in the sub-region 100.n of the pre-processing module S120 based on the output from the signature engine S124, in particular based on the position and feature vector of each subject in the sub-region 100.n for subsequent points in time.
  • the pre-processing module S120 or the signature engine S124 combines the images of the sensor region of each sensor module S110 to a combined image of the sub-region 100.n and detects and anonymously identifies the subjects in the sub-region 100.n based on the combined image of the sub-region 100.n.
  • alternatively, the pre-processing module S120 or the signature engine S124 detects and anonymously identifies the subjects in each sensor region based on the image of the respective sensor region to create a detection and identification output for each sensor region, and then combines the detection and identification outputs of all sensor regions of the sub-region 100.n to obtain the combined detection and identification output for the sub-region 100.n; a sketch of this per-region-then-combine variant follows below.
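A sketch of this second variant, assuming a simple proximity-based de-duplication of detections from overlapping sensor regions; the `Detection` type and the merge radius are illustrative assumptions.

```python
# Hedged sketch: detect subjects per sensor region, then merge the
# per-region outputs into one output for the sub-region, de-duplicating
# detections seen by two overlapping regions.
from dataclasses import dataclass

import numpy as np

@dataclass
class Detection:
    position: np.ndarray   # (x, y) in environment coordinates
    signature: np.ndarray  # anonymous feature vector

def combine_regions(per_region: list[list[Detection]],
                    merge_radius: float = 0.5) -> list[Detection]:
    combined: list[Detection] = []
    for detections in per_region:
        for det in detections:
            # same subject seen by two overlapping sensor regions?
            if not any(np.linalg.norm(det.position - c.position) < merge_radius
                       for c in combined):
                combined.append(det)
    return combined
```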
  • the modular approach, with the number of pre-processing modules S120 used depending on the number of sensor modules S110 used in the environment, is preferred. Nevertheless, it is also possible to do the pre-processing in only one pre-processing module for all sensor modules S110 of the environment, then preferably in a central processing unit.
  • the pre-processing module(s) S120 or pre-processing unit is preferably an intermediate device arranged (hierarchically) between the sensor modules S110 or sensor units and the central processing unit.
  • the pre-processing output of the/each pre-processing module S120 is sent to the second processing means.
  • the pre-processing output of a pre-processing module S120 is preferably the detected, anonymously identified and/or tracked subjects in the sub-region 100.n of the respective pre-processing module S120.
  • the pre-processing output for a certain point in time is the position of the subject(s) detected and anonymously identified in the sub-region 100.n.
  • the pre-processing output for a certain point in time is the position and the feature vector of the subject(s) detected in the sub-region 100.n.
  • the pre-processing module S120 sends preferably the pre-processing output of subsequent time points to the second processing means.
  • the second processing means is preferably configured to receive the pre-processing output of each sub-region 100.n from each pre-processing module S120 and to detect and/or track the subjects in the environment 100 by combining the pre-processing output(s) of the sub-regions 100.n. This is done preferably in a reconciliation means or engine S134, sketched below.
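A hedged sketch of what such a reconciliation step could look like, assuming signature-distance matching with an illustrative tolerance; none of the names are from the patent.

```python
# Hedged sketch of the reconciliation idea: per-sub-region outputs arrive as
# (position, signature) pairs; a subject crossing from one sub-region into an
# overlapping one is unified by matching signatures, so one environment-wide
# anonymous id survives the hand-over. Updates `known` (id -> signature) in place.
import numpy as np

def reconcile(sub_region_outputs: list[list[tuple[np.ndarray, np.ndarray]]],
              known: dict[int, np.ndarray],
              sig_tol: float = 0.3) -> dict[int, np.ndarray]:
    """Map environment-wide anonymous ids to current positions."""
    result: dict[int, np.ndarray] = {}
    next_id = max(known, default=0) + 1
    for output in sub_region_outputs:
        for position, signature in output:
            # re-use the id of the closest known signature, else assign a new one
            matches = [(np.linalg.norm(signature - s), sid)
                       for sid, s in known.items()]
            dist, sid = min(matches, default=(np.inf, None))
            if dist < sig_tol:
                result[sid] = position
            else:
                known[next_id] = signature
                result[next_id] = position
                next_id += 1
    return result
```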
  • the output of the reconciliation engine S134 is preferably the detected, anonymously identified and/or tracked subject(s) in the environment 100.
  • the output of the reconciliation engine S134 for a certain point in time is the position of the subject(s) detected and anonymously identified in the environment.
  • the output for a certain point in time is the position and an identifier of the subject(s) detected in the environment.
  • the reconciliation engine S134 outputs preferably the output of subsequent time points.
  • the output can be used by the advanced processing engine S133, the analytics engine S141, the notification engine S143 and/or the event detector S144 for further processing, in particular for real-time processing.
  • the pre-processing module S120 could comprise a storage S122.
  • the storage S122 is preferably configured to store sensor data received from the sensor modules S110 connected to the pre-processing module S120 (at least until the sensor data are processed).
  • the storage S122 is preferably configured to store pre-processing output data to be output to the second processing means.
  • the storage S122 is preferably configured to store the pre-processing output of at least one previous processing step, i.e. of the previous sampling time, for improving the detection and/or tracking of the subjects based on the position of the anonymously identified subjects at the at least one previous processing step.
  • the second processing means could further comprise a storage S132.
  • the storage S132 stores the subjects detected, anonymously identified and/or tracked (over time) in the environment 100, i.e. the storage S132 stores the output of the reconciliation engine S134. This allows to use the data about the subjects detected, anonymously identified and/or tracked (over time) in the environment 100 at a later point in time for further analysis, e.g. by the advanced processing engine S133, the analytics engine S141, the notification engine S143 and/or the event detector S144.
  • the storage S132 could further be used to buffer the pre-processing output(s) from the pre-processing module(s) S120.
  • the pre-processing output(s) are removed from the storage S132 once they have been processed in the second processing means, in particular in the reconciliation engine S134.
  • the second processing means in the shown embodiment comprises preferably two modules S130 and S140.
  • the first processing module S130 receives the pre-processing output(s) from the at least one pre-processing module S120 and processes this output to detect, anonymously identify and/or track the subjects in the environment 100, preferably to determine the position over time of the subjects anonymously identified. Therefore, the reconciliation engine S134 is arranged in the first processing module S130.
  • the storage S132 is arranged in the first processing module S130.
  • the advanced processing engine S133 is arranged in the first processing module S130.
  • the second processing module S140 uses the output of the first processing module S130 to do analytics, to interact with the user, to create notifications, to detect events and/or to process the output of the first processing module S130 in any other way.
  • the second processing module S140 comprises preferably the analytics engine S141, the input interface, the output interface, the notification engine S143, the event detector S144 and/or any other means for processing the output of the first processing module S130.
  • the embodiment shown in Fig. 4 thus uses a system S100 with three, preferably four, layers.
  • the third layer could preferably be split into a third layer with the first processing module S130 and a fourth layer with the second processing module S140.
  • alternative architectures are also possible.
  • the system S100 could be organized in two layers with a sensor layer and a central processing layer.

Abstract

System for anonymously detecting a plurality of subjects in an environment, said system comprising: A sensor module (S110) comprising at least one sensor (S112, S113, S114), wherein the at least one sensor (S112, S113, S114) comprises a low-resolution camera (S112) for capturing images of the environment (100); and a processing means (S120, S130) configured to detect subjects in the environment based on the images of the environment captured by the camera (S112) of the sensor module (S110).

Description

Anonymized multi-sensor people tracking
Technical Domain
The present invention relates to a system for the automated detection of subjects, in particular of humans, in an environment, while keeping the identity of the detected people hidden in order to respect their privacy.
Technological Background of the invention
It is known to track moving subjects in an environment, especially the movement of human beings in a closed environment such as a shop. Such a shop needs to optimize its structure to ensure a satisfying shopping experience for its clients, which will improve sales and profitability. An example of such an optimization would be to detect the most common routes used by said clients when moving through the shop and to make sure the most attractive goods are positioned on these routes. In order to map these routes, the clients need to be identified individually and data regarding their route through the store needs to be stored. This optimization, however, depends on the quality of the tracking solution used for the subjects.
Currently, there are several known tracking systems.
One tracking solution uses Bluetooth beacons to detect the smartphones of the subjects. These beacons are capable of detecting nearby devices having activated their Bluetooth functionality. However, such beacons are not capable of tracking the movement of the subject related to the device; they can only detect its presence. As an additional drawback, subjects without a Bluetooth-enabled device cannot be detected, and subjects having more than one Bluetooth-enabled device will create false positives. A similar solution detects the devices of the subjects via the Wi-Fi network in place. This solution has similar drawbacks as Bluetooth beacons. Thus, these beacon-based solutions do not provide good position tracking of the subjects in the environment and often do not cover all subjects.
Another solution uses security cameras or other high-definition cameras in conjunction with image recognition techniques, as disclosed for example in WO2018/152009A1, US2019/0205933A1 and EP2270761A1. These techniques require a substantial amount of computation power and face privacy-related constraints such as the European General Data Protection Regulation (GDPR). The GDPR requires the permission of the person itself to collect personalized data (such as video) and would thus require the store manager to ask every single client for permission to be filmed in order to have his or her route tracked during the visit of the store. Therefore, most current state-of-the-art solutions are not compatible with the current privacy rules.
Some solutions use dedicated people-tracking sensors distributed over the environment. In one embodiment using high-resolution cameras in the tracking sensors, the privacy issue is solved by performing image processing in the tracking sensor itself, so that only an anonymized map with detected subjects is given out from the tracking sensor. However, this solution requires complex image processing chips in each tracking sensor, which makes the sensors complex and expensive. In addition, each tracking sensor requires a high computational power. In other solutions, other sensor types are used, such as infrared (IR) cameras or 3D cameras. Infrared cameras might not be problematic for privacy; however, they are not reliable enough to detect all subjects without a high number of false positives and without mixing subjects up when they come close to each other. 3D cameras, on the other hand, have a similar problem as the high-resolution optical cameras.
There are also tracking solutions for home automation which, however, try to identify the persons, as in US2018/0231653A1. These solutions use existing sensors in the home, such as visible-light cameras, IR cameras, 3D cameras and many others, to detect the persons. The many different sensors used create problems in handing persons over between sensors. In addition, these tracking solutions are not configured to detect many people in large and crowded spaces. For privacy concerns in living spaces, bedrooms and bathrooms, relatively lower-resolution sensors such as radar sensors can be used. However, the tracking solution proposed identifies the tracked persons and cannot guarantee the anonymity of the tracked subjects.
The object of the invention is to detect/track the position of subjects inside an environment with a high quality while avoiding collecting and/or storing personal information about the subjects during the process and/or while reducing the computational power needed. The difficulty hence resides in differentiating individuals from one another without referring to privacy-related traits such as face recognition or other techniques of this kind.
Brief summary of the invention
The invention refers to a system for anonymously detecting a plurality of subjects in an environment, said system comprising: a sensor module comprising at least one sensor for collecting sensor data of the environment; and a processing means configured to detect subjects in the environment based on the sensor data of the environment.
The invention refers to an environment with the system previously described.
The invention refers to a first method for anonymously detecting a plurality of subjects in an environment, said first method comprising the steps of: collecting, with a sensor module with at least one sensor, sensor data of the environment; and detecting, in a processing means, subjects in the environment based on the sensor data of the environment.
The invention refers to a second method for anonymously detecting a plurality of subjects in an environment, said second method comprising the steps of: receiving, in a processing means, sensor data of the environment from a sensor module with at least one sensor; and detecting, in the processing means, subjects in the environment based on the sensor data of the environment.
The invention refers to a (non-tangible) computer program for anonymously detecting a plurality of subjects in an environment, said computer program being configured to perform the steps of the above-described second method when executed on a processing means.
The invention refers to a (non-tangible) computer program product for anonymously detecting a plurality of subjects in an environment, said computer program product storing instructions configured to perform the steps of the above-described second method when executed on a processing means.
The invention refers to a processing means configured to perform the steps of the above-described second method.
The invention refers to a sensor module for anonymously detecting a plurality of subjects in an environment, said sensor module comprising at least one sensor for collecting sensor data of the environment; and a processing means configured to detect subjects in the environment based on the sensor data of the environment.
The object is solved by a system, environment, first method, second method, computer program, computer program product, processing means and/or sensor module as described above in combination with one or more of the following embodiments:
In one embodiment, the at least one sensor comprises a low-resolution camera for capturing a low-resolution image of the environment. In one embodiment, the sensor data comprises a low-resolution image of the environment. In one embodiment, the processing means retrieves a feature of the subject from the low-resolution image of the environment to detect and/or anonymously identify the subject in the environment. This embodiment has the advantage that the low resolution of the images helps significantly to anonymously identify the subjects (i.e. to distinguish them from each other) but prevents the person or true identity behind a subject from being retrieved from the images. Therefore, the sensor data of the low-resolution camera are not relevant for privacy rules.
In one embodiment, the at least one sensor comprises an IR camera for capturing an IR image of the environment. In one embodiment, the sensor data comprises an IR image of the environment. In one embodiment, the processing means retrieves an IR feature of the subject from the IR image of the environment to detect and/or anonymously identify the subject in the environment.
In one embodiment, the at least one sensor comprises a 3D camera for capturing a 3D image of the environment. In one embodiment, the sensor data comprises a 3D image of the environment. In one embodiment, the processing means retrieves a feature of the subject from the 3D image of the environment to detect and/or anonymously identify the subject in the environment.
In one embodiment, the at least one sensor comprises an IR camera and one or both of a 3D camera and a low-resolution camera for capturing a low-resolution image of the environment. In one embodiment, the sensor data comprises an IR image of the environment and one or both of a 3D image of the environment and a low-resolution image of the environment. In one embodiment, the processing means retrieves at least one first feature of the subject from the IR image and at least a second feature from the low-resolution image and/or the 3D image. The combination of the IR camera with a low-resolution optical camera and/or a 3D camera proved to be very reliable and reduces the required processing power. Using sensor data from different sources allows using computationally simple features, which are nevertheless reliable due to their independence. Thus, if the feature of one source fails to detect and/or anonymously identify the subject, the remaining features of the other sources can guarantee a stable and reliable detection quality.
In one embodiment, the camera is an optical camera in the spectrum of visible light.
In one embodiment, the camera is a 3D camera.
In one embodiment, the camera is an infrared camera.
In one embodiment, said sensor module further comprises a second camera for capturing second images of the environment, and the processing means is configured to detect the subjects in the environment based on the images of the environment captured by the camera of the sensor module and based on the second images of the environment captured by the second camera of the sensor module. Preferably, the camera and the second camera are a first one and a second one of an optical camera in the spectrum of visible light, a 3D camera and an infrared camera (meaning two different ones of the listed cameras). Preferably, said sensor module further comprises a third camera for capturing third images of the environment, and the processing means is configured to detect the subjects in the environment based on the images of the environment captured by the third camera of the sensor module, wherein the third camera is the third one of the optical camera, the 3D camera and the infrared camera (meaning that the camera, the second camera and the third camera are each a different one of the three listed types of cameras).
In one embodiment, the camera is a low-resolution camera. Preferably, the second camera is also a low-resolution camera. Preferably, the third camera is also a low-resolution camera. The use of a combination of different types of low-resolution cameras allows a very reliable subject tracking with low-resolution images. The low-resolution images have the advantage of a low power consumption, a low transmission bandwidth and of not allowing the identification of the subjects. The latter point allows the images to be transmitted, stored and processed without any security measures. Preferably, the camera is a low-resolution infrared camera and the second camera is a low-resolution 3D camera. This combination proved to be very reliable.
In one embodiment, the sensor module is realized as a sensor unit comprising a housing, the camera and the second camera, and optionally also the third camera. Preferably, the camera and the second camera, and optionally also the third camera, are arranged within the housing.
In one embodiment, the sensor module is configured to be mounted such that the directions of view of the at least two cameras, preferably of the three cameras, of the at least one sensor of the sensor module are arranged vertically, facing downwards.
In one embodiment, a sensor region of the sensor module is the part of the environment covered by all the cameras of the at least one sensor of the sensor module. In other words, the sensor region is the part of the environment covered by all the images of the at least two cameras, preferably of the three cameras, of the sensor module. The sensor region thus corresponds to the region of the environment where the images of all the cameras of the sensor module overlap. This makes it possible to have, for each subject in the sensor region, features from two or three different images from two or three different cameras in order to reliably (anonymously) identify the subject.
In one embodiment, the processing means is configured to detect subjects in the environment based on two or more features retrieved from the image(s) of the at least one sensor, wherein the two or more features comprise at least one feature of each of the at least two sensors of the sensor module. In other words, the processing means is configured to detect subjects in the environment based on a first feature retrieved from the images of the camera and based on a second feature retrieved from the images of the second camera. Thus, at each time instant and at each location of the environment covered by the sensor region of at least one sensor module, the system can detect features of the subject from at least two independent images retrieved from at least two independent sensors of the sensor module.
In one embodiment, the system comprises a plurality of further sensor modules. Preferably, each of the plurality of further sensor modules has the same features as described above for the sensor module. Preferably, all of the plurality of further sensor modules are identical to the sensor module. This facilitates detecting the subjects in large environments, because the processing of the images of each sensor module and each further sensor module is the same.
In one embodiment, the sensor module and the plurality of further sensor modules each comprise an interface to send the images from the at least one sensor to the processing means. Preferably, the sensor module and the further sensor modules are devices distinct from the processing means. Preferably, the processing means is realised as at least one processing unit, each connecting all or a subset of the sensor module and the further sensor modules. Preferably, each sensor module is connected via a wired or wireless communication connection to one of the at least one processing units to transfer the images from the sensor modules to the respective connected processing unit.
In one embodiment, the processing means is configured to associate with each subject detected in the environment an identifier for anonymously identifying the respective subject and to track each subject identified by its associated identifier in the environment.
In one embodiment, the processing means comprises a first processing means for pre-processing the data received from the at least one sensor of the sensor module and a second processing means, wherein the first processing means is configured to detect subjects in the environment based on the images of the environment captured by the at least one sensor of the sensor module and to determine a pre-processing output with the position of the detected subjects in the environment, preferably with the tracked position and/or the tracking path of the subjects anonymously identified in the environment, and wherein the second processing means performs further processing based on the pre-processing output.
In one embodiment, the system comprises at least two of said sensor modules and at least two of said first processing means, wherein each of the at least two first processing means receives the data of the at least one sensor of at least one sensor module of the at least two sensor modules to determine the pre-processing output, and wherein the second processing means comprises a combining module configured to receive the pre-processing outputs of the at least two first processing means and to combine the pre-processing outputs of the at least two first processing means into a combined output detecting the subjects in the environment, preferably for tracking the subjects anonymously identified in the environment.
In one embodiment, each first processing means is configured to receive and pre-process the data from the sensor module (S110) of a different sub-region of the environment, wherein the second processing means is configured to combine the pre-processing outputs of the different sub-regions into a combined output of the combined sub-regions of the environment.
In one embodiment, the first processing means is configured to receive the data from at least two sensor modules.
In one embodiment, the processing means is configured to detect an event for a certain subject, wherein the event has an associated event location in the environment and an associated event time, and wherein the event is associated with the subject based on the event and based on the location of the subjects in the environment at the time associated with the event. Preferably, the event is associated with the subject based on the subject being located, at the time associated with the event, at the location in the environment associated with the event. Preferably, the event has a fixed location in the environment. By connecting events to anonymous subjects, it suddenly becomes possible to create connections between databases and anonymous subjects, which normally is not possible for anonymous subjects. The data from the database could be anonymous as well but be connected in some way with the subject or the event of the subject. The data could also be non-anonymous, so that the itinerary could be re-connected to the identity of a person, e.g. for surveying a certain region of the environment for which there must be an access control. The data from the database could also concern a group of people, e.g. the persons of a certain flight, an employee, or a person with a certain access control level. Therefore, the output of the subject detection or tracking can be used in big data analysis and machine learning. In particular for machine learning, the events provide an important automated feedback for improving the machine learning algorithm. The machine learning could also comprise artificial intelligence or deep learning.
Further embodiments are described in the description and in the dependent claims.
Drawings
Figure 1 shows an exemplary embodiment of a retail shop as an environment.
Figure 2 shows a plurality of sensor modules distributed over the environment of Fig. 1.
Figure 3 shows the plurality of sensor modules and a plurality of sub-regions distributed over the environment of Fig. 1.
Figure 4 shows an exemplary schematic embodiment of the system of the invention.
In the drawings, the same reference numbers have been allocated to the same or analogous elements.
Detailed description of the invention
Other characteristics and advantages of the present invention will be derived from the following non-limitative description, and by reference to the drawings and the examples.
The invention refers to a system, method and/or computer program for detecting a plurality of subjects in an environment, preferably for tracking the plurality of subjects in the environment, preferably for anonymously detecting/tracking the subjects in the environment.
Subjects are preferably humans or persons. However, the invention could also be applied to different subjects such as animals or certain kinds of objects. The system could be configured to distinguish between different types of subjects, e.g. distinguished by their age, e.g. adults and minors, e.g. adults, minors below a certain height or age and minors above a certain height or age. The subjects could also be objects.
The environment defines the (complete) area in which the subjects shall be detected (also called the area under monitoring). The environment is preferably a closed environment. Examples are retail shops, airports, amusement parks or other types of (public) environments with some sense of traffic, in which it is of interest for the authorities managing the environment to understand how the disposition of the place is used by visitors. The environment can refer to the whole airport or only to certain zones of interest of the airport, such as the security gates or the retail area of the airport. The (closed) environment comprises preferably a ceiling. However, it is also possible to have environments without a ceiling, such as certain open-air amusement parks. An example of a retail shop 100 as such an environment is shown in Fig. 1. The environment 100 comprises preferably borders over which subjects cannot move. Such borders comprise preferably the external borders of the environment 100 which enclose the environment. The environment 100 comprises normally a gate zone through which subjects can enter and/or exit the environment 100. Such a gate zone can be a door, a passageway, a gate, an elevator, stairs, etc. The external borders comprise normally a gate zone 103 through which the subjects can enter and/or exit the environment. However, it is also possible that the external borders are fully closed and there is a gate zone within the environment 100 (not shown in Fig. 1). The gate zones can be for entering and exiting subjects or exclusively for one or the other (in the latter case at least two different gate zones are required). The borders could comprise internal borders like the shelves 102, the cashiers 101, etc. The environment 100 can further define certain functional objects of the environment, such as shelves 102, cashiers 101, etc., to define the layout of the shop. The different functional objects (of the same type) could have identifiers which could be used in the analytics processing (see later) and/or for displaying the environment 100 on a display (see later) and/or for defining the borders of the environment 100. The environment 100 could define certain zones of interest, e.g. in Fig. 1 the zone 121 in front of the cashiers 101. In one embodiment, the environment 100 defines a plurality of sub-regions 100.i, with the index i being an integer between 1 and n, and n being the number of sub-regions 100.i. In Fig. 1, n=6. Preferably, all sub-regions 100.i together cover the (entire) environment 100. Neighbouring sub-regions 100.i can overlap with each other.
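Purely by way of illustration, such an environment layout could be modelled in software as follows; this is a minimal Python sketch, and all class and field names are assumptions of this description, not terms of the invention.

    from dataclasses import dataclass, field

    Point = tuple[float, float]   # floor coordinates in metres

    @dataclass
    class Zone:
        name: str                 # e.g. "gate zone 103" or "queue zone 121"
        outline: list[Point]      # polygon enclosing the zone

    @dataclass
    class EnvironmentLayout:
        external_borders: list[Point]   # enclosing polygon of environment 100
        gate_zones: list[Zone]          # entry and/or exit zones
        internal_borders: list[Zone]    # e.g. shelves 102, cashiers 101
        zones_of_interest: list[Zone]   # e.g. zone 121 in front of the cashiers
        sub_regions: list[Zone] = field(default_factory=list)  # 100.1 ... 100.n, may overlap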
Fig. 4 shows an exemplary embodiment of the system S100 for detecting subjects in the environment. The system S100 comprises a processing means and at least one sensor module S110. The sensor module S110 comprises at least one sensor S112, S113, S114. The at least one sensor S112, S113, S114 is preferably configured to sense sensor data of (a sensor region of) the environment 100 allowing the detection of the position of subjects in the (sensor region of the) environment 100. The sensor region of a sensor module S110 is the part of the environment 100 covered by the sensor module S110, i.e. the part of the environment 100 in which sensor data of subjects in the environment 100 can be collected by this sensor module S110. In the case of at least two sensors S112, S113, S114, the sensor region of the sensor module S110 is preferably the part of the environment 100 covered by all sensors S112, S113, S114 of the sensor module S110. The sensor data retrieved by one, some or all of the at least one sensor S112, S113, S114 are preferably image data, for example two-dimensional image data, like image pixel data, or three-dimensional image data, like voxel image data.
The at least one sensor S112, S113, S114 (each, one or some of them) captures the sensor data over time. This means normally that the at least one sensor S112, S113, S114 captures in subsequent time or sample steps (normally with fixed intervals in between) one sensor data set for the respective point in time or sample step. The sensor data set is preferably an image. Each of the at least one sensor S112, S113, S114 thus provides a stream of sensor data sets or images (also called a video). The sensor module S110 could have a storage for buffering the sensor data before sending them to the processing means (recording them for a short time for technical reasons, not for the purpose of storing them). The buffering could also comprise the case of storing the sensor data for the time during which a connection to the processing means is interrupted. Preferably, the sensor data are not stored in the sensor module S110 after the sensor data have been sent to the processing means. The sensor module S110 could also work without a storage for the sensor data. Preferably, the sensor module S110 sends the sensor data to the processing means immediately after the sensor data have been collected by the at least one sensor S112, S113, S114.
The at least one sensor S112, S113, S114 comprises preferably an IR camera S114. The IR camera S114 is configured to capture IR images of the sensor region of the sensor module S110 and/or of the IR camera S114 (over time). The IR camera S114 is preferably a low-resolution camera. The IR camera S114 comprises preferably a digital image sensor, e.g. a CMOS sensor. The resolution of the IR camera S114 and/or of the digital image sensor is defined by the number of pixels of the digital image sensor and/or of the IR camera S114. The at least one sensor S112, S113, S114 comprises preferably a three-dimensional (3D) camera S113. The 3D camera S113 is configured to capture 3D images of the sensor region of the sensor module S110 and/or of the 3D camera S113 (over time). The 3D camera is preferably a time-of-flight camera. However, different 3D cameras can be used. The 3D camera S113 is preferably a low-resolution camera. Preferably, the sensor module and/or the 3D camera is arranged in the environment such that a voxel recorded with the 3D camera corresponds to a space of the environment larger than 5 mm, preferably than 1 cm, preferably than 3 cm, preferably than 5 cm. A space of the environment of x corresponds preferably to a cube in the environment with the three side lengths of x. The 3D camera S113 comprises preferably a digital image sensor. The resolution of the 3D camera S113 and/or of the digital image sensor is defined by the number of pixels of the digital image sensor and/or of the 3D camera S113.
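A minimal Python sketch of the capture-and-send behaviour described above is given here; the camera objects, the send callback and the sampling interval are illustrative assumptions.

    import collections
    import time

    def run_capture_loop(cameras, send, interval_s=0.1):
        # Short-term buffer only: frames are kept just until they have been
        # sent, e.g. to bridge a briefly interrupted connection.
        buffer = collections.deque(maxlen=64)
        while True:
            timestamp = time.time()
            # One sensor data set (image) per sensor for this sample step.
            frames = {camera.name: camera.capture() for camera in cameras}
            buffer.append((timestamp, frames))
            while buffer:
                try:
                    send(*buffer[0])
                    buffer.popleft()     # drop the frame once it has been sent
                except ConnectionError:
                    break                # keep buffered frames, retry next step
            time.sleep(interval_s)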
The at least one sensor S112, S113, S114 comprises preferably an optical camera S112. The optical camera S112 is configured to capture optical images of the sensor region of the sensor module S110 and/or of the optical camera S112 (over time). Preferably, the optical camera S112 captures optical images of the sensor region in the frequency or wavelength spectrum of visible light. The optical camera S112 comprises preferably a digital image sensor, e.g. a CMOS sensor. The resolution of the optical camera S112 and/or of the digital image sensor is defined by the number of pixels of the optical camera S112 and/or of the digital image sensor. Preferably, the optical camera S112 is a low-resolution camera.
The resolution of the low-resolution camera (of the optical camera S112 and/or of the 3D camera S113 and/or of the IR camera S114) is preferably so low that biometric features which would allow identifying the identity of a person (not only distinguishing persons from each other) cannot be retrieved from the captured images. Preferably, the resolution of the low-resolution camera and/or of its image sensor is lower than 0.5 megapixels, preferably lower than 0.4 megapixels, preferably lower than 0.3 megapixels, preferably lower than 0.2 megapixels, preferably lower than 0.1 megapixels, preferably lower than 0.09 megapixels, preferably lower than 0.08 megapixels, preferably lower than 0.07 megapixels, preferably lower than 0.06 megapixels, preferably lower than 0.05 megapixels, preferably lower than 0.04 megapixels, preferably lower than 0.03 megapixels, preferably lower than 0.02 megapixels, preferably lower than 0.01 megapixels (10,000 pixels), preferably lower than 0.005 megapixels (5,000 pixels). Such a low-resolution sensor/camera has a very low power consumption and avoids any problems with the privacy of the content of the images. In one realisation, the low-resolution optical sensor has a resolution of 32x32 pixels, resulting in 1024 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the IR sensor or the 3D camera. In one realisation, the low-resolution 3D camera S113 has a resolution of 60x80 pixels, resulting in 4800 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the IR sensor or the optical sensor. In one realisation, the low-resolution IR camera S114 has a resolution of 64x64 pixels, resulting in 4096 pixels of the image sensor, which proved sufficient for the subject detection in combination with another sensor like the 3D sensor or the optical sensor. Preferably, the resolution, position and/or orientation of the low-resolution camera(s) and/or of the sensor module S110 is chosen such that the low-resolution camera provides a resolution per square meter of the environment 100 lower than 50,000 pixels per square meter of the environment covered (pixels/sqm), preferably lower than 40,000 pixels/sqm, preferably lower than 30,000 pixels/sqm, preferably lower than 20,000 pixels/sqm, preferably lower than 10,000 pixels/sqm, preferably lower than 5,000 pixels/sqm, preferably lower than 4,000 pixels/sqm, preferably lower than 3,000 pixels/sqm, preferably lower than 2,000 pixels/sqm, preferably lower than 1,000 pixels/sqm, preferably lower than 500 pixels/sqm. If, for example, the optical camera S112 with a resolution of 1024 pixels is arranged to cover a field of 3 m x 3 m, leading to approximately 113 pixels per sqm, there is absolutely no risk of privacy violations in the content of the images of the optical camera S112 and the power consumption is significantly reduced.
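The pixel-density figures above can be checked with a few lines of Python; the helper below is merely a worked example of the arithmetic, not part of the invention.

    def pixels_per_sqm(width_px: int, height_px: int, covered_area_sqm: float) -> float:
        # Resolution per square metre of environment covered by the camera.
        return (width_px * height_px) / covered_area_sqm

    # 32x32 optical sensor covering a 3 m x 3 m field: 1024 px / 9 sqm = ~113.8
    density = pixels_per_sqm(32, 32, 9.0)
    assert density < 500   # far below even the strictest limit listed above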
In a preferred embodiment, the resolution of the low-resolution camera(s) is fixed, i.e. it cannot be changed by the system. In one (less preferred) embodiment, the resolution of the low-resolution camera(s) can be configured by the system, preferably in the sensor module S110. This might have the advantage that the resolution can be configured depending on the height of the installation, the light conditions, etc. Such a low-resolution camera with a configurable resolution could be provided by a digital image processor with a configurable resolution filter or processor which reduces the resolution based on the configuration of the sensor module S110. The configuration could be made by a hardware configuration means in the sensor module or by a software configuration means which could be controlled by the processing means. In the latter case, the low-resolution camera of the sensor module S110 could be configured by the processing means by a calibration procedure which guarantees that no biometric feature of the subjects passing through the sensor region could be extracted. The description of the low-resolution camera applies preferably to the optical camera S112, the IR camera S114 and/or the 3D camera S113.
The sensor module S110 comprises preferably an interface to send the sensor data from the at least one sensor S112, S113, S114 to the processing means. Preferably, the sensor module S110 is realised as a sensor unit, i.e. a device distinct from another device comprising (at least part of) the processing means, such as a processing unit or a pre-processing unit. The sensor unit or the sensor module S110 is preferably connected to the processing means, the processing unit or the pre-processing unit to send the sensor data, preferably the raw data of the at least one sensor S112, S113, S114. The connection could be via a cable, e.g. Ethernet, LAN, etc., or wireless, e.g. Bluetooth, WLAN, LOLA, WAN, etc. Considering that the sensor data do not contain privacy-problematic content, it is not necessary to encrypt the connection, which saves a lot of computational power. The sensor module S110 is preferably configured to send the sensor data in real time to the processing means.
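For illustration only, such an unencrypted real-time transfer could look like the following Python sketch; the host name, port and frame layout are assumptions made for this description, not part of the invention.

    import socket
    import struct
    import time

    def send_frame(sock: socket.socket, sensor_id: int, timestamp: float, pixels: bytes) -> None:
        # Plain length-prefixed message; no encryption is needed because the
        # low-resolution frames contain no privacy-sensitive content.
        header = struct.pack("!Bdi", sensor_id, timestamp, len(pixels))
        sock.sendall(header + pixels)

    sock = socket.create_connection(("preprocessing-unit.local", 9000))  # assumed address
    ir_frame = bytes(64 * 64 * 2)   # e.g. one 64x64 IR image, 16 bits per pixel
    send_frame(sock, sensor_id=3, timestamp=time.time(), pixels=ir_frame)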
The three presented sensors S112, S113, S114 in the sensor module S110 are advantageous because they allow retrieving data about the subjects in the sensor region which allow distinguishing the subjects from each other, i.e. anonymously identifying the subjects, without recording data of the subjects which could be used to identify them. One of those three sensors could be used in the sensor module S110 alone or in combination with other sensors. However, it turned out that a combination of at least two of the three described sensors S112, S113, S114 significantly improves the detection quality, i.e. the ability to identify the subjects reliably and anonymously. In particular, the combination of the IR camera S114 and one or both of the optical camera and the 3D camera proved to be very reliable. Therefore, notwithstanding the absence of classical identifiers such as face recognition, the combination of the described sensors provides a reliable detection quality. In particular, the combination of all three sensors S112, S113, S114 proved to be very reliable.
Preferably, the combination of at least two, preferably three, low-resolution cameras as sensors of the sensor module S110 proved to be very reliable notwithstanding the low-resolution images used. This allows reducing the power consumption for the processing of the images and the bandwidth for the transmission of the images, and it completely avoids the privacy issue of the people on the images at any level of the system. The at least two low-resolution cameras comprise preferably two, preferably three, of a low-resolution IR camera S114, a low-resolution optical camera S112 and a low-resolution 3D camera S113. The at least two low-resolution cameras comprise preferably a low-resolution IR camera S114 and a 3D camera S113, preferably also a low-resolution optical camera S112.
The system S100 comprises preferably a plurality of sensor modules S110. The above description of the sensor module S110 applies to all sensor modules S110 of the system S100 or of the plurality of sensor modules S110. Preferably, all sensor modules of the system S100 are equal. The number of sensor modules S110 required depends on the size of the environment 100. The sensor modules S110 are distributed over the environment 100 to cover the (complete) environment 100 or at least the part of the environment which is of interest for the monitoring. The sensor modules S110 are preferably distributed such that the sensor regions of all sensor modules S110 cover the (complete) environment 100 or at least the part of the environment which is of interest for the monitoring. The sensor regions can overlap with neighbouring sensor regions. Preferably, the sensor regions of the plurality of sensor modules S110 (with the features as described above) cover at least 50% of the environment 100, preferably at least 60%, preferably at least 70%, preferably at least 80%, preferably at least 90%, preferably at least 95% of the environment 100 for which subjects shall be tracked. Fig. 2 shows an exemplary embodiment of the distribution of the sensor modules S110 over the environment 100. The sensor modules S110 are preferably mounted such that the directions of view of the at least one camera, preferably of all cameras, of the at least one sensor S112, S113, S114 of the sensor module S110 are arranged vertically, facing downwards. Preferably, the at least one sensor module S110 is mounted on the ceiling of the environment 100. However, it is also possible to mount the sensor modules S110 on respective supports, e.g. if the environment has no ceiling or the ceiling is too high. In another embodiment, it is also possible to mount the sensor modules S110 such that the direction of view of the camera(s) is horizontal. However, this has the disadvantage that some subjects might be covered by other subjects.
The processing means is configured to detect subjects in the environment 100 based on the sensor data captured by the at least one sensor of the at least one sensor module S110 from the sensor regions covering the environment 100. The detection of a subject in the image(s) is preferably based on one or more features retrieved from the image(s). Preferably, the detection of a subject is based on two or more features retrieved from the image(s) of the at least one sensor S112, S113, S114 (subsequently called a feature vector, without limitation of the invention). The feature vector comprises preferably one feature of each of the at least two sensors S112, S113, S114 of the sensor module S110. Example features retrieved from the IR images are the average temperature of the subject or the diameter of the heat point representing the subject. Example features retrieved from the 3D images are the height of the subject or a vector describing the contour of the person (as seen from above). An example feature retrieved from the optical images is the average colour of the detected subject. Other or additional features may be used for the rough detection of where a subject is in the images.
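As a sketch of how such a feature vector could be assembled from the example features just named, consider the following Python fragment; the cropped per-subject image regions and the simple estimators are assumptions for illustration, not the actual implementation.

    import numpy as np

    def feature_vector(ir_region: np.ndarray, depth_region: np.ndarray,
                       rgb_region: np.ndarray) -> np.ndarray:
        avg_temperature = float(ir_region.mean())           # IR: average temperature
        hot = ir_region > ir_region.mean()
        heat_diameter = 2.0 * np.sqrt(hot.sum() / np.pi)    # IR: diameter of heat point
        height = float(depth_region.max())                  # 3D: height of the subject
        avg_colour = rgb_region.reshape(-1, 3).mean(axis=0) # optical: average colour
        return np.concatenate(([avg_temperature, heat_diameter, height], avg_colour))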
The processing means is further configured to anonymously identify the subjects detected in the environment 100 based on the sensor data, in particular based on the at least one feature, preferably on the feature vector, of the subject. The feature vector of a subject provides a kind of signature of the subject which allows anonymously identifying the subject in the environment, i.e. each subject can be distinguished (at any time) from the other subjects in the environment without the need for features which would allow identifying the person behind the subject and/or which would require, according to some privacy rules, the agreement of the subject. The anonymous identification of the subject is preferably based on a probabilistic approach, such that when one or a few features of the feature vector of the subject change, the subject can still be anonymously identified. This can be realised with a high reliability based on the location of the subject when the one or few features of the feature vector of the subject change. An (anonymous) identifier can be associated with each anonymously identified subject.
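A minimal sketch of such an association step follows; the cost weights and threshold are illustrative assumptions, and a real implementation would use a proper probabilistic model rather than this simple nearest-match rule.

    import numpy as np

    def associate(position: np.ndarray, features: np.ndarray, tracks: dict,
                  w_pos: float = 1.0, w_feat: float = 0.5, max_cost: float = 5.0) -> int:
        """tracks maps an anonymous integer identifier to (last_position, last_features)."""
        best_id, best_cost = None, max_cost
        for track_id, (last_pos, last_feat) in tracks.items():
            # The location keeps the association stable even if a few features change.
            cost = (w_pos * np.linalg.norm(position - last_pos)
                    + w_feat * np.linalg.norm(features - last_feat))
            if cost < best_cost:
                best_id, best_cost = track_id, cost
        if best_id is None:
            best_id = max(tracks, default=0) + 1   # new anonymous identifier
        tracks[best_id] = (position, features)
        return best_id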
The processing means is preferably configured to track a (or each) subject identified in the environment. The tracking of the (or each) subject in the environment 100 gives the position of the (or each) subject over time. Such a time-resolved itinerary provides the itinerary of each subject in the environment and the time it spent at certain locations or zones.
The processing means is preferably configured to determine the actual position of the subjects in the environment 100 at the actual point in time. Thus, the processing means can determine the position of the subjects in the environment 100 in real time.
The processing means could comprise an advanced processing means or engine S133 for computing advanced measures regarding the subjects. The advanced processing means S133 is preferably configured to compute real-time measures retrieved from the real-time detection or tracking of the subjects in the environment 100. One example of such an advanced subject measure is the number of subjects being, at a certain point in time, e.g. at the actual time, in a zone of interest of the environment 100, e.g. the zone 121 in front of the cashiers 101. Another example of such an advanced subject measure could be the average time the subjects statistically spend at a location, in a certain zone or in the environment during a certain period of time. This can be shown in a map of the environment showing the average time spent at each location as an intensity, as in a heat map.
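By way of example, counting the subjects currently inside a zone of interest could look as follows; the point-in-polygon test via matplotlib and the data shapes are assumptions of this sketch.

    from matplotlib.path import Path

    def subjects_in_zone(zone_outline, positions):
        """zone_outline: list of (x, y) corners; positions: dict id -> (x, y)."""
        zone = Path(zone_outline)
        return [subject_id for subject_id, pos in positions.items()
                if zone.contains_point(pos)]

    # e.g. number of subjects currently waiting in zone 121 in front of the cashiers:
    # n_waiting = len(subjects_in_zone(zone_121_outline, current_positions))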
The processing means could comprise an event detector S144. The event detector could detect an event for a certain subject. The event is associated with a location in the environment 100 and a time. The event can be, for example, a ticket issued at one of the cashiers 101, and the location of the environment 100 associated with the event can be the location of this cashier 101 in the environment. The event detector S144 could associate the event with the subject based on the event and based on the location of the subjects in the environment at the time associated with the event. The event detector S144 could associate the event with the subject based on the subject being located, at the time associated with the event, at the location in the environment 100 associated with the event. Preferably, the event has a fixed location. The fixed event location could be, for example, the location of a cashier of the environment or of a badge station for opening a door. In the above example, the subject being at the cashier 101 which created the ticket at the time associated with the event must be associated with the ticket. This allows associating further data related to the event with the subject, in the above-mentioned example maybe the items bought, the price paid or others. This event detector is very important to get an automatic feedback on the subjects with a higher granularity than yes or no. Such an automated feedback is very important for analysing the results produced in the processing means, e.g. for using the data of the subject detection and/or tracking in machine learning analytics. If the environment has different event types, e.g. a first event type like a first cashier and a second event type like a second cashier, each event type has a fixed location in the environment. For example, if the first cashier issues a ticket for an acquisition of a subject at a first time, the subject being at the first time at the location of the first event type is associated with the ticket. Another example of an event could be a detector for a badge. The employees need to badge for time-stamping their work or for accessing a certain area. The badging would be the event. By connecting the anonymous subject at the position of the badge detector in the environment 100 at the time of the badging event, the anonymous subject and/or its itinerary can be connected with the non-anonymous person identified by the badge in a database or with an anonymous category of subjects in a database. This allows identifying, for example, certain categories of subjects which shall not be considered for certain analyses. Another example of an event could be the scan of the boarding pass before boarding the plane and/or before passing the security check in an airport. This allows connecting the itinerary of a person with a certain flight or with the location of the flight. If the event is the scan of the boarding pass before passing the security check and the data from the database is any identifier of the boarding pass (e.g. the ticket or boarding pass number, the seat number, the name of the person traveling), the system could determine from the corresponding identifier of the boarding pass the location in the environment of the subject connected with the event or boarding pass, and the distance or time for that person to get to a gate. This allows airlines to decide, when closing a gate, whether it makes sense to wait for a person missing for boarding the plane.
In addition, this allows obtaining, for certain applications, the identity of a person, even if the sensor module S110 gives no personal identity about the subjects detected.
Preferably, the processing means is configured to receive from an event database an event time and an event location, to detect the subject being at the event time at the event location of the environment, and to associate the event with the detected subject being at the event time at the event location of the environment. Preferably, the processing means is further configured to detect the subject to be associated with the event based on a certain event behaviour. The event behaviour could be a certain minimum waiting time at the event location. This could help distinguishing different subjects being at the event time at the event location.
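A possible shape of this association step, including a minimum-waiting-time event behaviour, is sketched below; the radius, the presence tolerance and the track layout are assumptions of this illustration.

    import math

    def associate_event(event_time, event_pos, tracks, radius=1.0, min_wait=0.0):
        """tracks: dict id -> chronological list of (timestamp, (x, y)) samples."""
        best_id, best_dwell = None, -1.0
        for subject_id, samples in tracks.items():
            # Samples of this subject near the event location up to the event time.
            near = [t for t, pos in samples
                    if t <= event_time and math.dist(pos, event_pos) <= radius]
            if not near or event_time - near[-1] > 1.0:   # not present at the event time
                continue
            dwell = near[-1] - near[0]                    # rough waiting time at the location
            if dwell >= min_wait and dwell > best_dwell:
                best_id, best_dwell = subject_id, dwell
        return best_id   # anonymous identifier of the associated subject, or None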
The processing means could comprise an analytics means or engine S141. The analytics means are configured to analyse and/or further process the information about the subjects detected in the environment 100, the information about the position over time of the subjects tracked in the environment 100 and/or the results from the advanced processing means S133. The analytics means S141 is preferably configured to compute measures which need not or cannot be computed in real time. The analytics means S141 could determine, based on a plurality of subjects, based on the event detected for each subject and based on the tracked itinerary of each subject, conclusions about the best location of the environment 100 (e.g. for exposing products) or about a ranking of the locations of the environment 100.
The processing means could comprise a notification means or engine S143 for creating notifications based on the results from the detection of subjects or from the tracking of subjects. An example of a notification created based on the results from the detection or tracking of the subjects could be a notification created from the results of the advanced processing means S133 or of the analytics means S141. An example could be that a notification, e.g. an alert, is created when the number of subjects detected in a zone of interest increases above a threshold or the (average) waiting time of the subjects increases above a threshold. The zone of interest could be the zone 121 in front of the cashiers 101 with the waiting zone of the clients. This allows the shop manager to react immediately if the queue or the waiting time for the clients gets too long. The threshold could also be selected depending on the number of open cashiers 101. Another notification could be created if a subject enters a zone of interest which it is not allowed to enter. There are many more possibilities for notifications.
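A simple threshold rule of this kind could be sketched as follows, with the per-cashier limit being an illustrative assumption:

    def queue_alert(n_waiting: int, open_cashiers: int, per_cashier_limit: int = 5):
        # Threshold scales with the number of open cashiers 101.
        if open_cashiers and n_waiting > open_cashiers * per_cashier_limit:
            return (f"Alert: {n_waiting} clients waiting in zone 121 "
                    f"with {open_cashiers} open cashiers")
        return None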
The processing means could comprise an output interface, e.g. a man-machine interface (MMI) S142. The output interface could be a display, such that the results computed in the processing means, the analytics means S141, the notification means S143 and/or the advanced processing means S133 can be shown on a display, e.g. the current map of the environment with the detected subjects. The output interface could, however, also be a signal interface through which the results are given out, e.g. an interface to a network, e.g. the internet. The output interface could further output notifications of the notification means S143. The processing means could further comprise an input interface for receiving requests or other information, e.g. for configuring the system S100 for the environment. The input interface could be the MMI S142.
The processing means can be realized by any kind of processing means having the described functionalities. The processing means could be a computer, a specialized chip or a system comprising at least two sub-processing units, as in cloud computing, in distributed processing or in data processing centres. The processing means could be completely or partly in a central processing unit, or could be completely or partly distributed over the sensor modules S110 or over intermediate processing units or modules, each responsible for the processing of the sensor data of a subset of the sensor modules S110.
Preferably, the processing means comprises at least one first processing module S120 (also called pre-processing module) for pre-processing the sensor data received from a subset of the sensor modules S110 of the system S100, and a second processing module S130, S140 for further processing the pre-processed data of the at least one pre-processing module S120. The sensor region(s) of each subset of sensor modules define a sub-region 100.i of the environment, as shown exemplarily in Fig. 3. Preferably, the subset of sensor modules S110 connected with one of the at least one pre-processing modules S120 comprises at least two sensor modules S110. However, it is also possible that only one sensor module S110 is connected to one pre-processing module S120. Preferably, the pre-processing module S120 is realised as a pre-processing unit, i.e. a device distinct from the sensor unit or sensor module S110. This allows grouping the pre-processing for a subset of sensor modules S110 in one pre-processing unit S120. For large environments, the system S100 comprises at least two subsets of sensor modules S110, wherein each subset of sensor modules S110 is connected with a different pre-processing module S120 or pre-processing unit. Even if it is preferred to realise the pre-processing module S120 as a hardware device, i.e. as the pre-processing unit, it would also be possible to realize the pre-processing module S120 as a software module in a central processing unit. In the latter case, the central processing unit could process the pre-processing for different subsets of sensor modules S110 in parallel. Each pre-processing module S120 is configured to receive the sensor data of the at least one sensor module S110 of the subset of sensor modules associated/connected with the pre-processing module S120, and is configured to detect subjects in the sub-region 100.i of the pre-processing module S120, to anonymously identify the detected subjects and/or to track the subjects in the sub-region 100.i of the pre-processing module S120 based on the received sensor data. Therefore, the pre-processing module S120 comprises preferably a subject detection/identification means S124 (also called signature engine) configured to detect the position of each subject in the sub-region 100.i of the pre-processing module S120 and to anonymously identify the detected subjects (for a certain point in time). Preferably, the signature engine S124 gives out the positions and feature vectors of each subject detected in the sub-region 100.i for the certain point in time. This is preferably done sequentially, preferably periodically, for subsequent points in time. The pre-processing module S120 comprises preferably a tracking means S123 or tracking engine for tracking the subjects in the sub-region 100.i of the pre-processing module S120 based on the output from the signature engine S124, in particular based on the position and feature vector of each subject in the sub-region 100.i for subsequent points in time. In one embodiment, the pre-processing module S120 or the signature engine S124 combines the images of the sensor region of each sensor module S110 into a combined image of the sub-region 100.i and detects and anonymously identifies the subjects in the sub-region 100.i based on the combined image of the sub-region 100.i.
In another embodiment, the pre-processing module S120 or the signature engine S124 detects and anonymously identifies the subjects in each sensor region based on the image of the respective sensor region to create a detection and identification output for each sensor region, and then combines the detection and identification outputs of all sensor regions of the sub-region 100.i to obtain the combined detection and identification output for the sub-region 100.i. The modular approach, with the number of pre-processing modules S120 used depending on the number of sensor modules S110 used in the environment, is preferred. Nevertheless, it is also possible to do the pre-processing in only one pre-processing module for all sensor modules S110 of the environment, then preferably in a central processing unit. Thus, the pre-processing module(s) S120 or pre-processing unit is preferably an intermediate device arranged (hierarchically) between the sensor modules S110 or sensor units and the central processing unit.
The pre-processing output of the/each pre-processing module S120 is sent to the second processing means. The pre-processing output of a pre-processing module S120 is preferably the detected, anonymously identified and/or tracked subjects in the sub-region 100.i of the respective pre-processing module S120. Preferably, the pre-processing output for a certain point in time is the position of the subject(s) detected and anonymously identified in the sub-region 100.i. Preferably, the pre-processing output for a certain point in time is the position and the feature vector of the subject(s) detected in the sub-region 100.i. The pre-processing module S120 sends preferably the pre-processing outputs of subsequent time points to the second processing means. The second processing means is preferably configured to receive the pre-processing output of each sub-region 100.i from each pre-processing module S120 and to detect and/or track the subjects in the environment 100 by combining the pre-processing output(s) of the sub-regions 100.i. This is done preferably in a reconciliation means or engine S134. The output of the reconciliation engine S134 is preferably the detected, anonymously identified and/or tracked subject(s) in the environment 100. Preferably, the output of the reconciliation engine S134 for a certain point in time is the position of the subject(s) detected and anonymously identified in the environment. Preferably, this output for a certain point in time is the position and an identifier of the subject(s) detected in the environment. The reconciliation engine S134 outputs preferably the outputs of subsequent time points. The output can be used by the advanced processing engine S133, the analytics engine S141, the notification engine S143 and/or the event detector S144 for further processing, in particular for real-time processing.
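A minimal sketch of such a reconciliation step is given below; it merges the per-sub-region outputs and collapses duplicate detections from overlapping sub-regions, with the merge radius being an illustrative assumption.

    import math

    def reconcile(subregion_outputs, merge_radius=0.5):
        """subregion_outputs: per sub-region 100.i, a list of (position, identifier)."""
        merged = []   # environment-wide list of (position, identifier)
        for detections in subregion_outputs:
            for position, identifier in detections:
                # A detection close to an already merged one is the same subject
                # seen by two overlapping sub-regions; keep a single entry.
                if not any(math.dist(position, merged_pos) <= merge_radius
                           for merged_pos, _ in merged):
                    merged.append((position, identifier))
        return merged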
The pre-processing module S120 could comprise a storage S122. The storage S122 is preferably configured to store the sensor data received from the sensor modules S110 connected to the pre-processing module S120 (at least until the sensor data are processed). The storage S122 is preferably configured to store the pre-processing output data to be output to the second processing means. The storage S122 is preferably configured to store the pre-processing output of at least one previous processing step, i.e. of the previous sampling time, for improving the detection and/or tracking of the subjects based on the position of the anonymously identified subjects at the at least one previous processing step.
The second processing means could further comprise a storage. Preferably, the storage S132 stores the subjects detected, anonymously identified and/or tracked (over time) in the environment 100, i.e. the storage S132 stores the output of the reconciliation engine S134. This allows using the data about the subjects detected, anonymously identified and/or tracked (over time) in the environment 100 at a later point in time for further analysis, e.g. by the advanced processing engine S133, the analytics engine S141, the notification engine S143 and/or the event detector S144. The storage S132 could further be used to buffer the pre-processing output(s) from the pre-processing module(s) S120. Preferably, the pre-processing output(s) are removed from the storage S132 once they have been processed in the second processing means, in particular in the reconciliation engine S134.
The second processing means in the shown embodiment comprises preferably two modules S130 and S140. The first processing module S130 receives the pre-processing output(s) from the at least one pre-processing module S120 and processes this output to detect, anonymously identify and/or track the subjects in the environment 100, preferably to determine the position over time of the subjects anonymously identified. Therefore, the reconciliation engine S134 is arranged in the first processing module S130. Preferably, the storage S132 is arranged in the first processing module S130. Preferably, the advanced processing engine S133 is arranged in the first processing module S130. The second processing module S140 uses the output of the first processing module S130 to do analytics, to interact with the user, to create notifications, to detect events and/or to process the output of the first processing module S130 in any other way. Thus, the second processing module S140 comprises preferably the analytics engine S141, the input interface, the output interface, the notification engine S143, the event detector S144 and/or any other means for processing the output of the first processing module S130.
The embodiment shown in Fig. 4 thus uses a system S100 with three, preferably four, layers: a sensor layer with the at least one sensor module S110 or sensor unit; a pre-processing layer with the at least one pre-processing module S120 or pre-processing unit; and a third layer with the second processing means, which processes the outcome of the pre-processing modules S120. The third layer could preferably be split into a third layer with the first processing module S130 and a fourth layer with the second processing module S140. Obviously, alternative architectures are also possible. For example, the system S100 could be organized in two layers with a sensor layer and a central processing layer.
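The layering can be made concrete with a hypothetical deployment description; the keys and values below are illustrative assumptions only and do not correspond to any configuration format of the patent.

```python
# Hypothetical four-layer deployment, mirroring the architecture above.
FOUR_LAYER = {
    "layer_1_sensors":        ["sensor_unit_%d" % i for i in range(1, 5)],
    "layer_2_pre_processing": {"module_per_sub_region": True},
    "layer_3_reconciliation": {"module": "first_processing"},
    "layer_4_analytics":      {"module": "second_processing"},
}

# A two-layer variant: pre-processing, reconciliation and analytics
# collapsed into one central processing layer.
TWO_LAYER = {
    "layer_1_sensors": FOUR_LAYER["layer_1_sensors"],
    "layer_2_central_processing": {
        "modules": ["pre_processing", "reconciliation", "analytics"],
    },
}
```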
The environment, the first method, the second method, the computer program, the computer program product, the processing means and/or the sensor module according to the invention are, for the sake of brevity, not described in detail. They work analogously to the system described in detail before. It should be understood that the present invention is not limited to the described embodiments and that variations can be applied without going outside of the scope of the appended claims.

Claims

1. System for anonymously detecting a plurality of subjects in an environment (100), said system comprising:
- A sensor module (S110) comprising at least one sensor (S112, S113, S114), wherein the at least one sensor (S112, S113, S114) comprises a camera (S112, S113, S114) for capturing images of the environment (100);
- A processing means (S120, S130) configured to detect subjects in the environment based on the images of the environment (100) captured by the camera (S112, S113, S114) of the sensor module (S110);
characterized in that the camera (S112, S113, S114) of the sensor module (S110) is a low-resolution camera.
2. System according to the previous claim, wherein the camera (S112) has a resolution lower than 0.3 megapixels.
3. System according to one of the previous claims, wherein the camera (S112) is an optical camera in the spectrum of visible light, a 3D camera (S113) or an infrared camera (S114).
4. System according to one of the previous claims, wherein said sensor module (S110) further comprises a second camera for capturing second images of the environment (100), and the processing means is configured to detect the subjects in the environment (100) based on the images of the environment (100) captured by the camera of the sensor module (S110) and based on the second images of the environment captured by the second camera of the sensor module (S110), wherein the camera and the second camera are a first one and a second one of an optical camera (S112) in the spectrum of visible light, a 3D camera (S113) and an infrared camera (S114).
5. System according to the previous claims, wherein said sensor module (S110) further comprises a third camera for capturing third images of the environment (100), and the processing means is configured to detect the subjects in the environment based on the images of the environment captured by the third camera of the sensor module (S110), wherein the third camera is the third one of the optical camera (S112), the 3D camera (S113) and the infrared camera (S114).
6. System according †o the claim 4 or 5, wherein the second camera and optionally also the third camera is/are a low-resolufion camera.
7. System according †o the previous claim, wherein the camera is a low-resolufion infrared camera (SI 14) and the second camera is a low-resolufion 3D camera (SI 13).
8. System according one of the claims 4 †o 7, wherein the sensor module (SI 10) is realized as a sensor uni† comprising the a† leas† two cameras (SI 12, SI 13, SI 14), wherein the sensor module (SI 10) is configured †o be mounted such that the direction of view of the a† leas† two cameras (SI 12, SI 13, SI 14) of the a† leas† one sensor (SI 12, SI 13, SI 14) of the sensor module (SI 10) are arranged vertically facing downwards, wherein a sensor region of the sensor module (SI 10) is the par† of the environment (100) covered by the a† leas† two cameras (SI 12) of the sensor module (SI 10).
9. System according one of the claims 4 †o 8, wherein the processing means is configured †o detect subjects in the environment (100) based on a firs† feature retrieved from the images of the camera and based on a second feature retrieved from the images of the second camera.
10. System according one of the previous claims comprising a plurality of further sensor modules (SI 10), wherein each of the plurality of further sensor modules (SI 10) has the same features as the features of the sensor module (SI 10) described in this claim.
11. System according †o the previous claim, wherein the sensor module (SI 10) and the plurality of further sensor modules (SI 10) comprise each preferably an interface †o send the images from the at least one sensor (SI 12, SI 13, SI 14) †o the processing means.
12. System according one of the previous claims, wherein the processing means is configured †o associate †o each subject defected in the environment an identifier for anonymously identifying the respective subject and †o track each subject identified by its associated identifier in the environment.
13. System according to one of the previous claims, wherein the processing means comprises a first processing means (S120) for pre-processing the data received from the at least one sensor (S112, S113, S114) of the sensor module and a second processing means (S130, S140), wherein the first processing means (S120) is configured to detect subjects in the environment (100) based on the images of the environment captured by at least one sensor (S112, S113, S114) of the sensor module (S110) and to determine a pre-processing output with the position of the detected subjects in the environment (100), preferably with the tracked position and/or the tracking path of the subjects anonymously identified in the environment (100), wherein the second processing means (S130) performs further processing based on the pre-processing output.
14. System according to the previous claim comprising at least two of said sensor modules (S110) and at least two of said first processing means (S120), wherein each of the at least two first processing means (S120) receives the data of the at least one sensor (S112, S113, S114) of at least one sensor module (S110) of the at least two sensor modules (S110) to determine the pre-processing output, wherein the second processing means (S130) comprises a combining module (S134) configured to receive the pre-processing output of the at least two first processing means (S120) and to combine the pre-processing outputs of the at least two first processing means (S120) to a combined output detecting the subjects in the environment (100), preferably for tracking the subjects anonymously identified in the environment (100).
15. System according to the previous claim, wherein each first processing means (S120) is configured to receive and pre-process the data from the sensor module (S110) of a different sub-region (100.i) of the environment (100), wherein the second processing means (S130) is configured to combine the pre-processing outputs of the different sub-regions to a combined output of the combined sub-regions (100.i) of the environment (100).
16. System according to one of claims 13 to 15, wherein the first processing means (S120) is configured to receive the data from at least two sensor modules (S110).
17. System according to one of the previous claims, wherein the processing means is configured to detect an event for a certain subject, wherein the event has an associated event location in the environment (100) and an associated event time, wherein the event is associated to the subject based on the subject being located, at the time associated with the event, at the location in the environment (100) associated with the event.
18. Environment with a system according to one of the previous claims.
19. Environment according to the previous claim, wherein the camera has a resolution lower than 2000 pixels per square meter of the environment.
20. Method for anonymously detecting a plurality of subjects in an environment (100), said method comprising the steps of:
- Capturing, with at least one sensor (S112, S113, S114) of a sensor module (S110) arranged in the environment, at least one image of the environment (100);
- Detecting, in a processing means (S120, S130), subjects in the environment (100) based on the images of the environment (100); characterized in that the at least one sensor (S112, S113, S114) of the sensor module (S110) comprises a low-resolution camera (S112).
21. Computer program for anonymously detecting a plurality of subjects in an environment (100), said computer program being configured to perform the following steps when executed on a processing means:
- Receiving, in the processing means, from a sensor module (S110) with at least one sensor (S112, S113, S114), at least one image of the environment (100);
- Detecting, in the processing means (S120, S130), subjects in the environment (100) based on the images of the environment (100); characterized in that the at least one image of the environment (100) comprises an image from a low-resolution camera (S112).
EP20757618.2A 2019-12-16 2020-08-24 Anonymized multi-sensor people tracking Withdrawn EP4078430A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP19216603.1A EP3839802A1 (en) 2019-12-16 2019-12-16 Anonymized multi-sensor people tracking
PCT/EP2020/073634 WO2021121686A1 (en) 2019-12-16 2020-08-24 Anonymized multi-sensor people tracking

Publications (1)

Publication Number Publication Date
EP4078430A1 (en) 2022-10-26

Family

ID=68917655

Family Applications (2)

Application Number Title Priority Date Filing Date
EP19216603.1A Withdrawn EP3839802A1 (en) 2019-12-16 2019-12-16 Anonymized multi-sensor people tracking
EP20757618.2A Withdrawn EP4078430A1 (en) 2019-12-16 2020-08-24 Anonymized multi-sensor people tracking

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP19216603.1A Withdrawn EP3839802A1 (en) 2019-12-16 2019-12-16 Anonymized multi-sensor people tracking

Country Status (3)

Country Link
EP (2) EP3839802A1 (en)
DE (1) DE202020005952U1 (en)
WO (1) WO2021121686A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796029B2 (en) * 2007-06-27 2010-09-14 Honeywell International Inc. Event detection system using electronic tracking devices and video devices
EP2270761A1 (en) 2009-07-01 2011-01-05 Thales System architecture and process for tracking individuals in large crowded environments
CN107662867B (en) * 2016-07-29 2021-03-30 奥的斯电梯公司 Step roller monitoring and maintenance operator monitoring for passenger conveyors
US10467509B2 (en) 2017-02-14 2019-11-05 Microsoft Technology Licensing, Llc Computationally-efficient human-identifying smart assistant computer
US11481805B2 (en) 2018-01-03 2022-10-25 Grabango Co. Marketing and couponing in a retail environment using computer vision

Also Published As

Publication number Publication date
DE202020005952U1 (en) 2023-11-18
EP3839802A1 (en) 2021-06-23
WO2021121686A1 (en) 2021-06-24

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220614

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20230808