GB2588106A - System and method for determining occupancy zones - Google Patents

System and method for determining occupancy zones

Info

Publication number
GB2588106A
Authority
GB
United Kingdom
Prior art keywords
occupancy
data
object data
space
zone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1914441.9A
Other versions
GB201914441D0 (en)
Inventor
James Goodacre
Andrew Jonathan Robert Lee-Smith
Michael Andrew Pallister
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seechange Technologies Ltd
Original Assignee
Seechange Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seechange Technologies Ltd filed Critical Seechange Technologies Ltd
Priority to GB1914441.9A priority Critical patent/GB2588106A/en
Publication of GB201914441D0 publication Critical patent/GB201914441D0/en
Priority to PCT/GB2020/052196 priority patent/WO2021069857A1/en
Publication of GB2588106A publication Critical patent/GB2588106A/en
Withdrawn legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/35Categorising the entire scene, e.g. birthday party or wedding scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

A method of determining occupancy zones, comprising: obtaining object identification data 321-323 of one or more objects 311-313 detected in a space at a plurality of time instances; performing clustering 331-333 on the object data to identify clusters of object data; and identifying, based on the clusters, occupancy zones 341, 342 in the space. To detect the objects, visible light, infra-red, millimetre wave radar and/or depth sensor data may be provided to a Convolutional Neural Network (CNN). The data may be used to determine: an occupancy zone type; an occupancy capacity; and/or an occupancy level. The object data may comprise a first and a second object type. Objects of the first type may be candidates to define an occupancy zone, and objects of the second type candidates to occupy an occupancy zone. The space may be an office space, the occupancy zones desks, and the objects people. The space may be a car park, the occupancy zones vehicle parking spaces, and the objects vehicles.

Description

SYSTEM AND METHOD FOR DETERMINING OCCUPANCY ZONES
Technical Field
[0001] The present invention relates to a method and system for determining occupancy zones of a space. More particularly, it relates to determining occupancy zones using object data obtained for a space at a plurality of time instances.
Background
[0002] It is desirable to determine occupancy zones of a space, together with occupancy zone capacity, occupancy zone type, and occupancy zone level, by using obtained object data identifying one or more selected objects detected in the space at a plurality of time instances, performing clustering of the object data to identify one or more clusters of object data, and identifying, based on the clusters of object data, occupancy zone data representing one or more identified occupancy zones in the space. Thereby, the occupancy zone, occupancy capacity, occupancy type, and occupancy level are automatically updated based on object data from one or more sensors.
Summary
[0003] There is provided a method of determining one or more occupancy zones of a space, the method comprising: obtaining object data identifying one or more objects detected in the space at a plurality of time instances; performing clustering on the object data to identify one or more clusters of object data; and identifying, based on the clusters of object data, occupancy zone data representing one or more identified occupancy zones in the space.
[0004] There is provided a system to determine one or more occupancy zones of a space, the system comprising a processor to: obtain, at the system, object data identifying one or more objects detected in the space at a plurality of time instances; perform, using processing circuitry, clustering on the object data to identify one or more clusters of object data; and identify, using the processing circuitry and based on the clusters of object data, occupancy zone data representing one or more identified occupancy zones in the space.
[0005] There is provided a transitory or non-transitory computer-readable medium comprising instructions which, when executed by processing circuitry, cause the processing circuitry to perform the methods herein.
Brief Description of the Drawings
[0006] Further features will become apparent from the following description, given by way of example only, which is made with reference to the accompanying drawings in which like reference numerals are used to denote like features.
[0007] Figure 1 is a flow diagram showing an example method of processing obtained object data to determine one or more occupancy zones according to examples;
[0008] Figure 2 illustrates a system according to some examples of the present disclosure;
[0009] Figure 3 illustrates an example application of the present disclosure to an office environment;
[0010] Figure 4 is a flow diagram showing an example method of determining current occupancy levels of occupancy zones according to examples;
[0011] Figure 5 is a flow diagram showing an example method of determining current occupancy capacity of occupancy zones according to examples; and
[0012] Figure 6 illustrates a system according to some examples of the present disclosure.
Detailed Description
[0013] Details of systems and methods according to examples will become apparent from the following description, with reference to the figures. In this description, for the purpose of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example" or similar language means that a feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples. It should further be noted that certain examples are described schematically with certain features omitted and/or necessarily simplified for ease of explanation and understanding of the concepts underlying the examples.
[0014] The present disclosure relates generally to the determination of occupancy zones within a monitored space based on the detection of objects within the space.
The approaches described herein are configured to determine the presence of occupancy zones in spaces which are dynamic, changing, or re-configurable over time, by adapting to changes in the configuration or usage of a space and determining, based on such adapted usage, useful technical information about the usage of such spaces. The determined occupancy zones can be represented as occupancy zone data which can be output for use by other computer systems or for display. Example applications of these techniques include monitoring of spaces such as warehouses, office spaces, and parking lots, or any space in which the zones or regions which objects occupy change over time.
[0015] Figure 1 is a flow diagram illustrating a method of processing object data, derived from sensor data captured from a monitored space, to identify one or more occupancy zones of the monitored space.
[0016] The method 10 of Figure 1 begins at step 11, at which object data is obtained. The obtained object data identifies one or more objects detected in the monitored space at each of a plurality of time instances. The object data therefore represents an indication of the presence of objects within the space over a period.
[0017] The object data may be obtained from local memory of an apparatus performing at least a portion of the steps of the method of Figure 1, or from a remote device at which the object data has been generated. The object data may be detected at a plurality of time instances.
[0018] The object data may be generated by applying an object detection algorithm to sensor data captured by a remote sensor which is configured to monitor the space. For example, the objects may be identified by an optical video camera monitoring at least a portion of the space, and the object data may be generated for each of a plurality of frames of a video sequence. Alternatively, the sensor may be a visual, infrared, millimetre-wave radar, or other type of sensor.
[0019] The object data may be generated based solely on two-dimensional sensor data or may also incorporate other data. For example, depth data might also be utilized in the generation of object data. As such, the obtained object data may have multiple dimensions, including the time at which the object was detected and the two- or three-dimensional position of the object within the space.
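By way of illustration only, such a multi-dimensional object data record might be sketched as follows; the field names and types are assumptions of this sketch, not part of the described method:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectDetection:
    """One detected object at one time instance (illustrative only)."""
    t: float                     # time instance of the detection
    x: float                     # position of the object within the space
    y: float
    z: Optional[float] = None    # optional depth, where depth data is available
    object_type: int = 2         # 1 = zone-defining object, 2 = zone-occupying object
    confidence: float = 1.0      # detector confidence for this detection
```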
[0020] In an example, the object detection algorithm which generates the object data obtained at step 11 may be configured to detect two different object types. As such, the object data may comprise one or both of first and second object types. A first object type comprises objects that are candidates for defining an occupancy zone. Put another way, first object types may comprise objects that define or have characteristics of the occupancy zones. In an office environment, first types of objects may be keyboards, desks, mice, or chairs. In a warehouse environment, first types of objects may be shelving or markings indicating regions of a warehouse.
[0021] A second object type comprises objects that are candidates for occupying an occupancy zone. Put another way, second object types comprise objects that define or have characteristics of occupants of an occupancy zone. For example, in an office environment, second types of objects may be people. In a warehouse environment, second types of objects may include boxes or packaging.
[0022] Step 11 comprises obtaining a set of object data comprising object data generated for each of a plurality of time instances.
[0023] At step 12, an unsupervised clustering algorithm is applied to the set of object data obtained at step 11 in order to identify one or more clusters or groupings of the identified objects within the space over a time-window defined by the range of time instances within the set of object data. It will be appreciated that like-object types may be clustered separately. For example, first object types may be clustered separately from second object types to form first object type clusters and second object type clusters.
[0024] Examples of unsupervised clustering algorithms include k-means clustering, Mean-Shift clustering, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), or a hierarchical extension to DBSCAN referred to as HDBSCAN. Other clustering techniques can be utilized, as will be appreciated by the skilled person. The result of performing a clustering algorithm on the object data is that one or more clusters of objects are defined, which may be considered to be cluster zones or regions. In some embodiments, clusters may be defined in any of several ways, such as by location within the space and size.
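As a rough sketch of how step 12 might be realised with an off-the-shelf library, the following applies scikit-learn's DBSCAN implementation to an array of (x, y, t) detections; the eps and min_samples values, and the simple per-dimension scaling, are illustrative assumptions rather than values prescribed by this disclosure:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_object_data(detections: np.ndarray,
                        eps: float = 0.5,
                        min_samples: int = 5) -> np.ndarray:
    """Cluster an (N, 3) array of [x, y, t] detections.

    Returns N cluster labels; label -1 marks noise points that belong
    to no cluster (DBSCAN's convention).
    """
    # Normalise each dimension so a single eps value is meaningful
    # across both position and time (an illustrative choice).
    scaled = detections / (detections.std(axis=0) + 1e-9)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(scaled)
```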
[0025] It will be appreciated that the time-window over which clustering is performed will depend upon the specific application of the techniques described herein. For example, in some implementations, such as an office environment, some portions of a space may be very dynamic and thus need to be determined on an hourly or daily basis, whereas other spaces or portions thereof may change configuration less regularly.
[0026] Method 10 then proceeds to step 13, in which one or more occupancy zones are identified based on the clusters of object data. More specifically, step 13 may involve determining, based on some pre-determined criteria or threshold, whether or not the clusters of object data defined by cluster data provide sufficient confidence that they represent an occupancy zone. For example, if the number of detected objects within a cluster region is below a pre-determined threshold, then the cluster region may be rejected as an occupancy zone. It may be possible to also factor the confidence of the object detection algorithm in the detected object data into the determination of an occupancy zone. Other factors which might be considered when identifying one or more occupancy zones include the density of a cluster zone, or how tightly clustered the detected objects are in a cluster zone.
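A minimal sketch of how such an acceptance test could look, assuming each cluster is summarised by its member count and spatial spread (the threshold values below are placeholders):

```python
import numpy as np

def identify_occupancy_zones(points: np.ndarray,
                             labels: np.ndarray,
                             min_count: int = 20,
                             max_spread: float = 1.5) -> list[dict]:
    """Accept a cluster as an occupancy zone only if it contains enough
    detections and those detections are tightly grouped in space.

    points: (N, 3) array of [x, y, t]; labels: N cluster labels.
    """
    zones = []
    for label in set(labels) - {-1}:             # -1 is DBSCAN noise
        members = points[labels == label]
        centroid = members[:, :2].mean(axis=0)   # spatial centre of the cluster
        spread = members[:, :2].std(axis=0).max()
        if len(members) >= min_count and spread <= max_spread:
            zones.append({"centroid": centroid, "count": len(members)})
    return zones
```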
[0027] As will be appreciated, the identification of occupancy zones can be performed based on first object type clusters, second object type clusters, or a combination of first and second object type clusters. If first and second object type clusters overlap or at least partially intersect, then there may be greater confidence in the presence of an occupancy region.
[0028] As illustrated in Figure 1, once the occupancy zones are identified, occupancy zone data is generated which represents the occupancy zones within the space, and the method proceeds to step 14, in which the occupancy zone data is output. The outputting of the occupancy zone data may involve displaying it, for example overlaid over a captured image of the space or a plan view of the space. The output may involve transmitting the occupancy zone data for further use, or providing a notification of occupancy zone changes when compared with prior occupancy zone data.
[0029] As further illustrated in Figure 1, method 10 returns to step 11, at which further object data is obtained. This further object data may be object data captured at a plurality of time instances subsequent to the time instances of the previous occurrence of step 11. The method 10 can then be repeated based on the further object data. With the addition of the further object data, a second set of object data is formed to which a clustering algorithm is applied, which may produce modified clusters of object data and thus a different identification of occupancy zones.
[0030] It will be appreciated that the second set of object data may comprise at least a portion of the first set of object data, based on an overlap in the time instances in the captured sensor data from which the object data was derived. For example, the method of Figure 1 may apply a rolling time-window during which the captured object data is selected to define the set of object data to which the clustering algorithm is applied. The precise manner by which old object data is discarded and new object data is accepted for the purposes of performing clustering will depend on the application to which these techniques are applied and the rate at which it is anticipated that occupancy zones may change over time.
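Such a rolling time-window could be realised along the following lines; this is a sketch under the assumption that each detection carries a timestamp, and the window length is application-specific:

```python
from collections import deque

class RollingObjectData:
    """Keep only detections whose timestamps fall within the most recent
    window; older detections are discarded before clustering."""

    def __init__(self, window_seconds: float):
        self.window_seconds = window_seconds
        self._buffer: deque = deque()   # (timestamp, detection) pairs, oldest first

    def add(self, timestamp: float, detection) -> None:
        self._buffer.append((timestamp, detection))
        # Drop detections that have fallen out of the time-window.
        while self._buffer and timestamp - self._buffer[0][0] > self.window_seconds:
            self._buffer.popleft()

    def current_set(self) -> list:
        """The set of object data to which clustering is next applied."""
        return [d for _, d in self._buffer]
```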
[0031] Figure 2 illustrates an example system 20 configured to implement the method of Figure 1. System 20 comprises a sensor 21, a gateway device 22, and a server 23, which are communicatively coupled to one another.
[0032] Sensor 21 is configured to monitor at least a portion of the space for which occupancy zones are to be determined and to capture data representing at least a portion of the state of the space at a plurality of different time instances. The plurality of time instances need not be consecutive but could be so. It is anticipated that the spacing and window of time instances is application specific.
[0033] The sensor may be an optical camera, an infrared camera, or any form of sensor able to capture data upon which object detection can be performed. For example, the sensor could be an image sensor capable of detecting images in colour, black and white, or infrared (for example, where the space is not visibly illuminated, infrared would enable the space to be viewed by detecting variations in temperature), or a mixture of the above. The sensor could also use millimetre-wave radar (e.g. 60 GHz), sonar, or other sensor types. In an example, the sensor 21 may be formed of multiple discrete sensing components that each capture different data types, the results of which are combined to generate a sensor data packet representing the state of the monitored space at an instance of time.
[0034] The number of operations described in Figure 1 that are performed at the sensor 21 may vary depending on the implementation constraints; for example, the operations of the sensor may be limited based on the processing capabilities of the sensor 21 or on the available communication bandwidth between the sensor 21 and the gateway device 22. In some examples, the application of an object detection algorithm to the sensor data to generate object data may be performed at the sensor 21, such that the sensor transmits object data to the gateway device 22. This requires that the sensor 21 is operable to perform object detection locally. In other arrangements, no object detection operations may be performed at the sensor 21 and the object detection operations may be offloaded to one or more of the gateway device 22 and the server 23, such that the sensor 21 transmits sensor data to the gateway device 22 to enable the gateway device 22 to perform object detection.
[0035] In other arrangements, partial processing of the sensor data may be achieved. For example, the sensor 21 may be configured to perform feature extraction, which would reduce the amount of data transmitted from the sensor 21 to the gateway device 22 and reduce the workload at the gateway device 22. However, this comes at the cost of increased processing requirements for the sensor.
[0036] In alternative embodiments, depth data may also be extracted at sensor 21 or at gateway device 22 as discussed later.
[0037] As described in relation to Figure 1, an object detection algorithm is performed on the sensor data to obtain object data. The result of running the object detection algorithm on the sensor data is that, for each time instance, object data identifying one or more objects is generated. The objects detected may be selected to be objects of a second object type, i.e. objects that are indicative of occupancy within the space.
[0038] Object detection may be performed using a separate object detector, or in the sensor 21 or gateway device 22. The object detection algorithm may use algorithmic computer vision techniques, or semantic SLAM (Simultaneous Localisation and Mapping) based on the image data obtained by the sensor 21. In an alternative approach, object detection is performed using Machine Learning (ML) techniques, such as a Convolutional Neural Network (CNN). As discussed previously, the sensor data used may be one or more of visual image, infrared, and millimetre-wave radar sensor data. The object detection may also use depth data from a depth sensor or a mono-depth CNN.
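The disclosure does not mandate a particular detector. As one hedged example, a pretrained torchvision Faster R-CNN could generate person detections from a camera frame; the model choice, the score threshold, and the use of COCO class index 1 for "person" are assumptions of this sketch:

```python
import torch
import torchvision

# Pretrained COCO detector; in the COCO label map, class index 1 is "person".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_people(frame: torch.Tensor, score_threshold: float = 0.7) -> torch.Tensor:
    """frame: (3, H, W) float tensor with values in [0, 1].

    Returns an (M, 4) tensor of person bounding boxes as [x1, y1, x2, y2],
    one row per detection above the score threshold.
    """
    with torch.no_grad():
        output = model([frame])[0]
    keep = (output["labels"] == 1) & (output["scores"] >= score_threshold)
    return output["boxes"][keep]
```

Each retained box, together with the frame's time instance, would contribute one entry to the set of object data that is clustered at step 12.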
[0039] In addition to the above-described approaches, image segmentation may be employed to determine where the detected object is within the captured image or scene. The object detection algorithm may generate image segmentation information directly. Alternatively, image segmentation may be performed in a separate pass, for example by using deconvolution layers in a CNN.
[0040] The position of the object in the image, the position of the sensor unit, and the zoom and region of focus of the sensor 21 may be used to determine the x, y, and optionally z location or position of the object in the space. This may be stored alongside time data t indicative of the time at which the data was obtained or the data's absolute or relative position within a sequence of captured object data. Depth data may be obtained from a depth sensor that forms part of sensor 21 or from a separate discrete component in communication with the gateway device 22. The depth sensor may be employed to derive the depth z, or a mono-depth CNN may be used to more accurately determine the x, y, z location in the space.
[0041] In some examples, object recognition is performed locally in the sensor unit, and the output of the sensor unit is an x, y, and optionally z location in the space, as well as an indication as to whether the object is of a first or second object type. Other embodiments may transfer the sensor data to a separate object detector; this unit may process multiple sensor feeds. The object detector may be located in the gateway device 22 or may be a distinct system component.
[0042] Wherever the object detection is performed, whether at the sensor 21 or at the gateway device 22, the object data forms a set of object data to which a clustering algorithm is applied. In one example, the clustering is performed at gateway device 22 based on a set of object data stored at the gateway device 22. In some embodiments, clustering may be performed locally at the sensor 21. As such, the object detection and clustering may be performed locally at a single device. In some embodiments, the identification of an occupancy zone may also be undertaken at the sensor 21.
[0043] The clustering algorithm seeks to identify one or more clusters of object data which are indicative of regions of the space in which objects of interest have been found to reside more regularly within the pre-defined time-window defined by the instances of time at which sensor data is captured. The cluster data relating to clusters of object data enables the identification of an occupancy zone by recognizing that the regular or weighted presence of an object type at a specific location within the space is indicative of that region being an occupancy zone for the space.
[0044] In examples where the sensor data does not include depth information, the clustering may be performed based on three dimensions of object data, namely x, y, and t, where x and y represent the position of the cluster region in two-dimensional space and t represents the time instance at which the object presence was determined.
[0045] Having identified clusters of related object data types at locations within the space, the cluster data is transmitted to server 23, which processes the cluster data to determine one or more occupancy zones. The processing of the cluster data may involve a confidence threshold above which a cluster of object data is determined to be an occupancy zone, or a more nuanced identification of occupancy zones may occur, as will be described in more detail later.
[0046] Figure 3 shows schematically an example space to which the methods and systems described herein can be applied.
[0047] In the example of Figure 3, the space is an office space in which there are occupancy zones such as desks and/or seats at which people can work and workers who can occupy such occupancy zones. In this example, the location, size, and capacity of the occupancy zones in the space may change over time. For example, the position of the desks may differ over the course of several days, weeks, or months as the office is re-configured or the size of a workspace may change over time.
[0048] As can be seen in Figure 3, a sensor monitoring the space is able to capture sensor data 31 from the space at a time instance t. In this example, the sensor data 31 is image data and the captured image includes image data relating to three people 311, 312, 313 in the space as well as two desks 314 and 315. For example, an office space may contain one or more sensors. A sensor may offer a 60-degree, 90-degree, wide-angle, or 360-degree field of view and may be mounted on the ceiling or a wall.
[0049] In this example, the object detection algorithm is configured to detect only second object types, being configured for person detection, since people are candidates to occupy an occupancy zone. However, it will be appreciated that it is also possible for the object detection algorithm to be configured to detect first object types, i.e. objects that define the occupancy zones. For example, the object detection algorithm could be configured to detect the presence of desks 314, 315, and/or chairs and utilize that object data in determining the presence of an occupancy zone.
[0050] As discussed, the sensor data 31 captured at a time instance is processed using an object detection algorithm to generate object data 32 identifying the presence of one or more objects within the space. In the example of Figure 3, the object detection algorithm is configured to perform person detection and has generated object data 32 that identifies three people 321, 322, 323. In this example, each of these detected objects is identified using a respective position and a bounding box to outline each object 321, 322, 323. Each object can be represented by one or more multidimensional datasets.
[0051] The generation of object data 321, 322, 323 by the object detection algorithm is repeated using different sensor data 31 captured at different time instances to generate a set of object data representing the presence of objects in the space. In this example, the set of object data might comprise tens, hundreds, or thousands of occurrences of objects captured at different time instances over the course of minutes, hours, days, or longer, in an attempt to characterize the location of people within the space over the course of the usage of the space.
[0052] An example of the set of object data 33 is illustrated in Figure 3. As shown in this example, despite some captured object data identifying the presence of an object at isolated locations, such as location 332, there are two significant regions at which person objects have been detected, namely the clusters of object instances illustrated at 331 and 333. Regions 331 and 333 contain significantly more instances of the presence of a person object and thus can be considered to give a greater level of confidence that these regions represent occupancy zones within the space.
[0053] As illustrated at 34, cluster data is generated based upon a cluster of instances 341, 342 of a detected object according to a clustering algorithm. This cluster data may represent cluster regions in which there is a higher probability that there is an occupancy zone within the space. Having generated the cluster data, it is then possible to utilize the cluster data to determine one or more occupancy zones.
[0054] In some arrangements, the cluster data of first and/or second object types could be used to define an occupancy type. An occupancy type can be considered to be an indication as to the nature of the occupancy zone. For example, in an office space, a desk could be configured as a hot desk or a static desk and the nature of the cluster data could be used to infer the occupancy type.
[0055] Figure 4 illustrates a further flow diagram which represents a method of determining the current occupancy level of an occupancy zone of a space in an adaptive manner as the use and occupancy of the space changes over time.
[0056] The method of determining current occupancy level comprises the method 10 of determining occupancy zones as illustrated in Figure 1 and elements described with respect to Figure 1 apply equally to Figure 4.
[0057] As previously described, the method 10 of determining occupancy zones comprises obtaining object data 11, performing clustering on the object data 12, identifying one or more occupancy zones 13, and outputting occupancy zone data 14 representing the occupancy zones. The method then returns to step 11, in which further object data is obtained and the occupancy zone data is adaptively re-determined.
[0058] Method 40 comprises obtaining the occupancy zone data identifying one or more occupancy zones. Following identification of the one or more occupancy zones, further object data may be obtained which identifies the presence of one or more objects at subsequent time instances. In the method 40 of Figure 4, the object detection algorithm utilized to generate the further object data obtained at step 41 is configured to detect the presence of objects which are second object types, i.e. candidates for occupying an occupancy zone, such as a person, rather than objects which define the occupancy zone (first object types).
[0059] At step 42, the occupancy level of an occupancy zone is determined based on a determination as to whether an object is detected within a determined occupancy zone. This determination may be made in a number of different ways, for example based on an overlap and/or proximity of object cluster data and an occupancy zone. The skilled person will appreciate that other methods are available.
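One simple way to realise step 42, as a sketch, is to reduce zones and detections to 2-D points and count a zone as occupied when a detection falls within some radius of its centroid; the radius is a placeholder, and an overlap test against zone boundaries would work equally well:

```python
import numpy as np

def occupancy_levels(zone_centroids: np.ndarray,
                     detections: np.ndarray,
                     radius: float = 1.0) -> np.ndarray:
    """For each zone centroid in a (Z, 2) array, count the detections in
    an (N, 2) array that fall within `radius` of it.

    A count greater than zero means the zone is currently occupied.
    """
    # Pairwise distances between every zone centroid and every detection.
    diffs = zone_centroids[:, None, :] - detections[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return (dists <= radius).sum(axis=1)    # occupants per zone
```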
[0060] At step 43, the determined occupancy level can be output in a similar manner to the occupancy zone data at step 14.
[0061] It will be appreciated that the time-instance windows over which methods 10 and 40 operate will vary. For example, it might be expected that the location and presence of an occupancy zone varies at a much slower rate than the occupancy level of such occupancy zones. However, the rate at which the occupancy zone location and occupancy level vary will depend on the specific application to which these methods have been applied.
[0062] In the example of Figure 4, it is assumed that a space is either occupied or not based on the presence of a detected object in the occupancy zone. This method therefore assumes that there is a 1-to-1 relationship between occupant and occupancy zone. However, in some examples there may be a 1-to-many relationship between occupant and occupancy zone, for example where more than one person can be seated at a single desk. In order to determine the occupancy level in spaces with a 1-to-many relationship, it may in some examples be necessary to determine the occupancy capacity of an occupancy zone.
[0063] Figure 5 illustrates a further example flowchart in which a method 50 of determining the occupancy capacity of an occupancy zone is set forth.
[0064] Method 50 begins by obtaining occupancy zone data at step 51, as well as some indication of the object data obtained at step 11. Alternatively, further object data obtained at a subsequent time instance may be utilized for determining occupancy capacity.
[0065] At step 52, a determination as to the capacity of an occupancy zone is made. In a first example, the occupancy capacity might be determined based upon the number of identified objects in the occupancy zone at a given instance in time. For example, over the course of multiple time instances, the occupancy capacity could be determined as the maximum number of identified objects in the occupancy zone.
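That first heuristic might be sketched as follows, under the assumption that each detection falling inside the zone is tagged with its time instance (a hypothetical representation, for illustration only):

```python
from collections import Counter

def occupancy_capacity(detection_times: list[float]) -> int:
    """detection_times: the time instance of each object detected inside
    the zone. Capacity is taken as the maximum number of objects observed
    in the zone at any single time instance."""
    counts = Counter(detection_times)        # objects per time instance
    return max(counts.values(), default=0)
```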
[0066] More complex approaches for determining capacity might require the presence of a number of objects within an occupancy zone for a pre-determined period, or for a pre-determined number of time instances, either consecutively or within a time-window.
[0067] The approaches set out in Figures 4 and 5 can be combined to determine, for the multiple occupancy of an occupancy zone, both a capacity of an occupancy zone, where that capacity is greater than one occupant, and the proportion of that capacity that is occupied.
[0068] Figure 6 illustrates a variant of system 20 of Figure 2 in which a plurality of gateway devices is configured within the system.
[0069] Figure 6 illustrates a system 60 comprising a plurality of gateway devices 22 that are configured to communicate with the server 23. Each gateway device 22 is configured to receive data from a plurality of sensors 21, as described above in relation to Figure 2. As such, the plurality of sensors 21 may provide coverage of a space and may collectively provide occupancy zone data for an entire space, where each sensor 21 covers a portion of the entire space.
[0070] It will be appreciated that there are a number of possible applications to which these disclosed techniques may be applied. For example, these techniques could be applied to a car park or street parking, where the occupants are automobiles or motorbikes; libraries, where the occupants are books and the occupancy zones are bookshelves; restaurants, where the occupants are people and the occupancy zones are tables and chairs; and goods in supermarkets or warehouses, where the occupants are stock sitting on shelves.
[0071] By providing the above-described techniques, it is possible to make determinations regarding the occupancy zones and occupancy levels in a space as they dynamically change, without the requirement for a map or schematic outlining the layout or configuration of the space. Moreover, these techniques can be utilized to enable population of such a map or layout of a space.
[0072] The techniques can also be used with known occupancy zones to determine the presence of objects, such as people, in the space, or to detect specific people and determine their presence.
[0073] Approaches described herein can also be applied to determine the nature of the space or the nature of the usage of the space; for example, it is possible for the system to determine whether hot-desking or static desk allocations are utilized.
[0074] As described above, a sensor may include one or more of visual image sensors, infra-red sensors, millimetre-wave radar sensors, or depth sensors (e.g. Time of Flight (ToF), or active or passive stereoscopic depth sensors). Alternatively, depth information may be determined by processing the image data through a mono-depth Convolutional Neural Network (CNN). In an embodiment, a sensor may comprise multiple sensor types, for example a visual sensor, an infrared sensor, and a depth sensor. The infrared sensor is used more predominantly in low-light conditions. Depending upon the space, a sensor may be located on a mounting on a wall or on the ceiling. The sensor units may have different fields of view, for example a 360-degree FoV (Field of View) or a camera FoV, depending upon the sensor unit location and mounting.
[0075] A sensor may be controllable, potentially allowing magnification and direction of the sensor to be controlled. The sensor may be able to signal its effective FoV (direction, magnification etc.) and other parameters to the object detection algorithm to aid in the detection of objects.
[0076] The above description refers to clustering, which can be regarded as a technique that involves the grouping of data points. A clustering algorithm is used to classify each data point into a specific group, such as a cluster group or region. Data points that are in the same group should have similar properties, which in this disclosure may mean similar position within the space.
[0077] The skilled person will appreciate that a number of clustering algorithms may be used, for example K-Means, Mean-Shift, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise), Expectation-Maximization (EM) with Gaussian Mixture Models (GMM), and Agglomerative Hierarchical Clustering.
[0078] The components of the systems of Figures 2 and 6 comprise computing systems including at least one processor. The at least one processor is, for example, configured to perform at least a portion of the methods described herein. In this example, the computing system includes a central processing unit (CPU). The computing system may also include a CNN accelerator, which is a processor dedicated to implementing the classification of data using a neural network. In other examples, though, the processor may include other or alternative processors such as a microprocessor, a general-purpose processor, a digital signal processor (DSP), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a neural network accelerator (NNA), a neural network processor (NNP), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any suitable combination thereof designed to perform the functions described herein.
[0079] The components of the systems of Figures 2 and 6 may also or alternatively include a processor implemented as a combination of computing devices, for example a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0080] The components of the systems of Figures 2 and 6 may further comprise memory such as a random-access memory (RAM), for example DDR-SDRAM (double data rate synchronous dynamic random-access memory). In other examples, the storage may be or include a non-volatile memory such as Read Only Memory (ROM) or a solid-state drive (SSD) such as Flash memory or Non-Volatile RAM (NVRAM). The storage in examples may include further storage devices, for example magnetic, optical or tape media, compact disc (CD), digital versatile disc (DVD), or other data storage media. The storage may be removable or non-removable from the computing system.
[0081] Methods described herein may be implemented in hardware or software or a combination thereof. For example, instructions for performing the methods may be held in transitory or non-transitory storage such as that described above and, when executed by a processor, cause the processor to perform the methods described herein. The methods described herein may be embodied in a computer program product.
[0082] It is to be appreciated that the systems are merely examples and other processing systems may be used in other examples.
[0083] It is to be understood that any feature described in relation to any one example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the examples, or any combination of any other of the examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the accompanying claims.

Claims (21)

  1. A method of determining one or more occupancy zones of a space, the method comprising: obtaining object data identifying one or more objects detected in the space at a plurality of time instances; performing clustering on the object data to identify one or more clusters of object data; and identifying, based on the clusters of object data, occupancy zone data representing one or more identified occupancy zones in the space.
  2. The method of claim 1, wherein sensor data is processed to determine the received object data.
  3. The method of claim 2, wherein the sensor data is processed using a Convolutional Neural Network (CNN) to determine the received object data.
  4. The method of claim 2, wherein the sensor data is obtained from a visible light image sensor, an Infra-Red image sensor, or millimetre wave radar.
  5. The method of any of claims 2 to 4, wherein the sensor data also comprises depth data.
  6. The method of claim 5, wherein the depth data is obtained from a time-of-flight (ToF) sensor, a mono-depth CNN, or a stereo depth sensor.
  7. The method of any of claims 1 to 6, further comprising determining, based on obtained object data identifying one or more objects detected in the space, an occupancy capacity for at least one of the identified one or more occupancy zones.
  8. The method of claim 7, wherein the obtained object data used to determine the occupancy capacity is further object data obtained at a time instance subsequent to the time instances of the object data used to identify the occupancy zones.
  9. The method of any of claims 1 to 8, further comprising: determining, based on the object data and the occupancy zone data, an occupancy level for the at least one occupancy zone.
  10. The method of claim 9, further comprising obtaining further object data, and wherein the determining of the occupancy level is based on the further object data for the at least one occupancy zone.
  11. The method of any preceding claim, wherein the obtaining object data comprises obtaining object data relating to at least one object of a first object type and obtaining object data relating to at least one object of a second object type.
  12. The method of claim 11, wherein the second object type comprises objects that are candidates to occupy an occupancy zone.
  13. The method of claim 11 or claim 12, wherein the first object type comprises objects that are candidates to define an occupancy zone.
  14. The method of any preceding claim, wherein the space is an office space and the occupancy zones comprise desks and the one or more objects comprise people.
  15. The method of any of claims 1 to 13, wherein the space is a vehicle parking area and the occupancy zones comprise vehicle parking spaces and the one or more objects comprise vehicles.
  16. The method of any preceding claim, further comprising defining, using cluster object data, an occupancy zone type.
  17. The method of claim 13, further comprising using object data identifying an occupancy zone to define an occupancy zone type.
  18. The method of any preceding claim, further comprising: obtaining further object data identifying one or more objects detected in the space at a plurality of further time instances, the further time instances being subsequent to the plurality of time instances; performing clustering on the further object data to identify one or more further clusters of the further object data; and identifying modified occupancy zone data based at least in part on the further clusters of the further object data.
  19. The method of claim 18, wherein identifying the modified occupancy zone data comprises performing clustering based at least in part on the object data and the further object data.
  20. A system to determine one or more occupancy zones of a space, the system comprising a processor to: obtain, at the system, object data identifying one or more objects detected in the space at a plurality of time instances; perform, using processing circuitry, clustering on the object data to identify one or more clusters of object data; and identify, using the processing circuitry and based on the clusters of object data, occupancy zone data representing one or more identified occupancy zones in the space.
  21. A computer-readable medium comprising instructions which, when executed by processing circuitry, cause the processing circuitry to perform the method of any of claims 1 to 19.
GB1914441.9A 2019-10-07 2019-10-07 System and method for determining occupancy zones Withdrawn GB2588106A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1914441.9A GB2588106A (en) 2019-10-07 2019-10-07 System and method for determining occupancy zones
PCT/GB2020/052196 WO2021069857A1 (en) 2019-10-07 2020-09-11 System and method for determining occupancy zones

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1914441.9A GB2588106A (en) 2019-10-07 2019-10-07 System and method for determining occupancy zones

Publications (2)

Publication Number Publication Date
GB201914441D0 GB201914441D0 (en) 2019-11-20
GB2588106A true GB2588106A (en) 2021-04-21

Family

ID=68541216

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1914441.9A Withdrawn GB2588106A (en) 2019-10-07 2019-10-07 System and method for determining occupancy zones

Country Status (2)

Country Link
GB (1) GB2588106A (en)
WO (1) WO2021069857A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114924274B (en) * 2022-04-08 2023-06-30 苏州大学 High-dynamic railway environment radar sensing method based on fixed grid

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017114846A1 (en) * 2015-12-28 2017-07-06 Robert Bosch Gmbh Depth sensing based system for detecting, tracking, estimating, and identifying occupancy in real-time
WO2018204332A1 (en) * 2017-05-01 2018-11-08 Sensormatic Electronics, LLC Space management monitoring and reporting using video analytics
US20180324393A1 (en) * 2017-05-05 2018-11-08 VergeSense, Inc. Method for monitoring occupancy in a work area

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017114846A1 (en) * 2015-12-28 2017-07-06 Robert Bosch Gmbh Depth sensing based system for detecting, tracking, estimating, and identifying occupancy in real-time
WO2018204332A1 (en) * 2017-05-01 2018-11-08 Sensormatic Electronics, LLC Space management monitoring and reporting using video analytics
US20180324393A1 (en) * 2017-05-05 2018-11-08 VergeSense, Inc. Method for monitoring occupancy in a work area

Also Published As

Publication number Publication date
GB201914441D0 (en) 2019-11-20
WO2021069857A1 (en) 2021-04-15

Similar Documents

Publication Publication Date Title
Amato et al. Deep learning for decentralized parking lot occupancy detection
Xu et al. Depth information guided crowd counting for complex crowd scenes
WO2019137137A1 (en) Activity recognition method using videotubes
US20180150704A1 (en) Method of detecting pedestrian and vehicle based on convolutional neural network by using stereo camera
Ryan et al. Scene invariant multi camera crowd counting
Tian et al. Robust and efficient foreground analysis in complex surveillance videos
EP2131328A2 (en) Method for automatic detection and tracking of multiple objects
CN114022830A (en) Target determination method and target determination device
Han et al. Image crowd counting using convolutional neural network and Markov random field
JP2009064410A (en) Method for detecting moving objects in blind spot of vehicle and blind spot detection device
Zulkifley et al. Robust hierarchical multiple hypothesis tracker for multiple-object tracking
EP2951783B1 (en) Method and system for detecting moving objects
US10824881B2 (en) Device and method for object recognition of an input image for a vehicle
Farag et al. Deep learning versus traditional methods for parking lots occupancy classification
US11921774B2 (en) Method for selecting image of interest to construct retrieval database and image control system performing the same
WO2021069857A1 (en) System and method for determining occupancy zones
Sharma et al. Automatic vehicle detection using spatial time frame and object based classification
Kalaiselvi et al. A comparative study on thresholding techniques for gray image binarization
KR101337423B1 (en) Method of moving object detection and tracking using 3d depth and motion information
Fei et al. Change detection in remote sensing images of damage areas with complex terrain using texture information and SVM
Kwak et al. Boundary detection based on supervised learning
Sathesh et al. Speedy detection module for abandoned belongings in airport using improved image processing technique
Bagwe Video frame reduction in autonomous vehicles
US20210343029A1 (en) Image processing system
Nayan et al. Real time multi-class object detection and recognition using vision augmentation algorithm

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: SEECHANGE TECHNOLOGIES LIMITED

Free format text: FORMER OWNER: ARM IP LIMITED

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)