GB2584619A - Electronic counting device and method for counting objects


Info

Publication number
GB2584619A
GB2584619A (application GB1907298.2A / GB201907298A)
Authority
GB
United Kingdom
Prior art keywords: data, radar, camera, count, people
Legal status: Withdrawn
Application number
GB1907298.2A
Other versions
GB201907298D0 (en)
Inventor
Jones Alistair
Zeni Claudio
Ling Martin
Ziarowski Maciej
Booker Aidan
Mackevicius Aurimas
Nicholls Carl
Current Assignee
Local Data Co Ltd
Original Assignee
Local Data Co Ltd
Application filed by Local Data Co Ltd
Priority to GB1907298.2A
Publication of GB201907298D0
Publication of GB2584619A


Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 9/00 Individual registration on entry or exit
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G06F 18/24137 Distances to cluster centroids
    • G06F 18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

An electronic counting device 114 for counting objects proximal to the electronic counting device comprises: a radar device 130 for detecting and/or tracking objects within the field of view of the radar device and outputting radar data indicative of the number of objects detected within the field of view over a predetermined acquisition time period; a camera 132 aligned with the field of view of the radar device to capture an image or a video over the period; and a processor 134 which controls the operation of the camera and the radar device and receives the output radar data and the image or video from the camera. The processor operates with an external communication interface to transfer count data from the radar device and image data from the camera to a remote server 124. The camera data is used by the server to corroborate the radar data and to derive a count of objects proximal to the electronic counting device within the predefined region for the period. In a further invention, radar data and image data are received by a server, which corroborates the received radar data using the received image data and determines a count of objects from the corroborated data.

Description

ELECTRONIC COUNTING DEVICE AND METHOD FOR COUNTING OBJECTS
TECHNICAL FIELD
The present invention relates to a device and method for counting objects. In particular, but not exclusively, the present invention relates to a sensor device and method for counting people using detection, localisation and tracking.
BACKGROUND
Many conventional object counting systems, such as people counting systems, are based on video camera imaging. Such systems capture images of the scene in which people may be present and then use image analysis to count the number of individual people within the scene. However, these types of systems suffer from several drawbacks. For example, camera-based techniques can become unreliable and inaccurate during periods of adverse lighting conditions. More specifically, issues arise when there is too much sunlight during the day, causing captured images to become over-exposed, or too little light at night, leaving captured images under-exposed. The accuracy of these techniques can also be affected by unfavourable weather conditions, for example fog or rain, which detract from the quality of the captured images to the detriment of the performance of the device. A further problem is that camera systems can sometimes make mistakes in differentiating between objects and people. For example, strong sunlight can generate strong shadows, which can lead to the system counting a shadow as an additional person. Also, obstructions in the visual scene can cause further issues with count accuracy in such systems.
Various other people counting devices are known, such as one-dimensional infra-red beam counters which can be placed in doorways and used to count the number of times a beam is broken as a proxy for the number of people passing through that doorway. However, these counters can be highly inaccurate when multiple people pass through the beam at the same time, when a single person turns around and passes back through the beam, and when people linger at the location of the beam.
Thermal imaging sensors are also known for use in systems for counting people. These systems have an array of thermal sensors to detect heat sources (people) passing through a field of view. Whilst they can in theory track the movement of a person even if that person changes direction, these systems tend to be unreliable because they are susceptible to giving different results depending on the ambient temperature. Furthermore, the resolution of the images generated is sometimes insufficient to enable groups of two or more people to be differentiated.
An alternative sensor technology, which has been used more recently for object detection, localisation and tracking, is Ultra-Wide Band (UWB) radar sensor technology. UWB is a radio technology that can use very low energy level radiation for short-range high-bandwidth communications over a large portion of the radio spectrum. These types of systems have been used primarily indoors as an alternative to the Global Positioning System (GPS) because of GPS's signal propagation limitations, which do not exist for UWB sensor technology. UWB technology has also been used for counting people moving through confined passageways, for example. However, UWB radar sensor technology is generally considered to be susceptible to distorting reflections from large metallic objects, such that the use of UWB radar sensors for people counting has been confined to either indoor use or use in a confined passageway.
In comparison to conventional camera-based techniques, UWB radar sensors have poor resolution, which makes their results inaccurate for counting large numbers of people in a scene, for example counting people that are moving as part of a large and/or dense group. More specifically, UWB radar sensors are not capable of producing accurate counts of people moving within a large crowd made up of several sub-groups of people where the people are in close proximity to one another. This limitation has restricted the use of UWB radar sensor technology in outdoor people counting applications.
The present invention has been devised against the above background, and, therefore, aims to overcome or at least partly mitigate one or more of the above problems.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention, there is provided an electronic counting device for counting objects proximal to the electronic counting device. The electronic counting device comprises a radar device for detecting and/or tracking one or more objects in a scene within a predefined region associated with a field of view of the radar device, wherein the radar device is configured to output radar data indicative of the number of objects detected within the field of view over a predetermined acquisition time period. The electronic counting device also comprises a camera aligned with the field of view of the radar device and being configured to capture an image or a video of the scene, wherein the camera is configured to output camera data regarding the field of view, over the predetermined acquisition time period. The electronic counting device further comprises a processor connectable to the radar device and the camera, the processor being configured to control the operation of the camera and the radar device for data acquisition, to receive the output radar data and the output camera data and determine a communications channel to be used to send the radar data and the camera data to a remotely-located processing server. The electronic counting device further comprises one or more communications engines, operatively connectable to the processor and in communication with the remotely-located processing server, the one or more communications engines being configured to transmit the radar data and the camera data to the remotely-located processing server via the communications channel, such that the camera data can be used to corroborate the radar data and to derive a count of objects proximal to the electronic counting device within the predefined region for the predetermined acquisition time period.
The scene in which one or more objects are detected and/or tracked may be, for example, a portion of a predefined region associated with a field of view of the radar device, where the field of view encompasses a pavement region in front of a commercial or residential building. In all embodiments of the present invention, the field of view of the radar device may be adjustable, such that the region in which objects are detected and/or tracked can be controlled. For example, the field of view of the radar device can be adjusted to include the pavement region and exclude a road portion within the scene. Advantageously, this can prevent distortion of the captured radar data by large anomalous objects, e.g. metallic objects such as a car or a bus, where the electronic counting device is used for detecting and/or tracking people within a pavement region. This enables the present invention to be suitable for outdoor use as well as indoor use or use in a confined passageway. In all embodiments of the present invention, the camera may also be adjustably aligned with the field of view of the radar device.
A communications channel is defined as a type of communications connection, for example a mobile data connection such as 3G or 4G, or a Wi-Fi connection. The communications channel is used by the one or more communications engines to send the radar data and the camera data to the remotely-located processing server.
The electronic counting device enables better connectivity with external communication networks and better data processing than known devices. This in turn enables cloud-based radar data improvements and data aggregation to be carried out. Moreover, using the camera data to corroborate and adjust the radar data enables reliable and accurate counts of objects, such as passers-by on a street, to be obtained at a given location over a predetermined time period.
The present invention enables reliable and accurate counts even during periods of adverse lighting conditions. In addition, the present invention allows multiple objects present within a scene to be detected and/or tracked, regardless of the direction of movement of the object or the time spent by the object in the predefined region of the scene. Moreover, the resolution provided by the counting device of the present invention enables groups of two or more people to be differentiated. The present invention thereby produces accurate results for counting large numbers of objects in a scene, for example counting objects that are moving as part of a large and/or dense group.
Furthermore, the provision of a camera in the electronic counting device enables the position of the device to be optimised on installation. The camera can also be used to detect any changes or anomalies in the site configuration or the counts produced.
The radar data may comprise an object count indicating the number of objects detected by the radar device for a predetermined frame time period. Advantageously, the radar device may be configured to output the object count such that it can be sent to a remotely-located processing server with no further processing of the radar data being required. The object count may be for stationary or moving objects.
The radar device may be configured to track movement of objects by acquiring points data in frames over the predetermined acquisition time period, the points data being acquired in one of the frames for a predetermined frame time period, wherein the points data represent detection of objects within the scene, convert corresponding points in the frames acquired over the predetermined acquisition time period into tracks, each track representing movement of an object in the scene over the predetermined acquisition period, and output the track data to the processor, and wherein the processor may be configured to convert the track data into radar count data. Advantageously, the movement of objects can be tracked to produce accurate radar counts for stationary or moving objects.
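By way of illustration only, the following Python sketch shows how track data of this kind might be reduced to a radar count. The Track structure and the min_frames noise threshold are assumptions made for this example and are not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical track: one object's (x, y) positions over successive frames."""
    track_id: int
    positions: list  # [(x, y), ...] in metres, one entry per frame

def count_tracks(tracks, min_frames=3):
    """Count one object per track, ignoring short-lived tracks that are
    likely noise. The min_frames threshold is illustrative, not from the patent."""
    return sum(1 for t in tracks if len(t.positions) >= min_frames)

# Example: two genuine tracks and one single-frame spurious detection.
tracks = [
    Track(1, [(0.5, 2.0), (1.0, 2.1), (1.5, 2.0), (2.0, 1.9)]),
    Track(2, [(4.0, 3.0), (3.5, 3.1), (3.0, 3.0)]),
    Track(3, [(2.2, 2.2)]),  # too short to count
]
print(count_tracks(tracks))  # -> 2
```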
The processor may be configured to aggregate radar count data for a predefined aggregate time period, such that the aggregate radar count data can be sent to the one or more communication engines for transmitting to the remotely-located processing server. Advantageously, aggregating radar count data for a predefined aggregate time period provides a more efficient way of sending large amounts of data to the one or more communication engines and onto the remotely-located processing server.
The radar count data may comprise object counts that have been obtained by counting object movement from left to right (left-to-right counts) and/or by counting object movement from right to left (right-to-left counts). The ability of the device to count object movement in different directions can be important for tracking movement of objects and trends in the direction of movement. For example, left-to-right counts and right-to-left counts may both be obtained such that the left-to-right counts may be compared with the right-to-left counts to determine the direction from which the majority of people are travelling. Obtaining one of the left-to-right counts or the right-to-left counts may be useful, for example, for determining the flow of people towards a particular landmark. The device may count object movement in other directions, for example toward a store entrance or around an obstacle. This would be particularly useful for determining the effect of objects, such as a landmark or building, within or around the scene on the direction of movement of objects.
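As a non-limiting sketch, directional counts of this kind can be derived by testing which side of a virtual line each track starts and ends on. The line position and the track representation (a list of (x, y) points) are assumptions made for this example.

```python
def directional_counts(tracks, line_x=2.0):
    """Classify each track as left-to-right or right-to-left depending on
    which side of a virtual line (x = line_x) it starts and ends on."""
    left_to_right = right_to_left = 0
    for positions in tracks:
        start_x, end_x = positions[0][0], positions[-1][0]
        if start_x < line_x <= end_x:
            left_to_right += 1
        elif start_x >= line_x > end_x:
            right_to_left += 1
    return left_to_right, right_to_left

tracks = [
    [(0.5, 2.0), (1.5, 2.0), (3.0, 2.1)],  # crosses left to right
    [(4.0, 3.0), (2.5, 3.0), (1.0, 3.0)],  # crosses right to left
    [(0.2, 1.0), (1.0, 1.2)],              # never crosses the line
]
print(directional_counts(tracks))  # -> (1, 1)
```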
The electronic counting device may further comprise an accelerometer for detecting changes in movement or orientation of the electronic counting device, wherein the accelerometer is configured to generate accelerometer data associated with the changes in movement or orientation of the electronic counting device for sending to the remotely-located processing server via the communications channel. Advantageously, detecting changes in movement or orientation of the device and sending the accelerometer data to the server enables changes or anomalies in object counts to be identified.
The accelerometer can also be used to monitor the stability of the device by sensing movement of the device and subsequently sending an alert to the processor of the device and subsequently onto the remotely-located processing server.
The accelerometer may be configured to generate the accelerometer data upon receiving a signal from the remotely-located processing server via the processor, the signal being associated with an anomaly in the radar count data. Advantageously, any changes or anomalies in the counts are detected while the counts are being considered on the server, which enables the cause of the change or anomaly to be assessed without delay.
The radar device may be configured to define multiple sub-regions in the field of view of the radar device and in use to obtain radar count of objects moving from one sub-region to another sub-region.
This feature enables a wide range of situations to be analysed for the purpose of tracking movement of objects. For example, the number of objects moving from a first sub-region to a second sub-region can be compared to the number of objects moving across the first sub-region only in order to determine a conversion metric of the number of objects moving across a scene, e.g. a pavement region, in comparison to the number of objects moving toward a particular region of interest, e.g. a store front.
The radar device may be configured to determine a time period taken for an object to move from one predefined sub-region to another predefined sub-region. This enables dwell times to be determined, e.g. the time spent by an object in a first predefined sub-region before moving into a second predefined sub-region.
The radar device may be configured to determine a time period taken for an object to move across a single sub-region. This feature can be useful for determining the time taken for an object to pass by; this timing can be used as a measure of how much time an object has spent passing, for example, a commercial or residential building. This can be used for a variety of applications; for example, if a relatively long amount of time is taken for the object to pass by, this may indicate a feature of interest. In contrast, if a relatively short amount of time is taken for the object to pass by, this may be indicative of passers-by who are not interested in the appearance of the building.
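Purely as an illustrative sketch, the following function computes the time taken for a tracked object to move from one sub-region to another, which also covers the dwell-time case described above. The bounding-box representation of sub-regions and the (timestamp, x, y) track format are assumptions made for this example, not details of the embodiment.

```python
def dwell_time(track, region_a, region_b):
    """Return seconds between the track's last timestamp inside region A and
    its first timestamp inside region B, or None if it never makes that move.
    A track is a list of (timestamp_s, x, y); regions are
    (x_min, x_max, y_min, y_max) bounding boxes."""
    def inside(x, y, r):
        return r[0] <= x <= r[1] and r[2] <= y <= r[3]

    last_in_a = first_in_b = None
    for t, x, y in track:
        if inside(x, y, region_a):
            last_in_a = t
        elif inside(x, y, region_b) and last_in_a is not None and first_in_b is None:
            first_in_b = t
    if last_in_a is None or first_in_b is None:
        return None
    return first_in_b - last_in_a

# A track lingering in a pavement region before entering a store-front region.
track = [(0.0, 1.0, 1.0), (4.0, 1.5, 1.0), (9.0, 3.5, 1.0)]
pavement = (0.0, 2.0, 0.0, 2.0)
store_front = (3.0, 5.0, 0.0, 2.0)
print(dwell_time(track, pavement, store_front))  # -> 5.0 seconds
```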
The radar device may comprise an Ultra-Wide Band (UWB) radar sensor. Use of a UWB sensor within the radar device provides an electronic counting device which is suitable for use in outdoor object counting applications as well as indoor object counting applications.
The radar device may transmit signals with a frequency of 60GHz. This is equivalent to signals with a wavelength of 5 millimetres. Millimetre wave radar technology transmits signals with a wavelength that is in the millimetre range which enables accurate counts to be obtained. Further advantages of millimetre wave radar technology include its relatively small component size.
The radar device may have a field of view from 0.5 metres to 16 metres. A field of view of this size enables a single radar device to obtain radar data for a wide region within a scene, such that the number of radar devices required can be minimised.
The camera and the radar device may be provided in a sensor unit, and the processor and the one or more communications engines may be provided in a base unit separate from the sensor unit, with the camera and the radar device communicating with the processor via a communications link. This arrangement is particularly useful, for example, where the base unit is required to be powered by a mains power supply and the sensor unit is to be placed in a remote location, e.g. on a window of a store front.
The communications link may be a wireless communications link. Advantageously, this enables the sensor unit to be in communication with the base unit without the use of wiring; therefore, the sensor unit and base unit are suitable for use in buildings comprising multiple rooms where it may be inconvenient to use wiring as a connection means.
The camera may be configured to capture samples of video over a period of time and to output the samples as the camera data. This enables the output camera data to comprise more manageable data files rather than a single video covering the whole period of time. Advantageously, capturing samples of video also reduces the impact of file corruption: if a single video covering the total period becomes corrupted, none of the camera data can be used, whereas if multiple samples of video are captured over the period, the likelihood of all samples being corrupted is low.
The electronic counting device may further comprise a scheduler to control supply of power to the camera to turn the camera on or off at predetermined times. This enables the power supply to the camera to be automated.
The electronic counting device may further comprise a further sensor unit including a further radar device and a further camera and wherein the further sensor unit is operatively coupled to the base unit via a further communications link. Advantageously, multiple sensor units may be coupled to the same base unit to produce a system for counting objects within multiple predefined regions in different locations.
The processor may be configured to synchronise the output radar data to the output camera data by providing the same device or location identifier in the output radar data and the output camera data. Synchronisation of the output radar data to the output camera data enables more efficient corroboration of the radar data and ensures that radar data and camera data acquired by the same device or at the same location are linked correctly.
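For illustration, pairing radar and camera records by a shared device identifier and a common acquisition window might look as follows; the dictionary record layout and the 60-second window are assumptions made for this example.

```python
def synchronise(radar_records, camera_records, window_s=60):
    """Pair radar and camera records that share a device ID and whose
    timestamps fall within the same acquisition window."""
    pairs = []
    for r in radar_records:
        for c in camera_records:
            same_device = r["device_id"] == c["device_id"]
            same_window = abs(r["timestamp"] - c["timestamp"]) < window_s
            if same_device and same_window:
                pairs.append((r, c))
    return pairs

radar = [{"device_id": "unit-7", "timestamp": 1000, "count": 12}]
camera = [{"device_id": "unit-7", "timestamp": 1030, "clip": "a.mp4"},
          {"device_id": "unit-9", "timestamp": 1010, "clip": "b.mp4"}]
print(synchronise(radar, camera))  # pairs only the unit-7 records
```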
The processor may be configured to optimise the output camera data for transmission to the remotely-located processing server. Advantageously, processing of camera data can be carried out by the processor of the base unit, e.g. to reduce the number of frames to a predetermined frame limit, to reduce the resolution of video data, or to transform the video data from colour to greyscale in order to minimise the volume of data being sent to the server.
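As a hedged sketch of the kind of optimisation described, the following function uses OpenCV to drop frames, downscale the remainder and convert them to greyscale before upload. The frame_step and scale values are illustrative only; the embodiment does not prescribe specific parameters.

```python
import cv2  # pip install opencv-python

def shrink_video(src_path, dst_path, frame_step=5, scale=0.5):
    """Reduce a clip's size before upload: keep every frame_step-th frame,
    downscale it, and convert it to greyscale."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unreadable
    writer = None
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_step == 0:
            frame = cv2.resize(frame, None, fx=scale, fy=scale)
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if writer is None:
                h, w = grey.shape
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                writer = cv2.VideoWriter(dst_path, fourcc, fps / frame_step,
                                         (w, h), isColor=False)
            writer.write(grey)
        index += 1
    cap.release()
    if writer is not None:
        writer.release()
```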
According to a second aspect of the present invention, there is provided a method for acquiring data for counting objects. The method comprises detecting and/or tracking one or more objects using a radar device over a predetermined acquisition time period to generate radar data, the one or more objects being in a scene within a predefined region associated with a field of view of the radar device, and outputting radar data indicative of the number of objects detected within the field of view. The method further comprises capturing an image or a video of the scene using a camera aligned with the field of view of the radar device over the predetermined acquisition time period to generate camera data, and outputting camera data regarding the field of view of the camera. The method further comprises receiving the output radar data and the output camera data at a processor which controls the operation of the camera and the radar device for data acquisition, determining a communications channel to be used to send the radar data and the camera data to a remotely-located processing server, and transmitting the radar data and the camera data to the remotely-located processing server via the communications channel, such that the camera data can be used to corroborate the radar data and to derive a count of objects within the predefined region for the predetermined acquisition time period.
Advantages of the second aspect are analogous to the advantages associated with the first aspect as discussed above.
According to a third aspect of the present invention, there is provided a processing server for counting objects within a predefined region, the predefined region being proximal to an electronic counting device located remotely from the processing server. The processing server is configured to receive, via a communications channel, radar data and camera data generated by a radar device and a camera of the counting device, the radar data and camera data relating to objects within a field of view and sensed by the radar device and the camera over a predetermined acquisition time period. The processing server is further configured to use the camera data to corroborate the radar data for the predetermined acquisition time period. The processing server is further configured to derive a count of objects proximal to the electronic counting device within the predefined region for the predetermined acquisition time period, using the corroborated radar data.
Advantages of the third aspect are analogous to the advantages associated with the first and second aspects as discussed above.
The processing server may be configured to obtain manual count data using the camera data, and to determine calibrated count data based on a comparison of the manual count data and the radar count data; the calibrated count data being derived using a line of best fit between the manual count data and the radar count data. The processing server may be further configured to store the calibrated count data in a data store associated with the processing server.
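A minimal sketch of such a calibration, assuming an ordinary least-squares fit (NumPy's degree-one polynomial fit) is an acceptable way to derive the line of best fit; the counts below are invented for the example.

```python
import numpy as np

def fit_calibration(radar_counts, manual_counts):
    """Fit calibrated = a * radar + b by least squares; polyfit with deg=1
    is an ordinary line of best fit."""
    a, b = np.polyfit(radar_counts, manual_counts, deg=1)
    return a, b

def calibrate(radar_counts, a, b):
    """Apply the fitted line, clamping at zero since counts cannot be negative."""
    return np.maximum(a * np.asarray(radar_counts) + b, 0)

# Toy data: the radar systematically undercounts dense periods.
radar = [5, 10, 20, 40, 60]
manual = [5, 12, 26, 55, 85]
a, b = fit_calibration(radar, manual)
print(calibrate([30], a, b))  # roughly [41.], closer to a manual count
```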
The processing server may be configured to obtain manual count data using the camera data, and determine calibrated count data based on applying a calibration machine learning model to the radar count data. The processing server may be further configured to store the calibrated count data in a data store associated with the processing server.
The processing server may be configured to aggregate the calibrated count data for a predefined aggregation time period and store the aggregated count data to the data store.
The processing server may be configured to detect one or more missing data values in the aggregated count data and impute the one or more missing data values based on available radar count data and camera data.
The processing server may be configured to use a previously stored data model of radar data and associated camera data in different conditions to impute one or more missing data values in the aggregated count data.
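The embodiment contemplates model-based imputation from previously stored data models. As a simple stand-in that illustrates the idea, the sketch below fills missing hourly aggregates by time-weighted linear interpolation using pandas; the values are invented for the example.

```python
import pandas as pd

# Hourly aggregated counts with a gap (e.g. the device was briefly offline).
counts = pd.Series(
    [132, 150, None, None, 141, 120],
    index=pd.date_range("2020-01-06 09:00", periods=6, freq="h"),
)

# Time-weighted linear interpolation fills the missing hours from their
# neighbours: 11:00 -> 147.0 and 12:00 -> 144.0 here.
imputed = counts.interpolate(method="time")
print(imputed)
```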
The previously stored data model may comprise a plurality of different neural network data models, each neural network data model comprising radar and camera data values aggregated over a different time period.
The processing server may be configured to repeat the aggregating and imputing steps for two or more predefined aggregation time periods using a recurring batch process.
The processing server may further comprise a data model. The processing server may be configured to create the data model based on linking together radar data and camera data acquired from a plurality of counting devices at different geographic locations.
Advantageously, the camera data can be used to improve the precision of radar device counts by training a machine learning model. This method is particularly useful for counting objects above a threshold density for a given location and for counting objects in groups.
The radar data used for creating the data model may comprise blobs of radar data, wherein a blob represents a group of objects, and the data model may comprise data linking different shapes and/or sizes of blobs of radar data to manual counts of objects obtained using the camera data. This enables improved radar count precision of groups to be achieved. In addition, this feature enables reduction in radar count error, thereby improving the accuracy of counts obtained.
The processing server may be configured to create the data model by determining an object count threshold below which the radar count data is used to produce the count of objects, and above which the radar data linked to manual counts obtained from the camera data is used to produce the count of objects. Using this method is particularly important when counts are required for at least the threshold density of objects, because at densities above the threshold the counts achieved by conventional UWB radar sensors are imprecise.
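Illustratively, the selection between the two count sources reduces to a threshold rule such as the following; the threshold value and the function names are assumptions made for this example.

```python
def choose_count(radar_count, blob_model_count, threshold=15):
    """Below the threshold density, trust the direct radar count; above it,
    use the count predicted by the blob-to-manual-count model."""
    if radar_count <= threshold:
        return radar_count
    return blob_model_count

print(choose_count(radar_count=8, blob_model_count=9))    # sparse scene -> 8
print(choose_count(radar_count=22, blob_model_count=31))  # dense scene -> 31
```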
According to a fourth aspect of the present invention, there is provided a method for counting objects within a predefined region, the predefined region being proximal to an electronic counting device. The method comprises receiving, via a communications network, radar and camera data generated by a radar device and a camera of the electronic counting device, the radar and camera data relating to objects within a field of view and sensed by the radar device and the camera over a predetermined acquisition time period. The method further comprises using the camera data to corroborate the radar count data over a predetermined acquisition time period, and deriving a count of objects proximal to the electronic counting device within the predefined region for the predetermined acquisition time period, using the corroborated radar data.
Advantages of the fourth aspect are analogous to the advantages associated with the first, second and third aspects as discussed above.
According to a fifth aspect of the present invention, there is provided a system for counting objects within one or more predefined regions. The system comprises one or more electronic counting devices according to the first aspect as described above, the one or more predefined regions being proximal to their respective one or more electronic counting devices. The system further comprises a processing server according to the third aspect as described above, wherein the one or more electronic counting devices are configured to transmit the radar data and the camera data to the processing server via a communications channel.
The system of the fifth aspect provides a mechanism to connect one or more electronic counting devices and a processing server, such that objects in multiple locations can be detected and/or tracked using the same infrastructure.
The processing server may comprise a container-based platform for remote management of the one or more electronic counting devices. Using a container-based platform enables the electronic counting devices to be managed efficiently and effectively.
According to a sixth aspect of the present invention, there is provided a method of determining an object count within one or more predefined areas. The method comprises a method for acquiring data for counting objects according to the second aspect as described above and a method of counting objects according to the fourth aspect as described above.
Advantages of the sixth aspect are analogous to the advantages associated with the second and fourth aspects as discussed above.
According to a seventh aspect of the present invention, there is provided a system for counting objects. The system comprises an electronic counting device for counting objects within a predefined region proximal to the counting device. The counting device comprises a radar sensor for detecting and/or tracking one or more objects in a scene within the predefined region and generating radar data for a predetermined acquisition time period. The counting device further comprises a communications engine operatively coupled to the radar sensor and configured to communicate the radar data to a remote location via a communications network. The system further comprises a processing server provided at the remote location and operatively coupled to the communications network. The processing server comprises a processor configured to receive radar data from the radar device and to derive a radar count from the radar data for the predetermined acquisition time period. The processing server further comprises a machine learning model based on links between radar count data associated with a predetermined acquisition time period and location and camera data associated with the same predetermined acquisition time period and same location. The processor is configured to use the camera data from the machine learning model to determine the object count for the same location and the same time period when the radar data indicates the object count to be greater than a predetermined threshold.
This system is advantageous as it does not necessarily require a camera to be included within the system for the purpose of training the machine learning model. The camera data can be acquired separately from the radar data and used to improve precision of the radar device counts. This method is particularly useful for counting objects above a threshold density for a given location and for counting objects in groups.
Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will now be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 is a diagram showing a known process of producing a people count for a group of people (relatively low people density), using Ultra-Wide Band (UWB) radar sensor technology; Figure 2 is a schematic block diagram showing a people counting device, in accordance with a first embodiment, installed at a store front and a system for wirelessly communicating data generated by the people counting device to a remote server; Figure 3 is a schematic block diagram showing a plan view of the people counting device shown in Figure 2; Figure 4 is a schematic block diagram showing the hardware components of the people counting device shown in Figures 2 and 3; Figure 5 is a schematic block diagram showing the software components of the people counting device shown in Figures 2 and 3; Figure 6 is a schematic block diagram showing the data flow architecture from the people counting device to the remote server shown in Figure 2; Figure 7 is a schematic block diagram showing the data processing architecture of the remote server shown in Figure 2; Figure 8 is a schematic block diagram showing an overview of a recurring batch process which is implemented within the data processing architecture shown in Figure 7; Figure 9 is a diagram showing a process of producing a people count, using UWB radar sensor technology, whereby three people are moving individually; Figure 10 is a diagram showing a process of producing a people count, using UWB radar sensor technology, whereby three people are moving within a group; Figure 11 is a flowchart showing the process by which a machine learning model is trained, in accordance with a second embodiment, for producing a people count where people are moving within groups; Figure 12 is a diagram showing a process of producing a people count, using UWB radar sensor technology, whereby many people are moving within a large and highly dense group; Figure 13 is a graph showing a comparison between people counts obtained using a UWB radar sensor and people counts obtained using a manual count; Figure 14 is a flowchart showing the process by which a machine learning model is trained, in accordance with a third embodiment, for producing a people count for large groups with a high people density; Figure 15 is a collection of calibration scatter plots showing calibration techniques for evaluating the machine learning model trained as per Figure 14, and histograms showing distribution of error in calibrated counts, for a first location; Figure 16 is a collection of calibration scatter plots showing calibration techniques for evaluating the machine learning model trained as per Figure 14, and histograms showing distribution of error in calibrated counts, for a second location; Figure 17A is a flowchart showing the process by which the machine learning model trained as per Figure 14, and evaluated as per Figures 15 and 16, is applied to obtain footfall counts; Figure 17B is a flowchart showing the process by which the calibration algorithm and accuracy testing, shown in Figure 17A, are applied; Figure 18 is a schematic block diagram showing three people counting devices, in accordance with a fourth embodiment, installed at store fronts and a system for wirelessly communicating data generated by the people counting devices to a remote server; Figure 19 is a schematic block diagram showing the hardware components of a people counting device, in accordance with a fifth embodiment;
Figure 20 is a schematic block diagram showing a people counting device, in accordance with a sixth embodiment, installed at a store front and a system for wirelessly communicating data generated by the people counting device to a remote server; Figure 21 is a schematic block diagram showing a plan view of the people counting device shown in Figure 20; Figure 22 is a plot of tracks obtained using the people counting device shown in Figure 20; Figure 23 is a schematic block diagram showing two people counting devices, as a further example of the sixth embodiment, installed at a store front and a system for wirelessly communicating data generated by the people counting device to a remote server; Figure 24 is a schematic block diagram showing a plan view of the people counting devices shown in Figure 23; Figure 25 is a schematic block diagram showing a people counting device, in accordance with a seventh embodiment, installed at a store front and a system for wirelessly communicating data generated by the people counting device to a remote server; Figure 26 is a schematic block diagram showing a plan view of the people counting device shown in Figure 25; Figure 27 is a schematic diagram showing an exemplary application of the people counting device in a retail store environment; and Figure 28 is a schematic diagram showing an exemplary application of the people counting device in an office environment.
DETAILED DESCRIPTION
In accordance with a first embodiment, a new and improved people counting device, hereafter referred to as the 'people counting device', is provided. A method by which the device counts people is also provided along with the system and cloud computing architecture. The people counting device is described below with reference to Figures 1 to 5, and the cloud computing architecture and associated methods are described below with reference to Figures 6 to 8.
The process of producing a people count for a group of people, whereby the group has a relatively low people density, using UWB radar sensor technology is illustrated schematically in Figure 1. A set of points in space - a point cloud - is produced by measuring several points on the external surfaces of people detected by a radar sensor. The radar sensor data is acquired and stored over a time period in the form of point clouds, wherein each point 102 in a point cloud frame 104 represents the external surface of a single person. Seven people are represented by the initial point cloud frame 104 of Figure 1. Points 102 for each person at a first time interval and a second time interval are represented by a first target point cloud frame 106 and a second target point cloud frame 108, respectively. The sets of points 102 are then converted into tracks 110 that represent movement of a person over a specified time period. To obtain a people count, the tracks 110 are then converted into counts 112 by counting the number of tracks 110 in the frame. For example, the counts 112 may be obtained by counting the number of tracks 110 that cross an imaginary line in the field of view. However, currently available prior art methods suffer from a lack of precision and thus produce inaccurate counts.
Turning to Figure 2, the people counting device comprises a sensor hardware unit 114A and a base hardware unit 114B. The sensor hardware unit 114A is installed at a store front 116 in close proximity to the store entrance 118. Figures 2 and 3 show the store front 116 in front view and plan view, respectively. The sensor hardware unit 114A comprises a UWB radar sensor. The field of view 120 of the UWB radar sensor, which may be defined as the region in which a detector is sensitive to receiving reflected electromagnetic waves transmitted by the UWB radar sensor, is also shown. The field of view is typically measured by a horizontal angle, e.g. 120 degrees, and a vertical angle, e.g. 30 degrees. The field of view may be adjusted in accordance with the requirements of the people counting device. In the present embodiment, the field of view 120 covers the region directly in front of the store entrance 118 as well as the area on either side of the entrance, such that people passing by the store front can be counted by the device. In some embodiments, the field of view may be adjusted to extend the maximum range of the sensor to the pavement edge 122 such that the total number of people passing by the store front, regardless of their proximity to the store entrance, may be counted.
As depicted in Figure 2, the base hardware unit 114B is wirelessly connectable to a remote server 124 via a wide-area communications network 126. A database 128 is provided with the server 124 to store data and/or information. In some embodiments, the people counting device is wirelessly connectable to multiple remote servers 124 and/or databases 128.
Hardware components
The hardware components of the people counting device 114 will now be described in further detail with reference to Figure 4. The people counting device 114 comprises at least a radar device 130, a camera 132, a processor 134, and one or more external communications interfaces 136. In the present embodiment, the radar device 130 and the camera 132 are contained within a sensor hardware unit 114A, and the processor 134 and the external communications interfaces 136 are contained within a base hardware unit 114B. The combination of the processor 134 and the external communications interfaces 136 is sometimes herein referred to as a 'base unit' or 'base station' 114B of the people counting device. Separation of the hardware elements into their respective hardware units 114A, 114B allows the sensor hardware unit 114A to be lightweight and therefore more suitably placed at the store front 116, whereas the base unit 114B can be placed in the store in a position suitable for communicating with the sensor hardware unit 114A via suitable communication means, e.g. a Wi-Fi connection or Power over Ethernet (POE) cable. In some embodiments, the people counting device 114 may comprise the radar device 130, the camera 132, the processor 134 and the external communications interfaces 136 all within a single hardware unit, whereby the single hardware unit is positioned at the store front.
Each of the hardware components within the hardware units 114A, 114B may include further hardware elements which are discussed below. It is to be understood that the above list of hardware components is by no means limiting and that further hardware components may be included. As such, the hardware components may also include hardware elements other than those discussed below.
Radar device
The radar device 130 implements millimetre wave radar technology that transmits signals with a wavelength that is in the millimetre range, whereby the signals are considered to be short-wavelength electromagnetic waves. In the present embodiment, the radar device 130 transmits signals with a wavelength of 5 millimetres, which is equivalent to a frequency of 60 Gigahertz (GHz). The signals are reflected by objects in their path, and the reflected signals are then captured by the radar device. In the present embodiment, the radar device has a maximum coverage of 16 metres and a minimum coverage of 0.5 metres.
The radar device 130 comprises radio frequency components 138 including transmitting and receiving antennas, and a mixer for combining transmitted and received signals to produce an intermediate frequency signal. The radar device further comprises analog processing components 140, such as an analog-to-digital (A to D) converter (not shown), which are configured to receive and initially process the intermediate analog frequency signal. The radar device further comprises digital signal processing components 142, such as digital signal processors and a hardware accelerator, which are configured to process a digital signal that is received from the analog processing components 140, such as the A to D converter. The digital signal processing components 142 have a timestamping function to ensure a timestamp reference is included in sensed radar data. These components 142 are configured to output the timestamped radar data to the processor 134 of the people counting device 114. The timestamped radar data makes it possible to know the data acquisition period to which the radar data relates. A non-limiting example of the radar device 130 used in the present embodiment is the IWR6843 mmWave radar sensor by Texas Instruments (http://www.ti.com/lit/ug/tidue71c/tidue71c.pdf).
The radar data is acquired and stored over a specified time period in the form of point clouds, wherein each point represents the external surface of a person. The sets of corresponding points in different frames are then converted into tracks that represent movement of a person over the time period, before being outputted to the processor 134. In some embodiments, the radar device 130 can be calibrated to count moving objects within a certain range, e.g. to cover only a defined region, so as to focus the count on objects passing by on the pavement rather than on a road. This is because the field of view and the range of the radar device can be accurately determined during its set up, or a line can be defined in the field of view of the sensor through which a track has to pass in order to be counted. For data collected over a given acquisition time period, a timestamp is applied to the data and also output to the processor.
Camera
The camera 132 comprises an image sensor 144 for capturing an image or frames of video. In use, the captured image or frames of video are then sent to an image processor 146 for processing of the image or frames of video and adjustment of image parameters such as brightness or contrast. The image processor 146 is configured to output the processed image or video data to the processor 134 of the people counting device 114. Each image or set of frames of video has a timestamp applied to it (by the image processor 146) which is also output to the processor. These timestamps (not shown) help to provide a global frame of reference for synchronising the camera (image) data to the radar data such that radar data and camera data for the same data acquisition time period can be determined.
Processor and external communications interfaces
The sensor hardware unit 114A - namely, the radar device 130 and the camera 132 - is connectable to the processor 134 of the base unit 114B of the people counting device 114 by way of a wired connection such as POE or USB, or a wireless connection such as Wi-Fi or Bluetooth. It is to be understood that multiple sensor hardware units 114A may be connected to a single base unit such that they form part of the same people counting device 114. The processor 134 may be comprised within a single-board computer, such as a Raspberry Pi model (e.g. Raspberry Pi 3 Model A+/B+/B or Raspberry Pi 2 Model A+/B+/B or Raspberry Pi Zero/Zero W). The processor 134 is equipped with one or more external communications interfaces 136 which enable the processor 134 to communicate, via the communications network 126, with an external server 124 using one of several different communications channels. By way of example only, the communication channels may comprise one or more of: a wireless internet Wi-Fi connection, a wireless telecommunications connection (e.g. via a 3G or 4G telecommunications network), a wired Ethernet connection, a wired USB connection, a wired mini-USB connection and a wired POE connection. Advantageously, the capability of the processor 134 to communicate with the remote server 124 enables data to be accessed and retrieved therefrom.
Software components
Software components of the base unit 114B of the people counting device 114 will now be described with reference to Figure 5.
As can be seen from Figure 5, the software components comprise a radar capture software component 150 configured to receive timestamped radar data as input, whereby the radar data has been outputted by the signal processing components 142 of the radar device 130 in the form of tracks which represent movement of people. The radar data can be forwarded to a radar processing component 152 for further processing. At this stage, a people count is derived from the number of tracks produced, where each track represents movement of a single person. The counts acquired over a time period of, for example, 1 minute may be aggregated into an aggregate people count. These aggregated people counts are indexed with the received timestamp data such that they can be readily compared to the received timestamped camera data. Once further radar processing has been completed, the aggregate people count acquired from the radar data - i.e. the radar count for a predetermined data acquisition period - is sent to a data send component 154 for transmitting to the external server 124 via the communications network 126. The data send component 154 is responsible for managing the external communications interfaces, e.g. 3G/4G connections, Wi-Fi, and POE connections.
Similarly, there is provided video capture software component 156 configured to receive timestamped camera data as input, whereby the camera data has been outputted by the image processor 146 of the camera 132. The camera data may comprise video files comprising video data for a predetermined period of time, e.g. camera data could be collected over 24 hours in samples of 5 minutes or 10 minutes, for each day of the week. Each sample has previously been timestamped by the camera 132 such that it becomes easier to synchronise the camera data with the radar data for the predetermined data acquisition time period at a later processing stage, for example by the processor 134 or by the external server 124. This sampled data can provide an efficient way of gathering video data for relatively easy comparison with radar data counts.
The camera data is sent to the data send component 154 for transmitting to the external server 124 via one of the specific channels of the communications network 126. A people count may be derived from the camera data by counting the number of people in the captured video data over time. The counts acquired over a time period of, for example, 5 minutes may be converted into intervals which are directly comparable to the 1 minute aggregate radar people count described above. The video capture software component 156 may also be configured to restart the camera 132 if required. Processing of camera data can also be carried out by the base unit, for example to reduce the number of frames to a predetermined frame limit, to reduce the resolution of video data, and to transform the video data from colour to greyscale in order to minimise the volume of data being sent to the external server 124.
There is also provided a command component 158 to retrieve data from sensor services and send it to the data send component 154 for transmitting to the external server 124 via the communications network 126. There is also provided a pairing component 160 for pairing the base unit 114B to the camera 132 and radar device 130 of the sensor hardware unit 114A. Pairing is typically achieved wirelessly, for example using Wi-Fi or Bluetooth connectivity. There is also provided a POE interface 162 enabling connection of the base unit 114B to the camera 132 and radar device 130 of the sensor hardware unit 114A, for example for the purpose of powering the sensor hardware unit 114A via the base unit 114B or when existing Wi-Fi connectivity is poor. In some embodiments, a USB interface enabling wired connection of the base unit 114B to the camera 132 and radar device 130 of the sensor hardware unit 114A may also be provided.
There is further provided monitoring and resilience components 164. The monitoring component 164 is configured to monitor the health of other components in the system. It may also be configured to monitor system resources and performance of the people counting device. The resilience component 164 is responsible for recovering components in event of failure as well as tracking the device's ability to recover from system failures. In addition, the monitoring and resilience components 164 may be responsible for managing error logging.
There is further provided a scheduler component 166 configured to assign and schedule hardware resources of the people counting device to processes that need to be carried out. The scheduler component 166 is also configured to manage the power options (on/off) of the camera. A networking backhaul component 168 is also provided which is configured to transport data from the people counting device 114 to the communications network 126 using a specific communications channel.
The networking backhaul component 168 may be configured to transmit mobile voice and data services using digital cellular technology, such as GSM (Global System for Mobile Communications).
The software components described above may be implemented using any suitable programming language or environment, e.g. JavaScript, Node.js, Python, Microsoft TypeScript. Container technology is used to package the software components such that they can be run in an isolated manner to other processes. Using this method, the software components are packaged into standardised units, also known as containers. Each container comprises code and all its dependencies, including runtime, system tools, system libraries and configuration files.
Advantageously, containers are portable and efficient, and multiple containers can be run on the same processor. In the present embodiment, Docker containers are used. However, in some embodiments, alternative containers, e.g. Solaris containers or Microsoft containers, may be used.
An advantage of the above-described hardware and software components of the improved people counting device is that better communication with external communication networks is enabled. This in turn enables cloud-based UWB radar data improvements and data aggregation to be carried out, the final product being an SQL database comprising accurate people counts at a given location over a specified time period. The data flow and processing architecture, from collection of data and its processing at the people counting device 114 to the final result output, will now be described with reference to Figures 6 to 8.
Data flow and processing architecture
Referring to Figure 6, a container-based platform for deploying Internet-of-Things (IoT) applications is used to facilitate remote management of devices via the communications network. The container-based platform is provided at the server 124. One or more people counting devices 114 may be managed by the container-based platform. By way of a non-limiting example, BalenaCloud is used as the container-based platform 170 in the present embodiment. BalenaCloud can be used to check the status of the one or more people counting devices 114, to set up a configuration on the device(s) 114 or to receive data from the device(s) 114.
Data upload

In the present embodiment, by way of example only, a single people counting device 114 is in operative communication with the container-based platform 170. It should be noted that multiple people counting devices 114 may be connected to the container-based platform 170 via the communications network 126. The data flow begins by a file upload request 172 being submitted from the people counting device 114 to the container-based platform 170. Data 174 to be uploaded is sent to the data send module 154 of the base unit 114B of the people counting device 114 and is then transmitted onto the container-based platform 170 via the communications network 126, following transmission of the file upload request 172. Data 174 to be uploaded may comprise people counts (radar counts) data, debug data, and/or video data captured by the camera 132. The counts data may further comprise counts that have been obtained by counting people movement from left to right (left-to-right counts) and/or by counting from right to left (right-to-left counts), both within the field of view of the radar device 130. Once the file has been uploaded to the container-based platform 170, a file upload notification 176 is produced and sent to a web application or an Application (App) running on a mobile device (not shown) which provides a view of the status and results of the people counting device 114.
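By way of illustration only, the upload handshake might be sketched as follows in Python; the endpoint paths, payload fields and PLATFORM_URL are hypothetical, as the actual interface between the data send module 154 and the container-based platform 170 is not specified here.

```python
# Illustrative upload handshake; endpoint paths, payload fields and
# PLATFORM_URL are hypothetical placeholders, not the actual interface.
import requests

PLATFORM_URL = "https://platform.example.com"

def upload_counts(device_id, counts_path):
    # Step 1: submit the file upload request 172 for this device
    resp = requests.post(
        f"{PLATFORM_URL}/devices/{device_id}/upload-request",
        json={"filename": counts_path},
    )
    resp.raise_for_status()
    # Step 2: send the data 174 (e.g. left-to-right / right-to-left counts)
    with open(counts_path, "rb") as f:
        requests.post(
            f"{PLATFORM_URL}/devices/{device_id}/upload",
            files={"data": f},
        ).raise_for_status()
```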
The file is then uploaded 178 from the container-based platform 170 to a cloud computing data storage service for building, deploying and managing applications and services. By way of a non-limiting example, Microsoft Azure is used as the cloud computing service in the present embodiment, whereby the cloud computing service comprises the container-based platform 170 and the data storage service. The file comprising raw data from the people counting device 114 is stored on Azure.
The raw data is stored in object storage 180, such as Azure Blob storage, which is optimised for storing large amounts of unstructured data such as text data or binary data.
Processing of data stored in server

The workflow showing the processing of the raw data received from the people counting device 114 and stored in object storage 180 of the remote cloud computing storage service will now be described with reference to Figure 7. The first stage of processing involves data movement and deserialization 182. Firstly, a copy of the raw data can be stored as archive data 184 in the form of text data or binary data. A further copy of the raw data undergoes deserialization, which is a process that unpacks the raw data and groups the data based on the device, date and time, left-to-right count and right-to-left count. Once the data has been grouped, it undergoes a normalisation process and the normalised data 186 is then stored in a common scale and form, again within the cloud computing storage service.
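A minimal sketch of the deserialization and normalisation steps is given below; the raw record format and field names are assumptions for illustration only, not the actual wire format.

```python
# Illustrative deserialization/normalisation; record format and field
# names (device_id, timestamp, ltr_count, rtl_count) are assumptions.
import json
from collections import defaultdict

def deserialize(raw_lines):
    """Unpack raw records, grouping by device and date/time."""
    grouped = defaultdict(list)
    for line in raw_lines:
        rec = json.loads(line)
        grouped[(rec["device_id"], rec["timestamp"])].append(rec)
    return grouped

def normalise(grouped):
    """Reduce each group to a common scale and form: one row per key."""
    return [
        {
            "device_id": device_id,
            "timestamp": timestamp,
            "left_to_right": sum(r.get("ltr_count", 0) for r in recs),
            "right_to_left": sum(r.get("rtl_count", 0) for r in recs),
        }
        for (device_id, timestamp), recs in grouped.items()
    ]
```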
The second stage of processing involves a data transformation process 188. During this process the normalised data 186 is loaded in files into a database, such as Azure Cosmos DB, by a process known as tabularization. Loading data into the database, where this data is referred to as harmonization data 190, allows SQL queries to be run on the data to access, filter or interrogate the data in general. This can be used for technical debugging, reporting or testing.
Advantageously, all counts for a given location and time (using the timestamp data) are stored in the database such that if the same data types with different values are uploaded from the people counting device 114, all the versions of that data are stored. Subsequently, different versions of the data can be analysed. For example, it may be desirable to analyse the latest data only. Storing harmonization data 190 in a database is beneficial because it allows an SQL API (Structured Query Language Application Programming Interface) to query the data, which would otherwise be difficult to access where the data is stored within files. Furthermore, having all the versions of the data for a given combination of location and date/time is beneficial for data analysis. Harmonization data 190 combines information from both the filename and the contents of the file (e.g. date/time, Sensor ID).
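As an illustration of the kind of query the SQL API enables, the sketch below selects only the latest version of the counts for one sensor and time window; the table and column names are hypothetical, not the actual schema.

```python
# Hypothetical SQL query over the harmonization data; table and column
# names are illustrative only. Selects the latest version per timestamp.
LATEST_COUNTS_QUERY = """
SELECT h.sensor_id, h.timestamp, h.left_to_right, h.right_to_left
FROM harmonization h
WHERE h.sensor_id = @sensor_id
  AND h.timestamp BETWEEN @start AND @end
  AND h.version = (SELECT MAX(version)
                   FROM harmonization
                   WHERE sensor_id = h.sensor_id
                     AND timestamp = h.timestamp)
ORDER BY h.timestamp
"""
```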
Next, calibration of the harmonization data 190 takes place. The process of calibration is based on comparing camera counts derived from camera data of the people counting device 114 and radar counts (left-to-right and right-to-left) derived from radar device data of the people counting device 114, and applying the equation which best fits the count values. In some embodiments, a calibration ML model may be applied to the camera data and the radar data to achieve calibrated counts. At this stage, irrelevant fields can be dropped. The resulting calibrated data is stored in the database as master data 192.
The next step of the data transformation process 188 is aggregation and imputation of the master data 192. In aggregation, the master data 192 is read, searched, and aggregated for a predetermined time period (e.g. 5 minutes). If one or more data values are considered to be missing, the missing data can be replaced, i.e. imputed, with substituted values, in order to improve handling of the data. This process is known as imputation. In some embodiments, an ML model may be applied for imputation. Such an imputation model is discussed in further detail below.
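A minimal aggregation and imputation sketch is given below, assuming pandas-style data; the column names and the imputation-model interface are assumptions for illustration.

```python
# Minimal aggregation/imputation sketch; the 5-minute window matches the
# example above, but column names and predict_fn are assumptions.
import pandas as pd

def aggregate_counts(master: pd.DataFrame, freq="5min") -> pd.DataFrame:
    """Aggregate master data into fixed windows; empty windows become NaN."""
    return (master.set_index("timestamp")
                  .resample(freq)[["left_to_right", "right_to_left"]]
                  .sum(min_count=1))  # min_count=1 keeps empty windows NaN

def impute_missing(agg: pd.DataFrame, predict_fn) -> pd.DataFrame:
    """Replace missing windows with model predictions (imputation)."""
    for col in agg.columns:
        missing = agg[col].isna()
        if missing.any():
            # predict_fn stands in for the ML imputation model below
            agg.loc[missing, col] = predict_fn(agg[col], missing)
    return agg
```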
The imputation model is implemented using neural networks. The model predicts values of missing data and replaces the missing data with the imputed data. The imputation model can also be used to detect anomalies in data as well as forecasting of footfall counts.
Data can be missing on timescales that vary from 5 minutes, to several hours to several weeks. In the present embodiment, imputation models to impute people counts aggregated for predetermined time intervals, e.g. in 5-minute (Model 1), hourly (Model 2), and daily (Model 3) clusters, are used.
Models 1, 2 and 3 are Long Short-Term Memory (LSTM) Recurrent Neural Networks, containing 2 LSTM layers, and are trained with the objective to predict a people count given the previous N people counts (whereby N varies across the three models). For training Model 1 (5-minute imputation model), about 6 weeks of counts are used, and predictions for a 5-minute count are based on the last 3 hours of predictions. For training Model 2 (hourly imputation model), about 6 weeks of counts are used, and predictions for an hourly count are based on the last 24 hours of predictions. For training Model 3 (daily imputation model), about 26 to 52 weeks of counts are used, and predictions for a daily count are based on the last 14 days of predictions.
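A sketch of one such model in Keras is shown below; the two-LSTM-layer structure and the lag lengths follow the description above, while the layer widths, optimiser and loss settings are assumptions not specified in the text.

```python
# Sketch of an imputation model, assuming Keras; layer widths and the
# optimiser/loss choices are assumptions not given in the text.
from tensorflow.keras import layers, models

def build_imputation_model(n_lags: int):
    """Predict the next people count from the previous n_lags counts."""
    model = models.Sequential([
        layers.LSTM(64, return_sequences=True, input_shape=(n_lags, 1)),
        layers.LSTM(32),       # second LSTM layer
        layers.Dense(1),       # predicted people count
    ])
    model.compile(optimizer="adam", loss="mean_absolute_error")
    return model

# Model 1 (5-minute): last 3 hours of 5-minute counts -> N = 36 lags
# Model 2 (hourly):   last 24 hours of hourly counts  -> N = 24 lags
# Model 3 (daily):    last 14 days of daily counts    -> N = 14 lags
model_1, model_2, model_3 = (build_imputation_model(n) for n in (36, 24, 14))
```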
For different missing data gap durations (e.g. 5 minutes, 1 hour, 1 day, 1 week), the imputation model which minimises the error in imputation is selected. Using Model 1 (5-minute imputation model), the mean absolute imputation error is minimised for a missing data gap duration of 1 hour. Using Model 2 (hourly imputation model), the mean absolute imputation error is minimised for a missing data gap duration of 1 day. Using Model 3 (daily imputation model), the mean absolute imputation error is minimised for a missing data gap duration of 1 week. Therefore, for missing data gap durations between 5 minutes and 6 hours, between 6 hours and 3 days and between 3 days and 3 weeks, Model 1, Model 2 and Model 3 are used, respectively. The present method for imputing missing data values enables the imputation absolute error to be minimised to between 2% and 10%.
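The gap-duration-based model selection might be implemented as in the following sketch, using the thresholds given above; the models are assumed to be built as in the previous sketch.

```python
# Gap-duration-based model selection, following the thresholds above.
def select_imputation_model(gap_hours: float, model_5min, model_hourly, model_daily):
    """Return the imputation model that minimises error for this gap."""
    if gap_hours <= 6:        # 5 minutes up to 6 hours
        return model_5min     # Model 1
    elif gap_hours <= 72:     # 6 hours up to 3 days
        return model_hourly   # Model 2
    else:                     # 3 days up to 3 weeks
        return model_daily    # Model 3
```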
The aggregation and imputation processes are repeated for many different aggregation levels.
Namely, if the counts data is required to be aggregated for three different time periods (e.g. 5 minutes, 1 hour, 1 week), the aggregation and imputation processes are repeated for each of the three time periods. The results of aggregating the counts for the different time periods (aggregated data) is then stored in the database as reporting data 194. These results may be presented in a report-based summarised format as required.
The above-described calibration, aggregation and imputation processes form part of a batch process which recurs at a predetermined time interval. For example, if the radar counts and camera counts are uploaded every hour, the batch process may occur daily such that aggregated and imputed counts are produced once a day for each specified aggregate time period. In some embodiments, the batch process may occur as soon as all the required counts data for a given aggregate time period becomes available. For example, if the required aggregate for the counts data is 1 week, as soon as the counts data for a week has been recorded and uploaded, the batch process is started. The batch process may also be repeated one or more times if required in the event of late arrival of counts data from the people counting device 114.
Automatic execution and scheduling of the batch process, also known as orchestration, is achieved using a data integration service or orchestrator 196, such as Azure Data Factory. Data Factory is another Microsoft Azure service in the Cloud that allows users to create data pipelines via a graphical user interface. The orchestrator 196 is responsible for kicking off a data pipeline which executes a recurring batch process at a scheduled time. The scheduling of the batch process may vary depending on factors such as location. For example, the batch process may recur for a specified time period and switch to an on-demand processing of counts data. As well as scheduling of tasks, the orchestrator 196 allows users to execute data pipelines manually. From this point, the status of the system will be controlled and monitored. The batch processes can be expanded to include diagnostics and to check queries to monitor the system. Lastly, the data is released into production in an online SQL database 198, which can then be queried to access the data.
Recurring batch process An overview of the batch process described in the processing workflow (Figure 7) above is provided in Figure 8.
The orchestrator 196 starts the data pipeline which executes the recurring batch process at a scheduled time and schedules tasks within the process. The transfer of counts data, debug data and video data, from storage as normalisation data 186 to the data transformation loop is shown. A machine learning service 197, such as Azure Machine Learning (ML) Service, is used to deploy the calibration ML algorithm and the aggregation/imputation ML model. Within the aggregation loop 199, an analytics service, such as Azure Databricks, is used. Computed aggregates are passed onto the SQL DB, whilst lower-level aggregates are passed from the SQL database to the analytics service.
Once the models have been run on the data and the scheduled time for release is reached, the data is released into production in an online SQL database 198.
In the present embodiment, manual counts based on camera data may also be used to adjust counts if the device is counting passers-by inaccurately. For example, if the device is undercounting by 10% over a day in a particular location, the count can be adjusted based on the manual counts. This technique works particularly well when the imaging conditions for the camera are not compromised by the environmental conditions. The improved device and method described above therefore allows more reliable and more accurate counting of passers-by to be obtained.
A second embodiment of the people counting device and associated methods will now be described with reference to Figures 9 to 11. The second embodiment includes some of the features of the first embodiment. To avoid unnecessary repetition, only the differences between the first and second embodiments are discussed below.
The present embodiment seeks to improve the precision of the people count produced by the radar people counting device 114 for locations with a low-medium density of people. More specifically, radar count precision and stability is further improved by the present embodiment for groups of people walking together. A method of using the camera 132 of the people counting device 114 to improve the precision of the counts of people by training an ML model is described as follows, in accordance with the second embodiment. It should be understood that the method of the second embodiment may also be used to improve the precision of footfall counts produced by the people counting device 114 for locations where the density of people passing by is greater than a threshold density for that particular location. The threshold density may be different for different locations. For example, some locations may have a relatively low threshold density, e.g. 50 people passing by per hour. Other locations may have a relatively high threshold density, e.g. 7,000 people passing by per hour. Furthermore, the present embodiment can be used to improve the precision of the people count in certain location types whereby known counting methods are inaccurate, e.g. counting people walking in groups on a busy shopping street. It should also be understood that the people counting device used in the present embodiment may, in a further embodiment, not comprise a camera. A camera may be used to create the ML model. Subsequently, a people counting device comprising a radar device, but not a camera, can be used to collect radar data which would then be input into the ML model (application of the ML model) on the cloud computing platform or server.
Typically, in known systems, the radar device 130 tends to represent the group of people (relatively closely positioned to each other and moving together) as a 'blob', and thereby undercounts the number of people in the group. This problem is now discussed in further detail as follows with reference to Figures 9 and 10. Firstly, Figure 9 shows schematically the detection, by a radar device 130, of three people who are moving individually and in the same direction. Three points 202a, 202b, 202c make up an initial point cloud and are detected in an initial point cloud frame 204 and their corresponding points at a first time interval 206 and a second time interval 208 are shown. Each set of points is then converted into a track that represents the movement of a person, such that three tracks 210a, 210b, 210c are produced. Subsequently, the tracks are converted into counts 212. In this case, a track is produced for each of the three detected points, which results in a count of three people (where the track is only considered if it crosses a line (predetermined location) within the field of view of the radar device 130).
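The conversion of tracks to counts via line crossing could be sketched as follows; the coordinate convention (a counting line at y = line_y, walking direction along the y axis) is illustrative only.

```python
# Simplified sketch of track-to-count conversion: a track contributes to
# the count only if it crosses the counting line, as described above.
def tracks_to_counts(tracks, line_y=0.0):
    """Count tracks crossing y = line_y, split by direction of travel."""
    left_to_right = right_to_left = 0
    for track in tracks:  # each track: time-ordered list of (x, y) points
        ys = [p[1] for p in track]
        if min(ys) < line_y < max(ys):  # track crosses the line
            if ys[0] < line_y:
                left_to_right += 1
            else:
                right_to_left += 1
    return left_to_right, right_to_left
```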
In contrast, Figure 10 shows the detection, by a radar device, of three people who are moving in close proximity to one another and within a group. Instead of detecting three separate points, the radar device represents the group as a blob and detects a single point 214 in the initial point cloud frame 216. Corresponding points at a first time interval 218 and a second time interval 220 are shown. The set of locations for the single point 214 in different time frames is then converted into a track 222, and subsequently, the track is converted into a count 224 of one person, even though there are three people in the group. In this case, a track is not produced for each person because the group is detected as a single point 214, rather than the individual points. The number of people in the group is, therefore, undercounted by the radar device 130.
In accordance with the present embodiment, an ML model is trained based on different shapes and sizes of detected blobs and their links with manual counts. The method steps used to train the ML model will now be described with reference to Figure 11. The method begins, at Step 226, by the radar device detecting blobs in different locations within a predetermined time period. Any suitable sample size of blobs may be used to train the ML model, e.g. 1,000 to 2,000 blobs. Samples of blobs are obtained from different locations since the velocity of the moving people has an impact on the size and shape of blobs which represent groups. For example, a first location may be a tube station exit where people tend to move quickly and mainly in one direction. A second location may be a shopping street where people tend to move more slowly and in larger groups. At Step 228, the radar data comprising the shapes and sizes of the blobs are collected by the radar device. Simultaneously or subsequently, at Step 230, the camera records videos in the same locations and same time period (data acquisition time period) in which the radar data has been collected. At Step 232, the video data comprising videos of people moving in groups are collected by the camera.
Next, at Step 234, the number of people within each blob is counted manually using the video data to obtain a manual count. In some embodiments, image recognition software may be used to obtain the manual count and to train the ML model. At Step 236, the different shapes and sizes of blobs sensed by the radar device are linked to the number of people within each blob which is obtained by the manual count using the video data. At Step 238, a trained ML model is produced based on the links between the radar data and the manual count. The ML model is trained until the error is less than a threshold error, which is determined at Step 240. For example, the model is trained until the average daily mean error for the radar sensor is within +/-5%. This training can typically be implemented using a neural network. Once training of the ML model is completed, the ML model can be used to accurately predict counts of people where the people are moving in groups and thus detected as blobs, rather than individual points, by the radar device. Movement of the blobs can be monitored in the field of view, such that they are not counted more than once.
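A hedged sketch of such a blob-to-count model follows; the blob features, the regressor choice and the file names are assumptions, since the text specifies only that blob shapes and sizes are linked to manual counts and trained until the error threshold is met.

```python
# Hedged sketch of a blob-to-count model; features, regressor choice
# and file names are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one row per blob, e.g. [width_m, depth_m, area_m2, mean_speed_mps]
# y: people per blob, from the manual count of the video data
X = np.load("blob_features.npy")
y = np.load("manual_counts.npy")

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
model.fit(X, y)

predicted_counts = np.rint(model.predict(X))  # people counts are integers
```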
The above-described method of the present embodiment results in a significant improvement in the precision of the people count produced by the radar device. Namely, an improvement involving 95% of all radar counts being within +/-5% of the manual count may be achieved using the above method.
The present embodiment further seeks to improve the accuracy of the radar counts obtained. The accuracy of radar counts can be analysed using mean percent error, per hour in a day or per day in a week, when the counts obtained using the radar device are compared with manual counts which are obtained from the camera data for the same scene at the same time. For example, manual counts may be obtained by visually counting the number of people within a video frame. Definitions of mean percent error per hour in a day and per day in a week are set out below.
The percent error between radar counts and manual counts, calculated per hour in a day, PE(h), is defined as:

$$PE(h) = \frac{\text{Radar count}(h) - \text{Manual count}(h)}{\text{Manual count}(h)} \times 100$$

The mean percent error (MPE) per hour in a day can be defined with respect to PE(h), by summing the percent errors calculated per hour in a day and dividing the total by 24, as follows:

$$\text{MPE per hour} = \frac{\sum_{h=1}^{24} PE(h)}{24}$$

where h is a value representing the hour of the day, and as such, the possible values of h are 1 to 24.
Analogously, the percent error between radar counts and manual counts, calculated per day in a week, PE(d), is defined as:

$$PE(d) = \frac{\text{Radar count}(d) - \text{Manual count}(d)}{\text{Manual count}(d)} \times 100$$

The mean percent error (MPE) per day in a week can be defined with respect to PE(d), by summing the percent errors calculated per day in a week and dividing the total by 7, as follows:

$$\text{MPE per day} = \frac{\sum_{d=1}^{7} PE(d)}{7}$$

where d is a value representing the day of the week, and as such, the possible values of d are 1 to 7.
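These definitions translate directly into code; a minimal sketch:

```python
# Direct implementation of the PE and MPE definitions above.
def percent_error(radar_count, manual_count):
    return (radar_count - manual_count) / manual_count * 100

def mean_percent_error(radar_counts, manual_counts):
    """MPE over matched periods: 24 hourly pairs or 7 daily pairs."""
    errors = [percent_error(r, m) for r, m in zip(radar_counts, manual_counts)]
    return sum(errors) / len(errors)
```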
The following issues have been addressed using the method of the present embodiments to achieve an improved radar count accuracy.
In current methods, there are several issues with counting the number of target points which indicate the movement of a person. For example, some target points can be missing from the field of view, or the number of target points may reduce to a single target rather than multiple targets, which would lead to inaccuracies in the resulting number of tracks that represent movement of people in the field of view. Another issue with current methods is unreliable detection of people, in that some people that are in the field of view are either not registered by the radar device or registered only for a limited distance within the field of view.
The present method of the described embodiments has addressed the above issues by improving the sensitivity of the radar device 130 to enhance the capture of passers-by data. This ensures that tracks are visible throughout the field of view. In addition, the accuracy of converting the number of tracks to people counts is improved by counting the number of tracks that are crossing two parallel lines separated by a predetermined distance (e.g. 0.5 metres or 1 metre), rather than a single line, in the field of view. Effectively, a counting zone is defined by the two lines. In an alternative configuration of this embodiment, some tracks may only appear in certain parts of the field of view, e.g. towards a corner or edge of the field of view. The accuracy of counts in these cases may be improved by counting tracks that represent a person moving in a certain direction for more than a predetermined distance (e.g. 0.5 metres or 1 metre) rather than through a predetermined set of locations. In this example, counts are not limited to counting tracks that cross two lines. In this way, tracks that appear toward, for example, the corner or edge of the field of view can be counted. Furthermore, the granularity, i.e. level of detail, and precision of the field of view of the radar device 130 have been doubled, and thus significantly improved. Namely, the present improvements enable an accuracy of less than 5% mean percent error per day to be achieved.
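The two-line counting zone could be implemented along the following lines; the geometry (lines at y = 0 and y = gap_m) is illustrative only.

```python
# Sketch of the two-line counting zone: a track is counted only if it
# crosses both parallel lines, filtering out short or spurious tracks.
def count_zone_crossings(tracks, gap_m=0.5):
    count = 0
    for track in tracks:  # time-ordered (x, y) positions in metres
        ys = [p[1] for p in track]
        if min(ys) < 0.0 and max(ys) > gap_m:  # crosses both lines
            count += 1
    return count
```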
A third embodiment of the people counting device and method will now be described with reference to Figures 12 to 17B. The third embodiment includes some of the features of the first embodiment and/or the second embodiment. To avoid unnecessary repetition, only the differences between the embodiments are described below.
The present embodiment seeks to improve the precision/accuracy of footfall counts produced by the people counting device 114 for locations where the density of people passing by is greater than a threshold density for that particular location. The threshold density may be different for different locations. For example, some locations may have a relatively low threshold density, e.g. 50 people passing by per hour. Other locations may have a relatively high threshold density, e.g. 2,500 people passing by per hour. Using the method of the present embodiment becomes particularly important for counting people when counts are required for at least the threshold density of people because, at people densities above the threshold, the people count achieved by conventional UWB radar sensors is imprecise.
As has been mentioned previously, people counting systems based on UWB radar sensor technology often suffer from inaccurate counting of large and dense groups of people. A scenario exemplifying the difficulty in counting a group of people, whereby the group has a relatively high people density, is illustrated in Figure 12. Firstly, several people in close proximity to one another within a group are represented by points 302 in an initial point cloud frame 304. Points representing people at a first time interval and a second time interval are represented by a first target point cloud frame 306 and a second target point cloud frame 308, respectively. It should be noted that points may not be distinguished for each person in a group, as described above with reference to Figures 9 and 10; in such a case, some of the points may represent multiple people in a group rather than individual people. Firstly, it is extremely difficult to associate a point in the first target point cloud frame 306 with the point representing the same person in the initial point cloud frame 304. Similarly, as each person in the group moves over time, difficulty persists in associating a point in the second target point cloud frame 308 with the point representing the same person in the first target point cloud frame 306 as well as the initial point cloud frame 304. The set of points would then need to be converted into tracks to represent movement of each person over a specified time period, and the tracks would then need to be converted into counts. However, due to the difficulty in associating points between the initial point cloud frame 304 and the target point cloud frames 306, 308 where the number of people is high, the process leads to unreliable counts 310.
The above-described problem is further illustrated with reference to Figure 13. The graph in Figure 13 shows a comparison between UWB radar counts 312 and manual counts 314 (y axis) for data which was recorded at hourly intervals (x axis) at a particular location. As can be seen by the boxes 316a, 316b in the graph of Figure 13, there is a significant difference between the radar count and the manual count when a count of above approximately 2,500 people is required. When large and dense groups of people are being counted, the radar sensor typically undercounts the number of people and the error for each hour in the day of the radar count based on a manual count is high.
In the present embodiment, a people counting method which uses camera data obtained by the people counting device to train an ML model is used to improve the precision of counts. The camera counts may also be used to adjust the radar counts such that they comply with predetermined precision criteria. The method resolves issues of known systems with respect to undercounting people that are walking in close proximity to one another and in large and/or dense groups. The method of the present embodiment may be used for all types of locations, e.g. a shopping street, a passageway, or a store front in a shopping park. It should be understood that the people counting device used in a further embodiment may not comprise a camera. A camera may be used to create the ML model. Subsequently, a people counting device comprising a radar device, but not a camera, can be used to collect radar data which would then be input into the ML model (application of the ML model) on the cloud computing platform or server.
Training of ML model using camera data

The method used to train the ML model will now be discussed with reference to Figure 14. The method begins, at Step 318, by the camera recording videos of different locations within a predetermined time period. The videos may be recorded in different location configurations, e.g. facing the street in front of a store, or facing a passageway. The videos may also be recorded using different camera angles and/or different densities of people passing by. At Step 320, the radar device collects data which corresponds to the locations, location configurations, camera angles and/or people densities for which the videos are recorded for the same time period (data acquisition period). For any groups visible within the video frames of the recorded videos, a 'blob' is created, at Step 322, for each group of people. At Step 324, the number of people within each blob is counted manually using the video data. Subsequently, at Step 326, each blob is then linked to the number of people obtained for the corresponding blob via the manual count.
The method continues by, at Step 328, obtaining a people count from the radar data, and, at Step 330, obtaining a manual people count for each of 500 video frames from the camera data. The accuracy of the counts obtained from the radar data is then determined by comparing these counts to the manual counts obtained from the camera data for the same time period. At Step 332, a 'count limit' of the radar counts is then determined by ascertaining the point at which the accuracy of counts drops. For example, the error in counts when comparing radar counts to manual counts may be less than 5% for 2 standard deviations, i.e. 95% of radar counts may be within +/-5% of the manual counts, up until a count of 2,500 people. The count limit in this case would be 2,500 people. For video frames containing more people than the count limit, a manual count is carried out, at Step 334.
At Step 336, the radar data, for which the people count is higher than the count limit, is linked to the number of people determined via the manual count to be in the corresponding video frame. The method continues by producing, at Step 338, a trained ML model based on the links between the radar data and the manual count. The ML model is trained until the error between radar counts and manual counts is below a predefined threshold, or until the accuracy between radar counts and manual counts is above a predefined threshold, which is determined at Step 340. For example, the ML model may be trained until the error in counts when comparing radar counts to manual counts is less than 5% for 2 standard deviations, i.e. 95% of radar counts are within +/-5% of the manual counts.
Calibration techniques

Once the ML model has been trained, it is evaluated to assess its behaviour and performance. It is important for the predictions of the ML model to not only be accurate, but also to be well-calibrated. To measure how well-calibrated the ML model is, and thus provide a measure of how realistic the ML model prediction is, a calibration plot can be used to show the relationship between the true class of the samples and the predicted probabilities. In the present embodiment, the quality of the ML model is evaluated using the calibration techniques discussed below, with reference to Figure 15 and Figure 16.
The calibration techniques involve firstly fine-tuning the parameters on the radar sensor, followed by fitting a calibration formula that produces the best results based on a sample of data. The sample data is collected on a semi-pedestrian street with medium/high people density, whereby people are passing by a store front on the pavement of a relatively busy street. Fitting of the calibration formula is repeated for different types of location configurations, such as pedestrianised streets, squares, and indoor space. In some embodiments, fitting of the calibration formula may also be repeated for different people densities or volumes.
Calibration for a semi-pedestrian street, i.e. a street with two pavements and a road in the middle, is performed on a total of 100 10-minute counts. In the present embodiment, approximately 50 10-minute counts have been obtained from a first location, Old Street (Figure 15), and approximately 50 10-minute counts have been obtained from a second location, Oxford Street (Figure 16).
Two calibration formulas can be considered and fitted: a linear calibration and a power law calibration. Different calibration formulas may be used for other location configurations, such as wide pedestrian streets or streets with metal objects. The linear calibration and power law calibration formulae used for a semi-pedestrian street are discussed in further detail below.
Linear fit:

$$\text{Fitted Count} = M \times \text{radar count}$$

where M has a value between 0.8 and 1.1.

Power law fit:

$$\text{Fitted Count} = A \times \text{radar count}^{B}$$

where A has a value between 0.4 and 1, and B has a value between 1 and 1.2.
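Both fits can be obtained with ordinary least squares, the power law via the log transform derived later in this section; the following NumPy sketch illustrates this.

```python
# Both calibration fits via NumPy least squares; the power law is
# fitted in log space, matching the derivation given below.
import numpy as np

def fit_linear(radar, true):
    """Fitted Count = M * radar count."""
    radar, true = np.asarray(radar, float), np.asarray(true, float)
    M, *_ = np.linalg.lstsq(radar.reshape(-1, 1), true, rcond=None)
    return float(M[0])

def fit_power_law(radar, true):
    """Fitted Count = A * radar count ** B, via log-space least squares."""
    ln_x, ln_y = np.log(np.asarray(radar, float)), np.log(np.asarray(true, float))
    design = np.column_stack([np.ones_like(ln_x), ln_x])
    (a, b), *_ = np.linalg.lstsq(design, ln_y, rcond=None)
    return float(np.exp(a)), float(b)  # A = e^a, B = b
```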
Figure 15 shows scatter plots for the linear fit 342 and the power fit 344, whereby the scatter plots display calibrated measurements against true counts for 50 10-minute counts in Old Street. Calibration is obtained by fitting all available values. The parameters are as follows: M = 0.92 (linear fit 342); A = 0.48 (power fit 344); B = 1.12 (power fit 344). Distribution of % error on the 10-minute calibrated counts in Old Street, which is discussed in further detail below, is displayed in the histograms (linear fit histogram 350; power fit histogram 352) of Figure 15.
Figure 16 shows scatter plots for the linear fit 346 and the power fit 348, whereby the scatter plots display calibrated measurements against true counts for 50 10-minute counts in Oxford Street. Calibration is obtained by fitting all available values. The parameters are as follows: M = 0.86 (linear fit 346); A = 0.47 (power fit 348); B = 1.11 (power fit 348). Distribution of percentage error on the 10-minute calibrated counts in Oxford Street, which is discussed in further detail below, is displayed in the histograms (linear fit histogram 354; power fit histogram 356) of Figure 16.
The parameters obtained for the linear and power fits are discussed in detail below, followed by a discussion of the measurement and distribution of percentage error.
The parameters for the linear and power law fit are obtained through linear least squares optimisation. For the linear model, the parameter M is obtained as:

$$M = (x^{T} x)^{-1} x^{T} y$$

where x is the array of radar counts, y the array of true counts, T indicates matrix transpose and -1 indicates matrix inverse.
For the power law model, the parameters A and B are obtained from the linear least squares solution to the logarithm of the data:

$$A = e^{a}, \qquad B = b$$

$$b = \frac{n \sum_{i=1}^{n} (\ln x_i \ln y_i) - \sum_{i=1}^{n} (\ln x_i) \sum_{i=1}^{n} (\ln y_i)}{n \sum_{i=1}^{n} (\ln x_i)^{2} - \left( \sum_{i=1}^{n} \ln x_i \right)^{2}}$$

$$a = \frac{\sum_{i=1}^{n} (\ln y_i) - b \sum_{i=1}^{n} (\ln x_i)}{n}$$

where y = true counts, x = radar counts, n is the number of points used for fitting, and i is an index running through the data used for the fit. Fitting the calibration formula, therefore, involves obtaining the above coefficients (M for the linear fit; A and B for the power law fit) using the information of the whole dataset, rather than a fraction of it.
More specifically, a bootstrap method is applied by which the following 3-step process is repeated 1,000 times: 1) split all of the available counts into training values (80%) and test values (20%), chosen at random; 2) obtain the coefficients (M for linear fit; A and B for power fit) which minimise the squared error of the fitted count for the training values; and 3) calculate the error the model incurs when using the coefficients found in step 2 on the test set data. Once this is done, the 'best fit' coefficients (used for the final fitting) are calculated as the average of the 1,000 coefficients obtained, one per repetition. The 'best fit' coefficients, therefore, make use of the information of the whole dataset since, statistically, fitting 1,000 times on 80% of the available data chosen at random is comparable to fitting on 100% of the data. All of the errors incurred by the model in the 1,000 repetitions are stored and used as a metric of the model's performance.
Since for every repetition the errors are calculated on test points which have not been used for that particular fitting process, the error estimate is accurate and realistic.
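The bootstrap might be implemented as in the following sketch; fit_fn and predict_fn stand in for the linear or power law fitting and prediction routines (e.g. fit_linear above with predict_fn = lambda x, M: M * x), and the percentage error follows the definition given next.

```python
# Illustrative bootstrap: 1,000 random 80/20 splits, fit on the training
# values, measure % error on the test values. fit_fn and predict_fn are
# placeholders for the linear or power law routines.
import numpy as np

def bootstrap_calibration(radar, true, fit_fn, predict_fn, n_reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    radar, true = np.asarray(radar, float), np.asarray(true, float)
    n = len(radar)
    coefs, errors = [], []
    for _ in range(n_reps):
        idx = rng.permutation(n)
        train, test = idx[:int(0.8 * n)], idx[int(0.8 * n):]
        c = fit_fn(radar[train], true[train])
        coefs.append(c)
        fitted = predict_fn(radar[test], c)
        errors.extend((true[test] - fitted) / true[test] * 100)  # % error
    # 'best fit' coefficients: average over the repetitions
    best_fit = np.mean(np.asarray(coefs), axis=0)
    return best_fit, np.asarray(errors)
```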
To measure error, percentage error (% error) is used and defined as follows:

$$\%\ \text{error} = \frac{\text{true count} - \text{fitted count}}{\text{true count}} \times 100$$

The above % errors are calculated at 10-minute intervals. Since the fitting process is repeated 1,000 times, a total of 10,000 errors are obtained per location (20% of 50 data points × 1,000 repetitions).
The histograms in Figures 15 and 16 show the 10,000 percentage errors at 10-minute intervals. The distribution of % error on 10-minute calibrated counts is shown in the histograms, where the calibration was obtained by fitting 80% of the available data, and the error values were obtained by testing on the remaining 20% of the data (80-train 20-test data split).
The text on the histogram plots reports the boundaries of the 95% confidence interval for the estimated daily % error (obtained by summing 8 hours' worth of data) using the fitting method; these were also obtained using the 80-train 20-test data split. The key performance indicator (KPI) is set to [-5, +5]; therefore, any interval which is smaller than [-5, +5] respects the KPI.
When estimating daily error, it is assumed that all of the footfall traffic happens during the 8 hours of daylight (which is a rough assumption but not too far from reality). Therefore, the numbers seen on the histograms are obtained by summing the fitted counts over 48 randomly-chosen 10-minute intervals, which together make up an 8-hour interval. The daily % error is then evaluated through the above formula on the summed counts for these 48 10-minute intervals.
Alternatively, if a day is considered to consist of 24 hours of homogeneous pedestrian traffic (i.e. where footfall at 04:00 is not very different from footfall at 14:00), then the daily % error is likely to be lower. However, the present method uses 8 hours of peak traffic in an effort to minimise error where it is most likely to happen.
Lastly, to obtain the 95% confidence intervals and the mean % error on a daily count, the 208 8-hour % errors obtained by summing the 10,000 10-minute % errors in non-overlapping groups of 48 are considered. Of these 208 (10,000/6 ≈ 1,666 hours; 1,666/8 ≈ 208 'days') daily % errors, the mean is calculated to reach the mean daily % error, and the values of the 2.5 and 97.5 percentiles are taken to obtain the 95% confidence interval of the daily % error.
Application of ML model

An exemplary method of applying the ML model to obtain counts based on radar data will now be described with reference to Figure 17A. The method begins, at Step 358, by uploading an ML model, which has been trained and evaluated, to a service of the cloud computing platform (such as Azure ML Service) for deployment. The next step is, at Step 360, applying a calibration algorithm, as discussed in detail above. The method then tests, at Step 362, the accuracy of radar counts. A first accuracy test comprises comparing radar counts against a sample of test camera data captured on video over 24 hours. The method checks, at Step 364, if the error when comparing radar counts against the test samples is less than an error limit, such as 5% for 2 standard deviations over 24 hours of data, i.e. 95% of radar counts are within +/-5% of the test samples. In some embodiments, the sample may comprise 8 hours of data to capture counts during the day or night only. If the test at Step 364 is passed, then the method passes the first accuracy test and moves on to a second accuracy test which takes place at Step 368. If, however, the error is higher than the error limit, the method, at Step 366, further trains the model for each location. Further training is carried out by obtaining 7 days of video and counting people in a minimum of 30,000 frames using shape recognition software. Alternatively, if the people density in the frames is too high for shape recognition software to be used, a manual count may be obtained. After further training of the model, the method proceeds to repeat the accuracy test at Step 364.
The second accuracy test comprises comparing radar counts against a test sample, whereby if the error when comparing radar counts against the test samples is less than an error limit, such as 15% for 2 standard deviations over 1 hour of data, i.e. 95% of radar counts are within +/-15% of the test samples, then the method passes the second accuracy test. If, however, the error is higher than the error limit, the model is further trained for each location, as described above. Once both the first accuracy test and the second accuracy test have been passed, the method proceeds to Step 370, where the model is used to count passers-by based on the radar data. It should be understood that the error limits discussed above are non-limiting examples to show how the accuracy of the radar counts can be tested. Error limits which are lower or higher than those discussed in the present example, as well as sample data collected over different time periods, may be used to test the accuracy of the radar counts.
The processes involved in Step 360, which comprises application of the calibration algorithm, and Step 362, which comprises accuracy testing, will now be discussed further with reference to Figure 17B. In the present example, the following power law calibration formula is fitted:

$$\text{Fitted Count} = A \times \text{Radar Count}^{B}$$

to determine values of A and B. The process begins by, at Step 360a, installing a people counting device at multiple target locations for sample data collection such that fitting of the calibration formula can be repeated for different types of location configurations. For example, the target location may comprise a pedestrianised street, a semi-pedestrian street, a square, and an indoor space. In some embodiments, fitting of the calibration formula may also be repeated for different people densities or volumes.
At Step 360b, the calibration process begins. Radar count data are collected at Step 360c. The process continues by, at Step 360d, fitting the calibration formula and computing values of A and B for each of the target locations. Once the values of A and B have been determined, at Step 362a the accuracy of the radar counts for each location is determined by calculating the percentage error between the radar counts and the fitted counts. At Step 362b, the process continues by checking whether the device has been installed correctly. In some embodiments, Step 362b may only be carried out if an abnormal % error is calculated at Step 362a. In the event that the device has not been installed correctly, the calibration process is repeated from Step 360c. Next, the process checks, at Step 364, which is equivalent to Step 364 in Figure 17A, whether the error when comparing radar counts against the fitted counts is less than an error limit, such as 5% for 2 standard deviations over 24 hours of data, i.e. 95% of radar counts are within +/-5% of the fitted counts. If the accuracy test is not passed, further analysis is carried out, e.g. patterns in the sample data are studied and alternative solutions sought by training the model further as per Step 366 of Figure 17A. If the accuracy test is passed, then the method continues at Step 368 (shown in Figure 17A).
The camera of the people counting device is, therefore, used to fine-tune the precision of the counts per site location by adjusting the ML model. The people counting device 114 of the present embodiment functions accurately and reliably in adverse lighting conditions as well as when exposed to unfavourable weather conditions, providing a significant advantage over conventional camera-based people counting methods. Videos of large groups and camera counting techniques in good lighting conditions are used to determine accurate counts. These accurate counts are linked to the UWB radar sensor data in the model, whereby the error in the UWB radar counts for large groups is corrected by the model. Once the accuracy of correction has been tested on some sample data and shown to be accurate, the UWB radar counts in any environmental situation can be corrected by the model to provide an accurate count in any environmental condition (low light, bad weather, etc.). In addition, the device can accurately count large and/or dense groups of people that are in the field of view of the radar sensor. Furthermore, the people counting device of the present invention provides a relatively low-cost solution in comparison to conventional video camera-based techniques.
A fourth embodiment of the people counting device and associated methods will now be described with reference to Figure 18. The fourth embodiment includes some or all of the features of the first, second and/or third embodiments. To avoid unnecessary repetition, only the differences are described below.
Multiple people counting devices 402a, 402b, 402c at different locations, each one comprising a radar device and a camera, can be linked to a central server 404 and associated database 406 via a communications network 408, as shown in Figure 18. The people counting devices 402a, 402b, 402c are installed at store fronts 410a, 410b, 410c at three different locations.
It should be understood that although the present embodiment depicts the people counting devices 402a, 402b, 402c in a single hardware unit per store, in other embodiments the people counting devices may comprise a sensor hardware unit 114A, comprising a radar device 130 and the camera 132, for installation at the store front, and a base unit 114B comprising a processor 134 and an external communications interface 136 for installation within the store, such that the base unit 114B is in communication with the sensor hardware unit 114A via suitable communication means, e.g. a Wi-Fi connection or Power over Ethernet (POE) cable.
The camera of the people counting device 402a, 402b, 402c at each location can be used for the purpose of optimising installation of the device. During installation, the device is configured to collect image or video data using the camera 132. The data collected by the camera 132 can be used to determine the field of view of the radar sensor of the radar device 130. Having access to the field of view during installation enables zones within which people are to be counted, i.e. counting zones, to be determined.
Once the people counting device has been set up, the counts from the radar device 130 and data collected by the camera 132 at each location are sent to the central server 404 for data processing and fine-tuning of footfall counts for the particular location at which the device has been installed.
If any changes or anomalies in the counts are detected while the counts are being considered on the server 404, the device 402a, 402b, 402c, for that location, is configured to collect video data using the camera 132 in order to enable the cause of the change or anomaly to be assessed. In other embodiments, the camera 132 of the devices 402a, 402b, 402c may be configured to capture an image at the site at the time of setting up the device 402a, 402b, 402c, and to capture further images at the site at regular intervals thereafter. The images can be sent to the central server 404 for comparison and analysis. Changes in site configuration can be detected by the comparison. If a change in site configuration is detected, an alert signal can be sent to an App running on a mobile device to alert a user regarding the change detected, provide the status of the device 402a, 402b, 402c and/or provide a comparison of the captured images.
The images obtained from the camera 132 at the site (under instruction from the processor 134 to the camera 132 before, during and/or after the data acquisition period) can also then be investigated to determine the cause of the change or anomaly in the counts. For example, changes in the counts may be due to abnormalities or other changes in site configuration. The image or video data from the camera 132 can be used to determine whether the device has been moved, or if there is an obstruction in the field of view (e.g. scaffolding) or if some other change has occurred at that particular location. Such changes can have an impact on counting precision as well as the number of people passing by. There is no limit to the number of people counting devices 402a, 402b, 402c that can be monitored in this way.
A fifth embodiment of the people counting device and associated methods will now be described with reference to Figure 19. The fifth embodiment includes some of the features of the first, second, third and/or fourth embodiments. To avoid unnecessary repetition, only the differences are described below.
As depicted in Figure 19, the people counting device 514 of the present embodiment is wirelessly connectable to a remote server 524 via a wide-area communications network 526. A database 528 is provided with the server 524 to store data and/or information. In some embodiments, the people counting device is wirelessly connectable to multiple remote servers and/or databases.
The present embodiment comprises at least the hardware components of the people counting device described in the first embodiment with reference to Figure 4. Namely, the hardware components of the people counting device 514 include a radar device 530, a camera 532 (whereby the combination of the radar device 530 and the camera 532 is referred to as the sensor hardware unit 514A), a processor 534, and external communications interfaces 536 (whereby the combination of the processor 534 and the external communications interfaces 536 is referred to as the base unit 514B, or base station). In addition, the device of the present embodiment comprises an accelerometer 550 within the sensor hardware unit 514A. The accelerometer 550 may be built into the device 514 or it may be attached to another hardware component of the device, e.g. the camera. The accelerometer 550 is connectable to the processor 534 of the people counting device by way of a wired connection such as USB, or a wireless connection such as Wi-Fi or Bluetooth.
The present embodiment comprises at least the software components of the people counting device 114 described in the first embodiment with reference to Figure 5. Additionally, there is provided an accelerometer capture software component (not shown) as part of the base unit 514B. The accelerometer capture software component is configured to receive accelerometer data as input, whereby the accelerometer data has been output by the accelerometer 550. The accelerometer data may be sent to a data send module, which is integrated with the external communications interfaces 536, for transmitting to the external server 524 via the communications network 526.
Accuracy of footfall counts can be affected significantly by movement of the people counting device as a whole, or movement of components of the device, such as the camera or radar device. The accelerometer 550 can be used to detect any movement of the device. The accelerometer 550 may also be used to monitor the stability of the device. For example, if the device is moved, the accelerometer 550 is configured to sense the movement of the device and subsequently send an alert to the processor 534. That alert can then result in a message being created and sent via the communications network 526 to the central server 524. The accelerometer 550 can also be used to record the angle at which the sensor hardware unit 514A is positioned. In some embodiments, the accelerometer can additionally have a built-in three-axis magnetometer or a gyroscope chip which can help increase the accuracy of the angle of tilt detection, but this is not essential in the present embodiment. The accelerometer 550 is configured to send the angle data to the base unit 514B for subsequent transmission to the external server 524 via a communications network 526. The angle data for the people counting devices can be stored and accessed via an App. This data could be used when a device in a particular location fails or malfunctions, thereby requiring replacement, such that the sensor hardware unit 514A of the replacement device can be positioned at the same angle as the malfunctioning sensor hardware unit 514A to achieve optimal positioning.
As described previously, once the people counting device has been set up, the counts from the radar device 530 and data collected by the camera 532 at each location are sent to the central server 524 for monitoring the operation of the device, namely for data processing and fine-tuning of footfall counts for the particular location at which the device has been installed. If any changes or anomalies in the counts are detected while the counts are being considered on the server 524, the device 514, for that location, is configured to collect accelerometer data using the accelerometer 550 in order to enable the cause of the change or anomaly to be assessed. The data obtained from accelerometer 550 at the site can then be investigated to determine the cause of the change or anomaly in the counts. For example, changes in the counts may be due to abnormalities or other changes in site configuration which have affected the stability of the device 514. The accelerometer data can be used to determine whether the position of the device 514 has changed or if there has been some sudden movement of the device 514 at that particular location.
A sixth embodiment of the people counting device and associated methods will now be described with reference to Figures 20 and 21. The sixth embodiment includes some of the features of the first, second, third, fourth and/or fifth embodiments. To avoid unnecessary repetition, only the differences are described below.
The method of the present embodiment can be used to compare the number of people passing by a store to the number of people entering the store. The method can be carried out using either a single people counting device or multiple people counting devices.
In a first example, a single people counting device 602, as shown in Figure 20, is positioned on a store front 604 in close proximity to the store entrance 606. Figures 20 and 21 show the store front in front view and plan view, respectively. The people counting device 602 comprises a UWB radar sensor. The UWB radar sensor defines a first counting zone 610 which covers a region of the pavement including the area on either side of the store, such that people passing by the store front can be counted by the device. The UWB radar sensor defines a second counting zone 608 which covers the region directly in front of the doorway of the store front, such that people entering the store can be counted by the device. The direction of movement is also detected. For the second counting zone 608, direction of movement is particularly useful because it can be used to determine whether people are entering the store or leaving the store. In order for the field of view of the radar sensor to capture both people passing by the store front and people entering/leaving the store, the sensor is positioned on the window of the store front at an angle which enables it to define the first counting zone 610 by two imaginary parallel lines 609 across the pavement and the second counting zone 608 by two imaginary parallel lines 607 across the entrance.
Figure 22 shows a plot of 10,000 tracks obtained by a radar device, using the set-up described above with reference to Figures 20 and 21. The features in Figures 20 and 21 which are also visible in Figure 22 have been labelled as appropriate, namely the people counting device 602, the store front 604, the store entrance 606, the first counting zone 610, and the second counting zone 608. The tracks are plotted against the distance from the store front 604 in a direction along the pavement, and the distance from the device 602 in a direction along the store front 604. The tracks represent movement of people in front of the store. The plot also depicts the position of the device 602, the field of view of the radar sensor of the device 602, the door 606 of the store, and a seating area. The first counting zone 610 is defined such that people passing by the store front can be counted by the device. The second counting zone 608 covers the region directly in front of the door of the store front, such that people entering the store 604 can be counted by the device 602.
In a second example, two UWB radar sensors 602a, 602b are positioned at the store front 604 in close proximity to the store entrance 606, as shown in Figures 23 and 24 (in front view and plan view, respectively). The two radar sensors 602a, 602b are connected to a single base unit (processor and external communications interface) such that they form part of the same people counting device. In other embodiments, the radar sensors 602a, 602b may be comprised within two separate people counting devices. The first radar sensor 602a defines a first counting zone 610a which covers a region of the pavement including the area on either side of the store, i.e. a field of view of the street, such that people passing by the store front can be counted by the device. People passing by are counted in two directions of movement (left to right, and right to left).
The second radar sensor 602b defines a second counting zone 608a which covers the region directly in front of the doorway of the store front, i.e. a field of view of the store entrance, such that people entering the store can be counted by the device. People moving within the field of view of the second radar sensor 602b are counted in two directions of movement (toward the store entrance and away from the store entrance). The direction of movement of people within the second counting zone 608a is used to determine which people are entering the store, as opposed to leaving the store.
Each radar sensor 602a, 602b is configured to send its counts to the base unit of the people counting device, for sending onto the server for processing. For the second counting zone 608a, the counts are sent separately for each direction of the movement, such that the number of people entering the store and the number of people leaving the store can be determined. The number of people entering the store is then measured against the total number of people passing by to determine a conversion rate. A calibration algorithm may be applied to the counts for each direction of movement. A distinction can also be made between stationary and moving people or objects.
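Reduced to code, the conversion rate is a simple ratio of the second-zone entries to the first-zone pass-bys; the variable names are illustrative.

```python
# For illustration: conversion rate from the two counting zones.
def conversion_rate(entering: int, passing_by: int) -> float:
    """Share of passers-by who enter the store, as a percentage."""
    return entering / passing_by * 100 if passing_by else 0.0
```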
A seventh embodiment of the people counting device and associated methods will now be described with reference to Figures 25 and 26. The seventh embodiment includes some of the features of the first, second, third, fourth, fifth and/or sixth embodiments. To avoid unnecessary repetition, only the differences are described below.
The method of the present embodiment can be used to measure the time a person takes to pass by the store. In addition, the number of people passing by a store can be compared to the number of people entering the store.
By way of a non-limiting example, a single people counting device is used in the present embodiment, although the method can be carried out using either a single people counting device or multiple people counting devices.
As shown in Figure 25, a people counting device 702 is positioned at a store front 704 in close proximity to the store entrance 706. Figures 25 and 26 show the store front in front view and plan view, respectively. The people counting device 702 comprises a UWB radar sensor. The UWB radar sensor defines three counting regions 708, 710, 712. The UWB radar sensor defines a first counting zone 710 which covers a region of the pavement including the area on one side of the store entrance 706. In Figures 25 and 26, the first counting zone 710 is shown to be located toward the right hand side of the store entrance (when facing the store entrance from the street), and the far right end of the field of view of the radar sensor. The UWB radar sensor defines a second counting zone 712 which covers a region of the pavement including the area on the other side of the store entrance 706. Namely, in Figures 25 and 26, the second counting zone 712 is shown to be located toward the left hand side of the store entrance (when facing the store entrance from the street), and the far left end of the field of view of the radar sensor. The first counting zone 710 and the second counting zone 712 can be joined by an imaginary line. In other embodiments, the first counting zone and the second counting zone may not be aligned. Instead, the second counting zone may be situated slightly further toward the store entrance.
The first counting zone 710 can be used to detect people that are present or moving within the first counting zone 710. Analogously, the second counting zone 712 can be used to detect people that are present or moving within the second counting zone 712. People passing by the store front 704 or the store entrance 706 can be counted by the people counting device 702, by counting people passing through both the first counting zone 710 and the second counting zone 712. Direction of movement of the people passing through both zones 710, 712 is also determined, such that the device 702 can determine whether a person is moving from left-to-right or from right-to-left past the store front 704.
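Purely as an illustrative sketch (the patent does not prescribe an implementation), the direction of movement could be classified from the order in which a track first appears in the two counting zones. The code below reuses the Zone representation sketched earlier; the track format of (timestamp, x, y) samples is an assumption.

```python
def direction_of_movement(track, zone_710, zone_712):
    """Return 'right-to-left', 'left-to-right', or None if undetermined.

    Zone 710 lies to the right of the entrance and zone 712 to the left
    (viewed from the street); each track sample is (timestamp, x, y).
    """
    def first_in(zone):
        # Earliest timestamp at which the track is inside the zone.
        return min((t for t, x, y in track if zone.contains((x, y))),
                   default=None)
    t_710, t_712 = first_in(zone_710), first_in(zone_712)
    if t_710 is None or t_712 is None:
        return None
    return "right-to-left" if t_710 < t_712 else "left-to-right"
```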
The time taken for each person to pass by the store front 704 can also be determined. If a person is passing by the store front 704 in a right-to-left movement, as shown in Figures 25 and 26, the time taken for that person to pass from the first counting zone 710 to the second counting zone 712 can be determined. This timing provides a measure of how long the person spent passing the store front 704, which can be used for a variety of applications. For example, if a person takes a relatively long time to pass by the store front 704, this may indicate an eye-catching display or an eye-catching store front 704 as a whole. In contrast, if a person takes a relatively short time to pass by the store front 704, this may indicate that the passer-by is not interested in the appearance of the displays presented at the store front 704.
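A minimal sketch of this timing measurement follows, again reusing the illustrative Zone representation from above; the (timestamp, x, y) track format is an assumption rather than the patented method.

```python
from typing import Optional

def pass_by_time(track, zone_a, zone_b) -> Optional[float]:
    """Seconds taken to move from zone_a to zone_b, or None.

    Each track sample is (timestamp, x, y); zones expose contains((x, y))
    as in the Zone sketch above. For a right-to-left pass, zone_a is the
    first counting zone 710 and zone_b is the second counting zone 712.
    """
    # Last moment the track is seen in the first zone...
    t_exit_a = max((t for t, x, y in track if zone_a.contains((x, y))),
                   default=None)
    # ...and first moment it is seen in the second zone.
    t_enter_b = min((t for t, x, y in track if zone_b.contains((x, y))),
                    default=None)
    if t_exit_a is None or t_enter_b is None or t_enter_b <= t_exit_a:
        return None
    return t_enter_b - t_exit_a
```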
The UWB radar sensor defines a third counting zone 708 which covers the region directly in front of the doorway 706 of the store front 704, such that people entering the store can be counted by the device. In order for the field of view of the radar sensor to capture both people passing by the store front and people entering/leaving the store, the sensor is positioned on the window of the store front at an angle which enables it to define the first counting zone 710, the second counting zone 712 and the third counting zone 708. Similar to the embodiment described above with reference to Figures 20 and 21, the first counting zone 710 and the second counting zone 712 can each be defined by two imaginary parallel lines. The third counting zone 708 can also be defined by two imaginary parallel lines across the entrance. The direction of movement is also detected within the third counting zone 708. Direction of movement detected within the third counting zone 708 can be used to determine whether people are entering the store or leaving the store. The number of people entering the store is measured against the total number of people passing by to determine a conversion rate.
The UWB radar sensor is configured to send radar data or counts to the base unit of the people counting device 702, for sending onto a server, via a communications network, for processing. For the first, second and third counting zones, the counts can be sent separately for each direction of movement. For the third counting zone 708, sending counts separately for each direction of movement can be used to determine the number of people entering the store and the number of people leaving the store. Furthermore, the device 702 can be used to determine whether people who have passed by the store front 704 relatively slowly, e.g. from the first counting zone 710 to the second counting zone 712, have then entered the store by passing through the third counting zone 708. This could be indicative of, for example, a successful advertising display at the store front 704 which has led to an increase in footfall to the store.
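The correlation described in this paragraph could be sketched as follows, building on the earlier illustrative sketches; the slow-pass threshold of 8 seconds and the data structures are assumptions made for illustration only.

```python
# Illustrative threshold: what counts as a "slow" pass-by is an assumption.
SLOW_PASS_SECONDS = 8.0

def slow_passers_who_entered(tracks, zone_710, zone_712, zone_708):
    """Count tracks that passed slowly between zones 710/712, then entered.

    Builds on pass_by_time() and the Zone sketch above; each track is a
    list of (timestamp, x, y) samples.
    """
    count = 0
    for track in tracks:
        duration = pass_by_time(track, zone_710, zone_712)
        entered = any(zone_708.contains((x, y)) for _, x, y in track)
        if duration is not None and duration > SLOW_PASS_SECONDS and entered:
            count += 1
    return count
```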
With reference to Figures 27 and 28, further exemplary applications of the people counting device 114, 402a, 402b, 402c, 514, 602, 602a, 602b, 702 are described. Figure 27 shows application of the device in a retail store environment. Figure 28 shows application of the device in an office environment. In such applications, the people counting device can detect areas of a defined space that are occupied and unoccupied, as well as provide a measure of dwell times within the occupied areas.
Firstly, a space is defined, such as a store space, an office space, or a particular area of a store, for example a store window or an advertising display. In Figure 27, a store space 802, and a floor map 804 thereof, is defined. The floor map 804 shows the arrangement of the store, for example the layout of advertising displays and new season ranges. The people counting device can detect occupied areas 806 of the store. Subsequently, the data showing the areas of the store that are occupied can be mapped onto the floor map 804 to indicate which features of the store are of the most interest to people, e.g. adverts or new product ranges. The amount of time that an area is occupied for, i.e. the dwell time, can also be measured. For example, the average amount of time spent by people reviewing an advertisement can be recorded. Measuring dwell times in occupied areas is useful for showing which areas are most popular over time.
In Figure 28, an office space 902, as well as a floor map 904 showing the arrangement of desks in the office space, is defined. The people counting device detects occupied areas 906 of the office. The data showing the occupied areas 906 of the office is then mapped onto the floor map 904 to indicate which of the desks are occupied or used most frequently. Additionally, or alternatively, unoccupied areas can be mapped onto the floor map to indicate which areas of the store or office are not being used.
The application of dwell times in occupied areas in an office environment can be useful for determining the utility of space over time, as well as how items in the office are being used.
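A minimal sketch of such occupancy and dwell-time mapping is given below; the grid resolution, frame period and detection format are illustrative assumptions, as the embodiments do not specify a particular floor-map representation.

```python
from collections import defaultdict

CELL_SIZE_M = 0.5      # assumed grid resolution in metres
FRAME_PERIOD_S = 0.1   # assumed interval between radar frames

def dwell_map(detections):
    """Accumulate per-cell dwell time from (x, y) detections, one per frame."""
    dwell = defaultdict(float)
    for x, y in detections:
        cell = (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))
        dwell[cell] += FRAME_PERIOD_S
    return dict(dwell)

# Cells with non-zero dwell time correspond to the occupied areas 806/906;
# the remaining floor-map cells are the unoccupied areas.
```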
Many modifications may be made to the specific embodiments described above without departing from the spirit and the scope of the invention as defined in the accompanying claims. Features of one embodiment may also be used in other embodiments, either as an addition to such embodiment or as a replacement thereof. For example, whilst the third embodiment described above uses camera data from the object counting device, this data may instead have been obtained previously during training of the machine learning model (using both camera and radar data), so that the radar people counting device actually deployed need not include a camera. In this case, the radar data is compared against the machine learning model to derive the corrected count. This advantageously leads to a cheaper object counting device, and the cost savings can be significant in systems employing many such counting devices.
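By way of illustration of this camera-less deployment, the following sketch trains a linear calibration model offline on paired radar counts and camera-derived manual counts (mirroring the line of best fit of Claim 18) and applies it to raw radar counts in the field. The use of scikit-learn and the example figures are assumptions, not the patented implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Offline: paired training data from devices fitted with both sensors
# (the figures are illustrative, not measured data).
radar_counts = np.array([[12], [25], [40], [63], [88]])
manual_counts = np.array([14, 29, 47, 75, 108])  # from camera review

model = LinearRegression().fit(radar_counts, manual_counts)

# In the field: a camera-less device reports a raw radar count and the
# server derives the corrected count from the pre-trained model.
corrected = model.predict(np.array([[52]]))[0]
print(f"corrected count: {corrected:.0f}")
```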

Claims (25)

1. An electronic counting device for counting objects proximal to the electronic counting device, the electronic counting device comprising: a radar device for detecting and/or tracking one or more objects in a scene within a predefined region associated with a field of view of the radar device, wherein the radar device is configured to output radar data indicative of the number of objects detected within the field of view over a predetermined acquisition time period; a camera aligned with the field of view of the radar device and being configured to capture an image or a video of the scene, wherein the camera is configured to output camera data regarding the field of view, over the predetermined acquisition time period; a processor connectable to the radar device and the camera; the processor being configured to control the operation of the camera and the radar device for data acquisition, to receive the output radar data and the output camera data and determine a communications channel to be used to send the radar data and the camera data to a remotely-located processing server; and one or more communications engines, operatively connectable to the processor and in communication with the remotely-located processing server, the one or more communications engines being configured to transmit the radar data and the camera data to the remotely-located processing server via the communications channel, such that the camera data can be used to corroborate the radar data and to derive a count of objects proximal to the electronic counting device within the predefined region for the predetermined acquisition time period.
2. An electronic counting device according to Claim 1, wherein the radar data comprises an object count indicating the number of objects detected by the radar device for a predetermined frame time period.
3. An electronic counting device according to Claim 1, wherein the radar device is configured to: track movement of objects by acquiring points data in frames over the predetermined acquisition time period, the points data being acquired in one of the frames for a predetermined frame time period, wherein the points data represent detection of objects within the scene; convert corresponding points in the frames acquired over the predetermined acquisition time period into tracks, each track representing movement of an object in the scene over the predetermined acquisition period, and output the track data to the processor, and wherein the processor is configured to convert the track data into radar count data.
4. An electronic counting device according to any of Claims 1 to 3, wherein the processor is configured to aggregate radar count data for a predefined aggregate time period, such that the aggregate radar count data can be sent to the one or more communication engines for transmitting to the remotely-located processing server.
5. An electronic counting device according to any of Claims 1 to 4, wherein the electronic counting device further comprises an accelerometer for detecting changes in movement or orientation of the electronic counting device, wherein the accelerometer is configured to generate accelerometer data associated with the changes in movement or orientation of the electronic counting device for sending to the remotely-located processing server via the communications channel.
6. An electronic counting device according to Claim 5, wherein the accelerometer is configured to generate the accelerometer data upon receiving a signal from the remotely-located processing server via the processor, the signal being associated with an anomaly in the radar count data.
7. An electronic counting device according to any of Claims 1 to 6, wherein the radar device is configured to define multiple sub-regions in the field of view of the radar device and in use to obtain radar count of objects moving from one sub-region to another sub-region.
8. An electronic counting device according to Claim 7, wherein the radar device is configured to determine a time period taken for an object to move from one predefined sub-region to another predefined sub-region.
9. An electronic counting device according to Claim 7 or 8, wherein the radar device is configured to determine a time period taken for an object to move across a single sub-region.
10. An electronic counting device according to any of Claims 1 to 9, wherein the radar device comprises an Ultra Wide Band Radar (UWB) sensor.
11. An electronic counting device according to any of Claims 1 to 10, wherein the camera and the radar device are provided in a sensor unit and the processor and the one or more communications engines are provided in a base unit separate from the sensor unit, and the camera and the radar device communicate with the processor via a communications link.
12. An electronic counting device according to any of Claims 1 to 11, further comprising a scheduler to control supply of power to the camera to turn the camera on or off at predetermined times.
13. An electronic counting device according to Claim 11 or Claim 12 as dependent on Claim 11, further comprising a further sensor unit including a further radar device and a further camera and wherein the further sensor unit is operatively coupled to the base unit via a further communications link.
14. An electronic counting device according to any of Claims 1 to 13, wherein the processor is configured to synchronise the output radar data to the output camera data by providing the same device or location identifier in the output radar data and the output camera data.
15. An electronic counting device according to any of Claims 1 to 14, wherein the processor is configured to optimise the output camera data for transmission to the remotely-located processing server.
16. A method for acquiring data for counting objects, the method comprising: detecting and/or tracking one or more objects using a radar device over a predetermined acquisition time period to generate radar data, the one or more objects being in a scene within a predefined region associated with a field of view of the radar device; outputting radar data indicative of the number of objects detected within the field of view; capturing an image or a video of the scene using a camera aligned with the field of view of the radar device over the predetermined acquisition time period to generate camera data; outputting camera data regarding the field of view of the camera; receiving the output radar data and the output camera data at a processor which controls the operation of the camera and the radar device for data acquisition; determining a communications channel to be used to send the radar data and the camera data to a remotely-located processing server; and transmitting the radar data and the camera data to the remotely-located processing server via the communications channel, such that the camera data can be used to corroborate the radar data and to derive a count of objects within the predefined region for the predetermined acquisition time period.
17. A processing server for counting objects within a predefined region, the predefined region being proximal to an electronic counting device located remotely from the processing server, the processing server being configured to: receive, via a communications channel, radar data and camera data generated by a radar device and a camera of the counting device, the radar data and camera data relating to objects within a field of view and sensed by the radar device and the camera over a predetermined acquisition time period; use the camera data to corroborate the radar data for the predetermined acquisition time period, and derive a count of objects proximal to the electronic counting device within the predefined region for the predetermined acquisition time period, using the corroborated radar data.
18. A processing server according to Claim 17, wherein the processing server is configured to: obtain manual count data using the camera data; determine calibrated count data based on a comparison of the manual count data and the radar count data; the calibrated count data being derived using a line of best fit between the manual count data and the radar count data; and store the calibrated count data in a data store associated with the processing server.
19. A processing server according to Claim 17, wherein the processing server is configured to: obtain manual count data using the camera data; determine calibrated count data based on applying a calibration machine learning model to the radar count data; and store the calibrated count data in a data store associated with the processing server.
20. A processing server according to Claim 18 or Claim 19, wherein the processing server is configured to: aggregate the calibrated count data for a predefined aggregation time period and store the aggregated count data to the data store.
21. A processing server according to any of Claims 18 to 20, wherein the processing server is configured to: detect one or more missing data values in the aggregated count data and impute the one or more missing data values based on available radar count data and camera data.
22. The processing server of any of Claims 17 to 21, further comprising a data model; wherein the processing server is configured to create the data model based on linking together radar data and camera data acquired from a plurality of counting devices at different geographic locations.
23. The processing server of Claim 22, wherein the radar data used for creating the data model comprises blobs of radar data, wherein a blob represents a group of objects, and the data model comprises data linking different shapes and/or sizes of blobs of radar data to manual counts of objects obtained using the camera data.
24. The processing server of Claim 22 or 23, wherein the processing server is configured to create the data model by determining an object count threshold below which the radar count data is used to produce the count of objects, and above which the radar data linked to manual counts obtained from the camera data is used to produce the count of objects.
25. A system for counting objects within one or more predefined regions, the system comprising: one or more electronic counting devices according to any of Claims 1 to 15, the one or more predefined regions being proximal to their respective one or more electronic counting devices; and a processing server according to any of Claims 17 to 24, wherein the one or more electronic counting devices are configured to transmit the radar data and the camera data to the processing server via a communications channel.
GB1907298.2A 2019-05-23 2019-05-23 Electronic counting device and method for counting objects Withdrawn GB2584619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1907298.2A GB2584619A (en) 2019-05-23 2019-05-23 Electronic counting device and method for counting objects

Publications (2)

Publication Number Publication Date
GB201907298D0 GB201907298D0 (en) 2019-07-10
GB2584619A true GB2584619A (en) 2020-12-16

Family

ID=67385442

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1907298.2A Withdrawn GB2584619A (en) 2019-05-23 2019-05-23 Electronic counting device and method for counting objects

Country Status (1)

Country Link
GB (1) GB2584619A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100191369A1 (en) * 2006-11-03 2010-07-29 Yeong-Ae Kim System of management, information providing and information acquisition for vending machine based upon wire and wireless communication and a method of management, information providing and information acquisition for vending machine
US20130151135A1 (en) * 2010-11-15 2013-06-13 Image Sensing Systems, Inc. Hybrid traffic system and associated method
US20160097849A1 (en) * 2014-10-02 2016-04-07 Trimble Navigation Limited System and methods for intersection positioning
CN109272608A (en) * 2018-08-19 2019-01-25 天津新泰基业电子股份有限公司 A kind of anti-trailing door access control system and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022157412A1 (en) * 2021-01-21 2022-07-28 Kone Corporation Solution for estimating a flow of people at an access point
CN113721255A (en) * 2021-08-17 2021-11-30 北京航空航天大学 Train platform parking point accurate detection method based on laser radar and vision fusion
CN113721255B (en) * 2021-08-17 2023-09-26 北京航空航天大学 Accurate detection method for train platform parking point based on laser radar and vision fusion
WO2024094620A1 (en) * 2022-11-04 2024-05-10 Assa Abloy Ab People detector for detecting when people pass through a doorway
TWI800471B (en) * 2022-11-09 2023-04-21 元智大學 Method for counting number of people based on mmwave radar

Also Published As

Publication number Publication date
GB201907298D0 (en) 2019-07-10

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)