WO2023161651A2 - Monitoring device and method of monitoring an area - Google Patents

Monitoring device and method of monitoring an area

Info

Publication number
WO2023161651A2
WO2023161651A2 (PCT/GB2023/050429)
Authority
WO
WIPO (PCT)
Prior art keywords
monitoring device
image
monitoring
area
region
Prior art date
Application number
PCT/GB2023/050429
Other languages
French (fr)
Other versions
WO2023161651A3 (en)
Inventor
Timothy Paul Evans
Pak Yin Perry LIU
Original Assignee
Vodafone Group Services Limited
Priority date
Filing date
Publication date
Application filed by Vodafone Group Services Limited filed Critical Vodafone Group Services Limited
Publication of WO2023161651A2 publication Critical patent/WO2023161651A2/en
Publication of WO2023161651A3 publication Critical patent/WO2023161651A3/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Y INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00 IoT characterised by the purpose of the information processing
    • G16Y40/10 Detection; Monitoring
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00 TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins

Definitions

  • the present disclosure relates to remote sensors and remote sensing devices.
  • the present disclosure relates to remote sensors which communicate with a server, for example over a wireless network.
  • Electronic sensors can be used for a wide range of monitoring applications.
  • electronic sensors such as temperature sensors, accelerometers, and light sensors can be used to monitor the conditions in a local area surrounding the sensor.
  • in order to perform continuous monitoring of the local conditions surrounding such an electronic sensor, the electronic sensor typically requires connection to some form of power source, as well as a processor in order to interpret the data generated by the electronic sensor.
  • the desired area to be monitored may be dangerous or inconvenient to access, or the desired area to be monitored may not have any suitable locations to place an electronic sensor and associated components.
  • in remote areas, it will be appreciated that it may not be technically or economically feasible to install a complex sensor arrangement.
  • remote areas may not have access to a power supply or communication lines.
  • Coastal erosion is a gradual process which takes place over a number of days, if not weeks, months, or even years.
  • coastal erosion sites are located in relatively remote locations, it is extremely labour intensive to monitor such sites manually.
  • coastal erosion sites may be relatively difficult to access due to the unstable nature of the eroding coastline.
  • Such sites may not be well suited to the installation of sensing devices proximal to the site of coastal erosion.
  • coastal erosion sites do not typically have access to a power supply or communication lines.
  • the present disclosure aims to provide an improved monitoring device, or at least a commercially relevant alternative.
  • a monitoring device for an area comprises: a camera module configured to obtain an image of an area, a data store configured to store the image of the area, a processor and a communications module.
  • the processor is configured to detect one or more spatial features in a region of the image, and to determine one or more monitoring parameters based on the detected one or more spatial features for monitoring the area.
  • the communications module is configured to transmit the monitoring parameter to a server remote from the monitoring device.
  • the monitoring device can monitor an area from a location which is remote from the area.
  • Such remote monitoring is particularly advantageous where a monitoring device cannot be easily located close to the area of interest.
  • the monitoring device of the first aspect provides a device for monitoring an area (such as a cliff face) wherein the monitoring device can be located in a location which is remote from the area to be monitored, and can thus be safely accessed.
  • the proposed location of the monitoring device will be distant from utility services such as electrical power and internet connections.
  • the power and bandwidth consumption of the monitoring device must be carefully controlled. Accordingly, it is not feasible from a power consumption or bandwidth perspective for the monitoring device to simply transmit entire images, or even portions of images to a remote server.
  • the monitoring device of the first aspect determines one or more monitoring parameters which are representative of spatial features in an image of the area being monitored. Such monitoring parameters can be transmitted to a server using significantly less bandwidth than a conventional image, thereby allowing the monitoring device to monitor an area over an extended period of time without requiring any external input.
  • the processor is configured to detect one or more spatial features in a region of the image by comparing a region of the image against a library of predefined shapes stored by the processor and identifying one or more predefined shapes in the region of the image.
  • the library of predefined shapes may comprise predefined, or reference shapes for objects or features of interest to be monitored in the area of interest.
  • the library of predefined shapes may comprise reference shapes for strata in a cliff face.
  • the processor may be configured to detect one or more strata of a cliff face in a region of an image.
  • the library of predefined shapes may comprise reference shapes for water stains on a cliff face.
  • Water stain shapes, or patterns, can be used to monitor erosion of a cliff face. Accordingly, the monitoring device can be used to detect a variety of different spatial features in a region of an image. By limiting the processing device to performing detection of a limited number of predefined spatial features in a region of an image, the processing tasks performed by the monitoring device can be performed rapidly without requiring a substantial amount of computational power. Accordingly, the monitoring device of the first aspect is adapted to monitor an area in a power efficient manner.
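The shape-library comparison described above can be illustrated with a minimal sketch. The library contents, the binary-mask region representation, and the agreement-based score below are all illustrative assumptions; the disclosure does not specify a particular matching algorithm.

```python
# Sketch: matching a small image region against a library of predefined
# binary shape templates. Names and scoring rule are illustrative only.

SHAPE_LIBRARY = {
    # 3x3 binary masks standing in for reference shapes
    "horizontal_band": [[0, 0, 0],
                        [1, 1, 1],
                        [0, 0, 0]],
    "vertical_streak": [[0, 1, 0],
                        [0, 1, 0],
                        [0, 1, 0]],
}

def match_score(region, template):
    """Fraction of cells where the region agrees with the template."""
    cells = [(r, t) for row_r, row_t in zip(region, template)
             for r, t in zip(row_r, row_t)]
    return sum(1 for r, t in cells if r == t) / len(cells)

def detect_shape(region):
    """Return the best-matching library shape and its score."""
    best = max(SHAPE_LIBRARY, key=lambda name: match_score(region, SHAPE_LIBRARY[name]))
    return best, match_score(region, SHAPE_LIBRARY[best])

region = [[0, 0, 0],
          [1, 1, 1],
          [0, 0, 0]]
name, score = detect_shape(region)  # ("horizontal_band", 1.0)
```

A library restricted to a handful of small templates keeps the per-image workload tiny, which is consistent with the power-efficiency argument above.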
  • the processor is configured to determine one or more monitoring parameters based on an identity of the predefined shape identified, an orientation of the predefined shape identified, a size of the predefined shape identified, and a location of the predefined shape in the region of the image. For example, when monitoring a cliff face, the monitoring device may determine monitoring parameters relating to one or more strata of a cliff face, such as the size, location, shape, and identity of strata in the region of the image. When monitoring water staining patterns, for example, the monitoring device may identify monitoring parameters relating to the shape, size, identity, and location of a water stain in the region of the image.
  • one or more monitoring parameters may be identified to characterise features of the area in the region of interest in a manner which can be efficiently relayed to a server.
  • the processor is configured to determine one or more monitoring parameters based on a number of spatial features detected in the region of the image.
  • the monitoring device may be configured to identify a number of spatial features (e.g. strata, water stains, or any other object) in an image and determine a monitoring parameter representative of the number.
  • the monitoring parameter may be representative of a density of spatial features within the region of the image.
  • the processor is configured to detect a plurality of spatial features in the region of the image, and the processor is configured to determine one or more monitoring parameters based on a distance between two of the spatial features identified in the region of the image. For example, where the spatial features represent a plurality of strata, a monitoring parameter may represent a distance between two strata in the region of interest. Monitoring the distance between two spatial features of interest in a region of an image allows the monitoring device to monitor the area over an extended period of time. For example, the distance between two strata of a cliff face may be monitored in order to monitor coastal erosion over an extended period of time.
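Deriving a distance-based monitoring parameter from detected features is straightforward once boundary positions are known. The boundary rows below are illustrative stand-ins for the output of the feature-detection step.

```python
# Sketch: a monitoring parameter as the pixel distance between detected
# strata boundaries. Input row indices are hypothetical detection output.

def strata_separation(boundary_rows):
    """Given image rows of detected strata boundaries, return the
    distances (in pixels) between consecutive boundaries."""
    rows = sorted(boundary_rows)
    return [b - a for a, b in zip(rows, rows[1:])]

# e.g. boundaries detected at rows 120, 245 and 410 of the region
distances = strata_separation([120, 245, 410])  # [125, 165]
```

Tracking these small integers over successive images, rather than the images themselves, is what allows long-term erosion monitoring at negligible bandwidth.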
  • the communications module of the monitoring device transmits the one or more monitoring parameters to the server using a wireless network.
  • the wireless network may be a Narrow Band Internet of Things (NB-IoT) wireless network.
  • the NB-IoT network may have a bandwidth of no greater than 200 kHz.
  • the monitoring device may operate on an Internet of Things (IoT) basis using a relatively limited amount of bandwidth compared to a smartphone or other wireless communications devices.
  • the processor is configured to determine one or more monitoring parameters associated with the region of the image having a total size of no greater than 10 bytes.
  • the total amount of information used to characterise the region of the image is a fraction of the total image size obtained by the camera module of the monitoring device.
  • the images obtained by the camera module may be around 10 megabytes in size or similar.
  • the process of determining one or more monitoring parameters is not simply a case of image compression, but rather an application specific extraction of data (monitoring parameters) for the purpose of monitoring a characteristic of the area (i.e. the monitoring parameters) over a period of time.
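The sub-10-byte budget quoted above can be made concrete with a packed binary record. The field layout below (shape identity, position, size, orientation) is an assumption chosen to illustrate the size budget, not a format defined by the disclosure.

```python
# Sketch: packing one region's monitoring parameters into a compact
# binary record using Python's struct module. Field layout is assumed.
import struct

# B = unsigned byte, H = unsigned 16-bit integer (little-endian, no padding)
RECORD_FORMAT = "<BHHHB"  # shape id, x, y, size, orientation -> 8 bytes

def encode_parameters(shape_id, x, y, size, orientation_deg):
    # store orientation in 2-degree steps so 0-358 degrees fits in one byte
    return struct.pack(RECORD_FORMAT, shape_id, x, y, size, orientation_deg // 2)

record = encode_parameters(3, 1024, 512, 130, 90)
len(record)  # 8 bytes, versus ~10 MB for the raw image
```

The record is application-specific extraction, as the text emphasises: it preserves only the monitored characteristic, and nothing of the image is recoverable from it.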
  • the monitoring device further comprises an additional sensor module, the sensor module configured to generate a sensing parameter for monitoring the area or the surroundings of the monitoring device.
  • the additional sensor module is configured to generate a sensing parameter when the camera module obtains an image.
  • the sensor module may comprise a temperature sensing module, a wind sensing module, a light sensing module or any other type of sensor known in the art.
  • the sensing module may provide a sensing parameter which provides contextual information for the associated monitoring parameter(s) of the monitoring device.
  • the monitoring device may use the sensing parameters to decide whether or not to proceed with obtaining an image and associated monitoring parameters.
  • the monitoring device may be configured not to proceed with obtaining an image and associated monitoring parameter(s) if the sensed parameter (corresponding to a light level) is below a threshold value (i.e. if it is too dark to obtain a useful image).
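The light-level gating just described amounts to a simple threshold check before the power-hungry capture-and-process cycle. The threshold value and function names below are illustrative assumptions.

```python
# Sketch: using an auxiliary light-sensor reading to gate image capture.
# Threshold and units are assumptions for illustration.

LIGHT_THRESHOLD_LUX = 10.0  # below this it is too dark for a useful image

def should_capture(light_level_lux):
    """Skip the capture-and-process cycle when it is too dark."""
    return light_level_lux >= LIGHT_THRESHOLD_LUX

should_capture(250.0)  # True: daylight, proceed
should_capture(0.5)    # False: night, stay in the low power state
```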
  • the communication module transmits the sensing parameter associated with the image when transmitting the one or more monitoring parameters associated with the image.
  • the sensing parameter may be provided to the server in order to provide additional contextual information for the associated one or more monitoring parameters.
  • the communications module is configured to transmit a plurality of monitoring parameters associated with a plurality of images to the server in a single communication.
  • the monitoring device can efficiently transmit data to a server by combining monitoring parameters associated with a plurality of images into a single transmission.
  • the monitoring device may be configured to combine monitoring parameters associated with at least: 3, 5, 7 or 10 images into a single communication.
  • the communications module is configured to transmit the plurality of monitoring parameters to the server in a communication no greater than: 500 bytes, 400 bytes, or 300 bytes in size.
  • such a communication size is commensurate with, for example, transmitting data over an IoT wireless network using a low power transmission method.
  • the monitoring device can monitor an area (and transmit data associated with the area) for an extended period of time utilising, for example, a modestly sized battery.
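Batching several images' parameter records into one transmission, and checking the result against the 300-byte figure quoted above, can be sketched as follows. The 8-byte record layout and the count-byte framing are illustrative assumptions.

```python
# Sketch: combining parameter records from several images into one
# outbound communication within an assumed 300-byte budget.
import struct

MAX_PAYLOAD = 300  # bytes, the smallest figure given in the disclosure

def build_payload(records):
    """Prefix a batch of fixed-size records with a 1-byte count."""
    payload = struct.pack("<B", len(records)) + b"".join(records)
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("batch exceeds the transmission budget")
    return payload

# ten images' worth of 8-byte records -> 81-byte payload, well within budget
records = [struct.pack("<BHHHB", i, 100, 200, 50, 45) for i in range(10)]
payload = build_payload(records)
len(payload)  # 81
```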
  • the communications module may be configured to receive inbound communications from the server no greater than 500 bytes, 400 bytes, or 300 bytes in size. By limiting the communication size (i.e. the packet size) of communications (outbound and inbound), the computing resources required on the remote monitoring device (and the associated power consumption) may be reduced.
  • the buffer capacity of the monitoring device may be reduced.
  • the Random Access Memory (RAM) requirements for the communication module will be reduced when communication size is limited in accordance with the first aspect. This in turn reduces the power consumption of the monitoring device as well as the overall device cost and complexity.
  • the communication device is configured to transmit the monitoring parameters in a single packet.
  • prior to detecting the one or more spatial features, the processor is configured to normalise the image, wherein preferably the processor is configured to normalise at least one of a contrast, a brightness, or a hue of the image.
  • the processor may be additionally configured to perform other image processing functions, in particular to aid with the identification of spatial features or to aid with the determination of one or more monitoring parameters.
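One plausible reading of the normalisation step is a linear min-max rescale of brightness before feature detection; a real device might equally use histogram equalisation. The function below is an illustrative sketch, not the patented method.

```python
# Sketch: min-max brightness normalisation of a grayscale patch prior
# to feature detection. Purely illustrative.

def normalise(pixels, out_max=255):
    """Linearly rescale grayscale pixel values to span 0..out_max."""
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # flat patch: nothing to stretch
        return [[0 for _ in row] for row in pixels]
    return [[round((p - lo) * out_max / (hi - lo)) for p in row]
            for row in pixels]

# a dim, low-contrast patch is stretched to the full 0..255 range
normalise([[60, 70], [80, 90]])  # [[0, 85], [170, 255]]
```

Normalising first makes the subsequent shape comparison robust to day-to-day lighting changes between images of the same area.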
  • the monitoring device is configurable by a configuration device in order to identify the region in the image to be compared.
  • a user, when setting up the monitoring device in the desired location, can configure the monitoring device to perform the desired monitoring.
  • the configuration device, which may comprise a computer or smartphone for example, can be connected to the monitoring device and used to configure the area to be imaged and the region, or regions, of interest in the imaged area.
  • configuring the monitoring device to monitor the area using the configuration device also comprises specifying the comparison to be performed and the monitoring parameter to be determined based on the comparison.
  • the monitoring device is configured to repeatedly monitor the area.
  • the monitoring device is configured to monitor the area at least once every: 24 hours, 12 hours, 6 hours, 3 hours, 2 hours or 1 hour.
  • the monitoring device can be used to obtain a plurality of images of the area over a period of time.
  • the monitoring device may be configured to have a power consumption such that it can repeatedly monitor the area of interest for an extended period of time.
  • the monitoring parameters determined by the monitoring device provide a way of automatically monitoring a characteristic of an area (e.g. coastal erosion of cliff face) over an extended period of time without requiring physical access to the area of interest.
  • when the monitoring device is not used to monitor the area, the monitoring device is configured to be in a low power state wherein the camera module and the processor are not powered. As such, the monitoring device can be operated in a relatively low power state (relative to the power consumed when obtaining and processing an image) in order to prolong the period of time the monitoring device can monitor an area without requiring additional user intervention. For example, in some embodiments, the monitoring device is configured to consume no more than 8 µW, or more preferably no more than 7 µW or 6.7 µW, when in the low power state.
  • the power consumption of the monitoring device when obtaining an image and any associated processing may also be carefully designed in order to ensure that the monitoring device can be operational for an extended period of time.
  • the monitoring device may be configured to draw no more than 1.1 W, preferably no more than 0.75 W, when monitoring the area.
  • the monitoring device may be configured to monitor an area repeatedly such that the average power consumption of the monitoring device each day may be no greater than about: 60 µW, 50 µW or 40 µW.
  • the camera module and the processor are operational for no more than 60 s, preferably no more than 40 s, in order to obtain the image and determine the associated one or more monitoring parameters.
  • the time period the monitoring device may be fully operational (to record and process images) is a fraction of the time spent in the low power state such that the overall power consumption of the monitoring device is reduced.
  • the monitoring device may comprise a power source.
  • the monitoring device may be powered by a self-contained power source (i.e. the monitoring device comprises the self-contained power source) such as a battery or a renewable power source.
  • the monitoring device may be powered by a battery, or a renewable power source, or a combination of a battery and a renewable power source.
  • a renewable power source may comprise a solar panel or a wind turbine or the like.
  • the monitoring device has a low power consumption such that, for example, a battery of a comparable size to the monitoring device can provide sufficient power to operate the monitoring device for an extended period of time.
  • the battery may have a capacity of no greater than 11.1 Wh (e.g.
  • Such a battery may be sufficient to power a monitoring device according to the first aspect for at least 20 years when the monitoring device is configured to image an area once every hour and transmit monitoring parameters once every four hours (i.e. an average power consumption each day of about 50 µW) for example.
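The 20-year figure can be checked by simple arithmetic from the numbers quoted above (11.1 Wh capacity, ~50 µW average draw); the estimate ignores battery self-discharge and capacity fade, which the disclosure does not address.

```python
# Sketch: sanity-checking the battery-life claim from the stated figures.

BATTERY_WH = 11.1        # battery capacity, watt-hours
AVG_POWER_W = 50e-6      # average draw, 50 microwatts

hours = BATTERY_WH / AVG_POWER_W   # 222,000 hours
years = hours / (24 * 365.25)      # roughly 25 years, so > 20 years
```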
  • the monitoring device is configured to process a region of the image obtained.
  • the region of the image processed is smaller than the total area of the image. It will be appreciated that by only processing part of the image, power can be saved by avoiding processing unnecessary regions of an image. For example, where an image of a cliff face includes a region of sky and/or sea, these regions can be excluded as no information pertaining to the cliff face will be present in these regions.
  • the region of the image has an area no greater than: 50%, 40%, 30%, 20% or 10% of an area of the image.
  • the image obtained by the camera module is at least 5 megapixels, preferably at least 10 megapixels.
  • the images obtained by the camera module are data files which are significantly larger than the monitoring parameters determined by the processor (which may be around 2-3 bytes each for example). Even if the image files obtained by the camera module were to be compressed by the processor (which would require power), the resulting compressed image files would still be orders of magnitude larger than the monitoring parameters determined by the processor.
  • the camera module is configured to obtain an infra-red image of the area.
  • the spatial features of the image may represent hot and/or cold spots of the area of interest.
  • the monitoring parameters may represent a shape, orientation, location and the like of the hot/cold spots.
  • a plurality of monitoring devices may be provided, each of which communicates with the server.
  • a plurality of monitoring devices may be used to automatically monitor a relatively large expanse of area (such as a coastline) over an extended period of time for the purpose of tracking changes in geography which typically occur either unpredictably (such as cliff falls) or over several days/weeks/years which are not practical for a user to manually observe.
  • the area to be monitored may be at least 5 m away from the monitoring device. In some embodiments, the area to be monitored may be at least: 10 m, 15 m, 20 m, 30 m, 50 m, 100 m or 200 m from the monitoring device. As such, the monitoring device may be configured to monitor an area which is remote from the monitoring device.
  • the camera module may comprise a lens (e.g. a telephoto lens) configured to provide a focal length for the camera module which corresponds to the distance of the area to be monitored from the monitoring device.
  • a method of monitoring an area with a monitoring device comprises: obtaining an image of the area with a camera module of the monitoring device; storing the image of the area in a data store of the monitoring device; detecting one or more spatial features in a region of the image; determining one or more monitoring parameters based on the detected one or more spatial features for monitoring the area; and transmitting the one or more monitoring parameters to a server using a communications module of the monitoring device.
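The steps of the second-aspect method can be sketched as a single monitoring cycle. Every callable passed in below is a hypothetical stand-in for the hardware or processing step of the same name; only the control flow comes from the method as stated.

```python
# Sketch of one monitoring cycle: capture, store, detect, derive
# parameters, transmit. All callables are hypothetical placeholders.

def monitoring_cycle(capture, save, detect, derive, transmit):
    image = capture()          # obtain an image of the area (camera module)
    save(image)                # store the image (data store)
    features = detect(image)   # detect spatial features in the region
    params = derive(features)  # determine monitoring parameters
    transmit(params)           # send only the parameters, not the image
    return params

# Demonstration with trivial stand-ins:
log = []
params = monitoring_cycle(
    capture=lambda: "IMG",
    save=log.append,
    detect=lambda img: ["stratum_boundary"],
    derive=lambda feats: {"feature_count": len(feats)},
    transmit=log.append,
)
```

Note that only `params` leaves the device in this flow; the stored image never crosses the communications module.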
  • a monitoring device may be used to perform the method of the second aspect. It will be appreciated that the method of the second aspect may incorporate any of the optional features of the first aspect described above.
  • the method may further comprise: obtaining a reference image of the area monitored by the monitoring device; storing the reference image on the server; wherein when the server receives the one or more monitoring parameters, the server determines monitoring information for the area based on comparing the monitoring parameters to the reference image.
  • the server is configured to communicate with the communication module of the monitoring device to update one or more of: a frequency with which the area is monitored, the region of the image in which spatial features are to be detected; the spatial features to be detected in the region of the image.
  • the server may be used to remotely control various aspects of the operation of the monitoring device.
  • Fig. 1 is a block diagram of the monitoring device according to an embodiment of the disclosure;
  • Fig. 2 is a flow chart of a method according to an embodiment of the disclosure;
  • Fig. 3 is a schematic diagram of an image obtained by the monitoring device; and
  • Fig. 4 is an example of a library of predefined shapes.
  • a monitoring device 1 is provided.
  • a schematic diagram of the monitoring device 1 is shown in Fig. 1.
  • the monitoring device 1 comprises a camera module 10, a processor 20, a data store 30, a communications module 40, and one or more additional sensing modules 50.
  • the various components of the monitoring device 1 may be contained within a suitable housing (not shown).
  • the housing may be configured to protect the monitoring device 1 from the elements and also from unwanted interference.
  • the housing may include an opening to allow the camera module to image the area of interest.
  • the housing may also include an attachment portion to allow the monitoring device 1 to be fixed in place.
  • the monitoring device 1 may be fixed in a predetermined location to allow the camera module to image the area of interest over an extended period of time. For example, when monitoring a cliff face, the monitoring device 1 may be attached to a pole, or a wall, and orientated to image the cliff face of interest.
  • the camera module 10 is configured to obtain an image of an area to be monitored. For example, where the monitoring device is used to monitor coastal erosion of a cliff face, the monitoring device 1 may be positioned such that the camera module 10 images the cliff face area of interest.
  • the camera module 10 may comprise any suitable digital camera and associated circuitry suitable for obtaining a (digital) image of the area of interest.
  • the camera module may image the area of interest and provide a digital representation of the image in any suitable data format.
  • the camera module may be configured to provide a data stream representative of the image captured to the processor 20 via a serial data connection (e.g. I2C, Universal Asynchronous Receiver-Transmitter (UART), or Serial Peripheral Interface (SPI)).
  • the camera module 10 may comprise a 5 megapixel, or preferably a 10 megapixel camera.
  • the raw image data obtained by the camera module (prior to any image processing or data compression) for a 10 megapixel camera module may be about 20 Mbytes per image.
  • different camera modules 10 having different resolutions may be used.
  • the camera module 10 may be configured to obtain visible light images of an area (i.e. conventional photographs). In other embodiments, the camera module 10 may be configured to detect other wavelengths of light (i.e. light having wavelengths outside of the visible light spectrum). For example, the camera module 10 may be configured to obtain an infra-red image of the area, or an ultraviolet image of the area. By obtaining an infra-red image of the area, the camera module 10 may be configured to detect areas of an image which are relatively hot or relatively cold to their surroundings. So called, ‘hot spots’ or ‘cold spots’ can be analysed as spatial features using the image processing techniques described below.
  • the camera module 10 may comprise one or more lenses to increase the focal length of the camera module 10.
  • the camera module 10 may be used to monitor a variety of different areas at a range of different distances. That is to say, in some embodiments, the remote monitoring device 1 may be located at least: 10 m, 20 m, 30 m, 50 m, or 100 m from the area of interest.
  • the monitoring device 1 is configured to repeatedly monitor the area. Accordingly, over a period of time the camera module repeatedly obtains images of the area. For example, the monitoring device may be configured to monitor the area at least once every: 24 hours, 12 hours, 6 hours, 3 hours, 2 hours or 1 hour. Each time the monitoring device 1 monitors the area, the camera module 10 obtains an image of the area, which is subsequently processed by the processor 20.
  • the data store 30 is configured to store the image of the area.
  • the data store 30 may receive the image of the area from the processor 20 as shown in Fig. 1, or the camera module 10 may be configured to communicate directly with the data store 30 to store the image of the area.
  • the data store 30 may be provided by any suitable form of computer memory.
  • the data store 30 is a form of non-volatile memory such that a power source is not required to maintain the data stored within the data store 30.
  • the data store 30 may be provided by a Secure Digital (SD) card or microSD card and the like.
  • the data store 30 may be configured to store a plurality of the images obtained by the camera module 10.
  • the data store 30 may be sized suitably to store all images obtained by the camera module 10.
  • the processor 20 may compress the images prior to storage.
  • the data store 30 may be sized to store only some of the images obtained once they have been processed.
  • the data store 30 may also be used to store various instructions and data used by the processor 20 in order to process the images and the like.
  • the processor 20 is configured to perform various image processing tasks on the image obtained by the camera module 10.
  • the processor 20 is configured to analyse a region of the image.
  • the region of the image to be analysed may form only part of the image. That is to say, the processor 20 may not analyse all of the image obtained, but rather only regions of the image that have been predetermined as being of interest.
  • the processor 20 may be configured to analyse a region of the image which has an area no greater than 20% of an area of the image.
  • a user may specify that the region dimensions a and/or b may be equal to 100, 200, 500, or 1000 pixels, for example. That is to say, a user could define a rectangular 200 × 1,000 pixel region of an image which includes features of interest. For example, such a region may comprise one or more strata of a cliff face, which can then be analysed by the processor 20. It will be appreciated that the proposed rectangular region of interest is only one possible region shape and that a user may specify any particular pixels, or areas of pixels, within the images to be analysed.
  • the region, or regions of interest may be predefined by a user when configuring the monitoring device 1 for use. Configuration of the monitoring device 1 is discussed in more detail below.
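Extracting a user-configured rectangular region before analysis is a simple slice of the pixel grid; this is what confines processing to the configured fraction of the image. Coordinates and names below are illustrative.

```python
# Sketch: cropping a configured rectangular region of interest out of
# the full image so only those pixels are analysed.

def crop_region(pixels, top, left, height, width):
    """Return the height x width sub-grid starting at (top, left)."""
    return [row[left:left + width] for row in pixels[top:top + height]]

# toy 4x6 "image" whose pixel value encodes its (row, column) position
image = [[r * 10 + c for c in range(6)] for r in range(4)]
crop_region(image, 1, 2, 2, 3)  # [[12, 13, 14], [22, 23, 24]]
```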
  • upon receipt of an image, the processor 20 is configured to detect one or more spatial features in a region of the image.
  • the spatial features may be detected by the processor using any suitable spatial feature detection algorithm.
  • the most suitable spatial feature detection algorithm will depend on the desired application and the nature of the spatial feature to be detected. Spatial feature detection may, in some embodiments be provided by a suitably trained machine learning algorithm, or any other suitable image processing algorithm known to the skilled person.
  • the processor 20 may be configured to detect one or more strata in a region of the image.
  • Strata are generally horizontally extending sections of a cliff face, each having a broadly similar brightness and/or colour.
  • different strata can be distinguished by a relatively abrupt change in brightness and/or colour along a line extending in the vertical direction of the image.
  • Such abrupt changes can be identified as lines between the different strata which run in a left to right direction across an image of strata.
  • the processor 20 may be configured to detect one or more lines in a region of an image representing the boundary between two different strata.
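The boundary-detection idea described above — an abrupt brightness change along a vertical line of pixels — can be sketched as below. This is a minimal illustration under the simplifying assumption that the region of interest is a single column of grayscale values; the threshold value is an assumption.

```python
# Illustrative strata-boundary detection along one vertical line of pixels.
# A boundary is flagged wherever brightness changes abruptly between two
# vertically adjacent pixels.

def detect_boundaries(column, threshold=30):
    """Row indices where brightness changes abruptly between adjacent pixels,
    i.e. candidate boundary lines between two strata."""
    return [i for i in range(1, len(column))
            if abs(column[i] - column[i - 1]) > threshold]

def strata_count(column, threshold=30):
    """A simple monitoring parameter: the number of strata crossed by the line
    (one more than the number of boundaries)."""
    return len(detect_boundaries(column, threshold)) + 1
```

A column reading 200, 200, ..., 120, ..., 60 would yield two boundary rows and a strata count of three.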
  • Fig. 3 shows a schematic diagram of a cliff face comprising a plurality of strata 80. The black lines shown in Fig. 3 are indicative of the boundaries between strata 80 that the processor 20 may identify as spatial features, were such a portion of the image to be analysed by the processor.
  • Fig. 3 also shows a plurality of rectangular regions 82 that represent regions of the image that are analysed by the processor 20 for the purpose of detecting spatial features.
  • Detailed view A in Fig. 3 shows a detailed view of the strata 80 and rectangular regions 82 which are analysed.
  • the processor 20 may be configured to detect a water staining pattern.
  • a water stain on a cliff face may generally appear as an area of a cliff face which is a different colour to the surrounding cliff face where water staining is not present. As such, a water stain can be distinguished by a region of a different colour to the surrounding area.
  • Water stains may extend across different strata and so the feature detection algorithm can be configured to account for different possible shapes of water stains, including cases where water stains extend across two or more strata for example.
  • the boundary between a water stained region and a non-water stained region of the cliff face may be determined by the processor 20 in order to detect a water stain spatial feature in a similar manner to the process for detecting a strata spatial feature.
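A water stain, as described above, can be treated as a region whose colour departs from that of the surrounding rock. The sketch below is a hedged illustration, assuming RGB pixel tuples and a known background colour; the colour-difference measure and threshold are assumptions, not the disclosed method.

```python
# Minimal sketch of water-stain detection: flag pixels whose colour differs
# markedly from the surrounding (non-stained) cliff colour.

def stain_pixels(image, background, threshold=60):
    """Return (x, y) coordinates of candidate water-stained pixels."""
    coords = []
    for y, row in enumerate(image):
        for x, pix in enumerate(row):
            # Manhattan distance in RGB space as a crude colour difference
            if sum(abs(p - b) for p, b in zip(pix, background)) > threshold:
                coords.append((x, y))
    return coords
```

Because the returned coordinates are unconstrained, the same detection works whether the stain sits within one stratum or extends across several.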
  • the processor 20 may analyse a plurality of regions in each image obtained by the camera module.
  • the regions of interest may be specified by a user when the monitoring device is calibrated.
  • a monitoring parameter provides a value (or similar small amount of data) which is representative of some characteristic of the region of interest.
  • the monitoring parameters are very small amounts of data (relative to the data size of the image, or even the data size of the region of interest) which can be efficiently transmitted by the communications module 40.
  • the monitoring device may provide a count of the number of spatial features identified (e.g. strata) in each region of interest.
  • the monitoring device can detect changes in the area over an extended period of time.
  • the monitoring device may detect a change in the number of strata present in the region(s) of interest which may indicate that a cliff fall has occurred.
  • the processor may detect a plurality of spatial features (e.g. strata 80) in a region 82 of the image.
  • the processor is then configured to determine one or more monitoring parameters based on a distance between two of the spatial features identified in the region of the image. For example, the processor may determine a pixel distance between two strata in the region of interest. The pixel distance may be measured as the distance between two strata boundaries along a line spanning the region of interest. In the example of Fig. 3, a vertical line (relative to the imaged area) is provided through the centre of the region of interest.
  • the processor 20 is configured to determine the pixel distance between the lower boundary 83 of the uppermost strata in the region of interest 82 and the upper boundary 84 of the lowermost strata in the region of interest 82.
  • the monitoring device determines a pixel distance for the spacing of various spatial features
  • the pixel distance may be calibrated by a user upon set up of the monitoring device.
  • a user may calibrate the monitoring device to convert the pixel distance into an actual distance based on a measurement of the distances being observed in the image.
  • the server 60 may be configured to convert the pixel distances into actual distances based on a scale provided by a user.
  • the measurement scale may be determined, for example, by including an object of known length (e.g. a measurement stick or the like) in the area to be monitored when calibrating the monitoring device at a known distance from the monitoring device.
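The calibration step above reduces to a simple scale factor. The sketch below is illustrative only; the stick length and pixel spans are made-up values, and the function names are hypothetical.

```python
# Hedged sketch of pixel-to-distance calibration: an object of known length
# (e.g. a measurement stick) placed in the scene during set-up gives a
# pixels-per-metre scale for converting measured pixel distances.

def pixels_per_metre(reference_pixels, reference_metres):
    """Scale factor derived from an object of known length in the image."""
    return reference_pixels / reference_metres

def pixels_to_metres(pixel_distance, scale):
    """Convert a measured pixel distance into an actual distance."""
    return pixel_distance / scale
```

For example, a 1 m stick spanning 250 pixels gives a scale of 250 px/m, so a 75-pixel strata spacing corresponds to 0.3 m.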
  • the processor is configured to compare one or more spatial features detected in a region of the image against a library of predefined shapes stored by the processor.
  • a schematic example of a library of predefined shapes is shown in Fig. 4.
  • Each shape in the library of Fig. 4 has a unique identification code which can be used as a monitoring parameter.
  • the processor 20 may be configured to find a best fit of the spatial feature to a predefined shape of the library. The processor may then use the associated unique identification code of the predefined shape as a monitoring parameter in order to indicate the general shape of the spatial feature in a data efficient manner (i.e. without sending actual image data of the spatial feature).
  • the processor 20 may be configured to attempt to fit the predefined shapes to the spatial feature by performing various transformations to the predefined shapes in order to find the closest fit. Thus, based on the transformations applied to the closest matching predefined shape, the processor 20 may determine monitoring parameters which reflect one or more of: an orientation of the predefined shape, a size of the predefined shape identified, and a location of the predefined shape (e.g. a location of the centre of the predefined shape) in the region of the image. It will be appreciated that the processor 20 may be configured to determine information concerning the general shape (i.e. the unique predefined shape identifier), size, orientation, and location of a spatial feature in a region of interest using up to four monitoring parameters. Typically, each monitoring parameter expressing one of the above characteristics utilises about 2-3 bytes of data. Thus, a spatial feature of interest in a region 82 of the area can be monitored using e.g. no more than 10 bytes of data per image obtained.
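One way the library matching above could work is sketched below. Everything here is an assumption for illustration: the library contents, the representation of shapes as point sets normalised to a unit bounding box, and the nearest-point scoring are simplifications of whatever fitting the disclosure envisages.

```python
import math

# Hypothetical shape library: each entry maps a one-byte identification code
# to outline points normalised to a unit bounding box.
LIBRARY = {
    1: [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],   # rectangle
    2: [(0.5, 0.0), (1.0, 1.0), (0.0, 1.0)],               # triangle
}

def _normalise(points):
    """Translate to the origin and scale to a unit bounding box."""
    xs = [p[0] for p in points]; ys = [p[1] for p in points]
    w = max(xs) - min(xs) or 1.0
    h = max(ys) - min(ys) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in points]

def best_fit(feature_points):
    """Return (shape id, width, height, centre) of the closest library shape,
    i.e. the handful of small monitoring parameters to be transmitted."""
    norm = _normalise(feature_points)
    def score(shape):
        # mean distance from each feature point to its nearest shape point
        return sum(min(math.dist(p, q) for q in shape) for p in norm) / len(norm)
    shape_id = min(LIBRARY, key=lambda k: score(LIBRARY[k]))
    xs = [p[0] for p in feature_points]; ys = [p[1] for p in feature_points]
    centre = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    return shape_id, max(xs) - min(xs), max(ys) - min(ys), centre
```

The returned identifier, size, and centre are exactly the kind of few-byte quantities that stand in for raw image data in the approach described above.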
  • Fig. 4 shows a generic library of predefined shapes.
  • a user may populate a library with possible shapes which may be specifically tailored to the spatial features of interest.
  • the library approach to feature recognition, and communication of the resulting monitoring parameters characterising the fitted shape provides a data efficient and flexible approach to the detection and monitoring of features of interest in an area.
  • the processor 20 may be configured to normalise the image to be analysed.
  • the processor 20 may be configured to identify one or more reference areas 90a, 90b in the image.
  • the reference areas 90a, 90b may be predefined (i.e. user-defined) regions of a reference image.
  • the reference image (not shown) may be, for example, the first image obtained by the monitoring device 1.
  • the reference areas of the reference image may be compared to the reference area 90a, 90b of the image to be analysed.
  • the processor 20 may orientate itself in order to allow the regions of interest to be accurately located. That is to say, the regions of interest may be defined relative to the reference areas 90a, 90b.
  • the monitoring device 1 may provide some limited orientation steps in order to account for small changes in the position of the monitoring device 1 .
  • the monitoring device 1 may utilise the reference areas 90a, 90b to normalise the image relative to the reference image.
  • the processor may normalise one or more of the brightness, contrast, pixel values or hue of the obtained image such that the reference areas 90a, 90b have brightness, contrast, pixel values or hue which correspond to those of the reference areas of the reference image.
  • two reference areas 90a, 90b are indicated.
  • at least two reference areas may be identified.
  • Reference areas may be selected as locations away from features of interest.
  • reference areas may be selected to include a prominent feature within an image in order to make it easier to identify the reference image.
  • a reference area 90a, 90b may be selected to include a region of high contrast which is expected to be largely time-invariant.
  • a plurality of reference areas may be identified in some embodiments in order to provide a redundancy in the event that one of the reference areas 90a, 90b substantially changes over time.
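The brightness normalisation against reference areas described above can be sketched roughly as follows. This assumes grayscale images stored as 2D lists; the (x, y, width, height) area format and the single gain-based correction are illustrative simplifications.

```python
# Rough sketch of normalising an obtained image against user-defined
# reference areas so that their brightness matches the reference image.

def area_mean(image, area):
    """Mean brightness of a rectangular (x, y, width, height) area."""
    x, y, w, h = area
    values = [image[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(values) / len(values)

def normalise(image, ref_areas, ref_means):
    """Scale brightness so each reference area's mean matches the mean
    recorded for the same area in the reference image."""
    gains = [ref_means[i] / area_mean(image, a) for i, a in enumerate(ref_areas)]
    gain = sum(gains) / len(gains)          # average correction over all areas
    return [[min(255, round(p * gain)) for p in row] for row in image]
```

Averaging the gain over several reference areas reflects the redundancy point above: one changed reference area does not dominate the correction.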
  • the processor 20 may store the monitoring parameters in the data store 30.
  • the data store 30 may provide an offline store of both image data and monitoring parameters associated with the images. Such an offline store may be accessed by a user at the end of the monitoring period when a user collects the device. Of course, the monitoring period may be intended to run for many years, with many monitoring devices generating data. For projects of such a scale, it may not be feasible to routinely collect data held on individual monitoring devices 1.
  • monitoring device 1 also includes a communications module 40 configured to transmit the monitoring parameters to a server 60 remote from the monitoring device 1 .
  • a server 60 may collect the data from one or more monitoring devices 1 as the data is generated by the monitoring devices. In some embodiments, the data may be collected by the server in real-time.
  • the communications module 40 is configured to transmit monitoring parameters associated with each image to the server 60.
  • the communications module 40 may receive the monitoring parameters from the processor 20 (e.g. as shown in Fig. 1) or in some embodiments the communications module 40 may access the data store 30 in order to obtain the monitoring parameters (and any other associated data such as image identifiers) to be transmitted.
  • the communications module 40 transmits the monitoring parameters over a wireless network.
  • the communication module 40 transmits the monitoring parameters over a Low Power Wireless Personal Area Network.
  • the network may be a Narrow Band Internet of Things wireless network.
  • the server 60 is located remotely from the monitoring device 1.
  • the server 60 may be connected to one or more receivers of the wireless network, for example by an internet connection.
  • the server 60 may be any suitable computing device for receiving the monitoring parameters from the monitoring device 1 . It will be appreciated that the computing resources available to the server will be significantly greater than those available to the monitoring device, both from a computational power perspective and an energy consumption perspective.
  • the monitoring parameters can be further analysed by the server 60 in near-real time for a variety of applications. Due to the limited functionality and power usage requirements of the monitoring device, it may not be possible to perform such tasks locally on the monitoring device.
  • the individual monitoring parameters to be transmitted by the communications device may have a size of around 2 or 3 bytes.
  • the total size of all the monitoring parameters associated with the region of the image may be no greater than 10 bytes (or around 5 monitoring parameters for example).
  • the communication module 40 may be configured to transmit a plurality of monitoring parameters associated with a plurality of images to the server in a single communication. For example, where the monitoring device 1 obtains an image every hour, the monitoring device 1 may transmit the monitoring parameters generated over the previous 24 hours once per day.
  • the communication frequency of the monitoring device may be adjusted based on the rate at which monitoring parameters are acquired and the desired limit on communication size.
  • the communications module is configured to transmit the plurality of monitoring parameters to the server in a communication no greater than 500 bytes, or preferably 300 bytes in size.
  • the communications module is configured to send all data to be transmitted in a single packet, rather than transmitting data over multiple packets.
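A single-packet payload of the kind described above can be sketched with fixed-size packing. The field layout (hour, strata count, pixel distance) and the 4-byte reading size are assumptions chosen to illustrate how 24 hourly readings fit comfortably inside the 300-byte budget.

```python
import struct

# Illustrative daily payload: each reading is (hour, strata_count,
# pixel_distance) packed big-endian into 4 bytes (B, B, H).

def pack_readings(readings):
    """Pack a day's readings into one payload for a single transmission."""
    payload = b"".join(struct.pack(">BBH", h, n, d) for h, n, d in readings)
    if len(payload) > 300:
        raise ValueError("payload exceeds the single-packet budget")
    return payload
```

With this layout, 24 hourly readings occupy 96 bytes, so a whole day of monitoring parameters fits in one sub-300-byte packet.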
  • the monitoring device 1 may comprise an additional sensor module 50.
  • the additional sensor module 50 may be configured to generate a sensing parameter for monitoring the area or the surroundings of the monitoring device 1 .
  • the additional sensor module 50 may be a light sensor, a temperature sensor, a wind sensor, an accelerometer, a moisture/humidity sensor or any other suitable sensor.
  • the additional sensor module 50 may be configured to generate a sensing parameter when the camera module obtains an image.
  • the additional sensor module 50 may include its own processor and data store such that it is configured to obtain a sensing parameter in a manner analogous to the operation of the camera module 10, processor 20 and data store 30.
  • the processing functionality and data storage functionality for the additional sensor module 50 may be provided by the processor 20 and data store 30 respectively of the monitoring device 1.
  • the additional sensor module 50 may be configured to provide additional contextual information regarding the surroundings of the monitoring device 1 .
  • the communication module 40 may transmit the sensing parameter associated with the image when transmitting the monitoring parameter associated with the image.
  • the monitoring device 1 is used to repeatedly monitor an area, for example once an hour. It will be appreciated that the process of obtaining an image and processing the image to obtain the one or more monitoring parameters is a relatively quick task for modern computing equipment. For example, an image can be acquired and processed in a time period no greater than 60 seconds. As such, it will be appreciated that the camera module 10, processor 20, and data store 30 may be idle for a significant period of time. Thus, when the monitoring device 1 is not used to monitor the area, the monitoring device 1 is configured to be in a low power state. In some embodiments, the low power state may involve the camera module 10 and the processor 20 not being powered.
  • the communications module 40 may draw a relatively low amount of power in the low power state in order to be open to receiving any incoming communications, and also to trigger a wake up process for the processor 20 and camera module 10 at the appropriate time interval.
  • in the low power state, the monitoring device 1 may be configured to draw no more than 7 µW or 6.7 µW. Such power may be consumed, for example, by the communications module 40.
  • while the monitoring device 1 may be configured to operate in a low power state the majority of the time in order to conserve energy, the power consumption of the monitoring device 1 when monitoring an area (i.e. when the monitoring device is used to obtain and process an image) may also be limited in order to prolong the period the monitoring device can be operated without requiring manual user input.
  • the monitoring device may be configured to draw no more than 1.1 W, preferably no more than 0.75 W, over the duration of monitoring the area (including transmitting the monitoring parameters to the server).
  • the process of waking the monitoring device 1 from a low power state, obtaining an image, processing the image, and putting the monitoring device into a lower power state takes less than 60 seconds, preferably less than 40 seconds.
  • the average power consumption of the monitoring device over the course of a day may be no greater than 60 µW, or more preferably no greater than about 50 µW or 40 µW. As such, on average the monitoring device 1, when monitoring an area, may consume no greater than about 1.44 mWh per day.
  • the power consumption of the monitoring device 1 is of particular importance where the monitoring device draws power from a battery (not shown). It will be appreciated that in order for the monitoring device 1 to operate for an extended period of time, either a very large battery must be provided or the total energy consumption of the monitoring device must be kept to a minimum.
  • a smartphone battery typically has a capacity of about 11 Wh which typically is sufficient to power a smartphone for around 1 day of usage.
  • an 11 Wh battery would be sufficient to power a monitoring device 1 in a low power state consuming no more than 1.44 mWh per day for at least 20 years.
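The battery-life figure above follows directly from the document's own numbers (an 11 Wh smartphone-class battery and no more than 1.44 mWh consumed per day):

```python
# Worked check of the battery-life claim using the stated figures.

BATTERY_MWH = 11 * 1000          # ~11 Wh smartphone-class battery, in mWh
DAILY_MWH = 1.44                 # stated upper bound on daily consumption

days = BATTERY_MWH / DAILY_MWH   # ~7,639 days of operation
years = days / 365               # ~20.9 years, i.e. "at least 20 years"
```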
  • the monitoring device is specifically adapted to long-term monitoring of an area without the need for constant user intervention.
  • the monitoring device 1 may be installed in a desired location and configured using a configuration device (not shown).
  • a configuration device may be used to set up the monitoring device and ensure that the camera module is orientated correctly.
  • the monitoring device 1 may include one or more output ports (not shown) to output an obtained image to a configuration device such as a smart phone or similar portable computing device such as a laptop.
  • Various output ports or similar communication technologies are known to the skilled person for communicating between the monitoring device and a local (i.e. within a few meters) configuration device.
  • the data store 30 (e.g. an SD card).
  • the configuration device may specify one or more regions in the images obtained to be analysed by the monitoring device.
  • the monitoring device 1 can be easily customised by a user installing the monitoring device to monitor a range of different areas.
  • Configuring the monitoring device 1 may also comprise specifying the spatial features to be detected and the monitoring parameter(s) subsequently to be determined based on them.
  • the configuration device and/or the monitoring device 1 may suggest regions of interest to be monitored to a user based on the detection of spatial features within a calibration image.
  • the processor 20 of the monitoring device 1 may analyse a calibration image obtained and identify a plurality of spatial features (e.g. the lines representative of strata boundaries in Fig. 3). Based on this, a user may mark points on strata of interest to be monitored and the processor 20 may generate appropriate regions of interest based on the user selected points. Alternatively, the regions of interest may be specified by a user using the configuration device.
  • a method 100 of monitoring an area with a monitoring device 1 may comprise a step 101 of obtaining an image of the area with a camera module of the monitoring device.
  • in step 102 of the method 100, the image of the area is stored in a data store 30 of the monitoring device 1.
  • in step 103, the processor 20 detects one or more spatial features in a region of the image.
  • the image processing performed by the processor 20 to detect the one or more spatial features is discussed above.
  • in step 104, the processor 20 determines one or more monitoring parameters based on the detected one or more spatial features for monitoring the area.
  • the detection process performed by the processor 20 is discussed in more detail above.
  • the communications module 40 transmits the one or more monitoring parameters to a server 60.
  • the monitoring parameters may be stored by the server 60 for further analysis.
  • the server may be configured to obtain a reference image of the area being monitored.
  • the reference image may be the first image obtained by the monitoring device 1 when installed by a user, which a user manually transmits to the server using a configuration device (e.g. a smartphone).
  • the server 60 may store the reference image on the server 60 and use the reference image to interpret the monitoring parameters. For example, when the server receives the one or more monitoring parameters, the server determines monitoring information for the area based on comparing the monitoring parameters to the reference image.
  • the server 60 may also be configured to communicate with the communication module 40 of the monitoring device 1 to update one or more of: a frequency with which the area is monitored, the region of the image in which spatial features are to be detected; and the spatial features to be detected in the region of the image.
  • the server 60 may be used to manage the power consumption of the monitoring device by changing the frequency of the monitoring.
  • the server 60 may also be used to change the regions monitored over time in response to changes in the area being monitored.
  • the server 60 may be configured to prompt additional monitoring processes (i.e. obtaining and processing more images outside of the scheduled image acquisition) in response to a detected change in a monitoring parameter (e.g. due to a cliff fall or similar event).
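A server-side trigger of the kind described above might look like the following sketch. The baseline comparison and the use of a strata count as the watched parameter are illustrative assumptions, not the disclosed logic.

```python
# Hypothetical server-side check: compare the latest monitoring parameter
# (here a strata count) against a baseline of recent values and flag when
# extra, unscheduled monitoring should be requested (e.g. after a cliff fall).

def needs_extra_monitoring(history, latest, tolerance=0):
    """True when the latest value departs from the baseline of recent values."""
    if not history:
        return False
    baseline = round(sum(history) / len(history))
    return abs(latest - baseline) > tolerance
```

A drop from a steady count of five strata to four, say, would return True and could prompt the server to request additional images from the device.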
  • a single monitoring device 1 has been described which communicates with a server 60.
  • a plurality of monitoring devices 1 may be provided, each of which communicates with server 60.
  • a plurality of monitoring devices 1 may be used to automatically monitor a relatively large expanse of area (such as a coastline) over an extended period of time for the purpose of tracking changes in geography which typically occur either unpredictably (such as cliff falls) or over several days/weeks/years which are not practical for a user to manually observe.


Abstract

A monitoring device for an area is provided. The monitoring device comprises a camera module configured to obtain an image of an area, a data store configured to store the image of the area, a processor, and a communications module. The processor is configured to detect one or more spatial features in a region of the image, and to determine one or more monitoring parameters based on the detected one or more spatial features for monitoring the area. The communications module is configured to transmit the monitoring parameter to a server remote from the monitoring device.

Description

Monitoring device and method of monitoring an area
Field of the disclosure
The present disclosure relates to remote sensors and remote sensing devices. In particular, the present disclosure relates to remote sensors which communicate with a server, for example over a wireless network.
Background
Electronic sensors can be used for a wide range of monitoring applications. For example, electronic sensors such as temperature sensors, accelerometers, and light sensors can be used to monitor the conditions in a local area surrounding the sensor.
In order to perform continuous monitoring of the local conditions surrounding such an electronic sensor, the electronic sensors typically require connection to some form of power source as well as processor in order to interpret the data generated by the electronic sensor.
For some monitoring applications, it can be challenging to access the area which is desired to be monitored. In some applications, the desired area to be monitored may be dangerous or inconvenient to access, or the desired area to be monitored may not have any suitable locations to place an electronic sensor and associated components. For particularly remote areas, it will be appreciated that it may not be technically or economically feasible to install a complex sensor arrangement. For example, remote areas may not have access to a power supply or communication lines.
As one example, it is desirable to monitor coastal erosion, particularly coastal erosion of cliff faces. Coastal erosion is a gradual process which takes place over a number of days, if not weeks, months, or even years. As most coastal erosion sites are located in relatively remote locations, it is extremely labour intensive to monitor such sites manually. Furthermore, coastal erosion sites may be relatively difficult to access due to the unstable nature of the eroding coastline. Such sites may not be well suited to the installation of sensing devices proximal to the site of coastal erosion. Furthermore, coastal erosion sites do not typically have access to a power supply or communication lines. Against this background, the present disclosure aims to provide an improved, or at least commercially relevant, alternative monitoring device.
Summary
According to a first aspect of the disclosure, a monitoring device for an area is provided. The monitoring device comprises: a camera module configured to obtain an image of an area, a data store configured to store the image of the area, a processor and a communications module. The processor is configured to detect one or more spatial features in a region of the image, and to determine one or more monitoring parameters based on the detected one or more spatial features for monitoring the area. The communications module is configured to transmit the monitoring parameter to a server remote from the monitoring device.
Accordingly the monitoring device can monitor an area from a location which is remote from the area. Such remote monitoring is particularly advantageous where a monitoring device cannot be easily located close to the area of interest. For example, where a cliff face is to be monitored it may not be possible to place sensors physically on the cliff face, as the cliff face may not be structurally sound. It also may not be possible to safely locate sensors close to the cliff face (e.g. at the top or bottom of the cliff face due to the danger of cliff face collapse). Accordingly, the monitoring device of the first aspect provides a device for monitoring an area (such as a cliff face) wherein the monitoring device can be located in a location which is remote from the area to be monitored, and can thus be safely accessed.
It will be appreciated that in many applications, such as the monitoring of a cliff face, the proposed location of the monitoring device will be distant from utility services such as electrical power and internet connections. In order to be able to monitor an area over a period of time (which could be several days, weeks, or even years) without requiring intervention, the power and bandwidth consumption of the monitoring device must be carefully controlled. Accordingly, it is not feasible from a power consumption or bandwidth perspective for the monitoring device to simply transmit entire images, or even portions of images, to a remote server. Rather, the monitoring device of the first aspect determines one or more monitoring parameters which are representative of spatial features in an image of the area being monitored. Such monitoring parameters can be transmitted to a server using significantly less bandwidth than a conventional image, thereby allowing the monitoring device to monitor an area over an extended period of time without requiring any external input.
In some embodiments, the processor is configured to detect one or more spatial features in a region of the image by comparing a region of the image against a library of predefined shapes stored by the processor and identifying one or more predefined shapes in the region of the image. For example, the library of predefined shapes may comprise predefined, or reference, shapes for objects or features of interest to be monitored in the area of interest. When monitoring an area of a cliff face, for example, the library of predefined shapes may comprise reference shapes for strata in a cliff face. As such, the processor may be configured to detect one or more strata of a cliff face in a region of an image. In some embodiments, when monitoring an area of a cliff face, the library of predefined shapes may comprise reference shapes for water stains on a cliff face. Water stain shapes, or patterns, can be used to monitor erosion of a cliff face. Accordingly, the monitoring device can be used to detect a variety of different spatial features in a region of an image. By limiting the processing device to performing detection of a limited number of predefined spatial features in a region of an image, the processing tasks performed by the monitoring device can be performed rapidly without requiring a substantial amount of computational power. Accordingly, the monitoring device of the first aspect is adapted to monitoring an area in a power efficient manner.
In some embodiments, the processor is configured to determine one or more monitoring parameters based on an identity of the predefined shape identified, an orientation of the predefined shape identified, a size of the predefined shape identified, and a location of the predefined shape in the region of the image. For example, when monitoring a cliff face the monitoring device may determine monitoring parameters relating to one or more strata of a cliff face, such as the size, location, shape, and identity of strata in the region of the image. When monitoring water staining patterns, for example, the monitoring device may identify monitoring parameters relating to the shape, size, identity, and location of a water stain in the region of the image. As such, one or more monitoring parameters may be identified to characterise features of the area in the region of interest in a manner which can be efficiently relayed to a server. In some embodiments, the processor is configured to determine one or more monitoring parameters based on a number of spatial features detected in the region of the image. For example, the monitoring device may be configured to identify a number of spatial features (e.g. strata, water stains, or any other object) in an image and determine a monitoring parameter representative of the number. As such, the monitoring parameter may be representative of a density of spatial features within the region of the image.
In some embodiments, the processor is configured to detect a plurality of spatial features in the region of the image, and the processor is configured to determine one or more monitoring parameters based on a distance between two of the spatial features identified in the region of the image. For example, where the spatial features represent a plurality of strata, a monitoring parameter may represent a distance between two strata in the region of interest. Monitoring the distance between two spatial features of interest in a region of an image allows the monitoring device to monitor the area over an extended period of time. For example, the distance between two strata of a cliff face may be monitored in order to monitor coastal erosion over an extended period of time.
In some embodiments, the communications module of the monitoring device transmits the one or more monitoring parameters to the server using a wireless network. For example, the wireless network may be a Narrow Band Internet of Things (NB-IoT) wireless network. In some embodiments, the NB-IoT network may have a bandwidth of no greater than 200 kHz. As such, the monitoring device may operate on an Internet of Things (IoT) basis using a relatively limited amount of bandwidth compared to a smartphone or other wireless communications devices.
In some embodiments, the processor is configured to determine one or more monitoring parameters associated with the region of the image having a total size of no greater than 10 bytes. As such, it will be appreciated that the total amount of information used to characterise the region of the image (and thus provide a snapshot of the area of interest at a given time) is a fraction of the total image size obtained by the camera module of the monitoring device. For example, the images obtained by the camera module may be around 10 megabytes in size or similar. As such, the process of determining one or more monitoring parameters is not simply a case of image compression, but rather an application specific extraction of data (monitoring parameters) for the purpose of monitoring a characteristic of the area (i.e. the monitoring parameters) over a period of time. In some embodiments, the monitoring device further comprises an additional sensor module, the sensor module being configured to generate a sensing parameter for monitoring the area or the surroundings of the monitoring device. In some embodiments, the additional sensor module is configured to generate a sensing parameter when the camera module obtains an image. For example, the sensor module may comprise a temperature sensing module, a wind sensing module, a light sensing module or any other type of sensor known in the art. The sensing module may provide a sensing parameter which provides contextual information for the associated monitoring parameter(s) of the monitoring device. In some embodiments, the monitoring device may use the sensing parameters to decide whether or not to proceed with obtaining an image and associated monitoring parameters.
For example, where the sensing module comprises a light sensing module, the monitoring device may be configured not to proceed with obtaining an image and associated monitoring parameter(s) if the sensed parameter (corresponding to a light level) is below a threshold value (i.e. if it is too dark to obtain a useful image).
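The light-level gating described above can be sketched as follows. This is a minimal illustration only: the function names and the threshold value are assumptions for the sketch, not part of the disclosure.

```python
# Hypothetical sketch of the sensing-parameter gate: the device skips
# image capture when the sensed light level is below a threshold.
# LIGHT_THRESHOLD is an assumed calibration value in arbitrary units.

LIGHT_THRESHOLD = 50

def should_capture(light_level: float, threshold: float = LIGHT_THRESHOLD) -> bool:
    """Return True if it is bright enough to obtain a useful image."""
    return light_level >= threshold

def monitoring_cycle(light_level: float) -> str:
    """Consult the gate before powering the camera module."""
    if not should_capture(light_level):
        return "skipped"  # too dark: remain in the low-power state
    return "captured"     # proceed to obtain image and monitoring parameters
```

In practice the threshold would be set when configuring the device for its location, since "too dark to be useful" depends on the camera module and the area being imaged.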
In some embodiments, the communication module transmits the sensing parameter associated with the image when transmitting the one or more monitoring parameters associated with the image. As such, the sensing parameter may be provided to the server in order to provide additional contextual information for the associated one or more monitoring parameters.
In some embodiments, the communications module is configured to transmit a plurality of monitoring parameters associated with a plurality of images to the server in a single communication. As such, the monitoring device can efficiently transmit data to a server by combining monitoring parameters associated with a plurality of images into a single transmission. For example, the monitoring device may be configured to combine monitoring parameters associated with at least: 3, 5, 7 or 10 images into a single communication.
In some embodiments, the communications module is configured to transmit the plurality of monitoring parameters to the server in a communication no greater than: 500 bytes, 400 bytes, or 300 bytes in size. Such a communication size is commensurate with, for example, transmitting data over an IoT wireless network using a low power transmission method. By transmitting relatively small amounts of data, the monitoring device can monitor an area (and transmit data associated with the area) for an extended period of time utilising, for example, a modestly sized battery. Similarly, the communications module may be configured to receive inbound communications from the server no greater than 500 bytes, 400 bytes, or 300 bytes in size. By limiting the communication size (i.e. the packet size) of communications (outbound and inbound), the computing resources required on the remote monitoring device (and the associated power consumption) may be reduced. For example, it will be appreciated that by limiting communication size, the buffer capacity of the monitoring device may be reduced. Similarly, the Random Access Memory (RAM) requirements for the communication module will be reduced when communication size is limited in accordance with the first aspect. This in turn reduces the power consumption of the monitoring device as well as the overall device cost and complexity.
In some embodiments, the communications module is configured to transmit the monitoring parameters in a single packet. By using single packet transmission between the monitoring device and the server, communication between the monitoring device and the server can be provided in a reliable manner which makes efficient use of the available computing resources. Transmitting in a single packet, rather than spreading a data set across multiple packets, reduces the per-packet overhead incurred.
In some embodiments, prior to detecting the one or more spatial features, the processor is configured to normalise the image, wherein preferably the processor is configured to normalise at least one of a contrast, a brightness, or a hue of the image. Thus, in addition to the detection of spatial features, it will be appreciated that the processor may be additionally configured to perform other image processing functions, in particular to aid with the identification of spatial features or to aid with the determination of one or more monitoring parameters.
In some embodiments, the monitoring device is configurable by a configuration device in order to identify the region in the image to be compared. As such, a user, when setting up the monitoring device in the desired location, can configure the monitoring device to perform the desired monitoring function. The configuration device, which may comprise a computer or smartphone for example, can be connected to the monitoring device and used to configure the area to be imaged and the region, or regions, of interest in the imaged area. In some embodiments, configuring the monitoring device to monitor the area using the configuration device also comprises specifying the comparison to be performed and the monitoring parameter to be determined based on the comparison.
In some embodiments, the monitoring device is configured to repeatedly monitor the area. For example, the monitoring device is configured to monitor the area at least once every: 24 hours, 12 hours, 6 hours, 3 hours, 2 hours or 1 hour. As such, it will be appreciated that the monitoring device can be used to obtain a plurality of images of the area over a period of time. The monitoring device may be configured to have a power consumption such that it can repeatedly monitor the area of interest for an extended period of time. As such, the monitoring parameters determined by the monitoring device provide a way of automatically monitoring a characteristic of an area (e.g. coastal erosion of a cliff face) over an extended period of time without requiring physical access to the area of interest.
In some embodiments, when the monitoring device is not used to monitor the area, the monitoring device is configured to be in a low power state wherein the camera module and the processor are not powered. As such, the monitoring device can be operated in a relatively low power state (relative to the power consumed when obtaining and processing an image) in order to prolong the period of time the monitoring device can monitor an area without requiring additional user intervention. For example, in some embodiments, the monitoring device is configured to consume no more than 8 µW, or more preferably no more than 7 µW or 6.7 µW, when in the low power state.
It will also be appreciated that the power consumption of the monitoring device when obtaining an image and performing any associated processing may also be carefully designed in order to ensure that the monitoring device can be operational for an extended period of time. For example, the monitoring device may be configured to draw no more than 1.1 W, preferably no more than 0.75 W, when monitoring the area. In some embodiments, the monitoring device may be configured to monitor an area repeatedly such that the average power consumption of the monitoring device each day may be no greater than about: 60 µW, 50 µW or 40 µW.
In some embodiments, the camera module and the processor are operational for no more than 60 s, preferably no more than 40 s, in order to obtain the image and determine the associated one or more monitoring parameters. As such, the time period the monitoring device may be fully operational (to record and process images) is a fraction of the time spent in the low power state such that the overall power consumption of the monitoring device is reduced.
In some embodiments, the monitoring device may comprise a power source. As such, the monitoring device may be powered by a self-contained power source (i.e. the monitoring device comprises the self-contained power source) such as a battery or a renewable power source. In some embodiments, the monitoring device may be powered by a battery, or a renewable power source, or a combination of a battery and a renewable power source. A renewable power source may comprise a solar panel or a wind turbine or the like. As such, the monitoring device has a low power consumption such that, for example, a battery of a comparable size to the monitoring device can provide sufficient power to operate the monitoring device for an extended period of time. For example, the battery may have a capacity of no greater than 11.1 Wh (e.g. a 3000 mAh Li-ion battery at around 3.7 V). Such a battery may be sufficient to power a monitoring device according to the first aspect for at least 20 years when the monitoring device is configured to image an area once every hour and transmit monitoring parameters once every four hours (i.e. an average power consumption each day of about 50 µW), for example.
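The battery-life figure quoted above can be checked with a short worked calculation: an 11.1 Wh battery discharging at an average of about 50 µW.

```python
# Worked example of the battery-life figure: an 11.1 Wh battery
# (e.g. 3000 mAh Li-ion at ~3.7 V) supplying an average power
# consumption of about 50 microwatts.

BATTERY_WH = 11.1
AVG_POWER_W = 50e-6  # 50 microwatts average

battery_joules = BATTERY_WH * 3600      # 1 Wh = 3600 J -> 39960 J
seconds = battery_joules / AVG_POWER_W  # total operating time in seconds
years = seconds / (3600 * 24 * 365)

print(round(years, 1))  # about 25.3 years
```

The result, roughly 25 years, is consistent with the "at least 20 years" figure stated above (real batteries self-discharge and age, so the practical lifetime would be somewhat lower).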
As discussed above, the monitoring device is configured to process a region of the image obtained. In some embodiments, the region of the image processed is smaller than the total area of the image. It will be appreciated that by only processing part of the image, power can be saved by avoiding processing unnecessary regions of an image. For example, where an image of a cliff face includes a region of sky and/or sea, these regions can be excluded as no information pertaining to the cliff face will be present in these regions. For example, in some embodiments, the region of the image has an area no greater than: 50%, 40%, 30%, 20% or 10% of an area of the image.
In some embodiments, the image obtained by the camera module is at least 5 megapixels, preferably at least 10 megapixels. As such, it will be appreciated that the images obtained by the camera module are data files which are significantly larger than the monitoring parameters determined by the processor (which may be around 2-3 bytes each, for example). Even if the image files were to be compressed by the processor (which would itself consume power), the resulting compressed image files would still be orders of magnitude larger than the monitoring parameters determined by the processor. In some embodiments, the camera module is configured to obtain an infra-red image of the area. As such, the spatial features of the image may represent hot and/or cold spots of the area of interest. In turn, the monitoring parameters may represent a shape, orientation, location and the like of the hot/cold spots.
In some embodiments, a plurality of monitoring devices may be provided, each of which communicates with the server. Thus, a plurality of monitoring devices may be used to automatically monitor a relatively large expanse of area (such as a coastline) over an extended period of time for the purpose of tracking changes in geography which typically occur either unpredictably (such as cliff falls) or over several days/weeks/years which are not practical for a user to manually observe.
In some embodiments, the area to be monitored may be at least 5 m away from the monitoring device. In some embodiments, the area to be monitored may be at least: 10 m, 15 m, 20 m, 30 m, 50 m, 100 m or 200 m from the monitoring device. As such, the monitoring device may be configured to monitor an area which is remote from the monitoring device. In some embodiments, the camera module may comprise a lens (e.g. a telephoto lens) configured to provide a focal length for the camera module which corresponds to the distance of the area to be monitored from the monitoring device.
According to a second aspect of the disclosure, a method of monitoring an area with a monitoring device is provided. The method comprises: obtaining an image of the area with a camera module of the monitoring device; storing the image of the area in a data store of the monitoring device; detecting one or more spatial features in a region of the image; determining one or more monitoring parameters based on the detected one or more spatial features for monitoring the area; and transmitting the one or more monitoring parameters to a server using a communications module of the monitoring device.
Thus, it will be appreciated that a monitoring device according to the first aspect of the disclosure may be used to perform the method of the second aspect. It will be appreciated that the method of the second aspect may incorporate any of the optional features of the first aspect described above. In some embodiments, the method may further comprise: obtaining a reference image of the area monitored by the monitoring device; storing the reference image on the server; wherein when the server receives the one or more monitoring parameters, the server determines monitoring information for the area based on comparing the monitoring parameters to the reference image.
In some embodiments, the server is configured to communicate with the communication module of the monitoring device to update one or more of: a frequency with which the area is monitored; the region of the image in which spatial features are to be detected; or the spatial features to be detected in the region of the image. As such, the server may be used to remotely control various aspects of the operation of the monitoring device.
Brief description of the figures
Specific embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which:
Fig. 1 is a block diagram of the monitoring device according to an embodiment of the disclosure;
Fig. 2 is a flow chart of a method according to an embodiment of the disclosure;
Fig. 3 is a schematic diagram of an image obtained by the monitoring device; and
Fig. 4 is an example of a library of predefined shapes.
Detailed description
According to an embodiment of the disclosure a monitoring device 1 is provided. A schematic diagram of the monitoring device 1 is shown in Fig. 1. The monitoring device 1 comprises a camera module 10, a processor 20, a data store 30, a communications module 40, and one or more additional sensing modules 50.
The various components of the monitoring device 1 may be contained within a suitable housing (not shown). The housing may be configured to protect the monitoring device 1 from the elements and also from unwanted interference. The housing may include an opening to allow the camera module to image the area of interest. The housing may also include an attachment portion to allow the monitoring device 1 to be fixed in place. As such, the monitoring device 1 may be fixed in a predetermined location to allow the camera module to image the area of interest over an extended period of time. For example, when monitoring a cliff face, the monitoring device 1 may be attached to a pole, or a wall, and orientated to image the cliff face of interest.
The camera module 10 is configured to obtain an image of an area to be monitored. For example, where the monitoring device is used to monitor coastal erosion of a cliff face, the monitoring device 1 may be positioned such that the camera module 10 images the cliff face area of interest. The camera module 10 may comprise any suitable digital camera and associated circuitry suitable for obtaining a (digital) image of the area of interest. The camera module may image the area of interest and provide a digital representation of the image in any suitable data format. The camera module may be configured to provide a data stream representative of the image captured to the processor 20 via a serial data connection (e.g. I2C, Universal Asynchronous Receiver-Transmitter (UART) or Serial Peripheral Interface (SPI), for example). In some embodiments, the camera module 10 may comprise a 5 megapixel, or preferably a 10 megapixel, camera. As such, the raw image data obtained by the camera module (prior to any image processing or data compression) for a 10 megapixel camera module may be about 20 Mbytes per image. Of course, in other embodiments, different camera modules 10 having different resolutions may be used.
In some embodiments, the camera module 10 may be configured to obtain visible light images of an area (i.e. conventional photographs). In other embodiments, the camera module 10 may be configured to detect other wavelengths of light (i.e. light having wavelengths outside of the visible light spectrum). For example, the camera module 10 may be configured to obtain an infra-red image of the area, or an ultraviolet image of the area. By obtaining an infra-red image of the area, the camera module 10 may be configured to detect areas of an image which are relatively hot or relatively cold to their surroundings. So called, ‘hot spots’ or ‘cold spots’ can be analysed as spatial features using the image processing techniques described below.
In some applications where the area to be monitored may be located some distance from the intended location of the remote monitoring device, the camera module 10 may comprise one or more lenses to increase the focal length of the camera module 10. As such, the camera module 10 may be used to monitor a variety of different areas at a range of different distances. That is to say, in some embodiments, the remote monitoring device 1 may be located at least: 10 m, 20 m, 30 m, 50 m, or 100 m from the area of interest.
The monitoring device 1 is configured to repeatedly monitor the area. Accordingly, over a period of time the camera module repeatedly obtains images of the area. For example, the monitoring device may be configured to monitor the area at least once every: 24 hours, 12 hours, 6 hours, 3 hours, 2 hours or 1 hour. Each time the monitoring device 1 monitors the area, the camera module 10 obtains an image of the area, which is subsequently processed by the processor 20.
The data store 30 is configured to store the image of the area. The data store 30 may receive the image of the area from the processor 20 as shown in Fig. 1 , or the camera module 10 may be configured to communicate directly with the data store 30 to store the image of the area. The data store 30 may be provided by any suitable form of computer memory. Preferably the data store 30 is a form of non-volatile memory such that a power source is not required to maintain the data stored within the data store 30. In some embodiments the data store 30 may be provided by a Secure Digital (SD) card or microSD card and the like. The data store 30 may be configured to store a plurality of the images obtained by the camera module 10. In some embodiments, the data store 30 may be sized suitably to store all images obtained by the camera module 10. In some embodiments, the processor 20 may compress the images prior to storage. Alternatively, the data store 30 may be sized to store only some of the images obtained once they have been processed. The data store 30 may also be used to store various instructions and data used by the processor 20 in order to process the images and the like.
The processor 20 is configured to perform various image processing tasks on the image obtained by the camera module 10. In particular, the processor 20 is configured to analyse a region of the image. The region of the image to be analysed may form only part of the image. That is to say, the processor 20 may not analyse all of the image obtained, but rather only regions of the image that have been predetermined as being of interest. For example, in the embodiment of Fig. 1, the processor 20 may be configured to analyse a region of the image which has an area no greater than 20% of an area of the image. For example, the region of the image to be analysed may be a rectangular region of a pixels by b pixels, where a and b are integers to be defined by a user. For example, a user may specify that a and/or b may be equal to 100, 200, 500, or 1000 pixels. That is to say, a user could define a rectangular 200 × 1,000 pixel region of an image which includes features of interest. For example, such a region may comprise one or more strata of a cliff face, which can then be analysed by the processor 20. It will be appreciated that the proposed rectangular region of interest is only one possible region shape and that a user may specify any particular pixels, or areas of pixels, within the image to be analysed.
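Extracting a user-defined rectangular region of interest can be sketched as follows. The image is represented here simply as a list of rows of pixel values, and the region coordinates are hypothetical configuration values of the kind a user might supply when setting up the device.

```python
# Minimal sketch of extracting an a x b pixel region of interest from an
# image (list of rows of pixel values). top/left/a/b are hypothetical
# user-supplied configuration values.

def extract_region(image, top, left, a, b):
    """Return the a-row by b-column sub-image starting at (top, left)."""
    return [row[left:left + b] for row in image[top:top + a]]

# A tiny 4 x 6 "image" of brightness values:
image = [
    [9, 9, 9, 9, 9, 9],
    [9, 1, 2, 3, 9, 9],
    [9, 4, 5, 6, 9, 9],
    [9, 9, 9, 9, 9, 9],
]
region = extract_region(image, top=1, left=1, a=2, b=3)
print(region)  # [[1, 2, 3], [4, 5, 6]]
```

Only the extracted region is then passed to the feature-detection step, which is what saves processing (and therefore power) relative to analysing the full image.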
The region, or regions of interest may be predefined by a user when configuring the monitoring device 1 for use. Configuration of the monitoring device 1 is discussed in more detail below.
Upon receipt of an image, the processor 20 is configured to detect one or more spatial features in a region of the image. The spatial features may be detected by the processor using any suitable spatial feature detection algorithm. The most suitable spatial feature detection algorithm will depend on the desired application and the nature of the spatial feature to be detected. Spatial feature detection may, in some embodiments be provided by a suitably trained machine learning algorithm, or any other suitable image processing algorithm known to the skilled person.
For example, the processor 20 may be configured to detect one or more strata in a region of the image. Strata are generally horizontally extending sections of a cliff face having a generally similar brightness and/or colour. As such, different strata can be distinguished by a relatively abrupt change in brightness and/or colour along a line extending in the vertical direction of the image. Such abrupt changes can be identified as lines between the different strata which run in a left to right direction across an image of strata. That is to say, the processor 20 may be configured to detect one or more lines in a region of an image representing the boundary between two different strata. Fig. 3 shows a schematic diagram of a cliff face comprising a plurality of strata 80. The black lines shown in Fig. 3 are indicative of the boundaries between strata 80 that the processor 20 may identify as spatial features, were such a portion of the image to be analysed by the processor. Fig. 3 also shows a plurality of rectangular regions 82 that represent regions of the image that are analysed by the processor 20 for the purpose of detecting spatial features. Detailed view A in Fig. 3 shows a detailed view of the strata 80 and the rectangular regions 82 which are analysed.

In some embodiments, the processor 20 may be configured to detect a water staining pattern. A water stain on a cliff face may generally appear as an area of a cliff face which is a different colour to the surrounding cliff face where water staining is not present. As such, a water stain can be distinguished by a region of a different colour to the surrounding area. Water stains may extend across different strata and so the feature detection algorithm can be configured to account for different possible shapes of water stains, including cases where water stains extend across two or more strata, for example.
In effect, the boundary between a water stained region and a non-water stained region of the cliff face may be determined by the processor 20 in order to detect a water stain spatial feature in a similar manner to the process for detecting a strata spatial feature.
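The strata-boundary detection described above can be sketched as a scan down a vertical line of pixel brightness values, reporting the rows where the brightness changes abruptly. This is a simplified illustration under stated assumptions: the threshold is an assumed tuning parameter, and a real implementation would combine many scan lines and/or use a trained algorithm as noted above.

```python
# Hedged sketch of strata-boundary detection: scan down one vertical
# line of brightness values and report the row indices where the
# brightness step exceeds a threshold (the boundary between two strata).

def detect_boundaries(column, threshold=30):
    """Return row indices where |brightness step| exceeds the threshold."""
    return [i for i in range(1, len(column))
            if abs(column[i] - column[i - 1]) > threshold]

# Brightness values down one scan line of a region of interest: two
# strata of brightness ~200 and ~100, then a darker band ~40.
scan_line = [200, 198, 201, 100, 102, 99, 101, 40, 42]
boundaries = detect_boundaries(scan_line)
print(boundaries)           # [3, 7]: abrupt steps before rows 3 and 7
print(len(boundaries) + 1)  # 3 strata detected along this scan line
```

The count of boundaries (or strata) found in a region is itself a candidate monitoring parameter, as discussed below.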
It will be appreciated from schematic diagram Fig. 3 that the processor 20 may analyse a plurality of regions in each image obtained by the camera module. The regions of interest may be specified by a user when the monitoring device is calibrated.
Once the processor 20 has detected the spatial features in the region(s) of interest in the image, the processor proceeds to determine one or more monitoring parameters based on the detected one or more spatial features for monitoring the area. In essence, a monitoring parameter provides a value (or similar small amount of data) which is representative of some characteristic of the region of interest. By monitoring the value of the monitoring parameter over time, a user can monitor/detect changes in an area of interest over time in a remote manner. Importantly, the monitoring parameters are very small amounts of data (relative to the data size of the image, or even the data size of the region of interest) which can be efficiently transmitted by the communications module 40.
For example, in the schematic diagram of Fig. 3 the monitoring device may provide a count of the number of spatial features identified (e.g. strata) in each region of interest. By monitoring the number of spatial features in a region (or regions) of interest over time the monitoring device can detect changes in the area over an extended period of time. For example, the monitoring device may detect a change in the number of strata present in the region(s) of interest which may indicate that a cliff fall has occurred.
As shown in Fig. 3, the processor may detect a plurality of spatial features (e.g. strata 80) in a region 82 of the image. The processor is then configured to determine one or more monitoring parameters based on a distance between two of the spatial features identified in the region of the image. For example, the processor may determine a pixel distance between two strata in the region of interest. The pixel distance may be measured as the distance between two strata boundaries along a line spanning the region of interest. In the example of Fig. 3, a vertical line (relative to the imaged area) is provided through the centre of the region of interest. The processor 20 is configured to determine the pixel distance between the lower boundary 83 of the uppermost strata in the region of interest 82 and the upper boundary 84 of the lowermost strata in the region of interest 82.
By monitoring the pixel distance over time, changes in the relative positions of different strata 80 in the cliff face may be monitored. While the monitoring device determines a pixel distance for the spacing of various spatial features, it will be appreciated that in some embodiments the pixel distance may be calibrated by a user upon set up of the monitoring device. As such, a user may calibrate the monitoring device to convert the pixel distance into an actual distance based on a measurement of the distances being observed in the image. Alternatively, the server 60 may be configured to convert the pixel distances into actual distances based on a scale provided by a user. The measurement scale may be determined, for example, by including an object of known length (e.g. a measurement stick or the like) in the area to be monitored when calibrating the monitoring device at a known distance from the monitoring device.
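The pixel-distance calibration described above amounts to fixing a scale (metres per pixel) from an object of known length and applying it to subsequent measurements. A minimal sketch, with all numerical values illustrative only:

```python
# Sketch of pixel-distance calibration: an object of known length in the
# calibration image fixes a metres-per-pixel scale, which the server (or
# user) can then apply to convert pixel distances into actual distances.

def calibrate_scale(known_length_m, measured_pixels):
    """Metres per pixel, from an object of known length in the image."""
    return known_length_m / measured_pixels

def pixel_to_metres(pixel_distance, scale):
    return pixel_distance * scale

# A 2 m measurement stick spans 400 pixels -> 0.005 m per pixel:
scale = calibrate_scale(known_length_m=2.0, measured_pixels=400)
spacing_m = pixel_to_metres(pixel_distance=150, scale=scale)
print(spacing_m)  # 0.75 m between the two strata boundaries
```

Note that this simple linear scale is only valid for features at approximately the same distance from the camera as the calibration object, which is why the text specifies calibrating at a known distance.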
In some embodiments, the processor is configured to compare one or more spatial features detected in a region of the image against a library of predefined shapes stored by the processor. A schematic example of a library of predefined shapes is shown in Fig. 4. Each shape in the library of Fig. 4 has a unique identification code which can be used as a monitoring parameter. Thus, in some embodiments the processor 20 may be configured to find a best fit of the spatial feature to a predefined shape of the library. The processor may then use the associated unique identification code of the predefined shape as a monitoring parameter in order to indicate the general shape of the spatial feature in a data efficient manner (i.e. without sending actual image data of the spatial feature).
In some embodiments, the processor 20 may be configured to attempt to fit the predefined shapes to the spatial feature by performing various transformations to the predefined shapes in order to find the closest fit. Thus, based on the transformations applied to the closest matching predefined shape, the processor 20 may determine monitoring parameters which reflect one or more of: an orientation of the predefined shape, a size of the predefined shape identified, and a location of the predefined shape (e.g. a location of the centre of the predefined shape) in the region of the image. Thus, it will be appreciated that the processor 20 may be configured to determine information concerning the general shape (i.e. the unique predefined shape identifier), size, orientation, and location of a spatial feature in a region of interest using up to four monitoring parameters. Typically, each monitoring parameter expressing one of the above characteristics utilises about 2-3 bytes of data. Thus, a spatial feature of interest in a region 82 of the area can be monitored using e.g. no more than 10 bytes of data per image obtained.
It will be appreciated that Fig. 4 shows a generic library of predefined shapes. For specific applications a user may populate a library with possible shapes which may be specifically tailored to the spatial features of interest. As such, the library approach to feature recognition, and communication of the resulting monitoring parameters characterising the fitted shape provides a data efficient and flexible approach to the detection and monitoring of features of interest in an area.
Prior to the detection of the spatial features, in some embodiments the processor 20 may be configured to normalise the image to be analysed. In order to normalise the image to be analysed, the processor 20 may be configured to identify one or more reference areas 90a, 90b in the image. The reference areas 90a, 90b may be predefined (i.e. user-defined) regions of a reference image. The reference image (not shown) may be, for example, the first image obtained by the monitoring device 1. Thus, the reference areas of the reference image may be compared to the reference areas 90a, 90b of the image to be analysed. By identifying the reference areas 90a, 90b in the image to be analysed, the processor 20 may orientate itself in order to allow the regions of interest to be accurately located. That is to say, the regions of interest may be defined relative to the reference areas 90a, 90b. Thus, the monitoring device 1 may provide some limited orientation steps in order to account for small changes in the position of the monitoring device 1.
Additionally, the monitoring device 1 may utilise the reference areas 90a, 90b to normalise the image relative to the reference image. For example, the processor may normalise one or more of the brightness, contrast, pixel values or hue of the obtained image such that the reference areas 90a, 90b have a brightness, contrast, pixel values or hue which correspond to those of the reference areas of the reference image. In the example of Fig. 3, two reference areas 90a, 90b are indicated. In other embodiments, a different number of reference areas may be identified. Reference areas may be selected as locations away from features of interest. In some embodiments, reference areas may be selected to include a prominent feature within an image in order to make it easier to identify the reference area. That is to say, a reference area 90a, 90b may be selected to include a region of high contrast which is expected to be largely time-invariant. A plurality of reference areas may be identified in some embodiments in order to provide redundancy in the event that one of the reference areas 90a, 90b substantially changes over time.
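The brightness normalisation described above can be sketched as scaling the image so that the mean brightness of a reference area matches the mean of the same area in the reference image. The pixel lists and values below are illustrative only, and a real implementation would typically also handle contrast (gain and offset) rather than gain alone.

```python
# Minimal sketch of brightness normalisation against a reference area:
# scale all pixel values so that the reference area's mean brightness
# equals the mean observed in the reference image.

def normalise_brightness(image, ref_area_pixels, target_mean):
    """Scale pixel values so the reference area's mean equals target_mean."""
    current_mean = sum(ref_area_pixels) / len(ref_area_pixels)
    gain = target_mean / current_mean
    return [min(255, round(p * gain)) for p in image]

# The reference area reads darker than in the reference image (mean 80
# versus 100), e.g. because this image was taken in dimmer light:
image = [40, 80, 120, 160]
normalised = normalise_brightness(image, ref_area_pixels=[70, 80, 90],
                                  target_mean=100)
print(normalised)  # [50, 100, 150, 200]
```

Normalising before feature detection means the brightness thresholds used for detecting strata boundaries remain valid across images taken in different lighting conditions.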
Once the processor 20 has detected the monitoring parameters associated with the image, the processor 20 may store the monitoring parameters in the data store 30. As such, the data store 30 may provide an offline store of both image data and monitoring parameters associated with the images. Such an offline store may be accessed by a user at the end of the monitoring period when a user collects the device. Of course, the monitoring period may be intended to run for many years, with many monitoring devices generating data. For projects of such a scale, it may not be feasible to routinely collect data held on individual monitoring devices 1. Accordingly, monitoring device 1 also includes a communications module 40 configured to transmit the monitoring parameters to a server 60 remote from the monitoring device 1 . Thus, a server 60 may collect the data from one or more monitoring devices 1 as the data is generated by the monitoring devices. In some embodiments, the data may be collected by the server in real-time.
The communications module 40 is configured to transmit monitoring parameters associated with each image to the server 60. The communications module 40 may receive the monitoring parameters from the processor 20 (e.g. as shown in Fig. 1) or in some embodiments the communications module 40 may access the data store 30 in order to obtain the monitoring parameters (and any other associated data such as image identifiers) to be transmitted.
In some embodiments, the communications module 40 transmits the monitoring parameters over a wireless network. Preferably the communication module 40 transmits the monitoring parameters over a Low Power Wireless Personal Area Network. For example, the network may be a Narrow Band Internet of Things wireless network. The server 60 is located remotely from the remote monitoring device 1 . The server 60 may be connected to one or more receivers of the wireless network, for example by an internet connection. As such, the server 60 may be any suitable computing device for receiving the monitoring parameters from the monitoring device 1 . It will be appreciated that the computing resources available to the server will be significantly greater than those available to the monitoring device, both from a computational power perspective and an energy consumption perspective. As such, the monitoring parameters can be further analysed by the server 60 in near-real time for a variety of applications. Due to the limited functionality and power usage requirements of the monitoring device, it may not be possible to perform such tasks locally on the monitoring device.
In particular, each monitoring parameter to be transmitted by the communications module 40 may have a size of around 2 or 3 bytes. Where the monitoring device 1 generates a plurality of monitoring parameters associated with a region of an image, the total size of all the monitoring parameters associated with that region of the image may be no greater than 10 bytes (around 5 monitoring parameters, for example).
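The byte budget described above can be illustrated with a short sketch. The field width and byte order here are assumptions for illustration only (the patent does not specify an encoding); the sketch simply shows how five 2-byte parameters fit the stated 10-byte per-region budget:

```python
import struct

def pack_region_parameters(params):
    """Pack per-region monitoring parameters as 2-byte unsigned
    big-endian integers (an assumed encoding). Five parameters
    occupy exactly 10 bytes."""
    if len(params) > 5:
        raise ValueError("no more than 5 parameters per region")
    if any(not 0 <= p <= 0xFFFF for p in params):
        raise ValueError("each parameter must fit in 2 bytes")
    return struct.pack(">%dH" % len(params), *params)

payload = pack_region_parameters([1024, 37, 512, 8, 90])
assert len(payload) == 10  # 5 parameters x 2 bytes each
```

Any fixed-width integer encoding with a defined byte order would serve equally well; the point is that a handful of small integers per region keeps each uplink payload tiny.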
In order to communicate with the server in an energy-efficient manner, the communications module 40 may be configured to transmit a plurality of monitoring parameters associated with a plurality of images to the server in a single communication. For example, where the monitoring device 1 obtains an image every hour, the monitoring device 1 may transmit the monitoring parameters generated over the previous 24 hours once per day. In other embodiments, the communication frequency of the monitoring device may be adjusted based on the rate at which monitoring parameters are acquired and the desired limit on communication size. For example, in some embodiments, the communications module is configured to transmit the plurality of monitoring parameters to the server in a communication no greater than 500 bytes, or preferably 300 bytes, in size. The communications module is configured to send all data to be transmitted in a single packet, rather than transmitting data over multiple packets.
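The batching arithmetic above can be checked with a sketch. The 11-byte frame layout (a 1-byte hour offset plus five 2-byte parameters) is a hypothetical layout chosen for illustration, not taken from the patent; it shows that 24 hourly readings fit within the preferred 300-byte single-packet limit:

```python
import struct

MAX_UPLINK_BYTES = 300  # the preferred single-packet limit mentioned above

def build_uplink(readings):
    """Assemble one uplink packet from a day's readings. Each reading
    is (hour_offset, five parameters) encoded as an 11-byte frame;
    the frame layout is an assumption for illustration."""
    packet = bytearray()
    for hour, params in readings:
        packet += struct.pack(">B5H", hour, *params)
    if len(packet) > MAX_UPLINK_BYTES:
        raise ValueError("uplink exceeds %d bytes" % MAX_UPLINK_BYTES)
    return bytes(packet)

# 24 hourly readings fit comfortably: 24 x 11 = 264 bytes <= 300
day = [(h, (100, 200, 300, 400, 500)) for h in range(24)]
assert len(build_uplink(day)) == 264
```

Under this assumed layout, the 300-byte limit would cap a single communication at 27 readings, which is consistent with the once-per-day schedule described above.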
In some embodiments, the monitoring device 1 may comprise an additional sensor module 50. The additional sensor module 50 may be configured to generate a sensing parameter for monitoring the area or the surroundings of the monitoring device 1. For example, the additional sensor module 50 may be a light sensor, a temperature sensor, a wind sensor, an accelerometer, a moisture/humidity sensor or any other suitable sensor. The additional sensor module 50 may be configured to generate a sensing parameter when the camera module obtains an image. The additional sensor module 50 may include its own processor and data store such that it is configured to obtain a sensing parameter in a manner analogous to the operation of the camera module 10, processor 20 and data store 30. Alternatively, the processing functionality and data storage functionality for the additional sensor module 50 may be provided by the processor 20 and data store 30 respectively of the monitoring device 1.
As such, the additional sensor module 50 may be configured to provide additional contextual information regarding the surroundings of the monitoring device 1 . The communication module 40 may transmit the sensing parameter associated with the image when transmitting the monitoring parameter associated with the image.
As discussed above, the monitoring device 1 is used to repeatedly monitor an area, for example once an hour. It will be appreciated that the process of obtaining an image and processing the image to obtain the one or more monitoring parameters is a relatively quick task for modern computing equipment. For example, an image can be acquired and processed in a time period no greater than 60 seconds. As such, it will be appreciated that the camera module 10, processor 20, and data store 30 may be idle for a significant period of time. Thus, when the monitoring device 1 is not used to monitor the area, the monitoring device 1 is configured to be in a low power state. In some embodiments, the low power state may involve the camera module 10 and the processor 20 not being powered. The communications module 40 may draw a relatively low amount of power in the low power state in order to be open to receiving any incoming communications, and also to trigger a wake-up process for the processor 20 and camera module 10 at the appropriate time interval. For example, in the low power state the monitoring device 1 may be configured to draw no more than 7 µW or 6.7 µW. Such power may be consumed, for example, by the communications module 40.
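The wake/capture/process/sleep cycle described above can be sketched as a simple state machine. The class and the component callables here are hypothetical stand-ins for the camera module, processor, and communications module, not an API from the patent:

```python
class DutyCycledMonitor:
    """Minimal sketch of the sleep/wake behaviour described above.
    The capture/process/queue_uplink callables are illustrative
    stand-ins for the camera module, processor, and radio."""

    def __init__(self, interval_s=3600):
        self.interval_s = interval_s  # e.g. one image per hour
        self.low_power = True
        self.cycles = 0

    def wake_and_monitor(self, capture, process, queue_uplink):
        self.low_power = False        # power up camera and processor
        params = process(capture())   # obtain and process one image
        queue_uplink(params)          # hand the parameters to the radio
        self.low_power = True         # drop back to the low power state
        self.cycles += 1
        return params

sent = []
device = DutyCycledMonitor()
device.wake_and_monitor(lambda: "image", lambda img: (42,), sent.append)
assert device.low_power and device.cycles == 1 and sent == [(42,)]
```

In a real device the timer that triggers `wake_and_monitor` would be the low-power communications module described above; everything else stays unpowered between cycles.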
While the monitoring device 1 may be configured to operate in a low power state the majority of the time in order to conserve energy, the power consumption of the monitoring device 1 when monitoring an area (i.e. when the monitoring device is used to obtain and process an image) may also be limited in order to prolong the period the monitoring device can be operated without requiring manual user input. For example, the monitoring device may be configured to draw no more than 1.1 W, preferably no more than 0.75 W, over the duration of monitoring the area (including transmitting the monitoring parameters to the server). For example, in some embodiments the process of waking the monitoring device 1 from the low power state, obtaining an image, processing the image, and returning the monitoring device to the low power state takes less than 60 seconds, preferably less than 40 seconds. In such cases, where the monitoring device 1 is operated to obtain an image once per hour with a communication being transmitted by the communications module 40 once every four hours, the average power consumption of the monitoring device over the course of a day may be no greater than 60 µW, or more preferably no greater than about 50 µW or 40 µW. As such, on average the monitoring device 1, when monitoring an area, may consume no greater than about 1.44 mWh per day.
The power consumption of the monitoring device 1 is of particular importance where the monitoring device draws power from a battery (not shown). It will be appreciated that in order for the monitoring device 1 to operate for an extended period of time, either a very large battery must be provided or the total energy consumption of the monitoring device must be kept to a minimum. By way of example, a smartphone battery typically has a capacity of about 11 Wh, which typically is sufficient to power a smartphone for around 1 day of usage. By contrast, an 11 Wh battery would be sufficient to power a monitoring device 1 consuming no more than 1.44 mWh per day for at least 20 years. As such, relative to a smartphone, it will be appreciated that the monitoring device is specifically adapted to long-term monitoring of an area without the need for constant user intervention.
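The battery-life claim follows directly from the figures given above and can be checked with a few lines of arithmetic:

```python
BATTERY_WH = 11.0               # smartphone-class battery, as above
DAILY_CONSUMPTION_WH = 1.44e-3  # 1.44 mWh consumed per day of monitoring

days = BATTERY_WH / DAILY_CONSUMPTION_WH   # ~7639 days on one battery
years = days / 365.25
assert years > 20  # consistent with the "at least 20 years" figure above
```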
The monitoring device 1 may be installed in a desired location and configured using a configuration device (not shown). A configuration device may be used to set up the monitoring device and ensure that the camera module is orientated correctly. As such, the monitoring device 1 may include one or more output ports (not shown) to output an obtained image to a configuration device, such as a smartphone or a similar portable computing device such as a laptop. Various output ports or similar communication technologies are known to the skilled person for communicating between the monitoring device and a local (i.e. within a few metres) configuration device. Alternatively, the data store 30 (e.g. an SD card) could be read with the configuration device to check that the monitoring device 1 is orientated correctly.
In addition to checking the area being imaged by the monitoring device 1, the configuration device may specify one or more regions in the obtained images to be analysed by the monitoring device. As such, the monitoring device 1 can be easily customised by a user installing the monitoring device to monitor a range of different areas. Configuring the monitoring device 1 may also comprise specifying the spatial features to be detected and the monitoring parameter(s) subsequently to be determined based on those features.
The configuration device and/or the monitoring device 1 may suggest regions of interest to be monitored to a user based on the detection of spatial features within a calibration image. For example, the processor 20 of the monitoring device 1 may analyse an obtained calibration image and identify a plurality of spatial features (e.g. the lines representative of strata boundaries in Fig. 3). Based on this, a user may mark points on strata of interest to be monitored, and the processor 20 may generate appropriate regions of interest based on the user-selected points. Alternatively, the regions of interest may be specified by a user using the configuration device.
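One way the region-suggestion step could work is sketched below. The function name, margins, and the representation of strata boundaries as image rows are all illustrative assumptions, not details from the patent; the sketch simply snaps a user-selected point to the nearest detected boundary and proposes a rectangle around it:

```python
def suggest_region(point, boundary_rows, half_width=64, half_height=16):
    """Given a user-selected (x, y) point and the y-coordinates of
    detected strata boundaries, propose a rectangular region of
    interest (left, top, right, bottom) centred on the nearest
    boundary. Margins are illustrative defaults."""
    x, y = point
    nearest = min(boundary_rows, key=lambda row: abs(row - y))
    return (x - half_width, nearest - half_height,
            x + half_width, nearest + half_height)

# A click at (400, 310) snaps to the boundary at row 300
region = suggest_region((400, 310), [120, 300, 480])
assert region == (336, 284, 464, 316)
```

Keeping the suggested regions small is what allows the per-region monitoring parameters to stay within the byte budget discussed earlier.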
Next, a method 100 of operating the monitoring device 1 will be described with reference to Fig. 2. As shown in Fig. 2, a method of monitoring an area with a monitoring device 1 may comprise a step 101 of obtaining an image of the area with a camera module of the monitoring device.
In step 102 of the method 100, the image of the area is stored in a data store 30 of the monitoring device 1.
In step 103, the processor 20 detects one or more spatial features in a region of the image. The image processing performed by the processor 20 to detect the one or more spatial features is discussed above.
In step 104, the processor 20 determines one or more monitoring parameters based on the detected one or more spatial features for monitoring the area. The determination process performed by the processor 20 is discussed in more detail above.
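As one concrete example of such a parameter (compare the distance-based parameter of claim 5), the pixel distance between two detected feature centroids can be computed and rounded so that it fits a small integer field. The function below is an illustrative sketch, not the patent's implementation:

```python
import math

def distance_parameter(feature_a, feature_b):
    """One possible monitoring parameter: the straight-line pixel
    distance between two detected feature centroids, rounded so it
    fits a compact integer field for transmission. Illustrative only."""
    (ax, ay), (bx, by) = feature_a, feature_b
    return round(math.hypot(bx - ax, by - ay))

assert distance_parameter((100, 200), (103, 204)) == 5  # a 3-4-5 triangle
```

A change in such a distance between successive images would indicate movement of the underlying strata, without any image data ever leaving the device.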
In step 105, the communications module 40 transmits the one or more monitoring parameters to a server 60. Once received by the server 60, the monitoring parameters may be stored by the server 60 for further analysis. For example, in some embodiments the server may be configured to obtain a reference image of the area being monitored. The reference image may be the first image obtained by the monitoring device 1 when installed by a user, which the user manually transmits to the server using a configuration device (e.g. a smartphone). The server 60 may store the reference image and use the reference image to interpret the monitoring parameters. For example, when the server receives the one or more monitoring parameters, the server determines monitoring information for the area based on comparing the monitoring parameters to the reference image.
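The server-side comparison could take a form like the sketch below. The parameter name, the reference value, and the alert threshold are all hypothetical; the sketch only illustrates comparing incoming parameters against values derived from the stored reference image and reporting significant deviations:

```python
# Values assumed to have been derived from the stored reference image
REFERENCE_PARAMS = {"region1_boundary_row": 512}
CHANGE_THRESHOLD = 10  # pixels; an assumed significance threshold

def interpret(received):
    """Server-side sketch: compare incoming monitoring parameters
    against reference-image values and return the significant
    deviations. Names and threshold are illustrative."""
    changes = {}
    for name, value in received.items():
        reference = REFERENCE_PARAMS.get(name)
        if reference is not None and abs(value - reference) > CHANGE_THRESHOLD:
            changes[name] = value - reference
    return changes

assert interpret({"region1_boundary_row": 530}) == {"region1_boundary_row": 18}
assert interpret({"region1_boundary_row": 515}) == {}
```

A deviation flagged here could then trigger the additional out-of-schedule monitoring described below.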
In some embodiments, the server 60 may also be configured to communicate with the communications module 40 of the monitoring device 1 to update one or more of: a frequency with which the area is monitored; the region of the image in which spatial features are to be detected; and the spatial features to be detected in the region of the image. As such, the server 60 may be used to manage the power consumption of the monitoring device by changing the frequency of the monitoring. The server 60 may also be used to change the regions monitored over time in response to changes in the area being monitored. In some embodiments where the server 60 receives real-time data, the server 60 may be configured to prompt additional monitoring processes (i.e. obtaining and processing more images outside of a scheduled image) in response to a detected change in a monitoring parameter (e.g. due to a cliff fall or similar event).
In the above description a single monitoring device 1 has been described which communicates with a server 60. It will be appreciated that in some embodiments a plurality of monitoring devices 1 may be provided, each of which communicates with the server 60. Thus, a plurality of monitoring devices 1 may be used to automatically monitor a relatively large area (such as a coastline) over an extended period of time, for the purpose of tracking changes in geography that typically occur either unpredictably (such as cliff falls) or gradually over days, weeks or years, and which are not practical for a user to observe manually.


CLAIMS:
1. A monitoring device for monitoring an area, comprising: a camera module configured to obtain an image of an area; a data store configured to store the image of the area; a processor configured to: detect one or more spatial features in a region of the image; and determine one or more monitoring parameters based on the detected one or more spatial features for monitoring the area; and a communications module configured to transmit the one or more monitoring parameters to a server remote from the monitoring device.
2. A monitoring device according to claim 1, wherein the processor is configured to detect one or more spatial features in a region of the image by comparing a region of the image against a library of predefined shapes stored by the processor and identifying one or more predefined shapes in the region of the image.
3. A monitoring device according to claim 2, wherein the processor is configured to determine one or more monitoring parameters based on an identity of the predefined shape identified, an orientation of the predefined shape identified, a size of the predefined shape identified, and a location of the predefined shape in the region of the image.
4. A monitoring device according to claim 2 or claim 3, wherein the processor is configured to determine one or more monitoring parameters based on a number of spatial features detected in the region of the image.
5. A monitoring device according to claim 2, 3, or 4, wherein the processor is configured to detect a plurality of spatial features in the region of the image, and the processor is configured to determine one or more monitoring parameters based on a distance between two of the spatial features identified in the region of the image.
6. A monitoring device according to any preceding claim, wherein the remote monitoring device monitors an area located at least 5 m away from the remote monitoring device.
7. A monitoring device according to any preceding claim, wherein the processor is configured to determine one or more monitoring parameters associated with the region of the image having a total size of no greater than 10 bytes.
8. A monitoring device according to any preceding claim, the monitoring device further comprising an additional sensor module, the additional sensor module configured to generate a sensing parameter for monitoring the area or the surroundings of the monitoring device, wherein the additional sensor module is configured to generate a sensing parameter when the camera module obtains an image.
9. A monitoring device according to claim 8, wherein the communication module transmits the sensing parameter associated with the image when transmitting the monitoring parameter associated with the image.
10. A monitoring device according to any preceding claim, wherein the communications module is configured to transmit a plurality of monitoring parameters associated with a plurality of images to the server in a single communication.
11. A monitoring device according to claim 10, wherein the communications module is configured to transmit the plurality of monitoring parameters to the server in a communication no greater than 500 bytes in size.
12. A monitoring device according to any preceding claim, wherein prior to detecting the one or more spatial features, the processor is configured to normalise the image, wherein preferably the processor is configured to normalise at least one of a contrast, a brightness, or a hue of the image.
13. A monitoring device according to any preceding claim, wherein the monitoring device is configurable by a configuration device in order to identify the region in the image.
14. A monitoring device according to claim 13, wherein configuring the monitoring device to monitor the area using the configuration device also comprises specifying the spatial features to be detected and the one or more monitoring parameters to be determined based on the detected spatial features.
15. A monitoring device according to any preceding claim, wherein the monitoring device is configured to repeatedly monitor the area, wherein preferably the monitoring device is configured to monitor the area at least once every: 24 hours, 12 hours, 6 hours, 3 hours, 2 hours or 1 hour.
16. A monitoring device according to any preceding claim, wherein when the monitoring device is not used to monitor the area, the monitoring device is configured to be in a low power state wherein the camera module and the processor are not powered.
17. A monitoring device according to claim 16, wherein the monitoring device is configured to draw no more than 1.8 µA when in the low power state.
18. A monitoring device according to any preceding claim, wherein the monitoring device is configured to consume no more than 1.1 W, preferably no more than 0.75 W, when monitoring the area.
19. A monitoring device according to any preceding claim, wherein the camera module and the processor are operational for no more than 60 s, preferably no more than 40 s, in order to obtain the image and determine the associated one or more monitoring parameters.
20. A monitoring device according to any preceding claim, further comprising a battery having a capacity of no greater than 11.1 Wh.
21. A monitoring device according to any preceding claim, wherein the region of the image has an area no greater than 20 % of an area of the image.
22. A monitoring device according to any preceding claim, wherein the image obtained by the camera module is at least 5 megapixels, preferably at least 10 megapixels.
23. A monitoring device according to any preceding claim, wherein the camera module is configured to obtain an infra-red image of the area.
24. A method of monitoring an area with a monitoring device comprising: obtaining an image of the area with a camera module of the monitoring device; storing the image of the area in a data store of the monitoring device; detecting one or more spatial features in a region of the image; determining one or more monitoring parameters based on the detected one or more spatial features for monitoring the area; and transmitting the one or more monitoring parameters to a server using a communications module of the monitoring device.
25. A method according to claim 24, further comprising obtaining a reference image of the area monitored by the monitoring device; storing the reference image on the server; wherein when the server receives the one or more monitoring parameters, the server determines monitoring information for the area based on comparing the monitoring parameters to the reference image.
26. A method according to claim 24 or claim 25, wherein the server is configured to communicate with the communications module of the monitoring device to update one or more of: a frequency with which the area is monitored; the region of the image in which spatial features are to be detected; and the spatial features to be detected in the region of the image.
PCT/GB2023/050429 2022-02-28 2023-02-27 Monitoring device and method of monitoring an area WO2023161651A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2202743.7A GB2616068B (en) 2022-02-28 2022-02-28 Monitoring device and method of monitoring an area
GB2202743.7 2022-02-28

Publications (2)

Publication Number Publication Date
WO2023161651A2 true WO2023161651A2 (en) 2023-08-31
WO2023161651A3 WO2023161651A3 (en) 2023-11-09



Also Published As

Publication number Publication date
GB2616068A (en) 2023-08-30
GB2616068B (en) 2024-04-24
WO2023161651A3 (en) 2023-11-09
GB202202743D0 (en) 2022-04-13
