EP3072292A1 - A method of assessing sensor performance - Google Patents
A method of assessing sensor performance
- Publication number
- EP3072292A1 (application EP14803187A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- sensor
- data
- capability
- interest
- compression algorithm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/28—Details of pulse systems
- G01S7/285—Receivers
- G01S7/292—Extracting wanted echo-signals
- G01S7/2921—Extracting wanted echo-signals based on data belonging to one radar period
- G01S7/2922—Extracting wanted echo-signals based on data belonging to one radar period by using a controlled threshold
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D18/00—Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 - G01D15/00
- G01D18/002—Automatic recalibration
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D3/00—Indicating or recording apparatus with provision for the special purposes referred to in the subgroups
- G01D3/028—Indicating or recording apparatus with provision for the special purposes referred to in the subgroups mitigating undesired influences, e.g. temperature, pressure
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/66—Radar-tracking systems; Analogous systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/66—Sonar tracking systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/28—Details of pulse systems
- G01S7/285—Receivers
- G01S7/292—Extracting wanted echo-signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/28—Details of pulse systems
- G01S7/285—Receivers
- G01S7/292—Extracting wanted echo-signals
- G01S7/2923—Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods
- G01S7/2927—Extracting wanted echo-signals based on data belonging to a number of consecutive radar periods by deriving and controlling a threshold value
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/35—Details of non-pulse systems
- G01S7/352—Receivers
- G01S7/354—Extracting wanted echo-signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/40—Means for monitoring or calibrating
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/487—Extracting wanted echo signals, e.g. pulse detection
- G01S7/4873—Extracting wanted echo signals, e.g. pulse detection by deriving and controlling a threshold value
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/491—Details of non-pulse systems
- G01S7/493—Extracting wanted echo signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/52004—Means for monitoring or calibrating
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/52017—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
- G01S7/52023—Details of receivers
- G01S7/52025—Details of receivers for pulse systems
- G01S7/52026—Extracting wanted echo signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/52017—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
- G01S7/52023—Details of receivers
- G01S7/5203—Details of receivers for non-pulse systems, e.g. CW systems
- G01S7/52031—Extracting wanted echo signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/523—Details of pulse systems
- G01S7/526—Receivers
- G01S7/527—Extracting wanted echo signals
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/534—Details of non-pulse systems
- G01S7/536—Extracting wanted echo signals
Definitions
- the present invention relates to methods of assessing, measuring or predicting the performance of a sensor, to enable timely action to be taken to address identified limitations, and is applicable to the fields of surveillance and remote sensing.
- Although sensor performance is often viewed as a function of the sensor itself or its location, it can vary with time, mainly due to changing environmental conditions (e.g. the position of the sun, or the presence of debris due to strong wind). It would be beneficial to be able to identify a region (e.g. a region relative to the sensor, or a part of a field of view of the sensor) where the sensor is unable to discriminate an object of interest from clutter in the whole or part of a field of view.
- the inventors have identified a method to facilitate this, which can be wholly automated if desired, can be implemented on a continuous basis if required, which appears from tests to be a useful guide regarding a sensor's ability to discriminate objects of interest from clutter, and which may (depending on how it is implemented) require only modest computer processing power.
- This method may be for assessing a capability to reliably plot and track objects.
- Although the step of extracting plot data of potential objects is optional to the method, the method will typically be applied to a device or network of computers that is arranged to extract plot data of potential objects for the purpose of tracking the objects.
- the step of extracting plot data is therefore expected to be performed in relation to an object tracking capability of an object tracking device (e.g. computer system), but need not be performed in relation to the claimed method of facilitating an assessment of that object tracking capability. Therefore preferably the step of extracting plot data is performed in conjunction with the other steps of the method, even if for a different purpose. More preferably however the step of extracting plot data is performed as part of the method of facilitating the assessment, and the compression algorithm is applied to the plot data in a later step of the method.
- the inventors have tested the method of the present invention and found that the numerical value relating to the compression efficiency (e.g. the compression ratio, the compressed data size, or the amount the data is compressed by) gives an indication of whether the sensor is able to plot and track objects reliably (i.e. to discriminate objects of interest from clutter).
- a higher compression ratio (or smaller compressed file size) gives greater confidence in sensor performance.
- Suitable threshold values for making a binary determination will vary widely depending on the sensor, and whether the method is being applied to the sensor as a whole or to a specific part of its field of view, and these can be readily selected based on trial and error by the person skilled in the art.
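The numerical value and binary determination described above can be sketched with a general-purpose lossless compressor. This is an illustrative sketch only, not text from the patent: the `zlib` compressor and the 0.5 threshold are placeholders, since the patent notes that suitable thresholds vary widely by sensor and must be selected by trial and error.

```python
import os
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Fraction by which the data shrinks under lossless compression:
    near 1.0 for highly redundant (low-clutter) data, near 0 for noise."""
    if not data:
        return 0.0
    return 1.0 - len(zlib.compress(data, level)) / len(data)

def capability_adequate(data: bytes, threshold: float) -> bool:
    """Binary determination against a predetermined threshold: a higher
    compression ratio is taken to indicate adequate capability."""
    return compression_ratio(data) >= threshold

# A calm, redundant scene compresses far better than noise-like clutter.
calm = bytes(100_000)          # uniform background
clutter = os.urandom(100_000)  # incompressible, noise-like data
```

In this toy comparison the calm scene yields a ratio close to 1.0 and the noise-like scene a ratio close to 0, mirroring the patent's observation that a higher compression ratio gives greater confidence in sensor performance.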
- the numerical value from the compression algorithm reflected the Accuracy, Clarity, Completeness and Continuity aspects of the object tracking performance of the sensor.
- the compression ratio was found in relation to the plot data and showed a broadly linear relationship with all four of these parameters.
- the compression ratio was found in relation to the conditioned image data, and although less linear and less pronounced, there was a clear monotonic positive relationship between the compression ratio and the capability of the sensor in various environments, which gives confidence that this approach may be useful with a range of types of sensor. The positive monotonic relationship observed gives confidence that the method is a good guide to sensor performance compared to other parameters.
- the method is performed twice, once with artificially introduced simulated objects (physically introduced objects are an alternative, but are far more expensive and problematic if needed repeatedly), and once without, and the compression ratio (or related measure) in the two scenarios is compared.
- This can be applied to a variety of scenes (e.g. different zones of a field of view) of differing complexity/clutter, and those where the sensor struggles to reliably plot the simulated objects and ignore the clutter can be identified.
- For such scenes there should be a marked difference between the compression ratio (or related measure) achieved for that scene and the ratio achieved for scenes where the sensor performs adequately. Verifying that the marked difference is observed in those particular situations gives confidence that the method is capable of identifying scenes where that type of sensor lacks adequate capability.
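The two-run comparison above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the `min_delta` margin is a hypothetical parameter, and the synthetic byte strings merely stand in for conditioned data with and without injected simulated objects.

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Lossless compression ratio: fraction by which the data shrinks."""
    return 1.0 - len(zlib.compress(data, 9)) / len(data)

def simulated_object_delta(without_objects: bytes, with_objects: bytes) -> float:
    """Difference in compression ratio between the two runs of the method."""
    return abs(ratio(without_objects) - ratio(with_objects))

# Synthetic stand-ins: a clean scene, and the same scene with injected detail.
clean = bytes(10_000)
with_sim = bytes(9_000) + os.urandom(1_000)
```

A scene where injecting simulated objects produces a clear shift in the ratio behaves as expected; a scene where the delta is negligible would be a candidate for the "sensor struggles" category, since the injected objects are lost among the clutter.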
- The reference value is normally a threshold beyond which the plot tracking capability is indicated to be acceptable (above the threshold in the case of the amount the file size can be reduced by, or below it in the case of the size of the remaining file). At a basic level, however, the threshold could be a starting point for the compression algorithm (e.g. the size of the starting file before compression, in the case of the size of the final file, or zero in the case of the amount the file size can be reduced by).
- the threshold is a predetermined criterion for deciding whether the capability is adequate or inadequate.
- the data compression algorithm is applied to the plot data. This is expected to permit a more linear quantification of the capability of the sensor, for at least some types of sensor, which may be desirable (compared to merely being able to discriminate inadequate and adequate capability). Additionally applying a compression algorithm to the plot data should normally be much less computationally intensive than applying it to the conditioned sensor data, as the starting file size should normally be much smaller.
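A minimal sketch of applying compression to plot data rather than image data, as favoured above. This is illustrative only: JSON serialisation and `zlib` are placeholder choices (the patent does not specify a plot file format), but the example shows why coherent tracks should compress better than clutter.

```python
import json
import random
import zlib

def plot_compression_ratio(plots) -> float:
    """Serialise (t, x, y) plots and measure how compressible the record is.
    Coherent object tracks yield regular, highly compressible plot data;
    random clutter produces irregular plots that compress poorly."""
    raw = json.dumps(plots).encode()
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

# A steadily moving object versus random clutter returns.
track = [(t, t * 2, 50) for t in range(500)]
random.seed(0)
clutter = [(t, random.randint(0, 999), random.randint(0, 999))
           for t in range(500)]
```

Because the plot file is far smaller than the conditioned image data it is derived from, this variant is also the computationally cheaper one, as the paragraph above notes.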
- the numerical value quantifies the capability of the sensor, preferably having a substantially correlated relationship therewith, and ideally having a substantially linear relationship therewith (at least over a range of interest).
- the method includes the final step of automatically outputting an assessment of the capability of the sensor based on the numerical value. Such assessment may be provided as an alert in the event that the capability is identified as inadequate. Alternatively the assessment may be performed by a user in view of the numerical value.
- the plot data is generated and used as part of a tracking method closely associated with the above method.
- the method includes a method of tracking objects based on track data generated from plot data generated from the sensor data.
- the two methods are used jointly to track objects of interest in a region of interest, simultaneously identifying one or more parts of that region of interest where the capability to track objects of interest is currently inadequate.
- this method also includes the steps of taking action to mitigate the inadequate tracking capability in a particular location.
- the action preferably includes at least one of: adding a sensor to monitor that location, upgrading a sensor near or at that location, moving a valuable asset away from that location, or issuing instructions for a patrol of that location.
- the compression algorithm is a substantially lossless compression algorithm, and ideally is a strictly lossless algorithm.
- the compression may be a wavelet-based compression algorithm such as the JPEG2000 algorithm, or an algorithm such as the MPEG4 algorithm. Using a lossy compression algorithm is anticipated to give a less reliable determination, but may offer a satisfactory level of confidence in some situations, especially if it reduces the processing power required.
- the JPEG2000 algorithm was found to offer the advantage of requiring less computer resources than some alternatives, however a wide range of compression algorithms could be used as suitable alternatives. Compression algorithms based on wavelets should advantageously give a more linear monotonic relationship than some other types of compression algorithms.
- the method is applied to data gathered with respect to a period of time (as opposed to a single photo or single frame of a video).
- This is applicable to image data and/or to plot data.
- The length of time may be varied: for example, a long period may be used to increase the consistency and accuracy of results, while a short period may be used to minimise computer processor time usage and/or to enable repeat checks on a frequent basis.
- the period of time may also be selected to account for the time that objects of interest are likely to remain in the field of view. This might be a number of seconds for a road based CCTV camera, a number of minutes for a shore based viewing post, or may be an hour or more for a long range maritime radar system. The duration normally will not exceed 24 hours.
- the data includes multiple frames, preferably at least 10 or at least 100 frames.
- the method is applied to a sub-section of a field of view of the sensor. This enables identification of whether objects can be discriminated in that part of the field of view, which is especially useful for a challenging area or line of sight, for example looking into the direction of the sun.
- the field of view may be divided into a plurality of sub-sections, and the method is applied separately to each sub-section. This can be better than evaluating the whole field of view jointly, as that method may fail to identify that objects cannot be reliably distinguished in one small part of the field of view (e.g. near the sun).
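The per-sub-section analysis can be sketched as below. This is an illustrative sketch, not the patent's code: the frame is modelled as a row-major grey-scale byte buffer, and the grid size and `zlib` compressor are placeholder choices.

```python
import os
import zlib

def zone_ratios(frame: bytes, width: int, height: int, grid: int = 2):
    """Split a row-major grey-scale frame into grid x grid zones and
    compute a compression ratio for each zone separately."""
    zw, zh = width // grid, height // grid
    ratios = {}
    for gy in range(grid):
        for gx in range(grid):
            rows = [frame[y * width + gx * zw : y * width + gx * zw + zw]
                    for y in range(gy * zh, (gy + 1) * zh)]
            zone = b"".join(rows)
            ratios[(gx, gy)] = 1.0 - len(zlib.compress(zone, 9)) / len(zone)
    return ratios

# A 100x100 frame that is calm except for a noisy lower-right quadrant
# (standing in for, say, sun glint on waves).
frame = bytearray(100 * 100)
noise = os.urandom(50 * 50)
i = 0
for y in range(50, 100):
    for x in range(50, 100):
        frame[y * 100 + x] = noise[i]
        i += 1
ratios = zone_ratios(bytes(frame), 100, 100)
```

Evaluating zones separately isolates the problem quadrant, which a whole-frame ratio would average away, exactly the failure mode the paragraph above warns about.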
- the method is performed by a processing unit on board the sensor. Sensors with such built-in self-diagnosis capability alleviate the need for substantial quantities of plot data to be transmitted to a computer for analysis.
- the method is performed by a processing unit that is remote from the sensor, and the step of receiving sensor data or plots includes receiving via a computer network.
- the method is repeated periodically while the sensor is in use.
- the method may advantageously be repeated substantially continuously (i.e. restarting once a field of view, or alternatively data from multiple sensors, has been assessed).
- the method includes the step of adding an additional sensor in response to identifying that the performance of the sensor is below a predetermined requirement.
- the method includes the step of upgrading the sensor by replacing it with a higher performance sensor in response to identifying that the performance of the sensor is below a predetermined requirement.
- the method includes modifying the sensor. This may include altering a gain, contrast, colour filtering parameter, or other parameter. With this approach it may be possible to optimise the sensor on a one-off or continuous basis as conditions change, most particularly ambient light levels.
- the method includes step-wise adjustment and optimisation of sensor parameters and/or of signal conditioning parameters based on feedback on the capability of the sensor.
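The step-wise adjustment based on capability feedback can be sketched as a simple hill climb. This is a hypothetical illustration, not the patent's procedure: `capability_of` stands in for any scoring function derived from the compression-based assessment, and the step size and iteration count are arbitrary.

```python
def optimise_parameter(capability_of, initial: float, step: float,
                       iterations: int = 20) -> float:
    """Step-wise hill climb: nudge one parameter (e.g. gain) up or down and
    keep any change that improves the capability score fed back by the
    assessment method."""
    best, best_score = initial, capability_of(initial)
    for _ in range(iterations):
        for candidate in (best - step, best + step):
            score = capability_of(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best

# Toy capability surface peaking at gain = 5 (purely illustrative).
tuned = optimise_parameter(lambda gain: -(gain - 5.0) ** 2, 0.0, 1.0)
```

Run continuously, such a loop would track slow environmental changes such as falling ambient light, re-tuning the parameter as the capability score drifts.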
- the method includes modifying the object identifying algorithm in response to identifying that the performance of the sensor is below a predetermined requirement.
- This may include altering threshold values for the size, brightness or shape of objects to be detected. This may prove useful if there is a change in the character of the clutter in view, for example during a snowstorm, hailstorm or sandstorm, on a windy autumn day with an abundance of leaf litter, or if there is an unexpected increase in the abundance of a particular species of animal (such as flies or pigeons).
- the method includes providing a map via a graphical user interface and highlighting one or more regions where object tracking performance is below a threshold requirement.
- a map is used and the sensor performance information is overlaid along with information about objects being tracked.
- Such an apparatus may be built in to a sensor.
- the apparatus may be a computer arranged to receive data from a sensor (or typically multiple sensors) via a computer connection or network.
- the term 'natural environment' means that the sensor is viewing a scene or region where the user has not artificially introduced objects for purposes of determining whether the sensor is able to discriminate those particular objects. Instead the method is able to take advantage of serendipitous passing objects (e.g. passing vehicles) and naturally occurring clutter (e.g. leaves and birds).
- 'Plots' are the identified locations of things at a particular point in time, and this typically includes both clutter and objects of interest.
- the term 'conditioned' refers to signal conditioning which is commonly applied to the raw image output from a sensor, such as a camera or otherwise, which involves at least one of adjusting or controlling the bandwidth limiting, amplification, mapping into a log or linear output, applying maximum and minimum limits, or range or gain, and typically but not necessarily involves converting the data from analogue to digital data.
- Signal conditioning is typically performed on board the sensor, but can be performed remotely from the sensor. Signal conditioning is important as it ensures that multiple sensors viewing the same or different scenes will provide comparable and useable image data.
- Signal conditioning typically takes into account the nature of the raw image data on a continuous basis, most typically the overall signal level (which in the case of a camera relates to an ambient light level) to help ensure that useful information is not lost during the signal conditioning process.
- the signal conditioning process may optionally include more advanced signal processing steps, such as conversion by Fourier transform or any other algorithm that outputs an image representative of the scene viewed by the sensor.
- 'Compression algorithm' covers conventional data compression algorithms which produce compressed data. However, the method does not typically utilise the compressed data itself (it might be used for unrelated reasons, such as when the data needs to be transmitted across a network and the available bandwidth is limited). The term therefore includes algorithms which assess the amount of information or entropy in the data (or, more generally, the amount by which the data could be compressed) even if they do not actually perform data compression.
- A capability to discriminate physical objects from clutter refers to the reliability with which such objects could in theory be discriminated using the data directly or indirectly from the sensor with limited processor time, although in practice the method may typically be practised in conjunction with an object tracking apparatus and method.
- the compression algorithm may be applied to the conditioned sensor data; however, further steps are possible, such as transforming the signal data into a reciprocal space (e.g. via a fast Fourier transform) prior to applying the compression algorithm. Such steps may be considered as part of the conditioning process. Alternatively the conditioning process may be very simple, such as controlling the gain, range or threshold of the sensor output.
- Figure 1 is a diagram illustrating how data is gathered in an object tracking sensor
- Figure 2 is an example of an image of a field of view of an optical sensor, highlighting four zones selected for analysis
- Figure 3 is a table showing measured parameters of the four zones
- Figure 4 shows how continuity of track data gathered is lower when the plot data compression ratio is lower
- Figure 5 shows how clarity of track data gathered is lower when the plot data compression ratio is lower
- Figure 6 shows how completeness of track data gathered is lower when the plot data compression ratio is lower
- Figure 7 shows a flow diagram of a method according to one embodiment
- Figure 8 shows how the compressed (residual) file size of grey-scale conditioned image data has a positive monotonic relationship with the number of objects in the field of view.
- a sensor gathers data about a region or field of view which is limited by the properties of the sensor itself, and applies a signal conditioning method to optimise the output, typically a contrast adjusted digital output.
- An object identification algorithm (plot extraction) is then applied to generate plots, and then a track identification algorithm (track extraction) is applied to generate tracks of identified objects. If successful this would enable a user to take action in response to the track data, which might involve intercepting a tracked object, taking evasive action or calling in a law enforcement agency.
- the process may fail to reliably track objects of interest if there is too much clutter in the field of view, such as birds, leaves etc.
- Optional steps include forming track data from the plot data, showing the tracks of objects being tracked, taking action in response to the track data (e.g. raising an alarm and calling in a law enforcement agency), and/or taking corrective action in response to identification that the object tracking capability of the sensor has been identified as inadequate (e.g. adding another sensor).
- Figure 2 shows an image captured by an optical or infra-red sensor, although the method is applicable to many types of sensor, such as an infra-red sensor, a visible band sensor, an electro-optical sensor, a radio wave sensor such as a radar and radar tracking apparatus, a terahertz sensor, an array of electrical or pressure sensors, an ultrasonic sensor, or any other type of sensor or sensor array that can generate an output allowing identification of locations of objects in 2D or 3D.
- four zones of a scene have been selected for analysis, with zone 4 showing the greatest density of clutter - in this case mainly light reflected from waves.
- the dense clutter causes more plots to be generated.
- Figure 3 shows information about the four zones, including the compression ratio of the image file and plot file, the mean intensity and information about plot detections. While the image file shows a greater compression ratio in zone 1 (compression to 29.9%) and lesser in zone 4 (compression only to 67.4%) this was a less linear and perhaps less reliable indicator of the ability of the sensor to track objects compared to the compression ratio of the plot file. Similarly the intensity and plot detections were less useful than the plot file compression ratio, however compression of the image file produces a useable result, and with a different type of sensor it may produce a better result than using the plot file.
- In zone 4, the modest compression ratio of 89.2% (a compressed size of 10.8%) correlates with extremely poor object tracking performance.
- By contrast, the high compression ratio of 97.6% (a compressed size of 2.4%) correlates with 100% object tracking performance across different measures, including continuity, completeness and clarity.
- the relationship with compression size/ratio of the plot file is comparatively linear.
- the method may include the steps of: Generate, condition and output sensor data with a sensor, the sensor data relating to at least part of a sensor's field of view of a natural environment (generally over a period of time),
- Extract plot data of potential objects of interest from the conditioned sensor data (optional in some embodiments),
- the method provides an assessment of tracking capability and it would be typical for a user to respond to an assessment that there is a sub-optimal capability by upgrading the sensor, adjusting the sensor, or adding another sensor to a sensor network.
- Figure 8 shows how the compression ratio (or related measure) of the image file (in this example based on image data over a period of time) can also be used to identify scenes, fields of view or situations where the sensor is not capable of adequately tracking objects of interest.
- a threshold of 1.5MB or 2MB could be selected, and if compression of any similarly sized zone in the field of view generated a compressed file larger than the threshold value, this constitutes a warning that objects of interest cannot be adequately tracked in at least that part of the sensor's field of view.
- Figure 9 shows how the compressed file size (or related measure) of plot data similarly shows a clear relationship with the false detection rate (measured as fraction of total detections).
- This data exemplify how the compressed file size (or related measure) of the plot data is a useful measure for identifying situations where data from the sensor may be insufficient for discriminating objects of interest from clutter.
- a detection threshold could be set at around 300kB.
- the described techniques may be utilised for controlling automatic focussing of an optical apparatus such as a camera (including a video recorder), or for focussing a video projector (such as a conference room projector).
- a camera viewing a field of view captures an image or video (preferably an image), and a compression algorithm (as previously described) is applied to the whole or part of the field of view thereof, and outputs a numerical value relating to the compression efficiency (achieved or achievable).
- the capture of such data and the application of the compression algorithm is repeated after a small adjustment of the focus of the optical apparatus (in the case of focussing a projector, the focus of the projector is adjusted rather than the focus of the camera viewing the projected image) and the numerical values (related to the compression ratio) obtained are compared to determine which focus position gave the sharper image, on the basis that a less compression will be achieved with a sharply focussed image than with a poorly focussed image.
- This process may be repeatedly performed as part of a feedback loop to optimise the focus of the optical apparatus, and the optimising may be performed as a one-off action (e.g. each time a projector is turned on or a relevant button is pressed, or each time a camera is set up or primed for taking a picture), or may optionally be performed continuously during use.
- the camera preferably has a small aperture (preferably 0.01mm to 5mm) to provide a wide range of focal depth, preferably including the range 4m to 6m, and preferably having a range of focal depth equal to at least 50% of the maximum in focus distance.
- the camera is preferably mounted in the projector, and the compression algorithm is preferably implemented by a computer processor on board the projector, but alternatively on a remote computer which may be a computer which also provides an image to the projector for projection onto a screen. If the camera is mounted in the projector it is preferably mounted substantially adjacent the projector output lens and is preferably aimed (including being aim-able in use) in substantially the same direction as the projector output.
- the camera has a higher angular resolution than the projector in at least one direction, preferably at least twice, or at least four times the angular resolution of the projector.
- the camera has a smaller angular field of view than the projector and may be oriented parallel to the projector output, but optionally is adapted to change its viewing direction automatically to sample different parts of a projected image.
- the above method and apparatus are advantageous compared to known techniques such as maximising a contrast ratio, as this may not be as effective in as wide a range of circumstances - particularly scenes having blocks of artificially uniform colour (rather than natural scenes where colour and contrast variation tends to be exhibited throughout the scene), which is often the case in business and educational presentations.
- an optical imaging apparatus with an automatic focussing mechanism comprising: Optical imaging means with a focusing mechanism and an imaging lens, Means for receiving image data from a camera sensor and applying a data compression algorithm thereto to generate a numerical value related to a compression ratio achieved, and means for adjusting and optimising the focus of the focussing mechanism based on the numerical value.
- Optical imaging means with a focusing mechanism and an imaging lens Means for receiving image data from a camera sensor and applying a data compression algorithm thereto to generate a numerical value related to a compression ratio achieved, and means for adjusting and optimising the focus of the focussing mechanism based on the numerical value.
- the optical imaging means is a camera lens and sensor arrangement.
- the optical imaging means is a video projection means and there is additionally provided a camera arranged to view a projected image of the video projector displayed away from the projector on any reflective surface for providing the image data.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Geophysics And Detection Of Objects (AREA)
- Testing Or Calibration Of Command Recording Devices (AREA)
Abstract
A method and apparatus for assessing a capability to discriminate physical objects from clutter using data from a sensor in a natural environment, so that mitigating action can be taken. The assessment is based on the degree to which the sensor data, or plot data derived from the sensor data, can be compressed by a compression algorithm.
Description
A method of assessing sensor performance
The present invention relates to methods of assessing, measuring or predicting the performance of a sensor, to enable timely action to be taken to address identified limitations, and is applicable to the fields of surveillance and remote sensing.
In the past it has been known to measure the performance of a sensor in a test environment, and to choose a suitable sensor (or a suitable arrangement of sensors) as appropriate for a particular surveillance task.
While sensor performance is often viewed as a function of the sensor itself or its location, sensor performance can vary with time, mainly due to changing environmental conditions (e.g. the position of the sun, or the presence of debris due to strong wind). It would be beneficial to be able to identify a region (e.g. a region relative to the sensor, or a part of the sensor's field of view) where the sensor is unable to discriminate an object of interest from clutter.
In the past it has been known to test the performance of a sensor in situ by arranging objects of interest and checking that the sensor can identify, track and discriminate them from clutter. However, this is expensive and laborious, and cannot realistically be performed as needed (or continuously) to check that current environmental conditions are not undermining the sensor's performance.
Checking sensor performance on a periodic, occasional or continuous basis in a cost effective and practical way would enable a user to add an extra sensor or take other appropriate precautions in a timely manner.
The inventors have identified a method to facilitate this, which can be wholly automated if desired and implemented on a continuous basis if required, which appears from tests to be a useful guide to a sensor's ability to discriminate objects of interest from clutter, and which may (depending on how it is implemented) require only modest computer processing power.
According to a first aspect of the present invention there is provided a method as set out in claim 1. This method may be for assessing a capability to reliably plot and track objects.
Although the step of extracting plot data of potential objects is optional to the method, the method will be applied to a device or network of computers that is arranged to extract plot data of potential objects, for the purpose of tracking the objects. The step of extracting plot data is therefore expected to be performed in relation to an object tracking capability of an object tracking device (e.g. computer system), but need not be performed in relation to the claimed method of facilitating an assessment of that object tracking capability. Therefore preferably the step of extracting plot data is performed in conjunction with the other steps of the method, even if for a different purpose. More preferably however the step of extracting plot data is performed as part of the method of facilitating the assessment, and the compression algorithm is applied to the plot data in a later step of the method.
The inventors have tested the method of the present invention and found that the numerical value relating to the compression efficiency (e.g. the compression ratio, the compressed data size, or the amount by which the data is compressed) gives an indication of whether the sensor is able to plot and track objects reliably (i.e. discriminate objects of interest from clutter). A higher compression ratio (or smaller compressed file size) gives greater confidence in sensor performance. Suitable threshold values for making a binary determination will vary widely depending on the sensor and on whether the method is being applied to the sensor as a whole or to a specific part of its field of view, and these can readily be selected by trial and error by the person skilled in the art.
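As an illustrative sketch of this determination (Python's general-purpose zlib/DEFLATE compressor stands in for the compression algorithms discussed later; the function names and the threshold value are illustrative, not taken from the patent), the numerical value and the binary comparison might look like:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Fraction of the original size removed by lossless compression:
    near 1.0 means highly compressible data, near 0.0 means incompressible."""
    compressed = zlib.compress(data, 9)
    return 1.0 - len(compressed) / len(data)

def capability_adequate(data: bytes, threshold: float) -> bool:
    # The threshold is sensor-specific; the description suggests it is
    # selected empirically (e.g. by trial and error).
    return compression_ratio(data) >= threshold

# A calm, low-clutter scene: highly repetitive data compresses well.
calm = bytes(20000)
# A cluttered scene: noise-like data barely compresses at all.
noisy = os.urandom(20000)
```

Here a repetitive byte stream stands in for a calm scene and incompressible random data for dense clutter; the compression ratio separates the two cleanly, matching the principle that a smaller compressed file indicates greater confidence in sensor performance.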
Testing showed that in at least one embodiment the numerical value from the compression algorithm reflected the Accuracy, Clarity, Completeness and Continuity aspects of the object tracking performance of the sensor. In one experiment, the compression ratio was computed for the plot data and showed a broadly linear relationship with all four of these parameters. In another experiment, the compression ratio was computed for the conditioned image data; although the relationship was less linear and less pronounced, there was a clear monotonic positive relationship between the compression ratio and the capability of the sensor in various environments, which gives confidence that this approach may be useful with a range of sensor types, and that the method is a better guide to sensor performance than the other parameters considered.
Usefully, the method is performed twice, once with artificially introduced simulated objects (physically introduced objects are an alternative, but are far more expensive and problematic if needed repeatedly), and once without, and the compression ratio (or related measure) in the two scenarios is compared. This can be applied to a variety of scenes (e.g. different zones of a field of view) of differing complexity/clutter, and those where the sensor struggles to reliably plot the simulated objects and ignore the clutter can be identified. In such scenes there should be a marked difference between the compression ratio (or related measure) achieved for that scene and the ratio achieved where the sensor performs adequately. Verifying that the marked difference is observed in those particular situations gives confidence that the method is capable of identifying scenes where that type of sensor lacks adequate capability. Additionally, observing the compression ratios (or similar measure) achieved (in the absence of simulated objects) in situations where the sensor performs adequately and in situations where it does not enables identification and selection of a threshold numerical value, for use in future to determine whether the sensor in a particular environment is adequately capable of tracking objects. Other methods, such as trial and error, may alternatively be used to determine a suitable threshold.
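The threshold-selection step described above can be sketched as follows: given scenes already classified (e.g. via the simulated-object comparison) into those where the sensor performs adequately and those where it does not, a threshold is placed midway between the two groups' compression ratios. Zlib stands in for the compression algorithm and all names are illustrative:

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compression ratio: fraction of the original size removed."""
    return 1.0 - len(zlib.compress(data, 9)) / len(data)

def select_threshold(adequate_scenes, inadequate_scenes):
    """Place the threshold midway between the worst ratio observed where
    the sensor performed adequately and the best ratio observed where it
    did not. Scene data here are raw bytes; in practice they would be
    conditioned sensor data or plot data."""
    worst_ok = min(ratio(s) for s in adequate_scenes)
    best_bad = max(ratio(s) for s in inadequate_scenes)
    return (worst_ok + best_bad) / 2.0

# Toy calibration data: uniform scenes stand in for adequate situations,
# pseudo-random speckle for inadequate (heavily cluttered) ones.
random.seed(3)
ok_scenes = [bytes([i % 4] * 4000) for i in range(3)]
bad_scenes = [bytes(random.randrange(256) for _ in range(4000)) for _ in range(3)]
threshold = select_threshold(ok_scenes, bad_scenes)
```

The resulting threshold then serves as the predetermined criterion mentioned in the following paragraph for deciding whether the capability is adequate or inadequate.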
The reference value is normally a threshold beyond which the numerical value is indicative of acceptable plot tracking capability (above the threshold in the case of the amount by which the file size can be reduced, or below it in the case of the size of the remaining file). However, at a basic level the threshold could be a starting point for the compression algorithm (e.g. the size of the starting file before compression, in the case of the size of the final file, or zero in the case of the amount by which the file size can be reduced). Preferably the threshold is a predetermined criterion for deciding whether the capability is adequate or inadequate.
Optionally the data compression algorithm is applied to the plot data. This is expected to permit a more linear quantification of the capability of the sensor, for at least some types of sensor, which may be desirable (compared to merely being able to discriminate inadequate from adequate capability). Additionally, applying a compression algorithm to the plot data should normally be much less computationally intensive than applying it to the conditioned sensor data, as the starting file size should normally be much smaller. Optionally the numerical value quantifies the capability of the sensor, preferably having a substantially correlated relationship therewith, and ideally having a substantially linear relationship therewith (at least over a range of interest). Optionally the method includes the final step of automatically outputting an assessment of the capability of the sensor based on the numerical value. Such an assessment may be provided as an alert in the event that the capability is identified as inadequate. Alternatively the assessment may be performed by a user in view of the numerical value.
Typically, even if the plot data is not used for purposes of compression in the above method, the plot data is generated and used as part of a tracking method closely associated with the above method. Optionally the method includes a method of tracking objects based on track data generated from plot data generated from the sensor data. Preferably the two methods are used jointly to track objects of interest in a region of interest, simultaneously identifying one or more parts of that region of interest where the capability to track objects of interest is currently inadequate. Preferably this method also includes the step of taking action to mitigate the inadequate tracking capability in a particular location. The action preferably includes at least one of: adding a sensor to monitor that location, upgrading a sensor near or at that location, moving a valuable asset away from that location, or issuing instructions for a patrol of that location.
Preferably the compression algorithm is a substantially lossless compression algorithm, and ideally is a strictly lossless algorithm. The compression may use a wavelet-based compression algorithm such as the JPEG2000 algorithm, or an algorithm such as the MPEG4 algorithm. Using a lossy compression algorithm is anticipated to give a less reliable determination but may offer a satisfactory level of confidence in some situations, especially if this reduces the processing power required. The JPEG2000 algorithm was found to offer the advantage of requiring fewer computer resources than some alternatives; however, a wide range of compression algorithms could be used as suitable alternatives. Compression algorithms based on wavelets should advantageously give a more linear monotonic relationship than some other types of compression algorithm.
Optionally the method is applied to data gathered with respect to a period of time (as opposed to a single photo or single frame of a video). This is applicable to image data and/or to plot data. The length of time may be varied: for example, a long period may be used to increase the consistency and accuracy of results, while a short period may be used to minimise computer processor time usage and/or to enable repeat checks on a frequent basis. The period of time may also be selected to account for the time that objects of interest are likely to remain in the field of view. This might be a number of seconds for a road-based CCTV camera, a number of minutes for a shore-based viewing post, or an hour or more for a long-range maritime radar system. The duration normally will not exceed 24 hours. Preferably the data includes multiple frames, preferably at least 10 or at least 100 frames.
Optionally the method is applied to a sub-section of a field of view of the sensor. This enables identification of whether objects can be discriminated in that part of the field of view, which is especially useful for a challenging area or line of sight, for example looking into the direction of the sun.
More generally the field of view may be divided into a plurality of sub-sections, and the method is applied separately to each sub-section. This can be better than evaluating the whole field of view jointly, as that method may fail to identify that objects cannot be reliably distinguished in one small part of the field of view (e.g. near the sun).
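Dividing the field of view into sub-sections and applying the check to each can be sketched as follows (a toy frame of pixel intensities; zlib stands in for the image compressor, and the zone layout and names are invented for the example):

```python
import random
import zlib

def zone_ratios(frame, zones):
    """Apply the compression check separately to each sub-section of a
    2D frame of pixel intensities. Each zone is (row0, row1, col0, col1)."""
    results = {}
    for name, (r0, r1, c0, c1) in zones.items():
        raw = bytes(frame[r][c] for r in range(r0, r1) for c in range(c0, c1))
        results[name] = 1.0 - len(zlib.compress(raw, 9)) / len(raw)
    return results

# Synthetic 100x100 frame: a uniform "sky" in the top half, and wave
# glitter (pseudo-random speckle) in the bottom half.
random.seed(1)
frame = ([[40] * 100 for _ in range(50)]
         + [[random.randrange(256) for _ in range(100)] for _ in range(50)])
ratios = zone_ratios(frame, {"sky": (0, 50, 0, 100), "sea": (50, 100, 0, 100)})
```

The poorly compressible "sea" zone is flagged as the part of the field of view where discrimination is likely to fail, even though the frame as a whole would compress reasonably well.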
Optionally the method is performed by a processing unit on board the sensor. Sensors with such built-in self-diagnosis capability alleviate the need for substantial quantities of plot data to be transmitted to a computer for analysis.
Optionally the method is performed by a processing unit that is remote from the sensor, and the step of receiving sensor data or plots includes receiving via a computer network. This has the advantage that a conventional sensor network can be upgraded by the addition of a central networked computer able to analyse the performance of multiple sensors in a network.
Preferably the method is repeated periodically while the sensor is in use. The method may advantageously be repeated substantially continuously (i.e. restarting once a field of view, or alternatively data from multiple sensors, has been assessed).
Optionally the method includes the step of adding an additional sensor in response to identifying that the performance of the sensor is below a predetermined requirement.
Optionally the method includes the step of upgrading the sensor by replacing it with a higher performance sensor in response to identifying that the performance of the sensor is below a predetermined requirement.
Optionally the method includes modifying the sensor. This may include altering a gain, contrast, colour filtering parameter, or other parameter. With this approach it may be possible to optimise the sensor on a one-off or continuous basis, as conditions change, most particularly ambient light levels. Optionally the method includes step-wise adjustment and optimisation of sensor parameters and/or of signal conditioning parameters based on feedback on the capability of the sensor.
Optionally the method includes modifying the object identifying algorithm in response to identifying that the performance of the sensor is below a predetermined requirement. This may include altering threshold values for the size, brightness or shape of objects to be detected. This may prove useful if there is a change in the character of the clutter in view, for example during a snowstorm, hailstorm or sandstorm, on a windy autumn day with an abundance of leaf litter, or if there is an unexpected increase in the abundance of a particular species of animal (such as flies or pigeons).
Optionally the method includes providing a map via a graphical user interface and highlighting one or more regions where object tracking performance is below a threshold requirement. This has the advantage of enabling a user to quickly understand the performance of a network of sensors across a surveillance area. Preferably a surveillance map is used and the sensor performance information is overlaid along with information about objects being tracked.
According to a second aspect of the present invention there is provided an apparatus as set out in the second independent claim.
Such an apparatus may be built in to a sensor. Alternatively the apparatus may be a computer arranged to receive data from a sensor (or typically multiple sensors) via a computer connection or network.
There may similarly be provided a computer program adapted to control the apparatus of the second aspect to perform the method of the first aspect.
The term 'natural environment' means that the sensor is viewing a scene or region where the user has not artificially introduced objects for the purpose of determining whether the sensor is able to discriminate those particular objects. Instead the method is able to take advantage of serendipitous passing objects (e.g. passing vehicles) and naturally occurring clutter (e.g. leaves and birds).
'Plots' are the identified locations of things at a particular point in time, and this typically includes both clutter and objects of interest.
The term 'conditioned' refers to signal conditioning, which is commonly applied to the raw image output from a sensor such as a camera, and which involves at least one of adjusting or controlling the bandwidth limiting, amplification, mapping into a log or linear output, applying maximum and minimum limits, or range or gain, and typically but not necessarily involves converting the data from analogue to digital data. Signal conditioning is typically performed on board the sensor, but can be performed remotely from the sensor. Signal conditioning is important as it ensures that multiple sensors viewing the same or different scenes will provide comparable and useable image data. Signal conditioning typically takes into account the nature of the raw image data on a continuous basis, most typically the overall signal level (which in the case of a camera relates to an ambient light level), to help ensure that useful information is not lost during the signal conditioning process. The signal conditioning process may optionally include more advanced signal processing steps, such as conversion by Fourier transform or any other algorithm that outputs an image representative of the scene viewed by the sensor.
'Compression algorithm' covers conventional data compression algorithms which produce compressed data, however the method does not typically utilize the compressed data (it might be used for unrelated reasons, such as if the data needs to be transmitted across a network and the available bandwidth is limited), therefore the term includes algorithms which assess the amount of information or entropy (or more generally the amount that the data could be compressed) in the data even if they do not actually perform data compression.
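In that broad sense, a zeroth-order Shannon entropy estimate is one example of such an algorithm: it scores how far the data could be compressed without emitting any compressed output. This is a sketch only; a real codec such as JPEG2000 would also model spatial structure, which a per-byte histogram ignores:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Zeroth-order Shannon entropy estimate in bits per byte.
    0.0 means maximally compressible (no information, e.g. a blank scene);
    8.0 means incompressible (noise-like, e.g. dense clutter)."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A low entropy estimate plays the same role as a high compression ratio in the method above: it indicates a scene simple enough for objects of interest to be discriminated from clutter.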
The term "a capability to discriminate physical objects from clutter" refers to the reliability with which such objects could in theory be discriminated using the data directly or indirectly from the sensor with limited processor time, although in practice the method may typically be practised in conjunction with an object tracking apparatus and method.
The compression algorithm may be applied to the conditioned sensor data; however, further steps are possible, such as transforming the signal data into a reciprocal space (e.g. via a fast Fourier transform) prior to applying the compression algorithm. Such steps may be considered part of the conditioning process. Alternatively the conditioning process may be very simple, such as controlling the gain, range or threshold of the sensor output.
Embodiments of the present invention will now be described by way of example only and with reference to the accompanying figures in which:
Figure 1 is a diagram illustrating how data is gathered in an object tracking sensor,
Figure 2 is an example of an image of a field of view of an optical sensor, highlighting four zones selected for analysis,
Figure 3 is a table showing measured parameters of the four zones,
Figure 4 shows how continuity of track data gathered is lower when the plot data compression ratio is lower,
Figure 5 shows how clarity of track data gathered is lower when the plot data compression ratio is lower,
Figure 6 shows how completeness of track data gathered is lower when the plot data compression ratio is lower,
Figure 7 shows a flow diagram of a method according to one embodiment, and
Figure 8 shows how the compressed (residual) file size of grey-scale conditioned image data has a positive monotonic relationship with the number of objects in the field of view.
Description of the preferred embodiment
Referring to figure 1, a sensor gathers data about a region or field of view which is limited by the properties of the sensor itself, and applies a signal conditioning method to optimise the output, typically a contrast-adjusted digital output. An object identification algorithm (Plot extraction) is then applied to generate plots, and then a track identification algorithm (Track extraction) is applied to generate tracks of identified objects. If successful, this would enable a user to take action in response to the plot data, which might involve intercepting a tracked object, taking evasive action or calling in a law enforcement agency. However, the process may fail to reliably track objects of interest if there is too much clutter in the field of view, such as birds, leaves etc. Optional steps include forming track data from the plot data, showing the tracks of objects being tracked, taking action in response to the track data (e.g. raising an alarm and calling in a law enforcement agency), and/or taking corrective action in response to identification that the object tracking capability of the sensor is inadequate (e.g. adding another sensor).
Figure 2 shows an image captured by an optical or infra-red sensor, although the method is applicable to many types of sensor such as an infra-red sensor, a visible band sensor, an electro- optical sensor, a radio wave sensor such as a radar and radar tracking apparatus, a terahertz sensor, an array of electrical or pressure sensors or an ultrasonic sensor and any other type of sensor or sensor array that can generate an output allowing identification of locations of objects in 2D or 3D. In this example, four zones of a scene have been selected for analysis, with zone 4 showing the greatest density of clutter - in this case mainly light reflected from waves. The dense clutter causes more plots to be generated which makes it difficult to identify and track objects such as ships.
Figure 3 shows information about the four zones, including the compression ratio of the image file and plot file, the mean intensity, and information about plot detections. While the image file shows a greater compression ratio in zone 1 (compressed to 29.9% of original size) and a lesser ratio in zone 4 (compressed only to 67.4%), this was a less linear and perhaps less reliable indicator of the ability of the sensor to track objects than the compression ratio of the plot file. Similarly, the intensity and plot detections were less useful than the plot file compression ratio; however, compression of the image file produces a useable result, and with a different type of sensor it may produce a better result than using the plot file.
The linearity of the relationship between the plot file compression ratio and aspects of plot tracking such as continuity, completeness and clarity is shown for this embodiment in figures 4, 5 and 6.
In these diagrams, for zone 4 the modest compression ratio of 89.2% (a compressed size of 10.8% of the original) correlates with extremely poor object tracking performance. In contrast, for zone 1 the high compression ratio of 97.6% (a compressed size of 2.4%) correlates with 100% object tracking performance across different measures including continuity, completeness and clarity. Usefully, the relationship with the compression size/ratio of the plot file is comparatively linear.
As shown in figure 7 the method may include the steps of:
Generate, condition and output sensor data with a sensor, the sensor data relating to at least part of a sensor's field of view of a natural environment (generally over a period of time),
Extract plot data of potential objects of interest from the conditioned sensor data (optional in some embodiments),
Apply a data compression algorithm to at least one of the conditioned sensor data and the plot data, to generate a numerical value relating to the compression efficiency, and
Output an indicator of the capability to reliably track objects, based on a comparison of a generated numerical value and a reference value.
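The steps above can be sketched end to end as follows. This is a minimal illustration only: `extract_plots` stands for whatever object identification algorithm is in use, zlib stands in for the compression algorithm, and the plot serialisation format is invented for the example:

```python
import random
import zlib

def assess_tracking_capability(conditioned_frames, extract_plots, reference_ratio):
    """Sketch of the figure-7 flow: plot data is extracted from the
    conditioned frames, a compression algorithm is applied to it, and
    the resulting numerical value is compared against a reference value."""
    # Step 2: extract plot data of potential objects of interest and
    # serialise it (format invented here) into a single byte stream.
    plot_bytes = b"".join(
        b"%d,%d;" % plot
        for frame in conditioned_frames
        for plot in extract_plots(frame)
    )
    # Step 3: apply the compression algorithm to obtain a numerical value.
    ratio = 1.0 - len(zlib.compress(plot_bytes, 9)) / len(plot_bytes)
    # Step 4: output an indicator based on comparison with the reference.
    return {"ratio": ratio, "adequate": ratio >= reference_ratio}

# A persistent object yields self-similar plot data frame after frame...
steady = assess_tracking_capability(range(100), lambda f: [(10, 20)], 0.5)
# ...whereas dense clutter yields scattered, poorly compressible plots.
random.seed(2)
cluttered = assess_tracking_capability(
    range(100),
    lambda f: [(random.randrange(1000), random.randrange(1000)) for _ in range(20)],
    0.5,
)
```

The steady scene's plot stream compresses heavily and passes the reference comparison, while the cluttered scene's stream compresses markedly less, mirroring the zone 1 versus zone 4 behaviour reported above.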
The method provides an assessment of tracking capability and it would be typical for a user to respond to an assessment that there is a sub-optimal capability by upgrading the sensor, adjusting the sensor, or adding another sensor to a sensor network.
Figure 8 shows how the compression ratio (or related measure) of the image file (in this example based on image data over a period of time) can also be used to identify scenes, fields of view or situations where the sensor is not capable of adequately tracking objects of interest. In this example a threshold of 1.5MB or 2MB could be selected, and if compression of any similarly sized zone in the field of view generated a compressed file larger than the threshold value, this constitutes a warning that objects of interest cannot be adequately tracked in at least that part of the sensor's field of view.
Figure 9 shows how the compressed file size (or related measure) of plot data similarly shows a clear relationship with the false detection rate (measured as a fraction of total detections). These data exemplify how the compressed file size (or related measure) of the plot data is a useful measure for identifying situations where data from the sensor may be insufficient for discriminating objects of interest from clutter. In this example a detection threshold could be set at around 300kB.
Further embodiments of the invention are set out in the claims.
In addition to the invention described above, the inventors have further identified that the described techniques (applying compression algorithms as described above, however specifically to sensor data, which may or may not be conditioned sensor data, but optionally is conditioned sensor data) may be utilised for controlling automatic focussing of an optical apparatus such as a camera (including a video recorder), or for focussing a video projector (such as a conference room projector). In this case a camera viewing a field of view captures an image or video (preferably an image), and a compression algorithm (as previously described) is applied to the whole or part of the field of view thereof, and outputs a numerical value relating to the compression efficiency (achieved or achievable).
The capture of such data and the application of the compression algorithm is repeated after a small adjustment of the focus of the optical apparatus (in the case of focussing a projector, the focus of the projector is adjusted rather than the focus of the camera viewing the projected image), and the numerical values (related to the compression ratio) obtained are compared to determine which focus position gave the sharper image, on the basis that less compression will be achieved with a sharply focussed image than with a poorly focussed image. This process may be repeatedly performed as part of a feedback loop to optimise the focus of the optical apparatus, and the optimising may be performed as a one-off action (e.g. each time a projector is turned on or a relevant button is pressed, or each time a camera is set up or primed for taking a picture), or may optionally be performed continuously during use.
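The focus-optimisation loop described above can be sketched as follows. This is an illustrative sweep under stated assumptions: zlib stands in for the unspecified compression algorithm, and `capture` and `set_focus` are hypothetical callbacks into the camera or projector hardware, not functions defined by the patent.

```python
import zlib

def compressed_size(image_bytes: bytes) -> int:
    """Sharpness proxy: a sharply focussed image retains fine detail and
    therefore compresses less, yielding a larger compressed size."""
    return len(zlib.compress(image_bytes, level=6))

def autofocus(capture, set_focus, candidate_positions):
    """One-off optimisation sweep: capture an image at each candidate focus
    position and keep the position whose image compressed least (i.e. had
    the largest compressed size).

    `capture` returns raw image bytes; `set_focus` drives the focussing
    mechanism. Both are assumed hardware interfaces.
    """
    best_position, best_score = None, -1
    for position in candidate_positions:
        set_focus(position)
        score = compressed_size(capture())
        if score > best_score:
            best_position, best_score = position, score
    set_focus(best_position)  # settle on the sharpest position found
    return best_position
```

For continuous operation, the same comparison would run inside a feedback loop, nudging the focus in whichever direction increases the compressed size.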
In the case of using the techniques for focussing a projector, the camera preferably has a small aperture (preferably 0.01 mm to 5 mm) to provide a wide range of focal depth, preferably including the range 4 m to 6 m, and preferably having a range of focal depth equal to at least 50% of the maximum in-focus distance. The camera is preferably mounted in the projector, and the compression algorithm is preferably implemented by a computer processor on board the projector, but alternatively on a remote computer, which may be a computer that also provides an image to the projector for projection onto a screen. If the camera is mounted in the projector it is preferably mounted substantially adjacent the projector output lens and is preferably aimed (including being aim-able in use) in substantially the same direction as the projector output. Preferably the camera has a higher angular resolution than the projector in at least one direction, preferably at least twice, or at least four times, the angular resolution of the projector. Optionally the camera has a smaller angular field of view than the projector and may be oriented parallel to the projector output, but optionally is adapted to change its viewing direction automatically to sample different parts of a projected image.
The above method and apparatus are advantageous compared to known techniques such as maximising a contrast ratio, which may not be as effective in as wide a range of circumstances - particularly for scenes having blocks of artificially uniform colour (rather than natural scenes, where colour and contrast variation tends to be exhibited throughout the scene), as is often the case in business and educational presentations.
According to this second invention there is provided an optical imaging apparatus with an automatic focussing mechanism comprising: optical imaging means with a focussing mechanism and an imaging lens; means for receiving image data from a camera sensor and applying a data compression algorithm thereto to generate a numerical value related to a compression ratio achieved; and means for adjusting and optimising the focus of the focussing mechanism based on the numerical value. There is also provided a method of automatically focussing an optical imaging apparatus, having the steps of: receiving image data from a camera sensor and applying a data compression algorithm thereto to generate a numerical value related to a compression ratio achieved, and adjusting and optimising the focus of a focussing mechanism of the optical imaging means based on the numerical value.
In the case of the optical imaging apparatus being an automatically focussing camera, the optical imaging means is a camera lens and sensor arrangement. In the case of the optical imaging apparatus being an automatically focussing video projector, the optical imaging means is a video projection means and there is additionally provided a camera arranged to view a projected image of the video projector displayed away from the projector on any reflective surface for providing the image data.
Claims
1. A method of assessing a capability to discriminate physical objects of interest from clutter using data from a sensor in a natural environment, the assessment being for enabling action to be taken to mitigate risk associated with inadequate capability, the method having the steps of:
generating, conditioning and outputting sensor data with a sensor, the sensor data relating to at least part of a sensor's field of view of a natural environment,
outputting an indicator of the capability to discriminate physical objects of interest from clutter using data from the sensor in the at least part of the sensor's field of view, based on a comparison of a generated numerical value and a reference value,
characterised in that the method has the further step of applying a data compression algorithm to at least one of conditioned sensor data and plot data of potential physical objects of interest extracted from the conditioned sensor data, to generate the numerical value, such that the numerical value relates to the compression efficiency achieved by the data compression algorithm.
2. The method of claim 1 where the data compression algorithm is applied to received plot data.
3. The method of claim 1 or 2 where the data compression algorithm is a substantially lossless compression algorithm.
4. The method of any preceding claim where the data compression algorithm is applied to data gathered with respect to a period of time.
5. The method of any preceding claim where the sensor data relates to a sub-section of a field of view of the sensor.
6. The method of claim 5 where the field of view of the sensor is divided into a plurality of subsections, and the data compression algorithm applied separately to data relating to each subsection.
7. The method of any preceding claim where the compression algorithm is implemented by a processing unit on-board the sensor.
8. The method of any one of claims 1 to 6 where the compression algorithm is implemented by a computer processing unit that is remote from the sensor, and the step of receiving sensor data or plot data includes receiving via a computer network.
9. The method of any preceding claim where the method is repeated periodically while the sensor is in use.
10. The method of any preceding claim having the further step of adding an additional sensor in response to identifying that the capability to discriminate physical objects of interest from clutter is below a predetermined requirement.
11. The method of any preceding claim having the further step of upgrading the sensor by replacing it with a higher performance sensor in response to identifying that the capability to discriminate physical objects of interest from clutter is below a predetermined requirement.
12. The method of any preceding claim having the further step of modifying the sensor in response to identifying that the capability to discriminate physical objects of interest from clutter is below a predetermined requirement.
13. The method of any preceding claim having the further step of modifying an object identifying algorithm of an object tracking apparatus in response to identifying that the capability to discriminate physical objects of interest from clutter is below a predetermined requirement.
14. The method of any preceding claim having the further step of providing a map via a graphical user interface and highlighting one or more regions where object tracking performance is below a threshold requirement.
15. Apparatus for assessing a capability to discriminate physical objects of interest from clutter using data from a sensor in a natural environment, the assessment being for enabling action to be taken to mitigate risk associated with inadequate capability,
the apparatus comprising means for receiving at least one of conditioned sensor data relating to at least part of a sensor's field of view of a natural environment over a period of time, and plot data of potential physical objects of interest extracted from such conditioned sensor data,
the apparatus being adapted to output an indicator of a capability to discriminate physical objects of interest from clutter using data from the sensor in the at least part of the sensor's field of view, based on a comparison of a generated numerical value and a reference value,
and characterised in that the apparatus is adapted to apply a data compression algorithm to the at least one of received conditioned sensor data and received plot data, to generate the numerical value, such that the numerical value relates to the compression efficiency achieved by the data compression algorithm.
16. A computer program adapted to control the apparatus of claim 15 to perform the method of any one of claims 1 to 14.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1320369.0A GB201320369D0 (en) | 2013-11-19 | 2013-11-19 | A method of assessing sensor performance |
PCT/GB2014/000476 WO2015075413A1 (en) | 2013-11-19 | 2014-11-18 | A method of assessing sensor performance |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3072292A1 true EP3072292A1 (en) | 2016-09-28 |
Family
ID=49883814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14803187.5A Withdrawn EP3072292A1 (en) | 2013-11-19 | 2014-11-18 | A method of assessing sensor performance |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3072292A1 (en) |
GB (2) | GB201320369D0 (en) |
WO (1) | WO2015075413A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3748395B1 (en) * | 2019-06-06 | 2023-01-04 | Infineon Technologies AG | Method and apparatus for compensating stray light caused by an object in a scene that is sensed by a time-of-flight camera |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5363311A (en) * | 1992-09-04 | 1994-11-08 | Hughes Aircraft Company | Data compression system using response waveform discrimination |
US5602760A (en) * | 1994-02-02 | 1997-02-11 | Hughes Electronics | Image-based detection and tracking system and processing method employing clutter measurements and signal-to-clutter ratios |
JPH0943339A (en) * | 1995-07-26 | 1997-02-14 | Nec Corp | Radar video compression device |
US6717545B2 (en) * | 2002-03-13 | 2004-04-06 | Raytheon Canada Limited | Adaptive system and method for radar detection |
JP2008070258A (en) * | 2006-09-14 | 2008-03-27 | Toshiba Corp | Radar apparatus and clutter map compressing method of same |
CN102944873B (en) * | 2012-11-27 | 2014-05-14 | 西安电子科技大学 | Low-altitude target detection method based on multi-frequency point echo amplitude reversed order statistics |
CN103353594B (en) * | 2013-06-17 | 2015-01-28 | 西安电子科技大学 | Two-dimensional self-adaptive radar CFAR (constant false alarm rate) detection method |
2013
- 2013-11-19 GB GBGB1320369.0A patent/GB201320369D0/en not_active Ceased
2014
- 2014-11-17 GB GB1420350.9A patent/GB2521267B/en active Active
- 2014-11-18 WO PCT/GB2014/000476 patent/WO2015075413A1/en active Application Filing
- 2014-11-18 EP EP14803187.5A patent/EP3072292A1/en not_active Withdrawn
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2015075413A1 * |
Also Published As
Publication number | Publication date |
---|---|
GB2521267A (en) | 2015-06-17 |
GB2521267B (en) | 2018-06-13 |
GB201420350D0 (en) | 2014-12-31 |
WO2015075413A1 (en) | 2015-05-28 |
GB201320369D0 (en) | 2014-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020078229A1 (en) | Target object identification method and apparatus, storage medium and electronic apparatus | |
US10070053B2 (en) | Method and camera for determining an image adjustment parameter | |
CN112740023B (en) | Machine learning system and data fusion for optimizing deployment conditions for detecting corrosion under insulation | |
KR102195706B1 (en) | Method and Apparatus for Detecting Intruder | |
US20160260306A1 (en) | Method and device for automated early detection of forest fires by means of optical detection of smoke clouds | |
CN111210399B (en) | Imaging quality evaluation method, device and equipment | |
CN109508583B (en) | Method and device for acquiring crowd distribution characteristics | |
KR101076734B1 (en) | Device for monitoring forest fire of information analysis type and method therefor | |
CN110956104A (en) | Method, device and system for detecting overflow of garbage can | |
CN110458126B (en) | Pantograph state monitoring method and device | |
CN103324919A (en) | Video monitoring system based on face recognition and data processing method thereof | |
KR102559586B1 (en) | Structural appearance inspection system and method using artificial intelligence | |
CN209433517U (en) | It is a kind of based on more flame images and the fire identification warning device for combining criterion | |
US20210225146A1 (en) | Image-based disaster detection method and apparatus | |
CN109409173B (en) | Driver state monitoring method, system, medium and equipment based on deep learning | |
WO2015075413A1 (en) | A method of assessing sensor performance | |
WO2020031643A1 (en) | Monitoring information recording apparatus, monitoring information recording system, control method for monitoring information recording apparatus, monitoring information recording program, and recording medium | |
CN108692709A (en) | A kind of farmland the condition of a disaster detection method, system, unmanned plane and cloud server | |
CN110969875B (en) | Method and system for road intersection traffic management | |
Brown et al. | Machine vision for rat detection using thermal and visual information | |
CN111355948A (en) | Method of performing an operational condition check of a camera and camera system | |
CN115018742B (en) | Target detection method, device, storage medium, electronic equipment and system | |
CN114663404B (en) | Line defect identification method and device, electronic equipment and storage medium | |
CN109089052A (en) | A kind of verification method and device of target object | |
KR102705211B1 (en) | Intelligent integrated analysis system and method using metadata from lidar, radar and camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20160517 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the european patent |
Extension state: BA ME |
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20181001 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
18W | Application withdrawn |
Effective date: 20200127 |