GB2582988A - Object classification - Google Patents

Object classification

Info

Publication number
GB2582988A
Authority
GB
United Kingdom
Prior art keywords
time
sensor
location
classifying
property
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1905256.2A
Other versions
GB201905256D0 (en)
Inventor
Sonander Sean
Kolev Denis
Markarian Garick
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rinicom Ltd
Original Assignee
Rinicom Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rinicom Ltd filed Critical Rinicom Ltd
Priority to GB1905256.2A priority Critical patent/GB2582988A/en
Publication of GB201905256D0 publication Critical patent/GB201905256D0/en
Publication of GB2582988A publication Critical patent/GB2582988A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0017Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
    • G08G5/0026Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located on the ground
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/144Image acquisition using a slot moved over the image; using discrete sensing elements at predetermined points; using automatic curve following means
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0047Navigation or guidance aids for a single aircraft
    • G08G5/0069Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0073Surveillance aids
    • G08G5/0082Surveillance aids for monitoring traffic from a ground station
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/662Transmitting camera control signals through networks, e.g. control via the Internet by using master/slave camera arrangements for affecting the control of camera image capture, e.g. placing the camera in a desirable condition to capture a desired image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Object classification method comprising: a first sensor 212 detecting an object at a first time (12, Fig. 1); determining a property of the object's motion; predicting, from the property, the object's location at a second time (12A, Fig. 1); a second sensor 214 obtaining further, high-resolution, object information at the second time in the predicted location; and classifying the object in dependence thereon. The first sensor may be a stationary fixed field of view (FOV) camera. The second sensor may be a pan-tilt-zoom (PTZ) camera. The sensors may comprise a radar antenna. The second time may be dependent on an adjustment of the sensors. The detected property may be a: location; displacement; velocity; and/or acceleration. A plurality of properties may be detected at different times. An image of the object, obtained at the second time, may be classified by a neural network to determine object: type; size; make and/or model; risk; and confidence score. When detecting a plurality of objects, they may be sorted by drone likelihood and/or threat level, with the property of the highest ranked object determined first. The object may be determined to be a drone based on the property or a determined trajectory.

Description

Intellectual Property Office Application No. GB1905256.2 RTM Date: 21 NT 2020
The following terms are registered trademarks and should be read as such wherever they occur in this document: WiFi
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo

Object classification
Field of disclosure
The present disclosure relates to a method of classifying objects. In particular, the present disclosure relates to a method and apparatus for detecting and classifying drones.
Background
In recent years, the use of airborne systems, such as drones, has increased significantly. These drones are often small, fast moving, and/or difficult to distinguish. This can be problematic for persons attempting to identify drones within an area.
According to at least one aspect of the disclosure herein, there is described a method of classifying an object, the method comprising: at a first sensor: detecting an object at a first time; and determining a property of the object; predicting a location of the object at a second time based on the property; at a second sensor: obtaining information relating to the predicted location at the second time; and classifying the object in dependence on the information. The property may be a value of a property. The method enables two sensors to be used in combination to capture a high quality image of an object that is useable to classify features of the object.
Preferably, one of the first sensor and the second sensor comprises at least one of: a camera; and a radar antenna. Optionally, the first sensor is fixed; optionally, the second sensor is moveable. Optionally, the first sensor has a fixed field of view; optionally, the second sensor has an alterable field of view. Optionally, the first sensor is an electro-optic (EO) camera; optionally, the second sensor is a pan-tilt-zoom (PTZ) camera.
Preferably, obtaining information relating to the predicted location at the second time comprises imaging the location at the second time.
Preferably, the method comprises moving the second sensor to the predicted location before the second time. By pre-positioning the camera, a high-resolution image can be obtained by zooming into the predicted location. Tracking the object while zoomed in, or zooming in while maintaining a track on the object, might not be possible; the pre-positioning therefore enables capture of a high-quality image of the object that might not otherwise be obtainable.
Optionally, the predicted location comprises at least one of: an area, preferably an area of less than 10 m2, more preferably an area of less than 1 m2; a volume, preferably a volume of less than 100 m3, more preferably a volume of less than 10 m3, yet more preferably a volume of less than 1 m3; an azimuth; an altitude; and a solid angle, preferably a solid angle of less than 1 steradian, more preferably a solid angle of less than 0.5 steradians.
Optionally determining a property comprises determining at least one of: a location; a displacement; a velocity; an acceleration; a trajectory; a size; a means of propulsion; and a shape.
Optionally, the obtained information comprises at least one of: an image; a type of object; a make and/or model; a size; a means of propulsion; and a shape.
Preferably, classifying the object comprises classifying based on an image.
Optionally, classifying the object comprises at least one of: identifying a characteristic of the object; identifying a type of the object; identifying a size of the object; identifying make and/or model of the object; and assigning a risk level to the object.
Optionally, the method further comprises determining a plurality of properties of the object, preferably wherein the plurality of properties are obtained at different times. The imaging may be repeated so that the object is imaged multiple times, where each image is used to evaluate different properties of the object. Different properties might be determinable based on the location of the object relative to the classification system, or due to the object altering its behaviour between the capture of the images (e.g. speeding up or changing direction).
Optionally, classifying the object comprises presenting a likelihood of successful classification. Optionally, classifying the object comprises comparing the obtained information to a database. Optionally, classifying the object comprises performing image recognition and/or pattern recognition. Optionally, classifying the object comprises using a neural network.
Optionally, the method comprises determining whether the object is a drone based on the property; optionally, the method comprises determining whether the object is a drone based on a determined trajectory. Optionally, the determining comprises using a neural network.
Optionally, the method comprises training the neural network in dependence on the accuracy of the output of the neural network. This enables the operation of the neural network to be improved by indicating whether prior predictions were accurate.
Preferably, the second time is selected in dependence on a parameter of the first and/or second sensor, such as: an adjustment time; and a movement speed. Typically, there is a minimum movement time of the second sensor relating to the second sensor beginning to move; after this time the second sensor may be able to move rapidly.
By predicting the second location in dependence on the time it will take the second sensor to move, it can be ensured that the second sensor has sufficient time to pre-position.
The second time may be selected in dependence on a feature of the system used for implementing the method, such as a processing time and/or a calculation time.
Optionally, the method comprises detecting a plurality of objects.
According to another aspect of the disclosure herein, there is described a method of classifying objects, the method comprising: at a first sensor: detecting a plurality of objects at a first time; and determining a property of at least one of the objects; predicting a location of the object at a second time based on the property; at a second sensor: obtaining information relating to the predicted location at the second time; and classifying the object in dependence on the information.
Optionally, the method comprises sorting the plurality of objects, preferably sorting comprises sorting by at least one of: location; speed; size; likelihood of being a drone; and a threat level. This enables the second sensor to focus on classifying objects determined to be high priority before considering lower priority objects.
Preferably, the determining a property of at least one of the objects is dependent on an object rank, preferably wherein determining a property comprises determining a property of the object with the highest rank.
Optionally, the method comprises: at the first sensor: detecting the object at a third time; and determining a second property of the object; predicting a further location of the object at a fourth time based on the property; at the second sensor obtaining further information relating to the further predicted location at the fourth time; and classifying the object in dependence on the further obtained information.
Preferably, the information and the further information comprise different types of information.
Preferably, detecting the object at a third time occurs in dependence on a classification being beneath a predetermined classification threshold. This enables multiple images of an object to be used for classification where the information from one image is insufficient to classify the object.
Optionally, the method comprises: at the second sensor: determining a property of the object; predicting a further location of the object at a third time based on the property; at a third sensor: obtaining further information relating to the further predicted location at the third time; and classifying the object in dependence on the further obtained information.
The use of three sensors typically comprises the use of increasingly high-resolution sensors to obtain a high quality image based on an object initially detected by a low resolution sensor. The use of an intermediate sensor enables prediction of a more accurate predicted location for the third sensor.
According to yet another aspect of the present disclosure, there is described a computer program product comprising software code adapted when executed to carry out any of the methods described herein.
According to yet another aspect of the present disclosure, there is described an apparatus adapted to execute the computer program product.
According to yet another aspect of the present disclosure, there is described an apparatus for classifying an object, the apparatus comprising: means for detecting the object at a first time; means for determining a property of the object; means for predicting a location of the object at a second time based on the property; means for obtaining information relating to the determined location at the second time; and means for classifying the object in dependence on the obtained information. The means for detecting and the means for obtaining information typically comprise (preferably separate) sensors. The means for predicting and the means for classifying typically comprise a processor.
According to yet another aspect of the present disclosure, there is described an apparatus for classifying an object, the apparatus comprising: a first sensor adapted to: detect the object at a first time; and determine a property of the object; a second sensor adapted to: obtain information relating to a location at a second time; and a processor adapted to: predict the location of the object at the second time based on the property; and classify the object in dependence on the obtained information.
Also described here is a method of image processing, useable to classify objects based on features in an image.
The disclosure extends to any novel aspects or features described and/or illustrated herein.
Further features of the disclosure are characterised by the other independent and dependent claims.
It will also be appreciated that the methods can be implemented, at least in part, using computer program code. According to another aspect of the present disclosure, there is therefore provided computer software or computer program code adapted to carry out these methods described above when processed by a computer processing means. The computer software or computer program code can be carried by a computer readable medium, and in particular a non-transitory computer readable medium. The medium may be a physical storage medium such as a Read Only Memory (ROM) chip. Alternatively, it may be a disk such as a Digital Video Disk (DVD-ROM) or Compact Disk (CD-ROM). It could also be a signal such as an electronic signal over wires, an optical signal, or a radio signal such as to a satellite or the like. The disclosure also extends to a processor running the software or code, e.g. a computer configured to carry out the methods described above.
Any feature in one aspect of the disclosure may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa.
Furthermore, features implemented in hardware may be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
Any apparatus feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the disclosure can be implemented and/or supplied and/or used independently.
The disclosure also provides a computer program and a computer program product comprising software code adapted, when executed on a data processing apparatus, to perform any of the methods described herein, including any or all of their component steps.
The disclosure also provides a computer program and a computer program product comprising software code which, when executed on a data processing apparatus, comprises any of the apparatus features described herein.
The disclosure also provides a computer program and a computer program product having an operating system, which supports a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The disclosure also provides a computer readable medium having stored thereon the computer program as aforesaid.
The disclosure also provides a signal carrying the computer program as aforesaid, and a method of transmitting such a signal.
The disclosure extends to methods and/or apparatus substantially as herein described with reference to the accompanying drawings.
The disclosure will now be described by way of example, with reference to the accompanying drawings in which:
Figure 1 shows a system containing a number of objects;
Figure 2 shows a detection system useable for classifying objects;
Figure 3 is an illustration of a generic computer device on which a method of detecting and classifying objects can be implemented;
Figure 4 is a flowchart of a method of detecting and classifying objects;
Figure 5 is a flowchart of a method of adjusting a camera to image and classify drones;
Figure 6 is a flowchart of a detailed method of adjusting a camera to image and classify drones;
Figure 7 is a flowchart of a method of sorting and classifying objects;
Figure 8 is a flowchart of a method of object detection;
Figure 9 is a flowchart of a method of object ranking;
Figure 10 is a flowchart of a method of object classification; and
Figure 11 shows a detection system comprising a plurality of detection subsystems.
Detailed description
Referring to Figure 1, there is shown a system 1 that comprises a number of objects 12, 14, 16. Typically, the objects are aerial objects and, typically, at least one of the objects comprises a drone 12. Each of the objects 12, 14, 16 is capable of changing position so that over a period of time the drone 12 moves from a first location to a second location 12A.
The system further comprises a detection system 10. The detection system 10 is arranged to detect and classify the objects 12, 14, 16. More specifically, the detection system 10 is typically arranged to determine for each of the objects 12, 14, 16 whether that object is a drone; typically, the system is further arranged to classify each object that is determined to be a drone as a type of drone.
Referring to Figure 2, the detection system 10 comprises a mounting body 200 and a classification head 210. The classification head 210 is mounted on the mounting body 200.
The mounting body 200 comprises an altitude adjustor 202 and an azimuth adjustor 204 which are capable of respectively altering the altitude and azimuth of the classification head 210. More specifically, the altitude adjustor 202 is capable of tilting the classification head 210 about a first axis such that the classification head moves between positions where the classification head 210 points skywards and positions where the classification head 210 points towards the ground. The azimuth adjustor 204 is capable of panning the classification head 210 about a second axis that is perpendicular to the first axis. The combination of the altitude adjustor 202 and the azimuth adjustor 204 enables the classification head 210 to be pointed in any direction. Typically, the altitude adjustor 202 is arranged to move through a limited range of altitudes, where the range of altitudes is between -20 degrees and 90 degrees relative to the ground. Typically, the azimuth adjustor 204 is arranged to rotate through 360 degrees. These typical ranges enable the classification head to be pointed at any object in the sky. It will be appreciated that other ranges of movement may be used.
The classification head 210 comprises a fixed electro-optical (EO) camera 212 and a pan-tilt-zoom (PTZ) camera 214. The EO camera 212 typically has a wide field of view (FOV) and so is capable of imaging objects within a large area. The PTZ camera 214 typically has a high resolution and a significant zoom capability and so is capable of imaging objects within a small area in high detail.
The classification head 210 further comprises a PTZ altitude adjustor 216 and a PTZ azimuth adjustor 218 that are configured to tilt and pan the PTZ camera. The operation of the PTZ altitude adjustor 216 and the PTZ azimuth adjustor 218 is comparable to that of the altitude adjustor 202 and the azimuth adjustor 204 of the mounting body 200.
The detection system 10 is capable of detecting an object using the fixed EO camera 212; positioning the classification head 210 to point towards the object using the altitude adjustor 202 and the azimuth adjustor 204; positioning the PTZ camera 214 to point towards a small area in which the object is or will be located using the PTZ altitude adjustor 216 and the PTZ azimuth adjustor 218; and capturing a high resolution image of the object using the PTZ camera 214.
In some embodiments, the detection system 10 contains other detection means instead of, or in addition to, the fixed EO camera 212. In particular, the classification system may contain a number of other detection means so that it can detect objects at any position in the sky. This may comprise, for example, an array of EO cameras with overlapping FOVs. In some embodiments, the detection system 10 comprises a radar sensor, a sonar sensor, and/or a radiation sensor. In some embodiments, the detection system 10 comprises a fixed radar sensor with a wide or long field of view (e.g. a non-scanning and/or low frequency radar) and a moveable radar sensor with a higher resolution (e.g. a scanning and/or high frequency radar).
Referring to Figure 3, the detection system 10 typically comprises a computer device 3000; the computer device 3000 implements aspects of the detection method. The computer device 3000 comprises a processor in the form of a CPU 3002, a communication interface 3004, a memory 3006, storage 3008, a sensor 3010 and a user interface 3012 coupled to one another by a bus 3014. The user interface 3012 comprises a display 3016 and an input/output device, which in this embodiment is a keyboard 3018 and a mouse 3020.
The CPU 3002 is a computer processor, e.g. a microprocessor. It is arranged to execute instructions in the form of computer executable code, including instructions stored in the memory 3006 and the storage 3008. The instructions executed by the CPU 3002 include instructions for coordinating operation of the other components of the computer device 3000, such as instructions for controlling the communication interface 3004 as well as other features of a computer device 3000 such as a user interface 3012.
The memory 3006 is implemented as one or more memory units providing Random Access Memory (RAM) for the computer device 3000. In the illustrated embodiment, the memory 3006 is a volatile memory, for example in the form of an on-chip RAM integrated with the CPU 3002 using System-on-Chip (SoC) architecture. However, in other embodiments, the memory 3006 is separate from the CPU 3002. The memory 3006 is arranged to store the instructions processed by the CPU 3002, in the form of computer executable code. Typically, only selected elements of the computer executable code are stored by the memory 3006 at any one time, which selected elements define the instructions essential to the operations of the computer device 3000 being carried out at the particular time. In other words, the computer executable code is stored transiently in the memory 3006 whilst some particular process is handled by the CPU 3002.
The storage 3008 is provided integral to and/or removable from the computer device 3000, in the form of a non-volatile memory. The storage 3008 is in most embodiments embedded on the same chip as the CPU 3002 and the memory 3006, using SoC architecture, e.g. by being implemented as a Multiple-Time Programmable (MTP) array. However, in other embodiments, the storage 3008 is an embedded or external flash memory, or such like. The storage 3008 stores computer executable code defining the instructions processed by the CPU 3002. The storage 3008 stores the computer executable code permanently or semi-permanently, e.g. until overwritten. That is, the computer executable code is stored in the storage 3008 non-transiently. Typically, the computer executable code stored by the storage 3008 relates to instructions fundamental to the operation of the CPU 3002.
The communication interface 3004 is configured to support short-range wireless communication, in particular Bluetooth® and WiFi communication, long-range wireless communication, in particular cellular communication, and/or an Ethernet network adaptor. In particular, the communications interface is configured to establish communication connections with other computing devices.
The storage 3008 provides mass storage for the computer device 3000. In different implementations, the storage 3008 is an integral storage device in the form of a hard disk device, a flash memory or some other similar solid state memory device, or an array of such devices.
The sensor 3010 is configured to obtain information relating to the objects 12, 14, 16. In this embodiment, the sensor comprises the EO camera 212 and the PTZ camera 214.
Other sensors may be used in addition or instead of the EO camera 212 and the PTZ camera 214 as has been described with reference to Figure 2.
In some embodiments, there is provided removable storage, which provides auxiliary storage for the computer device 3000. In different implementations, the removable storage is a storage medium for a removable storage device, such as an optical disk, for example a Digital Versatile Disk (DVD), a portable flash drive or some other similar portable solid state memory device, or an array of such devices. In other embodiments, the removable storage is remote from the computer device 3000, and comprises a network storage device or a cloud-based storage device.
A computer program product is provided that includes instructions for carrying out aspects of the method(s) described below. The computer program product is stored, at different stages, in any one of the memory 3006, storage device 3008 and removable storage. The storage of the computer program product is non-transitory, except when instructions included in the computer program product are being executed by the CPU 3002, in which case the instructions are sometimes stored temporarily in the CPU 3002 or memory 3006. It should also be noted that the removable storage is removable from the computer device 3000, such that the computer program product is held separately from the computer device 3000 from time to time.
Figure 4 is a flowchart of a method 400 of detecting and classifying objects. In a first step 402, an object is detected. This detection typically comprises receiving one or more images from the fixed EO camera 212 and identifying an object based upon these images. The object is identified, for example, by using image recognition techniques or by comparing two images taken at different times and identifying an area of difference, this area relating to a moving object.
In a second step 404, a second location of the object is predicted. A plurality of images obtained using the fixed EO camera 212 are compared to obtain an object location and an estimated object speed at a first time. The location at which the object will be at a second time is predicted based on this location and speed.
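By way of illustration only, a minimal sketch of this constant-velocity prediction is given below. The function names, and the assumption that each detection carries a timestamp and a position vector, are hypothetical and not details taken from the disclosure.

```python
import numpy as np

def estimate_velocity(p1, t1, p2, t2):
    """Estimate a velocity vector from two timestamped detections.

    p1, p2 are positions (e.g. azimuth/altitude or Cartesian coordinates)
    of the same object at times t1 and t2 (seconds).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return (p2 - p1) / (t2 - t1)

def predict_location(p2, t2, velocity, t_target):
    """Predict where the object will be at t_target, assuming constant velocity."""
    return np.asarray(p2, float) + velocity * (t_target - t2)

# Example: object seen at (10.0, 5.0) at t = 0.0 s and (12.0, 5.5) at t = 0.5 s
v = estimate_velocity((10.0, 5.0), 0.0, (12.0, 5.5), 0.5)   # -> (4.0, 1.0) units/s
x2 = predict_location((12.0, 5.5), 0.5, v, 2.0)             # predicted location at T2 = 2.0 s
```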
In a third step 406, the object is imaged at the predicted second location. This typically comprises taking a high-resolution image using the PTZ camera 214. The PTZ camera 214 is pre-positioned so that it points at a limited area containing the predicted second location. By pre-positioning the camera, a high resolution, zoomed-in image is obtainable even for fast-moving objects that would be difficult or impossible to track by moving a single camera.
In a fourth step 408, the object is classified based upon the image taken at the second location. The classification typically comprises performing a comparison between the image of the object and a database of known objects. In this embodiment, this comprises comparing the image of the object to a database of drone types and identifying a similarity between the imaged object and a known drone type.
The method 400 described with reference to Figure 4 enables an initial detection of an object to be made using the fixed EO camera 212, which has a wide FOV. This initial detection is useable to identify that the object is likely to be the drone 12, but the image taken by the fixed EO camera 212 is normally not detailed enough to enable the drone 12 to be classified by type. The pre-positioning of the PTZ camera 214 enables a detailed image of the drone 12 to be captured that enables accurate classification.
With respect to the second step 404, it will be appreciated that the described process is only an exemplary method for calculating an object position. In some embodiments, other methods are used, such as estimating an acceleration and considering this acceleration within the calculation of the second location. An acceleration is determinable by using three or more images from the fixed EO camera 212 and/or by considering motion blur within an image taken by the fixed EO camera 212. In some embodiments, a single image from the fixed EO camera 212 is used in the calculation of the second location, where, for example, motion blur is used to estimate a speed and/or acceleration of a detected object. In some embodiments, the calculation depends on an estimated speed of the object; a fast-moving object is imaged only a few times before calculation of the second location to ensure that the estimated second location is within range of the detection system 10, while a slow-moving object is imaged many times to obtain a more accurate prediction for speed and acceleration before calculation of the second location.
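Where an acceleration is also estimated, a hedged extension of the same idea using three timestamped detections and finite differences might look as follows; again, the helper names are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def predict_with_acceleration(positions, times, t_target):
    """Constant-acceleration prediction from three timestamped detections.

    positions: sequence of three position vectors; times: matching timestamps.
    """
    p0, p1, p2 = (np.asarray(p, float) for p in positions)
    t0, t1, t2 = times
    v01 = (p1 - p0) / (t1 - t0)           # velocity over the first interval
    v12 = (p2 - p1) / (t2 - t1)           # velocity over the second interval
    a = (v12 - v01) / ((t2 - t0) / 2.0)   # approximate acceleration
    dt = t_target - t2
    return p2 + v12 * dt + 0.5 * a * dt ** 2
```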
Referring to Figure 5, there is shown a flowchart of a method 500 of adjusting the classification head 210 of the detection system 10 and the PTZ camera 214 to detect and classify the drone 12. While this method is described with reference to the drone 12 of the exemplary system 1 referred to in Figure 1, it will be appreciated that the method is useable in any system where there may or may not be a drone present, and the method is useable without the user having knowledge that a drone is present.
In a first step 502, the detection system 10 searches for the drone 12. This search typically comprises either analysis of the fixed EO camera feed or consideration of another sensor (not shown). In some embodiments, either an array of cameras or a radar sensor is used to search for drones. The use of an array of sensors enables simultaneous consideration of a large portion of, or the entirety of, the sky.
If, in a second step 504, the drone 12 is detected, then in a third step 506 the azimuth and altitude of the drone 12 are estimated. Further properties of the drone 12, such as a velocity and/or an acceleration may also be estimated.
In a fourth step 508, the estimated properties of the drone 12 are used to calculate a second location which the drone 12 will later occupy as has been described with reference to the second and third steps 404,406 of Figure 4.
In a fifth step 510, the classification head 210 is adjusted so that the second location is within the FOV of the fixed EO camera 212 and/or the PTZ camera 214.
In an optional sixth step 512, a third location is calculated which the drone 12 will later occupy, using properties estimated using the fixed EO camera 212; these properties typically comprise the azimuth, altitude, and velocity of the drone 12.
In an optional seventh step 514 that follows the optional sixth step 512, the PTZ camera 214 is adjusted so that the third location is within the FOV of the PTZ camera 214.
The use of a third location enables a first location to be detected using a wide-ranging sensing system, e.g. a large radar or a camera array (not shown); a second, more precise, location to be calculated using properties estimated by this wide-ranging sensing system; the classification head 210 to be positioned towards this second location; a third, yet more precise, location to be calculated using the fixed EO camera 212 of the classification head; and the PTZ camera 214 to be positioned towards this third location.
In an eighth step 516, a drone image is captured using the PTZ camera 214.
In a ninth step 518, the drone 12 is classified as has been described with reference to the fourth step 408 of Figure 4.
Referring to Figure 6, there is shown a flowchart of a detailed method 600 of adjusting a camera to image and classify objects, this method being similar to the method 500 of Figure 5.
As has been described with reference to the first, second and third step 502, 504, 506 of Figure 5, in a first, second and third step 602, 604, 606 the detection system 10 searches for the drone 12, detects the drone 12, and then estimates the azimuth and altitude of the drone 12.
In a fourth step 608, the detection system calculates a second drone location X2 that corresponds to the location of the drone at a time T2. The time T2 used to calculate the second drone location X2 typically depends on the adjustment mechanisms of the classification head 210, and more specifically the capabilities of the altitude adjustor 202 and the azimuth adjustor 204. The time T2 and location X2 are selected so that the detection system 10 is capable of moving the classification head 210 using the altitude adjustor 202 and the azimuth adjustor 204 to include the drone 12 within the FOV of the fixed EO camera 212. There is typically some calculation time and some initiation time required as well as some movement time so that there is a minimum time required to adjust the classification head 210, but after the initial movement the classification head is capable of being adjusted rapidly. The time T2 typically corresponds to this minimum adjustment time.
In some embodiments, calculating the second drone location comprises calculating a location corresponding to a given time, determining whether the detection system 10 is capable of moving the classification head 210 to cover the calculated location at that time, and, if the detection system 10 is not capable of this, calculating another location that corresponds to a different time.
In some embodiments, the second time T2 is selected to be as small as possible while allowing for adjustment of the classification head 210. By minimising the time T2 any difference in the predicted location of the drone 12 at the second time T2 and the actual location of the drone 12 at the second time T2 that may occur if the prediction is not entirely correct is minimised.
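As a rough, hedged sketch, T2 could be chosen by assuming a fixed initiation delay, a slew rate and a processing overhead for the classification head, and iterating until the head can reach the predicted bearing in time. The numerical values and helper names below are assumptions for illustration, not parameters from the disclosure.

```python
import numpy as np

INITIATION_TIME = 0.3   # s, assumed minimum delay before the head starts moving
SLEW_RATE = 60.0        # deg/s, assumed pan/tilt speed
PROCESSING_TIME = 0.1   # s, assumed calculation overhead

def adjustment_time(current_bearing, target_bearing):
    """Time needed to pan/tilt from the current to the target (azimuth, altitude) bearing."""
    delta = np.abs(np.asarray(target_bearing, float) - np.asarray(current_bearing, float))
    return INITIATION_TIME + float(delta.max()) / SLEW_RATE

def select_t2(now, current_bearing, predict_bearing_at):
    """Find the earliest time T2 at which the head can point at the predicted bearing.

    predict_bearing_at(t) returns the predicted (azimuth, altitude) of the object at time t.
    """
    t2 = now + INITIATION_TIME + PROCESSING_TIME
    for _ in range(10):                      # a few fixed-point iterations usually suffice
        target = predict_bearing_at(t2)
        needed = now + PROCESSING_TIME + adjustment_time(current_bearing, target)
        if needed <= t2:
            return t2, target
        t2 = needed
    return t2, predict_bearing_at(t2)
```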
In a fifth step 610, the classification head 210 is panned and tilted to point towards the predicted second drone location X2.
In a sixth step 612, the drone 12 is detected using the fixed EO camera 212, and properties of the drone 12 estimated using images from the fixed EO camera 212 are used to calculate a third drone location X3 corresponding to a predicted drone location at a time T3. The third time T3 typically corresponds to an adjustment time of the PTZ camera 214.
In seventh, eighth and ninth steps 614, 616, 618 the PTZ camera is adjusted, an image of the drone is captured, and the drone is classified as has been described with reference to the seventh, eighth and ninth steps 514, 516, 518 of the method 500 of Figure 5.
Figure 7 is a flowchart of a method of sorting and classifying objects 700; certain steps within this method are explained in more detail in Figures 8, 9 and 10.
In a first step 702, the detection system 10 receives input from the fixed EO camera 212.
In a second step 704, the detection system 10 detects one or more objects that are present in the system 1 using the fixed EO camera input.
In a third step 706, the detection system 10 sorts the detected objects, for example by a likelihood of being a drone, or by a speed.
In a fourth step 708, the detection system 10 receives input from the PTZ camera 214.
In a fifth step 710, the detection system classifies the detected objects using the PTZ camera input.
Figure 8 is a flowchart of a method of object detection 704 that is typically used within the second step of the method of sorting and classifying objects 700 of Figure 7.
As has been described with reference to the first step 702 of the method of sorting and classifying objects 700 of Figure 7, the detection system 10 receives input from the fixed EO camera 212.
In a first step 802 of the method of object detection 704, the detection system 10 performs background subtraction on the fixed EO camera input. More specifically, the detection system 10 identifies differences between the values within an image and baseline values. Areas where an image differs from the baseline value are areas in which there is likely to be an object, such as the drone 12.
In a second step 804, potential objects are detected based on areas of difference (from the background values) that are identified within the fixed EO camera input.
In a third step 806, the detection system 10 updates a list of existing objects. This comprises adding to the list newly detected objects and/or removing from the list objects that are no longer detected.
In a fourth step 808, the background is updated based on the most recent image taken. Typically, the determination of background values occurs using a plurality of images to ensure that objects are not included in the background, e.g. a set of image pixel values that is only present in a single frame is likely due to the presence of an object and thus these pixel values do not comprise part of the background. The fourth step 808 feeds back into the first step 802, where the background is continuously updated using recent images. This enables the system to be used, for example, with different weather conditions without there being a high rate of false positives.
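A minimal, hedged sketch of steps 802 to 808 is given below, using a running-average background model on greyscale frames with OpenCV. The threshold, learning rate and minimum blob size are assumed values, not parameters taken from the patent.

```python
import cv2
import numpy as np

ALPHA = 0.05       # assumed learning rate for the background update
THRESHOLD = 25     # assumed per-pixel difference threshold
MIN_AREA = 20      # assumed minimum blob size in pixels

def detect_objects(frame_gray, background):
    """Return candidate object bounding boxes and the updated background model.

    frame_gray is a uint8 greyscale frame; background is a float32 array of the
    same shape (e.g. initialised from the first frame).
    """
    diff = cv2.absdiff(frame_gray, cv2.convertScaleAbs(background))
    mask = (diff > THRESHOLD).astype(np.uint8) * 255
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i, :4]) for i in range(1, n) if stats[i, 4] >= MIN_AREA]
    # update the background only where no object was detected, so that objects
    # do not become part of the background (step 808)
    cv2.accumulateWeighted(frame_gray.astype(np.float32), background, ALPHA,
                           mask=cv2.bitwise_not(mask))
    return boxes, background
```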
In a fifth step 810, properties of the detected objects are estimated, such as an altitude, an azimuth, and a velocity. Estimated properties may also include physical attributes, such as a size or a colour, and electronic signature attributes, such as an operating frequency for radio-controlled drones.
In a sixth step 812, the trajectories of the detected objects are analysed. This typically comprises determining a velocity vector or a movement path; in some embodiments, an acceleration is also analysed. The trajectories of the objects can be used to infer features of the objects and/or to calculate locations at which the object will later be present.
In an optional seventh step 814, the detection system 10 performs coordinate transformation for the objects. Typically, this comprises transforming the trajectories of the objects into a coordinate system that simplifies later operations (e.g. calculation of locations).
Figure 9 is a flowchart of a method of object sorting 706 that is typically used within the third step of the method of sorting and classifying objects 700 of Figure 7.
In a first step 902, detected objects from multiple sources are merged. In some embodiments, the detection system comprises a number of detection subsystems, each detection subsystem comprising mounting bodies 200 and/or classification heads 210. Each of these subsystems is configured to detect objects and, optionally, analyse object trajectories. The detected objects and analysed trajectories are then merged by the detection system 10 to obtain a database of object properties. Typically, a number of detection subsystems are spread about an area to provide full coverage of the skies surrounding the area (e.g. detection subsystems may be placed at the corners of a compound). By merging the objects detected by each subsystem, a full picture of the contents of the skies around the compound is obtainable. This merged database is further useable to determine objects that are crossing between the FOV of two detection subsystems and thus enables an object that has been sighted by the fixed EO camera of a first detection subsystem to have its location predicted so that it can be imaged by a second detection subsystem.
Where there are multiple detection subsystems used, the coordinate transformation that has been described within the seventh step 814 of the method 704 of Figure 8 typically comprises transforming analysed object trajectories from a coordinate system based upon a specific detection subsystem to a global coordinate system determined by the detection system 10.
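A hedged sketch of one such transformation follows, assuming each subsystem reports azimuth, elevation and range in its own local frame and has a known position and heading in the global frame; these conventions, and the function name, are assumptions for illustration.

```python
import numpy as np

def local_to_global(azimuth_deg, elevation_deg, range_m, station_pos, station_heading_deg):
    """Convert a local (azimuth, elevation, range) observation into global ENU coordinates.

    station_pos is the subsystem's (east, north, up) position in the global frame;
    station_heading_deg is the bearing of its local azimuth reference from north.
    """
    az = np.radians(azimuth_deg + station_heading_deg)
    el = np.radians(elevation_deg)
    east = range_m * np.cos(el) * np.sin(az)
    north = range_m * np.cos(el) * np.cos(az)
    up = range_m * np.sin(el)
    return np.asarray(station_pos, float) + np.array([east, north, up])
```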
In a second step 904, the detection system 10 ranks the detected objects. Typically, this comprises ranking objects by a speed, a likelihood of being a drone, and/or a risk rating, which may be related to the probability that a drone crosses into a restricted area. It will be appreciated that numerous other ranking criteria can also be used.
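As a simple illustration, the ranking could combine several such criteria into a single score and sort on it; the weights and field names in the sketch below are assumptions, not values from the disclosure.

```python
def rank_objects(objects, w_speed=0.2, w_drone=0.5, w_risk=0.3):
    """Sort detected objects by a weighted combination of ranking criteria.

    Each object is assumed to be a dict with normalised 'speed',
    'drone_likelihood' and 'risk' fields in [0, 1].
    """
    def score(obj):
        return (w_speed * obj["speed"]
                + w_drone * obj["drone_likelihood"]
                + w_risk * obj["risk"])
    return sorted(objects, key=score, reverse=True)
```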
The likelihood of being a drone is typically determined through an analysis of the trajectory of each object. Drones are capable of performing movements, and accelerations, that cannot be performed by other objects; this enables detection of drones via analysis of trajectory.
Typically, the analysis of trajectory comprises the use of a neural network. More specifically, a neural network is trained using data relating to drone trajectories as well as trajectories of other objects, such as birds; when an object is detected, the trained neural network is used to determine the likelihood of the trajectory belonging to a drone. In some embodiments, the neural network is periodically retrained using information obtained from the detection system 10; for example, feedback is recorded on whether prior assessments that an object was a drone were correct. The feedback may involve input from a user, or may involve input from the PTZ camera 214. The images recorded by the EO camera 212 may be used to evaluate trajectories and output a likelihood of an object being a drone; the images taken by the PTZ camera 214 and the resultant classification may then be used to assess the evaluation and improve the neural network.
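A minimal sketch of such a trajectory classifier is shown below, assuming trajectories are resampled to a fixed number of points, each carrying position and velocity components, and fed to a small fully connected PyTorch network. The architecture and feature choice are illustrative assumptions rather than the network described in the patent.

```python
import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    """Predicts the probability that a fixed-length trajectory belongs to a drone."""

    def __init__(self, n_points=16):
        super().__init__()
        # each trajectory point contributes (x, y, vx, vy); output is a single logit
        self.net = nn.Sequential(
            nn.Linear(n_points * 4, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, traj):
        # traj: (batch, n_points, 4) -> probability of being a drone
        return torch.sigmoid(self.net(traj.flatten(start_dim=1)))

# Training would use trajectories labelled drone / not-drone (e.g. birds) with a
# binary cross-entropy loss; feedback from the PTZ classification stage could be
# used to relabel examples and periodically retrain, as described above.
```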
In a third step 906, the detection system 10 outputs the rankings and details of each of the detected objects. These rankings and details enable the detection system 10 to prioritise objects for classification, as described with reference to the fifth step 710 of the method of sorting and classifying objects 700 of Figure 7.
In some embodiments, objects that are below a certain rank, for example that are below a certain probability of being a drone, are culled from the object database, and are not considered further.
Figure 10 is a flowchart of a method of object classification 710 that is typically used within the fifth step of the method of sorting and classifying objects 700 of Figure 7.
In a first step 1002, the detection system 10 selects an object to classify. This typically comprises selecting the highest ranked unclassified object from the sorted list of objects.
In a second step 1004, the PTZ camera 214 is synchronised with the calculated object trajectory; this comprises moving the PTZ camera 214 so that it points towards the second location X2 or the third location X3 where it is predicted the object will be at the second time T2 or the third time T3 respectively. This prediction and pre-positioning has been described with reference to the second and third steps 404, 406 of Figure 4.
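For illustration, pointing the PTZ camera at a predicted Cartesian location can be reduced to computing pan and tilt angles relative to the camera position, as in the hedged sketch below; the east-north-up frame convention and function name are assumptions.

```python
import numpy as np

def pan_tilt_to(target_enu, camera_enu):
    """Pan (azimuth from north, clockwise) and tilt (elevation) angles, in degrees,
    needed for a camera at camera_enu to point at target_enu (east, north, up)."""
    d = np.asarray(target_enu, float) - np.asarray(camera_enu, float)
    pan = np.degrees(np.arctan2(d[0], d[1]))                   # east over north
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))  # elevation above horizontal
    return pan, tilt
```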
In a third step 1006, at the relevant time (T2 or T3) the PTZ camera 214 captures an image of the second location X2 or the third location X3. The detection system 10 then identifies the precise location of the object within the captured image frame. This detection typically occurs in a similar way to the initial detection of the object by the fixed EO camera 212 that has been described with reference to the method 704 of Figure 8. That is, typically the PTZ camera 214 captures one or more images of the location X3 before the time T3 and uses these to determine a background. The image captured at time T3 is then compared to the background image to determine the precise location of the object.
In a fourth step 1008, the detection system 10 classifies at least one feature of the object as has been described with reference to the fourth step 408 of Figure 4. In various embodiments, classification comprises at least one of: determining an object size, determining an object shape, determining an object type, and determining an object risk factor. Classification typically comprises comparing one or more features of the imaged object with known data. The feature, and the known data, can relate to a type of drone, a number of propellers, a make of drone, and/or a model of drone.
In some embodiments, classification comprises using a neural network that has been trained using images of objects, e.g. images of drones. Images of drones may be obtained using a web-scraper, which may obtain open-source data that relates images of drones to information about those drones (e.g. a model number). Proprietary data may also be used in training the neural network.
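By way of a hedged example, one common way to realise such an image classifier is to fine-tune a pretrained convolutional network on labelled drone images; the choice of ResNet-18 and the class list below are assumptions for the sketch, not the network used in the disclosure.

```python
import torch.nn as nn
from torchvision import models

DRONE_CLASSES = ["quadcopter", "fixed_wing", "hexacopter", "not_a_drone"]  # assumed labels

def build_drone_classifier(num_classes=len(DRONE_CLASSES)):
    """Pretrained ResNet-18 with its final layer replaced for drone-type classification."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Training would iterate over (image, label) pairs scraped or curated as described,
# using a cross-entropy loss; at run time the softmax output gives a per-class
# confidence that can feed a classification-threshold check.
```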
In some embodiments, classification comprises comparison of an image captured by the PTZ camera 214 to images contained in an image database. As such, the method may be considered to be a method of image processing, which extracts image information from a captured image and processes this to determine similarity to other images.
In a fifth step 1010, the detection system 10 updates the object database with information relating to the classified feature.
In a sixth step 1012, the detection system 10 determines whether enough information is known about the object to classify the object. Depending on the requirements of the user, objects can be classified to differing degrees of precision. An object can be classified imprecisely, e.g. by size, or precisely, e.g. by a drone model number. In some situations, a number of features are required to obtain an object classification; for example, to classify an object as a certain model of drone it might be necessary to obtain an object size, a number of propellers, a shape, and an operating frequency. In some embodiments, this determination comprises determining whether the detection system 10 is able to classify the object with a certain probability of correct classification.
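A hedged sketch of this sufficiency check follows, treating classification as complete once every required feature has been observed and the combined confidence exceeds a threshold; the feature list, threshold value and independence assumption behind the combination are illustrative only.

```python
REQUIRED_FEATURES = ["size", "propeller_count", "shape", "operating_frequency"]  # assumed
CONFIDENCE_THRESHOLD = 0.9  # assumed required probability of correct classification

def classification_sufficient(feature_confidences):
    """Return True when all required features are present and the combined
    confidence (treating features as independent) exceeds the threshold."""
    if any(f not in feature_confidences for f in REQUIRED_FEATURES):
        return False
    combined = 1.0
    for f in REQUIRED_FEATURES:
        combined *= feature_confidences[f]
    return combined >= CONFIDENCE_THRESHOLD
```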
In a seventh step 1014, if enough information is known about the object, the detection system 10 classifies the object.
If there is not enough information known about the object, a further object location is calculated and the PTZ camera 214 is positioned towards this further location. Another image is captured of the object and this is used to classify further features. It will be appreciated that this further image capture can occur before or after other objects are classified; for example, a first image can be captured for a first object, then a first image for a second object, and then a second image for the first object. This enables efficient image capturing where object trajectories overlap.
In an eighth step 1016, the detection system determines whether there are more objects in the list of sorted objects. If there are, the method 710 repeats from the first step 1002, selecting the next object in the ranked list.
In a ninth step 1018, if all the objects are classified, the method 710 ends.
In some embodiments, the method 710 is repeated continuously, so that once all objects have been classified, the detection system continues to select objects in order to verify and/or improve the object classifications.
Figure 11 shows a system 1A in which the detection system 10 comprises three subsystems 10A, 10B, 10C, each subsystem comprising a mounting body 200 and a classification head 210 as described with reference to Figure 2. It can be seen that the drone 12 is initially detected by subsystem A 10A before moving into the FOV of subsystem C 10C. The fixed EO camera of subsystem A 10A may be used to predict the second location 12A of the drone 12, which is then imaged by the PTZ camera of subsystem C 10C. As is apparent from the figure, this arrangement enables a continuous monitoring of the drone 12 as it moves through the FOV of each subsystem 10A, 10B, 10C.
Alternatives and modifications
Various other modifications will be apparent to those skilled in the art; for example, the detection system 10 may comprise any number of mounting bodies 200 and/or classification heads 210, and each classification head may comprise any number of fixed EO cameras 212 and PTZ cameras 214. Similarly, the detection system 10 may comprise other sensors, such as radars, sonars, and/or Geiger counters.
The detailed description has primarily considered the detection system 10 being used to detect drones. It will be appreciated that the detection system 10 could be used more generally to detect any objects in the air, on land, or at sea.
It will be understood that the present disclosure has been described above purely by way of example, and modifications of detail can be made within the scope of the disclosure.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.

Claims (25)

  1. A method of classifying an object, the method comprising: at a first sensor: detecting an object at a first time; and determining a property of the object; predicting a location of the object at a second time based on the property; at a second sensor obtaining information relating to the predicted location at the second time; and classifying the object in dependence on the information.
  2. The method of claim 1, wherein the first sensor and/or the second sensor comprises at least one of: a camera; and a radar antenna.
  3. The method of any preceding claim, wherein the first sensor is fixed and/or wherein the first sensor has a fixed field of view, preferably wherein the first sensor is an electro-optic (EO) camera.
  4. The method of any preceding claim, wherein the second sensor is moveable and/or wherein the second sensor has an alterable field of view, preferably wherein the second sensor is a pan-tilt-zoom (PTZ) camera.
  5. The method of any preceding claim, wherein obtaining information relating to the predicted location at the second time comprises imaging the location at the second time.
  6. The method of any preceding claim, further comprising moving the second sensor to the predicted location prior to the second time.
  7. The method of any preceding claim, wherein determining a property comprises determining a location and/or a derivative of a location, such as: a location; a displacement; a velocity; and an acceleration.
  8. The method of any preceding claim, wherein the obtained information is an image.
  9. The method of any preceding claim, wherein classifying the object comprises classifying based on an image.
  10. The method of any preceding claim, wherein classifying the object comprises identifying a characteristic of the object, such as: a type of the object; a size of the object; and/or a make and/or model of the object.
  11. The method of any preceding claim, wherein classifying the object comprises assigning a risk level to the object.
  12. The method of any preceding claim, comprising determining a plurality of properties of the object, preferably wherein the plurality of properties are obtained at different times.
  13. The method of any preceding claim, wherein classifying the object comprises presenting a likelihood of successful classification.
  14. The method of any preceding claim, wherein classifying the object comprises comparing the obtained information to an entry in a database.
  15. The method of any preceding claim, wherein classifying the object comprises using a neural network.
  16. The method of any preceding claim, further comprising determining whether the object is a drone based on the property.
  17. The method of any preceding claim, further comprising determining whether the object is a drone based on a determined trajectory.
  18. The method of any preceding claim, wherein the second time is selected in dependence on a parameter of the first and/or second sensor, such as: an adjustment time; and a movement speed.
  19. The method of any preceding claim, further comprising detecting a plurality of objects.
  20. A method of classifying objects, the method comprising: at a first sensor: detecting a plurality of objects at a first time; and determining a property of at least one of the objects; predicting a location of the object at a second time based on the property; at a second sensor: obtaining information relating to the predicted location at the second time; and classifying the object in dependence on the information.
  21. The method of claim 19 or 20, further comprising sorting the plurality of objects, preferably wherein sorting comprises sorting by a likelihood of being a drone and/or a threat level.
  22. The method of any of claims 19 to 21, wherein the determining a property of at least one of the objects is dependent on an object rank, preferably wherein determining a property comprises determining a property of the object with the highest rank.
  23. A computer program product comprising software code adapted when executed to carry out the method of any of claims 1 to 22.
  24. An apparatus adapted to execute the computer program product of claim 23.
  25. An apparatus for classifying an object, the apparatus comprising: means for detecting the object at a first time; means for determining a property of the object; means for predicting a location of the object at a second time based on the property; means for obtaining information relating to the determined location at the second time; and means for classifying the object in dependence on the obtained information.
GB1905256.2A 2019-04-12 2019-04-12 Object classification Withdrawn GB2582988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1905256.2A GB2582988A (en) 2019-04-12 2019-04-12 Object classification

Publications (2)

Publication Number Publication Date
GB201905256D0 GB201905256D0 (en) 2019-05-29
GB2582988A 2020-10-14

Family

ID=66809873

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1905256.2A Withdrawn GB2582988A (en) 2019-04-12 2019-04-12 Object classification

Country Status (1)

Country Link
GB (1) GB2582988A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111679257B (en) * 2019-12-30 2023-05-23 中国船舶集团有限公司 Target recognition method and device for light unmanned aerial vehicle based on radar detection data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140313345A1 (en) * 2012-11-08 2014-10-23 Ornicept, Inc. Flying object visual identification system
US20160055400A1 (en) * 2014-08-21 2016-02-25 Boulder Imaging, Inc. Avian detection systems and methods
EP3419283A1 (en) * 2017-06-21 2018-12-26 Axis AB System and method for tracking moving objects in a scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4307245A1 (en) * 2022-07-14 2024-01-17 Helsing GmbH Methods and systems for object classification and location
WO2024013389A1 (en) * 2022-07-14 2024-01-18 Helsing Gmbh Methods and systems for object classification and location

Also Published As

Publication number Publication date
GB201905256D0 (en) 2019-05-29

Similar Documents

Publication Publication Date Title
US11915502B2 (en) Systems and methods for depth map sampling
US11392146B2 (en) Method for detecting target object, detection apparatus and robot
US10928838B2 (en) Method and device of determining position of target, tracking device and tracking system
Zheng et al. Air-to-air visual detection of micro-uavs: An experimental evaluation of deep learning
US11887318B2 (en) Object tracking
US20170345180A1 (en) Information processing device and information processing method
JP2018508078A (en) System and method for object tracking
CN110264495B (en) Target tracking method and device
US9336446B2 (en) Detecting moving vehicles
AU2018429301A1 (en) Vessel height detection through video analysis
RU2755603C2 (en) System and method for detecting and countering unmanned aerial vehicles
GB2582988A (en) Object classification
RU2019130604A (en) System and method of protection against unmanned aerial vehicles in the airspace of a settlement
CN114359714A (en) Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
EP2946283B1 (en) Delay compensation while controlling a remote sensor
US20210174079A1 (en) Method and apparatus for object recognition
CN110764526A (en) Unmanned aerial vehicle flight control method and device
Nalcakan et al. Monocular vision-based prediction of cut-in maneuvers with lstm networks
JP2019068325A (en) Dynamic body tracker and program therefor
CN112639405A (en) State information determination method, device, system, movable platform and storage medium
US11769257B2 (en) Systems and methods for image aided navigation
KR102630236B1 (en) Method and apparatus for tracking multiple targets using artificial neural networks
WO2021199286A1 (en) Object tracking device, object tracking method, and recording medium
JP2022061059A (en) Moving object tracking apparatus and program thereof
KR20240033502A (en) Method and apparatus for improving targeting ability base on artificial intelligence by utilizing domain knowledge

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20210708 AND 20210714

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)