GB2516321A - Object detection and recognition system - Google Patents

Object detection and recognition system

Info

Publication number
GB2516321A
GB2516321A
Authority
GB
United Kingdom
Prior art keywords
image
vehicle
computing device
cameras
underside
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1314412.6A
Other versions
GB201314412D0 (en)
Inventor
Tim Luft
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SERIOUS GAMES INTERNAT Ltd
Original Assignee
SERIOUS GAMES INTERNAT Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SERIOUS GAMES INTERNAT Ltd filed Critical SERIOUS GAMES INTERNAT Ltd
Publication of GB201314412D0
Publication of GB2516321A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/06 Recognition of objects for industrial automation

Abstract

A method and system for detecting anomalies, such as debris, on or near the underside of vehicles 10, comprises one or more cameras 14 arranged to capture one or more images of the underside of a car or other vehicle; a computing device in communication with the one or more cameras and comprising a means to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle (e.g. its license plate), retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences. The cameras may be housed in a mat 12, and may be time-of-flight cameras. Independent claims are also included for an image-capture device, and a system for simulating detection of anomalies on the underside of vehicles using trigger markers.

Description

Object Detection and Recognition System
Field of the Invention
The present invention relates to object detection and recognition, and more specifically to object detection and recognition of anomalous objects on or near the underside of vehicles.
Background of the Invention
In many circumstances it is desirable or indeed necessary to check the underside of vehicles for alterations, objects or debris. This may be for maintenance or security reasons, amongst others. Since the underside of most vehicle chassis is not easily viewable when the vehicle is in normal use, inspection mirrors have been used by mechanics, as well as in the police, military and security sectors, for many years.
An inspection mirror often comprises a mirror attached to a handle. Such inspection mirrors are used by users whilst standing or sitting in close proximity to the vehicle while the vehicle is stationary. The need to position users close to the vehicle, as well as the need for the vehicle to be stationary, can result in inefficiencies, particularly when a number of vehicles must be inspected and when there are limited numbers of trained users who can be appropriately positioned.
Users of inspection mirrors may be stationed to inspect vehicles for prolonged periods. The repetitious nature of inspections can be monotonous and so lead to decreasing concentration and alertness. This can result in poor assessments and ultimately jeopardise the quality and effectiveness of the inspection exercise.
Furthermore, users using an inspection mirror must be adequately trained in order to make proper assessments regarding the existence or otherwise of vehicle alterations, anomalous objects or debris that may be encountered. Existing training mechanisms are often a result of direct experience and do not prepare users for the risk of concentration loss.
It is an aim of the present invention to avoid or at least mitigate some of the drawbacks of the prior art.
Summary of the Invention
According to a first aspect of the invention, there is provided a method of detecting anomalies on or near the underside of vehicles, comprising, by a computing device: receiving one or more images of at least part of the underside of a vehicle captured by one or more cameras, receiving identification information relating to the vehicle, retrieving an image of the underside of a vehicle from a database, comparing the retrieved image with the one or more images captured, determining differences between the retrieved image and the one or more captured images, and displaying a visual indication of the differences.
Preferably, the step of displaying further comprises displaying the captured image.
Preferably, the step of retrieving comprises querying the database based on the one or more captured images to identify the closest matching image in the database. The step of retrieving may comprise querying the database based on the identification information relating to the vehicle.
Preferably, the computing device comprises a graphical user interface and the method further comprises outputting instructions, wherein the instructions may comprise prompts for user input.
Preferably, the method further comprises collating one or more of the one or more captured images into a single image of the whole or part of the underside of a vehicle. The collating is preferably based on the spatial separation of the cameras.
Preferably, the method further comprises outputting a description of the vehicle to which the captured image relates, wherein the description is based at least in part on the identification information. Optionally, the method further comprises prompting, by the image processing device, an input to verify the description of the vehicle.
Preferably, the computing device is connected to a network, and wherein the step of querying comprises querying a database stored remotely from the computing device.
The method preferably further comprises outputting, by the computing device, an alert when the differences are determined. The alert may be visual and/or audible.
According to a second aspect of the invention there is provided a system for detecting anomalies on or near the underside of vehicles, comprising one or more cameras arranged to capture one or more images of the underside of a vehicle; a computing device in communication with the one or more cameras and comprising a graphical user interface configured to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle, retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences.
Preferably, the cameras are comprised in a mat and the mat is configured to lie on the ground.
Operation of the one or more cameras may be controlled by a microprocessor, and preferably the microprocessor is comprised in the mat. The mat may comprise a time of flight camera.
The system may further comprise a sensor arranged to sense the presence of an approaching vehicle, and the sensor may also be comprised in the mat.
Preferably, the computing device is portable and located remotely from the one or more cameras. Optionally, the operation of the one or more cameras is controlled by the computing device. Preferably, the computing device is further configured to output instructions to prompt user input.
The methods and systems claimed provide for the automated collection of images and improved image recognition of physical objects and modifications of objects. This facilitates a quicker and more accurate assessment of potential anomalies and thereby also improves safety.
According to a third aspect of the invention, there is provided a device for capturing images of the underside of vehicles, comprising an elongate body comprising one or more cameras partially embedded in the body, wherein the one or more cameras are configured to capture an image of the underside of a vehicle as a vehicle drives over the device.
The device provides a convenient and efficient means by which images of the underside of vehicles can be obtained whilst the vehicle is moving and which does not require human intervention.
Preferably, the device further comprises a microprocessor in communication with the one or more cameras and may be configured to control operation of the one or more cameras.
The device preferably comprises a sensor, wherein the sensor senses the approach of a vehicle. The device may comprise one or more LEDs.
Preferably, the microprocessor is configured to send images captured by each of the one or more cameras to a computing device having a display.
Optionally, the lenses of the one or more cameras are oriented upwards when the device lies on the ground. The device preferably comprises more than one camera and preferably the cameras are evenly spaced apart from one another.
According to a fourth aspect of the invention, there is provided a system for simulating detection of anomalies on or near the underside of vehicles, comprising: a plurality of augmented reality trigger markers, a computing device for scanning one or more of the markers and a display for displaying images; wherein each marker is recognisable by the computing device, when scanned, as an augmented reality trigger and wherein the computing device is configured to: retrieve an image of the underside of a vehicle, wherein the image is associated with the marker, select, from a database, an image of an anomalous object or modification and merge the selected image and retrieved image, and display the merged image on the display.
The computing device may comprise the display, and may also comprise a graphical user interface. The computing device is preferably further configured to display prompts to prompt user input, and may also be configured to receive and store user input. Preferably, the computing device is a tablet computing device.
According to a fifth aspect of the invention, there is provided a method for simulating object detection, comprising, by a computing device: scanning one or more markers, recognising one of the one or more markers as an augmented reality trigger, retrieving an image of the underside of a vehicle, wherein the image is associated with the one or more markers recognised, selecting an image of an anomalous object or modification, and displaying, as a composite image, the selected image and the retrieved image.
Brief description of the drawings
A preferred embodiment of the invention will now be described by way of example with reference to the following drawings, in which:
Figure 1 is a perspective view of a vehicle approaching a camera mat;
Figure 2 is an underside view of a vehicle and camera mat;
Figure 3 shows a schematic of a camera mat;
Figure 4 is a flow diagram outlining operation steps according to an embodiment of the invention;
Figure 5 is a perspective view of training apparatus;
Figure 6 illustrates recognition of an augmented reality marker by a computing device;
Figure 7 illustrates the display of a vehicle chassis on a computing device;
Figure 8 is a flow diagram outlining the operation steps according to a further embodiment of the invention.
Detailed description
Figure 1 shows vehicle 10 whose chassis underside is to be inspected and a camera mat 12 lying on the ground in front of the vehicle. The length of camera mat 12 is preferably larger than the width of the vehicles which are to be inspected, and is typically approximately 3-4 metres in length. Mat 12 is preferably solid and is constructed from a resilient material such as rubber, although other suitable materials may be used. Mat 12 houses, in its body, one or more cameras, denoted generally by 14 and described further with reference to Figure 3. The lenses of cameras 14 are oriented upwards. In use, and as shown in figure 1, vehicle 10 approaches the camera mat 12 in a direction perpendicular to the length of the camera mat 12.
The underside of the chassis of vehicle 10 is shown in Figure 2. The cameras 14 of the camera mat 12 capture images of the underside of the vehicle as the vehicle drives over the mat at a slow speed. Each camera captures one or more images of the underside of the vehicle. The number of images captured by each camera depends on the location of the mat 12, the number and arrangement of cameras in the mat 12, the type/specification of the cameras and the types of vehicles 10 being inspected, and/or the specific foreign objects that the vehicles are being checked for. If analysis of only a specific part of the underside of the chassis is requested (for example, the rear section of the vehicle), the cameras may be configured to capture only a single image as the rear part of the vehicle passes over the mat 12.
The mat 12 also comprises a microprocessor (not shown). The microprocessor is in communication with the cameras 14 such that it controls the operation of the cameras and the times at which the cameras 14 are to capture images. The mat 12 further houses a sensor (not shown), in communication with the microprocessor, which senses when a vehicle is approaching the mat 12. The sensor is configured to sense an approaching vehicle by any suitable means. When the sensor determines that a vehicle is approaching, it sends a signal to the microprocessor and the microprocessor controls the operation of one or more of the cameras 14 accordingly. For example, the microprocessor may delay operation of the cameras until a predetermined time interval has passed. The time interval may be chosen based on the distance at which the sensor senses approaching vehicles and an estimated speed of the vehicle.
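By way of illustration only, the following Python sketch shows how such a predetermined interval might be derived from an assumed sensing distance and approach speed; the constant values and the trigger_cameras callback are hypothetical, as the patent does not specify them.

```python
import time

# Hypothetical values: the patent does not state the sensing distance,
# nor how the approach speed would be estimated in practice.
SENSE_DISTANCE_M = 10.0   # distance at which the sensor detects the vehicle
EST_SPEED_M_S = 2.0       # assumed slow approach speed (roughly 7 km/h)

def on_vehicle_sensed(trigger_cameras):
    """Handle the sensor's signal: wait a predetermined interval based on
    distance and estimated speed, then start image capture."""
    delay_s = SENSE_DISTANCE_M / EST_SPEED_M_S
    time.sleep(delay_s)    # delay until the vehicle should be over the mat
    trigger_cameras()      # microprocessor starts the cameras 14
```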
In an alternative embodiment, the microprocessor housed in the mat 12 is configured to receive instructions, via a wired or wireless connection, from a computing device located remotely from the mat. The computing device has image processing capabilities and a display, and is preferably a portable tablet computer but may be a mobile telephone or a desktop computer. The computing device controls the operation of the cameras, via the microprocessor, in response to user input (for example, a user may provide an input in real time to instruct the microprocessor to trigger image capture by the cameras 14). In a further alternative embodiment, each of the camera devices is in wired or wireless communication with the image processing device.
As shown in Figure 3, the camera mat 12 comprises six cameras 14 spaced substantially evenly along the length of the mat 12. The cameras 14 are embedded in the body of the mat 12; the mat 12 typically has a domed cross-section. It will be appreciated that alternative arrangements of the cameras 14 may be adopted. The lenses of the cameras are exposed by gaps in the outer material of the mat 12. To protect each lens, a durable plastic shell is located above it. The plastic shell typically extends slightly above the rubber surface of the mat and has an aperture through which the camera may capture an image of a vehicle above the mat 12. The mat 12 may also comprise reflective spheres such that the mat is made visible by reflections from incident vehicle headlights. This serves as an indication to the driver that the mat is ahead of the vehicle and may act as a further prompt to reduce the vehicle's speed. Although not shown in Figure 3, the mat 12 additionally or alternatively comprises one or more lights, such as LEDs, to facilitate identification of the mat 12 in low-light conditions.
As or before the vehicle approaches the mat 12, information identifying the vehicle, such as the make and model, is detected or manually input into a computing device having display and image processing capabilities and operated by trained users. For example, users may enter the registration number of the vehicle, or may be prompted to manually enter the make and model or select them from a list. The type, make and model or other form of vehicle identification may alternatively be detected automatically, for example by RFID tags in or on the vehicles, or by automatic vehicle recognition.
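A minimal sketch of this identification step is given below; the register contents and field names are hypothetical stand-ins for whatever vehicle database a deployment would query.

```python
from typing import Optional

# Hypothetical lookup table; a real system would query a vehicle register
# or receive the make/model from an RFID or number-plate recognition feed.
VEHICLE_REGISTER = {
    "AB12CDE": {"make": "ExampleMake", "model": "ExampleModel"},
}

def identify_vehicle(registration: str) -> Optional[dict]:
    """Resolve an entered (or automatically read) registration number to
    the make and model used to query the image directory."""
    return VEHICLE_REGISTER.get(registration.replace(" ", "").upper())
```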
Upon determination of the type and/or make and model of the vehicle, the computing device accesses a directory of images. The directory is a database which may be stored locally on the computing device or accessed via a network, such as the internet, via a suitable connection.
The directory comprises images of the underside of the chassis of substantially all vehicle makes and models as originally manufactured. The images in the directory thus illustrate what the underside of the approaching vehicle should look like (i.e. as it would appear before the vehicle is used, and therefore free of debris, modifications etc.). The image corresponding to the specific vehicle approaching the mat is retrieved and displayed on the display of the computing device. The user may be prompted to verify that the make and model of the vehicle corresponding to the selected image is in fact the same as that of the vehicle approaching the mat 12.
In an alternative embodiment, the directory is queried after the images of the underside of the vehicle have been captured by the cameras. In this embodiment, an image of part or the whole of the underside of the vehicle is collated by the image processing device. Instead of searching for the make and model of the vehicle in the database, the database is queried based on the captured image to find the image which provides the closest match to the captured image. This may be particularly useful when the make and model of the approaching vehicle is difficult to determine because of significant bodywork alterations or false number plates.
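The patent does not name a matching technique for this image-based query. One plausible realisation, sketched below, uses ORB feature matching from OpenCV to score each directory image against the collated capture and return the best-scoring entry; the distance threshold is an assumption.

```python
import cv2

def closest_match(captured, directory):
    """Return the key of the directory image best matching the capture.

    `captured` is a grayscale uint8 image; `directory` maps identifiers
    (e.g. make/model strings) to grayscale reference images.
    """
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, cap_desc = orb.detectAndCompute(captured, None)
    best_key, best_score = None, -1
    for key, ref in directory.items():
        _, ref_desc = orb.detectAndCompute(ref, None)
        if cap_desc is None or ref_desc is None:
            continue
        matches = matcher.match(cap_desc, ref_desc)
        score = sum(1 for m in matches if m.distance < 40)  # "good" matches
        if score > best_score:
            best_key, best_score = key, score
    return best_key
```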
Once the image has been retrieved, a match analysis routine is executed, as discussed below, to identify the differences between the retrieved image and the captured image. The match analysis routine typically involves image subtraction, although other known techniques can be used.
The process of capturing images and detecting anomalies will now be described with reference to Figure 4. At step 16, the cameras capture an image of the underside of the vehicle. The images are sent, at step 18, via a wired or wireless connection to the computing device having a graphical user interface and operated by users. The computing device collates the images taken by each camera and constructs an image of part or the whole of the underside of the vehicle. The collated image is also displayed on the display of the computing device alongside the image retrieved from the directory.
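A toy version of the collation step follows. It assumes, for simplicity, that the cameras are evenly spaced, aligned, and capture equal-sized, non-overlapping frames; a production system would instead warp and blend frames using the known camera geometry.

```python
import numpy as np

def collate_underside(frames_by_camera):
    """Build one underside image from per-camera frame sequences.

    `frames_by_camera` is ordered by camera position along the mat; each
    entry is the list of equal-sized frames that camera captured as the
    vehicle passed. Frames are assumed not to overlap.
    """
    # Stack each camera's frames front-to-back, then cameras side by side.
    columns = [np.vstack(frames) for frames in frames_by_camera]
    return np.hstack(columns)
```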
At step 20, the computing device executes a match analysis software routine to compare the two images. The execution may be automatic upon receipt of the captured image and/or the retrieved image, or may be triggered by a user command. The match analysis routine analyses the two images to identify discrepancies between, for example, the shapes and arrangement of components identifiable from the images. Any discrepancies that are identified are highlighted on the captured image by any suitable means that will be apparent to a person skilled in the art. For example, a visual indicator may appear overlaid on the image, or an alarm may sound, prompting the user of the image processing device to review the images to ascertain the exact cause of the discrepancy. It will be apparent to a person skilled in the art that the sequence of method steps described may be altered according to specific circumstances.
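A minimal sketch of the match analysis and the overlaid indicator is shown below, using image subtraction as the description suggests. The two images are assumed to be registered to the same viewpoint and scale, and the threshold and minimum-area values are illustrative assumptions.

```python
import cv2

def highlight_differences(retrieved, captured, min_area=200):
    """Subtract the retrieved reference from the captured image and draw
    boxes around significant discrepancies; returns the annotated image
    and a flag indicating whether any discrepancy was found."""
    ref = cv2.cvtColor(retrieved, cv2.COLOR_BGR2GRAY)
    cap = cv2.cvtColor(captured, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(ref, cap)                     # pixel-wise subtraction
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)      # merge nearby changes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    annotated = captured.copy()
    found = False
    for c in contours:
        if cv2.contourArea(c) >= min_area:           # ignore noise specks
            found = True
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return annotated, found
```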
At step 22, the computing device presents the user with further instructions. Where discrepancies have been identified, the image processing device may prompt the user to make a further visual assessment based on the image displayed and then prompt an input such as 'clear' or 'hold for manual inspection'. This particular embodiment may be used where the discrepancies are relatively minor. In other cases where discrepancies are found, the image processing device is configured to display instructions. Such instructions may relate to preventing the vehicle from travelling further, directing the vehicle into a particular cordoned area for repairs or inspection by an engineer or mechanic, requesting that the driver and all passengers get out of the vehicle, manually inspecting the discrepancies using a mirror, capturing further images of the underside of the vehicle using a tablet computer or mobile phone having a camera facility, or requesting the driver of the vehicle to drive over the mat 12 again.
In a further embodiment, the camera of a tablet computer or mobile phone is used to capture images of the underside of the vehicle. In this embodiment, a vehicle is stationary while users hold the tablet computer, which may be attached to an elongate handle, under the vehicle.
Once a sufficient number of images are captured, software on the tablet computer is configured to collate the images appropriately. The image is compared with an image selected from a database in accordance with any of the embodiments described above.
In an alternative embodiment, the mat 12 comprises a 3D imaging device. The 3D imaging device is preferably a 3D laser scanner and more preferably a time-of-flight camera (such as that provided by a Kinect (TM) motion sensing input device). The 3D imaging device is in communication with the microprocessor. As described above, the processor controls operation of the 3D imaging device either automatically based on the approach of a vehicle sensed by the sensor, or by manual user control. The 3D imaging device is configured to capture a depth image, preferably in real time, when the vehicle is positioned above the mat 12, and constructs a 3D surface image of all or part of the underside of the vehicle. This image is then compared with 3D images of vehicle undersides stored in an image directory using a match analysis routine to identify discrepancies, if any, between the constructed image and the stored image. As described above, the make and model of the vehicle may have been identified before the 3D image is captured (and the match analysis based on the image retrieved from the directory corresponding to that particular vehicle), or the directory is queried based on the captured image to determine the closest matching image stored (and the match analysis based on the closest matching image found).
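For the 3D variant, the match analysis reduces to differencing depth maps. The sketch below flags points where the captured surface protrudes below the stored reference by more than a tolerance; the 5 cm figure and the assumption that both maps are registered to the same upward-looking viewpoint are mine, not the patent's.

```python
import numpy as np

def depth_anomaly_mask(reference_depth, captured_depth, tol_m=0.05):
    """Compare a captured time-of-flight depth image with the stored
    reference surface; depths are in metres from the upward-looking camera.
    True marks points hanging lower than the reference chassis."""
    # An attached object hangs below the chassis, i.e. closer to the
    # camera in the mat, so its captured depth is smaller.
    return (reference_depth - captured_depth) > tol_m
```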
To facilitate effective training of users in the process of object detection and analysis as described above, there is described, with reference to figures 5, 6 and 7, training apparatus utilising an augmented reality software application.
Figure 5 is a perspective view of apparatus used for training according to an embodiment of the invention. Shown in figure 5 is a model 30 of the underside of a generic vehicle. The model 30 is a substantially flat and rectangular piece of material and has dimensions substantially corresponding to the underside of an average vehicle applicable for the specific training exercise. It may be constructed from any suitable material which allows for ease of transportation and low cost, for example MDF. The model 30 is raised above the ground by a distance corresponding substantially to an average vehicle chassis and is supported by legs 44 and/or wheels.
The underside 32 of model 30 comprises one or more markers, denoted generally by reference 36. As shown in Figure 5, the markers 36 are arranged in a grid, although it will be appreciated that the markers can be arranged in other ways, and may be arranged randomly.
A computing device 34 comprising a graphical user interface, camera and an image scanning application is mounted on a wheeled block 38 which is attached to a handle 40. The handle facilitates adjustment of the angle of the computing device 34 and therefore the angle of the lens of the camera. The computing device 34 shown in Figure 5 is a tablet computer, although it will be appreciated that other portable devices having scanning capabilities can be used.
The handle 40 and wheels of mounting block 38 facilitate movement of the computing device under the model 30. When the camera 42 of the computing device is operational, an image scanning application scans the markers 36 on the underside of the model 30. The markers 36 are recognisable by the image scanning application as augmented reality (AR) triggers.
Figure 6 shows the marker displayed on the display/viewfinder of the computing device as the marker 01 is scanned. A marker can be any graphic or image that can act as a trigger for a suitably programmed augmented reality application.
Figure 8 illustrates the operational steps of an embodiment of the present invention. At step 50, the computing device scans the markers 36 on the underside of model 30. Recognition of a marker as an AR trigger by the image scanning application (step 52) causes the retrieval of an image associated with the marker. The image is of a part of the underside of a particular vehicle without any anomalous objects or modifications (i.e. as manufactured) and may be retrieved from a database stored locally on the computing device, or alternatively from a database stored remotely from the computing device. At step 54, the image scanning application randomly selects any number of images of anomalous objects or modifications from a database. At step 56, the two images are merged or overlaid to form a merged or composite image by techniques apparent to a person skilled in the art.
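The merge at step 56 can be as simple as compositing the selected anomaly image onto the retrieved chassis image. In the sketch below the random placement and the alpha blend are illustrative assumptions, and the anomaly image is assumed smaller than the chassis image.

```python
import random
import cv2

def make_composite(chassis_img, anomaly_imgs, alpha=0.85):
    """Steps 54 and 56: pick one anomaly image at random and blend it
    onto the retrieved chassis image at a random position."""
    overlay = random.choice(anomaly_imgs)
    h, w = overlay.shape[:2]
    y = random.randint(0, chassis_img.shape[0] - h)
    x = random.randint(0, chassis_img.shape[1] - w)
    composite = chassis_img.copy()
    roi = composite[y:y + h, x:x + w]
    composite[y:y + h, x:x + w] = cv2.addWeighted(overlay, alpha,
                                                  roi, 1 - alpha, 0)
    return composite
```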
The composite image is displayed on a display of the computing device.
Figure 7 illustrates the simulation of the display of an image of part of the underside of an actual vehicle. The image may also, or alternatively, be displayed on a device that is remote from the computing device 34. In this embodiment the image displayed on the computing device 34 may be mirrored on a desktop computing device, for example, or the recognition of the marker by the image scanning software of the computing device 34 may trigger the execution of an augmented reality application on a desktop computing device which is in communication with the computing device.
The vehicle, and the part of its underside, whose image is retrieved depends upon the specific marker recognised by the camera. Thus, as the training user moves the computing device under the model 30, the camera of the computing device 34 can recognise different markers and display different vehicle chassis on the display. For a specific marker, the augmented reality application can be programmed to trigger display of a specific image of a particular vehicle, or to trigger random display of images of different vehicles every time it is scanned.
At step 58, the computing device (or other device having a graphical user interface) generates and displays instructions prompting the user to provide input relating to the images displayed (step 56 of Figure 8), such as answers to questions like 'Is a foreign object present? Yes/No', and/or answers to multiple-choice questions relating to the type of object detected. The input data can be stored and analysed to provide a competency assessment.
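Scoring the stored responses against the known content of each composite image could be as simple as the following sketch; the data shapes (dicts keyed by a question or marker identifier) are illustrative assumptions.

```python
def competency_score(responses, answer_key):
    """Fraction of trainee answers that match ground truth. Both arguments
    map a question identifier to an answer string."""
    if not answer_key:
        return 0.0
    correct = sum(1 for q, truth in answer_key.items()
                  if responses.get(q) == truth)
    return correct / len(answer_key)
```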
In this way, real-life object detection is simulated by a training methodology and apparatus to effectively facilitate object detection and recognition training.

Claims (38)

1. A method of detecting anomalies on or near the underside of vehicles, comprising, by a computing device: receiving one or more images of at least part of the underside of a vehicle captured by one or more cameras, receiving identification information relating to the vehicle, retrieving an image of the underside of a vehicle from a database, comparing the retrieved image with the one or more images captured, determining differences between the retrieved image and the one or more captured images, and displaying a visual indication of the differences.
2. The method of claim 1, wherein displaying further comprises displaying the captured image.
3. The method of claim 1 or 2, wherein the step of retrieving comprises querying the database based on the one or more captured images to identify the closest matching image in the database.
4. The method of claim 1 or 2, wherein the step of retrieving comprises querying the database based on the identification information relating to the vehicle.
5. The method of any preceding claim, wherein the computing device comprises a graphical user interface and wherein the method further comprises outputting instructions, wherein the instructions comprise prompts for user input.
6. The method of any preceding claim, further comprising collating one or more of the one or more captured images into a single image of the whole or part of the underside of a vehicle.
7. The method of claim 6, wherein the collating is based on the spatial separation of the cameras.
8. The method of any preceding claim, further comprising outputting a description of the vehicle to which the captured image relates, wherein the description is based at least in part on the identification information.
9. The method of claim 8, further comprising prompting, by the image processing device, an input to verify the description of the vehicle.
10. The method of any preceding claim, wherein the computing device is connected to a network, and wherein the step of querying comprises querying a database stored remotely from the computing device.
11. The method of any preceding claim, further comprising outputting, by the computing device, an alert when the differences are determined.
12. The method according to claim 11, wherein the alert is visual.
13. The method according to claim 11, wherein the alert is audible.
14. A system for detecting anomalies on or near the underside of vehicles, comprising one or more cameras arranged to capture one or more images of the underside of a vehicle; a computing device in communication with the one or more cameras and comprising a graphical user interface configured to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle, retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences.
15. The system of claim 14, wherein the cameras are comprised in a mat and wherein the mat is configured to lie on the ground.
16. The system of claim 14 or 15, wherein at least one of the one or more cameras is a time of flight camera.
17. The system of claim 15 or 16, wherein operation of the one or more cameras is controlled by a microprocessor, and preferably wherein the microprocessor is comprised in the mat.
18. The system of any of claims 14 to 16, further comprising a sensor arranged to sense the presence of an approaching vehicle.
19. The system of claim 17, wherein the sensor is comprised in the mat.
20. The system of any of claims 14 to 18, wherein the computing device is portable and located remotely from the one or more cameras.
21. The system of any of claims 14 to 19, wherein the operation of the one or more cameras is controlled by the computing device.
22. The system of any of claims 14 to 20, wherein the computing device is further configured to output instructions to prompt user input.
23. A device for capturing images of the underside of vehicles, comprising an elongate body comprising one or more cameras partially embedded in the body, wherein the one or more cameras are configured to capture an image of the underside of a vehicle as a vehicle drives over the device.
24. The device of claim 23, further comprising a microprocessor in communication with the one or more cameras and configured to control operation of the one or more cameras.
25. The device of claim 23 or 24, further comprising a sensor, wherein the sensor senses the approach of a vehicle.
26. The device of any of claims 23 to 25, wherein at least one of the one or more cameras is a time of flight camera.
27. The device of any of claims 23 to 25, further comprising one or more LEDs.
28. The device of any of claims 23 to 26, wherein the microprocessor is further configured to send images captured by each of the one or more cameras to a computing device having a display.
29. The device of any of claims 23 to 28, wherein lenses of the one or more cameras are oriented upwards when the device lies on the ground.
30. The device of any of claims 23 to 29, wherein the device comprises more than one camera and wherein the cameras are evenly spaced apart from one another.
31. A system for simulating detection of anomalies on or near the underside of vehicles, comprising: a plurality of augmented reality trigger markers, a computing device for scanning one or more of the markers and a display for displaying images; wherein each marker is recognisable by the computing device, when scanned, as an augmented reality trigger and wherein the computing device is configured to: retrieve an image of the underside of a vehicle, wherein the image is associated with the marker, select, from a database, an image of an anomalous object or modification and merge the selected image and retrieved image, and display the merged image on the display.
32. The system of claim 31, wherein the computing device comprises the display, and wherein the computing device further comprises a graphical user interface.
33. The system of claim 32, wherein the computing device is further configured to display prompts to prompt user input.
34. The system of claim 33, wherein the computing device is further configured to receive and store user input.
35. The system of any of claims 31 to 34, wherein the computing device is a tablet computing device.
36. A method for simulating object detection, comprising, by a computing device: scanning one or more markers, recognising one of the one or more markers as an augmented reality trigger, retrieving an image of the underside of a vehicle, wherein the image is associated with the one or more markers recognised, selecting an image of an anomalous object or modification, and displaying, as a composite image, the selected image and the retrieved image.
37. A computer readable medium comprising instructions that, when executed, cause the method of claim 36 to be performed.
38. A system, device or method as herein described substantially with reference to, or as shown in, one or more of the accompanying drawings.
GB1314412.6A 2013-07-17 2013-08-12 Object detection and recognition system Withdrawn GB2516321A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1312799.8A GB2516279A (en) 2013-07-17 2013-07-17 Object detection and recognition system

Publications (2)

Publication Number Publication Date
GB201314412D0 GB201314412D0 (en) 2013-09-25
GB2516321A true GB2516321A (en) 2015-01-21

Family

ID=49081418

Family Applications (2)

Application Number Title Priority Date Filing Date
GB1312799.8A Withdrawn GB2516279A (en) 2013-07-17 2013-07-17 Object detection and recognition system
GB1314412.6A Withdrawn GB2516321A (en) 2013-07-17 2013-08-12 Object detection and recognition system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB1312799.8A Withdrawn GB2516279A (en) 2013-07-17 2013-07-17 Object detection and recognition system

Country Status (1)

Country Link
GB (2) GB2516279A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2535536B (en) * 2015-02-23 2020-01-01 Jaguar Land Rover Ltd Apparatus and method for displaying information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030185340A1 (en) * 2002-04-02 2003-10-02 Frantz Robert H. Vehicle undercarriage inspection and imaging method and system
US20040199785A1 (en) * 2002-08-23 2004-10-07 Pederson John C. Intelligent observation and identification database system
WO2004061771A1 (en) * 2003-01-07 2004-07-22 Stratech Systems Limited Intelligent vehicle access control system
EP1482329A1 (en) * 2003-04-01 2004-12-01 VBISS GmbH Method and system for detecting hidden object under vehicle
WO2004110054A1 (en) * 2003-06-10 2004-12-16 Teleradio Engineering Pte Ltd Under vehicle inspection shuttle system
WO2006091874A2 (en) * 2005-02-23 2006-08-31 Gatekeeper , Inc. Entry control point device, system and method
US20070009136A1 (en) * 2005-06-30 2007-01-11 Ivan Pawlenko Digital imaging for vehicular and other security applications
WO2007120206A2 (en) * 2005-11-11 2007-10-25 L-3 Communications Security And Detection Systems, Inc. Imaging system with long-standoff capability

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3287944A1 (en) * 2016-08-25 2018-02-28 Rolls-Royce plc Methods, apparatus, computer programs, and non-transitory computer readable storage mediums for processing data from a sensor
US10515258B2 (en) 2016-08-25 2019-12-24 Rolls-Royce Plc Methods, apparatus, computer programs, and non-transitory computer readable storage mediums for processing data from a sensor
US10823877B2 (en) 2018-01-19 2020-11-03 Intelligent Security Systems Corporation Devices, systems, and methods for under vehicle surveillance

Also Published As

Publication number Publication date
GB201312799D0 (en) 2013-08-28
GB201314412D0 (en) 2013-09-25
GB2516279A (en) 2015-01-21

Similar Documents

Publication Publication Date Title
US11455565B2 (en) Augmenting real sensor recordings with simulated sensor data
US20190065933A1 (en) Augmenting Real Sensor Recordings With Simulated Sensor Data
US11069257B2 (en) System and method for detecting a vehicle event and generating review criteria
CN104282150B (en) Recognition device and system of moving target
CN106485233A (en) Drivable region detection method, device and electronic equipment
US9990376B2 (en) Methods for identifying a vehicle from captured image data
CN107545232A (en) Track detection system and method
RU2017120682A (en) METHOD AND TRAINING SYSTEM FOR PREVENTING COLLISIONS USING AUDIO DATA
US20130107052A1 (en) Driver Assistance Device Having a Visual Representation of Detected Objects
JP2021534494A (en) Camera evaluation technology for autonomous vehicles
RU2017123627A (en) METHOD FOR VIRTUAL DATA GENERATION FROM SENSORS FOR IDENTIFICATION OF PROTECTIVE POST RECEIVERS
US10814800B1 (en) Vehicle imaging station
US11410526B2 (en) Dynamic rollover zone detection system for mobile machinery
CN109664889B (en) Vehicle control method, device and system and storage medium
Shamsudin et al. Fog removal using laser beam penetration, laser intensity, and geometrical features for 3D measurements in fog-filled room
GB2516321A (en) Object detection and recognition system
CN114445780A (en) Detection method and device for bare soil covering, and training method and device for recognition model
CN110544312A (en) Video display method and device in virtual scene, electronic equipment and storage device
JP2022531361A (en) Complex road type scene attribute annotations
US20230230203A1 (en) Vehicle undercarriage imaging
CN108664695B (en) System for simulating vehicle accident and application thereof
Tao et al. Smoky vehicle detection in surveillance video based on gray level co-occurrence matrix
CN211904213U (en) Vehicle bottom checking system
CN108363985B (en) Target object perception system testing method and device and computer readable storage medium
DE102019213930B4 (en) Method for optimizing the detection of surroundings in a vehicle

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)