GB2516279A - Object detection and recognition system - Google Patents
- Publication number
- GB2516279A
- Authority
- GB
- United Kingdom
- Prior art keywords
- image
- vehicle
- computing device
- cameras
- underside
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Quality & Reliability (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
A method and system for detecting anomalies, such as debris, on or near the underside of vehicles 10, comprises one or more cameras 14 arranged to capture one or more images of the underside of a car or other vehicle; a computing device in communication with the one or more cameras and comprising a means to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle (e.g. its license plate), retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences. The cameras may be housed in a mat 12. Independent claims are also included for an image-capture device, and a system for simulating detection of anomalies on the underside of vehicles using trigger markers.
Description
Object Detection and Recognition System
Field of the Invention
The present invention relates to object detection and recognition, and more specifically to object detection and recognition of anomalous objects on or near the underside of vehicles.
Background of the Invention
In many circumstances it is desirable or indeed necessary to check the underside of vehicles for alterations, objects or debris. This may be for maintenance or security reasons, amongst others. Since the underside of most vehicle chassis is not easily viewable when the vehicle is in normal use, inspection mirrors have been utilised by mechanics as well as in the police, military and security sectors for many years.
An inspection mirror often comprises a mirror attached to a handle. Such inspection mirrors are used whilst standing or sitting in close proximity to the vehicle while the vehicle is stationary. The need to position users close to the vehicle, as well as the need for the vehicle to be stationary, can result in inefficiencies, particularly when a number of vehicles must be inspected and when there are limited numbers of trained users who can be appropriately positioned.
Users of inspection mirrors may be stationed to inspect vehicles for prolonged periods. The repetitious nature of inspections can be monotonous and so lead to decreasing concentration and alertness. This can result in poor assessments and ultimately jeopardise the quality and effectiveness of the inspection exercise.
Furthermore, users of an inspection mirror must be adequately trained in order to make proper assessments regarding the existence or otherwise of vehicle alterations, anomalous objects or debris that may be encountered. Existing training mechanisms often rely on direct experience and do not prepare users for the risk of concentration loss.
The present invention aims to avoid or at least mitigate some of the drawbacks of the prior art.
Summary of the Invention
According to a first aspect of the invention, there is provided a method of detecting anomalies on or near the underside of vehicles, comprising, by a computing device: receiving one or more images of at least part of the underside of a vehicle captured by one or more cameras, receiving identification information relating to the vehicle, retrieving an image of the underside of a vehicle from a database, comparing the retrieved image with the one or more images captured, determining differences between the retrieved image and the one or more captured images, and displaying a visual indication of the differences.
Preferably, the step of displaying further comprises displaying the captured image.
Preferably, the step of retrieving comprises querying the database based on the one or more captured images to identify the closest matching image in the database. The step of retrieving may comprise querying the database based on the identification information relating to the vehicle.
Preferably, the computing device comprises a graphical user interface and the method further comprises outputting instructions, wherein the instructions may comprise prompts for user input.
Preferably, the method further comprises collating one or more of the one or more captured images into a single image of the whole or part of the underside of a vehicle. The collating is preferably based on the spatial separation of the cameras.
Preferably, the method further comprises outputting a description of the vehicle to which the captured image relates, wherein the description is based at least in part on the identification information. Optionally, the method further comprises prompting, by the image processing device, an input to verify the description of the vehicle.
Preferably, the computing device is connected to a network, and wherein the step of querying comprises querying a database stored remotely from the computing device.
The method preferably further comprises outputting, by the computing device, an alert when the differences are determined. The alert may be visual and/or audible.
According to a second aspect of the invention there is provided a system for detecting anomalies on or near the underside of vehicles, comprising one or more cameras arranged to capture one or more images of the underside of a vehicle; a computing device in communication with the one or more cameras and comprising a graphical user interface configured to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle, retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences.
Preferably, the cameras are comprised in a mat and the mat is configured to lie on the ground.
Operation of the one or more cameras may be controlled by a microprocessor, and preferably the microprocessor is comprised in the mat. The system may further comprise a sensor arranged to sense the presence of an approaching vehicle, and the sensor may also be comprised in the mat.
Preferably, the computing device is portable and located remotely from the one or more cameras. Optionally, the operation of the one or more cameras is controlled by the computing device. Preferably, the computing device is further configured to output instructions to prompt user input.
The methods and systems claimed provide for the automated collection of images and improved image recognition of physical objects and modifications of objects. This facilitates a quicker and more accurate assessment of potential anomalies and thereby also improves safety.
According to a third aspect of the invention, there is provided a device for capturing images of the underside of vehicles, comprising an elongate body comprising one or more cameras partially embedded in the body, wherein the one or more cameras are configured to capture an image of the underside of a vehicle as a vehicle drives over the device.
The device provides a convenient and efficient means by which images of the underside of vehicles can be obtained whilst the vehicle is moving and which does not require human intervention.
Preferably, the device further comprises a microprocessor in communication with the one or more cameras and may be configured to control operation of the one or more cameras.
The device preferably comprises a sensor, wherein the sensor senses the approach of a vehicle. The device may comprise one or more LEDs.
Preferably, the microprocessor is configured to send images captured by each of the one or more cameras to a computing device having a display.
Optionally, the lenses of the one or more cameras are oriented upwards when the device lies on the ground. The device preferably comprises more than one camera and preferably the cameras are evenly spaced apart from one another.
According to a fourth aspect of the invention, there is provided a system for simulating detection of anomalies on or near the underside of vehicles, comprising: a plurality of augmented reality trigger markers, a computing device for scanning one or more of the markers and a display for displaying images; wherein each marker is recognisable by the computing device, when scanned, as an augmented reality trigger and wherein the computing device is configured to: retrieve an image of the underside of a vehicle, wherein the image is associated with the marker, select, from a database, an image of an anomalous object or modification and merge the selected image and retrieved image, and display the merged image on the display.
The computing device may comprise the display, and may also comprise a graphical user interface. The computing device is preferably further configured to display prompts to prompt user input, and may also be configured to receive and store user input. Preferably, the computing device is a tablet computing device.
According to a fifth aspect of the invention, there is provided a method for simulating object detection, comprising, by a computing device: scanning one or more markers, recognising one of the one or more markers as an augmented reality trigger, retrieving an image of the underside of a vehicle, wherein the image is associated with the one or more markers recognised, selecting an image of an anomalous object or modification, and displaying, as a composite image, the selected image and the retrieved image.
Brief description of the drawings
A preferred embodiment of the invention will now be described by way of example with reference to the following drawings, in which:
Figure 1 is a perspective view of a vehicle approaching a camera mat;
Figure 2 is an underside view of a vehicle and camera mat;
Figure 3 shows a schematic of a camera mat;
Figure 4 is a flow diagram outlining the operation steps involved according to an embodiment of the invention;
Figure 5 is a perspective view of training apparatus;
Figure 6 illustrates recognition of an augmented reality marker by a computing device;
Figure 7 illustrates the display of a vehicle chassis on a computing device;
Figure 8 is a flow diagram outlining the operation steps involved according to a further embodiment of the invention.
Detailed description
Figure 1 shows vehicle 10, whose chassis underside is to be inspected, and a camera mat 12 lying on the ground in front of the vehicle. The camera mat 12 is preferably longer than the width of the vehicles to be inspected, and is typically approximately 3-4 metres in length. Mat 12 is preferably solid and is constructed from a resilient material such as rubber, although other suitable materials may be used. Mat 12 houses, in its body, one or more cameras, denoted generally by 14 and described further with reference to Figure 3. The lenses of cameras 14 are oriented upwards. In use, and as shown in Figure 1, vehicle 10 approaches the camera mat 12 in a direction perpendicular to the length of the camera mat 12.
The underside of the chassis of vehicle 10 is shown in Figure 2. The cameras 14 of the camera mat 12 capture images of the underside of the vehicle as the vehicle drives over the mat at a slow speed. Each camera captures one or more images of the underside of the vehicle. The number of images captured by each camera depends on the location of the mat 12, the number and arrangement of cameras in the mat 12, the type/specification of the cameras, the types of vehicles 10 being inspected, and/or the specific foreign objects that the vehicles are being checked for. If analysis of only a specific part of the underside of the chassis is requested (for example, the rear section of the vehicle), the cameras may be configured to capture only a single image as the rear part of the vehicle passes over the mat 12.
The mat 12 also comprises a microprocessor (not shown). The microprocessor is in communication with the cameras 14 such that the microprocessor controls the operation of the cameras and further controls the time at which the cameras 14 are to capture images. The mat 12 further houses a sensor (not shown), in communication with the microprocessor, which senses when a vehicle is approaching the mat 12. The sensor is configured to sense an approaching vehicle by any suitable means. When the sensor determines that a vehicle is approaching, it sends a signal to the microprocessor and the microprocessor controls the operation of one or more of the cameras 14 accordingly. For example, the microprocessor may delay operation of the cameras until a predetermined time interval has passed. The time interval may be chosen based on the distance at which the sensor senses the approaching vehicle and an estimated speed of the vehicle.
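By way of illustration only, such a time interval might simply be the sensing distance divided by an assumed approach speed. The following minimal sketch shows that calculation; the distance, speed and function name are illustrative assumptions and are not taken from the patent.

```python
# A sketch only: deriving the camera trigger delay from the sensing distance
# and an assumed approach speed. Both values are illustrative assumptions.
SENSING_DISTANCE_M = 5.0   # distance at which the sensor detects the vehicle
ESTIMATED_SPEED_MS = 1.4   # assumed slow drive-over speed (~5 km/h)

def capture_delay_seconds(distance_m: float, speed_ms: float) -> float:
    """Time to wait after the sensor signal before triggering the cameras."""
    return distance_m / speed_ms

print(f"Trigger cameras after {capture_delay_seconds(SENSING_DISTANCE_M, ESTIMATED_SPEED_MS):.1f} s")
```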
In an alternative embodiment, the microprocessor housed in the mat 12 is configured to receive instructions, via a wired or wireless connection, from a computing device located remotely from the mat. The computing device has image processing capabilities and a display, and is preferably a portable tablet computer but may be a mobile telephone or a desktop computer. The computing device controls the operation of the cameras, via the microprocessor, in response to user input (for example, a user may provide an input in real time to instruct the microprocessor to control image capture by the cameras 14). In a further alternative embodiment, each of the camera devices is in wired or wireless communication with the image processing device.
As shown in Figure 3, the camera mat 12 comprises six cameras 14 spaced substantially evenly along the length of mat 12. The cameras 14 are embedded in the body of the mat 12; the mat 12 typically has a domed cross section. It will be appreciated that alternative arrangements of the cameras 14 may be adopted. The lens of each camera is exposed by a gap in the outer material of the mat 12. To protect the lens of each camera 14, a durable plastic shell is located above the lens. The plastic shell typically extends slightly above the rubber surface of the mat and has an aperture through which the camera may capture an image of a vehicle above the mat 12. The mat 12 may also comprise reflective spheres such that the mat is made visible by reflections from the spheres under incident vehicle headlights. This serves as an indication to the driver of the vehicle that the mat is ahead and may act as a further prompt to reduce the speed of the vehicle. Although not shown in Figure 3, the mat 12 additionally or alternatively comprises one or more lights, such as LEDs, to facilitate identification of the mat 12 in low light conditions.
As or before the vehicle approaches the mat 12, information identifying the vehicle, such as the make and model, is detected or is manually input into a computing device having a display and image processing capabilities and operated by trained users. For example, users may enter the registration number of the vehicle, or may be prompted to enter the make and model manually or select the make and model from a list. The type, make and model or other form of vehicle identification may be automatically detected, for example by RFID tags in or on the vehicles, or by automatic vehicle recognition.
Upon determination of the type and/or make and model of the vehicle, the computing device accesses a directory of images. The directory is a database which may be stored locally on the computing device or accessed over a network, such as the internet, via a suitable connection.
The directory comprises images of the undersides of the chassis of substantially all vehicle makes and models as originally manufactured. The images in the directory thus illustrate what the underside of the approaching vehicle should look like (i.e. as it would appear before the vehicle is used, and therefore free of debris, modifications etc.). The image corresponding to the specific vehicle approaching the mat is retrieved and displayed on the display of the computing device. The user may be prompted to verify that the make and model of the vehicle corresponding to the selected image is in fact the same make and model as the vehicle approaching the mat 12.
In an alternative embodiment, the directory is queried after the images of the underside of the vehicle have been captured by the cameras. In this embodiment, an image of part or the whole of the underside of the vehicle is collated by the image processing device. Instead of searching the database for the make and model of the vehicle, the database is queried based on the captured image to find the image which provides the closest match to the captured image. This may be particularly useful when the make and model of the approaching vehicle is difficult to determine because of significant bodywork alterations or false number plates.
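The patent does not specify how the closest match is found; one minimal sketch of such a query, using a simple average-hash comparison in Python with Pillow and NumPy, is shown below. The hashing scheme, the directory layout and all file names are illustrative assumptions.

```python
# A sketch only: finding the closest-matching reference image with a simple
# average hash. The directory layout and file names are assumptions.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 16) -> np.ndarray:
    """Downscale to a small greyscale grid and threshold at the mean intensity."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def closest_match(captured_path: str, directory: dict) -> str:
    """Return the directory key whose image is nearest in Hamming distance."""
    captured = average_hash(captured_path)
    return min(directory,
               key=lambda key: np.count_nonzero(captured != average_hash(directory[key])))

# The directory maps an identifier to the as-manufactured underside image.
reference_images = {"make_a_model_x": "underside_ax.png",
                    "make_b_model_y": "underside_by.png"}
best = closest_match("captured_underside.png", reference_images)
print(f"Closest matching reference: {best}")
```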
Once the image has been retrieved, a match analysis routine is executed, as discussed below, to identify the differences between the retrieved image and the captured image. The match analysis routine typically involves image subtraction, although other known techniques can be used.
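A minimal sketch of a subtraction-based match analysis of this kind, using OpenCV in Python, follows. It assumes the retrieved and captured images are already aligned and equally sized (real images would first require registration); the threshold and file names are illustrative assumptions rather than details taken from the patent. The final loop also sketches the overlay of a visual indicator of the kind described at step 20 below.

```python
# A sketch only: subtraction-based comparison of an aligned reference image
# and captured image. File names and the threshold value are assumptions.
import cv2

retrieved = cv2.imread("reference_underside.png", cv2.IMREAD_GRAYSCALE)
captured = cv2.imread("captured_underside.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(retrieved, captured)                    # pixel-wise subtraction
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # keep strong differences
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # suppress speckle noise

# Outline each remaining region of difference on the captured image.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
annotated = cv2.cvtColor(captured, cv2.COLOR_GRAY2BGR)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("differences_highlighted.png", annotated)
```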
The process of capturing images and detecting anomalies will now be described with reference to Figure 4. At step 16, the cameras capture an image of the underside of the vehicle. The images are sent, at step 18, via a wired or wireless connection to the computing device having a graphical user interface and operated by users. The computing device collates the images taken by each camera and constructs an image of part or the whole of the underside of the vehicle. The collated image is also displayed on the display of the computing device alongside the image retrieved from the directory.
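One minimal sketch of such a collation step follows. It assumes the cameras are evenly spaced along the mat and that each frame covers an equal, adjoining slice of the underside; a practical system would also correct for overlap and lens distortion. File names and function names are illustrative assumptions.

```python
# A sketch only: collating per-camera frames into one strip image, in camera
# order along the mat. Frame heights are equalised before concatenation.
import cv2
import numpy as np

def collate_strip(frame_paths: list[str]) -> np.ndarray:
    """Concatenate frames side by side in camera order along the mat."""
    frames = [cv2.imread(path) for path in frame_paths]
    height = min(frame.shape[0] for frame in frames)
    frames = [cv2.resize(frame, (frame.shape[1], height)) for frame in frames]
    return np.hstack(frames)  # one wide image of the underside

strip = collate_strip([f"camera_{i}.png" for i in range(6)])  # six mat cameras
cv2.imwrite("underside_collated.png", strip)
```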
At step 20, the computing device executes a match analysis software routine to compare the two images. The execution may be automatic upon receipt of the captured image and/or the retrieved image, or may be triggered by a user command. The match analysis routine analyses the two images to identify discrepancies between, for example, the shapes and arrangement of components identifiable from the images. Any discrepancies that are identified are highlighted on the captured image by any suitable means that will be apparent to a person skilled in the art. For example, a visual indicator may appear overlaid on the image, or an alarm may sound, prompting the user of the image processing device to review the images to ascertain the exact cause of the discrepancy. It will be apparent to a person skilled in the art that the sequence of method steps described may be altered according to specific circumstances.
At step 22, the computing device presents the user with further instructions. Where discrepancies have been identified, the image processing device may prompt the user to make a further visual assessment based on the image displayed and then prompt an input such as 'clear' or 'hold for manual inspection'. This particular embodiment may be used where the discrepancies are relatively minor. In other cases where discrepancies are found, the image processing device is configured to display instructions. Such instructions may relate to preventing the vehicle from travelling further, directing the vehicle into a particular cordoned area for repairs or inspection by an engineer or mechanic, requesting that the driver and all passengers get out of the vehicle, manually inspecting the discrepancies using a mirror, capturing further images of the underside of the vehicle using a tablet computer or mobile phone having a camera facility, or requesting the driver of the vehicle to drive over the mat 12 again.
In a further embodiment, the camera of a tablet computer or mobile phone is used to capture images of the underside of the vehicle. In this embodiment, a vehicle is stationary while users hold the tablet computer, which may be attached to an elongate handle, under the vehicle.
Once a sufficient number of images are captured, software on the tablet computer is configured to collate the images appropriately. The collated image is compared with an image selected from a database in accordance with any of the embodiments described above.
To facilitate effective training of users in the process of object detection and analysis as described above, there is described, with reference to figures 5, 6 and 7, training apparatus utilising an augmented reality software application.
Figure 5 is a perspective view of apparatus used for training according to an embodiment of the invention. Shown in Figure 5 is a model 30 of the underside of a generic vehicle. The model 30 is a substantially flat and rectangular piece of material and has dimensions substantially corresponding to the underside of an average vehicle applicable to the specific training exercise. It may be constructed from any suitable material which allows for ease of transportation and low cost, for example MDF. The model 30 is raised above the ground by a distance corresponding substantially to the ground clearance of an average vehicle chassis and is supported by legs 44 and/or wheels.
The underside 32 of model 30 comprises one or more markers, denoted generally by reference 36. As shown in Figure 5, the markers 36 are arranged in a grid, although it will be appreciated that the markers can be arranged in other ways, and may be arranged randomly.
A computing device 34 comprising a graphical user interface, camera and an image scanning application is mounted on a wheeled block 38 which is attached to a handle 40. The handle facilitates adjustment of the angle of the computing device 34 and therefore the angle of the lens of the camera. The computing device 34 shown in Figure 5 is a tablet computer, although it will be appreciated that other portable devices having scanning capabilities can be used.
The handle 40 and wheels of mounting block 38 facilitate movement of the computing device under the model 30. When the camera 42 of the computing device is operational, an image scanning application scans the markers 36 on the underside of the model 30. The markers 36 are recognisable by the image scanning application as augmented reality (AR) triggers.
Figure 6 shows the marker displayed on the display/viewfinder of the computing device as a marker 36 is scanned. A marker can be any graphic or image that can act as a trigger for a suitably programmed augmented reality application.
Figure 8 illustrates the operational steps of an embodiment of the present invention. At step 50, the computing device scans the markers 36 on the underside of model 30. Recognition of a marker as an AR trigger by the image scanning application (step 52) causes the retrieval of an image associated with the marker. The image is of a part of the underside of a particular vehicle without any anomalous objects or modifications (i.e. as manufactured) and may be retrieved from a database stored locally on the computing device, or may alternatively be retrieved from a database stored remotely from the computing device. At step 54, the image scanning application randomly selects any number of images of any number of anomalous objects or modifications from a database. At step 56, the two images are merged or overlaid to form a merged or composite image by techniques apparent to a person skilled in the art.
The composite image is displayed on a display of the computing device.
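A minimal sketch of the merging of step 56 follows, overlaying a randomly selected anomaly image onto the retrieved chassis image using Pillow. The placement logic, the use of an alpha channel and all file names are illustrative assumptions; the patent leaves the merging technique to the skilled person.

```python
# A sketch only: compositing a randomly chosen anomaly image onto the
# retrieved chassis image. File names and random placement are assumptions.
import random
from PIL import Image

chassis = Image.open("chassis_as_manufactured.png").convert("RGBA")
anomaly = Image.open(random.choice(["debris_1.png", "package_2.png"])).convert("RGBA")

# Assumes the anomaly image is smaller than the chassis image; pick a
# position that keeps the anomaly fully inside the chassis image.
x = random.randint(0, chassis.width - anomaly.width)
y = random.randint(0, chassis.height - anomaly.height)

composite = chassis.copy()
composite.alpha_composite(anomaly, dest=(x, y))  # respects the anomaly's transparency
composite.save("training_composite.png")
```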
Figure 7 illustrates the simulation of the display of an image of part of the underside of an actual vehicle. The image may also, or alternatively, be displayed on a device that is remote from the computing device 34. In this embodiment the image displayed on the computing device 34 may be mirrored on a desktop computing device, for example, or the recognition of the marker by image scanning software of the computing device 34 may trigger the execution of an augmented reality application on a desktop computing device which is in communication with the computing device 34.
The actual vehicle, and the part of the underside of the vehicle, that is retrieved when the marker acts as a trigger depends upon the specific marker recognised by the camera. Thus, as the training user moves the computing device under the model 30, the camera of the computing device 34 can recognise different markers and display different vehicle chassis on the display. For a specific marker, the augmented reality application can be programmed to trigger display of a specific image of a particular vehicle, or programmed to trigger random display of images of different vehicles every time it is scanned.
At step 58, the computing device (or other device having a graphical user interface) generates and displays instructions prompting the user to provide input relating to the images displayed (step 56 of Figure 8), such as answers to questions such as 'Is a foreign object present?' ('Yes'/'No') and/or multiple choice questions relating to the type of object detected. The input data can be stored and analysed to provide a competency assessment. A minimal sketch of this prompting and scoring appears below.
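The sketch below models step 58 as a console interaction; the questions, answer key and proportion-correct scoring rule are illustrative assumptions, since the patent does not prescribe how the competency assessment is computed.

```python
# A sketch only: console prompts and a simple proportion-correct score.
# The questions, answer key and scoring rule are illustrative assumptions.
QUESTIONS = [
    ("Is a foreign object present? (yes/no)", "yes"),
    ("Where is it located? (a) wheel arch (b) exhaust (c) sill", "b"),
]

def run_assessment() -> float:
    correct = 0
    responses = []                        # stored for later analysis
    for prompt, answer in QUESTIONS:
        reply = input(prompt + " ").strip().lower()
        responses.append(reply)
        if reply == answer:
            correct += 1
    return correct / len(QUESTIONS)       # competency score in [0, 1]

print(f"Competency score: {run_assessment():.0%}")
```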
In this way, real-life object detection is simulated by a training methodology and apparatus to effectively facilitate object detection and recognition training.
Claims (36)
- Claims 1. A method of detecting anomalies on or near the underside of vehicles, comprising, by a computing device: receiving one or more images of at least part of the underside of a vehicle captured by one or more cameras, receiving identification information relating to the vehicle, retrieving an image of the underside of a vehicle from a database, comparing the retrieved image with the one or more images captured, determining differences between the retrieved image and the one or more captured images, and displaying a visual indication of the differences.
- 2. The method of claim 1, wherein displaying further comprises displaying the captured image.
- 3. The method of claim 1 or 2, wherein the step of retrieving comprises querying the database based on the one or more captured images to identify the closest matching image in the database.
- 4. The method of claim 1 or 2, wherein the step of retrieving comprises querying the database based on the identification information relating to the vehicle.
- 5. The method of any preceding claim, wherein the computing device comprises a graphical user interface and wherein the method further comprises outputting instructions, wherein the instructions comprise prompts for user input.
- 6. The method of any preceding claim, further comprising collating one or more of the one or more captured images into a single image of the whole or part of the underside of a vehicle.
- 7. The method of claim 6, wherein the collating is based on the spatial separation of the cameras.
- 8. The method of any preceding claim, further comprising outputting a description of the vehicle to which the captured image relates, wherein the description is based at least in part on the identification information.
- 9. The method of claim 8, further comprising prompting, by the image processing device, an input to verify the description of the vehicle.
- 10. The method of any preceding claim, wherein the computing device is connected to a network, and wherein the step of querying comprises querying a database stored remotely from the computing device.
- 11. The method of any preceding claim, further comprising outputting, by the computing device, an alert when the differences are determined.
- 12. The method according to claim 11, wherein the alert is visual.
- 13. The method according to claim 11, wherein the alert is audible.
- 14. A system for detecting anomalies on or near the underside of vehicles, comprising one or more cameras arranged to capture one or more images of the underside of a vehicle; a computing device in communication with the one or more cameras and comprising a graphical user interface configured to display images; wherein the computing device is configured to: receive one or more images of at least part of the underside of a vehicle captured by the one or more cameras, receive identification information relating to the vehicle, retrieve an image of the underside of a vehicle from a database, compare the retrieved image with the one or more images captured, determine differences between the retrieved image and the one or more captured images, and display a visual indication of the differences.
- 15. The system of claim 14, wherein the cameras are comprised in a mat and wherein the mat is configured to lie on the ground.
- 16. The system of claim 14 or 15, wherein operation of the one or more cameras is controlled by a microprocessor, and preferably wherein the microprocessor is comprised in the mat.
- 17. The system of any of claims 14 to 16, further comprising a sensor arranged to sense the presence of an approaching vehicle.
- 18. The system of claim 17, wherein the sensor is comprised in the mat.
- 19. The system of any of claims 14 to 18, wherein the computing device is portable and located remotely from the one or more cameras.
- 20. The system of claim 19, wherein the operation of the one or more cameras is controlled by the computing device.
- 21. The system of any of claims 14 to 20, wherein the computing device is further configured to output instructions to prompt user input.
- 22. A device for capturing images of the underside of vehicles, comprising an elongate body comprising one or more cameras partially embedded in the body, wherein the one or more cameras are configured to capture an image of the underside of a vehicle as a vehicle drives over the device.
- 23. The device of claim 22, further comprising a microprocessor in communication with the one or more cameras and configured to control operation of the one or more cameras.
- 24. The device of claim 22 or 23, further comprising a sensor, wherein the sensor senses the approach of a vehicle.
- 25. The device of any of claims 22 to 24, further comprising one or more LEDs.
- 26. The device of any of claims 22 to 25 wherein the microprocessor is further configured to send images captured by each of the one or more cameras to a computing device having a display.
- 27. The device of any of claims 22 to 26, wherein lenses of the one or more cameras are oriented upwards when the device lies on the ground.
- 28. The device of any of claims 22 to 27, wherein the device comprises more than one camera and wherein the cameras are evenly spaced apart from one another.
- 29. A system for simulating detection of anomalies on or near the underside of vehicles, comprising: a plurality of augmented reality trigger markers, a computing device for scanning one or more of the markers and a display for displaying images; wherein each marker is recognisable by the computing device, when scanned, as an augmented reality trigger and wherein the computing device is configured to: retrieve an image of the underside of a vehicle, wherein the image is associated with the marker, select, from a database, an image of an anomalous object or modification and merge the selected image and retrieved image, and display the merged image on the display.
- 30. The system of claim 29, wherein the computing device comprises the display, and wherein the computing device further comprises a graphical user interface.
- 31. The system of claim 29, wherein the computing device is further configured to display prompts to prompt user input.
- 32. The system of claim 29, wherein the computing device is further configured to receive and store user input.
- 33. The system of any of claims 29 to 32, wherein the computing device is a tablet computing device.
- 34. A method for simulating object detection, comprising, by a computing device: scanning one or more markers, recognising one of the one or more markers as an augmented reality trigger, retrieving an image of the underside of a vehicle, wherein the image is associated with the one or more markers recognised, selecting an image of an anomalous object or modification, and displaying, as a composite image, the selected image and the retrieved image.
- 35. A computer readable medium comprising instructions that, when executed, cause the method of claim 34 to be performed.
- 36. A system, device or method as herein described substantially with reference to, or as shown in, one or more of the accompanying drawings.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1312799.8A GB2516279A (en) | 2013-07-17 | 2013-07-17 | Object detection and recognition system |
GB1314412.6A GB2516321A (en) | 2013-07-17 | 2013-08-12 | Object detection and recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1312799.8A GB2516279A (en) | 2013-07-17 | 2013-07-17 | Object detection and recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201312799D0 (en) | 2013-08-28
GB2516279A (en) | 2015-01-21
Family
ID=49081418
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1312799.8A | Object detection and recognition system (published as GB2516279A, withdrawn) | 2013-07-17 | 2013-07-17
GB1314412.6A | Object detection and recognition system (published as GB2516321A, withdrawn) | 2013-07-17 | 2013-08-12
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1314412.6A | Object detection and recognition system (published as GB2516321A, withdrawn) | 2013-07-17 | 2013-08-12
Country Status (1)
Country | Link |
---|---|
GB (2) | GB2516279A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201614492D0 (en) * | 2016-08-25 | 2016-10-12 | Rolls Royce Plc | Methods, apparatus, computer programs, and non-transitory computer readable storage mediums for processing data from a sensor |
US10823877B2 (en) | 2018-01-19 | 2020-11-03 | Intelligent Security Systems Corporation | Devices, systems, and methods for under vehicle surveillance |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030185340A1 (en) * | 2002-04-02 | 2003-10-02 | Frantz Robert H. | Vehicle undercarriage inspection and imaging method and system |
WO2004061771A1 (en) * | 2003-01-07 | 2004-07-22 | Stratech Systems Limited | Intelligent vehicle access control system |
US20040199785A1 (en) * | 2002-08-23 | 2004-10-07 | Pederson John C. | Intelligent observation and identification database system |
EP1482329A1 (en) * | 2003-04-01 | 2004-12-01 | VBISS GmbH | Method and system for detecting hidden object under vehicle |
WO2004110054A1 (en) * | 2003-06-10 | 2004-12-16 | Teleradio Engineering Pte Ltd | Under vehicle inspection shuttle system |
WO2006091874A2 (en) * | 2005-02-23 | 2006-08-31 | Gatekeeper , Inc. | Entry control point device, system and method |
US20070009136A1 (en) * | 2005-06-30 | 2007-01-11 | Ivan Pawlenko | Digital imaging for vehicular and other security applications |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007120206A2 (en) * | 2005-11-11 | 2007-10-25 | L-3 Communications Security And Detection Systems, Inc. | Imaging system with long-standoff capability |
- 2013-07-17 GB GB1312799.8A patent/GB2516279A/en not_active Withdrawn
- 2013-08-12 GB GB1314412.6A patent/GB2516321A/en not_active Withdrawn
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016135056A1 (en) * | 2015-02-23 | 2016-09-01 | Jaguar Land Rover Limited | Apparatus and method for displaying information |
US10255705B2 (en) | 2015-02-23 | 2019-04-09 | Jaguar Land Rover Limited | Apparatus and method for displaying information |
Also Published As
Publication number | Publication date |
---|---|
GB2516321A (en) | 2015-01-21 |
GB201312799D0 (en) | 2013-08-28 |
GB201314412D0 (en) | 2013-09-25 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |