EP1946268A2 - A method and system for network based automatic and interactive inspection of vehicles - Google Patents

A method and system for network based automatic and interactive inspection of vehicles

Info

Publication number
EP1946268A2
Authority
EP
European Patent Office
Prior art keywords
vehicle
images
image
computer program
captured
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP06796200A
Other languages
German (de)
French (fr)
Other versions
EP1946268A4 (en)
Inventor
Anoop Prabuh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kritikal Securescan PVT Ltd
Original Assignee
Kritikal Securescan PVT Ltd
Application filed by Kritikal Securescan PVT Ltd
Publication of EP1946268A2
Publication of EP1946268A4

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection


Abstract

A method and a system for network based automatic inspection of vehicles which detect the location and time instants of a vehicle at predefined and configurable points and provide point readouts at those locations. Images of the vehicle are captured, and the detected locations, time instants and images are sent to a computing system. The images are processed to form a composite image, automatically adjusting the intensity and contrast to improve the visual clarity of the processed image. These images are compared with pre-captured images stored in the database at the computing system.

Description

A Method And System For Network Based Automatic And Interactive Inspection Of Vehicles
Field of Invention
The instant invention relates to a method and system for network based automatic and interactive inspection of vehicles. It generally relates to capturing multiple images of a vehicle, calibrating the images to form a composite image or a video sequence, and inspecting the vehicle for any unidentified object.
Background of the invention
The need for security at commercial and civilian establishments can never be ignored. Recent bomb attacks have made it important to check every possible place where bombs can be placed. In recent times, terrorists have been known to use the underside of vehicles to plant explosives and to smuggle objects across security checkpoints. Apart from this, legitimate users of the vehicles and the establishments have also been known to smuggle out goods/documents by hiding them in the underside of vehicles. Cross-border human trafficking has also been carried out in certain cases by hiding people in the underside of vehicles. Hence there is a need for a system that facilitates easy inspection of the underside of vehicles to detect any hidden objects.
Presently, inspection of the underside is performed predominantly using inspection wells or inspection mirrors. Certain establishments also use more advanced tools, such as cameras equipped with lighting means, to inspect the underside.
GB2249690 proposed to 'image the underside of a vehicle using a TV monitor connected to a camera set into a pit or mounted in a hump in the road over which the vehicle travels' with a wide angle camera which produces line or strip outputs as the vehicle moves over the hump.
GB2258321 proposed to use a line-scan camera looking up at the vehicle surface through a mirror in order to reduce the height of the underground unit.
Certain companies also supply a video based viewing system which consists of four or more cameras arranged transversely, while certain other companies offer systems which try to provide an image of the underside through use of a camera equipped with a wide-angle lens. However, all the above systems have multiple disadvantages from both operational and security viewpoints. The adjustable camera plus light viewer system is just an advanced version of the inspection mirror, and hence most problems inherent in the inspection mirror, such as viewing only parts of the vehicle, tedious operation, large inspection times and unreliability, are also present in the inspection camera system. The video viewing systems present videos of the vehicle underside, but the complete underside is not visible at any point in the video. Hence it is difficult for the operator to form a complete mental picture of the underside and determine whether there is any abnormality in it. The systems which offer composite underside images for inspection are among the best in the whole group, providing a complete view of the vehicle underside in a single image. However, most of these systems provide very little or no control to the operator in terms of interactively inspecting the image so provided in greater detail. Apart from this, all these systems also face the problem of hidden areas in the underside, because the cameras always look up at the underside at a particular angle only, usually vertically up.
Apart from the need for security, there is also a need to inspect vehicle surfaces for any defects or abnormalities from a manufacturing and maintenance point of view.
Both GB2249690 and GB2258321 produce a composite image of the vehicle underside as output of the system.
EP1482329A1 describes a means of automated image comparison, but its method detects prominent vehicle features like wheel location, wheel-base, etc. to index the database and subsequently for image comparison purposes. Its method also assumes that multiple instances of vehicle underside images can be aligned together by a simple translation.
Objectives and Summary of the invention
It is an object of the instant invention to obviate the above drawbacks and provide a method and system for network based automatic and interactive inspection of vehicles.
It is another object of the instant invention to provide a composite image and, if required, video of the vehicle which captures every portion of the vehicle.
It is yet another object of the instant invention to automatically adjust the intensity and improve the resolution of the captured image. It is further an object to compare the captured images with pre-stored images.
The instant invention provides a method for network based automatic inspection of vehicles comprising detecting the arrival of a vehicle near a vehicle inspection system by a first sensor in said system, initiating said system by said sensor, detecting the location and time instants of said vehicle at predefined and configurable points and providing point readouts for said location detection, capturing images of said vehicle, as the vehicle moves, by a set of image capturing means, transmitting said images, detected location and time instants to a computing system, processing the captured images using said time instants and detected locations, automatically adjusting the intensity and improving the resolution of the processed image, displaying, inspecting and storing said processed images, analyzing a user defined portion of the composite image, if desired, and detecting the exit of the vehicle and deactivating said vehicle inspection system by a second sensor.
The invention also provides a system for network based automatic inspection of vehicles, wherein said system comprises a vehicle inspection system, said system including a first sensor for detecting the arrival of a vehicle near said system and initiating the system, means for detecting the location of the vehicle and instants at predefined points, a set of image capturing means for capturing a plurality of images, and a second sensor for detecting the exit of the vehicle and deactivating said system, a computing means, said means including processing means for processing the images captured by said image capturing means using said detected location, means configured for automatically adjusting the intensity and improving the resolution of the processed image, means for displaying, inspecting and storing said images, and means configured for analyzing a user defined portion of the composite image, if desired, and transmitting means for transmitting said images to said computing means.
It also provides for a computer readable recording medium for storing a computer program to cause a computer to perform the steps of detecting the arrival of a vehicle near a vehicle inspection system by a first sensor in said system, initiating said system by said sensor, detecting the location and time instants of said vehicle at predefined and configurable points and providing point readouts at said location, capturing images of said vehicle as the vehicle moves over the inspection system by a set of image capturing means, transmitting said images, detected location and time instants to a computing system, processing the captured images using said time instants and detected locations, automatically adjusting the intensity and contrast for improving the visual clarity of the processed image, displaying, inspecting and storing said processed images, analyzing a user defined portion of the composite image, if desired, and detecting the exit of the vehicle from the inspection system and deactivating said vehicle inspection system by a second sensor.
Brief description of the accompanying drawings
Figure 1 relates to the overview of the components of the vehicle detection system.
Figure 2 describes the multiuser architecture of the system.
Figure 3 shows the top view of the vehicle detection system.
Figure 4 describes the method of the invention by a flowchart.
Figure 5 shows the various locations of a vehicle for which the time instants of entry/exit are detected.
Figure 6 shows a calibrated image created from an uncalibrated image.
Figure 7 shows the image being divided into local sub-images for automatic adjustment.
Figure 8 shows the current image comparison result.
Figure 9 shows a computing system.
Detailed description of the drawings
The instant system can be divided into three parts: a vehicle detection and imaging system (V), computing means (C), and transmission means for communicating between the above two.
An overview of the vehicle detection system is given in figure 1. It houses a plurality of cameras for imaging the vehicle surface. These cameras are covered for protection (10). There is a slit covered by a transparent sheet (11) over the camera, so as to enable it to capture the image. Another camera array is below the sheet (12). The camera captures the image through a mirror and light assembly (13). The mirror is arranged at a suitable angle so as to reflect the image of the vehicle to the camera, which is placed below sheet (12). The light assembly helps illuminate the surface of the vehicle of which the image is to be captured. The assembly may also incorporate means like a reflective sheath to focus the light and/or diffusion means to optimally adjust the quality and quantity of lighting. The kind of lighting and the intensity thereof would depend on the kind of cameras used. If low-light cameras are used, lighting may not be required at all. The cameras include line-scan and area-scan cameras fitted with lenses capable of capturing the complete width of vehicles. The vehicle can be anything from passenger cars and other small vehicles to military trucks, trailer trucks, trains, etc. driving over the vehicle detection system.
The cameras below sheet (11) are located at strategic positions in order to view regions of the underside which are not easily visible to the central camera array, due to the view angle of the central camera and the fact that the undersides of most vehicles are nowhere near flat and consist of many cavities, nooks and other structures at varying heights.
Arrays of sensors (S1 and S2) are used to provide inputs about the presence/absence of the vehicle over the scanning unit at any point in time. A sensor array comprising sensors placed at different points along the vehicle track may be used to determine the average instantaneous speed of the vehicle. Sensors may be of different kinds: induction loops, load-cells or pressure sensors, proximity sensors, sonar or ultrasound sensors, laser rangefinders, infra-red sensors, etc.
The sensors detect the arrival of the vehicle and initiate the vehicle detection system. They also detect the location of the vehicle at various time instants, using which the images captured by the cameras, are calibrated and a composite image is formed. The sensors also detect the exit of the vehicle to deactivate the system.
The captured images are sent by transmission means to a computing means (15). This can be any electronic device which has a display, an input means and image-processing means, and is configured for multitasking. The input means may include a touch panel, a voice recognition module and a character recognition module.
The system is a multiuser system and is network based. Figure 2 describes the architecture. Multiple users (U1..Un) sitting at remote locations can access the images captured or the composite image formed at the computing means. The vehicle whose image is being captured can be viewed in real time from the remote locations.
The vehicle (14) approaches the vehicle detection system (V) and the system captures the images. The images are sent to the computing system (C1) which, if needed, transmits them to other computing systems (C2) in other remote areas through transmission means. Such transmission may take place over the network. The users may also access the databases at the various computing systems which hold the previous images of the vehicle. For example, if a vehicle X reaches a location L1 and has previously been detected at a location L2, the user may compare the images captured presently with the images captured and saved at location L2. Accordingly, a vehicle can be monitored from many places as and when required.
The vehicle detection system as a whole, seen from the top, is shown in figure 3. A vehicle can be seen approaching the system, which is further connected to the computing system.
The computing system includes a processing platform which may be of any kind depending on the data rates from the various cameras and the kind of operations to be performed on the video streams (see below for details of the various options), ranging from low-end processors like x86 or ARM to high-end Pentium, AMD Athlon or IBM PowerPC processors dedicated for this purpose. In certain cases, multi-processor configurations like the ones used for servers or high-end computations could also be used. The processing platform may have several hardware interfaces like parallel port, serial port, PCI, ISA, IEEE1394, USB, PS/2, etc., not all of which might be used. The processing platform may have several software/firmware layers like the OS, drivers, software specific to vehicle inspection, software and libraries for other applications, etc.
The communication link between the computing system and vehicle detection system may be any of Ethernet, RS485, USB, RF, InfraRed, BlueTooth, 802.11, GPRS, CDMA, etc.
The computing system houses various modules. It has a module which helps in image calibration and in forming a composite image. In the case of video images, it helps in mosaicing.
It runs a graphical user interface (GUI), which displays the images, helps in accessing previously stored images and videos, and compares the images and gives the appropriate result. The computing system also has means to identify the make and model of the vehicle. The computing system also automatically adjusts the brightness/contrast and the intensity of the image. This is as per the input by the user about the desired intensity; the user may enter the range of desired brightness, which is configurable. The intensity of the image is also adjusted as per the distance of the vehicle from the ground, i.e. the varying ground clearance. The vehicle parts close to the light source get brightly illuminated, whereas the faraway parts remain dark. Accordingly, the intensity of those parts can be adjusted. Also, a user may desire to inspect a particular portion of the image, corresponding to a particular area of the vehicle, which may be suspect. The computing system provides the facility of selecting a particular portion of the image and adjusting its intensity as desired.
The computing system also houses a module for automatically improving the resolution of the captured image.
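The description leaves the resolution-improvement technique open; the classification (G06T3/4053) points to super-resolution. Purely as an illustration of one such technique, a minimal shift-and-add multi-frame super-resolution sketch follows. The function name, the integer upscale factor and the assumption that sub-pixel shifts have already been estimated by registration are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Naive multi-frame super-resolution: place each low-res frame onto an
    upsampled grid at its estimated (sub-pixel) shift and average overlapping
    samples. `frames` is a list of 2-D arrays; `shifts` gives each frame's
    (dy, dx) offset in low-res pixels, found beforehand by registration."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to the nearest high-res grid position.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    cnt[cnt == 0] = 1  # leave never-hit pixels at zero rather than divide by 0
    return acc / cnt
```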
The functionalities of the above modules are described in detail in the description of the figures below.
Figure 4 describes the working of the instant invention through a flowchart. The approach of a vehicle is first detected by the sensors installed. They send signals to initiate the vehicle detection system. A light assembly, cameras and the array of sensors are activated instantly.
The vehicle passes over the vehicle detection system. The system automatically adjusts the captured images to compensate for the varying speeds of the vehicles. The sensors detect the location of the vehicle over the system at various time instants. The cameras capture the image of the vehicle. As the vehicle moves away from the system, the captured images and time instants are sent to the computing system. The computing system does the image calibration. A composite image is formed. This image is then inspected for any unidentified part, either automatically by the system or manually. The images are stored in a database.
The vehicle detection system also captures images of the occupants and the license number of the vehicle. The license number is extracted from its image, and the system checks whether any image of the same vehicle already exists in the database. The images may then be matched.
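The patent does not name an OCR technique or a database layout for this lookup. As an illustration only, the step could be sketched as below; the use of the pytesseract wrapper around Tesseract OCR and the `previous_scans` table schema are assumptions of this sketch.

```python
import sqlite3
import pytesseract
from PIL import Image

def lookup_vehicle(plate_image_path, db_path="inspections.db"):
    """Read the license number from its image and fetch any earlier
    underside images stored for the same plate (schema is illustrative)."""
    plate = pytesseract.image_to_string(Image.open(plate_image_path))
    plate = "".join(ch for ch in plate if ch.isalnum()).upper()
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT captured_at, image_path FROM previous_scans WHERE plate = ?",
        (plate,),
    ).fetchall()
    con.close()
    return plate, rows  # rows are the candidates for image matching
```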
The vehicle is allowed to go if the computing system shows an Ok status for the vehicle. The sensors detect the exit of the vehicle and deactivate the system.
The image calibration done by the instant method is described below. The camera(s) take multiple images to form the composite image. These camera(s) might be line-scan cameras or area-scan cameras. In either case, the frame rate of the camera is fixed for a particular site, and hence the number of images captured by the main camera(s) would essentially depend on the time for which the vehicle is over the inspection area. If the vehicle takes more time, the number of images would be large; conversely, if the vehicle takes less time to cross the inspection area, the number of images would be small.
Hence there is a need to calibrate and normalize all the images captured by the system to bring them down to the same scale, such that images of the same make and model are similar in length and width. Calibration can be done by computing the velocity of the vehicle at different points along the length of the inspection area and using these velocity inputs to adjust the length and width of the image accordingly in the processing step, after all the images are captured. Calibrating the image using knowledge of the time instants at which the vehicle crosses certain pre-determined fixed locations, and using these inputs to compute an average velocity over a part of the image, proves convenient.
Consider figure 5, which illustrates the same. L1, L2, ... are predetermined locations at which the vehicle's entry and exit time instants are recorded by the system, employing suitable sensors like pressure sensors, loop sensors, infra-red sensors, lasers or any other kind of sensor which can give a point readout when the vehicle enters or exits a particular location. The distances between L1, L2, ... are known to the system, and the time instants at which the vehicle entered/exited the corresponding locations allow the system to compute the average velocity of the vehicle within that time span.
For example, if D(Li, Lj) is the distance between the locations Li and Lj, and ti and tj are the time instants at which the vehicle entered the corresponding locations, then the average velocity within the period ti to tj is given by
V(ti, tj) = D(Li, Lj) / (tj - ti)
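As an illustrative example, if D(Li, Lj) = 2 m and the vehicle enters Li at ti = 10.0 s and Lj at tj = 10.1 s, the average velocity over that span is 2 / 0.1 = 20 m/s.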
Using the average velocity of the vehicle between the various points, and correlating it to the time instants at which different parts of the vehicle were imaged, the composite image(s) can be calibrated to a standard reference velocity, which normalizes the overall length and width of the composite images for different vehicles. In the case of area-scan cameras, image registration among the original images is used to calibrate the composite image.
Figure 6 shows a calibrated image formed from an uncalibrated image. The image region, marked out by the arrows denoted Li and Lj in the uncalibrated image, is converted to the corresponding image region in the calibrated image.
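A minimal sketch of this time-instant based calibration for a line-scan composite is given below. It assumes each image row carries a capture timestamp, assigns each row the average velocity of the sensor span it falls in, and stretches or compresses the rows so that the output corresponds to a standard reference velocity. The nearest-neighbour resampling and all names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def segment_velocities(distances, entry_times):
    """Average velocity V(ti, tj) = D(Li, Lj) / (tj - ti) for consecutive
    sensor locations; distances[k] is the gap between L_k and L_{k+1}."""
    return [d / (t2 - t1)
            for d, t1, t2 in zip(distances, entry_times, entry_times[1:])]

def calibrate_rows(image, row_times, entry_times, velocities, v_ref):
    """Rescale a line-scan composite along its length so that every part
    looks as if the vehicle had moved at the reference velocity v_ref.
    `image` has one row per scan line; len(row_times) == image.shape[0]."""
    # Velocity assigned to each row: that of the sensor span it falls in.
    seg = np.clip(np.searchsorted(entry_times, row_times) - 1,
                  0, len(velocities) - 1)
    v_rows = np.asarray(velocities)[seg]
    # A row scanned at velocity v covers v / v_ref of a "reference" row, so
    # the cumulative position in the calibrated image grows by that amount.
    pos = np.cumsum(v_rows / v_ref)
    out_len = int(np.ceil(pos[-1]))
    # For each output row, take the first input row whose cumulative
    # position reaches it (nearest-neighbour resampling for brevity).
    idx = np.clip(np.searchsorted(pos, np.arange(1, out_len + 1)),
                  0, image.shape[0] - 1)
    return image[idx]
```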
The working of other modules of the computing system is now described.
Automatic adjustment of the image intensity comes into use because the undersides of vehicles are typically nowhere near flat and comprise various overhangs, cavities and objects at different heights at different parts along the length and width of the vehicle. While the lighting unit would provide uniform lighting if the vehicle being imaged were close to flat, the variations mentioned result in the images captured by the cameras having uneven lighting at different parts, with some parts very dark or under-exposed while other parts are very bright or over-exposed. In either case the details in these parts of the images get lost. In order to compensate for the above variations and compute a resulting image which is globally corrected for proper image intensity and clarity at all parts, the image is divided into logical sub-images as depicted in figure 7.
A measure of the local intensity and contrast in each of the sub-images is computed. Once these measures are computed for each of the regions, a correction can be applied to the regions by comparing the measures with configurable preset reference values.
The number of regions that the image is divided into is configurable. Possible divisions are regions of size 16x16 or 32x32, etc. The actual size of the regions can be chosen based on the amount of local correction that is required. Smaller regions can result in better globally corrected images, but very small regions produce sudden variations across the boundaries of the regions. Very large regions improve the average intensity and contrast of the image, but local outliers like small dark or over-saturated patches would not be corrected. The amount of computation required, and hence the delay in displaying the processed image, also varies according to the size of the region selected. Hence the size of the region is a configurable parameter which can be varied according to the operator's needs and convenience. The above method describes a means to globally correct the image for variations that arise due to the presence of under-exposed and over-exposed regions in the same image. Another scenario is the lighting variation that arises because different vehicles have different ground clearances: the lighting unit provides less illumination for vehicles with higher ground clearance and more illumination for vehicles with lower ground clearance. This results in images of varying intensities according to the varying ground clearances.
The same technique of image adjustment based on the measures described above can be applied in this case as well, without dividing the image into smaller regions, which requires much less computation than working with multiple regions. Alternatively, if corrections for both kinds of variation need to be performed, the region-based technique above will suffice.
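A minimal sketch of the region-wise correction follows, assuming the block mean as the intensity measure, the standard deviation as the contrast measure, and a linear gain/offset mapping toward configurable preset references; the patent does not fix these particular choices.

```python
import numpy as np

def correct_regions(image, region=32, ref_mean=128.0, ref_std=48.0):
    """Divide an 8-bit grayscale image into region x region blocks, measure
    each block's mean (intensity) and standard deviation (contrast), and
    remap the block so its statistics move to the preset reference values."""
    out = image.astype(np.float64).copy()
    h, w = image.shape
    for y in range(0, h, region):
        for x in range(0, w, region):
            block = out[y:y + region, x:x + region]
            mean, std = block.mean(), block.std()
            gain = ref_std / std if std > 1e-6 else 1.0
            # Blending across block borders is omitted, which is why very
            # small regions show the boundary artifacts noted above.
            out[y:y + region, x:x + region] = (block - mean) * gain + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```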
The identification of the make and model by the instant invention includes the system storing standard images of different makes and models of vehicles in its database. For all images in the database the following measures are computed. The image is divided into various sub-regions as illustrated in figure 7. The exact size of the different regions would depend on the level of accuracy that is required in the detection process versus the amount of computation time that can be spent on it. Smaller regions would produce better results but would be more time consuming. Regions of size 32x32 or 48x48 should be good choices, giving fast results without compromising on accuracy.
For each of the sub-regions the most prominent features in the region are located. The method for doing this feature detection can be any of the standard feature detection techniques commonly known in the literature, as long as it produces consistent and suitably sparse feature detections across various images. For purposes of illustration below, a measure provided by Lucas and Kanade is used. Ref: B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI '81), April 1981, pp. 674-679. http://www.ri.cmu.edu/pubs/pub_2548.html
The top few features which are sparsely located are selected based on their feature strength, and normalized cross-correlation is performed on small windows around the features in the same region of the standard image; thus the top candidate matches are computed. The correct matches for the various features are selected by checking the consistency of the matches using translation as a measure across the different features. A global measure of the match between the current image and the standard image is computed by summing the correlation measures of the pruned matching features after outlier elimination. Regions whose pruned feature count falls below a certain fraction of the original feature count are eliminated from the global similarity measure.
The standard image having the maximum global similarity and any other images which fall within a certain percentage threshold of the maximum are candidates for make and model match and are displayed to the operator.
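The sketch below illustrates this matching step under stated assumptions: the Lucas-Kanade feature-strength measure is taken as the smaller eigenvalue of the local gradient structure tensor, candidate matches are scored by normalized cross-correlation (NCC) of small windows, and matches whose translation strays from the regional median are pruned as outliers. The window sizes, tolerances and simple top-N feature selection are illustrative choices, not the patented parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def feature_strength(gray):
    """Lucas-Kanade 'good feature' measure: the smaller eigenvalue of the
    2x2 gradient structure tensor, accumulated over a 5x5 window."""
    gy, gx = np.gradient(gray.astype(np.float64))
    win = np.ones((5, 5))
    sxx = convolve2d(gx * gx, win, mode="same")
    syy = convolve2d(gy * gy, win, mode="same")
    sxy = convolve2d(gx * gy, win, mode="same")
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return tr / 2 - np.sqrt(np.maximum(tr * tr / 4 - det, 0.0))

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 1e-9 else 0.0

def match_region(cur, std, n_feats=5, half=7, search=6):
    """Match the strongest features of sub-region `cur` into the same
    sub-region `std` of the standard image; return the summed correlation
    of the translation-consistent matches, or None if too few survive."""
    s = feature_strength(cur)
    s[:half, :] = s[-half:, :] = s[:, :half] = s[:, -half:] = 0  # keep windows in bounds
    # Top-N features by strength (a full system would also enforce sparsity).
    pts = np.column_stack(np.unravel_index(
        np.argsort(s, axis=None)[::-1][:n_feats], s.shape))
    matches = []
    for y, x in pts:
        tpl = cur[y - half:y + half + 1, x - half:x + half + 1]
        best = (-1.0, 0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if half <= yy < std.shape[0] - half and half <= xx < std.shape[1] - half:
                    c = ncc(tpl, std[yy - half:yy + half + 1, xx - half:xx + half + 1])
                    if c > best[0]:
                        best = (c, dy, dx)
        matches.append(best)
    # Outlier elimination: keep matches whose translation agrees with the median.
    med_dy = np.median([m[1] for m in matches])
    med_dx = np.median([m[2] for m in matches])
    kept = [m for m in matches
            if abs(m[1] - med_dy) <= 2 and abs(m[2] - med_dx) <= 2]
    if len(kept) < len(matches) // 2:
        return None  # region dropped from the global similarity measure
    return sum(m[0] for m in kept)
```

Summing the surviving regional scores over all sub-regions then gives the global similarity used to rank the standard images, as described above.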
Once the vehicle's make and model have been identified by the above method, the current image can be automatically compared with the standard image stored in the database to determine whether there are any dissimilarities in the image of the current vehicle. The process of vehicle make and model identification produces as output a set of feature point matches between the current image and the standard image for each of the sub-regions in the image. Using the feature point matches, translation parameters can be computed to warp each sub-region of the current image to the corresponding sub-region of the standard image. The computed translation parameters are used to transform the image sub-region into the same frame of reference as the standard image. After performing the transformation, the two image regions can be correlated to get a measure of the similarity of the whole region. Regions whose similarity measure is less than a certain threshold (say 0.9) are flagged as candidates for possible dissimilarities; these regions are marked out in a suitable color and an alarm is raised at the operator console prompting him/her to inspect the region in greater detail.
Smaller regions (say of size 48x48 or 64x64) are ideal choices for performing image comparison because they localize the possible dissimilarity well while at the same time not raising too many false alarms.
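A sketch of this comparison stage under the same illustrative assumptions: each sub-region of the current image is aligned to the standard image by the translation recovered from its feature matches, the aligned blocks are correlated, and blocks whose correlation falls below the 0.9 threshold mentioned above are flagged for the operator. The per-block translation dictionary and all names are assumptions of this sketch.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized blocks."""
    a, b = a - a.mean(), b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / d) if d > 1e-9 else 0.0

def flag_dissimilar_regions(cur, std, translations, region=64, thresh=0.9):
    """Return top-left corners of sub-regions whose correlation with the
    standard image, after translation alignment, falls below `thresh`.
    translations[(y, x)] holds the (dy, dx) computed from the feature
    matches of the block whose top-left corner is (y, x)."""
    flagged = []
    h, w = cur.shape
    for y in range(0, h - region + 1, region):
        for x in range(0, w - region + 1, region):
            dy, dx = translations.get((y, x), (0, 0))
            yy, xx = y + dy, x + dx
            if not (0 <= yy and yy + region <= std.shape[0]
                    and 0 <= xx and xx + region <= std.shape[1]):
                continue  # aligned block falls outside the standard image
            a = cur[y:y + region, x:x + region]
            b = std[yy:yy + region, xx:xx + region]
            if ncc(a, b) < thresh:
                flagged.append((y, x))  # candidate dissimilarity: mark, alarm
    return flagged
```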
The instant system also includes a computer program product configured to perform the steps described above. The computer program product can run on any electronic device configured to perform the same. The modules described above run with the help of executable computer programs. Figure 9 shows such an electronic device, consisting of various subsystems interconnected with the help of a system bus (112). The microprocessor (113) communicates with and controls the functioning of the other subsystems. Memory (114) helps the microprocessor in its functioning by storing instructions and data during execution. Permanent storage (115) is used to hold data and instructions of a permanent nature, like the operating system and other programs. The display adapter (116) is used as an interface between the system bus and the display device (117), which is generally a monitor. The network interface (118) is used to connect the computer with other computers on a network through wired or wireless means. The computer may also be connected to the internet for transmitting and downloading relevant data. The computer system might also contain a sound card (not shown). The system is connected to various input devices like a keyboard (119) and mouse (120) and output devices like a printer (121). Various configurations of these subsystems are possible.
It will readily be appreciated by those skilled in the art that the present invention is not limited to the specific embodiments shown herein. Thus variations may be made within the scope and spirit of the accompanying claims without sacrificing the principal advantages of the invention.

Claims

We Claim
1. A method for network based automatic inspection of vehicles comprising:
- detecting the arrival of a vehicle near a vehicle inspection system by a first sensor in said system,
- initiating said system by said sensor,
- detecting the location and time instants of said vehicle at predefined and configurable points and providing point readouts at said location,
- capturing images of said vehicle as the vehicle moves over the inspection system by a set of image capturing means,
- transmitting said images, detected location and time instants to a computing system,
- processing the captured images using said time instants and detected locations,
- automatically adjusting the intensity and contrast for improving the visual clarity of the processed image,
- displaying, inspecting and storing said processed images,
- analyzing a user defined portion of the composite image, if desired, and
- detecting the exit of the vehicle from the inspection system and deactivating said vehicle inspection system by a second sensor.
2. The method as claimed in claim 1, wherein multiple users can perform the method at multiple remote locations for the same vehicle.
3. The method as claimed in claim 1, wherein multiple users can perform the method at multiple remote locations for different vehicles.
4. The method as claimed in claim 1, wherein a plurality of still images of said vehicle is captured using said set of cameras.
5. The method as claimed in claim 4, wherein a single composite image is formed by said processing of the still images.
6. The method as claimed in claim 5, wherein the images are processed by calibrating the captured images.
7. The method as claimed in claim 1, wherein a set of videos is captured using the image capturing means.
8. The method as claimed in claim 1, wherein super-resolution techniques are applied to the said videos captured in claim 7 in order to improve their resolution.
9. The method as claimed in claim 7, wherein said videos are processed to form separate mosaics, the mosaics covering the entire length of the vehicle, the width being according to the view of the image capturing means.
10. The method as claimed in claim 1, wherein said automatic intensity adjustment is configurable according to an intensity range given as input.
11. The method as claimed in claims 1 and 10, wherein said intensity adjustment is configurable to take care of the varying ground clearance of different vehicles.
12. The method as claimed in claim 1, wherein said method comprises identifying the make and model of a vehicle.
13. The method as claimed in claim 1, wherein said computing system compares said images with the images in the database.
14. The method as claimed in claim 1, wherein said method comprises capturing images of license plate and occupants of the vehicle.
15. The method as claimed in claim 1, wherein said method comprises converting the license plate number image to machine readable alpha-numeric format.
16. A system for network based automatic inspection of vehicles, wherein said system comprises:
a) a vehicle inspection system, said system including:
• a first sensor for detecting the arrival of a vehicle near said system and initiating the system,
• means for detecting the location of the vehicle and time instants at predefined points,
• a set of image capturing means for capturing a plurality of images, and
• a second sensor for detecting the exit of the vehicle and deactivating said system,
b) a computing means, said means including:
• processing means for processing the images captured by said image capturing means using said detected location,
• means configured for automatically adjusting the intensity and contrast for improving the visual clarity of the processed image,
• means for displaying, inspecting and storing said images, and
• means configured for analyzing a user defined portion of the processed images and videos, if desired, and
c) transmitting means for transmitting said images to said computing means.
17. The system as claimed in claim 16, wherein said system further comprises a light and mirror assembly.
18. The system as claimed in claim 17, wherein said light and mirror assembly comprises:
• a reflecting mirror, arranged at an appropriate angle, for viewing the lighted area of the vehicle,
• a reflective sheath for the light, and
• diffusion means to optimally adjust the quality and quantity of light.
19. The system as claimed in claim 16, wherein said image capturing means comprise:
- an array of line scan cameras,
- a plurality of area scan cameras, and
- a plurality of cameras for capturing the license number and occupants of the vehicle.
20. The system as claimed in claim 19, wherein the angles of said cameras are configurable.
21. The system as claimed in claims 18 and 20, wherein, if said cameras comprise low light cameras, the light assembly is disabled.
22. The system as claimed in claim 19, wherein said area scan cameras capture the vehicle portion not visible in the composite image.
23. The system as claimed in claim 16, wherein said computing means further comprises:
• input and display means,
• means for communicating with the vehicle inspection system,
• means for storing the videos and images forwarded by the vehicle inspection system, and
• means for comparing the captured images with the stored images.
24. The system as claimed in claim 23, wherein said input means comprise touch panel, voice recognition and handwriting recognition.
25. The system as claimed in claim 24, wherein said computing means has multitasking means for image processing to improve effective throughput.
26. The system as claimed in claim 16, wherein said vehicles have tags to identify the vehicles, said tags comprising RFID tags or other tags.
27. The system as claimed in claims 16 and 26, wherein the computing means comprise functionality for reading the tags on the vehicles and index (or lookup) means for associating each tag with a unique vehicle license plate number.
28. The system as claimed in claim 16, wherein said system is deployed to inspect the top of the vehicle moving through the inspection area.
29. The system as claimed in claim 16, wherein said system is portable and can be deployed as per the requirement.
30. The system as claimed in claim 16, wherein said system comprises chemical and explosive sensors.
31. The system as claimed in claim 16, wherein said communication means comprise wireless communication.
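Claims 9 and 19 describe line scan cameras whose outputs are assembled into mosaics covering the entire length of the vehicle, with the point readouts of claim 1 supplying the vehicle's position over time. The sketch below shows one assumed way to use those readouts: each strip is resampled so that a fixed ground distance maps to a fixed number of output rows, keeping the mosaic geometrically consistent even when the vehicle changes speed. The function and parameter names are illustrative, not taken from the specification.

    import numpy as np

    def build_mosaic(strips, positions, row_pitch_m=0.002):
        """Assemble line scan strips into one mosaic along the vehicle's length.

        strips:      list of H x W arrays, one per readout interval
        positions:   vehicle position (metres) at the start of each strip
        row_pitch_m: ground distance represented by one output row
        """
        rows = []
        for i, strip in enumerate(strips):
            if i + 1 == len(strips):
                rows.append(strip)  # no next readout: keep the last strip as captured
                continue
            travelled = positions[i + 1] - positions[i]  # distance covered during this strip
            target_h = max(1, int(round(travelled / row_pitch_m)))
            idx = np.linspace(0, strip.shape[0] - 1, target_h).astype(int)
            rows.append(strip[idx])  # resample rows to cancel speed variation
        return np.vstack(rows)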
32. A computer readable recording medium for storing a computer program to cause a computer to perform the steps of:
- detecting the arrival of a vehicle near a vehicle inspection system by a first sensor in said system,
- initiating said system by said sensor,
- detecting the location and time instants of said vehicle at predefined and configurable points and providing point readouts at said location,
- capturing images of said vehicle as the vehicle moves over the inspection system by a set of image capturing means,
- transmitting said images, detected location and time instants to a computing system,
- processing the captured images using said time instants and detected locations,
- automatically adjusting the intensity and contrast for improving the visual clarity of the processed image,
- displaying, inspecting and storing said processed images,
- analyzing a user defined portion of the composite image, if desired, and
- detecting the exit of the vehicle from the inspection system and deactivating said vehicle inspection system by a second sensor.
33. The computer program product as claimed in claim 32, wherein multiple users can perform the method at multiple remote locations for the same vehicle.
34. The computer program product as claimed in claim 32, wherein multiple users can perform the method at multiple remote locations for different vehicles.
35. The computer program product as claimed in claim 32, wherein a plurality of still images of said vehicle is captured using said set of image capturing means.
36. The computer program product as claimed in claim 35, wherein a single composite image is formed by said processing of the still images.
37. The computer program product as claimed in claim 36, wherein the images are processed by calibrating the captured images.
38. The computer program product as claimed in claim 32, wherein a set of videos is captured using the image capturing means.
39. The computer program product as claimed in claim 38, wherein super-resolution techniques are applied to said videos in order to improve their resolution.
40. The computer program product as claimed in claim 38, wherein said videos are processed to form separate mosaics, the mosaics covering the entire length of the vehicle, the width being determined by the field of view of the image capturing means.
41. The computer program product as claimed in claim 32, wherein said automatic intensity adjustment is configurable according to an intensity range given as input.
42. The computer program product as claimed in claims 32 and 41, wherein said intensity adjustment is configurable to accommodate the varying ground clearance of different vehicles.
43. The computer program product as claimed in claim 32, wherein said method comprises identifying the make and model of a vehicle.
44. The computer program product as claimed in claim 32, wherein said computing system compares said captured images with images stored in a database.
45. The computer program product as claimed in claim 32, wherein said method comprises capturing images of the license plate and occupants of the vehicle.
46. The computer program product as claimed in claim 32, wherein said method comprises converting the license plate number image to a machine readable alpha-numeric format.
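Claims 15 and 46 recite converting the captured license plate image to a machine readable alpha-numeric format. The specification does not name an OCR engine; the sketch below assumes OpenCV for preprocessing and the Tesseract engine (via pytesseract) purely for illustration. Restricting the character set to upper-case letters and digits reflects the alpha-numeric format the claims call for.

    import cv2
    import pytesseract  # assumed OCR back end; any engine exposing a text API would do

    def plate_to_text(plate_bgr) -> str:
        """Convert a cropped license plate image to a machine readable string."""
        gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
        # upsample small crops so character strokes survive binarization
        gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(
            binary,
            config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
        return "".join(ch for ch in text if ch.isalnum())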
EP06796200A 2005-09-12 2006-09-12 A method and system for network based automatic and interactive inspection of vehicles Withdrawn EP1946268A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN2459DE2005 2005-09-12
PCT/IN2006/000348 WO2007032025A2 (en) 2005-09-12 2006-09-12 A method and system for network based automatic and interactive inspection of vehicles

Publications (2)

Publication Number Publication Date
EP1946268A2 2008-07-23
EP1946268A4 EP1946268A4 (en) 2012-08-01

Family

ID=37865393

Family Applications (1)

Application Number Title Priority Date Filing Date
EP06796200A Withdrawn EP1946268A4 (en) 2005-09-12 2006-09-12 A method and system for network based automatic and interactive inspection of vehicles

Country Status (3)

Country Link
EP (1) EP1946268A4 (en)
AP (1) AP2008004411A0 (en)
WO (1) WO2007032025A2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3308356B1 (en) * 2015-06-09 2020-04-08 Vehant Technologies Private Limited System and method for detecting a dissimilar object in undercarriage of a vehicle
US11580800B2 (en) * 2018-11-08 2023-02-14 Verizon Patent And Licensing Inc. Computer vision based vehicle inspection report automation
US11080327B2 (en) * 2019-04-18 2021-08-03 Markus Garcia Method for the physical, in particular optical, detection of at least one usage object
BR112023011476A2 (en) 2020-12-15 2024-02-06 Selex Es Inc SYSTEMS AND METHODS FOR TRACKING ELECTRONIC SIGNATURES

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4030285B2 * 2001-10-10 2008-01-09 Tokuyama Corporation Substrate and manufacturing method thereof
US7315321B2 (en) * 2002-03-05 2008-01-01 Leonid Polyakov System of and method for warning about unauthorized objects attached to vehicle bottoms and/or adjoining areas
US7102665B1 (en) * 2002-12-10 2006-09-05 The United States Of America As Represented By The Secretary Of The Navy Vehicle underbody imaging system

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US5473318A (en) * 1992-01-10 1995-12-05 Active Control Technology Inc. Secure remote control system with receiver controlled to add and delete identity codes
US20030185340A1 (en) * 2002-04-02 2003-10-02 Frantz Robert H. Vehicle undercarriage inspection and imaging method and system
WO2004061771A1 (en) * 2003-01-07 2004-07-22 Stratech Systems Limited Intelligent vehicle access control system
WO2005053314A2 (en) * 2003-11-25 2005-06-09 Fortkey Limeted Inspection apparatus and method

Non-Patent Citations (3)

Title
DICKSON P ET AL: "Mosaic generation for under vehicle inspection", APPLICATIONS OF COMPUTER VISION, 2002. (WACV 2002). PROCEEDINGS. SIXTH IEEE WORKSHOP ON 3-4 DEC. 2002, PISCATAWAY, NJ, USA,IEEE, 3 December 2002 (2002-12-03), pages 251-256, XP010628757, ISBN: 978-0-7695-1858-9 *
See also references of WO2007032025A2 *
YEOMAN C R ED - SANSON L D: "Under vehicle examination and novel applications of digital storage techniques", SECURITY TECHNOLOGY, 1995. PROCEEDINGS. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 29TH ANNUAL 1995 INTERNATIONAL CARNAHAN CONFERENCE ON SANDERSTEAD, UK 18-20 OCT. 1995, NEW YORK, NY, USA, IEEE, US, 18 October 1995 (1995-10-18), pages 119-128, XP010196401, DOI: 10.1109/CCST.1995.524744 ISBN: 978-0-7803-2627-9 *

Also Published As

Publication number Publication date
AP2008004411A0 (en) 2008-04-30
EP1946268A4 (en) 2012-08-01
WO2007032025A3 (en) 2008-10-02
WO2007032025A2 (en) 2007-03-22

Similar Documents

Publication Publication Date Title
US7349007B2 (en) Entry control point device, system and method
US5956122A (en) Iris recognition apparatus and method
US6829371B1 (en) Auto-setup of a video safety curtain system
US7889931B2 (en) Systems and methods for automated vehicle image acquisition, analysis, and reporting
US5760829A (en) Method and apparatus for evaluating an imaging device
AU2006333500B2 (en) Apparatus and methods for inspecting a composite structure for defects
US10861147B2 (en) Structural health monitoring employing physics models
US20070009136A1 (en) Digital imaging for vehicular and other security applications
US20050002544A1 (en) Apparatus and method for sensing the occupancy status of parking spaces in a parking lot
CN107662875 Monitoring and detection of the engagement of passenger conveyor steps and the comb plate
JPH0390979A (en) Method of detecting and analyzing position of inversely reflective automobile confirming article and image analyzing system
KR101630596B1 (en) Photographing apparatus for bottom of car and operating method thereof
KR20150137666A (en) Security optical turnstile system
EP1946268A2 (en) A method and system for network based automatic and interactive inspection of vehicles
KR102031946B1 (en) System and method for surveillance of underside of vehicles
US11943570B2 (en) Imaged target detection verification
US10712146B2 (en) Distance measuring system and method using thereof
EP3742330A1 (en) Automatic damage detection for cars
Tao et al. Smoky vehicle detection in surveillance video based on gray level co-occurrence matrix
US11875544B2 (en) Annotation of infrared images for machine learning using beamsplitter-based camera system and methods
KR102472960B1 (en) Managing system of parking and managing method thereof
KR102333490B1 (en) Heterogeneous image processing device
KR20160064060A (en) Security optical turnstile system
JPH0720065A (en) Device for inspecting foreign matter on color filter for appearance
KR20210013848A (en) Search detector for vehicle bottom

Legal Events

PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase. Free format text: ORIGINAL CODE: 0009012
AK: Designated contracting states. Kind code of ref document: A2. Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR
AX: Request for extension of the european patent. Extension state: AL BA HR MK RS
R17D: Deferred search report published (corrected). Effective date: 20081002
RIC1: Information provided on ipc code assigned before grant. Ipc: H04N 9/74 20060101ALI20081110BHEP. Ipc: G01M 17/00 20060101AFI20081110BHEP
17P: Request for examination filed. Effective date: 20090325
RBV: Designated contracting states (corrected). Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR
DAX: Request for extension of the european patent (deleted)
A4: Supplementary search report drawn up and despatched. Effective date: 20120702
RIC1: Information provided on ipc code assigned before grant. Ipc: G01M 17/00 20060101AFI20120626BHEP. Ipc: G06T 7/00 20060101ALI20120626BHEP. Ipc: G06T 3/40 20060101ALI20120626BHEP. Ipc: H04N 9/74 20060101ALI20120626BHEP
STAA: Information on the status of an ep patent application or granted ep patent. Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
18D: Application deemed to be withdrawn. Effective date: 20130130