WO2006059998A1 - Vehicle-undercarriage inspection system - Google Patents


Info

Publication number
WO2006059998A1
WO2006059998A1 (PCT/US2004/040131, US2004040131W)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
cameras
images
undercarriage
camera
Prior art date
Application number
PCT/US2004/040131
Other languages
French (fr)
Inventor
Roger Dale Hiatt
Carl Donald Busboom
Edward John Rowlance
Original Assignee
Advanced Detection Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Detection Technologies, Inc. filed Critical Advanced Detection Technologies, Inc.
Priority to PCT/US2004/040131
Publication of WO2006059998A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/2627Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect for providing spin image effect, 3D stop motion effect or temporal freeze effect
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Definitions

  • the invention relates generally to inspection of a vehicle's undercarriage, and more particularly, to a system for capturing a plurality of images that collectively span the entire width of the vehicle's undercarriage and optionally generating a composite image of the vehicle's undercarriage from the plurality of pictures for inspection purposes.
  • Passenger vehicles, large trucks, buses, and other such forms of transportation are often used to hide contraband that is to be smuggled beyond a checkpoint or inspection station.
  • the undercarriage of such vehicles can include a maze of frame members and other structural features that can be used to conceal the contraband from view, thereby enabling those wishing harm upon others or smuggling illegal contraband to accomplish their mission.
  • Physical inspection of the undercarriage of vehicles is impractical due to the vehicles' often low ground clearance.
  • the shutter on the still camera is triggered by a hardware device that determines and monitors the changing distance between the camera and an approaching vehicle.
  • the composite image generated by the automated system described above can be distorted due to limitations of the cameras disposed within the speed bump.
  • a wide variety of vehicles can be driven over the speed bump, each having a different ground clearance.
  • Sports cars are at one end of the ground-clearance spectrum and can have an undercarriage supported only 6 inches above the ground.
  • commercial trucks can pull trailers that have an undercarriage supported up to 40 inches above the ground.
  • the cameras of conventional systems rely on lenses to bend the light in a manner permitting the capture of images that span the entire width of the vehicle. The optical characteristics of such lenses typically distort the resulting images to ensure an acceptable angle of view. Distorted images make inspection of vehicle undercarriages and the identification of contraband difficult.
  • Adding to the distortion of images captured by a conventional automated vehicle-inspection system are variations in the speed and angle of approach of oncoming vehicles. Driving over the cameras in the speed bump at excess speed will result in a blurred image, and possibly a composite image missing portions of the undercarriage, making inspection impossible without making the driver repeat the image-capturing process. Likewise, an approaching vehicle that advances over the cameras in the speed bump without straddling the speed bump at least close to the center of the vehicle will result in a composite image that excludes portions of the vehicle's undercarriage near one of the vehicle's sides. Subsequent inspection of this image will not detect the presence of contraband at this location. Further, allowing vehicles to advance over the cameras in the speed bump differently each time will make comparison of the resulting composite image to previously captured and assembled composite images difficult.
  • the composite image should include the entire undercarriage of the vehicle, and should be generated in a substantially consistent manner each time a vehicle is inspected. Further, the vehicle-inspection system should permit remote inspection of vehicles to minimize the threat to inspectors.
  • the present invention achieves these and other objectives by providing a vehicle-inspection system for capturing video images of a vehicle's undercarriage as the vehicle approaches a secure location.
  • the vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance.
  • the vehicle-inspection system further comprises a computer-readable memory provided with means for electronically assembling images of the vehicle's undercarriage captured by each of the plurality of cameras in at least one of the first and second camera sets to form a widthwise image of a portion of the vehicle's undercarriage.
  • the present invention provides a vehicle-inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location.
  • the vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance.
  • the vehicle-inspection system further comprises a height sensor for sensing the ground clearance of the vehicle, and computer-readable logic for automatically selecting one of the first and second camera sets to capture images of the undercarriage of the vehicle based on the ground clearance measured by the height sensor without user intervention.
  • the present invention also provides a vehicle-inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location.
  • the vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance.
  • a guidance device is provided to display visual instructions that instruct a driver of the vehicle to alter at least one of the vehicle's speed and approach angle relative to the first and second camera sets.
  • Figure 1 is a perspective view of a vehicle-undercarriage imaging system in accordance with one embodiment of the present invention
  • Figure 2 is an overhead view of an arrangement of cameras within a protective housing in accordance with an embodiment of the present invention
  • Figure 3 is a partial perspective view of a camera arrangement within a protective housing in accordance with an embodiment of the present invention
  • Figure 4a is a cutaway view of an arrangement of low cameras within a protective housing in accordance with an embodiment of the present invention
  • Figure 4b is a cutaway view of an arrangement of high cameras within a protective housing in accordance with an embodiment of the present invention
  • Figure 5a is a side view of a vehicle approaching a protective housing enclosing cameras in accordance with an embodiment of the present invention
  • Figure 6 is a perspective view of a vehicle approaching an inspection system in accordance with an embodiment of the present invention.
  • Figure 7 illustrates a vehicle-guidance system for displaying instructions to a driver of an approaching vehicle in accordance with an embodiment of the present invention
  • Figure 8 is a view of a display device displaying images of a vehicle's undercarriage in quadrants in accordance with an embodiment of the present invention
  • Figure 9 is a view of a display device displaying images of a vehicle's undercarriage in columns in accordance with an embodiment of the present invention.
  • Figure 10 is a side-by-side comparison of two composite images of a vehicle's undercarriage, and a difference image generated from such comparison.
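The comparison shown in Figure 10 can be sketched as a pixel-wise absolute difference: subtracting a newly captured composite from an archived one highlights regions of the undercarriage that have changed. The snippet below is an illustrative sketch, not the patent's implementation; grayscale images are modeled as 2-D lists of 0-255 intensities.

```python
# Sketch of the difference-image comparison of Figure 10: a pixel-wise
# absolute difference between an archived composite image and a newly
# captured one highlights undercarriage regions that have changed.

def difference_image(reference, current):
    """Per-pixel absolute difference of two same-sized images."""
    if len(reference) != len(current):
        raise ValueError("images must have the same height")
    return [
        [abs(r - c) for r, c in zip(ref_row, cur_row)]
        for ref_row, cur_row in zip(reference, current)
    ]

def changed_pixels(diff, threshold=30):
    """Count pixels whose change exceeds a noise threshold."""
    return sum(1 for row in diff for v in row if v > threshold)

archived = [[100, 100, 100], [100, 100, 100]]
fresh = [[100, 100, 100], [100, 200, 100]]  # one bright anomaly
diff = difference_image(archived, fresh)
print(diff)                  # [[0, 0, 0], [0, 100, 0]]
print(changed_pixels(diff))  # 1
```

In practice a threshold is needed because illumination and alignment never repeat exactly between two passes over the cameras.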
  • Figure 1 illustrates an embodiment of a vehicle-inspection system 10 of the present invention for capturing images of a vehicle's undercarriage 12 (Figure 5a) as the vehicle 14 approaches a secure location.
  • the system 10 includes a protective housing 18, a first camera set 22, best shown in Figure 2, comprising a plurality of cameras 26a-d arranged along axis 99a, and a second camera set 28 that also comprises a plurality of cameras 32a-d arranged along axis 99b.
  • the cameras 26a-d, 32a-d that make up the first and second camera sets 22, 28 are arranged in each set to capture images that collectively span at least a width of a vehicle's undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
  • the cameras 26a-d belonging to the first camera set 22 capture images of a low-vehicle undercarriage.
  • the cameras 32a-d belonging to the second camera set 28 capture images of a high-vehicle undercarriage.
  • the first camera set 22 will be referred to herein as the low camera set 22 and the second camera set 28 will be referred to as the high camera set 28.
  • cameras belonging to these sets will be referred to as low cameras 26a-d and high cameras 32a-d, respectively.
  • the cameras 26a-d, 32a-d of the present invention are conventional video cameras.
  • An example of a suitable camera is an NTSC-compliant video camera that captures about 30 frames per second.
  • video cameras of different formats that operate at different film speeds are also considered within the scope of the present invention.
  • film speeds as used herein refer to digital-recording rates equivalent to the specified film speeds.
  • the distance between the ground 37 (Figure 5a) and a vehicle's undercarriage 12 is referred to herein as the ground clearance D of that vehicle 14.
  • a low-vehicle undercarriage 12 or low ground clearance D is an undercarriage 12 that rests a short distance from the ground 37, typically within a range from about 4 inches to about 26 inches from the ground 37.
  • a high-vehicle undercarriage 12 or high ground clearance D is an undercarriage 12 that rests a greater distance from the ground 37 than the low ground clearance D, and is typically within a range from about 26 inches to about 50 inches from the ground 37.
  • Images of vehicle undercarriages 12 having a variety of different ground clearances D can be captured by the cameras 26a-d, 32a-d of the present invention.
  • high cameras 32a-d can be adapted within the scope of the present invention to capture images of a vehicle 14 having a ground clearance D of 50+ inches without significant distortion.
  • the low cameras 26a-d include a wide angle of view α suitable to capture images of the undercarriage 12 of a vehicle 14 having a low ground clearance D without excluding significant portions of the vehicle's undercarriage 12 along a transverse axis 36 (Figure 5b) that extends across the width W of the vehicle 14.
  • Wide-angle lenses 39a can be provided to maximize the angle of view α of each low camera 26a-d, thereby requiring a minimal number of low cameras 26a-d to ensure that the images captured by the low cameras 26a-d collectively span at least the entire width W of a vehicle's undercarriage 12 without excluding a portion of the vehicle's undercarriage 12.
  • the term wide-angle lens 39a refers to any lens having a focal length less than the standard. Thus, for example, for a 3.5 mm format, a lens having a focal length less than 3.5 mm is considered to be a wide-angle lens 39a.
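The link between focal length and angle of view underlying this definition can be sketched numerically. The 4.8 mm sensor width and the sample focal lengths below are illustrative assumptions, not values from the patent:

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees:
    theta = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal lengths yield wider angles of view, which is why the
# low cameras use lenses with focal lengths below the format's
# standard to cover a full vehicle width at small ground clearances.
for f in (2.0, 3.5, 8.0):  # illustrative focal lengths, mm
    print(f, round(angle_of_view(4.8, f), 1))
```

The exact figures depend on the camera's sensor format; the trend (wider view from shorter focal length) is what matters to the lens-selection argument here.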
  • the minimum spacing between adjacent low cameras 26a-d depends at least on the angle of view α of the lens 39a provided thereto, and the maximum vehicle width W and minimum vehicle ground clearance D_min that can be imaged by the low cameras 26a-d of the present invention.
  • capturing images with the low cameras 26a-d of a vehicle 14 having a ground clearance D1 that is below a minimum allowable ground clearance D_min will result in images that do not collectively span at least the entire width W of the vehicle's undercarriage 12 along the transverse axis 36 without omitting portions of the vehicle's undercarriage 12.
  • portions of the vehicle's undercarriage 12 will fall within gaps 42 between the angles of view α of two adjacent low cameras 26a-d, and thus will be absent from a widthwise image 51 of the vehicle's undercarriage 12 generated by combining the individual images 48 (Figure 5b) or by combining lengthwise images 53 (Figure 5b) captured by each of the low cameras 26a-d.
  • Spacing the low cameras 26a-d exactly the maximum allowable distance apart for a given minimum ground clearance D_min will cause the angles of view α of adjacent low cameras 26a-d to form an intersection 45a at the vehicle's undercarriage 12, resulting in individual images 48 taken by the low cameras 26a-d that can be assembled side by side along the transverse axis 36 without overlap to generate a composite image 52 spanning the width of the vehicle undercarriage 12.
  • Spacing the low cameras 26a-d the maximum allowable distance apart allows a minimum number of low cameras 26a-d to be included in the protective housing 18 to capture images 48 that collectively span the entire width W of the vehicle's undercarriage 12 without omitting portions therefrom.
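The spacing rule described above reduces to a small calculation: adjacent fields of view meet exactly at the lowest undercarriage when the spacing equals 2 · D_min · tan(α/2). The 90-degree angle of view, 6-inch minimum clearance, and 80-inch width below are hypothetical figures chosen only to illustrate the geometry:

```python
import math

def max_camera_spacing(angle_of_view_deg, d_min):
    """Maximum spacing so adjacent fields of view intersect exactly at
    the lowest undercarriage height: s = 2 * d_min * tan(alpha / 2)."""
    return 2 * d_min * math.tan(math.radians(angle_of_view_deg) / 2)

def cameras_required(width, angle_of_view_deg, d_min):
    """Minimum camera count whose views collectively span `width`."""
    return math.ceil(width / max_camera_spacing(angle_of_view_deg, d_min))

# Illustrative: a 90-degree angle of view and a 6-inch minimum
# clearance give a 12-inch maximum spacing, so an 80-inch-wide
# undercarriage needs at least 7 low cameras.
print(round(max_camera_spacing(90, 6), 6))  # 12.0
print(cameras_required(80, 90, 6))          # 7
```

Spacing the cameras any closer than this maximum produces the overlapping images discussed next.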
  • the individual images 48 captured by the low cameras 26a-d will include overlapping portions 54.
  • Such individual images 48 captured by adjacent low cameras 26a-d will include common features that appear in the images captured by the adjacent low cameras 26a-d.
  • These overlapping individual images 48 can also be combined electronically according to instructions embodied by computer-readable logic to form a composite image 52 that spans the entire width of undercarriage 12 as described in detail below.
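A minimal sketch of this electronic combination follows, assuming a fixed, known overlap width between adjacent cameras. The patent describes matching common features; a fixed-overlap crop, valid because the camera geometry is fixed, is the simplest stand-in:

```python
# Combine overlapping per-camera images into one widthwise composite
# by dropping the known overlapping columns from each image after the
# first. Images are same-height 2-D lists of pixel values.

def stitch_row(images, overlap):
    """Concatenate images side by side, trimming `overlap` columns
    from the left edge of every image after the first."""
    composite = [row[:] for row in images[0]]
    for img in images[1:]:
        for row, add in zip(composite, img):
            row.extend(add[overlap:])
    return composite

cam_a = [[1, 2, 3], [4, 5, 6]]  # columns 0-2
cam_b = [[3, 7, 8], [6, 9, 0]]  # first column duplicates cam_a's last
print(stitch_row([cam_a, cam_b], overlap=1))
# [[1, 2, 3, 7, 8], [4, 5, 6, 9, 0]]
```

Feature-based stitching would instead locate the shared column by correlating the overlapping regions, which tolerates small alignment errors.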
  • Wide-angle lenses 39a tend to distort objects in captured images by bending the objects located near an edge of the images 48 and causing some upright features to appear as if they are leaning. Such distortion is generally attributed to the extent to which light must be bent to allow the wide-angle lens 39a to observe all objects within the expanded angle of view α. Although a trivial amount of distortion is acceptable, the images 48 captured by the low cameras 26a-d in the low-camera set 22 can be optimized by selecting a lens 39a having a suitable angle of view α that captures images 48 of acceptable quality and implementing an appropriate number of cameras greater than the theoretical minimum, as is known in the art.
  • the number of cameras must be sufficient for the lens 39a having the selected angle of view α to capture images 48 that collectively span the entire width W of an undercarriage 12 of a vehicle 14 that is the lowest and widest possible.
  • the number of low cameras 26a-d chosen should accommodate the worst-case scenario of vehicle ground clearance D and width W. Imaging vehicles 14 having a higher ground clearance D or a narrower width W than the worst-case scenario will then produce overlap between the individual images 48 taken by each low camera 26a-d, and imaging beyond the terminal edges of the vehicle's undercarriage 12.
  • the upper-limit ground clearance for the low cameras 26a-d is determined to be the height above the low cameras 26a-d at which significant distortion is introduced into the individual images 48. Although some distortion is acceptable, when the ground clearance D reaches a height when distortion of the individual images 48 makes viewing the features depicted therein difficult, that ground clearance exceeds the upper-limit ground clearance for the low cameras 26a-d.
  • Figure 4b schematically illustrates the high cameras 32a-d, or cameras belonging to the high camera set 28, that include a narrow-angle lens 39b having a narrow angle of view β that is narrower than the wide angle of view α exhibited by the low cameras 26a-d with the wide-angle lenses 39a, thus giving the high cameras 32a-d the ability to magnify and capture images 48 of objects a greater distance away than the low cameras 26a-d without significant distortion.
  • through the wide-angle lenses 39a provided to the low cameras 26a-d, objects that are located beyond the upper-limit ground clearance for the particular low cameras 26a-d appear to be located further from the low cameras 26a-d than they actually are.
  • the high cameras 32a-d are equipped to observe objects within an angle of view β that is narrower than the wide angle of view α observed by the low cameras 26a-d, but with less distortion than that observed viewing the same objects at the same distance through the low cameras 26a-d. Accordingly, the high cameras 32a-d can capture images 48 of an undercarriage 12 of a vehicle 14 having a high ground clearance D without significant distortion, which would otherwise be observed in the same images captured by the low cameras 26a-d. Further, the narrow angle of view β exhibited by the high cameras 32a-d will capture only portions of an undercarriage 12 of a low-ground-clearance vehicle.
  • a suitable narrow-angle lens 39b to be provided to the high cameras 32a-d can be selected as a function of the number of high cameras 32a-d to be used, the range of ground clearances D to be imaged, and the maximum width W of a vehicle undercarriage 12 to be imaged.
  • High cameras 32a-d equipped with a suitable narrow-angle lens 39b can: capture images 48 of vehicle undercarriages 12 falling within the desired range of high ground clearances without significant distortion, capture images 48 of vehicle undercarriages 12 within the range of high ground clearances from each high camera 32a-d that at least partially overlap with images 48 captured by adjacent high cameras 32a-d, and capture images 48 from the high cameras 32a-d that collectively span at least the entire width W of a vehicle undercarriage 12 having the lowest ground clearance in the range of high ground clearances without omitting portions along axis 36 of the vehicle undercarriage 12. Regardless of the particular angle of view of the lenses provided to the low and high cameras 26a-d, 32a-d, the high cameras 32a-d can capture images 48 of objects within a narrower angle of view β than the low cameras 26a-d.
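The lens-selection constraint for the high cameras can be sketched as the inverse of the spacing calculation: given a fixed camera spacing, the narrowest usable angle of view is the one at which adjacent views still just overlap at the lowest high clearance. The 12-inch spacing and the 26- and 6-inch clearances below are illustrative assumptions:

```python
import math

def required_angle_of_view(spacing, d_low):
    """Smallest angle of view (degrees) at which cameras `spacing`
    apart still overlap at clearance `d_low`:
    beta = 2 * atan(spacing / (2 * d_low))."""
    return math.degrees(2 * math.atan(spacing / (2 * d_low)))

# Cameras 12 inches apart imaging clearances of 26 inches and up can
# use a roughly 26-degree lens, far narrower (hence less distorting)
# than the ~90-degree lens a 6-inch clearance would demand.
print(round(required_angle_of_view(12, 26), 1))
print(round(required_angle_of_view(12, 6), 1))
```

This is why the high camera set can trade angle of view for magnification without opening gaps in coverage.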
  • the minimum spacing between adjacent high cameras 32a-d depends at least on the angle of view β of the lenses 39b, the maximum vehicle width W, and the minimum vehicle ground clearance D that can be imaged by the high cameras 32a-d without omitting portions of the vehicle's undercarriage 12.
  • portions of the vehicle's undercarriage 12 will fall within gaps 42b between the angles of view β of two adjacent high cameras 32a-d, and thus will be absent from a composite image 52 spanning at least the width W of the vehicle's undercarriage 12 generated by combining the individual images 48 captured by each of the high cameras 32a-d.
  • the composite image 52 of a vehicle undercarriage 12 having a ground clearance of D1 will appear incomplete and omit one or more portions of the vehicle's undercarriage along axis 36.
  • the individual images 48 captured by the high cameras 32a-d will include overlapping portions 54b. These overlapping individual images 48 can also be combined electronically according to instructions embodied by computer-readable logic of the present invention to form a complete composite image 52 (Figure 5b) spanning the width of the vehicle's undercarriage 12 without reproducing substantial portions of the vehicle's undercarriage along longitudinal axis 86.
  • either the low camera set 22 or the high camera set 28, or both, can comprise one or more cameras to suitably capture images of a vehicle undercarriage 12 instead of the plurality of cameras described above.
  • an embodiment of the present invention that includes a single camera in each of the low and high camera sets 22, 28 can optionally further include a mirror (not shown) or other reflective device spaced a distance apart from the camera in each camera set 22, 28.
  • the mirror is positioned to reflect an image of the vehicle's undercarriage 12 toward the camera in each camera set 22, 28.
  • each camera can effectively capture a single image that spans the entire width of the vehicle 14, as reflected by the mirror. This allows each camera to capture a series of images as portions of the vehicle undercarriage are reflected by the mirror.
  • one or more mirrors can be positioned a suitable distance away from two or more cameras in each camera set 22, 28 to capture individual images of portions of the vehicle undercarriage 12 that can be electronically combined to form a composite image of the vehicle's undercarriage as described below.
  • Alternate embodiments of the present invention include the use of a single camera set comprising one or more cameras for capturing images of both low and high vehicle undercarriages.
  • one or more mirrors or other reflective objects that can direct an image of the vehicle undercarriage toward the camera(s) can optionally be placed at or near ground level separated a distance from the one or more cameras.
  • the camera(s) capture images of the vehicle undercarriage directed by the mirror(s). In this manner it is possible to minimize the number of cameras used to capture images that span at least the entire width of vehicles to be imaged.
  • a specific embodiment includes the use of a single camera to capture images that span at least the entire width of the undercarriage of vehicles that are to be inspected with the present invention.
  • the widthwise images captured in this manner can be electronically combined as described below to form a composite image of the entire vehicle undercarriage.
  • an embodiment of the vehicle inspection system 10 of the present invention further includes a height sensor 62 disposed within the protective housing 18 for measuring the ground clearance D of the vehicle 14 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
  • the height sensor 62 can measure the ground clearance D of an approaching vehicle by transmitting an ultrasonic signal, radio-frequency ("RF") signal, microwave signal, ultraviolet light, infrared light, laser light, or any other type of signal (collectively referred to herein as a "wireless measuring signal"), and receiving a data signal in response to the interaction of the wireless measuring signal with a portion of the vehicle 14.
  • the ground clearance D of the vehicle 14 can be determined by a controller 65 based on the time it takes for the data signal to be received by the height sensor 62 in response to the transmission of the wireless measuring signal, based on modulation of the wireless measuring signal, a combination thereof, and any other known method for measuring a distance with a wireless measuring signal.
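The time-based variant of this measurement is ordinary time-of-flight arithmetic: the round-trip time of the reflected pulse, multiplied by the signal speed and halved, gives the sensor-to-undercarriage distance. A sketch for a laser-pulse sensor (the specific sensor interface is an assumption):

```python
# Time-of-flight distance calculation the controller 65 could apply
# to a laser-pulse height sensor: distance = (signal speed * round-
# trip time) / 2, since the pulse travels up and back.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def ground_clearance_m(round_trip_seconds):
    """Sensor-to-undercarriage distance from round-trip pulse time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~5.93 nanoseconds indicates a clearance of
# roughly 0.889 m, i.e. about the 35 inches of the example below.
print(round(ground_clearance_m(5.93e-9), 3))
```

An ultrasonic sensor uses the same formula with the speed of sound, which makes the timing far less demanding.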
  • the controller 65 is provided with computer-readable logic stored in a computer-accessible memory for determining the ground clearance D of the vehicle 14 and selecting the appropriate camera set 22, 28 to use for imaging the vehicle's undercarriage 12 based on the measured ground clearance D.
  • the controller 65, which operates according to the computer-readable instructions, automatically selects the appropriate camera set 22, 28 for imaging the vehicle's undercarriage 12 without intervention from an operator of the vehicle-inspection system 10.
  • the controller 65 can be physically located within the protective housing 18, within a user interface (Figure 1), or at any other location where electronic data can be transmitted.
  • the height sensor 62 can transmit the wireless measuring signal in the form of a pulse of laser light and await receipt of the data signal, which in this case can be reflected light, following interaction of the wireless measuring signal with a portion of the vehicle 14.
  • the controller 65 can manipulate the information gathered by receiving the data signal to determine that the vehicle 14 to be inspected has a ground clearance D of approximately 35 inches. Based on this information, the controller 65 selects the high cameras 32a-d of the high camera 28 set to capture images 48 of the vehicle's undercarriage 12 without operator intervention.
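The controller's automatic selection reduces to a clearance-threshold decision. The sketch below uses the low/high ranges stated earlier (about 4-26 inches low, 26-50 inches high); the exact 26-inch switching value is inferred from those ranges, not fixed by the patent:

```python
# Minimal sketch of the controller 65's automatic camera-set
# selection from a measured ground clearance, in inches. The 26-inch
# threshold and 4/50-inch limits are taken from the clearance ranges
# described in the text; real logic might also handle sensor noise.

LOW_MAX_CLEARANCE_IN = 26

def select_camera_set(ground_clearance_in):
    """Pick the camera set for a measured ground clearance."""
    if ground_clearance_in < 4 or ground_clearance_in > 50:
        return "out-of-range"  # outside both imaging ranges
    if ground_clearance_in <= LOW_MAX_CLEARANCE_IN:
        return "low"   # wide-angle cameras 26a-d
    return "high"      # narrow-angle cameras 32a-d

print(select_camera_set(6))   # low
print(select_camera_set(35))  # high, matching the 35-inch example
```

No operator intervention is needed because the decision depends only on the height sensor's output.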
  • the height sensor 62 can be integrated within the protective housing 18 as shown in Figures 2 and 6, or located at a position remote from the protective housing 18 as shown schematically in Figure 5.
  • the height sensor 62 can optionally be located upstream within the flow of traffic as vehicles 14 approach the camera sets 22, 28 to be imaged.
  • the ground clearance D of oncoming vehicles 14 will be measured before the images 48 of the vehicles 14 are captured.
  • the sequence of the vehicles 14 that have had their ground clearance D measured is maintained, and the appropriate camera set 22, 28 is selected in that order by the controller 65 to allow the imaging process to begin.
  • the protective housing 18 of the present invention can take any form that can withstand being run over by the class of vehicles 14 to be inspected. As shown in Figures 1, 2 and 6, the protective housing 18 resembles a conventional speed bump that includes a generally horizontal upper surface 68 and two inclined surfaces 71 that extend between the ground on which the speed bump rests and the upper surface 68.
  • the upper surface 68 includes a plurality of apertures 74 through which the cameras 26a-d, 32a-d within the speed bump can observe the vehicle undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
  • the apertures 74 are suitably sized to create a clear line of sight between the cameras 26a-d, 32a-d in the speed bump and the vehicle 14 advancing over the speed bump.
  • An alternate embodiment of the present invention includes a protective housing 18 that has a generally U-shaped cross section as shown in Figure 3. According to this embodiment, the protective housing 18 is recessed into the ground similar to a trench drain.
  • a metallic grate 77 that includes apertures 79 arranged similar to those in the horizontal upper portion 68 of the speed bump conceals portions of the housing 18 without interfering with a line of sight extending between the cameras 26a-d, 32a-d and a vehicle 14 advancing over the cameras 26a-d, 32a-d.
  • the metallic grate 77 can be installed on the protective housing 18 to be generally flush with the ground, thereby minimizing the impact experienced by occupants of a vehicle 14 being driven over the grate 77.
  • Another embodiment of the present invention includes a low-profile protective housing that can comprise the ground surface itself. According to this embodiment, suitably sized portions of the roadway are removed and the camera sets 22, 28 are inserted directly into the holes. A channel such as a saw cut in the roadway surface can receive the wiring operatively connecting the camera sets 22, 28 and the controller 65. Thus, a semi-permanent installation can be created without using a trench-drain housing as in the previous embodiment.
  • the protective housing 18 can optionally be provided with an illumination device 82, shown in Figures 4a and 4b, for illuminating the undercarriage 12 of a vehicle 14 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
  • suitable illumination devices 82 include incandescent lamps, light-emitting diodes, fluorescent lamps, and any other device that can convert electrical energy into visible light.
  • One or more of the illumination devices 82 can be installed adjacent to each camera 26a-d, 32a-d disposed within the protective housing 18. By providing one or more illumination devices 82 adjacent to each camera 26a-d, 32a-d, a sufficient amount of light can be directed onto the vehicle undercarriage 12 as the imaging process is conducted.
  • Video images captured according to the present invention are actually a series of individual images 48 that are taken at a given frame rate and then played back at that same rate to simulate continuous movement.
  • the rate of frame capture is generally constant based on time, regardless of the speed and position of the vehicle advancing over the cameras 26a-d, 32a-d.
  • individual images 48 that will form video images or a motion picture are captured of the underside 12 of the vehicle 14 at approximately 30 frames per second.
  • an individual image 48 or frame of the underside 12 of the vehicle 14 is captured 30 times a second, regardless of the position of the vehicle 14 relative to the cameras 26a-d, 32a-d, the speed of the vehicle advancing over the cameras 26a-d, 32a-d, and so on.
  • Alternate embodiments include a speed-sensing device (not shown) to monitor the speed of a vehicle as it advances over the present invention.
  • the speed-sensing device measures the velocity of the vehicle during the imaging process, including any changes in velocity, and transmits a signal that causes an adjustment of the rate at which the individual images captured by the cameras are captured.
  • the rate at which individual images are captured is increased as the vehicle's velocity increases, while the image-capture rate decreases as the vehicle's velocity decreases. Adjusting the image-capture rate in proportion to vehicle velocity can minimize the amount of overlap between sequential individual images, discussed below.
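As a sketch of this speed-proportional adjustment (the patent does not specify an algorithm, so the function and parameter names below are hypothetical), the capture rate can be scaled linearly with the measured velocity and clamped to the range the cameras support:

```python
def capture_rate(base_rate_fps, base_speed, measured_speed,
                 min_rate_fps=5.0, max_rate_fps=60.0):
    """Scale the image-capture rate in proportion to vehicle speed.

    At the nominal approach speed the cameras run at the base rate
    (e.g. 30 fps); a faster vehicle raises the rate so roughly the
    same distance of undercarriage passes between frames, and a
    slower vehicle lowers it.  The result is clamped to the rates
    the cameras can actually deliver.
    """
    rate = base_rate_fps * (measured_speed / base_speed)
    return max(min_rate_fps, min(max_rate_fps, rate))
```

For example, a vehicle moving at half the nominal speed would be imaged at 15 frames per second instead of 30, keeping the overlap between consecutive frames roughly constant.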
  • With the four cameras 26a-d, 32a-d in each camera set 22, 28, such video or motion pictures are captured for each active camera 26a-d, 32a-d in the camera sets 22, 28.
  • the four cameras 26a-d, 32a-d have an overlapping field of view with each other and the individual images 48 can be replayed to display the video or motion picture taken as the vehicle 14 passes over the cameras 26a-d, 32a-d.
  • the image-capturing process is stopped, and the computer-readable logic begins the analysis and stitching of the individual images 48 to form a composite image 52 of the entire vehicle undercarriage 12.
  • the composite image 52 is formed by electronically combining or stitching small pieces of each individual image 48 or frame without reproducing substantial portions of the common image features of each frame along longitudinal axis 86 or transverse axis 36. Since the individual images are captured only 1/30th of a second apart, the portion of the individual images 48 combined to form a portion of the composite image must represent the portion of the vehicle undercarriage that advanced over the cameras 26a-d, 32a-d since the immediately preceding individual image 48 was captured.
  • the electronic combination or stitching of individual images taken by a camera 26a-d, 32a-d along longitudinal axis 86 for producing a composite image 52 proceeds, according to one embodiment, as follows.
  • the captured individual images 48 are each individually analyzed frame-by-frame by a control unit acting as instructed by computer-readable logic.
  • the current frame being analyzed is referred to herein as frame X, where X is an index into an array of frames that make up a video.
  • each individual image is enhanced by applying an algorithm that minimizes distortion from each image based on the geometry of the camera that captured the particular image, and how the respective camera is mounted.
  • the high vehicle cameras 32a-d point generally straight up at a perpendicular angle relative to the ground 37, but the resulting images exhibit spherical distortions because of the wide-angle lenses 39a employed.
  • the image can be enhanced to remove most of the spherical distortion, making it easily viewed by the operator.
  • low cameras 26a-d are mounted at a compound angle with respect to the vehicle undercarriage 12. So in addition to the spherical distortion that the high cameras 32a-d exhibit, the low cameras 26a-d are also subject to a "trapezoidal" distortion. A different mathematical algorithm is employed (in addition to the spherical algorithm) to remove this distortion and improve the image for operator viewing.
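The two corrections can be sketched as coordinate remappings. The first-order radial model below is a common approximation for the spherical (barrel) distortion of a wide-angle lens; the coefficient `k1` and the principal point are hypothetical calibration values, and the patent does not disclose its actual algorithm. A full implementation would add a projective (homography) warp for the low cameras' trapezoidal distortion:

```python
def undistort_point(xd, yd, cx, cy, k1):
    """First-order radial correction for a wide-angle lens.

    (xd, yd) is a distorted pixel, (cx, cy) the optical center, and
    k1 a small calibration coefficient.  Each point is pushed away
    from the center in proportion to its squared radius, which
    approximately undoes barrel distortion.  The trapezoidal
    distortion of the angled low cameras would need an additional
    perspective transform, not shown here.
    """
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy          # squared distance from center
    scale = 1.0 + k1 * r2           # first-order radial term
    return cx + dx * scale, cy + dy * scale
```

A point at the optical center is unmoved, while points near the image edge are displaced outward the most, flattening the bulged appearance of the raw frame.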
  • Frame X is compared with frame X-1, which is the individual image 48 or frame captured by the same camera 26a-d, 32a-d immediately before frame X. Areas of high contrast (which tend to represent a definable object) are identified in frame X by instructions embodied by computer-readable logic.
  • Once an area of high contrast is identified in frame X, the same area of high contrast is identified in frame X-1.
  • the difference in position of the high contrast area between frame X and frame X-1 is the distance along longitudinal axis 86 the vehicle has traveled in 1/30th of a second.
  • this distance the vehicle 14 has traveled is measured in a number of pixels (the difference in position of one pixel of the high contrast area between frame X and X-1).
  • This number of pixels, referred to herein as the pixel value, defines the amount of this frame that will be used to form a lengthwise image 51 of the entire length of the vehicle undercarriage 12 along longitudinal axis 86.
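One simple way to obtain such a pixel value (a sketch only; the patent's logic matches high-contrast areas, while this stand-in collapses each frame to a row-intensity profile) is to find the shift that best aligns consecutive frames:

```python
def pixel_value(prev_frame, curr_frame, max_shift=8):
    """Estimate how many pixel rows the undercarriage advanced
    between frame X-1 (prev_frame) and frame X (curr_frame).

    Frames are lists of rows of gray-scale values, with rows running
    along the longitudinal axis.  Each frame is collapsed to a
    per-row intensity profile, and the shift giving the lowest mean
    absolute profile difference over the overlap is taken as the
    distance traveled, in pixels.
    """
    p = [sum(row) for row in prev_frame]   # row profile of frame X-1
    c = [sum(row) for row in curr_frame]   # row profile of frame X
    best_shift, best_err = 0, float("inf")
    for s in range(max_shift + 1):
        overlap = len(c) - s
        err = sum(abs(p[i + s] - c[i]) for i in range(overlap)) / overlap
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

A bright feature sitting at row 10 in frame X-1 and at row 7 in frame X yields a pixel value of 3.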
  • This process is repeated for all individual images captured by the cameras 26a-d, 32a-d located internally along axes 99a and 99b, inward from the cameras 26a-d, 32a-d located adjacent to the terminus of the protective housing that capture images at the terminus of the width W of the vehicle.
  • the pixel value determined for contiguous frames captured by each internal camera 26a-d, 32a-d according to the instructions embodied by the computer-readable logic are compared and normalized to determine a normalized pixel value.
  • the lengthwise images 53 are formed by considering each individual image 48 or frame captured by each camera 26a-d, 32a-d and taking a slice out of the individual images 48 or frames that correspond to the normalized pixel value previously calculated. This slice is placed in an image buffer, the next slice is placed adjacent to the preceding slice in a longitudinal manner, and so on until a lengthwise image 53 of the entire length of the vehicle undercarriage 12 along longitudinal axis 86 for each camera 26a-d, 32a-d is created. When displayed side by side, the composite images of the vehicle's undercarriage 12 along longitudinal axis 86 form the composite image of the entire vehicle undercarriage 12.
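The slicing-and-buffering step above can be sketched as follows (the patent gives no implementation; frames are again modeled as lists of rows, and only the leading rows that newly entered each frame's view are kept):

```python
def stitch_lengthwise(frames, normalized_pixel_value):
    """Build a lengthwise strip for one camera.

    From each frame, only the first `normalized_pixel_value` rows --
    the portion of the undercarriage that newly advanced into view
    since the preceding frame -- are appended to the image buffer.
    Concatenating the slices yields one continuous strip of the
    undercarriage along the longitudinal axis.
    """
    buffer = []
    for frame in frames:
        buffer.extend(frame[:normalized_pixel_value])  # take the slice
    return buffer
```

The per-camera strips, placed side by side, then form the composite image of the full undercarriage width.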
  • one of the cameras next to it may provide a view past the obstruction in an overlapping portion of an individual image 48 because of the different position of the camera with respect to the exhaust pipe.
  • the captured images 48 can be displayed sequentially as a motion picture video by a display device 85, and the captured images substantially simultaneously stored in a computer-readable memory 87.
  • the computer-readable memory 87 can be installed within a user interface 92 such as that illustrated in Figure 1.
  • the user interface 92, the display device 85 or both can optionally be secured to the interior of a rugged, waterproof carrying case 95 for simple transportation and for protection against possible contamination.
  • a power adaptor (not shown) can convert electrical energy drawn from a conventional automobile battery, alternator, generator, and any other portable source of electrical energy into a form suitable for powering the vehicle-inspection system 10.
  • Such power adapters are known and can include an inverter for converting direct-current electrical energy into alternating-current electrical energy.
  • the display device 85 can be any electronic display such as a CRT monitor, television screen, flat-panel display, and the like. As shown in Figure 8, the captured video images can be displayed in quadrants 97a-d by the display device 85. Each quadrant 97a-d displays the images 48 captured by a respective camera 26a-d, 32a-d within a selected camera set 22, 28.
  • the quadrants shown in Figure 8 are arranged as a 2 x 2 matrix; however, alternate embodiments include displaying the captured images 48 from each camera 26a-d, 32a-d aligned side-by-side in strips 98a-d to display a video that resembles the entire width W of the vehicle's undercarriage 12 as shown in Figure 9.
  • the images displayed by the display device 85 as shown in Figure 9 can be a sequential display of the widthwise images 51 (Figure 5b) from time t, the time at which the image-capture process commenced, to time t+m, the time at which the image-capture process is completed.
  • the sequential display of widthwise images 51 appears as a motion-picture video of the entire width W of the vehicle's undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
  • the ability of the vehicle-inspection system 10 to accurately capture images 48 of the entire vehicle undercarriage 12 is dependent upon at least the direction and speed the vehicle 14 is driven relative to the cameras 26a-d, 32a-d during the imaging process.
  • the longitudinal axis 86 of the vehicle 14 is perpendicular to the axes 99a, 99b along which the cameras are arranged as shown in Figure 2; however, minor deviations from the ideal vehicle approach are permissible so long as significant portions of the vehicle's undercarriage 12 do not fall outside the angle of view ⁇ , ⁇ of all cameras 26a-d, 32a-d. Acceptable deviations in the vehicle's approach would not result in the omission of portions of the vehicle's undercarriage 12 from the captured images 48.
  • the present invention can optionally include a guidance device 102 ( Figures 6 and 7) that conveys visual instructions to the driver of an oncoming vehicle 14.
  • Figure 7 shows an example of a guidance device 102, which can be an electronic display located anywhere within view of the driver of a vehicle 14 as the vehicle 14 approaches the cameras 26a-d, 32a-d.
  • the visual instructions conveyed by the guidance device 102 can include a network of color-coded lights, arrows, text messages, and any other symbol that can instruct the driver how to alter at least one of the speed and alignment of the vehicle 14 relative to the cameras 26a-d, 32a-d.
  • Sensors can be located adjacent to the cameras 26a-d, 32a-d to detect and monitor the speed of the vehicle 14 as it approaches and advances over the cameras 26a-d, 32a-d.
  • Any type of a conventional speed measuring device such as an ultrasonic or laser-based speed sensor can be positioned directly in front of the ideal path to be traveled by the vehicle 14 as it advances over the cameras 26a-d, 32a-d, or at any other location where the speed of the vehicle 14 can be measured as it approaches and advances over the cameras 26a-d, 32a-d.
  • position sensors can optionally be provided adjacent to terminal ends 105a, 105b of the camera sets 22, 28, or at any other location, to monitor the position of the oncoming vehicle's tires.
  • the position sensors can be positioned to detect when the oncoming vehicle's tires reach an unacceptable position relative to the cameras that will result in the capture of an incomplete video image of the vehicle's undercarriage.
  • a signal is transmitted to the controller 65, which is operably connected to receive signals from the speed and position sensors.
  • the controller 65 transmits instructions that cause a suitable corrective measure 103 to be displayed to the driver to aid the driver in properly advancing the vehicle over the cameras 26a-d, 32a-d.
  • Although the controller 65 is shown disposed within the protective housing 18 in Figures 4a and 4b, the location of the controller 65 can vary without departing from the scope of the present invention.
  • the controller can be disposed within the user interface 92 and operably connected to receive and/or transmit signals to and from cameras 26a-d, 32a-d and sensors of the present invention.
  • the user interface 92 can be operatively connected to control operation of the cameras 26a-d, 32a-d, the guidance device 102, the illumination device 82, the display device 85, and other features of the vehicle-inspection system 10.
  • the communication link between the user interface 92 and one or more of the cameras 26a-d, 32a-d, controller, illumination device 82, height sensor 62, guidance device 102, and any other feature can be established by extending a cable 107 between the user interface 92 and these features, by establishing a wireless communication link therebetween with transceivers 104 ( Figure 6), or any combination thereof.
  • Suitable wireless communication links, such as a local RF network, permit locating the user interface 92 a safe distance from the cameras 26a-d, 32a-d to minimize the threat posed by vehicles 14 armed with explosives to personnel assigned the duty of inspecting such vehicles 14.
  • the alphanumeric keypad 109 allows the operator to input data identifying the particular vehicle 14 approaching the inspection station to be inspected.
  • the license-plate number, the vehicle-identification number, and any other alphanumeric information that can identify the vehicle 14 being inspected can optionally be input into the system 10 where it can be stored in the computer-readable memory of the user interface 92.
  • Vehicle-identifying information input via the alphanumeric keypad 109 can be stored in the computer-readable memory and indexed with captured images 48 of that vehicle 14.
  • Subsequent video images 48 captured of this vehicle's undercarriage 12 can then be compared to the previously captured and indexed video images 48 for comparison purposes. Comparison of the video images 48 can assist in identifying alterations made to the vehicle's undercarriage 12 since that vehicle's 14 previous visit to the inspection station.
  • a pneumatic lens cleaner that can direct a flow of air over a lens provided to the cameras of at least one of the first and second camera sets to remove debris therefrom.
  • Embodiments of the present invention can further comprise computer-readable logic to assist an operator in detecting the presence of an improvised explosive device ("IED") or other "planted" device on the undercarriage 12 of a vehicle 14.
  • the present invention provides computer-readable memory for electronically storing that composite image for later review.
  • the operator is prompted to manually enter (via the keypad 109) a unique vehicle identification (UVI) string that can be used to subsequently refer to that image.
  • this manual entry could be automated by using license plate recognition (LPR) technology, bar code or Radio Frequency Identification (RFID) technology, and the like.
  • the most-recently-captured composite image 140 of the vehicle's undercarriage 12 can be displayed as arranged in Figure 10, on the right-most side of the display device 85.
  • a previously-captured and saved composite image 142 can be displayed for comparison with the most-recent composite image 140 and to highlight any visible artifacts 146 on the vehicle's undercarriage 12 that were altered since the previously- captured composite image 142 was saved.
  • the visible artifacts 146 can include components or other objects removed from the vehicle's undercarriage 12, components or other objects added to the vehicle's undercarriage 12, modifications to the vehicle's undercarriage 12, and other differences between the previously-captured composite image 142 and the current composite image 140.
  • a difference image 148 can be created from, and displayed along with the current composite image 140 and the previously-captured composite image 142 to highlight differences between those two images.
  • the difference image 148 is an image generated by performing a comparative overlay of the previously-captured composite image 142 and the current composite image 140, and subtracting the features of the vehicle undercarriage 12 common to both images. Performing such a comparative overlay produces a difference image 148 where differences between the vehicle undercarriage 12 shown in the previously-captured composite image 142 and the current composite image 140 stand out from the features of the vehicle undercarriage 12 common to both images.
  • Performing a subtraction of the overlaid images is but one specific method of generating a difference image.
  • the difference image 148 according to the present invention can be generated in any manner that results in an image making the differences between the features shown in the previously-captured composite image 142 and the current composite image 140 stand out more than they do in the current composite image 140 alone.
  • the phrase "comparative overlay" does not necessarily require two images to be physically or electronically overlaid relative to each other. The term merely requires an electronic operation or comparison of the two images to be performed by a computer processor to generate the difference image.
  • the difference, or subtraction image 148 is shown between the previously-captured composite image 142 and the current composite image 140 in Figure 10.
  • each strip in the difference image 148 includes a degree of overlap owing to the overlapping fields of view between adjacent cameras used to capture the individual images.
  • An artifact 146 was added to the vehicle undercarriage shown in the current composite image 140 since the generation of the previously-captured composite image 142 to illustrate the production of a difference image 148.
  • Formation of the difference image 148 includes the steps of converting the previously-captured composite image 142 to a gray-scale image where features are shown in varying shades of gray, converting the current composite image 140 to a gray-scale image, representing each gray-scale pixel value in each of the converted images by a positive number (absolute value), and subtracting the corresponding pixel values of one of the converted images 140, 142 from the other to form the difference image 148.
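Those steps reduce to a per-pixel absolute difference once both composites are gray-scale arrays. The sketch below uses the common luma weights for the gray-scale conversion, which the patent does not specify, and assumes the two composites are already aligned:

```python
def to_gray(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to gray scale
    using the common luma weights; every pixel value is non-negative."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def difference_image(prev_gray, curr_gray):
    """Subtract corresponding gray-scale pixel values.

    Features common to both composites cancel to 0 (black), so any
    artifact added, removed, or altered since the previous visit
    shows up as a non-zero (lighter) region that stands out against
    the dark background.
    """
    return [[abs(a - b) for a, b in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_gray, curr_gray)]
```

A pixel that is identical in both composites maps to 0, while an added artifact produces a bright patch whose intensity reflects how much the pixel changed.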
  • Converting a composite image into a gray-scale image represents colors in an image with varying shades of gray.
  • Composite images captured directly as gray-scale images need not undergo the conversion to a gray-scale image if pixel values in such images can be represented by positive numbers that allow for the generation of the difference image.
  • Pixel values are numbers that represent the shade of gray displayed by a pixel forming a portion of a composite image.
  • the composite image can be thought of as a grid comprising rows and columns that intersect to form perpendicular angles.
  • a pixel is a single square or a plurality of squares defined by the intersection of one or more rows and columns. By increasing the number of pixels that form a composite image, a more detailed image can be displayed by controlling the color, or shade of gray, displayed by each pixel.
  • Each shade of gray can be identified by a number that can be processed by a computational processor to cause the particular shades of gray to be displayed by the display device, thereby illuminating the pixel to the corresponding shade of gray required to form that portion of the image.
  • the pixel values representing the variety of shades of gray are all positive, and can be the absolute value of those shades of gray normally represented by negative numbers.
  • the positive numbers of either the previously-captured composite image 142 or the current composite image 140 are subtracted from the other according to a suitable algorithm embodied in computer-readable logic.
  • the result of this subtraction operation is a difference image 148 that is largely black except in those areas where there is a visual difference between the previously-captured composite image 142 and the current composite image 140, which appear as a color that provides at least a slight contrast with black.
  • the contrasting areas within the generally black or darkened difference image 148 significantly improve the operator's ability to recognize differences between the previously-captured composite image 142 and the current composite image 140 by enhancing or "highlighting" alterations or artifacts 146, thereby making those alterations or artifacts 146 stand out from the surrounding features of the vehicle's undercarriage 12.
  • the alphanumeric keypad 109 allows the operator to switch between and select views so the operator can access and display a view of the previously-captured composite image 142, the current image 140, the difference image 148, and any combination thereof. As shown in Figure 10, all three such images are displayed to allow the operator to confirm the absence or presence of the artifacts 146 highlighted in the difference image 148 in either or both of the previously- captured composite image 142 and the current composite image 140. As mentioned above, the current composite image 140 can be displayed adjacent to only the difference image 148 if desired by the operator.
  • Pan and zoom functions allow the operator to examine any suspected problem areas with a more detailed view, limited at least in part by the resolution of the cameras used to capture the individual images that are used to form the composite images.
  • the pan and zoom functions are applied to both images in tandem so that the operator can easily zoom in on a region of interest in the difference image 148, and observe the corresponding region in the current composite image 140 in the same point of view simultaneously.
  • the effectiveness of the subtraction operation to generate the difference image 148 is a function of at least the position and speed the vehicle is advanced over the cameras during the imaging process relative to the position and speed of the vehicle during the imaging process when the images used to generate the previously-captured composite image 142 were captured. If the current composite image 140 differs significantly from the previously-captured composite image 142 due to misalignment of the vehicle as it advances over the cameras, or a substantial difference in speed of the vehicle as it advances over the cameras relative to the previous speed of the vehicle, the difference image 148 will not accurately reflect a comparison of corresponding features of the vehicle's undercarriage 12.
  • the present invention further comprises computer-readable logic that at least partially compensates for some variation in the position and speed with which the vehicle is advanced over the cameras between the previously- captured composite image 142 and the current composite image 140.
  • a shift in vehicle position between the previously-captured composite image 142 and the current composite image 140 results in an inaccurate number of light areas in the difference image 148 because artifacts 146 under the vehicle 14 do not align suitably for the difference algorithm.
  • An optimum "fit" between the vehicle position in the previously-captured composite image 142 and the current image 140 should result in the darkest overall difference image 148. Since darkness is represented by a low pixel number in embodiments of the present invention, the algorithms embodied in computer-readable logic processed by a computational processing unit calculate the sum of all pixel values for the difference image 148.
  • the present invention shifts at least one of the previously-captured composite image 142 and the current composite image 140 a prescribed number of pixels in each of the X and Y directions. Once the shift has occurred, the difference between pixel values obtained from the subtraction operation performed in generating the difference image 148 and the sum of all pixel values are again determined. The shifting and recalculation of pixel value differences are repeated a plurality of times using an algorithm that eventually converges on the best fit while minimizing the number of calculations and computational time. Once the "best fit" is determined, the difference image 148 is generated using this "best fit" alignment and displayed to the operator. Using this algorithm minimizes the amount of "false light" regions that appear within the difference image 148 and improves the ability of the operator to see a real difference in the underside of the vehicle 14.
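Assuming "darkest" means the lowest mean pixel value of the difference image, the best-fit search can be sketched as a brute-force grid over small offsets. The patent describes a faster converging algorithm; exhaustive search merely stands in for it here:

```python
def best_fit_shift(prev, curr, max_shift=2):
    """Find the (dx, dy) offset of `prev` relative to `curr` whose
    difference image is darkest.

    For each candidate shift, corresponding pixels in the overlap
    are subtracted and the mean absolute difference is computed;
    the shift producing the lowest mean (the darkest difference
    image) is returned as the best-fit alignment.
    """
    h, w = len(curr), len(curr[0])
    best = (0, 0, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, count = 0, 0
            for y in range(h):
                for x in range(w):
                    py, px = y + dy, x + dx
                    if 0 <= py < len(prev) and 0 <= px < len(prev[0]):
                        total += abs(prev[py][px] - curr[y][x])
                        count += 1
            score = total / count   # mean, so small overlaps aren't favored
            if score < best[2]:
                best = (dx, dy, score)
    return best[0], best[1]
```

Regenerating the difference image at the returned offset suppresses the "false light" regions that misalignment would otherwise produce.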

Abstract

A vehicle inspection system for capturing images of a vehicle's undercarriage as the vehicle (14) approaches a secure location. The vehicle-inspection system includes one or more cameras (18) arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the cameras, and can optionally include computer-readable logic for facilitating the combination of individual images to generate a composite image of the vehicle's undercarriage.

Description

U.S. PATENT APPLICATION Advanced Detection Technologies, Inc.
TITLE OF THE INVENTION VEHICLE-UNDERCARRIAGE INSPECTION SYSTEM
FIELD OF THE INVENTION
The invention relates generally to inspection of a vehicle's undercarriage, and more particularly, to a system for capturing a plurality of images that collectively span the entire width of the vehicle's undercarriage and optionally generating a composite image of the vehicle's undercarriage from the plurality of images for inspection purposes.
BACKGROUND OF THE INVENTION
In light of recent world events, security has become a primary concern. Public gathering places, military bases, and any other venue where an act of violence would draw the attention of the world now require strict security measures to minimize the potential for such attacks.
Passenger vehicles, large trucks, busses, and other such forms of transportation are often used to hide contraband that is to be smuggled beyond a checkpoint or inspection station. The undercarriage of such vehicles can include a maze of frame members and other structural features that can be used to conceal the contraband from view, thereby enabling those wishing harm upon others or smuggling illegal contraband to accomplish their mission. Physical inspection of the undercarriage of vehicles is impractical due to the often low ground clearance of the vehicles' undercarriage.
Conventional devices developed to aid in the inspection of vehicle undercarriages typically include the use of a mirror provided at the end of a handle. Such a device can be readily deployed where it is needed without much effort. In use, an inspector will extend the mirror at least partially under the vehicle and observe a reflection of the undercarriage in the mirror. The inspector must physically position the mirror about the entire perimeter of the vehicle to perform a complete search; however, even upon searching the entire perimeter of the vehicle, the search can neglect portions of the vehicle's undercarriage near the center of the vehicle, and thus, out of view from the mirror. Additionally, objects viewed in this manner are inverted and are difficult to identify and differentiate from the frame members of the vehicle. And for inspection stations where many vehicles are to be searched, the use of a mirror is time consuming and results in significant delays that inconvenience the vehicles' occupants.
More recently, attempts have been made to develop an automated vehicle-inspection system implementing cameras to capture images of undercarriages of moving vehicles. An example of such an attempt is the installation of several cameras into a speed bump over which vehicles are to be driven. The speed bump can be located at the entrance of a secure location that requires vehicles to be inspected before entering. The speed of an oncoming vehicle is estimated using radar technology and images of the vehicle's undercarriage are captured by the cameras at a suitable rate for the measured speed of the vehicle. Images captured by the cameras are saved to a database and can be electronically assembled to form a composite image of the vehicle's entire undercarriage after the vehicle has completely passed over the speed bump. Only after the composite image is generated can an inspector observe this image to determine whether contraband is present in the vehicle's undercarriage.
The time required to generate the composite image of the vehicle's undercarriage increases delays experienced by the occupants of the vehicle. Further, to maintain a generally constant flow of traffic over the speed bump, a vehicle must be permitted to pass beyond the checkpoint before the composite image is displayed to the inspector and a thorough inspection performed. Thus, a vehicle carrying contraband must be pursued after being allowed to pass to recover the contraband. A holding pen for vehicles awaiting the results of the inspector's inspection of the composite image would result in a collection of vehicles subjected to the possible danger of contraband hidden in an adjacent vehicle. Conventional imaging systems use a camera to take several still pictures of the underside of the vehicle and arrange the still pictures to create a mosaic composite picture. The shutter on the still camera is triggered by a hardware device that determines and monitors the changing distance between the camera and an approaching vehicle. When the vehicle advances over the camera from one position A to another position A+X, where X is the field of view of the camera, the shutter is triggered and a still picture is captured.
The composite image generated by the automated system described above can be distorted due to limitations of the cameras disposed within the speed bump. A wide variety of vehicles can be driven over the speed bump, each having a different ground clearance. Sports cars are at one end of the spectrum of ground clearance and can have an undercarriage supported only 6 inches above the ground. In contrast, commercial trucks can pull trailers that have an undercarriage supported up to 40 inches above the ground. To capture images of both of these vehicles, and vehicles having an intermediate ground clearance, the cameras of conventional systems rely on lenses to bend the light in a manner permitting the capture of images that span the entire width of the vehicle. The optical characteristics of such lenses typically distort the resulting images to ensure an acceptable angle of view. Distorted images make inspection of vehicle undercarriages and the identification of contraband difficult.
Adding to the distortion of images captured by a conventional automated vehicle-inspection system are variations in the speed and angle of approach of oncoming vehicles. Driving over the cameras in the speed bump at excess speeds will result in a blurred image and possibly a composite image missing portions of the undercarriage, making inspection impossible without making the driver repeat the image-capturing process. Likewise, an approaching vehicle that advances over the cameras in the speed bump without straddling the speed bump at least close to the center of the vehicle will result in a composite image that excludes portions of the vehicle's undercarriage near one of the vehicle's sides. Subsequent inspection of this image will not detect the presence of contraband at this location. Further, allowing vehicles to advance over the cameras in the speed bump differently each time will make comparison of the resulting composite image to previously captured and assembled composite images difficult. Modern security concerns rest not only with secured locations within a perimeter, but also with the perimeter itself. Malicious people will take any opportunity to inflict harm upon security personnel standing in the way of their mission. Explosives or other incendiary devices coupled to the vehicle's undercarriage pose a significant threat to inspectors at checkpoints where vehicles are to be inspected. Armed vehicles can be driven to the checkpoint and detonated where those inside feel the explosions can inflict the most damage to both people and property. In these situations, conducting an inspection of the vehicles, regardless of the type of inspection, at the checkpoint puts the inspectors in danger. Accordingly, there is a need in the art for a vehicle-inspection system that permits inspection of the vehicle's undercarriage in a timely manner, while minimizing the distortion of the composite image.
The composite image should include the entire undercarriage of the vehicle, and should be generated in a substantially consistent manner each time a vehicle is inspected. Further, the vehicle-inspection system should permit remote inspection of vehicles to minimize the threat to inspectors.
SUMMARY OF THE INVENTION
It is an object of the present invention to minimize the post-capture time required to record video images onto a recording medium. It is a further object of the present invention to provide redundancy and data backup of the video images to be recorded on the video-image recording medium.
The present invention achieves these and other objectives by providing a vehicle-inspection system for capturing video images of a vehicle's undercarriage as the vehicle approaches a secure location. The vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance. The vehicle-inspection system further comprises a computer-readable memory provided with means for electronically assembling images of the vehicle's undercarriage captured by each of the plurality of cameras in at least one of the first and second camera sets to form a widthwise image of a portion of the vehicle's undercarriage.

In accordance with another aspect, the present invention provides a vehicle-inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location. The vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance.
The vehicle-inspection system further comprises a height sensor for sensing the ground clearance of the vehicle, and computer-readable logic for automatically selecting one of the first and second camera sets to capture images of the undercarriage of the vehicle based on the ground clearance measured by the height sensor without user intervention.
In accordance with another aspect, the present invention also provides a vehicle-inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location. The vehicle-inspection system includes a first camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set; and a second camera set comprising a plurality of cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance. A guidance device is provided to display visual instructions that instruct a driver of the vehicle to alter at least one of the vehicle's speed and approach angle relative to the first and second camera sets.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other features and advantages of the present invention will become apparent to those skilled in the art to which the present invention relates upon reading the following description with reference to the accompanying drawings, in which:
Figure 1 is a perspective view of a vehicle-undercarriage imaging system in accordance with one embodiment of the present invention;
Figure 2 is an overhead view of an arrangement of cameras within a protective housing in accordance with an embodiment of the present invention;
Figure 3 is a partial perspective view of a camera arrangement within a protective housing in accordance with an embodiment of the present invention;
Figure 4a is a cutaway view of an arrangement of low cameras within a protective housing in accordance with an embodiment of the present invention;
Figure 4b is a cutaway view of an arrangement of high cameras within a protective housing in accordance with an embodiment of the present invention;

Figure 5a is a side view of a vehicle approaching a protective housing enclosing cameras in accordance with an embodiment of the present invention;
Figure 6 is a perspective view of a vehicle approaching an inspection system in accordance with an embodiment of the present invention;
Figure 7 illustrates a vehicle-guidance system for displaying instructions to a driver of an approaching vehicle in accordance with an embodiment of the present invention;
Figure 8 is a view of a display device displaying images of a vehicle's undercarriage in quadrants in accordance with an embodiment of the present invention;
Figure 9 is a view of a display device displaying images of a vehicle's undercarriage in columns in accordance with an embodiment of the present invention; and

Figure 10 is a side-by-side comparison of two composite images of a vehicle's undercarriage, and a difference image generated from such comparison.
DETAILED DESCRIPTION OF PREFERRED AND ALTERNATE EMBODIMENTS
Certain terminology is used herein for convenience only and is not to be taken as a limitation on the present invention. Further, terms such as undercarriage, underside, width, length, and the like are used herein as those terms are defined with reference to the drawings, which may show certain features in somewhat schematic form.

Figure 1 illustrates an embodiment of a vehicle-inspection system 10 of the present invention for capturing images of a vehicle's undercarriage 12 (Figure 5a) as the vehicle 14 approaches a secure location. The system 10 includes a protective housing 18, a first camera set 22, best shown in Figure 2, comprising a plurality of cameras 26a-d arranged along axis 99a, and a second camera set 28 that also comprises a plurality of cameras 32a-d arranged along axis 99b. The cameras 26a-d, 32a-d that make up the first and second camera sets 22, 28 are arranged in each set to capture images that collectively span at least a width of a vehicle's undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d. The cameras 26a-d belonging to the first camera set 22 capture images of a low-vehicle undercarriage, and the cameras 32a-d belonging to the second camera set 28 capture images of a high-vehicle undercarriage. For clarity, the first camera set 22 will be referred to herein as the low camera set 22 and the second camera set 28 will be referred to as the high camera set 28. Similarly, cameras belonging to these sets will be referred to as low cameras 26a-d and high cameras 32a-d, respectively.
The cameras 26a-d, 32a-d of the present invention are conventional video cameras. An example of a suitable camera is an NTSC-compliant video camera that captures about 30 frames per second. However, video cameras of different formats that operate at different film speeds are also considered within the scope of the present invention. And although reference is made herein to film speeds, the cameras 26a-d, 32a-d can also record the captured images digitally. In such cases, film speeds refer to digital-recording rates that are the equivalents to the specified film speeds.
The distance between the ground 37 (Figure 5a) and a vehicle's undercarriage 12 is referred to herein as the ground clearance D of that vehicle 14. A low-vehicle undercarriage 12 or low ground clearance D is an undercarriage 12 that rests a short distance from the ground 37, typically within a range from about 4 inches to about 26 inches from the ground 37. Similarly, a high-vehicle undercarriage 12 or high ground clearance D is an undercarriage 12 that rests a greater distance from the ground 37 than the low ground clearance D, and is typically within a range from about 26 inches to about 50 inches from the ground 37. These distances between the ground 37 and a vehicle's undercarriage 12 are merely used to illustrate particular embodiments of the present invention. Images of vehicle undercarriages 12 having a variety of different ground clearances D, including ground clearances D greater than or less than those specifically mentioned above, can be captured by the cameras 26a-d, 32a-d of the present invention. For example, high cameras 32a-d can be adapted within the scope of the present invention to capture images of a vehicle 14 having a ground clearance D of 50+ inches without significant distortion. Shown in Figure 4a, the low cameras 26a-d include a wide angle of view φ suitable to capture images of the undercarriage 12 of a vehicle 14 having a low ground clearance D without excluding significant portions of the vehicle's undercarriage 12 along a transverse axis 36 (Figure 5b) that extends across the width W of the vehicle 14. Wide-angle lenses 39a can be provided to maximize the angle of view φ of each low camera 26a-d, thereby requiring a minimal number of low cameras 26a-d to ensure that the images captured by the low cameras 26a-d collectively span at least the entire width W of a vehicle's undercarriage 12 without excluding a portion of the vehicle's undercarriage 12. 
As used herein, the term wide-angle lens 39a refers to any lens having a focal length less than the standard. Thus, for example, for a 3.5 mm format, a lens having a focal length less than 3.5 mm is considered to be a wide-angle lens 39a.
The minimum spacing between adjacent low cameras 26a-d depends at least on the angle of view φ of the lens 39a provided thereto, and on the maximum vehicle width W and minimum vehicle ground clearance Dmin that can be imaged by the low cameras 26a-d of the present invention. As shown in Figure 4a, capturing images with the low cameras 26a-d of a vehicle 14 having a ground clearance D1 that is below the minimum allowable ground clearance Dmin will result in images that do not collectively span at least the entire width W of the vehicle's undercarriage 12 along the transverse axis 36 without omitting portions of the vehicle's undercarriage 12. In other words, portions of the vehicle's undercarriage 12 will fall within gaps 42 between the angles of view φ of two adjacent low cameras 26a-d, and thus will be absent from a widthwise image 51 of the vehicle's undercarriage 12 generated by combining the individual images 48 (Figure 5b) or by combining lengthwise images 53 (Figure 5b) captured by each of the low cameras 26a-d. Spacing the low cameras 26a-d exactly the maximum allowable distance apart for a given minimum ground clearance Dmin will cause the angles of view φ of adjacent low cameras 26a-d to form an intersection 45a at the vehicle's undercarriage 12, resulting in individual images 48 taken by the low cameras 26a-d that can be assembled side by side along the transverse axis 36 without overlap to generate a composite image 52 spanning the width of the vehicle undercarriage 12. Spacing the low cameras 26a-d the maximum allowable distance apart allows a minimum number of low cameras 26a-d to be included in the protective housing 18 to capture images 48 that collectively span the entire width W of the vehicle's undercarriage 12 without omitting portions therefrom.
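The spacing geometry described above can be expressed compactly: for a camera looking straight up with angle of view φ, the field of view at height Dmin spans 2·Dmin·tan(φ/2), so cameras spaced exactly that far apart form the intersection 45a at the undercarriage. The following Python sketch is illustrative only, not part of the claimed invention; the function names, units (inches and degrees), and the example values in the usage note are assumptions:

```python
import math

def max_camera_spacing(view_angle_deg, min_clearance_in):
    """Maximum spacing between adjacent cameras so that their fields of
    view just meet (intersection 45a) at an undercarriage resting at the
    minimum allowable ground clearance Dmin."""
    half_angle = math.radians(view_angle_deg / 2.0)
    return 2.0 * min_clearance_in * math.tan(half_angle)

def min_camera_count(vehicle_width_in, view_angle_deg, min_clearance_in):
    """Fewest cameras whose combined coverage at Dmin spans the full
    vehicle width W without leaving gaps 42."""
    spacing = max_camera_spacing(view_angle_deg, min_clearance_in)
    return math.ceil(vehicle_width_in / spacing)
```

For example, a hypothetical 120° wide-angle lens at a 6-inch minimum clearance covers roughly 20.8 inches per camera, so spanning an 80-inch-wide undercarriage would require at least four cameras, consistent with the four-camera sets shown in Figure 2.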
For vehicles having a ground clearance D2 that is greater than the minimum ground clearance Dmin and less than the upper-limit ground clearance for the particular low cameras 26a-d shown in Figure 4a, the individual images 48 captured by the low cameras 26a-d will include overlapping portions 54. Such individual images 48 captured by adjacent low cameras 26a-d will include common features that appear in the images captured by the adjacent low cameras 26a-d. These overlapping individual images 48 can also be combined electronically according to instructions embodied by computer-readable logic to form a composite image 52 that spans the entire width of the undercarriage 12, as described in detail below.
Wide-angle lenses 39a tend to distort objects in captured images by bending the objects located near an edge of the images 48 and causing some upright features to appear as if they are leaning. Such distortion is generally attributed to the extent to which light must be bent to allow the wide-angle lens 39a to observe all objects within the expanded angle of view φ. Although a trivial amount of distortion is acceptable, the images 48 captured by the low cameras 26a-d in the low-camera set 22 can be optimized by selecting a lens 39a having a suitable angle of view φ that captures images 48 of acceptable quality, and by implementing an appropriate number of cameras greater than the theoretical minimum, as is known in the art. As previously mentioned, the number of cameras must be sufficient for the lens 39a having the selected angle of view φ to capture images 48 that collectively span the entire width W of an undercarriage 12 of a vehicle 14 that is the lowest and widest possible. In other words, the number of low cameras 26a-d chosen should be able to accommodate the worst-case scenario of vehicle ground clearance D and width W. Imaging vehicles 14 having a higher ground clearance D or a narrower width W than the worst-case scenario will then produce an overlap between the individual images 48 taken by each low camera 26a-d, and imaging beyond the terminal edge of the vehicle's undercarriage 12. Due to the expansive angle of view φ captured by the low cameras 26a-d, the upper-limit ground clearance for the low cameras 26a-d is determined to be the height above the low cameras 26a-d at which significant distortion is introduced into the individual images 48. Although some distortion is acceptable, when the ground clearance D reaches a height at which distortion of the individual images 48 makes viewing the features depicted therein difficult, that ground clearance exceeds the upper-limit ground clearance for the low cameras 26a-d.
Figure 4b schematically illustrates the high cameras 32a-d, or cameras belonging to the high camera set 28, which include a narrow-angle lens 39b having a narrow angle of view β that is narrower than the wide angle of view φ exhibited by the low cameras 26a-d with the wide-angle lenses 39a, thus giving the high cameras 32a-d the ability to magnify and capture images 48 of objects a further distance away than the low cameras 26a-d without significant distortion. When viewed through the wide-angle lenses 39a provided to the low cameras 26a-d, objects that are located beyond the upper-limit ground clearance for the particular low cameras 26a-d appear to be located further from the low cameras 26a-d than they actually are. For these objects, lines and shapes are distorted, perspective is exaggerated, and even simple scenes can be rendered difficult to view. Distortion to this degree, caused by viewing distant objects through a wide-angle lens 39a, makes inspection of a vehicle's undercarriage 12 with the captured images 48 impracticable and unreliable.
In contrast to the low cameras 26a-d, the high cameras 32a-d are equipped to observe objects within an angle of view β that is narrower than the wide angle of view φ observed by the low cameras 26a-d, but with less distortion than that observed viewing the same objects at the same distance through the low cameras 26a-d. Accordingly, the high cameras 32a-d can capture images 48 of an undercarriage 12 of a vehicle 14 having a high ground clearance D without significant distortion, which would otherwise be observed in the same images captured by the low cameras 26a-d. Further, the narrow angle of view β exhibited by the high cameras 32a-d will capture only portions of an undercarriage 12 of a low-ground-clearance vehicle.
Similar to the low cameras 26a-d, a suitable narrow-angle lens 39b to be provided to the high cameras 32a-d can be selected as a function of the number of high cameras 32a-d to be used, the range of ground clearances D to be imaged, and the maximum width W of a vehicle undercarriage 12 to be imaged. High cameras 32a-d equipped with a suitable narrow-angle lens 39b can: capture images 48 of vehicle undercarriages 12 falling within the desired range of high ground clearances without significant distortion; capture images 48 of vehicle undercarriages 12 within the range of high ground clearances from each high camera 32a-d that at least partially overlap with images 48 captured by adjacent high cameras 32a-d; and capture images 48 from the high cameras 32a-d that collectively span at least the entire width W of a vehicle undercarriage 12 having the lowest ground clearance in the range of high ground clearances without omitting portions along axis 36 of the vehicle undercarriage 12. Regardless of the particular angle of view of the lenses provided to the low and high cameras 26a-d, 32a-d, the high cameras 32a-d can capture images 48 of objects within a narrower angle of view β than the low cameras 26a-d.
Also similar to the low cameras 26a-d, the minimum spacing between adjacent high cameras 32a-d depends at least on the angle of view β of the lenses 39b, the maximum vehicle width W, and the minimum vehicle ground clearance d that can be imaged by the high cameras 32a-d without omitting portions of the vehicle's undercarriage 12. As shown in Figure 4b, capturing images 48 with the high cameras 32a-d of a vehicle undercarriage 12 having a ground clearance d1 that is below a minimum allowable ground clearance dmin for the chosen angle of view β and camera spacing will result in images 48 that do not collectively span at least the entire width W of the vehicle's undercarriage 12 along the transverse axis 36 without omitting portions of the undercarriage 12 along axis 36. In other words, portions of the vehicle's undercarriage 12 will fall within gaps 42b between the angles of view β of two adjacent high cameras 32a-d, and thus will be absent from a composite image 52 spanning at least the width W of the vehicle's undercarriage 12 generated by combining the individual images 48 captured by each of the high cameras 32a-d. Thus, for the given minimum-acceptable ground clearance dmin and number of cameras shown in Figure 4b, the composite image 52 of a vehicle undercarriage 12 having a ground clearance of d1 will appear incomplete and omit one or more portions of the vehicle's undercarriage along axis 36.
For vehicles 14 having a ground clearance d2 that is greater than the minimum ground clearance dmin, the individual images 48 captured by the high cameras 32a-d will include overlapping portions 54b. These overlapping individual images 48 can also be combined electronically according to instructions embodied by the computer-readable logic of the present invention to form a complete composite image 52 (Figure 5b) spanning the width of the vehicle's undercarriage 12 without reproducing substantial portions of the vehicle's undercarriage along longitudinal axis 86.
According to other embodiments of the present invention, one of the low camera set 22 and the high camera set 28, or both, can each comprise one or more cameras to suitably capture images of a vehicle undercarriage 12 instead of the plurality of cameras described above. For example, an embodiment of the present invention that includes a single camera in each of the low and high camera sets 22, 28 can optionally further include a mirror (not shown) or other reflective device spaced a distance apart from the camera in each camera set 22, 28. The mirror is positioned to reflect an image of the vehicle's undercarriage 12 toward the camera in each camera set 22, 28. Depending at least in part on the distance between the cameras and the mirror, each camera can effectively capture a single image that spans the entire width of the vehicle 14, as reflected by the mirror. This allows each camera to capture a series of images as portions of the vehicle undercarriage are reflected by the mirror.
Similarly, if the mirror cannot be placed a suitable distance apart from each camera to reflect an entire widthwise image of the vehicle undercarriage 12 toward the cameras, one or more mirrors can be positioned a suitable distance away from two or more cameras in each camera set 22, 28 to capture individual images of portions of the vehicle undercarriage 12 that can be electronically combined to form a composite image of the vehicle's undercarriage as described below. Alternate embodiments of the present invention include the use of a single camera set comprising one or more cameras for capturing images of both low and high vehicle undercarriages. According to such embodiments, one or more mirrors or other reflective objects that can direct an image of the vehicle undercarriage toward the camera(s) can optionally be placed at or near ground level separated a distance from the one or more cameras. The camera(s) capture images of the vehicle undercarriage directed by the mirror(s). In this manner it is possible to minimize the number of cameras used to capture images that span at least the entire width of vehicles to be imaged. A specific embodiment includes the use of a single camera to capture images that span at least the entire width of the undercarriage of vehicles that are to be inspected with the present invention. The widthwise images captured in this manner can be electronically combined as described below to form a composite image of the entire vehicle undercarriage.
Referring once again to Figure 2, an embodiment of the vehicle inspection system 10 of the present invention further includes a height sensor 62 disposed within the protective housing 18 for measuring the ground clearance D of the vehicle 14 as the vehicle 14 advances over the cameras 26a-d, 32a-d. The height sensor 62 can measure the ground clearance D of an approaching vehicle by transmitting an ultrasonic signal, radio-frequency ("RF") signal, microwave signal, ultraviolet light, infrared light, laser light, and any other type of signal (collectively referred to herein as a "wireless measuring signal"), and receiving a data signal in response to the interaction of the wireless measuring signal with a portion of the vehicle 14.
Upon receiving the data signal, the ground clearance D of the vehicle 14 can be determined by a controller 65 based on the time it takes for the data signal to be received by the height sensor 62 in response to the transmission of the wireless measuring signal, based on modulation of the wireless measuring signal, a combination thereof, or any other known method for measuring a distance with a wireless measuring signal. The controller 65 is provided with computer-readable logic stored in a computer-accessible memory for determining the ground clearance D of the vehicle 14 and selecting the appropriate camera set 22, 28 to use for imaging the vehicle's undercarriage 12 based on the measured ground clearance D. The controller 65, which operates according to instructions embodied by the computer-readable logic, automatically selects the appropriate camera set 22, 28 for imaging the vehicle's undercarriage 12 without intervention from an operator of the vehicle-inspection system 10. The controller 65 can be physically located within the protective housing 18, within a user interface (Figure 1), or at any other location to which electronic data can be transmitted.

Consider, for example, the case where the undercarriage 12 of an industrial truck having a ground clearance of 35 inches is to be inspected, as shown schematically in Figure 6. The height sensor 62 can transmit the wireless measuring signal in the form of a pulse of laser light and await receipt of the data signal, which in this case can be reflected light, following interaction of the wireless measuring signal with a portion of the vehicle 14. The controller 65 can manipulate the information gathered by receiving the data signal to determine that the vehicle 14 to be inspected has a ground clearance D of approximately 35 inches.
Based on this information, the controller 65 selects the high cameras 32a-d of the high camera set 28 to capture images 48 of the vehicle's undercarriage 12 without operator intervention.
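The controller logic just described — derive a ground clearance from the sensor's round-trip time, then pick a camera set without operator input — can be sketched as follows. This is a minimal illustration assuming a time-of-flight laser sensor at ground level and a single 26-inch boundary between the low and high ranges; the constants and names are hypothetical, not taken from the claims:

```python
# Illustrative constants (assumptions, not from the patent)
SPEED_OF_LIGHT_IN_PER_S = 1.18e10   # speed of light, approx., in inches/second
LOW_SET_UPPER_LIMIT_IN = 26.0       # assumed boundary between the low and high ranges

def clearance_from_round_trip(round_trip_s):
    """Time-of-flight estimate: the laser pulse travels to the
    undercarriage and back, so the one-way distance is half the trip."""
    return (round_trip_s * SPEED_OF_LIGHT_IN_PER_S) / 2.0

def select_camera_set(clearance_in):
    """Controller 65 logic: choose the camera set from the measured
    ground clearance D, with no operator intervention."""
    return "high" if clearance_in > LOW_SET_UPPER_LIMIT_IN else "low"
```

With these assumptions, the 35-inch industrial truck in the example above maps to the high camera set, while a 6-inch sports car maps to the low set.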
The height sensor 62 can be integrated within the protective housing 18 as shown in Figures 2 and 6, or located at a position remote from the protective housing 18 as shown schematically in Figure 5. For example, the height sensor 62 can optionally be located upstream within the flow of traffic as vehicles 14 approach the camera sets 22, 28 to be imaged. According to such an embodiment, the ground clearance D of oncoming vehicles 14 will be measured before the images 48 of the vehicles 14 will be captured. The sequence of the vehicles 14 that have had their ground clearance D measured is maintained and the appropriate camera set 22, 28 selected in that order. Thus, by the time the vehicles 14 arrive at the camera sets 22, 28 the appropriate camera set 22, 28 has been selected by the controller 65 to allow the imaging process to begin.
The protective housing 18 of the present invention can take any form that can withstand being run over by the class of vehicles 14 to be inspected. As shown in Figures 1, 2 and 6, the protective housing 18 resembles a conventional speed bump that includes a generally-horizontal upper surface 68 and two inclined surfaces 71 that extend between the ground the speed bump rests on and the upper surface 68. The upper surface 68 includes a plurality of apertures 74 through which the cameras 26a-d, 32a-d within the speed bump can observe the vehicle undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d. The apertures 74 are suitably sized to create a clear line of sight between the cameras 26a-d, 32a-d in the speed bump and the vehicle 14 advancing over the speed bump.
An alternate embodiment of the present invention includes a protective housing 18 that has a generally U-shaped cross section as shown in Figure 3. According to this embodiment, the protective housing 18 is recessed into the ground similar to a trench drain. A metallic grate 77 that includes apertures 79 arranged similar to those in the horizontal upper portion 68 of the speed bump conceals portions of the housing 18 without interfering with a line of sight extending between the cameras 26a-d, 32a-d and a vehicle 14 advancing over the cameras 26a-d, 32a-d. The metallic grate 77 can be installed on the protective housing 18 to be generally flush with the ground, thereby minimizing the impact experienced by occupants of a vehicle 14 being driven over the grate 77.
Another embodiment of the present invention includes a low-profile protective housing that can comprise the ground surface itself. According to this embodiment, suitably-sized portions of the roadway are removed and the camera sets 22, 28 are inserted directly into the holes. A channel, such as a saw cut in the roadway surface, can receive the wiring operatively connecting the camera sets 22, 28 and the controller 65. Thus, a semi-permanent installation can be created without using a trench-drain housing as in the previous embodiment.
The protective housing 18 can optionally be provided with an illumination device 82, shown in Figures 4a and 4b, for illuminating the undercarriage 12 of a vehicle 14 as the vehicle 14 advances over the cameras 26a-d, 32a-d. Examples of suitable illumination devices 82 include incandescent lamps, light-emitting diodes, fluorescent lamps, and any other device that can convert electrical energy into visible light. One or more of the illumination devices 82 can be installed adjacent to each camera 26a-d, 32a-d disposed within the protective housing 18. By providing one or more illumination devices 82 adjacent to each camera 26a-d, 32a-d, a sufficient amount of light can be directed onto the vehicle undercarriage 12 as the imaging process is conducted.
Video images captured according to the present invention are actually a series of individual images 48 that are taken at a given frame rate and then played back at that same rate to simulate continuous movement. In capturing the video images of the present invention, the rate of frame capture is generally constant based on time, regardless of the speed and position of the vehicle advancing over the cameras 26a-d, 32a-d. According to one embodiment, the individual images 48 that will form video images or a motion picture are captured of the underside 12 of the vehicle 14 at approximately 30 frames per second. That is, an individual image 48 or frame of the underside 12 of the vehicle 14 is captured 30 times a second, regardless of the position of the vehicle 14 relative to the cameras 26a-d, 32a-d, the speed of the vehicle advancing over the cameras 26a-d, 32a-d, and so on. Alternate embodiments include a speed-sensing device (not shown) to monitor the speed of a vehicle as it advances over the present invention. According to these embodiments, the speed-sensing device measures the velocity of the vehicle during the imaging process, including any changes in velocity, and transmits a signal that causes an adjustment of the rate at which the individual images are captured by the cameras. Thus, as the vehicle's velocity increases, the rate at which individual images are captured is increased, while the image-capture rate decreases as the vehicle's velocity decreases. Adjusting the image-capture rate in proportion to vehicle velocity can minimize the amount of overlap between sequential individual images, discussed below.
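The speed-proportional rate adjustment can be sketched as a linear scaling clamped to the camera's supported range, so that the ground distance covered between consecutive frames stays roughly constant. The nominal 5-mph approach speed and the 15–60 fps limits below are assumptions chosen for illustration, not values from the patent:

```python
# Illustrative constants; the nominal speed and fps limits are assumptions
BASE_FRAME_RATE_FPS = 30.0   # NTSC-style default capture rate
BASE_SPEED_MPH = 5.0         # assumed nominal approach speed

def adjusted_frame_rate(vehicle_speed_mph, min_fps=15.0, max_fps=60.0):
    """Scale the image-capture rate in proportion to vehicle speed,
    clamped to the range the camera hardware supports."""
    fps = BASE_FRAME_RATE_FPS * (vehicle_speed_mph / BASE_SPEED_MPH)
    return max(min_fps, min(max_fps, fps))
```

A vehicle at the nominal speed is captured at 30 fps; one moving twice as fast is captured at the 60 fps ceiling, keeping the per-frame overlap similar in both cases.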
For the embodiment having four cameras 26a-d, 32a-d in each camera set 22, 28, such video or motion pictures are captured for each active camera 26a-d, 32a-d in the camera sets 22, 28. As described above, the four cameras 26a-d, 32a-d have an overlapping field of view with each other, and the individual images 48 can be replayed to display the video or motion picture taken as the vehicle 14 passes over the cameras 26a-d, 32a-d. Once the vehicle 14 has completely passed over the cameras 26a-d, 32a-d, the image-capturing process is stopped and the analysis and stitching of the individual images 48 by the computer-readable logic to form a composite image 52 of the entire vehicle undercarriage 12 begins. The composite image 52 is formed by electronically combining or stitching small pieces of each individual image 48 or frame without reproducing substantial portions of the common image features of each frame along longitudinal axis 86 or transverse axis 36. Since the individual images are captured only 1/30th of a second apart, the portion of the individual images 48 combined to form a portion of the composite image must represent the portion of the vehicle undercarriage that advanced over the cameras 26a-d, 32a-d since the immediately preceding individual image 48 was captured.
The electronic combination or stitching of individual images taken by a camera 26a-d, 32a-d along longitudinal axis 86 for producing a composite image 52 proceeds, according to one embodiment, as follows. The captured individual images 48 are each individually analyzed frame-by-frame by a control unit acting as instructed by computer-readable logic. For illustrative purposes, the current frame being analyzed is referred to herein as frame X, where X is an index into an array of frames that make up a video. As a pre-combination step, each individual image is enhanced by applying an algorithm that minimizes distortion from each image based on the geometry of the camera that captured the particular image, and how the respective camera is mounted. For example, the high cameras 32a-d point generally straight up at a perpendicular angle relative to the ground 37, but the resulting images exhibit spherical distortions because of the lenses employed. Using a mathematical algorithm along with knowledge of the geometry of the lens, the image can be enhanced to remove most of the spherical distortion, making the image easily viewed by the operator.
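A common way to implement such spherical correction is a single-coefficient radial model that remaps each pixel in proportion to its squared distance from the optical centre. The sketch below is a generic illustration of that idea, not the patent's specific algorithm; the coefficient k1 would in practice be derived from the measured lens geometry:

```python
def undistort_point(x, y, cx, cy, k1):
    """Remap one pixel of a distorted image using a single-coefficient
    radial model: the point is pushed away from the optical centre
    (cx, cy) in proportion to k1 times its squared radius.  With k1 = 0
    the point is unchanged."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

In a full implementation this mapping is applied (typically in inverse form, with interpolation) to every pixel of each frame before stitching; the angled low cameras 26a-d would additionally need a perspective correction for their trapezoidal distortion, as described next.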
Similarly, low cameras 26a-d are mounted at a compound angle with respect to the vehicle undercarriage 12. So in addition to the spherical distortion that the high cameras 32a-d exhibit, the low cameras 26a-d are also subject to a "trapezoidal" distortion. A different mathematical algorithm is employed (in addition to the spherical algorithm) to remove this distortion and improve the image for operator viewing. Frame X is compared with frame X-1, which is the individual image 48 or frame captured by the same camera 26a-d, 32a-d immediately before frame X. Areas of high contrast (which tend to represent a definable object) are identified in frame X by instructions embodied by computer-readable logic. Once an area of high contrast is identified, the same area of high contrast is identified in frame X-1. The difference in position of the high contrast area between frame X and frame X-1 is the distance along longitudinal axis 86 the vehicle has traveled in 1/30th of a second.
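The frame-to-frame displacement measurement can be illustrated with a minimal sketch. For simplicity it aligns whole frames with a sum-of-absolute-differences search rather than tracking a single identified high-contrast area; this is one way of realizing the comparison of frame X with frame X-1 described above, and the function name and SAD criterion are assumptions.

```python
def estimate_advance(prev, curr, max_shift):
    """Estimate how many pixel rows the undercarriage advanced between
    two consecutive frames (captured 1/30th of a second apart).

    Tries every candidate shift d and keeps the one that best aligns
    curr with prev (minimum mean absolute difference over the
    overlapping rows).
    """
    h = len(prev)
    best_d, best_err = 0, float("inf")
    for d in range(max_shift + 1):
        err = n = 0
        for y in range(h - d):
            for a, b in zip(prev[y], curr[y + d]):
                err += abs(a - b)
                n += 1
        if n and err / n < best_err:
            best_err, best_d = err / n, d
    return best_d
```

The returned shift, in pixels, is the distance along longitudinal axis 86 the vehicle traveled during one frame interval.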
According to an embodiment of the present invention, this distance the vehicle 14 has traveled is measured as a number of pixels (the difference in position of one pixel of the high contrast area between frame X and frame X-1). This number of pixels, referred to herein as the pixel value, defines the amount of this frame that will be used to form a lengthwise image 53 of the entire length of the vehicle undercarriage 12 along longitudinal axis 86.
This process is repeated for all individual images captured by the cameras 26a-d, 32a-d located internally along axes 99a and 99b, inward of the cameras 26a-d, 32a-d located adjacent to the terminus of the protective housing that capture images at the extremes of the width W of the vehicle. The pixel values determined for contiguous frames captured by each internal camera 26a-d, 32a-d according to the instructions embodied by the computer-readable logic are compared and normalized to determine a normalized pixel value. This results in a single pixel value used to identify the portion of the individual images 48 or frames captured by all four cameras 26a-d, 32a-d, according to the illustrative embodiment, that will be included in the respective lengthwise image captured by each camera 26a-d, 32a-d.
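The comparison and normalization of the per-camera pixel values might, for example, take a median, which is robust to one camera mis-tracking; the patent does not prescribe a particular statistic, so the sketch below is an assumption.

```python
def normalized_pixel_value(per_camera_values):
    """Combine the per-camera advance estimates for one frame interval
    into a single slice height used for every camera's lengthwise image.

    A median (rounded to the nearest whole pixel for even-length input)
    is used here as one plausible way to "compare and normalize" the
    estimates.
    """
    vals = sorted(per_camera_values)
    n = len(vals)
    mid = n // 2
    return vals[mid] if n % 2 else (vals[mid - 1] + vals[mid] + 1) // 2
```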
The lengthwise images 53 are formed by considering each individual image 48 or frame captured by each camera 26a-d, 32a-d and taking a slice out of each individual image 48 or frame that corresponds to the normalized pixel value previously calculated. This slice is placed in an image buffer, the next slice is placed adjacent to the preceding slice in a longitudinal manner, and so on until a lengthwise image 53 of the entire length of the vehicle undercarriage 12 along longitudinal axis 86 for each camera 26a-d, 32a-d is created. When displayed side by side, the lengthwise images 53 of the vehicle's undercarriage 12 along longitudinal axis 86 form the composite image 52 of the entire vehicle undercarriage 12.
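The slice-and-append construction of a lengthwise image 53 can be sketched as follows, assuming each frame is stored with its newest-scene rows first so that the leading rows of each frame show the undercarriage that advanced over the camera since the preceding frame:

```python
def build_lengthwise_image(frames, slice_heights):
    """Stitch one camera's frames into a lengthwise image.

    frames: list of 2-D frames (lists of pixel rows).
    slice_heights: one slice height per frame, i.e. the normalized
    pixel value computed for that frame interval.
    """
    lengthwise = []
    for frame, height in zip(frames, slice_heights):
        # Keep only the rows showing undercarriage that advanced over
        # the camera since the previous frame was captured.
        lengthwise.extend(frame[:height])
    return lengthwise
```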
The completed lengthwise images 53 captured by each camera 26a-d, 32a-d are laid adjacent to each other (in strips) (Figure 5b) in the image buffer to produce the composite image 52, which shows the entire vehicle undercarriage 12. As discussed above, this can result in overlap of the lengthwise images 53 since the cameras 26a-d, 32a-d have overlapping fields of view φ, β.
If the view from one camera is obstructed by something hanging down (such as a piece of exhaust pipe), one of the cameras next to it may provide a view past the obstruction in an overlapping portion of an individual image 48 because of the different position of the camera with respect to the exhaust pipe.
As the images 48 are being captured by the cameras 26a-d, 32a-d selected by the controller 65, the captured images 48 can be displayed sequentially as a motion picture video by a display device 85, and the captured images substantially-simultaneously stored in a computer-readable memory 87. To facilitate a portable vehicle-inspection system 10, the computer-readable memory 87 can be installed within a user interface 92 such as that illustrated in Figure 1. The user interface 92, the display device 85 or both can optionally be secured to the interior of a rugged, waterproof carrying case 95 for simple transportation and for protection against possible contamination. A power adapter (not shown) can convert electrical energy drawn from a conventional automobile battery, alternator, generator and any other portable source of electrical energy into a form suitable for powering the vehicle-inspection system 10. Such power adapters are known and can include an inverter for converting direct-current electrical energy into alternating-current electrical energy.
The display device 85 can be any electronic display such as a CRT monitor, television screen, flat-panel display, and the like. As shown in Figure 8, the captured video images can be displayed in quadrants 97a-d by the display device 85. Each quadrant 97a-d displays the images 48 captured by a respective camera 26a-d, 32a-d within a selected camera set 22, 28. The quadrants shown in Figure 8 are arranged as a 2 x 2 matrix; however, alternate embodiments include displaying the captured images 48 from each camera 26a-d, 32a-d aligned side-by-side in strips 98a-d to display a video that resembles the entire width W of the vehicle's undercarriage 12 as shown in Figure 9. The images displayed by the display device 85 as shown in Figure 9 can be a sequential display of the widthwise images 51 (Figure 5b) from time t, the time at which the image-capture process commenced, to time t+m, the time at which the image-capture process is completed. In this manner, the sequential display of widthwise images 51 appears as a motion-picture video of the entire width W of the vehicle's undercarriage 12 as the vehicle 14 advances over the cameras 26a-d, 32a-d.
The ability of the vehicle-inspection system 10 to accurately capture images 48 of the entire vehicle undercarriage 12 is dependent upon at least the direction and speed the vehicle 14 is driven relative to the cameras 26a-d, 32a-d during the imaging process. Ideally, the longitudinal axis 86 of the vehicle 14 is perpendicular to the axes 99a, 99b along which the cameras are arranged as shown in Figure 2; however, minor deviations from the ideal vehicle approach are permissible so long as significant portions of the vehicle's undercarriage 12 do not fall outside the angle of view φ, β of all cameras 26a-d, 32a-d. Acceptable deviations in the vehicle's approach would not result in the omission of portions of the vehicle's undercarriage 12 from the captured images 48.
To minimize the number of vehicles 14 that unacceptably deviate from the ideal vehicle approach, the present invention can optionally include a guidance device 102 (Figures 6 and 7) that conveys visual instructions to the driver of an oncoming vehicle 14. Figure 7 shows an example of a guidance device 102, which can be an electronic display located anywhere within view of the driver of a vehicle 14 as the vehicle 14 approaches the cameras 26a-d, 32a-d. The visual instructions conveyed by the guidance device 102 can include a network of color-coded lights, arrows, text messages, and any other symbol that can instruct the driver how to alter at least one of the speed and alignment of the vehicle 14 relative to the cameras 26a-d, 32a-d. Sensors (not shown) can be located adjacent to the cameras 26a-d, 32a-d to detect and monitor the speed of the vehicle 14 as it approaches and advances over the cameras 26a-d, 32a-d. Any type of conventional speed-measuring device, such as an ultrasonic or laser-based speed sensor, can be positioned directly in front of the ideal path to be traveled by the vehicle 14 as it advances over the cameras 26a-d, 32a-d, or at any other location where the speed of the vehicle 14 can be measured as it approaches and advances over the cameras 26a-d, 32a-d. Similarly, position sensors (not shown) can optionally be provided adjacent to terminal ends 105a, b of the camera sets 22, 28, or at any other location to monitor the position of the oncoming vehicle's tires. For example, the position sensors can be positioned to detect when the oncoming vehicle's tires reach an unacceptable position relative to the cameras that will result in the capture of an incomplete video image of the vehicle's undercarriage. Upon detecting such an unacceptable position of the vehicle 14, a signal is transmitted to the controller 65, which is operably connected to receive signals from the speed and position sensors.
The controller 65 transmits instructions that cause a suitable corrective measure 103 to be displayed to the driver to aid the driver in properly advancing the vehicle over the cameras 26a-d, 32a-d.
Although the controller 65 is shown disposed within the protective housing 18 in Figures 4a and 4b, it is worth noting that the location of the controller 65 can vary without departing from the scope of the present invention. For example, the controller can be disposed within the user interface 92 and operably connected to receive and/or transmit signals to and from cameras 26a-d, 32a-d and sensors of the present invention.
The user interface 92 can be operatively connected to control operation of the cameras 26a-d, 32a-d, the guidance device 102, the illumination device 82, the display device 85, and other features of the vehicle-inspection system 10. The communication link between the user interface 92 and one or more of the cameras 26a-d, 32a-d, controller, illumination device 82, height sensor 62, guidance device 102, and any other feature can be established by extending a cable 107 between the user interface 92 and these features, by establishing a wireless communication link therebetween with transceivers 104 (Figure 6), or any combination thereof. Suitable wireless communication links, such as a local RF network, permit location of the user interface 92 a safe distance from the cameras 26a-d, 32a-d to minimize the threat posed by vehicles 14 armed with explosives to personnel assigned the duty of inspecting such vehicles 14.
An alphanumeric keypad 109, illumination-device activation switch 112, camera-control switch 115, power switch 118, guidance-device controls 121, and any combination thereof can be provided to the user interface 92, thereby affording an operator control over the various system features. For example, an operator can override the automatic selection of the camera set 22, 28 in instances where the operator believes the automatic selection is erroneous. Such may be the case when a vehicle's ground clearance D is determined upstream in the flow of traffic relative to the cameras 26a-d, 32a-d and the driver subsequently abandons the vehicle-inspection process.
The alphanumeric keypad 109 allows the operator to input data identifying the particular vehicle 14 to be inspected as it approaches the inspection station. The license-plate number, the vehicle-identification number, and any other alphanumeric information that can identify the vehicle 14 being inspected can optionally be input into the system 10 where it can be stored in the computer-readable memory of the user interface 92. Vehicle-identifying information input via the alphanumeric keypad 109 can be stored in the computer-readable memory and indexed with captured images 48 of that vehicle 14. Subsequent video images 48 captured of this vehicle's undercarriage 12 can then be compared to the previously captured and indexed video images 48. Comparison of the video images 48 can assist in identifying alterations made to the vehicle's undercarriage 12 since that vehicle's 14 previous visit to the inspection station.
Other optional features of the present invention include a pneumatic lens cleaner that can direct a flow of air over a lens provided to the cameras of at least one of the first and second camera sets to remove debris therefrom.
Embodiments of the present invention can further comprise computer-readable logic to assist an operator in detecting the presence of an improvised explosive device ("IED") or other "planted" device on the undercarriage 12 of a vehicle 14. When a still, composite image is created as described above, the present invention provides computer-readable memory for electronically storing that composite image for later review. The operator is prompted to manually enter (via the keypad 109) a unique vehicle identification (UVI) string that can be used to subsequently refer to that image. Note that this manual entry could be automated by using license plate recognition (LPR) technology, bar code or Radio Frequency Identification (RFID) technology, and the like.
The first time a vehicle 14 approaches a secure location and has its undercarriage 12 imaged and inspected according to the present invention, entering a UVI causes that original composite image to be stored in a database in the computer-readable memory. If the vehicle 14 is approaching the secure location for a subsequent visit and the operator enters the correct UVI for that vehicle 14, the present invention provides an image-compare function to assist the operator in examining the still pictures as follows.
The most-recently-captured composite image 140 of the vehicle's undercarriage 12 can be displayed as arranged in Figure 10, on the right-most side of the display device 85. On the left-most portion of the display device 85, a previously-captured and saved composite image 142 can be displayed for comparison with the most-recent composite image 140 and to highlight any visible artifacts 146 on the vehicle's undercarriage 12 that were altered since the previously-captured composite image 142 was saved. The visible artifacts 146 can include components or other objects removed from the vehicle's undercarriage 12, components or other objects added to the vehicle's undercarriage 12, modifications to the vehicle's undercarriage 12, and other differences between the previously-captured composite image 142 and the current composite image 140.
A difference image 148 can be created from, and displayed along with, the current composite image 140 and the previously-captured composite image 142 to highlight differences between those two images. The difference image 148 is an image generated by performing a comparative overlay of the previously-captured composite image 142 and the current composite image 140, and subtracting the features of the vehicle undercarriage 12 common to both images. Performing such a comparative overlay produces a difference image 148 where differences between the vehicle undercarriage 12 shown in the previously-captured composite image 142 and the current composite image 140 stand out from the features of the vehicle undercarriage 12 common to both images.
Performing a subtraction of the overlaid images is but one specific method of generating a difference image. The difference image 148 according to the present invention, however, can be generated in any manner that results in an image making the differences between the features shown in the previously-captured composite image 142 and the current composite image 140 stand out more than they do in the current composite image 140 alone. Also, as will be shown below, the phrase "comparative overlay" does not necessarily require two images to be physically or electronically overlaid relative to each other. This is merely a term that requires an electronic operation or comparison of the two images to be performed by a computer processor to generate the difference image.
The difference, or subtraction image 148 is shown between the previously-captured composite image 142 and the current composite image 140 in Figure 10. As with the composite images 140, 142, each strip in the difference image 148 includes a degree of overlap owing to the overlapping fields of view between adjacent cameras used to capture the individual images. An artifact 146 was added to the vehicle undercarriage shown in the current composite image 140 since the generation of the previously-captured composite image 142 to illustrate the production of a difference image 148.
Formation of the difference image 148 according to one embodiment includes the steps of converting the previously-captured composite image 142 to a gray-scale image where features are shown in varying shades of gray, converting the current composite image 140 to a gray-scale image, representing each gray-scale pixel value in each of the converted images by a positive number (absolute value), and subtracting the corresponding pixel values from one of the converted images 140, 142 from the other to form the difference image 148. Converting a composite image into a gray-scale image represents colors in an image with varying shades of gray. Composite images captured directly as gray-scale images need not undergo the conversion to a gray-scale image if pixel values in such images can be represented by positive numbers that allow for the generation of the difference image. Pixel values are numbers that represent the shade of gray displayed by a pixel forming a portion of a composite image. The composite image can be thought of as a grid comprising rows and columns that intersect to form perpendicular angles. A pixel is a single square or a plurality of squares defined by the intersection of one or more rows and columns. By increasing the number of pixels that form a composite image, a more detailed image can be displayed by controlling the color, or shade of gray, displayed by each pixel.
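The pixel-by-pixel subtraction that forms the difference image 148 can be sketched as follows, assuming both composite images are already gray-scale, aligned, and of equal size; taking the absolute value keeps every result pixel non-negative, so the result is mostly black (near zero) except where the undercarriage changed:

```python
def difference_image(prev_img, curr_img):
    """Subtract two aligned gray-scale composite images pixel by pixel.

    Each pixel value is a non-negative gray level; the absolute
    difference is kept so unchanged regions come out near zero (black)
    and altered regions come out bright.
    """
    return [[abs(a - b) for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_img, curr_img)]
```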
Each shade of gray can be identified by a number that can be processed by a computational processor to cause the particular shades of gray to be displayed by the display device, thereby illuminating the pixel to the corresponding shade of gray required to form that portion of the image. According to embodiments of the present invention, the pixel values representing the variety of shades of gray are all positive, and can be the absolute value of those shades of gray normally represented by negative numbers.
With the shades of gray in each converted composite image represented by positive numbers, the positive numbers of either the previously-captured composite image 142 or the current composite image 140 are subtracted from the other according to a suitable algorithm embodied in computer-readable logic. The result of this subtraction operation is a difference image 148 that is largely black except in those areas where there is a visual difference between the previously-captured composite image 142 and the current composite image 140, which appear as a color that provides at least a slight contrast with black.
The contrasting areas within the generally black or darkened difference image 148 significantly improve the operator's ability to recognize differences between the previously-captured composite image 142 and the current composite image 140 by enhancing or "highlighting" alterations or artifacts 146, thereby making those alterations or artifacts 146 stand out from the surrounding features of the vehicle's undercarriage 12.
The alphanumeric keypad 109 allows the operator to switch between and select views so the operator can access and display a view of the previously-captured composite image 142, the current image 140, the difference image 148, and any combination thereof. As shown in Figure 10, all three such images are displayed to allow the operator to confirm the absence or presence of the artifacts 146 highlighted in the difference image 148 in either or both of the previously-captured composite image 142 and the current composite image 140. As mentioned above, the current composite image 140 can be displayed adjacent to only the difference image 148 if desired by the operator.
Pan and zoom functions allow the operator to examine any suspected problem areas with a more detailed view, limited at least in part by the resolution of the cameras used to capture the individual images that are used to form the composite images. When the difference image 148 is displayed alongside at least one other composite image, the pan and zoom functions are applied to both images in tandem so that the operator can easily zoom in on a region of interest in the difference image 148, and observe the corresponding region in the current composite image 140 in the same point of view simultaneously.
The effectiveness of the subtraction operation to generate the difference image 148 is a function of at least the position and speed the vehicle is advanced over the cameras during the imaging process relative to the position and speed of the vehicle during the imaging process when the images used to generate the previously-captured composite image 142 were captured. If the current composite image 140 differs significantly from the previously-captured composite image 142 due to misalignment of the vehicle as it advances over the cameras, or a substantial difference in speed of the vehicle as it advances over the cameras relative to the previous speed of the vehicle, the difference image 148 will not accurately reflect a comparison of corresponding features of the vehicle's undercarriage 12. Thus, the present invention further comprises computer-readable logic that at least partially compensates for some variation in the position and speed with which the vehicle is advanced over the cameras between the previously-captured composite image 142 and the current composite image 140.
A shift in vehicle position between the previously-captured composite image 142 and the current composite image 140 results in spurious light areas in the difference image 148 because artifacts 146 under the vehicle 14 do not align suitably for the difference algorithm. An optimum "fit" between the vehicle position in the previously-captured composite image 142 and the current image 140 should result in the darkest overall difference image 148. Since darkness is represented by a low pixel number in embodiments of the present invention, the algorithms embodied in computer-readable logic processed by a computational processing unit calculate the sum of all pixel values for the difference image 148.
To "look" for the best fit, or the closest match to perform the comparative overlay, the present invention shifts at least one of the previously-captured composite image 142 and the current composite image 140 a prescribed number of pixels in each of the X and Y directions. Once the shift has occurred, the difference between pixel values obtained from the subtraction operation performed in generating the difference image 148 and the sum of all pixel values are again determined. The shifting and recalculation of pixel value differences are repeated a plurality of times using an algorithm that eventually converges on the best fit while minimizing the number of calculations and computational time. Once the "best fit" is determined, the difference image 148 is generated using this "best fit" alignment and displayed to the operator. Using this algorithm minimizes the amount of "false light" regions that appear within the difference image 148 and improves the ability of the operator to see a real difference in the underside of the vehicle 14.
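The best-fit search can be sketched as below. An exhaustive scan over a small shift window is shown for clarity; the patent describes a converging algorithm that reduces the number of evaluations, and the function names and the mean-darkness criterion are illustrative assumptions.

```python
def best_fit_shift(prev_img, curr_img, max_shift):
    """Find the (dx, dy) shift of curr_img that yields the darkest
    difference image, i.e. the best alignment with prev_img.

    prev_img and curr_img are equal-size 2-D lists of gray levels.
    """
    h, w = len(prev_img), len(prev_img[0])

    def darkness(dx, dy):
        # Mean absolute difference over the overlapping region; lower
        # means a darker (better aligned) difference image.
        total = n = 0
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    total += abs(prev_img[y][x] - curr_img[sy][sx])
                    n += 1
        return total / n if n else float("inf")

    shifts = [(dx, dy) for dx in range(-max_shift, max_shift + 1)
                       for dy in range(-max_shift, max_shift + 1)]
    return min(shifts, key=lambda s: darkness(*s))
```

The difference image is then generated at the winning shift, minimizing the "false light" regions described above.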
From the above description of the invention, those skilled in the art will perceive improvements, changes and modifications. Such improvements, changes and modifications within the skill of the art are intended to be covered by the appended claims.

Claims

What is claimed is:
1. A vehicle inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location, the system comprising:
a first camera set comprising one or more cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set;
a second camera set comprising one or more cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance; and
a computer-readable memory provided with means for electronically combining sequential images of the vehicle's undercarriage captured by each camera in at least one of the first and second camera sets to form a lengthwise image of a portion of the vehicle's undercarriage.
2. The system according to claim 1 further comprising means for generating a composite image of the entire vehicle undercarriage by assembling the lengthwise image formed from individual images captured by each camera of the vehicle's undercarriage.
3. The system according to claim 1, wherein the first and second camera sets are adapted to be disposed within a portable speed bump comprising a generally horizontal upper surface and two inclined surfaces that each extend to the top surface.
4. The system according to claim 1, wherein each of the first and second camera sets comprise four cameras arranged in a generally-linear pattern and suitably spaced to capture images that partially overlap.
5. The system according to claim 1 further comprising an illumination device for illuminating the undercarriage of the vehicle.
6. The system according to claim 1 further comprising a computer-readable memory for storing captured images of the vehicle's undercarriage.
7. The system according to claim 1 further comprising a guidance device that conveys visual instructions to the driver of the vehicle.
8. The system according to claim 7, wherein the guidance device comprises an electronic display that instructs the driver of the vehicle how to alter at least one of the speed and alignment of the vehicle relative to the cameras.
9. The system according to claim 1 further comprising a viewing screen that displays images of the vehicle's undercarriage in real-time substantially simultaneously as the images are captured.
10. The system according to claim 9 further comprising a transmitter for transmitting the captured images over a wireless communication link to be displayed by the viewing screen at a remote location away from the first and second camera sets.
11. The system according to claim 1 further comprising a height sensor for measuring the vehicle's ground clearance as the vehicle approaches the first and second camera sets.
12. The system according to claim 11 further comprising means for automatically selecting one of the first and second camera sets to capture images of the undercarriage of the vehicle based on the ground clearance measured by the height sensor.
13. The system according to claim 1 further comprising a control unit with an alphanumeric interface for inputting at least one of a license-plate number and a vehicle-identification number of the vehicle into the system.
14. The system according to claim 13 further comprising a computer-readable memory for storing the at least one of the license-plate number and the vehicle-identification number with the captured images of the vehicle.
15. The system according to claim 1, wherein the one or more cameras in the first camera set each have a first maximum angle of view and the one or more cameras in the second camera set each have a second maximum angle of view that is less than the first maximum angle of view.
16. The system according to claim 1 further comprising a pneumatic lens cleaner for minimizing the presence of debris between the camera sets and the vehicle undercarriage.
17. A vehicle inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location, the system comprising:
a first camera set comprising one or more cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the first camera set;
a second camera set comprising one or more cameras arranged to capture images that collectively span at least a width of a vehicle having a second ground clearance as the vehicle advances over the second camera set, wherein the second ground clearance is greater than the first ground clearance;
a height sensor for sensing the ground clearance of the vehicle; and
means for automatically selecting one of the first and second camera sets to capture images of the undercarriage of the vehicle based on the ground clearance measured by the height sensor.
18. The system according to claim 17, wherein the first and second camera sets are disposed within a portable speed bump comprising a generally horizontal upper surface and two inclined surfaces that each extend to the top surface.
19. The system according to claim 17 further comprising a control unit comprising an alphanumeric interface for inputting at least one of a license-plate number and a vehicle- identification number of the vehicle into the system.
20. The system according to claim 19, wherein the control unit further comprises a camera- selection override to allow an operator to manually override the automatic camera selection.
21. The system according to claim 17, wherein each of the first and second camera sets comprise four cameras arranged in a generally-linear pattern and suitably spaced to capture images that partially overlap.
22. The system according to claim 17 further comprising an illumination device for illuminating the undercarriage of the vehicle.
23. The system according to claim 17 further comprising a computer-readable memory for storing captured images of the vehicle's undercarriage.
24. The system according to claim 17 further comprising a guidance device that conveys visual instructions to the driver of the vehicle.
25. The system according to claim 24, wherein the guidance device comprises an electronic display that instructs the driver how to alter at least one of the speed and alignment of the vehicle relative to the cameras.
26. The system according to claim 17 further comprising a viewing screen that displays the captured images in real-time substantially simultaneously to the capture of the images.
27. The system according to claim 26 further comprising a transmitter for transmitting the captured images over a wireless communication link to be displayed by the viewing screen at a remote location away from the first and second camera sets.
28. The system according to claim 17 further comprising a control unit comprising an alphanumeric interface for inputting at least one of a license-plate number and a vehicle- identification number of the vehicle into the system.
29. The system according to claim 28 further comprising a computer-readable memory for storing the at least one of the license-plate number and the vehicle-identification number associated with the captured images of the vehicle.
30. The system according to claim 17, wherein the cameras in the first camera set each have a first maximum angle of view and the cameras in the second camera set each have a second maximum angle of view that is less than the first maximum angle of view.
31. A vehicle inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location, the system comprising:
one or more cameras arranged to capture images that collectively span at least a width of a vehicle having a first ground clearance as the vehicle advances over the one or more cameras; and
a guidance device to display visual instructions that instruct a driver of the vehicle to alter at least one of the vehicle's speed and approach angle relative to the one or more cameras.
32. The system according to claim 31, wherein the one or more cameras are divided into first and second camera sets disposed within a protective housing, wherein the cameras in each camera set are arranged to capture images that collectively span at least the width of the vehicle.
33. The system according to claim 32, wherein the cameras in the first camera set each have a first maximum angle of view and the cameras in the second camera set each have a second maximum angle of view that is less than the first maximum angle of view.
34. The system according to claim 31 further comprising an illumination device for illuminating the undercarriage of the vehicle.
35. The system according to claim 31 further comprising a viewing screen that displays images of the vehicle's undercarriage in real time as motion video substantially simultaneously with the capture of the images.
36. The system according to claim 35 further comprising a transmitter for transmitting the captured images over a wireless communication link to be displayed by the viewing screen at a remote location away from the one or more cameras.
37. The system according to claim 31 further comprising a control unit with an alphanumeric interface for inputting at least one of a license-plate number and a vehicle-identification number of the vehicle into the system.
38. The system according to claim 37 further comprising a computer-readable memory for storing the at least one of the license-plate number and the vehicle-identification number associated with the captured images of the vehicle.
39. A vehicle-inspection system for capturing images of a vehicle's undercarriage as the vehicle approaches a secure location, the system comprising: one or more cameras for capturing images that collectively span at least a width of a vehicle as the vehicle advances over the cameras; means for electronically assembling images captured by the one or more cameras to form a composite image of the vehicle's undercarriage; a computer-readable memory for electronically storing the composite image; and means for generating a difference image from a comparison of the composite image stored in the computer-readable memory to a current composite image formed from assembling images captured by the one or more cameras subsequent to the formation of the composite image stored in the computer-readable memory.
40. The vehicle-inspection system according to claim 39, wherein a plurality of cameras are divided into a low camera set and a high camera set.
41. The vehicle-inspection system according to claim 39, wherein the means for assembling the images electronically combines at least a portion of images captured sequentially as the vehicle advances over the cameras to generate a lengthwise image of a portion of the vehicle's undercarriage.
42. The vehicle-inspection system according to claim 41, wherein the lengthwise image generated for each camera is positioned adjacent to lengthwise images of adjacent cameras to generate a composite image of the entire vehicle's undercarriage.
43. The vehicle-inspection system according to claim 39, wherein the means for generating a difference image comprises computer-readable logic for subtracting pixel values of the current composite image from the pixel values of the composite image stored in the computer-readable memory.
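Claims 41 through 43 recite two image operations: assembling sequentially captured strips into a lengthwise composite, and generating a difference image by pixel subtraction against a stored composite. The sketch below is an illustrative reading only; the patent specifies no implementation, and the function names, the nested-list image representation, and the use of absolute differences on grayscale values are all assumptions made here for clarity.

```python
# Illustrative sketch of the operations recited in claims 41 and 43.
# Images are modeled as lists of rows of grayscale pixel values (0-255).
# All names here are hypothetical, not taken from the patent.

def stitch_lengthwise(frames):
    """Claim 41: combine strips captured sequentially as the vehicle
    advances over a camera into one lengthwise image, by concatenating
    each frame's rows in capture order."""
    lengthwise = []
    for frame in frames:
        lengthwise.extend(frame)
    return lengthwise

def difference_image(stored, current):
    """Claim 43: per-pixel difference between the stored reference
    composite and the current composite (absolute value assumed, so a
    change in either direction yields a nonzero pixel)."""
    return [
        [abs(c - s) for s, c in zip(row_s, row_c)]
        for row_s, row_c in zip(stored, current)
    ]

# Example: a stored reference scan vs. a later scan in which one
# undercarriage pixel has changed (e.g., an attached object).
ref = stitch_lengthwise([[[10, 10]], [[10, 10]]])
cur = stitch_lengthwise([[[10, 10]], [[10, 200]]])
diff = difference_image(ref, cur)  # nonzero entries flag changed regions
```

Claim 43 literally recites subtracting the current composite's pixel values from the stored composite's; the absolute difference used here is one common variant that flags brightening and darkening alike.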
PCT/US2004/040131 2004-12-01 2004-12-01 Vehicle-undercarriage inspection system WO2006059998A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2004/040131 WO2006059998A1 (en) 2004-12-01 2004-12-01 Vehicle-undercarriage inspection system


Publications (1)

Publication Number Publication Date
WO2006059998A1 true WO2006059998A1 (en) 2006-06-08

Family

ID=36565339

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/040131 WO2006059998A1 (en) 2004-12-01 2004-12-01 Vehicle-undercarriage inspection system

Country Status (1)

Country Link
WO (1) WO2006059998A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2258321A (en) * 1991-07-31 1993-02-03 Morfax Ltd Inspection systems for vehicles using linescan camera and reflector
US6661516B1 (en) * 1998-05-26 2003-12-09 Washtec Holding Gmbh Vehicle treatment installation and operating method
US20040057042A1 (en) * 2002-09-23 2004-03-25 S.T.I. Security Technology Integration Ltd. Inspection system for limited access spaces
US20040165750A1 (en) * 2003-01-07 2004-08-26 Chew Khien Meow David Intelligent vehicle access control system
US20040199785A1 (en) * 2002-08-23 2004-10-07 Pederson John C. Intelligent observation and identification database system


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012103999A1 (en) * 2011-02-03 2012-08-09 Robert Bosch Gmbh Device and method for optically recording the underbody of a vehicle
US9649990B2 (en) 2011-02-03 2017-05-16 Robert Bosch Gmbh Device and method for optically recording the underbody of a vehicle
DE102012209224A1 (en) 2012-05-31 2013-12-05 Robert Bosch Gmbh Device and method for taking pictures of a vehicle underbody
WO2013178460A1 (en) 2012-05-31 2013-12-05 Robert Bosch Gmbh Device and method for recording images of a vehicle underbody
US9580018B2 (en) 2012-05-31 2017-02-28 Robert Bosch Gmbh Device and method for recording images of a vehicle underbody
WO2016054243A2 (en) 2014-09-30 2016-04-07 Ramsey Brent Tactical mobile surveillance system
EP3201595A4 (en) * 2014-09-30 2018-05-02 Black Diamond Xtreme Engineering, Inc. Tactical mobile surveillance system
AU2019226261B2 (en) * 2014-09-30 2021-09-23 Black Diamond Xtreme Engineering, Inc. Tactical mobile surveillance system
WO2020206142A1 (en) * 2019-04-02 2020-10-08 ACV Auctions Inc. Vehicle undercarriage imaging system
US10893213B2 (en) 2019-04-02 2021-01-12 ACV Auctions Inc. Vehicle undercarriage imaging system
US20200322546A1 (en) * 2019-04-02 2020-10-08 ACV Auctions Inc. Vehicle undercarriage imaging system
EP3948206A4 (en) * 2019-04-02 2022-12-21 ACV Auctions Inc. Vehicle undercarriage imaging system
US11770493B2 (en) 2019-04-02 2023-09-26 ACV Auctions Inc. Vehicle undercarriage imaging system
WO2022253332A1 (en) * 2021-06-04 2022-12-08 同方威视技术股份有限公司 Scanning method and apparatus, device, and computer readable storage medium
DE102021116068A1 (en) 2021-06-22 2022-12-22 Zf Cv Systems Global Gmbh Method of inspecting a vehicle and inspection system
WO2022268480A1 (en) 2021-06-22 2022-12-29 Zf Cv Systems Global Gmbh Method for inspecting a vehicle, and inspection system
CN114719768A (en) * 2022-03-31 2022-07-08 东风汽车集团股份有限公司 Method for measuring minimum ground clearance of vehicle
CN114719768B (en) * 2022-03-31 2023-12-29 东风汽车集团股份有限公司 Method for measuring minimum ground clearance of vehicle


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 04812605

Country of ref document: EP

Kind code of ref document: A1