WO2025122199A2 - Visor type camera array systems - Google Patents

Visor type camera array systems

Info

Publication number
WO2025122199A2
WO2025122199A2 PCT/US2024/037826
Authority
WO
WIPO (PCT)
Prior art keywords
camera
cameras
image
parallax
imaging system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/037826
Other languages
French (fr)
Other versions
WO2025122199A3 (en)
Inventor
Andrew F. Kurtz
John Bowron
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Circle Optics Inc
Original Assignee
Circle Optics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Circle Optics Inc
Priority to CN202480043426.7A (publication CN121444469A)
Priority to PCT/US2025/011731 (publication WO2026015170A1)
Publication of WO2025122199A2
Publication of WO2025122199A3
Anticipated expiration
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present disclosure relates to panoramic low-parallax multi-camera capture devices having a plurality of adjacent polygonal cameras arranged in a visor or halo to capture an arced array of images.
  • This disclosure principally relates to the optical and mechanical configurations thereof.
  • panoramic imaging can be provided using a camera with a fisheye lens (e.g., US 4,412,726), or a fisheye lens with an extended field of view (e.g., > 180°, US 3,737,214), or using two fisheye lenses back-to-back (e.g., US 9,019,342).
  • fisheye lenses have low resolution and high distortion, which limits their value for applications requiring real-time situational awareness of activities occurring within a large environment.
  • panoramic multi-camera devices in which a plurality of cameras is arranged around a sphere or a circumference of a sphere, such that adjacent cameras are abutting along a part or the whole of adjacent edges.
  • Commonly assigned US Patent No. 10,341,559 describes the design of low parallax imaging lenses that can be arranged in a dodecahedral geometry to enable panoramic image content capture within a nearly spherical field of view, such as for capturing cinematic or virtual reality (VR) type image content.
  • Commonly assigned patent application Publication No. US 20220357645 describes an approach for opto-mechanically mounting the plurality of cameras into an integrated dodecahedral unit or system. However, this camera system may not be optimized to provide panoramic situational awareness of events occurring at distance.
  • FIG. 1 is a perspective view showing aspects of using a multi-camera system in a Detect and Avoid (DAA) scenario.
  • FIG. 2 is a perspective view of part of a single row, visor type multi-camera system using low-parallax cameras.
  • FIG. 3A is a top view of the part of the single row, visor type multi-camera system using low-parallax cameras of FIG. 2.
  • FIG. 3B depicts a cross-sectional view of an exemplary lens design of the type used in one of the cameras of FIG. 3A in greater detail.
  • FIG. 3C and FIG. 3D depict fields of view captured by adjacent cameras in a multi-camera system using low-parallax cameras.
  • FIG. 3E depicts the concept of a parallax jump between two adjacent camera channels.
  • FIG. 4A and FIG. 4B are exploded perspective views of part of a single row, visor type multi-camera system, depicting the mounting of a low-parallax camera channel onto a cylindrical frame.
  • FIG. 4C is an exploded perspective view of part of a single row, visor type multi-camera system, depicting an alternate mounting of a low-parallax camera channel onto a cylindrical frame.
  • FIG. 5 depicts side and exploded views of a camera channel assembly illustrating a sensor mounting design including athermalization.
  • FIG. 6 depicts a plan view of an example alternate single row, visor type multi-camera system with a closeup of a seam between a pair of adjacent camera channels.
  • FIG. 7 is a perspective view of a portion of a dual row, visor type multi-camera system using low-parallax cameras.
  • FIG. 8 is a perspective view of part of a dual row, visor type multi-camera system using conventional cameras.
  • FIG. 9 is a schematic representation of an object being imaged approaching a boundary between two channels.
  • FIG. 10 is a flowchart illustrating a method for blending images from adjacent cameras.
  • FIG. 11 is a cross-sectional portion of an arced array of cameras, as can be used in the FIG. 8 system.
  • aspects of this disclosure relate to enabling improved air traffic safety for autonomous drones or other types of unmanned aerial vehicles (“UAVs”), and for electric vertical take-off and landing aircraft (eVTOLs, e.g., flying cars) equipped with on-board sensor equipment.
  • drones or VTOLs can be equipped with acoustic, optical or radar sensors, GPS detectors, and/or ADS-B transponders.
  • each of these equipment types has deficiencies, and multiple types are needed to provide redundancy. The potential problems will likely escalate as the diversity and density of air traffic intensifies.
  • FIG. 1 depicts an example of such a system, with an aircraft 150 equipped with a nose-mounted arced multi-camera system 100 to look for potential collision risks, including other or bogey aircraft 160 within a field of view 105.
  • FIG. 2 depicts the visor-type multi-camera system 100 in greater detail, with adjacent cameras 110 mounted to a frame 130 with offset gaps or seams 120, and with the outer truncated lens elements protected by protruding hoods 115.
  • Mechanical gaps or seams 120 span the distance between lens housings, while optical gaps or seams are larger, and span the distance from the coated clear aperture (CA) of one camera to the CA of an adjacent camera.
  • the multi-camera system 100, which is of a type of the present invention, for example, simultaneously monitors a full field of view (FOV), or field of regard (FOR), with approximately a ±100° horizontal FOV and approximately a ±20° vertical FOV.
  • the FOV orientation can be defined relative to the aircraft 150 rather than the environment.
  • the pitch or tilt of the aircraft 150 can change during operation, by plan (e.g., speed), or because of wind conditions.
  • the multi-camera system 100 can then be tilted to compensate.
  • This system 100 enables staring mode detection over the full field of view, versus gimballed camera systems that are dependent on time-sequential scanning camera motion.
  • Such visor-type multi-camera systems 100 can image visible light, infrared (IR) light, or a combination thereof.
  • For visible imaging, either monochrome or color-filtered (e.g., with a Bayer filter) image sensors can be used. The resulting image data can be analyzed for collision avoidance, to enable detect and avoid (DAA) functionality.
  • the image data from the image sensors can be output to an image processor, containing a GPU, FPGA, or SOC, on which algorithms are used to examine an airspace, as sampled by the imaged FOVs from each of the cameras, to look for one or more bogey aircraft 160 or other objects. If a bogey aircraft 160, such as a Cessna 172, is detected, the DAA software is then used to track it within the imaged FOV. This data can then be output to another processor which assesses the current collision risk and determines appropriate collision avoidance maneuvers. That data can then be delivered to an autopilot, a pilot, or a remote operator.
  • the DAA bogey detection software can help simultaneously monitor the FOR of the entire camera system 100, or a given camera’s FOV in entirety, or subsets thereof, using iterative windowing.
  • windowing to scan over a camera’s full FOV to look for something new at reduced frame rate (e.g., 1-10 fps) can be valuable.
  • a potential bogey can be adaptively tracked at an increased frame rate (e.g., 30-60 fps) using a lightweight non-sophisticated program to look for changes in lighting, position, attitude, and/or orientation over time.
  • This software can also track multiple objects at once within the FOV of a single camera 110 or camera channel, or within the FOV of multiple cameras (FIGS. 1 and 2).
  • DAA software can include algorithms to recognize or classify objects, with priority being directed at the fastest or closest bogeys over others.
  • Bogey range estimation can then be enabled by bogey recognition, stereo camera detection, LIDAR scanning, or radar.
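The dual-rate behavior described above, a slow full-FOV scan for new objects plus a fast lightweight tracker per detection, can be sketched in a few lines. The windowed change detector and centroid tracker below are illustrative stand-ins; all function names, window sizes, and thresholds are hypothetical, not the DAA software of this disclosure:

```python
import numpy as np

def frame_differences(prev, curr, threshold=25):
    """Boolean mask of pixels that changed significantly between two
    grayscale frames (a lightweight, non-sophisticated change detector)."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > threshold

def scan_for_bogeys(prev, curr, window=64, min_changed=20):
    """Slow full-FOV scan (e.g., 1-10 fps): slide a coarse window over
    the frame and flag windows with enough changed pixels as candidates."""
    mask = frame_differences(prev, curr)
    hits = []
    for y in range(0, mask.shape[0], window):
        for x in range(0, mask.shape[1], window):
            if mask[y:y+window, x:x+window].sum() >= min_changed:
                hits.append((y, x, window, window))  # candidate ROI
    return hits

def track_roi(prev, curr, roi, pad=8):
    """Fast per-ROI update (e.g., 30-60 fps): re-center the ROI on the
    centroid of changed pixels inside a slightly padded search region."""
    y, x, h, w = roi
    y0, x0 = max(0, y - pad), max(0, x - pad)
    patch = frame_differences(prev[y0:y0+h+2*pad, x0:x0+w+2*pad],
                              curr[y0:y0+h+2*pad, x0:x0+w+2*pad])
    ys, xs = np.nonzero(patch)
    if len(ys) == 0:
        return roi  # no motion seen; keep the previous ROI
    cy, cx = int(ys.mean()), int(xs.mean())
    return (y0 + cy - h // 2, x0 + cx - w // 2, h, w)

# Synthetic example: a bright 8x8 "bogey" moves 4 pixels between frames.
prev = np.zeros((256, 256), dtype=np.uint8)
curr = np.zeros((256, 256), dtype=np.uint8)
prev[100:108, 100:108] = 255
curr[100:108, 104:112] = 255
detections = scan_for_bogeys(prev, curr)
```

The same ROI update can be run independently for several detections at once, which mirrors the multi-object tracking described above.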
  • a bogey 160 can be detected using a tracking window or region of interest (ROI) or instantaneous FOV (IFOV) that can be modestly bigger than the captured image of the bogey, but which is much smaller than a camera channel’s full FOV.
  • This disclosure relates to alternate and improved lens system designs and architectures for enabling visor type camera systems 100 that can provide improved detect and avoid, sense and track, search and track, navigation, and/or other functionality.
  • the cameras 110 include lenses that are generally designed to limit parallax or perspective errors optically and opto-mechanically, as is described in commonly assigned lens design related patent applications US 20220252848 and WO2022173515, or improvements thereof.
  • These camera lenses are used in the cameras 110 of multi-camera systems 100, in which the outer lens elements are typically truncated along polygonal edges, so that the camera channels 110 can be mounted in close proximity with narrow intervening gaps or seams 120, and the lens design method controls image light ray behavior (e.g., parallax or perspective) for chief rays along the polygonal lens edges.
  • these prior commonly assigned disclosures describe lens design methods and exemplary lens designs, in which the paraxial entrance pupil, and non-paraxial variations thereof, are positioned behind or beyond the image plane.
  • lenses can be characterized by many specifications, including focal length, field of view (FOV), image quality (e.g., MTF), bandwidth (e.g., for use with visible light).
  • the location of the entrance pupil can be found by identifying a paraxial chief ray from object space that transits through the center of the aperture stop, and projecting or extending its object-space vectorial direction forward to the location where it crosses the optical axis of the lens system.
  • a paraxial chief ray enters a lens at an off-axis tilt that is only modestly offset (e.g., ≤ 7-10°) from the optical axis, whereas non-paraxial chief rays are typically incident to the first lens element at much higher angles (e.g., 20°, 40°, or even 90°).
  • the entrance pupil is located in the front third of the lens system, and its location, and the differences in where the paraxial and non-paraxial ray projections cross the optic axis, are not specified or analyzed, and are substantially irrelevant to the lens design.
  • a fisheye lens system is a partial exception, as the lenses can be designed to control distortion over a large FOV (e.g., ±90°), without direct control of entrance pupil aberrations, although there are fisheye lens design approaches which optimize the pupil aberrations directly. In either case, in fisheye lenses, it is generally recognized that the “entrance pupil” position varies widely, near the front of the lens system, for the chief ray angles versus FOV.
  • optimization of the entrance pupil in the rear of the lens, and particularly behind or beyond the image plane is a deliberate goal, so as to control parallax or perspective difference versus chief ray angle over a FOV.
  • the lens design goals to control parallax can be realized by optimizing the lens system using chief rays’ constraints, pupil spherical aberration (PSA or PSA sum), or spherochromatic pupil aberration (SCPA) terms in the merit function within the lens design program (e.g., Code V).
  • optimization priority will be directed to a range of off axis chief rays that lie along or near truncated polygonal lens edges of the outermost lens element, or front lens or compressor lens.
  • the chief rays along a truncated polygonal lens edge can span a range of 31.7-37.4°.
  • the typical low-parallax lens design is optimized to control parallax for maximum FOV angles in the ±20° to ±40° angular range, although designs with larger or smaller angles are possible. Residual parallax or perspective errors can be tracked in various ways, including as an angular difference from the nominal geometric angle, or as a fractional difference in image pixels.
  • These low-parallax lenses can also be designed to control front color, which is a residual color shifting of the chief rays for a given chief ray field angle. Color variable cropping or vignetting of the edge of field chief rays through the lens system can then cause a rainbow color artifact at the image edges that are projected onto the image plane or image sensor.
  • the truncated outer lens element also acts as a “fuzzy” field stop, such that the image within a core FOV underfills the image sensor.
  • field stops are located at the image plane or at images of the image plane, but the front lens of these cameras 110 is neither of these.
  • the size of the fuzziness is related to the entrance pupil diameter and the degree of overlap in beam footprints between nearby fields.
  • the “fuzziness” is also influenced by the residual front color (which is color dependent overlap) and the truncation of the lens edges, which are not as sharp as the edges that can be defined in a typical black sheet metal mask used at or near an image plane.
  • ideally, the entrance pupil location would coincide with the device center, and the mechanical gap or seam between adjacent camera channels would be effectively zero.
  • This can be difficult to realize, particularly in multi-camera systems where the lens elements of the respective camera channels are mounted in lens housings with a finite thickness, which in turn causes a real or finite gap or seam between adjacent cameras, and thus a real offset between the entrance pupil location and the device center.
  • the real offset of the entrance pupil to the device center can be relatively small (e.g., 2-5 mm).
  • a system constructed with a faceted dome or faceted arc of integrated front lens elements provides one approach for reducing both the gap between adjacent camera channels and the entrance pupil to device center offset distance.
  • the gaps or seams between adjacent camera channels can be optically masked by designing the lens systems with some extra or extended FOV (e.g., XFOV of ~0.3-1.0°) per side or gap.
  • Adding XFOV can help compensate for camera channel alignment tolerances, support camera calibration and image blending or tiling operations, and reduce blind regions in front of the camera system.
  • adding or increasing allowed XFOV and gaps between channels both increase the distance from the device center to the center of perspective (COP).
  • the aforementioned commonly assigned lens design applications also detail the design differences that can occur at or near the projected entrance pupil location, for the off-axis chief rays, relative to a nominal paraxial ray entrance pupil.
  • Several terms including the no-parallax point (NP point), center of perspective (COP), and low-parallax smudge (LP smudge) are used to describe, or provide context to, these differences.
  • the residual low parallax smudge for the chief rays transiting along a polygonal lens edge of the outer or front lens element, when projected from that front lens surface, can be locally offset from the paraxial entrance pupil location by 1-2 mm.
  • the location or offset distance of the LP smudge for the chief rays along a polygonal front lens edge can be optimized relative to the image plane or the device center, instead of using the paraxial entrance pupil position in such a metric.
  • the LP smudge is a measurement of the variation between the paraxial entrance pupil position and the pupil position(s) for one or more non-paraxial chief rays. It can be measured as a longitudinal distance difference or length 275 along the optical axis 230 (see FIG. 3E), or as an area or volume that encompasses the chief ray wander out of plane.
  • Parallax errors versus field and color can also be analyzed using calculations of the Center of Perspective (COP), which is a parameter that is more directly relatable to visible image artifacts than is a low parallax volume, and which can be evaluated in image pixel errors or differences for imaging objects at two different distances from a camera system.
  • the center of perspective error is essentially the change in a chief ray trajectory given multiple object distances - such as for an object at a close distance (3 ft), versus another at “infinity.”
  • the COP can be estimated as being at a location or position within the LP smudge.
  • COP location distance differences or COP jump between two adjacent cameras that are viewing an overlapping FOV can also be analyzed to assess parallax viewing differences between the cameras.
  • the offset for the paraxial entrance pupil or the non-paraxial LP smudge from the device center can also depend on the application.
  • the camera system may be expected to provide in-focus images of objects that are only 3-4 feet away, while the maximum in-focus imaging distance may be on the order of only about 500 ft.
  • the gaps or seams between camera channels and the nominal offset of the entrance pupil or non-paraxial LP smudge to the device center both tend to be modest (e.g., ≤ 6 mm), to help control the blind regions in front of the camera system to be less than the minimum imaging distance.
  • it can be good for the COP to be far from the front vertex and near or behind the image sensor because it enables several things, including compact mechanical packaging with small optical and mechanical gaps.
  • the COP can be much closer to the device center.
  • FIG. 2, which depicts aspects of the present invention, illustrates a multi-camera system 100 with 7 low-parallax cameras 110 mounted and arranged in an arc to form a visor.
  • this type of system can enable enhanced situational awareness or safety for air or ground vehicles for “long” distance imaging with a range of several miles.
  • the camera channels 110 are independently mounted on a cylindrical frame 130 such that a portion of a camera 110 is inserted into the frame 130 and interfaces with the outer surface of the cylinder.
  • the primary function of the cylindrical frame is to accurately locate each camera channel, but it also serves as an enclosure for the electronics and, in some cases, as a heat sink for electronics cooling.
  • Multi-camera system 100 can take on similar embodiments consisting of multi-camera capture devices with a plurality of cameras arranged in a circular or polygonal shape.
  • Multi-camera system 100 can also include covers or lids (not shown) on the top and bottom, which can help seal the system from internal contamination, and further include features for external mounting or to aid internal thermal control (e.g., heat sinks).
  • Cameras 110 can image FOV cross-sections that are nominally polygonal (e.g., rectangular or square) cones of collected and imaged incident light.
  • corresponding polygonal outer lens element truncation is only on two opposing sides.
  • the example cameras 110 each have about a 50 mm wide clear aperture and image rectangular FOVs, with ~15 mm wide seams or gaps between them.
  • the cylindrical frame 130 has a diameter of ~200 mm. The inward tapering of the lens elements and supporting lens housings into a conical or frustum shape enables the camera channels to be closely packed together with narrow intervening gaps or seams 120.
  • the overall system can have a high fill factor, with for example, the ratio of the summed camera apertures (truncated size of the outer lens element) to the arced system shape > 85%.
  • the individual camera channels are shown as having protruding hoods 115, to protect against direct solar exposure, internal ghosting, or contact by external debris.
  • the camera channels can also include clear protective shields or windows (not shown).
  • the multi-camera system 100 of FIG. 2 and FIG. 3A can be used for other applications beyond DAA or sense and avoid.
  • the output image data can also be used for navigation or inspection applications.
  • the system can also include an arc of one or more cameras that point or image with a downward tilt.
  • these adjacent camera channels 110 are opto-mechanically mounted onto frame 130 in proximity, to maintain nominal FOV parallelism across the intervening seams 120 so as to retain the optical benefits of low-parallax control between adjacent cameras 110.
  • these cameras can image several miles distant, while also supporting a minimum in-focus imaging distance of only 50-100 feet away.
  • the individual cameras can use Teledyne 36M image sensors and provide an imaging resolution of 2-3 feet width per image pixel, at a distance of 3-5 miles out, which can be sufficient to detect a Cessna airplane.
  • the imaging software can then position a digital ROI around the detected bogey, to enable tracking over time, with an increased relative resolution and data or frame rate compared to the surrounding bogey-free image areas.
  • the cameras can provide an extended FOV (e.g., XFOV ≤ 1°), including an angular FOV overlap 107 that can span both a mechanical seam 120, and a larger optical gap or seam 118 between the lens clear apertures. With this XFOV or limited FOV margin, the blind regions can be less than this minimum in-focus imaging distance. This limited XFOV both provides tolerance for camera-to-camera alignment and enables camera calibration and smooth image tiling or blending.
  • a low computational burden image blending software process can be applied during the image tiling to smooth out calibration differences that can cause image artifacts at the edges or gaps between adjacent lenses.
  • the blending can be applied dynamically or selectively in time, such as when a tracked bogey, or its associated ROI, are crossing from the FOV of one camera 110 to that of an adjacent camera 110.
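A minimal sketch of such low-burden blending, using a linear feather ramp across the overlap strip (the ramp shape and strip sizes are illustrative; a real tiling pipeline would also handle geometric registration):

```python
import numpy as np

def feather_blend(left_strip, right_strip):
    """Blend the overlapping image strips from two adjacent cameras
    with a linear ramp: full weight to the left camera at the left edge
    of the overlap, full weight to the right camera at the right edge.
    Strips must have the same shape (H x W)."""
    h, w = left_strip.shape[:2]
    alpha = np.linspace(1.0, 0.0, w)           # left-camera weight per column
    return left_strip * alpha + right_strip * (1.0 - alpha)

# Two flat strips differing by a 10 gray-level calibration offset blend
# into a smooth ramp, with no abrupt seam line at either edge.
left = np.full((4, 8), 100.0)
right = np.full((4, 8), 110.0)
blended = feather_blend(left, right)
```

Because the ramp is precomputed per column, the per-frame cost is one multiply-add per overlap pixel, in keeping with the "low computational burden" requirement.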
  • FIG. 3A provides a cross-sectional view of a visor type multi-camera imaging system 100 of the type of FIG. 2, with more optical detail, but less mechanical detail, shown.
  • the cameras 110 generally comprise lenses 140, which image incident light onto an image sensor (not shown).
  • the lenses 140 include a front compressor lens group 142, which may have 1 to 3 lens elements, including at least an outer lens element that is truncated into a polygonal shape.
  • the lens systems then further comprise a multi-lens element wide-angle group 144, which includes lens elements both before and after the aperture stop, and which is provided prior to the image plane 146 and associated image sensor.
  • FIG. 3A also depicts the nominally conical or frustum shape of the cameras 110 and lenses 140, although the enclosing lens housings are not shown.
  • the taper angle 135 is in the range of 13-18 degrees. This tapering of the lens housings enables the camera channels to be collocated in close proximity with narrow intervening seams 120.
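The taper angle follows roughly from dividing the field of regard among the channels; with the illustrative numbers below (7 channels tiling ~±100°, as in FIG. 2), the half-angle lands inside the quoted 13-18° range:

```python
# If N adjacent channels tile a total horizontal field of regard, each
# channel's FOV, and hence roughly the half-angle of its frustum-shaped
# housing, follows directly. Numbers are illustrative assumptions.
n_channels = 7
total_hfov_deg = 200.0                             # ~plus/minus 100 degrees
per_channel_fov = total_hfov_deg / n_channels      # FOV per camera channel
taper_half_angle = per_channel_fov / 2.0           # frustum half-angle
```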
  • FIG. 2 depicts the cameras 110 with outer lens elements that are horizontally truncated to enable narrow seams, but remain rounded vertically.
  • FIG. 3A also depicts a pair of adjacent lenses receiving nominally parallel incident chief rays 145.
  • the optical gap 118 can be measured as an angular or distance difference between the chief rays of the adjacent channels, or as a nominally smaller distance between the coated clear apertures of the two lenses.
  • FIG. 3A depicts an axial or ΔZ offset distance 122 between the nominal device center 126 and the paraxial entrance pupil 124 of a camera channel 110.
  • the imaging lenses are designed with longer focal lengths and higher magnifications than for near imaging applications (e.g., cinema and VR).
  • the blind regions, gaps or seams 120 between adjacent camera channels, and the offset distances 122 of the paraxial or non-paraxial “entrance pupils” 124 to the device center 126 can likewise be larger, while keeping the same limited XFOV of ~0.5-1.0° per camera channel.
  • the nominal axial offset distance 122 between the device center 126 and the paraxial entrance pupil 124 can be 30-70 mm, while the axial offset between the paraxial entrance pupil and the non-paraxial LP smudge or COP can be design optimized to still be small (e.g., ≤ 2 mm).
  • the offset distance 122 can also be measured as a distance ΔZ between the device center 126 and the COP of a camera channel.
  • the lenses 140 in the camera channels 110 can be designed differently as compared to the lenses for near-imaging applications where the offset distance is small (e.g., ≤ 5 mm).
  • parallax or perspective can still be adequately optimized even if the entrance pupil or nearby LP smudge (or NP-point or COP) is located at or near the image plane 146 or even modestly in front of the image plane (see FIG. 3B).
  • a low parallax lens system with a focal length of ~8 mm and a track length from the front lens vertex to the image plane of ~60 mm was designed, for which the NP point was acceptably optimized using the PSA sum method to a location at about 10% of the 60 mm lens length in front of the image plane.
  • FIG. 3B depicts a cross-sectional view of an example lens 240 with 11 optical elements designed for longer distance imaging, and that can be used in one of the cameras 110.
  • Lens 240 comprises a compressor lens group 242 and a wide-angle lens group 244 that images light (e.g., ray bundles 212) to an image plane 246.
  • This example lens includes 10 lens elements and a window, which can be a UV or IR cut filter substrate. Projections of chief rays within the light ray bundles 212 are projected towards the entrance pupil 224, with the actual projected locations varying amongst both the paraxial and non-paraxial chief rays (e.g., the LP smudge).
  • FIG. 3C depicts a 2-D illustration of core FOVs 262 (for two adjacent channels) as areas in front of each camera 240 that are respectively imaged onto an image sensor. Three distances from the camera are shown, and the imaged area of a scene grows as the distance from the multi-camera system increases.
  • the optical gap 118 remains constant because the chief rays are parallel at the boundary. The imaged regions are aligned to the optic axis of the left channel. These imaged regions are not vignetted. There is a larger surrounding region with vignetting, which would typically be cropped from the captured data.
  • FIG. 3D then provides a 2-D illustration of imaged fields of view for two adjacent cameras 240, which are larger, as they further include XFOV’s 264.
  • the imaged FOV’s are again shown as an area in front of each camera, that is then imaged onto a sensor. Three distances from the camera are shown, and again the imaged area grows as the distance increases. The imaged regions are aligned to the optic axis of the left channel.
  • the cameras are nominally designed to prevent vignetting within the XFOV’s 264.
  • the edges of the FOVs 260 nominally correspond to both the truncated lens edges and to the edges of active pixels on the image sensor array, with parallax optimized for nominal chief ray parallelism along the truncated lens edges.
  • Two adjacent cameras can be mounted onto a supporting frame with these edge chief rays of the two adjacent cameras being nominally parallel to each other. This configuration essentially extends the blind regions to infinity.
  • FOV 260 can correspond to a Core FOV 262, which can be defined as the largest low parallax field of view that a given real camera lens 240 can image. Equivalently, the core FOV 262 can be defined as the sub-FOV of a camera channel whose boundaries are nominally parallel to the boundaries of its polygonal cone (see FIGS. 4A and 4B).
  • the nominal Core FOV 262 approaches or matches an ideal FOV, where adjacent Core FOVs meet with negligible intervening gaps.
  • some extended FOV is needed so the cameras can be less than perfectly aligned, while images are still collected. Additional extended FOV can be needed to enable geometric camera channel calibration (e.g., intrinsics and extrinsics).
  • the extrinsic parameters represent the locations of each of the cameras in the 3-D scene.
  • the intrinsic parameters represent the optical center and focal lengths for each of the individual cameras.
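In the standard pinhole model, these parameters take the familiar form of a 3x3 intrinsic matrix plus a rotation and translation per channel. The sketch below is generic computer-vision convention, not this disclosure's calibration method, and the numeric values are invented:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Standard pinhole intrinsic matrix: focal lengths (in pixels)
    and optical center (in pixels) for one camera channel."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(K, R, t, point_3d):
    """Project a 3-D scene point into pixel coordinates using one
    channel's intrinsics K and extrinsics (rotation R, translation t)."""
    p_cam = R @ point_3d + t          # world frame -> camera frame
    u, v, w = K @ p_cam               # camera frame -> homogeneous pixels
    return np.array([u / w, v / w])

# Illustrative: identity pose, point 100 m ahead and 1 m to the side.
K = intrinsic_matrix(8000.0, 8000.0, 2048.0, 2560.0)
px = project(K, np.eye(3), np.zeros(3), np.array([1.0, 0.0, 100.0]))
```

Calibrating each channel's (K, R, t) against the others is what lets the tiling software place each camera's image correctly in the shared field of regard.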
  • Including XFOV 264 can be accomplished by having the Core FOV correspond to an area of active image pixels on the image sensor that underfills the sensor, while leaving an outer boundary area of pixels for the extended FOV.
  • an image sensor may have 4096 x 5120 active pixels, and a smaller portion, such as 3800 x 4800 pixels, may correspond to a Core FOV, leaving an outer boundary on all four sides of ~150 pixels width.
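  • the underfilled-sensor pixel budgeting above can be sketched as follows (illustrative only; the values are the example figures above):

```python
# Sketch of the underfilled-sensor pixel budgeting described above, using the
# example sensor (4096 x 5120 active pixels) and Core FOV (3800 x 4800 pixels).

def xfov_border(active, core):
    """Per-side pixel border left for the extended FOV, in each dimension."""
    return tuple((a - c) // 2 for a, c in zip(active, core))

border = xfov_border((4096, 5120), (3800, 4800))  # ~150 px per side: (148, 160)
```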
  • FIG. 3D illustrates cross-sections of polygonal fields of view for two adjacent camera channels, with some field of view overlap.
  • Each camera channel has a Core FOV 262 that corresponds to an optimally parallax optimized field of view, and the pair of Core FOVs remain parallel to one another when projected out into object space (e.g., the environment or scene).
  • Each camera channel also supports a larger extended FOV 264; these then overlap, thereby limiting the extent of blind regions in front of the camera, and providing margin for both mis-alignments (e.g., offsets and tilts) and camera calibration.
  • the associated XFOVs 264 may fully or partially overlap.
  • parallax lens design optimization is targeted at chief ray alignment to defined edge boundaries of a Core FOV 262
  • the residual parallax or center of perspective differences within an XFOV 264 of a lens 240 are typically still small (e.g., ≤1 pixel).
  • FIG. 3E depicts the concept of a parallax jump between two adjacent camera channels 240.
  • the previously described LP-smudge is conceptually illustrated here as an elliptical volume of LP smudge length 275, in which projections of both paraxial and non-paraxial chief rays cross the optical axis 230.
  • the COP has been described, as well as an exemplary location within an LP smudge where the parallax within a single camera channel 240 is minimized.
  • the image plane is offset from the device center 226 by a distance 277, and the COP is within the LP smudge length 275 and can be offset from the image plane by distance 229.
  • the two COPs are separated from each other by a distance 270 (e.g., causing the parallax jump).
  • Lens design methods have been described that enable control or optimization of the size and positioning of an LP-smudge for monoscopic or single channel parallax.
  • each camera channel 240 has its own LP-smudge and its own COP. But in an integrated multi-camera system, adjacent cameras 240 are offset by a seam of a finite width, and a modest FOV overlap (FIG. 3D) can be included to reduce the extent of the blind regions that correspond to the seams, resulting in a modest parallax difference in the overlap regions of the adjacent channels. As depicted in FIG. 3E, the COPs of the adjacent channels are separated from each other by a COP separation 270, whose distance or extent can be impacted by system design constraints such as the size of the image sensor package 247. These system constraints are generally “fixed” by application requirements.
  • the location of the entrance pupil 224 or the LP smudge position and width are parameters that can be controlled to modify the COP separation 270.
  • the entrance pupil and COP may not be in proximity.
  • the entrance pupil and COP can be optimized in deliberate proximity. For many applications, it is desirable to position the entrance pupil behind the image plane and close to the device center 226, as this limits the amount of parallax that occurs at the boundary of two channels, by decreasing the physical separation 270 of adjacent COPs. But then the lens design can be burdened with increased lens length, diameter, weight, and cost. Appropriate trade-offs must be made for each application to determine how these should be balanced.
  • a low parallax lens design can prioritize or optimize the location of the entrance pupil 224 or the LP smudge position relative to the device center, as well as the LP smudge size. If the intervening camera-to-camera gap, and particularly the optical gap, is of significant size (which, for systems imaging only 10's of feet away, may be a few mm), then a blind region of missing image content can be noticed. But increasing the device center to entrance pupil distance can provide a modest COP separation and FOV increase to cover an extended field of view and shrink the blind region. This may not be an issue if the "critical feature size" is larger than the camera channel separation.
  • When a small XFOV 264 is included in the design, the chief rays within that XFOV converge at a finite distance in front of the cameras. This is the maximum distance where information may be hidden from the cameras (blind). It can also be desirable to have a small XFOV or overlap in these systems, to provide a FOV budget for opto-mechanical tolerance and camera calibration and image blending.
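  • this convergence geometry can be sketched as follows; the seam width and extended half-angle below are hypothetical values, not figures from the disclosure:

```python
import math

# Illustrative sketch: two adjacent cameras whose core FOV edges are parallel,
# separated by a seam, each adding a small extended half-angle at its edge.
# The overlapping edge chief rays then converge at a finite distance, beyond
# which no scene content is blind.

def blind_distance(seam_width_m, xfov_half_deg):
    return seam_width_m / (2.0 * math.tan(math.radians(xfov_half_deg)))

d = blind_distance(0.004, 1.0)  # ~0.11 m for a 4 mm seam and 1 degree XFOV edge
```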
  • the distance between chief rays is principally controlled by the entrance pupil (LP smudge) optimization for location and extent over a camera FOV, including both the Core FOV 262 and the XFOV 264.
  • cinema type multi-camera systems with low-parallax camera lenses image close, and a ratio of the device center to EP distance / LP smudge length can be small (e.g., ~2:1).
  • lens parallax contributions from direct parallax optimization (chief rays, PSA) and from COP separation 270 can be comparable.
  • the offset distance 122 can be several 10’s of millimeters, and the entrance pupil can be located near to, or even in front of the image plane.
  • this case is depicted in FIG. 3E for two cameras with their COPs located more distant from the device center.
  • the ratio of the device center to EP distance / LP smudge length can be comparatively large (e.g., ~20:1).
  • COP separation 270 can be allowed to increase and image aberration correction can be given proportionally more priority over reducing the PSA sum to limit parallax, by optimizing the lens with less relative weighting for the PSA_sum value in the lens merit function.
  • having the entrance pupil or COP near or in front of the image plane or image sensor can helpfully make the overall lens and lens housing shorter and weigh less than similar lenses with the entrance pupil located closer to the device center.
  • FIG. 4A depicts an outside exploded view, and FIG. 4B an inside exploded view, respectively, of a single camera channel 340 separated from the cylindrical frame 330, to illustrate an example of how the channel and frame can interface.
  • Each camera 340 can contain a low parallax lens system 240 of the type of FIG. 3B.
  • these figures depict portions of a multi-camera system 300 to illustrate aspects of kinematic mounting of the individual low-parallax camera channels 340 around an arced portion of a cylindrical frame 330.
  • FIGS. 4A and 4B depict the cameras 340 with outer lens elements that are horizontally truncated to enable narrow seams, but which remain rounded vertically.
  • system 300 can have lids or covers (not shown) that include mounting or thermal control features (e.g., fins).
  • frame 330 has a cylindrical shape, but the frame can instead have a polygonal cylindrical shape (e.g., be octagonal) with congruent rectangular side faces, onto which the cameras can be kinematically mounted.
  • the upper and lower covers or lids would nominally provide two parallel polygonal faces with a matching polygonal shape.
  • the inner portion of the camera housing 345 locally has a square, rather than tapered, shape.
  • This inner square portion of housing 345 nominally contains the lens elements of the wide-angle lens group and includes mounting features to interface with the image sensor board 347.
  • the camera housing 345 is inserted into a nominally square opening or slot 335 in frame 330.
  • a pair of shaped vee pins on the underside of the housing 345 are used to create a vee block 350, which registers to a ball feature 354 when the housing 345 is inserted into the slot 335.
  • FIG. 4C depicts a close-up exploded 3D view of an alternate construction for a single row, visor type multi-camera system, depicting the mounting of a channel 340 onto a cylindrical frame 330.
  • the outer portions of the camera channels 340 are truncated in both the horizontal and vertical directions.
  • the inner portion of the housing 345 has a nominally circular cross-section that interfaces with a nominally circular slot 335 on frame 330.
  • the vee-shaped features built into the channel housing 345 interface with the outer diameter of the frame 330, locking in all but two degrees of freedom (e.g., translation along the z axis, and rotation about the x, y, and z axes). These remaining degrees of freedom (e.g., translation along the y and z axes) are eliminated using a small vee-block 350 on the underside of housing 345 that docks to a ball feature 354 mounted on the frame 330. Mounting hardware (screws) and compression springs 356 are employed to supply the necessary vertical and horizontal nesting forces.
  • kinematic mounting elements are components that form a simple device providing a connection between two objects, typically amounting to six local contact areas (i.e., exact constraint). These contact areas are usually configured by combining classic kinematic or exact constraint mechanical elements such as balls, cylinders, vees, tetrahedrons, cones, and flats.
  • the accompanying nesting or holding forces are supplied by springs or spring pins, but a variety of mechanisms can be used to provide dynamic loading forces including the springs, spring or vlier pins, or flexures, magnets, elastics, or adhesives to support mounting and alignment of the cameras to the cylindrical frame.
  • Frame 330 can also include features (not shown) to help maintain the rigidity, shape, and structural integrity of the frame, relative to withstanding external loads, vibrations, or shocks.
  • the alignment tolerances for the camera channels 340 to the frame 330 should be minimized to ensure sufficient extended field of view remains for camera calibration and image blending or tiling operations.
  • mounting stresses from changing environmental conditions such as from temperature, shock, or vibration, can affect channel alignment or pointing accuracy. Achieving the required precision often necessitates the use of exact-constraints methodology, employing kinematic components such as vee and ball features to accurately position the channels.
  • Kinematic mounting not only contributes to repeatable mounting but also minimizes stresses from thermal expansion and vibration.
  • each individual camera channel 340 can respond to and compensate for external loads, and return consistent positions as the system 300 experiences temperature changes while nominally maintaining previously exhibited mounting or assembly precision.
  • positional and rotational variation of each individual camera channel 340 of less than 75 µm and 0.07 degrees, respectively, can be achieved with average milling operations and commercially available kinematic components.
  • a camera channel 340 is mounted with spring-loaded hardware, where the springs 356 are selected to be strong enough to guide and nest the kinematic components together. These can be rigid enough to hold a camera channel 340 in the presence of shock and vibration.
  • the mounting mechanisms can be designed to withstand the residual vibration from a small multi-engine fixed-wing aircraft (maximum 0.1-inch peak-to-peak amplitude between 5 and 62 Hz).
  • the springs 356 or spring pins 360 allow the channels to re-nest if jostled by an unexpected shock event (e.g., 6 to 18 Gs). This functionality can be aided by lubrication between the kinematic components.
  • partial use of kinematic components can be based on the requirements of the multi-camera system 300, such as systems requiring more rigidity or less positional accuracy.
  • a system can have a camera channel’s cylindrical body or housing 345 positioned into a pilot hole or slot 335 in the frame 330 and simply fastened with screws. The screws or an additional pin can be used to set the orientation. That system can be more rigid but also have larger positioning variation due to the clearances between the camera channel’s housing 345, screws, or pins, and their corresponding holes.
  • the multi-camera system 300 can also include active or passive isolation (not shown) at the mounting to the vehicle or fixture that the system is mounted to, to reduce the impact of operational shocks or vibrations.
  • ambient vibration or shock stimulus can originate with rotors, jet engines, or other propulsion means, or from the impact of temperature changes, air or wind turbulence, or from take-off or landing events.
  • This isolation can substantially reduce the transfer of shock and vibration at the system mounting interfaces, and the kinematic features can then reduce the impact of the residual environmental loads that reach the system 300.
  • the use of plastic lens elements within a lens 240 also increases camera channel (340) sensitivity to external temperature changes, leading to thermal defocus.
  • Thermal defocus, predominantly caused by the materials of the lens barrel or housing 345 (e.g., aluminum), can significantly affect optical performance.
  • Athermalization can be at least partially achieved by using materials with a more favorable coefficient of thermal expansion (CTE) or by replacing a portion of the aluminum housing 345 with a material that has an opposite or negative CTE.
  • FIG. 5 depicts a lens barrel, or housing 345 with a taper angle 346 in two views, assembled and disassembled.
  • the image sensor board 347 is bonded to a plate 367 that is mounted to a structural composite material 365 with a negative CTE to compensate for the optical changes.
  • a compensating thermal defocus motion can be achieved.
  • Allvar, a negative CTE composite structured material from Allvar Alloys Inc., can be used.
  • This material can be configured in different ways, for example as plates or pins, in providing a compensating thermally sensitive motion.
  • the optimized lens can exhibit only a few microns of residual thermal defocus across a temperature range of -15°C to +55°C.
  • precise alignment of the image sensor orientation to the housing 345 or camera channel 340 can also be required due to the inward tapering of the lens housings, and the lens elements therein, that enables the camera channels 340 to be closely packed together about frame 330.
  • the rectangular-shaped image sensor must align with respect to the truncated sides of an outer lens element 243 to ensure that the entirety of the square or rectangular image formed falls within the active pixel region of the image sensor.
  • the image sensor orientation is not as critical, and the image sensor can be mounted to within several degrees accuracy, and the cameras can then be rotated in a mount or frame to parallel align the image capture of adjacent cameras to each other.
  • a given image sensor can be 2,160 pixels wide, with 1,740 pixels used to capture the image. This leaves 420 pixels, or 210 pixels on either side, to extend the field of view.
  • the multi-camera system (100, 300) can be designed so that these 210 pixels, which represent 1.5 degrees of the entire FOV, overlap with the adjacent channels’ XFOV (264).
  • This overlap FOV 266 is used for calibration, establishing camera boundaries, and absorbing errors from manufacturing tolerances, such as camera channel alignment and sensor alignment variations. For a pixel size of 2 µm, the image sensor would have to shift 210 µm or rotate 1.28 degrees before falling outside of the image FOV, assuming no other sources of error.
  • the alignment of the image sensor (on board 347) with respect to the camera channel 340 must be much more precise so that, when combined with other errors, there is still enough XFOV for the software functions.
  • a sample budget allocation of the XFOV’s 210 pixels might be no more than 9 pixels for part and assembly tolerances, 57 pixels for sensor alignment errors, and 35 pixels for extrinsic software calibration and camera boundary creation.
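  • the sample XFOV pixel budget above can be sketched as follows (the sensor width, core width, and allocations are the example figures from the text):

```python
# Sketch of the sample XFOV pixel budget described above (2,160 px sensor width,
# 1,740 px for the Core FOV; allocations are the sample figures from the text).

sensor_px, core_px = 2160, 1740
per_side_margin = (sensor_px - core_px) // 2   # 210 px per side

budget = {
    "part_and_assembly_tolerances": 9,
    "sensor_alignment_errors": 57,
    "extrinsic_calibration_and_boundaries": 35,
}
allocated = sum(budget.values())               # 101 px
remaining = per_side_margin - allocated        # 109 px of unallocated margin
```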
  • the fixture controls the relative positioning of an image sensor to a camera channel in six degrees of freedom by leveraging the precise kinematic features of a camera channel 340 that are used to mount a camera channel to the frame, to first mount the camera channel temporarily to the fixture.
  • the image sensor alignment fixture can include a temporary masking fixture and pre-aligned light sources to illuminate the image sensor with optical datums, and thus creating a reference image that can be measured and used for image sensor alignment. Once the desired alignments are achieved, the image sensor can be bonded to the camera channel housing 345.
  • FIG. 6 depicts a cross-sectional view of an example alternate single row, visor type multi-camera system 300 with an accompanying exploded view of a mechanical gap or seam 320 between a pair of adjacent low-parallax camera channels 340.
  • a distance measurement sensor 380 such as an inductive or capacitance proximity sensor, can be used to monitor the width of a seam 320 between adjacent camera channels 340.
  • the sense plate in the sensor forms a capacitor with the adjacent channel, which varies with the distance to the object. The capacitance formed by the sense plate and channel determines the frequency of the oscillator, which is conditioned into an output that can be monitored.
  • a capacitive distance measurement sensor 380 typically includes an oscillator, signal conditioning, an output driver, and a controller.
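  • the underlying distance dependence can be sketched with an ideal parallel-plate model; the plate area and seam values below are hypothetical, and a real sensor 380 conditions an oscillator frequency rather than reading capacitance directly:

```python
# Illustrative parallel-plate model of a capacitive seam sensor: C = eps0 * A / d,
# so the seam width d can be recovered as d = eps0 * A / C. Fringing fields and
# conditioning electronics are ignored; the plate area is hypothetical.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gap_from_capacitance(cap_f, plate_area_m2):
    return EPS0 * plate_area_m2 / cap_f

area = 0.005 * 0.005              # hypothetical 5 mm x 5 mm sense plate
c = EPS0 * area / 0.0008          # capacitance corresponding to a 0.8 mm seam
d = gap_from_capacitance(c, area) # recovers ~0.8 mm
```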
  • a seam width can change dynamically due to the impacts of residual shock or vibrations that have leaked through the vibration isolation and the paired kinematic features and nesting force mechanisms, to cause a change or displacement from the nominal seam width.
  • a distance measurement sensor 380 can provide real-time seam width data, which can then be analyzed to determine relative changes on an instantaneous or time-averaged basis.
  • Multiple distance measurement sensors 380 can be provided in a seam 320 to provide data on changes in tilt(s) between adjacent camera channels 340.
  • the resulting data can be used to dynamically modify the extrinsic calibration or image blending operations that can be applied to the image data coming from adjacent cameras 340. Furthermore, any changes in the values obtained during system operation can be used as feedback for recalibration.
  • Distance sensors 380 can be inserted in gaps smaller than 1 mm with accuracy in the tenths of microns.
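  • with multiple sensors per seam, a relative tilt estimate can be sketched as follows; the readings and sensor baseline are hypothetical values:

```python
import math

# Illustrative estimate of relative tilt between adjacent camera channels from
# two distance sensors 380 spaced along the seam by a known baseline.

def relative_tilt_deg(d1_m, d2_m, baseline_m):
    return math.degrees(math.atan2(d2_m - d1_m, baseline_m))

# Hypothetical readings of 0.50 mm and 0.53 mm, with sensors 20 mm apart
tilt = relative_tilt_deg(0.00050, 0.00053, 0.020)  # ~0.086 degrees
```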
  • visor type multi-camera systems (100, 300) have been illustrated with a single row of low-parallax cameras (140, 340) that extend part way around a cylindrical circumference. Alternately, the cameras can extend around the full circumference to provide a halo or completely annular system.
  • FIG. 7 depicts another alternate configuration, in which a dual row of adjacent low-parallax cameras 340 is positioned to provide image capture from a more conical FOR, with an increased vertical FOV.
  • the multiple cameras 340 can be mounted to a toroidal or barrel shaped frame and controlled seams and image sensor alignments are provided between adjacent cameras in both horizontal and vertical directions.
  • Kinematic connections can be used to couple a first row of cameras to a second row of cameras, or to individually couple a camera in the first row to an adjacent camera in a second row.
  • cameras in the upper and lower rows are generally illustrated as being vertically aligned, e.g., such that each camera is aligned vertically with another camera and horizontally with at least one other camera.
  • the cameras in the upper row may be offset relative to cameras in the lower row. For instance, a seam between adjacent cameras in the upper row may align vertically with a lens of one of the cameras in the lower row.
  • the multiple cameras 340 can provide conventional visible light imaging, infrared (IR) imaging, or hybrid visible and IR imaging (VIS & SWIR).
  • the multiple cameras 340 can use different kinds of optical sensors, including an arrangement of conventional visible or IR image sensors and event sensors (e.g., from Prophesee.ai (Paris FR) or Oculi Inc. (Baltimore MD)).
  • event sensor cameras 341 can be provided at the outer or leading edges or boundaries of the multi-camera system 300, to have their high dynamic range and fast capture times (e.g., 10,000 fps) applied to detect a rapidly moving object.
  • the event sensor cameras 341 can be either low- parallax or conventional cameras.
  • the captured image data can then be used to determine an expected vectorial path of images of the rapidly moving object across the conventional image sensor(s).
  • Region of Interest (ROI) targeting can then be determined, and conventional camera image capture targeted (e.g., resolution, capture time, or frame rate) for improved image capture of the object by the conventional camera(s).
  • FIG. 8 depicts an alternate configuration for a high fill factor multi-camera visor system 400, where cameras 440 are offset 450 and alternately positioned in two nominally parallel arced sub-visors 442.
  • the conventional cameras in a given visor 442 have conventional round or circular outer lens elements and lens housing shapes.
  • the lens housings of these cameras 440 can also have circular cross-sections, and may be cylindrical along their length, or be tapered into a modestly angled frustum.
  • Each of these cameras 440 has the associated image sensor (not shown), or a mask provided in close proximity thereto, functioning as a field stop, such that image light is collected into a rectangular or square FOV 445.
  • the lens housings are typically cylindrical, or with a little tapering (e.g., ≤5 degrees), as compared to the frustum shapes of the low-parallax cameras of FIG. 2 and FIG. 3A.
  • these cameras 440 cannot be easily packed closely together, without including optical folds from mirrors or prisms in the light paths.
  • the total number of camera channels that can be provided in a tight mechanical assembly is limited by the space needed for the optical folds.
  • a system 400 with a single row visor 442 with conventional cameras 440 has a low optomechanical aperture fill factor (e.g., 20-40%) along the arc.
  • the effective optical fill factor can be increased by having the individual cameras 440 capture image light from larger FOVs 445, so that they overlap. This reduces the blind regions between cameras 440, but the image resolution is reduced, unless the number of pixels on the image sensors is increased. With a large FOV overlap between adjacent cameras, then when images are stitched together to form a panoramic composite, the computational burden and image artifacts from image stitching are increased compared to prior systems with low-parallax cameras (e.g., FIG. 2, FIG. 3A, FIG. 4A-C).
  • FIG. 8 shows a multi-arced row visor type imaging system 400, with two abutting arced arrays of cameras or visors 442 stacked vertically in a cylindrical manner with an offset 450.
  • any number of parallel arced arrays can be used, although using two or three stacked arrays or layers may be the most probable.
  • dual arced arrays of conventional cameras 440 are provided, and the camera channels in a given arced visor array 442 have a low optical fill factor along the arc, but the effective optical fill factor is increased by providing two arced visor arrays 442 of cameras 440, with the cameras 440 in the visors angularly offset around the cylindrical shape from each other.
  • FIG. 8 depicts the visor arrays 442 and their associated frames, in the upper and lower arcs, as having a cylindrical shape. But one or both can instead have a polygonal cylindrical shaped (e.g., octagonal) frame with congruent rectangular side faces, onto which the cameras can be mounted.
  • the multiple multi-camera arced arrays in the system of FIG. 8 can also be stacked in a barrel like fashion, to have one arced visor array 442 be tilted vertically, inwards, or outwards, relative to a second arced array 442.
  • This multi-row multi-camera system 400 of FIG. 8, with conventional cameras 440, is a potential alternative to the multi-camera systems (100, 300) with special low-parallax lenses that were previously depicted (e.g., FIG. 2 and FIG. 3).
  • the dual array of conventional cameras 440 can cost less in aggregate compared to the custom low- parallax cameras (140, 340) or lenses 240 or to a single array of higher resolution, larger FOV cameras.
  • the conventional cameras 440 will not have reduced parallax as provided by the optical and opto-mechanical design approaches that were used for the camera lenses of FIG. 2 and FIG. 3.
  • this plurality of cameras 440 can provide reduced parallax by mechanical alignment of the cameras to position the horizontal edges of the FOV 445 of one camera 440 to be parallel aligned to the FOV 445 of the next camera 440.
  • a horizontal FOV overlap from one camera to the next can be reduced to a modest 1-2°.
  • the cameras 440 of the at least first and second multiple arrays 442 are arranged to be angularly offset from each other along the arc, such that at least one camera of the first array is nominally equally angularly positioned between two cameras of the second array, such that the three adjacent cameras function as a contiguous imaging array, as they collect image light from object space.
  • the “conventional” cameras 440 shown in FIG. 8 can be replaced with low-parallax cameras, such as the cameras 110 described herein.
  • the dual array system of FIG. 8 will also occupy a larger volume, and can weigh more, as compared to the FIG. 2 and FIG. 3A systems with multiple low-parallax cameras. These differences can matter for applications, such as airborne DAA, where the size, weight, and power (SWaP) constraints can be tight. It can also be more difficult to establish and maintain rotational alignment of adjacent cameras 440 in a dual array system 400 (FIG. 8), for a camera in an upper arced array 442 to an adjacent camera in a lower arced array 442, as compared to a single row system (FIG. 2 and FIG. 3A). Kinematic features can be used for aligning and assembling cameras 440 within a visor array 442, or between arrays 442.
  • this system can use cameras 440, using standard design approaches without parallax correction using the PSA sum or chief ray pointing, but which have truncated outer lens elements to help reduce camera channel weight and channel to channel spacing or seam widths.
  • a system 400 with a plurality of visors 442 can also have low-parallax cameras, such as those of the type depicted in FIG. 3B.
  • conventional camera channels can also be truncated horizontally, vertically, or both. However, this truncation can cause vignetting.
  • a vertical offset 450 between the dual arced arrays 442 of a few inches will not cause much vertical resolution loss when an imaged pixel at 3-4 miles out corresponds to an area 2-3 feet wide. However, that difference can matter when imaging a bogey aircraft closer in, such as only a 1/4 mile away.
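  • the per-pixel footprint at range can be sketched as follows; the per-pixel angle is an assumption carried over from the earlier sensor example (~1.5 degrees across 210 pixels), not a figure specified for this system:

```python
import math

# Illustrative per-pixel footprint at range, using a small-angle approximation.
# The instantaneous per-pixel angle below is assumed from the earlier sensor
# example (~1.5 degrees across 210 pixels).

def pixel_footprint_m(range_m, ifov_deg):
    return range_m * math.radians(ifov_deg)

ifov = 1.5 / 210                                # ~0.0071 deg per pixel
far = pixel_footprint_m(3.5 * 1609.34, ifov)    # ~0.7 m (~2.3 ft) at 3.5 miles
near = pixel_footprint_m(0.25 * 1609.34, ifov)  # ~0.05 m at 1/4 mile
```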
  • the vertical offset 450 between the adjacent dual camera visors 442 can complicate camera calibration, camera-to-camera factory alignment, and image blending and tiling operations of images from adjacent cameras 440 that can be applied, for example, when an imaged bogey aircraft, and its associated ROI, crosses from the imaged FOV 445 of one camera 440 to another.
  • the vertically offset (450) adjacent cameras 440 may see different direct light exposures (e.g., solar, glare, or object reflections) that could vary with angle or position, and where the differences are accentuated by the offset 450.
  • the camera alignment and mounting accuracy of the multi-layered system in FIG. 8 is essential to avoid consuming too much of the extended field of view with pointing errors.
  • Pointing accuracy in the vertical direction is more critical in a vertically arrayed system, as the alignment between layers and the respective vertical extended fields of view need to be accounted for.
  • the same principles of kinematic mounting can be applied to such a system to achieve the required accuracy under the operating environment.
  • the gaps or seams between the channels in all these systems further enables structural stability, which can be more difficult to achieve in systems where camera channels are required to be constrained to each other due to their proximity.
  • the type of multi-camera system of FIG. 2, FIG. 3A, or FIG. 8 can also be “ground” mounted, for example on a pole or a building, and then used to monitor UAV or eVTOL air traffic.
  • the resulting image data can be used for collision avoidance (e.g., DAA) or for airspace monitoring for safety (e.g., keeping drones out of airports) or intrusion prevention (e.g., counter- UAS operations).
  • FIG. 11 depicts a cross-sectional portion of an arced array of cameras, as can be used in the FIG. 8 system.
  • cameras 440 which are mounted on frame 430, have conventional commercial double Gauss type lenses, which image light to an image plane 446. Projections of incident chief rays 455 and 457 are directed towards an entrance pupil 424, which has a finite size (e.g., is an LP smudge).
  • the captured images can be digitally cropped to a shape (e.g., rectangular) where the FOVs 445 match in object space.
  • a calibration process creates a mapping from pixel space to object space to determine where to crop and avoid FOV overlap between cameras 440.
  • the cameras can be intrinsically calibrated using a dot pattern to identify chief rays which are parallel within some specification (e.g., ⁇ 0.3 deg) to define a Core FOV, and thus determine where to digitally crop the images. Then during system assembly onto a frame, the cameras can be aligned, with the benefit of targets, extrinsic calibration, and digital cropping, to position adjacent cameras with parallel cropped FOV edges adjacent to each other. After cropping the FOVs between channels, during image capture, blending can be employed utilizing a small amount of image overlap (e.g. 3% of the half FOV).
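  • the blending step over a small overlap can be sketched with a minimal linear feather blend; this is one common approach under stated assumptions, not necessarily the disclosed method, and the strip values are hypothetical:

```python
import numpy as np

# Minimal linear feather blend over a small overlap strip (e.g., ~3% of the
# half FOV) between two adjacent, cropped camera images.

def feather_blend(left_strip, right_strip):
    """Blend two equal-size (H, W) overlap strips with a linear weight ramp."""
    _, w = left_strip.shape
    alpha = np.linspace(1.0, 0.0, w)  # weight ramps from the left camera to the right
    return left_strip * alpha + right_strip * (1.0 - alpha)

a = np.full((4, 5), 100.0)     # hypothetical overlap strip from the left camera
b = np.full((4, 5), 80.0)      # matching strip from the right camera
blended = feather_blend(a, b)  # ramps from 100 at the left edge to 80 at the right
```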
  • This ray mapping approach can also be applied to a multi-camera system with conventional cameras with an internal fold mirror or prism to provide closer mechanical packing of the multiple cameras.
  • This approach can also be improved by selecting conventional cameras that have been analyzed or tested to determine that they have advantaged entrance pupil positioning. For example, lenses can be selected, that once mounted on a frame, preferentially enable a nominal separation of entrance pupils (or COP offset) that has a size on the order of the features being detected offset (e.g., ⁇ l/5th the feature size). And the size of the LP smudge should be on the order of the entrance pupil offset (e.g., ⁇ l/10th the EP offset).
  • a physical mask can also be provided on the outside of the camera, aligned with the cropped FOV shape (e.g., rectangular) to function as a fuzzy field stop, and enhance the contrast of the FOV edges.
  • a target can be used to measure the magnification of a given lens, then the number of pixels to give the target FOV can be calculated.
  • a target can then be used to aim or align the center pixel at the center of the target, while the camera tilts and displacements, relative to physical datums or features on the camera housings, that are needed to provide that alignment are measured. That data can be used when aligning the camera to the frame.
  • This low parallax camera imaging technology can be applied to the detect and avoid application (FIG. 1), where real-time stitch-free, panoramic imaging can enable situational awareness.
  • the image data acquired by the image sensors can be output to an image processor, containing a GPU, FPGA, or SOC, on which algorithms are used to examine an airspace, as sampled by the imaged FOVs from each of the cameras, to look for one or more bogey aircraft. If a bogey aircraft 160, such as a Cessna 172, is detected, the DAA software is then used to track it within the imaged FOV 105. This data can then be output to another processor which assesses the current collision risk and determines appropriate collision avoidance maneuvers. That data can then be delivered to an autopilot, a pilot, or a remote operator.
  • the DAA bogey detection software can simultaneously monitor each camera’s FOV in entirety, or subsets thereof, using iterative windowing.
  • FIG. 9 depicts an example, with the image of a bogey 160 being tracked within an ROI 280 as the image approaches a seam or overlap FOV 266 for two adjacent cameras (140, 340).
  • windowing to scan over a camera’s full FOV to look for something new at reduced frame rate (e.g., 1-5 fps) can be valuable.
  • a potential bogey 160 can be adaptively tracked using a lightweight non-sophisticated program to look for changes in lighting, attitude, or orientation over time.
  • This software can also track multiple objects at once within a FOV 135 of a single camera 110, or within the FOR of multiple cameras.
  • DAA software can include algorithms to recognize or classify objects, with priority being directed at the fastest or closest bogeys over others.
  • the Haar Cascade classifier can be used to detect specific objects based on their features, such as size, shape, and color.
  • Bogey range estimation can then be enabled by bogey recognition, stereo camera detection, LIDAR scanning, or radar.
  • a lightweight tracking algorithm such as the Kanade-Lucas-Tomasi (KLT) tracker can be used to track the bogey's movement over time.
  • Bogey tracking can be aided by using a tracking window or region of interest (ROI) or instantaneous FOV (IFOV) that can be modestly bigger than the captured image of the bogey, but which is much smaller than a camera channel’s full FOV.
  • Multiple objects can be tracked simultaneously using a multi-object tracker such as the Multiple Object Tracker (MOT) algorithm.
  • various sensors such as stereo cameras, LIDAR, or radar can be used.
  • signal processing algorithms can be used to estimate range based on time-of-flight or Doppler shift.
  • depth estimation can be a challenging problem.
  • Some methods to determine depth from monoscopic imagery include identifying the object and looking up its size from a lookup table. Knowledge of the object's size and the pixels it subtends can be used to estimate its range.
  • Another method is to use depth from focus, where the image sensor position is adjusted to find the position of best focus. This knowledge can be used to determine the approximate distance to the object.
  • Machine learning and neural networks can also be employed to estimate range from a large training set of data.
  • When a low parallax multi-camera system (e.g., FIG. 1) mounted on a first aircraft (an own ship) is used to capture images to help enable aircraft collision avoidance via DAA software analysis, a circumstance can occur in which a bogey 160 travels through an overlap region 127 between two adjacent cameras, or within an overlap region, as it flies either towards or away from the first aircraft.
  • When a visor system is deployed on an air or ground vehicle, the plurality of cameras can enable panoramic situational awareness of events or objects within an observed environment.
  • it can be advantageous to apply a blending method (e.g., FIG. 10) to the plurality of overlap regions, to produce a seamless panoramic image for object or DAA detection analysis.
  • an image blending method (FIG. 10) can be applied selectively only when a bogey aircraft is traversing an overlap region, and for a short time both prior to and after such a traversal (FIG. 9).
  • the blending method can preferentially be applied locally, within an oversized digital window that includes the bogey image, to follow the bogey through the overlap region FOV 107 from a first camera 110 to a second camera 110.
  • the blending method can be applied to a larger portion, or the entirety of the overlap region between the two cameras 110, without necessarily applying it to the overlap regions between other camera pairings.
  • the image transition from one camera source to another can be managed by a form of image rendering known as blending.
  • In the case of using adjacent low-parallax cameras, parallax errors, background differences, and scene motion issues are reduced, as is the amount of FOV overlap between cameras.
  • Image blending combines two images to provide nominally the same pixel values, or smooth transitions, in a local overlap region. This intermediate process of image blending can be advantageously used without the larger burdens of image stitching or the abruptness of image tiling without image averaging.
  • adjacent images captured by adjacent cameras can be assembled into a panoramic composite by image tiling, stitching, or blending.
  • In image tiling, the adjacent images are each cropped to their predetermined FOV and then aligned together, side by side, to form a composite image.
  • the individual images can be enhanced by intrinsic, colorimetric, and extrinsic calibrations and corrections prior to tiling. While this approach is computationally quick, image artifacts and differences can occur at or near the tiled edges.
  • image stitching is the process of combining multiple images with overlapping fields of view to produce a segmented panorama or high-resolution image.
  • Most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results.
  • algorithms that combine direct pixel-to-pixel comparisons with gradient descent can be used to estimate these parameters. Distinctive features can be found in each image and then efficiently matched to rapidly establish correspondences between pairs of images.
  • techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another.
  • a final compositing surface onto which to warp or projectively transform and place all of the aligned images is needed, as are algorithms to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences.
  • differences in illumination and exposure, background differences, scene motion, camera performance, and parallax can create detectable artifacts.
  • parallax errors, background differences, and scene motion issues are reduced, as is the amount of FOV overlap between cameras.
  • An intermediate process of image blending, which is a form of image rendering, can then be advantageously used, without the larger burdens of image stitching.
  • Image blending combines two images to ensure nominally the same pixel values, or smooth transitions, for content from adjacent cameras in a local overlap region. If the residual parallax errors within the extended FOVs that capture content in or near the seams are similarly small enough, and the two adjacent cameras are appropriately aligned to one another, then the overlapped captured image content by the two cameras can be quickly cropped, or locally averaged or blended together, and included in the output panoramic images. Blending can, for example, apply weighted averaging on image content seen by two adjacent cameras, based on the distances or estimated distances from the center of the images.
  • The image blending method can also be optimized for the application, using, for example, a frequency decomposition to identify and favor the camera that locally provides better image quality, or using the parallax data for a camera lens 240 to locally correct away from an ideal virtual pinhole assumption.
  • the intrinsics and extrinsics calibration data for both cameras can be used to form a perspective projection of the pixels as defined by the Kalman filter.
  • the DAA system, including the visor camera system (100, 300), can use data from an inertial measurement unit (IMU) to help correct for changes in the aircraft's own motion or vibrations.
  • Data collected by distance sensors 380 mounted within the seams (FIG. 6) between adjacent lens housings can also be used to dynamically adapt the application of extrinsic calibration data, or the image blending (e.g., FIG. 10), or both, at least locally where a bogey and ROI are crossing a seam or overlap FOV.
  • measured position or tilt data can be used to modify application of the stored extrinsic data, and thus the application of the relative intrinsics data, to actively modify image blending.
  • Parameters in the image blending algorithm can also be changed directly.
  • a dual visor or halo system can be provided, in which a second visor or halo system is offset out of plane, parallel to a first one.
  • This second visor or halo system can also image the same spectral band (e.g., visible, with or without RGB color), so that in cooperation with the first system, stereo imaging and range or depth detection is enabled.
  • the second visor can be equipped with another sensing modality, such as monochrome, LIDAR, IR, or event sensor cameras.
  • the monochrome camera imagery can be filled in with color data, using a trained neural network that uses up-resolution techniques to merge the color data with the higher resolution monochrome camera feed.
  • high framerates of 10k FPS+ can also be used to detect sounds in the video feed.
  • the parallax data for lenses 240 can be applied using modeled or measured data, to modify the weighting factors over a lens field of view that are applied during image blending to enable a more accurate blending of image content of key features in a scene.
  • the image data in overlap regions 107 can be analyzed via frequency decomposition, to identify the best image data available from either of the adjacent cameras 110. The better-quality image data can then be favored for at least key image features during a local blending in an overlap region.
  • Image blending can also be applied selectively, in overlap regions, or portions thereof, where high quality photogrammetric image data is needed, but skipped elsewhere where the content is feature-poor. This blending method, or variants thereof, can also be applied with the multi-camera systems 300 and 400 of FIG. 7 and FIG. 8, respectively.
  • FIG. 10 illustrates a preferred image blending method that can be employed in a processor within a multi-camera system to create a blend within a region of overlap in which two cameras are contributing image data (FIG. 3D and FIG. 9).
  • the field of view overlap region is enabled by designing the camera lenses to have an extended field of view.
  • Just noticeable differences (JNDs) can be used to measure local color, pattern, or content discontinuities between images of an object captured by two adjacent cameras within an overlap region.
  • the FOV angle for each camera is determined, and in a third step, a FOV angular distance for each camera and the distance to the bisecting plane are calculated to determine which quadrant of the overlap region the image pixels are in.
  • the appropriate linear coefficients are used to estimate the distance to the edge of the area of overlap. This step can include determining and applying a mean RMS re-projection error, at multiple object-space conjugates, to yield a measure of the field overlap, so as to apply and improve the determined pinhole variation along the lens edges.
  • this information is used to determine how much each camera contributes to the final RGB values.
  • the method described can be referred to as spatially varying alpha blending. In this method, image data from more than one camera is combined as a weighted average. The weights are normalized to sum to 1.0 and are proportional to the relative “closeness” to a given camera’s center pixel.
  • the image intensities of any given pixel that is seen by both cameras are averaged together.
  • the output images are first corrected for radiometric or colorimetric variations using predetermined calibration data.
  • Using predetermined intrinsic and extrinsic geometric calibration data, the pixel-to-pixel correspondence of the image pixels in the overlap region between cameras is predetermined.
  • the output pixel values of the corresponding pixels within the overlap region are averaged together using one or more weighting factors.
  • the resulting corrected images can be kept separate or combined into a larger panoramic image.
  • the effect of the blending method is to provide a smooth transition from one camera color to another.
  • the blending method of FIG. 10 can use a spatially varying alpha transparency blending in which image data from more than one camera is combined as a weighted average.
  • the weights are normalized to sum to 1.0 and are proportional to the relative “closeness” to each camera region or to each camera’s optical axis (central pixel).
  • Another approach to blending more than one camera within a region of overlap could be referred to as spatially varying stochastic blending.
  • This method is similar to alpha blending, but instead of combining the image data from multiple cameras, the weights are used to control a stochastic sampling of the corresponding cameras.
  • stochastic sampling is a Monte Carlo technique in which an image is sampled at appropriate nonuniformly spaced locations rather than at regularly spaced locations. Both of these image blending methods are agnostic to the content of the images.
  • When the blending method of FIG. 10 is adapted for use in a multi-camera system in which imaging algorithms for creating equirectangular projections are embedded in a field programmable gate array (FPGA) or other comparable processor, ongoing or on-demand pixel projection recalculation can be used to enable image blending.
  • the blending correction values can be rapidly recalculated with little memory burden in real time.
  • the image blending method of FIG. 10 can be applied to multi-camera systems by evaluating the overlap regions and using a “grassfire” based algorithm to control the blending between cameras in the overlap regions.
  • the grassfire algorithm is used to express the length of the shortest path from a pixel to the boundary of the region containing it, and is advantaged for applications that can support the use of a large, precomputed grassfire mapping LUT that needs significant memory when creating the panoramic image re-projection.
  • An image blending method (e.g., FIG. 10) can be applied selectively across some overlap regions 127, if objects or features of interest are identified therein. Alternately, when a panoramic composite image is wanted, image blending can be applied selectively for an overlap region, or an ROI therein, when the image data therein is of high quality (e.g., MTF) and high confidence.
  • For some applications, it can be advantageous to apply a blending method (FIG. 10) to the plurality of overlap regions, to produce a seamless panoramic image for object or DAA detection analysis.
  • an image blending method (FIG. 10) can be applied selectively only when a bogey aircraft is traversing an overlap region, and for a short time both prior to and after such a traversal.
  • the blending method can preferentially be applied locally, within an oversized digital window that includes the bogey image, to follow the bogey through the overlap region from a first camera to a second camera.
  • the blending method can be applied to a larger portion, or the entirety of the overlap region between the two cameras, without necessarily applying it to the overlap regions between other camera pairings.
  • the optical designs of the low- parallax cameras 110 can be optimized to enable co-axial imaging and LIDAR.
  • the camera optical designs can include both a low-parallax objective lens, paired with an imaging relay lens system, the latter having an extended optical path in which a beam splitter can be included to have an image sensor in one path, and a LIDAR scanning system in another path.
  • the beam splitter can be embedded in the low-parallax objective lens design, with the imaging sensor and the LIDAR scanning system both working directly with the objective lens optics and light paths.
  • a single LIDAR scanning system can be shared across multiple low-parallax objective lenses.
  • light from a laser source is directed through beam shaping optics and off of MEMS scan mirrors, to scan through a given camera system.
  • the beam splitters would direct image light out of the plane of the page.
  • the LIDAR beam resolution may not match a camera's imaging resolution, but that can be partially compensated for by controlling the LIDAR scan addressing resolution.
  • LIDAR can have less resolution than the low-parallax imaging cameras, and this will subsample the imaged object and the 3D model.
  • the LIDAR data can add accuracy to the range or depth measurements to an imaged object or features therein.
  • interpolation can be used to accurately determine a correct 3D location for scanned 3D points and intermediate points in between.
  • the LIDAR data adds depth information to spherical image data, such that multiple RGB-D spherical images can be fused together to create a 3D or 4D vector space representation.
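As a concrete illustration of the spatially varying alpha blending described in the bullets above, in which weights are normalized to sum to 1.0 and are proportional to the relative closeness to each camera's center pixel, the following is an editor's sketch, not code from the disclosure; the inverse-distance form of "closeness" and the function name `alpha_blend` are assumptions:

```python
import numpy as np

def alpha_blend(img_a, img_b, dist_a, dist_b):
    """Spatially varying alpha blend of two overlapping camera images.

    img_a, img_b   : (H, W, 3) float arrays of corresponding pixels in the
                     overlap region (correspondence assumed precomputed from
                     intrinsic/extrinsic calibration data).
    dist_a, dist_b : (H, W) arrays of each pixel's distance to the
                     respective camera's center pixel.
    Weights favor the closer camera and are normalized to sum to 1.0.
    """
    # Closeness is modeled here as inverse distance to the center pixel.
    w_a = 1.0 / (dist_a + 1e-9)
    w_b = 1.0 / (dist_b + 1e-9)
    total = w_a + w_b
    w_a, w_b = w_a / total, w_b / total  # normalized weights sum to 1.0
    return w_a[..., None] * img_a + w_b[..., None] * img_b
```

A pixel equidistant from both camera centers receives equal 0.5/0.5 weights, giving the simple average of the two cameras; pixels nearer one camera's center are dominated by that camera, producing the smooth transition described for the overlap region.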

Abstract

A multi-camera imaging system includes a cylindrical frame and a plurality of cameras kinematically mounted to the cylindrical frame. The cameras include outer optical elements that are truncated to have a pair of nominally parallel edges and are configured to capture fields of view having angular edges. The cameras are arranged such that the fields of view of adjacent cameras overlap at an overlap region along an optical gap between the parallel edges.

Description

Visor Type Camera Array Systems
CROSS REFERENCE TO PRIOR APPLICATIONS:
[0001] This disclosure claims benefit of priority of U.S. Provisional Patent Application Ser. No. 65/513,721, entitled “Visor Type Camera Array Systems”, and U.S. Provisional Patent Application Ser. No. 65/513,707, entitled “Image Compositing with Adjacent Low Parallax Cameras”, both of which were filed on July 14th, 2023, and the entirety of each of which is incorporated herein by reference.
DISCLOSURE:
[0002] This invention was made with U.S. Government support under grant number 2136737 awarded by the National Science Foundation. The Government has certain rights to this invention.
TECHNICAL FIELD:
[0003] The present disclosure relates to panoramic low-parallax multi-camera capture devices having a plurality of adjacent polygonal cameras arranged in a visor or halo to capture an arced array of images. This disclosure principally relates to the optical and mechanical configurations thereof.
BACKGROUND:
[0004] In imaging applications requiring real-time situational awareness of activity, events, or objects, within wide field of view or panoramic environments, a common answer is to image the scene with a gimballed camera. For example, US 7,136,726, entitled “Airborne Reconnaissance System”, describes a system having both internal and external gimbals, each having at least two degrees of freedom, a servo means for directing the gimbals, and a portion selection unit for selecting, one at a time, another area portion from a viewed area of interest. Although widely used, including on military drones such as the MQ-9, gimballed camera systems are limited by their scan viewing to providing real-time situational awareness in whatever direction a camera happens to be pointed. Gimballed camera systems are also sensitive to both jitter caused by the internal servo mechanisms and shock and vibration from external sources, such as the drone or ship that is carrying the system, resulting in reduced image quality.
[0005] Alternately, panoramic imaging can be provided using a camera with a fisheye lens (e.g., US 4,412,726), or a fisheye lens with an extended field of view (e.g., > 180°, US 3,737,214), or using two fisheye lenses back-to-back (e.g., US 9,019,342). However, fisheye lenses have low resolution and high distortion, which limits their value for applications requiring real-time situational awareness of activities occurring within a large environment.
[0006] There are also panoramic multi-camera devices in which a plurality of cameras is arranged around a sphere or a circumference of a sphere, such that adjacent cameras are abutting along a part or the whole of adjacent edges. Commonly assigned US Patent No. 10,341,559 describes the design of low parallax imaging lenses that can be arranged in a dodecahedral geometry to enable panoramic image content capture within a nearly spherical field of view, such as for capturing cinematic or virtual reality (VR) type image content. Commonly assigned patent application Publication No. US 20220357645 describes an approach for opto-mechanically mounting the plurality of cameras into an integrated dodecahedral unit or system. However, this camera system may not be optimized to provide panoramic situational awareness of events occurring at distance.
FIGURES:
[0007] FIG. 1 is a perspective view showing aspects of using a multi-camera system in a Detect and Avoid (DAA) scenario.
[0008] FIG. 2 is a perspective view of part of a single row, visor type multi-camera system using low-parallax cameras.
[0009] FIG. 3A is a top view of the part of the single row, visor type multi-camera system using low-parallax cameras of FIG. 2.
[0010] FIG. 3B depicts a cross-sectional view of an exemplary lens design of the type used in one of the cameras of FIG. 3A in greater detail.
[0011] FIG. 3C and FIG. 3D depict fields of view captured by adjacent cameras in a multi-camera system using low-parallax cameras.
[0012] FIG. 3E depicts the concept of a parallax jump between two adjacent camera channels.
[0013] FIG. 4A and FIG. 4B are exploded perspective views of part of a single row, visor type multi-camera system, depicting the mounting of a low-parallax camera channel onto a cylindrical frame.
[0014] FIG. 4C is an exploded perspective view of part of a single row, visor type multi-camera system, depicting an alternate mounting of a low-parallax camera channel onto a cylindrical frame.
[0015] FIG. 5 depicts side and exploded views of a camera channel assembly illustrating a sensor mounting design including athermalization.
[0016] FIG. 6 depicts a plan view of an example alternate single row, visor type multi-camera system with a closeup of a seam between a pair of adjacent camera channels.
[0017] FIG. 7 is a perspective view of a portion of a dual row, visor type multi-camera system using low-parallax cameras.
[0018] FIG. 8 is a perspective view of part of a dual row, visor type multi-camera system using conventional cameras.
[0019] FIG. 9 is a schematic representation of an object being imaged approaching a boundary between two channels.
[0020] FIG. 10 is a flowchart illustrating a method for blending images from adjacent cameras.
[0021] FIG. 11 is a cross-sectional view of a portion of an arced array of cameras, as can be used in the FIG. 8 system.
DETAILED DESCRIPTION:
[0022] Although there are numerous applications that can benefit from improved real-time situational awareness of events or objects present in a wide field of view (WFOV) or panoramic scene or environment, aspects of this disclosure relate to enabling improved air traffic safety. In particular, as autonomous drones (or other types of unmanned aerial vehicles “UAVs”) and eVTOLs (electric vertical take-off and landing) aircraft (e.g., flying cars) are being developed and used, there is increasing need for on-board sensor equipment to help prevent collisions. As examples, drones or eVTOLs can be equipped with acoustic, optical or radar sensors, GPS detectors, and/or ADS-B transponders. However, each of these equipment types has deficiencies, and multiple types are needed to provide redundancy. The potential problems will likely escalate as the diversity and density of air traffic intensifies.
[0023] The present disclosure provides improved solutions for optical sensing, using an arced array of cameras (e.g., six cameras) in a visor configuration, which can be mounted onto a UAV (drone) or eVTOL and be used in-flight to look for potential collision risks and/or otherwise monitor an environment. FIG. 1 depicts an example of such a system, with an aircraft 150 equipped with a nose-mounted arced multi-camera system 100 to look for potential collision risks, including other or bogey aircraft 160 within a field of view 105. FIG. 2 depicts the visor-type multi-camera system 100 in greater detail, with adjacent cameras 110 mounted to a frame 130 with offset gaps or seams 120, and with the outer truncated lens elements protected by protruding hoods 115. Mechanical gaps or seams 120 span the distance between lens housings, while optical gaps or seams are larger, and span a distance between a coated clear aperture (CA) of a camera to the CA of an adjacent camera. The multi-camera system 100, which is of a type of the present invention, can, for example, simultaneously monitor a full field of view (FOV), or field of regard (FOR), with approximately ± 100° horizontal FOV and approximately a ± 20° vertical FOV. While aircraft typically fly horizontally, the FOV orientation can be defined relative to the aircraft 150 rather than the environment. The pitch or tilt of the aircraft 150 (particularly multi-rotor UAVs) can change during operation, by plan (e.g., speed), or because of wind conditions. The multi-camera system 100 can then be tilted to compensate.
[0024] This system 100 enables staring mode detection over the full field of view, versus gimballed camera systems that are dependent on time-sequential scanning camera motion. Such visor-type multi-camera systems 100, as shown in FIG. 1, and in greater detail in FIG. 2, can image visible light, infrared (IR) light, or a combination thereof. For visible imaging, either monochrome or color filtered (e.g., with a Bayer filter) image sensors can be used. The resulting image data can be analyzed for collision avoidance, to enable detect and avoid (DAA) functionality. The image data from the image sensors can be output to an image processor, containing a GPU, FPGA, or SOC, on which algorithms are used to examine an airspace, as sampled by the imaged FOVs from each of the cameras, to look for one or more bogey aircraft 160 or other objects. If a bogey aircraft 160, such as a Cessna 172, is detected, the DAA software is then used to track it within the imaged FOV. This data can then be output to another processor which assesses the current collision risk and determines appropriate collision avoidance maneuvers. That data can then be delivered to an autopilot, a pilot, or a remote operator.
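The detect, track, and assess chain described in the preceding paragraph can be summarized as a minimal structural sketch; this is an editor's illustration, not the disclosed implementation, and the callables `detect`, `track`, and `assess` are hypothetical placeholders for the actual algorithms running on the GPU, FPGA, or SOC:

```python
def daa_pipeline(frames, detect, track, assess):
    """Hedged sketch of the DAA chain: detect bogeys in each camera's
    imaged FOV, associate detections into tracks, then assess collision
    risk. detect/track/assess are hypothetical placeholder callables."""
    # Gather candidate detections across every camera's sampled FOV.
    detections = [d for frame in frames for d in detect(frame)]
    tracks = track(detections)      # e.g., per-bogey ROI tracking
    return assess(tracks)           # e.g., a maneuver for the autopilot,
                                    # pilot, or remote operator
```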
[0025] The DAA bogey detection software can help simultaneously monitor the FOR of the entire camera system 100, or a given camera’s FOV in entirety, or subsets thereof, using iterative windowing. As real-time detection of non-cooperative or bogey aircraft 160 flying in an airspace can be a difficult task, and can impose a significant computational burden, windowing, to scan over a camera’s full FOV to look for something new at a reduced frame rate (e.g., 1-10 fps), can be valuable. Once a potential bogey is detected, it can be adaptively tracked at an increased frame rate (e.g., 30-60 fps) using a lightweight non-sophisticated program to look for changes in lighting, position, attitude, and/or orientation over time. This software can also track multiple objects at once within the FOV of a single camera 110 or camera channel, or within the FOV of multiple cameras (FIGS. 1 and 2). DAA software can include algorithms to recognize or classify objects, with priority being directed at the fastest or closest bogeys over others. Bogey range estimation can then be enabled by bogey recognition, stereo camera detection, LIDAR scanning, or radar. Once detected, a bogey 160 can be tracked using a tracking window or region of interest (ROI) or instantaneous FOV (IFOV) that can be modestly bigger than the captured image of the bogey, but which is much smaller than a camera channel’s full FOV. An open-source example of DAA bogey detection and tracking software is presently available from Purdue University.
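The ROI-based tracking just described can be sketched as follows; this is an editor's illustration rather than the disclosed software, and the exhaustive sum-of-squared-differences (SSD) search used here is a simple stand-in for a lightweight tracker such as KLT:

```python
import numpy as np

def track_in_roi(frame, template, roi):
    """Locate a small target template inside a region of interest (ROI).

    The ROI is kept modestly larger than the target's image but much
    smaller than the camera's full FOV, limiting the search burden.

    frame    : (H, W) grayscale image
    template : (th, tw) grayscale patch of the tracked object
    roi      : (top, left, height, width) search window in frame coords
    Returns the (row, col) of the best match in full-frame coordinates.
    """
    top, left, h, w = roi
    window = frame[top:top + h, left:left + w]
    th, tw = template.shape
    best, best_pos = np.inf, (top, left)
    # Exhaustive SSD search over all template placements in the ROI.
    for r in range(h - th + 1):
        for c in range(w - tw + 1):
            ssd = np.sum((window[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (top + r, left + c)
    return best_pos
```

In practice the ROI would be re-centered on the returned position each frame, so the window follows the bogey across the camera's FOV.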
[0026] This disclosure relates to alternate and improved lens system designs and architectures for enabling visor type camera systems 100 that can provide improved detect and avoid, sense and track, search and track, navigation, and/or other functionality. The cameras 110 include lenses that are generally designed to limit parallax or perspective errors optically and opto-mechanically, as is described in commonly assigned lens design related patent applications US 20220252848 and WO2022173515, or improvements thereof. These camera lenses are used in the cameras 110 of multi-camera systems 100, in which the outer lens elements are typically truncated along polygonal edges, so that the camera channels 110 can be mounted in close proximity with narrow intervening gaps or seams 120, and the lens design method controls image light ray behavior (e.g., parallax or perspective) for chief rays along the polygonal lens edges. In particular, these prior commonly assigned disclosures describe lens design methods and exemplary lens designs, in which the paraxial entrance pupil, and non-paraxial variations thereof, are positioned behind or beyond the image plane.
[0027] As background, it is noted that in the field of optics, lenses can be characterized by many specifications, including focal length, field of view (FOV), image quality (e.g., MTF), and bandwidth (e.g., for use with visible light). One term of art in the field is the “entrance pupil.” For context, by standard practice the location of the entrance pupil can be found by identifying a paraxial chief ray from object space that transits through the center of the aperture stop, and projecting or extending its object space vectorial direction forward to the location where it crosses the optical axis of the lens system. A paraxial chief ray enters a lens at a direction that is modestly offset in tilt (e.g., < 7-10°) from the optical axis, whereas non-paraxial chief rays are typically incident to the first lens element at much higher angles (e.g., 20°, 40°, or even 90°). In most standard lenses, such as a double Gauss or a Cooke Triplet, the entrance pupil is located in the front third of the lens system, and its location, and the differences in the locations where the paraxial and non-paraxial ray projections cross the optic axis, are not specified or analyzed, and are substantially irrelevant to the lens design. A fisheye lens system is a partial exception, as the lenses can be designed to control distortion over a large FOV (e.g., ± 90°), without direct control of entrance pupil aberrations, although there are fisheye lens design approaches which optimize the pupil aberrations directly. In either case, in fisheye lenses, it is generally recognized that the “entrance pupil” position varies widely, near the front of the lens system, for the chief ray angles versus FOV.
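The standard construction described above, projecting a chief ray's object-space direction forward to where it crosses the optical axis, can be sketched numerically as an editor's illustration; the function name and coordinate conventions (z along the optical axis, ray height in the meridional x-z plane) are assumptions:

```python
def axis_crossing_z(point, direction):
    """Project a chief ray's object-space line to where it crosses the
    optical axis (taken here as the z-axis).

    point     : (x, y, z) of any point on the ray's object-space extension
    direction : (dx, dy, dz) direction components of the ray
    Returns the z coordinate where the ray's meridional-plane projection
    meets the axis, i.e. where the x ray height falls to zero.
    """
    x, _, z = point
    dx, _, dz = direction
    if dx == 0:
        return z  # ray height is constant in this plane; already on axis
    t = -x / dx        # line parameter at which the x-height reaches zero
    return z + t * dz

# A paraxial chief ray and a steep non-paraxial chief ray generally yield
# different crossing points; the spread of these z values is one way to
# see the pupil aberration that the disclosed designs deliberately control.
```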
[0028] Most optical lenses are designed with little or no attention to entrance pupil location and diameter. For example, in the paper "Theory of the “no-parallax” point in panorama photography" by Rik Littlefield, Pano Post 7 (2006): 1-19, it is stated that the perspective of an image is determined by the light rays that formed it, and because the aperture selects those rays, its location determines the perspective. This paper states that the center of perspective and the no-parallax point are the same, and are located at the center of the entrance pupil. However, for some lens designs, such as fisheye lenses, these generalities can break down in significant ways when considering the differences between paraxial and non-paraxial chief rays. By comparison, the aforementioned commonly assigned patent applications, and the current application, identify and address significant differences in the non-paraxial versus paraxial entrance pupil positioning, or in associated parallax or center of perspective differences and optimization, that this reference did not anticipate.
[0029] In contrast, in the aforementioned commonly assigned patent applications, optimization of the entrance pupil in the rear of the lens, and particularly behind or beyond the image plane, is a deliberate goal, so as to control parallax or perspective difference versus chief ray angle over a FOV. The lens design goals to control parallax can be realized by optimizing the lens system using chief ray constraints, pupil spherical aberration (PSA or PSA sum), or spherochromatic pupil aberration (SCPA) terms in the merit function within the lens design program (e.g., Code V). Typically, optimization priority will be directed to a range of off axis chief rays that lie along or near truncated polygonal lens edges of the outermost lens element, or front lens or compressor lens. As an example, for a multi-camera system with a dodecahedral geometry, the chief rays along a truncated polygonal lens edge can span a range of 31.7-37.4°. Considering different system geometries and applications, the typical low-parallax lens design is optimized to control parallax for maximum FOV angles in the ± 20° to ± 40° angular range, although designs with larger or smaller angles are possible. Residual parallax or perspective errors can be tracked in various ways, including as an angular difference from the nominal geometric angle, or as a fractional difference in image pixels.
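The two ways of tracking residual parallax mentioned above, as an angular difference or as a fractional difference in image pixels, are related by simple image-plane geometry. The sketch below is illustrative only; the 8 mm focal length and 2.5 µm pixel pitch in the example are assumed values, not parameters of any specific design in this disclosure.

```python
import math

def parallax_error_pixels(delta_angle_deg, focal_length_mm, pixel_pitch_um):
    # Image-plane shift produced by a residual angular parallax error,
    # expressed in pixel units: shift = f * tan(delta_angle) / pitch.
    shift_mm = focal_length_mm * math.tan(math.radians(delta_angle_deg))
    return shift_mm * 1000.0 / pixel_pitch_um
```

For instance, with an assumed 8 mm focal length and 2.5 µm pixels, a residual angular error of 0.01° corresponds to roughly half a pixel of image shift.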
[0030] These low-parallax lenses, including those to be discussed in the present disclosure, can also be designed to control front color, which is a residual color shifting of the chief rays for a given chief ray field angle. Color-variable cropping or vignetting of the edge of field chief rays through the lens system can then cause a rainbow color artifact at the image edges that are projected onto the image plane or image sensor. In such lens systems, the truncated outer lens element also acts as a “fuzzy” field stop, such that the image within a core FOV underfills the image sensor. In typical optical imaging systems, field stops are located at the image plane or at images of the image plane, but the front lens of these cameras 110 is neither of these. The size of the fuzziness is related to the entrance pupil diameter and the degree of overlap in beam footprints between nearby fields. The “fuzziness” is also influenced by the residual front color (which is color dependent overlap) and the truncation of the lens edges, which are not as sharp as the edges that can be defined by a typical black sheet metal mask used at or near an image plane.
[0031] In applying this low-parallax lens design method, it can be useful to track the nominal distance of the paraxial entrance pupil behind the image plane, and also the nominal distance or offset of this entrance pupil to the device center. As one ideal, to optimally control or limit residual parallax, the entrance pupil location would coincide with the device center. In this case, the mechanical gap or seam between adjacent camera channels would be effectively zero. This can be difficult to realize, particularly in multi-camera systems where the lens elements of the respective camera channels are mounted in lens housings with a finite thickness, which in turn causes a real or finite gap or seam between adjacent cameras, and thus a real offset between the entrance pupil location and the device center. For example, in a multi-camera system designed for content capture of nominally near environments or activity, such as for cinema or VR, the real offset of the entrance pupil to the device center can be relatively small (e.g., 2-5 mm).
[0032] A system constructed with a faceted dome or faceted arc of integrated front lens elements (see commonly assigned application US Publication No. 2022/0357646) provides one approach for reducing both the gap between adjacent camera channels and the entrance pupil to device center offset distance. Alternately, or in addition, the gaps or seams between adjacent camera channels can be optically masked by designing the lens systems with some extra or extended FOV (e.g., XFOV of ~0.3-1.0°) per side or gap. Adding XFOV can help compensate for camera channel alignment tolerances, support camera calibration and image blending or tiling operations, and reduce blind regions in front of the camera system. However, adding or increasing allowed XFOV and gaps between channels both increase the distance from the device center to the center of perspective (COP).
[0033] The aforementioned commonly assigned lens design applications also detail the design differences that can occur at or near the projected entrance pupil location, for the off-axis chief rays, relative to a nominal paraxial ray entrance pupil. Several terms, including the no-parallax point (NP point), center of perspective (COP), and low-parallax smudge (LP smudge), are used to describe, or provide context to, these differences. Notably, in many real lens designs, the residual low parallax smudge for the chief rays transiting along a polygonal lens edge of the outer or front lens element, when projected from that front lens surface, can be locally offset from the paraxial entrance pupil location by 1-2 mm. Thus, as one way to refine a lens design, the location or offset distance of the LP smudge for the chief rays along a polygonal front lens edge can be optimized relative to the image plane or the device center, instead of using the paraxial entrance pupil position in such a metric. [0034] It is noted that the LP smudge is a measurement of the variation in the location of the paraxial entrance pupil position to the pupil position(s) for one or more non-paraxial chief rays. It can be measured as a longitudinal distance difference or length 275 along the optical axis 230 (see FIG. 3E), or as an area or volume that encompasses the chief ray wander out of plane. Parallax errors versus field and color can also be analyzed using calculations of the Center of Perspective (COP), which is a parameter that is more directly relatable to visible image artifacts than is a low parallax volume, and which can be evaluated in image pixel errors or differences for imaging objects at two different distances from a camera system.
The center of perspective error is essentially the change in a chief ray trajectory given multiple object distances - such as for an object at a close distance (3 ft), versus another at “infinity.” Within a lens, the COP can be estimated as being at a location or position within the LP smudge. COP location distance differences or COP jump between two adjacent cameras that are viewing an overlapping FOV can also be analyzed to assess parallax viewing differences between the cameras.
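The near-versus-infinity trajectory change described above can be sketched with a simple pinhole model, in which the true ray pivot sits some axial distance behind the assumed COP. This is a minimal illustrative sketch; the 2 mm pivot uncertainty, 8 mm focal length, and 2.5 µm pixel pitch in the example are assumptions for illustration, not data from the disclosed designs.

```python
import math

def cop_pixel_error(axial_offset_mm, field_angle_deg, object_dist_mm,
                    focal_length_mm, pixel_pitch_um):
    """Pixel-scale change in a chief ray's apparent direction when the
    true pivot sits axial_offset_mm behind the assumed COP.

    A point at field_angle_deg and object_dist_mm has lateral height
    h = D * tan(angle); from the displaced pivot its angle satisfies
    tan(angle') = h / (D + offset).  At infinity the two angles agree,
    so the result is the near-versus-infinity trajectory change,
    expressed in image pixels."""
    h = object_dist_mm * math.tan(math.radians(field_angle_deg))
    true_tan = h / (object_dist_mm + axial_offset_mm)
    shift_mm = focal_length_mm * (math.tan(math.radians(field_angle_deg)) - true_tan)
    return shift_mm * 1000.0 / pixel_pitch_um
```

Under these assumed values, a 35° field point at 3 ft (914.4 mm) shifts by roughly 5 pixels for a 2 mm pivot uncertainty, while the same point at a very large distance shifts negligibly, which is the sense in which the COP error is evaluated between a close object and one at “infinity.”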
[0035] The offset for the paraxial entrance pupil or the non-paraxial LP smudge from the device center can also depend on the application. For applications such as imaging for cinema or virtual reality (VR) content, where close-ups can be desirable, the camera system may be expected to provide in-focus images of objects that are only 3-4 feet away, while the maximum in-focus imaging distance may be only on the order of about 500 ft away. In such systems, the gaps or seams between camera channels and the nominal offset of the entrance pupil or non-paraxial LP smudge to the device center both tend to be modest (e.g., ~6 mm), to help control the blind regions in front of the camera system to be less than the minimum imaging distance.
[0036] In summary, for near imaging applications, it can be good for the COP to be far from the front vertex and near or behind the image sensor because it enables several things, including:
• Compact mechanical packaging with small optical and mechanical gaps.
• The COP can be much closer to the device center.
• This in turn means the COPs of one camera channel can be much closer to the COPs of adjacent camera channels.
• This in turn means the parallax differences between channels are smaller and that parallax errors in FOV overlap regions are smaller.
[0037] In contrast, FIG. 2, which depicts aspects of the present invention, illustrates a multi-camera system 100 with 7 low-parallax cameras 110 mounted in an arc to form a visor. For example, this type of system can enable enhanced situational awareness or safety for air or ground vehicles for “long” distance imaging with a range of several miles. As depicted herein, the camera channels 110 are independently mounted on a cylindrical frame 130 such that a portion of a camera 110 is inserted into the frame 130 and interfaces with the outer surface of the cylinder. The primary function of the cylindrical frame is to accurately locate each camera channel, but it also serves as an enclosure for the electronics and, in some cases, as a heat sink for electronics cooling. Alternatively, the system shown in FIG. 2 can take on similar embodiments consisting of multi-camera capture devices with a plurality of cameras arranged in a circular or polygonal shape. Multi-camera system 100 can also include covers or lids (not shown) on the top and bottom, which can help seal the system from internal contamination, and further include features for external mounting or to aid internal thermal control (e.g., heat sinks).
[0038] Cameras 110 can image FOV cross-sections that are nominally polygonal (e.g., rectangular or square) cones of collected and imaged incident light. In the FIG. 2 example, the corresponding polygonal outer lens element truncation is only on two opposing sides. As shown in FIG. 2, each of the example cameras 110 has about a 50 mm wide clear aperture and images a rectangular FOV, with ~15 mm wide seams or gaps between adjacent cameras. The cylindrical frame 130 has a diameter of ~200 mm. The inward tapering of the lens elements and supporting lens housings into a conical or frustum shape enables the camera channels to be closely packed together with narrow intervening gaps or seams 120. Thus, the overall system can have a high fill factor, with, for example, the ratio of the summed camera apertures (truncated size of the outer lens element) to the arced system shape > 85%. The individual camera channels are shown as having protruding hoods 115, to protect against direct solar exposure, internal ghosting, or contact by external debris. The camera channels can also include clear protective shields or windows (not shown).
[0039] The multi-camera system 100 of FIG. 2 and FIG. 3A can be used for other applications beyond DAA or sense and avoid. For example, the output image data can also be used for navigation or inspection applications. For these, and similar applications, it may also be advantageous to pivot or tilt the multi-camera system 100 downwards. Alternately, the system can also include an arc of one or more cameras that point or image with a downward tilt.
[0040] Although the gaps 120 may seem mechanically wide, these adjacent camera channels 110 are opto-mechanically mounted onto frame 130 in proximity, to maintain nominal FOV parallelism across the intervening seams 120 so as to retain the optical benefits of low-parallax control between adjacent cameras 110. For bogey aircraft detection, these cameras can image several miles distant, while also supporting a minimum in-focus imaging distance of only 50-100 feet away. For example, the individual cameras can use Teledyne 36M image sensors and provide an imaging resolution of 2-3 feet width per image pixel, at a distance of 3-5 miles out, which can be sufficient to detect a Cessna airplane. The imaging software can then position a digital ROI around the detected bogey, to enable tracking over time, with an increased relative resolution and data or frame rate compared to the surrounding bogey-free image areas. The cameras can provide an extended FOV (e.g., XFOV < 1°), including an angular FOV overlap 107 that can span both a mechanical seam 120, and a larger optical gap or seam 118 between the lens clear apertures. With this XFOV or limited FOV margin, the blind regions can be less than this minimum in-focus imaging distance. This limited XFOV both provides tolerance for camera to camera alignment and enables camera calibration and smooth image tiling or blending. But with minimal parallax differences between images captured by adjacent cameras 110, the images can be tiled together in real time, without the computational burden or image artifacts encountered during the normal image stitching processes. A low computational burden image blending software process can be applied during the image tiling to smooth out calibration differences that can cause image artifacts at the edges or gaps between adjacent lenses.
The blending can be applied dynamically or selectively in time, such as when a tracked bogey, or its associated ROI, are crossing from the FOV of one camera 110 to that of an adjacent camera 110.
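A resolution figure like the 2-3 feet of scene width per pixel at 3-5 miles quoted above follows from a single pixel's instantaneous field of view (IFOV). The sketch below is illustrative; the 20 mm focal length and 2.5 µm pixel pitch are assumed values, since the disclosure does not specify these parameters for the cited sensor.

```python
def width_per_pixel_ft(distance_miles, focal_length_mm, pixel_pitch_um):
    # Scene width subtended by one pixel at the given distance, using the
    # small-angle IFOV = pixel pitch / focal length (in radians).
    ifov_rad = (pixel_pitch_um * 1e-3) / focal_length_mm
    return distance_miles * 5280.0 * ifov_rad
```

With these assumed values, a target 4 miles away spans about 2.6 ft per pixel, consistent with the 2-3 ft figure above.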
[0041] FIG. 3A provides a cross-sectional view of a visor type multi-camera imaging system 100 of the type of FIG. 2, with more optical details, but fewer mechanical details, shown. The cameras 110 generally comprise lenses 140, which image incident light onto an image sensor (not shown). The lenses 140 include a front compressor lens group 142, which may have 1 to 3 lens elements, including at least an outer lens element that is truncated into a polygonal shape. The lens systems then further comprise a multi-lens element wide-angle group 144, which includes lens elements both before and after the aperture stop, provided prior to the image plane 146 and associated image sensor. In combination, the lens elements of the two lens groups form a polygonal image at the image sensor, where the polygonal image shape nominally corresponds to the truncated polygonal lens shape of the first lens element. [0042] FIG. 3A also depicts the nominally conical or frustum shape of the cameras 110 and lenses 140, although the enclosing lens housings are not shown. Typically, depending on the lens and system design, the taper angle 135 is in the range of 13-18 degrees. This tapering of the lens housings enables the camera channels to be collocated in close proximity with narrow intervening seams 120. FIG. 2 depicts the cameras 110 with outer lens elements that are horizontally truncated to enable narrow seams, but remain rounded vertically. However, the polygonal outer lenses and cameras can also be truncated in the vertical direction, to provide camera channels that are generally either square or rectangular in cross-section. FIG. 3A also depicts a pair of adjacent lenses receiving nominally parallel incident chief rays 145. The optical gap 118 can be measured as an angular or distance difference between the chief rays of the adjacent channels, or as a nominally smaller distance between the coated clear apertures of the two lenses.
[0043] Additionally, FIG. 3A depicts an axial or ΔZ offset distance 122 between the nominal device center 126 and the paraxial entrance pupil 124 of a camera channel 110. In applications such as this one, involving imaging of more distant objects, such as for detection of bogey aircraft (e.g., FIG. 1) that are miles distant, the imaging lenses are designed with longer focal lengths and higher magnifications than for near imaging applications (e.g., cinema and VR). As the minimum imaging distances, without focus readjustment, are longer, then the blind regions, gaps or seams 120 between adjacent camera channels, and the offset distances 122 of the paraxial or non-paraxial “entrance pupils” 124 to the device center 126 can likewise be larger, while keeping the same limited XFOV of ~0.5-1.0° per camera channel. For example, in such a system, the nominal axial offset distance 122 between the device center 126 and the paraxial entrance pupil 124 can be 30-70 mm, while the axial offset between the paraxial entrance pupil and the non-paraxial LP smudge or COP can be design optimized to still be small (e.g., < 2 mm). The offset distance 122 can also be measured as a distance ΔZ between the device center 126 and the COP of a camera channel.
[0044] As a result of this increased offset distance or difference ΔZ, the lenses 140 in the camera channels 110 can be designed differently as compared to the lenses for near-imaging applications where the offset distance is small (e.g., < 5 mm). In particular, for more distant imaging applications, parallax or perspective can still be adequately optimized even if the entrance pupil or nearby LP smudge (or NP point or COP) is located at or near the image plane 146 or even modestly in front of the image plane (see FIG. 3B). As an example, a low parallax lens system with a focal length of ~8 mm and a track length from the front lens vertex to the image plane of ~60 mm was designed, for which the NP point was acceptably optimized using the PSA sum method to a location at about 10% of the 60 mm lens length in front of the image plane.
[0045] FIG. 3B depicts a cross-sectional view of an example lens 240 with 11 optical elements designed for longer distance imaging, and that can be used in a camera 110. Lens 240 comprises a compressor lens group 242 and a wide-angle lens group 244 that images light (e.g., ray bundles 212) to an image plane 246. This example lens includes 10 lens elements and a window, which can be a UV or IR cut filter substrate. Projections of chief rays within the light ray bundles 212 are projected towards the entrance pupil 224, with the actual projected locations varying amongst both the paraxial and non-paraxial chief rays (e.g., the LP smudge). The entrance pupil 224 is located in front of the image plane 246, by an image offset distance 228, and the offset 222 spans the distance from the entrance pupil 224 to the device center 226. At least some lens elements in a lens 240 can consist of optical glass, or optical plastics, or have aspherical profile surfaces, or be meta-optics with sub-wavelength surface structures, or combinations thereof. [0046] FIG. 3C depicts a 2-D illustration of core FOVs 262 (for two adjacent channels) as areas in front of each camera 240 that are respectively imaged onto an image sensor. Three distances from the camera are shown, and the imaged area of a scene grows as the distance from the multi-camera system increases. The optical gap 118 remains constant because the chief rays are parallel at the boundary. Regions are aligned to the optic axis of the left channel. These imaged regions are not vignetted. There is a larger surrounding region with vignetting, which would typically be cropped from the captured data.
[0047] FIG. 3D then provides a 2-D illustration of imaged fields of view for two adjacent cameras 240, which are larger, as they further include XFOVs 264. The imaged FOVs are again shown as an area in front of each camera that is then imaged onto a sensor. Three distances from the camera are shown, and again the imaged area grows as the distance increases. The imaged regions are aligned to the optic axis of the left channel. The cameras are nominally designed to prevent vignetting within the XFOVs 264.
[0048] In some multi-camera systems 100 of the present type, the edges of the FOVs 260 nominally correspond to both the truncated lens edges and to the edges of active pixels on the image sensor array, with parallax optimized for nominal chief ray parallelism along the truncated lens edges. Two adjacent cameras can be mounted onto a supporting frame with these edge chief rays of the two adjacent cameras being nominally parallel to each other. This configuration essentially extends the blind regions to infinity. But if, for example, the mechanical seams, or optical gaps or blind regions are 15-25 mm wide, at the outside surface of the frame or outer lens 243, then the relative resolution loss when imaging a bogey aircraft a few to several miles away, with a pixel resolution of 1-3 ft width at that distance, is insignificant. [0049] FOV 260 can correspond to a Core FOV 262, which can be defined as the largest low parallax field of view that a given real camera lens 240 can image. Equivalently, the core FOV 262 can be defined as the sub-FOV of a camera channel whose boundaries are nominally parallel to the boundaries of its polygonal cone (see FIGS. 4A and 4B). Ideally, with small seams 160, and proper control and calibration of FOV pointing, the nominal Core FOV 262 approaches or matches an ideal FOV, where adjacent Core FOVs meet with negligible intervening gaps. However, in reality, in allowing for variations in the camera to camera alignment tolerances, some extended FOV is needed so the cameras can be less than perfectly aligned, while images are still collected. Additional extended FOV can be needed to enable geometric camera channel calibration (e.g., intrinsics and extrinsics). The extrinsic parameters represent the locations of each of the cameras in the 3-D scene. The intrinsic parameters represent the optical center and focal lengths for each of the individual cameras.
[0050] Including XFOV 264 can be accomplished by having the Core FOV correspond to an area of active image pixels on the image sensor that underfills the sensor, while leaving an outer boundary area of pixels for the extended FOV. For example, an image sensor may have 4096 x 5120 active pixels, and a smaller portion, such as 3800 x 4800 pixels, may correspond to a Core FOV, leaving an outer boundary on all four sides of ~150 pixels width. FIG. 3D illustrates cross-sections of polygonal fields of view for two adjacent camera channels, with some field of view overlap. Each camera channel has a Core FOV 262 that corresponds to an optimally parallax optimized field of view, and the pair of Core FOVs remain parallel to one another when projected out into object space (e.g., the environment or scene). Each camera channel also supports a larger extended FOV 264; the extended FOVs then overlap, thereby limiting the extent of blind regions in front of the camera, and providing margin for both mis-alignments (e.g., offsets and tilts) and camera calibration. Within an overlap FOV 266 provided between adjacent camera lenses 240, the associated XFOVs 264 may fully or partially overlap. Although parallax lens design optimization is targeted at chief ray alignment to defined edge boundaries of a Core FOV 262, the residual parallax or center of perspective differences within an XFOV 264 of a lens 240 are typically still small (e.g., < 1 pixel).
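The underfill arithmetic in the sensor example above can be checked directly, and a pixel border can be converted to an angular XFOV. This is an illustrative sketch; the focal length and pixel pitch used for the angular conversion are assumed values, not parameters of the cited sensor.

```python
import math

def border_px(active_px, core_px):
    # Pixels left on each side when the Core FOV underfills the sensor.
    return (active_px - core_px) // 2

def border_to_xfov_deg(border, pixel_pitch_um, focal_length_mm):
    # Angular extended FOV corresponding to a pixel border of that width.
    return math.degrees(math.atan(border * pixel_pitch_um * 1e-3 / focal_length_mm))
```

For the 4096 x 5120 sensor with a 3800 x 4800 Core FOV, the borders are 148 and 160 pixels, i.e., roughly the ~150 pixels cited; with assumed 2.5 µm pixels and a 20 mm focal length, a 150 pixel border corresponds to about 1.1° of XFOV, of the order of the XFOV values discussed in this disclosure.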
[0051] FIG. 3E depicts the concept of a parallax jump between two adjacent camera channels 240. The previously described LP smudge is conceptually illustrated here as an elliptical volume of a LP smudge length 275, in which projections of both paraxial and non-paraxial chief rays cross the optical axis 230. The COP has likewise been described as an exemplary location within an LP smudge where the parallax within a single camera channel 240 is minimized. The image plane is offset from the device center 226 by a distance 277, and the COP is within the LP smudge length 275 and can be offset from the image plane by distance 229. The two COPs are separated from each other by a distance 270 (e.g., causing the parallax jump). Lens design methods have been described that enable control or optimization of the size and positioning of an LP smudge for monoscopic or single channel parallax.
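The parallax jump produced by the COP separation 270 can be estimated as a small-angle stereo disparity between the two adjacent channels. The sketch below is illustrative; the 65 mm COP separation, 20 mm focal length, and 2.5 µm pixel pitch are assumptions for illustration, not values from the disclosed designs.

```python
def parallax_jump_px(cop_separation_mm, object_dist_mm,
                     focal_length_mm, pixel_pitch_um):
    # Small-angle disparity between two adjacent channels whose centers
    # of perspective are separated by cop_separation_mm, for a shared
    # object point, expressed in image pixels.
    disparity_rad = cop_separation_mm / object_dist_mm
    return focal_length_mm * disparity_rad * 1000.0 / pixel_pitch_um
```

Under these assumed values, an object at 100 ft (30480 mm) shows a jump of roughly 17 pixels, while an object a mile away shows well under a pixel, which illustrates why systems optimized for distant imaging can tolerate larger COP separations.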
[0052] However, these cameras are also designed or optimized for use in multi-channel camera systems (100, 300). Each camera channel 240 has its own LP-smudge and its own COP. But in an integrated multi-camera system, adjacent cameras 240 are offset by a seam of a finite width, and a modest FOV overlap (FIG. 3D) can be included to reduce the extent of the blind regions that correspond to the seams, resulting in a modest parallax difference in the overlap regions of the adjacent channels. As depicted in FIG. 3E, the COPs of the adjacent channels are separated from each other by a COP separation 270, whose distance or extent can be impacted by system design constraints such as the size of the image sensor package 247. These system constraints are generally “fixed” by application requirements.
[0053] In greater detail, during lens design, the location of the entrance pupil 224 or the LP smudge position and width are parameters that can be controlled to modify the COP separation 270. In a general imaging system, the entrance pupil and COP may not be in proximity. However, in lenses that have been well corrected for PSA, to control parallax within the lens, the entrance pupil and COP can be optimized in deliberate proximity. For many applications, it is desirable to position the entrance pupil behind the image plane and close to the device center 226, as this limits the amount of parallax that occurs at the boundary of two channels, by decreasing the physical separation 270 of adjacent COPs. But then the lens design can be burdened with increased lens length, diameter, weight, and cost. Appropriate trade-offs must be made for each application to determine how these should be balanced.
[0054] For applications or systems with imaging distances proximate to the cameras (e.g., < 500 ft.), a low parallax lens design can prioritize or optimize the location of the entrance pupil 224 or the LP smudge position to the device center, as well as the LP smudge size. If the intervening camera to camera gap, and particularly the optical gap, are of significant size (which, for systems imaging only tens of feet away, may be a few mm), then a blind region of missing image content can be noticed. But increasing the device center to entrance pupil distance can provide a modest COP separation and FOV increase to cover an extended field of view and shrink the blind region. This may not be an issue if the “critical feature size” is larger than the camera channel separation. When a small XFOV 264 is included in the design, the chief rays within that XFOV converge at a finite distance in front of the cameras. This is the maximum distance at which information may be hidden from the cameras (blind). It can also be desirable to have a small XFOV or overlap in these systems, to provide a FOV budget for opto-mechanical tolerances and camera calibration and image blending.
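The finite convergence distance of the XFOV chief rays, i.e., the maximum blind distance in front of a seam, can be sketched geometrically: the edge chief rays of adjacent channels each tilt inward by the per-side XFOV and meet across the optical gap. This is an illustrative sketch; the 20 mm gap and 0.5° per-side XFOV in the example are assumed values consistent in scale with, but not taken from, the designs above.

```python
import math

def blind_distance_mm(optical_gap_mm, xfov_per_side_deg):
    # Distance at which inward-tilted edge chief rays of adjacent
    # channels converge across the optical gap; beyond this distance
    # no scene content is hidden within the seam's blind region.
    return optical_gap_mm / (2.0 * math.tan(math.radians(xfov_per_side_deg)))
```

With an assumed 20 mm optical gap and 0.5° of XFOV per side, the rays converge at about 1.15 m (~3.8 ft), comfortably inside the 50-100 ft minimum in-focus distance cited for the DAA example.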
[0055] In such near imaging systems and applications, the distance between chief rays is principally controlled by the entrance pupil (LP smudge) optimization for location and extent over a camera FOV, including both the Core FOV 262 and the XFOV 264. For some context, cinema type multi-camera systems with low-parallax camera lenses image close, and the ratio of the device center to EP distance to the LP smudge length can be small, e.g., ~2:1. In such systems, lens parallax contributions from direct parallax optimization (chief rays, PSA) and from COP separation 270 can be comparable.
[0056] Whereas for systems optimized for more distant imaging applications, such as for enabling DAA sensing for collision avoidance, parallax optimization within a lens can be eased, particularly to allow greater separation between the device center and the paraxial or non-paraxial entrance pupil, or center of perspective. For example, as illustrated in FIG. 3A, the offset distance 122 can be several tens of millimeters, and the entrance pupil can be located near to, or even in front of, the image plane. This is further illustrated in FIG. 3E, for two cameras with their COPs located more distant from the device center. As the LP smudge can still be small, the ratio of the device center to EP distance to the LP smudge length can be comparatively large (e.g., ~20:1).
[0057] For such lenses, in which the entrance pupil can be located near to, or in front of, the image plane, during lens design, COP separation 270 can be allowed to increase and image aberration correction can be given proportionally more priority over reducing the PSA sum to limit parallax, by optimizing the lens with less relative weighting for the PSA_sum value in the lens merit function. In this type of application, where the camera system would be airborne, having the entrance pupil or COP near or in front of the image plane or image sensor can helpfully make the overall lens and lens housing shorter and weigh less than similar lenses with the entrance pupil located closer to the device center.
[0058] FIG. 4A depicts an outside exploded view, and FIG. 4B an inside exploded view, respectively, of a single camera channel 340 separated from the cylindrical frame 330, to illustrate an example of how the channel and frame can interface. Each camera 340 can contain a low parallax lens system 240 of the type of FIG. 3B. In particular, these figures depict portions of a multi-camera system 300 to illustrate aspects of kinematic mounting of the individual low-parallax camera channels 340 around an arced portion of a cylindrical frame 330. As in FIG. 2, FIGS. 4A and 4B depict the cameras 340 with outer lens elements that are horizontally truncated to enable narrow seams, but which remain rounded vertically. Again, as with the FIG. 2 system 100, system 300 can have lids or covers (not shown) that include mounting or thermal control features (e.g., fins). Also, as depicted, frame 330 has a cylindrical shape, but the frame can instead have a polygonal cylindrical shape (e.g., be octagonal) with congruent rectangular side faces, onto which the cameras can be kinematically mounted. In this example, the upper and lower covers or lids would nominally provide two parallel polygonal faces with a matching polygonal shape.
[0059] Although an overall camera channel 340 and its lens barrel or housing 345 are contained within a conical volume or frustum, the inner portion of the camera housing 345 locally has a square, rather than tapered, shape. This inner square portion of housing 345 nominally contains the lens elements of the wide-angle lens group and includes mounting features to interface with the image sensor board 347. In this example, the camera housing 345 is inserted into a nominally square opening or slot 335 in frame 330. A pair of shaped vee pins on the underside of the housing 345 are used to create a vee block 350, which registers to a ball feature 354 when the housing 345 is inserted into the slot 335. The inside surface of the housing 345 has vee features 352 which will register against the outer surface of the frame 330. For example, a pair of spring pins 360 are mounted within frame 330 to register the lens housing 345 into a slot 335 in the vertical (Z) direction. As another example, springs 356, mounted with shoulder screws 358 on the top and bottom sides of a slot 335, register a housing 345 into a slot 335 in the X and theta-Y directions. [0060] FIG. 4C depicts a close-up exploded 3D view of an alternate construction for a single row, visor type multi-camera system, depicting the mounting of a channel 340 onto a cylindrical frame 330. In this example, the outer portions of the camera channels 340, including the outer lens element and outer portion of the lens housing, are truncated in both the horizontal and vertical directions. In contrast, the inner portion of the housing 345 has a nominally circular cross-section that interfaces with a nominally circular slot 335 on frame 330.
[0061] In either of these examples, the vee-shaped features built into the channel housing 345 interface with the outer diameter of the frame 330, locking all but two degrees of freedom (i.e., constraining translation along the z axis and rotation about the x, y, and z axes). The remaining degrees of freedom (translation along the x and y axes) are eliminated using a small vee-block 350 on the underside of housing 345 that docks to a ball feature 354 mounted on the frame 330. Mounting hardware (screws) and compression springs 356 are employed to supply the necessary vertical and horizontal nesting forces. The example systems 300 of FIGS. 4A-C employ kinematic mounting elements, components that form a simple device providing a connection between two objects, typically amounting to six local contact areas (i.e., exact constraint). These contact areas are usually configured by combining classic kinematic or exact constraint mechanical elements such as balls, cylinders, vees, tetrahedrons, cones, and flats. The accompanying nesting or holding forces are typically supplied by springs or spring pins, but a variety of mechanisms can be used to provide the loading forces, including springs, spring or vlier pins, flexures, magnets, elastics, or adhesives, to support mounting and alignment of the cameras to the cylindrical frame. Frame 330 can also include features (not shown) to help maintain the rigidity, shape, and structural integrity of the frame, relative to withstanding external loads, vibrations, or shocks.
[0062] The alignment tolerances for the camera channels 340 to the frame 330 should be minimized to ensure that sufficient extended field of view remains for camera calibration and image blending or tiling operations. In addition to manufacturing tolerances, mounting stresses from changing environmental conditions, such as from temperature, shock, or vibration, can affect channel alignment or pointing accuracy. Achieving the required precision often necessitates the use of an exact-constraint methodology, employing kinematic components such as vee and ball features to accurately position the channels. Kinematic mounting not only contributes to repeatable positioning but also minimizes stresses from thermal expansion and vibration.
[0063] In the example designs depicted in FIGS. 4A-C, an exact constraint or kinematic mounting approach was selected so that camera channels 340 can respond to and compensate for external loads, and return to consistent positions as the system 300 experiences temperature changes, while nominally maintaining previously exhibited mounting or assembly precision. For example, positional and rotational variation of each individual camera channel 340 of less than 75 μm and 0.07 degrees, respectively, can be achieved with average milling operations and commercially available kinematic components. A camera channel 340 is mounted with spring-loaded hardware, where the springs 356 are selected to be strong enough to guide and nest the kinematic components together. These can be rigid enough to hold a camera channel 340 in the presence of shock and vibration. For example, the mounting mechanisms can be designed to withstand the residual vibration from a small multi-engine fixed-wing aircraft (maximum 0.1-inch peak-to-peak amplitude between 5 and 62 Hz). At the same time, the springs 356 or spring pins 360 allow the channels to re-nest if jostled by an unexpected shock event (e.g., 6 to 18 Gs). This functionality can be aided by lubrication between the kinematic components.
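The vibration envelope cited above can be translated into peak acceleration with the standard sinusoidal relation a = ω²A. The short sketch below is purely illustrative (it is not part of the disclosed system); it shows that a 0.1-inch peak-to-peak displacement corresponds to a small fraction of a g at 5 Hz but roughly 20 g at 62 Hz, which is consistent with nesting springs that must also survive shock events in the 6-18 G range.

```python
import math

def peak_accel_g(freq_hz: float, p2p_amplitude_in: float) -> float:
    """Peak acceleration (in g) of a sinusoidal vibration with the
    given frequency and peak-to-peak displacement amplitude."""
    amp_m = (p2p_amplitude_in / 2.0) * 0.0254   # half of p-p, inches -> meters
    omega = 2.0 * math.pi * freq_hz             # angular frequency, rad/s
    return (omega ** 2) * amp_m / 9.80665       # a = w^2 * A, expressed in g

# The stated envelope: 0.1-inch peak-to-peak between 5 and 62 Hz.
low = peak_accel_g(5.0, 0.1)    # well under 1 g at the low end
high = peak_accel_g(62.0, 0.1)  # roughly 20 g at the upper end
```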
[0064] The accumulation or stack-up of alignment tolerances can be minimized in a design such as this, where camera channels 340 are mounted directly to the frame 330 and independently of each other. This also facilitates the use of exact constraints, as there are few or no direct mechanical interactions to manage between adjacent camera channels 340, unlike in the aforementioned commonly assigned patent documents where tight space constraints motivated the use of direct kinematic interfaces between adjacent camera channels.
[0065] During design, choosing between exact constraints (kinematic) and partial use of kinematic components (partially kinematic) can be based on the requirements of the multi-camera system 300, such as systems requiring more rigidity or less positional accuracy. For example, a system can have a camera channel’s cylindrical body or housing 345 positioned into a pilot hole or slot 335 in the frame 330 and simply fastened with screws. The screws or an additional pin can be used to set the orientation. That system can be more rigid but also have larger positioning variation due to the clearances between the camera channel’s housing 345, screws, or pins, and their corresponding holes. Some of the variations can be mitigated with higher precision machining, targeted component selection, or by implementing kinematic components for at least some of the constraints (partially kinematic). It is noted that the multi-camera system 300 can also include active or passive isolation (not shown) at the mounting to the vehicle or fixture that the system is mounted to, to reduce the impact of operational shocks or vibrations. For example, if the system 300 is mounted to an aircraft, ambient vibration or shock stimulus can originate with rotors, jet engines, or other propulsion means, or from the impact of temperature changes, air or wind turbulence, or from take-off or landing events. This isolation can substantially reduce the transfer of shock and vibration impact at the system mounting interfaces, and the kinematic features can then reduce the impact of the residual environmental loads that reach the system 300.
[0066] The use of plastic lens elements within a lens 240 also increases camera channel (340) sensitivity to external temperature changes, leading to thermal defocus. Thermal defocus, predominantly caused by the materials of the lens barrel or housing 345 (e.g., aluminum), can significantly affect optical performance.
Athermalization can be at least partially achieved by using materials with a more favorable coefficient of thermal expansion (CTE) or by replacing a portion of the aluminum housing 345 with a material that has an opposite or negative CTE. FIG. 5 depicts a lens barrel, or housing 345 with a taper angle 346, in two views, assembled and disassembled. In these examples, the image sensor board 347 is bonded to a plate 367 that is mounted to a structural composite material 365 with a negative CTE to compensate for the optical changes. By controlling the length of composite material spacers 365 between the image sensor and the mounting points on the lens housing 345, a compensating thermal defocus motion can be achieved. As an example, Allvar, a negative-CTE structured material from Allvar Alloys Inc., can be used. This material can be configured in different ways, for example as plates or pins, in providing a compensating thermally sensitive motion. As a result, the optimized lens can exhibit only a few microns of residual thermal defocus across a temperature range of -15°C to +55°C.
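The spacer-length choice can be illustrated with a one-dimensional metering model: the positive expansion of the remaining housing path plus the negative expansion of the spacer should equal the lens's own focal-plane drift. All numerical values below (aluminum and negative-CTE coefficients, metering length, lens drift rate) are illustrative assumptions, not values from this disclosure.

```python
def spacer_length_mm(L_mm, alpha_housing, alpha_spacer, dfocus_dT_mm):
    """Length of negative-CTE spacer so the sensor tracks the focal
    plane: solve a_h*(L - Ls) + a_s*Ls = dfocus_dT for Ls."""
    return (alpha_housing * L_mm - dfocus_dT_mm) / (alpha_housing - alpha_spacer)

# Illustrative values (assumptions only):
alpha_al = 23e-6        # aluminum housing CTE, 1/K
alpha_neg = -30e-6      # negative-CTE alloy, 1/K (Allvar-like, assumed)
L = 40.0                # metering length from housing datum to sensor, mm
df_dT = -0.5e-3         # lens focal-plane drift, mm/K (plastic elements)

Ls = spacer_length_mm(L, alpha_al, alpha_neg, df_dT)  # spacer length, mm
# The compensated stack then drifts at the same rate as the focal plane:
residual = alpha_al * (L - Ls) + alpha_neg * Ls - df_dT
```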
[0067] Special consideration of the image sensor orientation alignment to the housing 345 or camera channel 340 can also be required due to the inward tapering of the lens housings, and the lens elements therein, that enables the camera channels 340 to be closely packed together about frame 330. The rectangular-shaped image sensor must align with respect to the truncated sides of an outer lens element 243 to ensure that the entirety of the square or rectangular image formed falls within the active pixel region of the image sensor. By comparison, for standard camera systems with round lenses, the image sensor orientation is not as critical, and the image sensor can be mounted to within several degrees of accuracy, and the cameras can then be rotated in a mount or frame to parallel-align the image capture of adjacent cameras to each other.
[0068] For example, considering the horizontal field of view, a given image sensor can be 2,160 pixels wide, with 1,740 pixels used to capture the image. This leaves 420 pixels, or 210 pixels on either side, to extend the field of view. The multi-camera system (100, 300) can be designed so that these 210 pixels, which represent 1.5 degrees of the entire FOV, overlap with the adjacent channels’ XFOV (264). This overlap FOV 266 is used for calibration, establishing camera boundaries, and absorbing errors from manufacturing tolerances, such as camera channel alignment and sensor alignment variations. For a pixel size of 2 μm, the image sensor would have to shift 210 μm or rotate 1.28 degrees before falling outside of the image FOV, assuming no other sources of error. The alignment of the image sensor (on board 347) with respect to the camera channel 340 must be much more precise so that, when combined with other errors, there is still enough XFOV for the software functions. A sample budget allocation of the XFOV’s 210 pixels might be no more than 9 pixels for part and assembly tolerances, 57 pixels for sensor alignment errors, and 35 pixels for extrinsic software calibration and camera boundary creation.
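The pixel budget above reduces to simple arithmetic. The sketch below restates it with the values given in this paragraph, confirming that the sample allocations fit comfortably inside the 210-pixel side margin:

```python
PIXELS_TOTAL = 2160      # sensor width, pixels
PIXELS_USED = 1740       # pixels capturing the core image
MARGIN_DEG = 1.5         # field angle spanned by one side margin

margin_px = (PIXELS_TOTAL - PIXELS_USED) // 2    # 210 extra pixels per side
deg_per_pixel = MARGIN_DEG / margin_px           # ~0.007 deg of field per pixel

# Sample XFOV budget (pixels); allocations must fit within the margin:
budget_px = {"part_and_assembly": 9,
             "sensor_alignment": 57,
             "calibration_and_boundaries": 35}
remaining_px = margin_px - sum(budget_px.values())   # headroom for other errors
```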
[0069] In assembling camera channels 340 for use in a multi-camera system 300, use of precise instrumentation and custom fixturing can be required to achieve the allocated sensor alignment errors. Most importantly, a unique method is needed to align the image sensor to the camera channel's truncated lens or housing edges. The fixture controls the relative positioning of an image sensor to a camera channel in six degrees of freedom by leveraging the precise kinematic features of a camera channel 340, which are normally used to mount the channel to the frame, to first mount the camera channel temporarily to the fixture. The image sensor alignment fixture can include a temporary masking fixture and pre-aligned light sources to illuminate the image sensor with optical datums, thereby creating a reference image that can be measured and used for image sensor alignment. Once the desired alignments are achieved, the image sensor can be bonded to the camera channel housing 345.
[0070] FIG. 6 depicts a cross-sectional view of an example alternate single row, visor type multi-camera system 300 with an accompanying exploded view of a mechanical gap or seam 320 between a pair of adjacent low-parallax camera channels 340. In this example, a distance measurement sensor 380, such as an inductive or capacitive proximity sensor, can be used to monitor the width of a seam 320 between adjacent camera channels 340. The sense plate in the sensor forms a capacitor with the adjacent channel, whose capacitance varies with the distance to that channel. This capacitance, formed by the sense plate and the channel, determines the frequency of an oscillator, which is conditioned into an output that can be monitored.
[0071] A capacitive distance measurement sensor 380 typically includes an oscillator, signal conditioning, an output driver, and a controller. For example, a seam width can change dynamically due to the impacts of residual shock or vibrations that have leaked through the vibration isolation and the paired kinematic features and nesting force mechanisms, causing a change or displacement from the nominal seam width. However, a distance measurement sensor 380 can provide real-time seam width data, which can then be analyzed to determine relative changes on an instantaneous or time-averaged basis. Multiple distance measurement sensors 380 can be provided in a seam 320 to provide data on changes in tilt(s) between adjacent camera channels 340. The resulting data can be used to dynamically modify the extrinsic calibration or image blending operations that can be applied to the image data coming from adjacent cameras 340. Furthermore, any changes in the values obtained during system operation can be used as feedback for recalibration. Distance sensors 380 can be inserted in gaps smaller than 1 mm with accuracy in the tenths of microns.
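The capacitance-to-distance relation behind such a sensor can be sketched with the ideal parallel-plate model (the sense-plate area below is a hypothetical value, and a real sensor's conditioning electronics would linearize the oscillator output rather than invert the model directly):

```python
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
AREA = 5e-3 * 5e-3               # hypothetical 5 mm x 5 mm sense plate, m^2

def capacitance(gap_m):
    """Parallel-plate model with an air dielectric: C = eps0 * A / d."""
    return EPS0 * AREA / gap_m

def gap_from_capacitance(c_farads):
    """Invert the model to recover seam width from a measured capacitance."""
    return EPS0 * AREA / c_farads

nominal = 0.8e-3                 # a nominal sub-millimeter (0.8 mm) seam
c0 = capacitance(nominal)        # on the order of a few tenths of a pF
# A 1 um narrowing of the seam raises the capacitance measurably:
dc = capacitance(nominal - 1e-6) - c0
```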
[0072] In the prior figures (e.g., FIGS. 2, 3, 4A-4C, and 6), visor type multi-camera systems (100, 300) have been illustrated with a single row of low-parallax cameras (140, 340) that extend part way around a cylindrical circumference. Alternately, the cameras can extend around the full circumference to provide a halo or completely annular system. FIG. 7 depicts another alternate configuration, in which a dual row of adjacent low-parallax cameras 340 is positioned to provide image capture from a more conical FOR, with an increased vertical FOV. In this example, the multiple cameras 340 can be mounted to a toroidal or barrel-shaped frame, and controlled seams and image sensor alignments are provided between adjacent cameras in both horizontal and vertical directions. Kinematic connections can be used to couple a first row of cameras to a second row of cameras, or to individually couple a camera in the first row to an adjacent camera in a second row. In the example of FIG. 7, cameras in the upper and lower rows are generally illustrated as being vertically aligned, e.g., such that each camera is aligned vertically with another camera and horizontally with at least one other camera. However, in other examples the cameras in the upper row may be offset relative to cameras in the lower row. For instance, a seam between adjacent cameras in the upper row may align vertically with a lens of one of the cameras in the lower row.
[0073] The multiple cameras 340 can provide conventional visible light imaging, infrared (IR) imaging, or hybrid visible and IR imaging (VIS & SWIR). The multiple cameras 340 can use different kinds of optical sensors, including an arrangement of conventional visible or IR image sensors and event sensors (e.g., from Prophesee.ai (Paris FR) or Oculi Inc. (Baltimore MD)). For example, event sensor cameras 341 can be provided at the outer or leading edges or boundaries of the multi-camera system 300, to have their high dynamic range and fast capture times (e.g., 10,000 fps) applied to detect a rapidly moving object. The event sensor cameras 341 can be either low-parallax or conventional cameras. The captured image data can then be used to determine an expected vectorial path of images of the rapidly moving object across the conventional image sensor(s). Region of Interest (ROI) targeting can then be determined, and conventional camera image capture targeted (e.g., resolution, capture time, or frame rate) for improved image capture of the object by the conventional camera(s).
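Determining the expected vectorial path can be as simple as linearly extrapolating the object's centroid from the event-camera detections to predict where an ROI should be placed on the conventional sensor. The sketch below is a minimal illustration of that idea; the timestamps, pixel coordinates, and ROI size are hypothetical, and a real system would map between the two sensors' coordinate frames first.

```python
def predict_roi(track, t_future, roi_half=32):
    """Linearly extrapolate an object track of (t, x, y) detections,
    using the first and last samples, to a future time; return an
    ROI as (x0, y0, x1, y1) centered on the predicted position."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    vx = (x1 - x0) / (t1 - t0)          # apparent velocity, pixels/s
    vy = (y1 - y0) / (t1 - t0)
    xf = x1 + vx * (t_future - t1)      # predicted centroid
    yf = y1 + vy * (t_future - t1)
    return (xf - roi_half, yf - roi_half, xf + roi_half, yf + roi_half)

# Hypothetical event-camera detections at 10 kHz-class timestamps:
track = [(0.0000, 100.0, 50.0), (0.0001, 101.0, 50.2), (0.0002, 102.0, 50.4)]
roi = predict_roi(track, t_future=0.001)   # ROI for the next frame capture
```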
[0074] FIG. 8 depicts an alternate configuration for a high fill factor multi-camera visor system 400, where cameras 440 are offset 450 and alternately positioned in two nominally parallel arced sub-visors 442. As shown, the conventional cameras in a given visor 442 have conventional round or circular outer lens elements and lens housing shapes. The lens housings of these cameras 440 can also have circular cross-sections, and may have a cylindrical cross-section along their length, or be tapered into a modestly angled frustum. Each of these cameras 440 has the associated image sensor (not shown), or a mask provided in close proximity thereto, functioning as a field stop, such that image light is collected into a rectangular or square FOV 445. By using conventional cameras, the special custom low-parallax lens and lens housing designs, and their associated costs, are not needed. However, a system of the type of FIG. 8 can be larger and heavier and have lower optical performance.
[0075] In such systems 400 with conventional lens designs, the lens housings are typically cylindrical, or have only slight tapering (e.g., < 5 degrees), as compared to the frustum shapes of the low-parallax cameras of FIG. 2 and FIG. 3A. As a result, these cameras 440 cannot be easily packed closely together without including optical folds from mirrors or prisms in the light paths. In the latter cases, when fold mirrors or prisms are used, the total number of camera channels that can be provided in a tight mechanical assembly is limited by the space needed for the optical folds. Thus, a system 400 with a single row visor 442 with conventional cameras 440 has a low optomechanical aperture fill factor (e.g., 20-40%) along the arc. The effective optical fill factor can be increased by having the individual cameras 440 capture image light from larger FOVs 445, so that they overlap. This reduces the blind regions between cameras 440, but the image resolution is reduced, unless the number of pixels on the image sensors is increased. With a large FOV overlap between adjacent cameras, when images are stitched together to form a panoramic composite, the computational burden and image artifacts from image stitching are increased compared to prior systems with low-parallax cameras (e.g., FIG. 2, FIG. 3A, FIGS. 4A-C).
[0076] As an alternative, FIG. 8 shows a multi-arced row visor type imaging system 400, with two abutting arced arrays of cameras or visors 442 stacked vertically in a cylindrical manner with an offset 450. Although any number of parallel arced arrays can be used, using two or three stacked arrays or layers may be the most probable. In this example, dual arced arrays of conventional cameras 440 are provided, and the camera channels in a given arced visor array 442 have a low optical fill factor along the arc, but the effective optical fill factor is increased by providing two arced visor arrays 442 of cameras 440, with the cameras 440 in the visors angularly offset around the cylindrical shape from each other. Two adjacent arced arrays can be further offset from each other by a small intervening vertical gap (not shown), while being nominally parallel to each other. Adding a second arced visor array 442 increases the number of cameras 440, but reduces the FOV and sensor resolution needed per camera. FIG. 8 depicts the visor arrays 442 and their associated frames in the upper and lower arcs as having a cylindrical shape. But one or both can instead have a polygonal cylindrical shape (e.g., be octagonal) with congruent rectangular side faces, onto which the cameras can be mounted. The multiple multi-camera arced arrays in the system of FIG. 8 can also be stacked in a barrel-like fashion, to have one arced visor array 442 be tilted vertically, inwards, or outwards, relative to a second arced array 442.
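The fill-factor improvement from staggering two rows can be quantified with a simple angular model: each row covers only the fraction of the arc subtended by its camera apertures, and an offset second row places its apertures into the first row's gaps. The camera count, aperture width, and arc below are hypothetical, chosen only to land in the 20-40% single-row range mentioned above.

```python
def angular_fill_factor(n_cameras, aperture_deg, arc_deg):
    """Fraction of the arc subtended by camera apertures in one row."""
    return n_cameras * aperture_deg / arc_deg

# Hypothetical single row: 5 cameras with 12-degree apertures on a 180-degree arc.
single = angular_fill_factor(5, 12.0, 180.0)   # ~33%: a low fill factor
# A second, angularly offset row staggers its apertures into the gaps,
# so the effective coverage along the arc roughly doubles:
effective = min(1.0, 2 * single)               # ~67% with two staggered rows
```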
[0077] This multi-row multi-camera system 400 of FIG. 8, with conventional cameras 440, is a potential alternative to the multi-camera systems (100, 300) with special low-parallax lenses that were previously depicted (e.g., FIG. 2 and FIG. 3). In providing comparable range and FOV, the dual array of conventional cameras 440 can cost less in aggregate compared to the custom low-parallax cameras (140, 340) or lenses 240, or to a single array of higher resolution, larger FOV cameras. However, the conventional cameras 440 will not have reduced parallax as provided by the optical and opto-mechanical design approaches that were used for the camera lenses of FIG. 2 and FIG. 3. Instead, this plurality of cameras 440 can provide reduced parallax by mechanical alignment of the cameras to position the horizontal edges of the FOV 445 of one camera 440 to be parallel aligned to the FOV 445 of the next camera 440. A horizontal FOV overlap from one camera to the next can be reduced to a modest 1-2°. In combination, the cameras 440 of the at least first and second multiple arrays 442 are arranged to be angularly offset from each other along the arc, such that at least one camera of the first array is nominally equally angularly positioned between two cameras of the second array, such that the three adjacent cameras function as a contiguous imaging array, as they collect image light from object space. In other examples, the “conventional” cameras 440 shown in FIG. 8 can be replaced with low-parallax cameras, such as the cameras 110 described herein.
[0078] The dual array system of FIG. 8 will also occupy a larger volume, and can weigh more, as compared to the FIG. 2 and FIG. 3A systems with multiple low-parallax cameras. These differences can matter for applications, such as airborne DAA, where the size, weight, and power (SWaP) constraints can be tight. It can also be more difficult to establish and maintain rotational alignment of adjacent cameras 440 in a dual array system 400 (FIG. 8), for a camera in an upper arced array 442 to an adjacent camera in a lower arced array 442, as compared to a single row system (FIG. 2 and FIG. 3A). Kinematic features can be used for aligning and assembling cameras 440 within a visor array 442, or between arrays 442. It is noted that this system can use cameras 440 designed with standard approaches, without parallax correction using the PSA sum or chief ray pointing, but which have truncated outer lens elements to help reduce camera channel weight and channel-to-channel spacing or seam widths. A system 400 with a plurality of visors 442 can also have low-parallax cameras, such as those of the type depicted in FIG. 3B. Relative to the arced arrays, conventional camera channels can also be truncated horizontally, vertically, or both. However, this truncation can cause vignetting.
[0079] A vertical offset 450 between the dual arced arrays 442 of a few inches will not cause much vertical resolution loss when an imaged pixel at 3-4 miles out corresponds to an area 2-3 feet wide. However, that difference can matter when imaging a bogey aircraft closer in, such as only a 1/4 mile away. Likewise, the vertical offset 450 between the adjacent dual camera visors 442 can complicate camera calibration, camera-to-camera factory alignment, and image blending and tiling operations of images from adjacent cameras 440 that can be applied, for example, when an imaged bogey aircraft, and its associated ROI, crosses from the imaged FOV 445 of one camera 440 to another. For example, the vertically offset (450) adjacent cameras 440 may see different direct light exposures (e.g., solar, glare, or object reflections) that could vary with angle or position, and where the differences are accentuated by the offset 450.
[0080] As in the systems (100, 300) depicted in FIG. 2 and FIG. 3A, the camera alignment and mounting accuracy of the multi-layered system in FIG. 8 is essential to avoid consuming too much of the extended field of view with pointing errors. Pointing accuracy in the vertical direction is more critical in a vertically arrayed system, as the alignment between layers and the respective vertical extended fields of view need to be accounted for. Nevertheless, the same principles of kinematic mounting can be applied to such a system to achieve the required accuracy under the operating environment. The gaps or seams between the channels in all these systems further enable structural stability, which can be more difficult to achieve in systems where camera channels are required to be constrained to each other due to their proximity.
[0081] The type of multi-camera system of FIG. 2, FIG. 3A, or FIG. 8 can also be “ground” mounted, for example on a pole or a building, and then used to monitor UAV or eVTOL air traffic. As such, the resulting image data can be used for collision avoidance (e.g., DAA) or for airspace monitoring for safety (e.g., keeping drones out of airports) or intrusion prevention (e.g., counter- UAS operations). For this use case, it can be advantageous to add cameras that look upwards or overhead, such that the system has a more hemispheric configuration.
[0082] In a multi-camera system, such as the dual-row system 400 in FIG. 8, that uses conventional cameras 440, it can be advantageous to adapt the calibration and blending tools of FIG. 10 to provide enhanced low-parallax imaging. But before blending can be used, a proper calibration is needed. FIG. 11 depicts a cross-sectional portion of an arced array of cameras, as can be used in the FIG. 8 system. In this example, cameras 440, which are mounted on frame 430, have conventional commercial double Gauss type lenses, which image light to an image plane 446. Projections of incident chief rays 455 and 457 are directed towards an entrance pupil 424, which has a finite size (e.g., is an LP smudge).
[0083] For example, in the FIG. 11 system, the captured images can be digitally cropped to a shape (e.g., rectangular) where the FOVs 445 match in object space. This is enabled by a calibration process which creates a mapping from pixel space to object space to know where to crop, and to avoid FOV overlap between cameras 440. There are a series of chief rays 455 within a large circular cone of a camera 440 that are parallel to a series of chief rays 455 from an adjacent camera 440, but calibration is needed to find them. For example, the cameras can be intrinsically calibrated using a dot pattern to identify chief rays which are parallel within some specification (e.g., < 0.3 deg) to define a Core FOV, and thus determine where to digitally crop the images. Then during system assembly onto a frame, the cameras can be aligned, with the benefit of targets, extrinsic calibration, and digital cropping, to position adjacent cameras with parallel cropped FOV edges adjacent to each other. After cropping the FOVs between channels, during image capture, blending can be employed utilizing a small amount of image overlap (e.g., 3% of the half FOV). There are also other chief rays 457, which are incident on the outer lens element at larger field positions, whose projections converge towards entrance pupil 424, but which cross the optical axis at a different location than the chief ray 455 projection. These rays contribute to capturing image light with an overlap with the adjacent camera to span the blind region, but with some residual parallax.
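Once calibration has established the mapping from field angle to pixel position, the crop boundary for a Core FOV of a given half-angle can be approximated with a simple pinhole model (a real calibration would use the measured mapping, including distortion, rather than the ideal model). The focal length, pixel pitch, Core half-FOV, and center pixel below are hypothetical values.

```python
import math

def crop_half_width_px(focal_mm, pixel_um, half_fov_deg):
    """Pinhole-model half-width, in pixels, of the crop bounding a
    Core FOV of the given half-angle: n = f * tan(theta) / pitch."""
    return focal_mm * math.tan(math.radians(half_fov_deg)) / (pixel_um * 1e-3)

def crop_bounds(center_px, half_width_px):
    """Symmetric crop about the calibrated image center pixel."""
    return (center_px - half_width_px, center_px + half_width_px)

# Hypothetical lens: 8 mm focal length, 2 um pixels, 12-degree Core half-FOV.
half_w = crop_half_width_px(8.0, 2.0, 12.0)   # roughly 850 pixels
bounds = crop_bounds(1080.0, half_w)          # crop columns on a 2160-wide sensor
```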
[0084] This ray mapping approach can also be applied to a multi-camera system with conventional cameras with an internal fold mirror or prism to provide closer mechanical packing of the multiple cameras. This approach can also be improved by selecting conventional cameras that have been analyzed or tested to determine that they have advantaged entrance pupil positioning. For example, lenses can be selected that, once mounted on a frame, preferentially enable a nominal separation of entrance pupils (or COP offset) that is small relative to the size of the features being detected (e.g., < 1/5th the feature size). And the size of the LP smudge should be on the order of the entrance pupil offset (e.g., ~ 1/10th the EP offset). A physical mask can also be provided on the outside of the camera, aligned with the cropped FOV shape (e.g., rectangular), to function as a fuzzy field stop and enhance the contrast of the FOV edges. As another option, during calibration, a target can be used to measure the magnification of a given lens, and then the number of pixels spanning the target FOV can be calculated. A target can then be used to aim or align the center pixel at the center of the target, while the camera tilts and displacements, relative to physical datums or features on the camera housings, that are needed to provide that alignment are measured. That data can be used when aligning the camera to the frame.
Object Detection, Recognition, Depth Estimation and Tracking
[0085] This low parallax camera imaging technology, as exemplified by the systems of FIG. 2 and FIG. 3A, can be applied to the detect and avoid application (FIG. 1), where real-time stitch-free, panoramic imaging can enable situational awareness. The image data acquired by the image sensors can be output to an image processor, containing a GPU, FPGA, or SOC, on which algorithms are used to examine an airspace, as sampled by the imaged FOVs from each of the cameras, to look for one or more bogey aircraft. If a bogey aircraft 160, such as a Cessna 172, is detected, the DAA software is then used to track it within the imaged FOV 105. This data can then be output to another processor which assesses the current collision risk and determines appropriate collision avoidance maneuvers. That data can then be delivered to an autopilot, a pilot, or a remote operator.
[0086] The DAA bogey detection software can simultaneously monitor each camera’s FOV in its entirety, or subsets thereof, using iterative windowing. FIG. 9 depicts an example, with the image of a bogey 160 being tracked within an ROI 280 as the image approaches a seam or overlap FOV 266 for two adjacent cameras (140, 340). As real-time detection of bogey or non-cooperative aircraft flying in an airspace can be a difficult task, and can impose a significant computational burden, windowing, in which a camera’s full FOV is scanned for something new at a reduced frame rate (e.g., 1-5 fps), can be valuable. Once a potential bogey 160 is detected, it can be adaptively tracked using a lightweight, unsophisticated program to look for changes in lighting, attitude, or orientation over time. This software can also track multiple objects at once within a FOV 135 of a single camera 110, or within the FOR of multiple cameras.
[0087] DAA software can include algorithms to recognize or classify objects, with priority being directed at the fastest or closest bogeys over others. For bogey detection, the Haar Cascade classifier can be used to detect specific objects based on their features, such as size, shape, and color. Bogey range estimation can then be enabled by bogey recognition, stereo camera detection, LIDAR scanning, or radar. Once a potential bogey is detected, a lightweight tracking algorithm such as the Kanade-Lucas-Tomasi (KLT) tracker can be used to track the bogey's movement over time. Bogey tracking can be aided by using a tracking window or region of interest (ROI) or instantaneous FOV (IFOV) that can be modestly bigger than the captured image of the bogey, but which is much smaller than a camera channel’s full FOV. Multiple objects can be tracked simultaneously using a multi-object tracker such as the Multiple Object Tracker (MOT) algorithm.
[0088] To estimate bogey distance or range, various sensors such as stereo cameras, LIDAR, or radar can be used. For stereo camera detection, algorithms such as the Semi-Global Matching (SGM) algorithm can be used to compute depth maps and estimate range. For LIDAR or radar, signal processing algorithms can be used to estimate range based on time-of-flight or Doppler shift. When using a monoscopic camera alone, depth estimation can be a challenging problem. Some methods to determine depth from monoscopic imagery include identifying the object and looking up its size in a lookup table. Knowledge of the object's size and the number of pixels it subtends can be used to estimate its range. Another method is to use depth from focus, where the image sensor position is adjusted to find the position of best focus. This knowledge can be used to determine the approximate distance to the object. Machine learning and neural networks can also be employed to estimate range from a large training set of data.
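The size-lookup range method reduces to similar triangles: the object's known physical size, its subtended pixel count, and the lens focal length give the range directly. The sketch below uses hypothetical lens and sensor values; the wingspan figure is only an illustrative lookup-table entry for a recognized aircraft type.

```python
def range_from_size(focal_mm, pixel_um, object_size_m, pixels_subtended):
    """Monoscopic range estimate for a recognized object of known size:
    similar triangles give R = f * S / (n * pitch)."""
    pitch_m = pixel_um * 1e-6
    return (focal_mm * 1e-3) * object_size_m / (pixels_subtended * pitch_m)

# Hypothetical: an ~11 m wingspan (lookup-table entry for a small aircraft)
# subtending 8 pixels on a 2 um pitch sensor behind an 8 mm lens:
r = range_from_size(8.0, 2.0, 11.0, 8)   # range on the order of a few miles
```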
[0089] A circumstance can occur, when a low parallax multi-camera system (e.g., FIG. 1), mounted on a first aircraft (an own ship), is used to capture images to help enable aircraft collision avoidance via DAA software analysis, in which a bogey 160 travels through or within an overlap region 127 between two adjacent cameras, as it flies either towards or away from the first aircraft. For this type of application, whether a visor system is deployed on an air or ground vehicle, the plurality of cameras can enable panoramic situational awareness of events or objects within an observed environment. For some applications, it can be advantageous to apply a blending method (e.g., FIG. 10) to the plurality of overlap regions, to produce a seamless panoramic image for object or DAA detection analysis. In other applications, such as airborne DAA, where constraints may strongly limit the system capabilities, it may be preferable to analyze imagery from each camera 110 separately and prioritize computational power at any given time to image content from one or more portions of a camera’s FOV where a bogey 160 has been detected. In such cases, an image blending method (FIG. 10) can be applied selectively only when a bogey aircraft is traversing an overlap region, and for a short time both prior to and after such a traversal (FIG. 9). During such circumstances, the blending method can preferentially be applied locally, within an oversized digital window that includes the bogey image, to follow the bogey through the overlap region FOV 107 from a first camera 110 to a second camera 110. Alternately, the blending method can be applied to a larger portion, or the entirety, of the overlap region between the two cameras 110, without necessarily applying it to the overlap regions between other camera pairings.
[0090] The image transition from one camera source to another can be managed by a form of image rendering known as blending.
In the case of using adjacent low-parallax cameras, parallax errors, background differences, and scene motion issues are reduced, as is the amount of FOV overlap between cameras. Image blending combines two images to provide nominally the same pixel values, or smooth transitions, in the overlap region. This intermediate process of image blending can be advantageously used without the larger burdens of image stitching or the abruptness of image tiling without image averaging.
[0091] In a multi-camera system, adjacent images captured by adjacent cameras can be assembled into a panoramic composite by image tiling, stitching, or blending. In image tiling, the adjacent images are each cropped to their predetermined FOV and then aligned together, side by side, to form a composite image. The individual images can be enhanced by intrinsic, colorimetric, and extrinsic calibrations and corrections prior to tiling. While this approach is computationally quick, image artifacts and differences can occur at or near the tiled edges.
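The tiling step described above can be sketched minimally as cropping each camera image to its predetermined FOV and abutting the crops side by side. The function name and crop values are illustrative assumptions, and no blending is performed, so edge differences can remain visible:

```python
import numpy as np

def tile_images(images, crops):
    """Crop each camera image to its predetermined FOV, given as a
    (row0, row1, col0, col1) slice, then abut the crops side by side.
    No averaging is applied across the tiled edges."""
    strips = [img[r0:r1, c0:c1] for img, (r0, r1, c0, c1) in zip(images, crops)]
    heights = {s.shape[0] for s in strips}
    if len(heights) != 1:
        raise ValueError("cropped strips must share a common height")
    return np.concatenate(strips, axis=1)

# Two synthetic adjacent camera frames with one column of overlap each.
left = np.full((4, 6), 10, dtype=np.uint8)
right = np.full((4, 6), 200, dtype=np.uint8)
panorama = tile_images([left, right], [(0, 4, 0, 5), (0, 4, 1, 6)])
# panorama.shape == (4, 10)
```

Intrinsic, colorimetric, and extrinsic corrections would be applied to each frame before this step, as the paragraph notes.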
[0092] By comparison, image stitching is the process of combining multiple images with overlapping fields of view to produce a segmented panorama or high-resolution image. Most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results. For example, algorithms that combine direct pixel-to-pixel comparisons with gradient descent can be used to estimate these parameters. Distinctive features can be found in each image and then efficiently matched to rapidly establish correspondences between pairs of images. When multiple images exist in a panorama, techniques have been developed to compute a globally consistent set of alignments and to efficiently discover which images overlap one another. A final compositing surface onto which to warp or projectively transform and place all of the aligned images is needed, as are algorithms to seamlessly blend the overlapping images, even in the presence of parallax, lens distortion, scene motion, and exposure differences. However, differences in illumination and exposure, background differences, scene motion, camera performance, and parallax, can create detectable artifacts. In the case of using adjacent low-parallax cameras, parallax errors, background differences, and scene motion issues are reduced, as is the amount of FOV overlap between cameras. An intermediate process of image blending, which is a form of image rendering, can then be advantageously used, without the larger burdens of image stitching. Image blending combines two images to ensure nominally the same pixel values, or smooth transitions, for content from adjacent cameras in a local overlap region. 
If the residual parallax errors within the extended FOVs that capture content in or near the seams are similarly small enough, and the two adjacent cameras are appropriately aligned to one another, then the image content captured by the two cameras in the overlap can be quickly cropped, or locally averaged or blended together, and included in the output panoramic images. Blending can, for example, apply weighted averaging to image content seen by two adjacent cameras, based on the distances or estimated distances from the centers of the images.
[0093] The image blending method can also be optimized for the application, using, for example, a frequency decomposition to identify and favor the camera that locally provides better image quality, or using the parallax data for a camera lens 240 to locally correct away from an ideal virtual pinhole assumption. As an example, when an object is tracked across an adjacent camera’s boundary by using a Kalman filter to predict the object’s location, the intrinsics and extrinsics calibration data for both cameras can be used to form a perspective projection of the pixels as defined by the Kalman filter. During such operations, the DAA system, including the visor camera system (100, 300), can use data from an inertial measurement unit (IMU) to help correct for changes in the aircraft’s own motion or vibrations. Data collected by distance sensors 380 mounted within the seams (FIG. 6) between adjacent lens housings can also be used to dynamically adapt the application of extrinsic calibration data, or the image blending (e.g., FIG. 10), or both, at least locally where a bogey and ROI are crossing a seam or overlap FOV. For example, if a seam width changes, measured position or tilt data can be used to modify application of the stored extrinsic data, and thus the application of the relative intrinsics data, to actively modify image blending. Parameters in the image blending algorithm can also be changed directly.
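The Kalman-filter prediction of a tracked object's pixel location, e.g., to pre-position an ROI as a bogey approaches a seam, can be sketched with a constant-velocity model. This is an illustrative assumption, not the patent's implementation; the noise values are arbitrary:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over state [x, y, vx, vy],
    predicting a tracked object's next pixel location."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 100.0          # large initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01           # process noise (assumed)
        self.R = np.eye(2) * 1.0            # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x             # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# A bogey detected moving 5 px/frame along x; after a few updates the
# filter predicts where it will cross into the adjacent camera's FOV.
kf = ConstantVelocityKF(0.0, 0.0)
for mx in (0.0, 5.0, 10.0, 15.0):
    kf.predict()
    kf.update(mx, 0.0)
x_pred, y_pred = kf.predict()  # expected near (20, 0)
```

The predicted location would then be mapped through both cameras' intrinsics and extrinsics to place the ROI in the neighboring frame.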
[0094] For this type of DAA application, or for UAV or eVTOL traffic monitoring, or other applications, it can be advantageous to have a dual visor or halo system, in which a second visor or halo system is offset out of plane, parallel to a first one. This second visor or halo system can also image the same spectral band (e.g., visible, with or without RGB color), so that in cooperation with the first system, stereo imaging and range or depth detection is enabled. Alternately, the second visor can be equipped with another sensing modality, such as monochrome, LIDAR, IR, or event sensor cameras. The monochrome camera can be filled in with color data, using a trained neural network that uses up-resolution techniques to merge the color data with the higher resolution monochrome camera feed. When an event sensor is used, high framerates of 10k FPS or more can also be used to detect sounds in the video feed.
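For the stereo-capable dual-visor arrangement, depth follows from the standard rectified-stereo relation z = f·B/d, where B is the visor-to-visor baseline and d the pixel disparity. A minimal sketch with illustrative, assumed numbers:

```python
def stereo_depth_m(focal_length_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair formed by two offset camera
    arrays: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example (assumed values): 2000 px focal length, 0.5 m baseline
# between the two visors, 4 px measured disparity -> 250 m range.
z = stereo_depth_m(2000.0, 0.5, 4.0)
```

The disparity itself would come from a matcher such as the SGM algorithm mentioned in paragraph [0088].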
[0095] Image blending can be applied in the overlap regions 266 of one or both cameras, either generally, or selectively, as needed. Also, the offset camera arrays can be positioned with their overlap regions aligned to one another, or with a radial offset. In the latter case, image data from one camera array can be used to inform image blending in a corresponding overlap region of the other camera array.
[0096] As discussed previously, the parallax data for lenses 240 can be applied, using modeled or measured data, to modify the weighting factors over a lens field of view that are applied during image blending, to enable a more accurate blending of image content of key features in a scene. As another example, the image data in overlap regions 107 can be analyzed via frequency decomposition, to identify the best image data available from either of the adjacent cameras 110. The better-quality image data can then be favored for at least key image features during a local blending in an overlap region. Image blending can also be applied selectively, in overlap regions, or portions thereof, where high quality photogrammetric image data is needed, but skipped elsewhere where the content is feature poor. This blending method, or variants thereof, can also be applied with the multi-camera systems 300 of FIG. 7 and 400 of FIG. 8, respectively.
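One simple way to realize the frequency-decomposition comparison is to compare the fraction of high-frequency spectral energy in corresponding patches from the two cameras and favor the sharper one. This is an illustrative sketch under assumed parameters, not the specification's algorithm:

```python
import numpy as np

def high_frequency_energy(patch, cutoff_frac=0.25):
    """Fraction of 2D spectral power above a radial frequency cutoff;
    a sharper patch carries relatively more high-frequency energy."""
    F = np.fft.fftshift(np.fft.fft2(patch.astype(float)))
    power = np.abs(F) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = cutoff_frac * min(h, w)
    total = power.sum()
    return power[r > cutoff].sum() / total if total > 0 else 0.0

def favor_sharper(patch_a, patch_b):
    """Return whichever camera's patch has more high-frequency content."""
    if high_frequency_energy(patch_a) >= high_frequency_energy(patch_b):
        return patch_a
    return patch_b

# Synthetic comparison: a checkerboard (sharp) vs. a featureless patch.
sharp = (np.indices((16, 16)).sum(axis=0) % 2).astype(float) * 255.0
flat = np.full((16, 16), 128.0)
chosen = favor_sharper(sharp, flat)
```

In practice the comparison would run per-tile inside the overlap region, and the winning camera's weights would be boosted locally during blending.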
[0097] For the multi-camera system of the present invention, the flowchart in FIG. 10 illustrates a preferred image blending method that can be employed in a processor within a multi-camera system to create a blend within a region of overlap in which two cameras are contributing image data (FIG. 3D and FIG. 9). The field of view overlap region is enabled by designing the camera lenses to have an extended field of view. There are at least four parameters that can degrade a composite blended or tiled image at the boundary or overlap region where two adjacent images are combined: color changes, alignment errors, missing data (optical gap), and parallax alignment differences versus depth or angle, which can then cause image artifacts. For example, a “just noticeable difference” (JND) metric can be used to measure local color, pattern, or content discontinuities between images of an object captured by two adjacent cameras within an overlap region. In a first step of the exemplary blending method of FIG. 10, it is determined that the image corresponds to an overlap region, and which two cameras are contributing the image data. For example, when creating an equirectangular projection, it is determined which cameras in the system have valid image data to contribute to the projection at that pixel. The pixel selections can depend on both the camera angle in the overlapping or extended FOV, and on the object distance. [0098] In a second step of the example blending method of FIG. 10, the FOV angle for each camera is determined, and in a third step, a FOV angular distance for each camera and the distance to the bisecting plane are calculated to determine which quadrant of the overlap region the image pixels are in. In a fourth step, the appropriate linear coefficients are used to estimate the distance to the edge of the area of overlap.
This step can include determining and applying a mean RMS re-projection error, at multiple object-space conjugates, to yield a measure of the field overlap, so as to apply and improve the determined pinhole variation along the lens edges. In a fifth step, this information is used to determine how much each camera contributes to the final RGB values. The method described can be referred to as spatially varying alpha blending. In this method, image data from more than one camera is combined as a weighted average. The weights are normalized to sum to 1.0 and are proportional to the relative “closeness” to a given camera’s center pixel.
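The spatially varying alpha blending described in the fifth step, with weights normalized to sum to 1.0 and proportional to relative closeness to each camera's center, might be sketched for a one-dimensional overlap strip as follows. The linear ramp is an assumed closeness measure, a simplification of the angular distances computed in the earlier steps:

```python
import numpy as np

def spatially_varying_alpha_blend(strip_a, strip_b):
    """Blend two co-registered overlap strips with per-column weights
    that sum to 1.0 and are proportional to each pixel's relative
    closeness to the contributing camera's center (camera A lies to the
    left of the overlap, camera B to the right)."""
    if strip_a.shape != strip_b.shape:
        raise ValueError("overlap strips must be co-registered")
    w = strip_a.shape[1]
    wa = np.linspace(1.0, 0.0, w)      # weight ramp for camera A
    wb = 1.0 - wa                       # camera B gets the remainder
    shape = (1, w) + (1,) * (strip_a.ndim - 2)
    wa = wa.reshape(shape)
    wb = wb.reshape(shape)
    return wa * strip_a.astype(float) + wb * strip_b.astype(float)

# Synthetic five-column overlap: camera A sees 100, camera B sees 200.
a = np.full((2, 5), 100.0)
b = np.full((2, 5), 200.0)
blended = spatially_varying_alpha_blend(a, b)
# leftmost column is all camera A, rightmost all camera B,
# the middle column the even average
```

Geometric pixel-to-pixel correspondence from the intrinsic and extrinsic calibration is assumed to have been applied before this step.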
[0099] In greater detail, when applying image blending (e.g., FIG. 10) at the fifth step, the image intensities of any given pixel that is seen by both cameras are averaged together. The output images are first corrected for radiometric or colorimetric variations using predetermined calibration data. Likewise, using predetermined intrinsic and extrinsic geometric calibration data, the pixel-to-pixel correspondence of the image pixels in the overlap region between cameras is predetermined. Then, the output pixel values of the corresponding pixels within the overlap region are averaged together using one or more weighting factors. The resulting corrected images can be kept separate or combined into a larger panoramic image. Thus, the effect of the blending method is to provide a smooth transition from one camera color to another.
[0100] The blending method of FIG. 10 can use a spatially varying alpha transparency blending in which image data from more than one camera is combined as a weighted average. The weights are normalized to sum to 1.0 and are proportional to the relative “closeness” to each camera region or to each camera’s optical axis (central pixel). Another approach to blending more than one camera within a region of overlap can be referred to as spatially varying stochastic blending. This method is similar to alpha blending, but instead of combining the image data from multiple cameras, the weights are used to control a stochastic sampling of the corresponding cameras. In particular, stochastic sampling is a Monte Carlo technique in which an image is sampled at appropriate nonuniformly spaced locations rather than at regularly spaced locations. Both of these image blending methods are agnostic to the content of the images.
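Spatially varying stochastic blending can be sketched by reusing the alpha weights as per-pixel sampling probabilities rather than averaging coefficients, i.e., a Monte Carlo selection between the two cameras. This is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

def stochastic_blend(strip_a, strip_b, weights_a, rng=None):
    """Spatially varying stochastic blending: the per-pixel weight is
    the probability of sampling camera A at that pixel, instead of
    being used as an averaging coefficient."""
    rng = np.random.default_rng(rng)
    pick_a = rng.random(strip_a.shape[:2]) < weights_a
    if strip_a.ndim == 3:               # broadcast mask over channels
        pick_a = pick_a[..., None]
    return np.where(pick_a, strip_a, strip_b)

# Degenerate cases make the behavior visible: weight 1 everywhere
# selects only camera A, weight 0 everywhere only camera B.
a = np.full((3, 4), 10.0)
b = np.full((3, 4), 90.0)
all_a = stochastic_blend(a, b, np.ones((3, 4)), rng=0)
all_b = stochastic_blend(a, b, np.zeros((3, 4)), rng=0)
```

Intermediate weights dither the seam rather than cross-fading it, which can avoid the slight softening that averaging introduces.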
[0101] It is noted that the blending method of FIG. 10 can be adapted for use in a multi-camera system in which imaging algorithms for creating equirectangular projections are embedded in a field programmable gate array (FPGA) or other comparable processor, where ongoing or on-demand pixel projection recalculation can be used to enable image blending. The blending correction values can be rapidly recalculated with little memory burden in real time. Alternately, the image blending method of FIG. 10 can be applied to multi-camera systems by evaluating the overlap regions and using a “grassfire”-based algorithm to control the blending between cameras in the overlap regions. The grassfire algorithm expresses the length of the shortest path from a pixel to the boundary of the region containing it, and is advantaged for applications that can support the use of a large, precomputed grassfire mapping LUT, which needs significant memory when creating the panoramic image re-projection.
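The grassfire transform itself, the shortest path from a pixel to its region boundary, can be sketched with a breadth-first search; the precomputed result could serve as the blending LUT mentioned above. The implementation below is illustrative, not the patent's:

```python
import numpy as np
from collections import deque

def grassfire_distance(mask):
    """Grassfire transform: for each True pixel of a 2D boolean mask,
    the 4-connected shortest-path distance to the region boundary
    (border pixels of the region start at distance 1)."""
    h, w = mask.shape
    dist = np.full((h, w), -1, dtype=int)
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                # A pixel on the image edge, or next to a False pixel,
                # is a boundary pixel of the region.
                if (y in (0, h - 1) or x in (0, w - 1)
                        or not (mask[y - 1, x] and mask[y + 1, x]
                                and mask[y, x - 1] and mask[y, x + 1])):
                    dist[y, x] = 1
                    q.append((y, x))
    while q:                            # BFS fire-front propagation
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and dist[ny, nx] < 0:
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    return dist

# On a 5x5 overlap region, the center pixel is 3 steps from the boundary.
d = grassfire_distance(np.ones((5, 5), dtype=bool))
```

Normalizing each camera's grassfire distance against the sum over the contributing cameras would yield blending weights that fall to zero exactly at the region boundary.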
[0102] An image blending method (e.g., FIG. 10) can be applied selectively across some overlap regions 127, if objects or features of interest are identified therein. Alternately, when a panoramic composite image is wanted, image blending can be applied selectively for an overlap region, or an ROI therein, when the image data therein is of high quality (e.g., MTF) and high confidence. [0103] For some applications, it can be advantageous to apply a blending method (FIG. 10) to the plurality of overlap regions, to produce a seamless panoramic image for object or DAA detection analysis. In other applications, such as airborne DAA, where constraints may strongly limit the system capabilities, it may be preferable to analyze imagery from each camera separately and prioritize computational power at any given time to image content from one or more portions of a camera’s FOV where a bogey has been detected. In such cases, an image blending method (FIG. 10) can be applied selectively only when a bogey aircraft is traversing an overlap region, and for a short time both prior to and after such a traversal. During such circumstances, the blending method can preferentially be applied locally, within an oversized digital window that includes the bogey image, to follow the bogey through the overlap region from a first camera to a second camera. Alternately, the blending method can be applied to a larger portion, or the entirety of the overlap region between the two cameras, without necessarily applying it to the overlap regions between other camera pairings.
[0104] It is also noted that for applications including photogrammetry and collision avoidance, where accurate range data to an object or feature is needed, the optical designs of the low-parallax cameras 110 can be optimized to enable co-axial imaging and LIDAR. As one example, the camera optical designs can include a low-parallax objective lens paired with an imaging relay lens system, the latter having an extended optical path in which a beam splitter can be included to have an image sensor in one path, and a LIDAR scanning system in another path. Alternately, the beam splitter can be embedded in the low-parallax objective lens design, with the imaging sensor and the LIDAR scanning system both working directly with the objective lens optics and light paths. As another alternative, a single LIDAR scanning system can be shared across multiple low-parallax objective lenses. In this system, light from a laser source is directed through beam shaping optics and off of MEMS scan mirrors, to scan through a given camera system. The beam splitters would direct image light out of the plane of the page. Although the LIDAR beam resolution may not match a camera’s imaging resolution, that can be partially compensated for by controlling the LIDAR scan addressing resolution.
[0105] For example, for photogrammetry applications, LIDAR can have less resolution than the low-parallax imaging cameras, and this will subsample the imaged object and the 3D model. However, the LIDAR data can add accuracy to the range or depth measurements of an imaged object or features therein. Using the subsampled range data, and the delta between the estimated 3D point using photogrammetry and its actual value determined with LIDAR, interpolation can be used to accurately determine a correct 3D location for scanned 3D points and intermediate points in between. The LIDAR data adds depth information to spherical image data, such that multiple RGB-D spherical images can be fused together to create a 3D or 4D vector space representation.
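The delta-based interpolation between sparse LIDAR samples and photogrammetric estimates might be sketched in one dimension as follows: compute the photogrammetry-to-LIDAR correction at the scanned points, then linearly interpolate that correction to intermediate points. The function and numbers are illustrative assumptions:

```python
import numpy as np

def correct_depths(sample_x, photogrammetry_at_samples, lidar_at_samples,
                   query_x, photogrammetry_at_query):
    """At sparsely scanned positions, compute the delta between the
    photogrammetric depth estimate and the LIDAR-measured depth, then
    linearly interpolate that correction to intermediate positions."""
    delta = (np.asarray(lidar_at_samples, dtype=float)
             - np.asarray(photogrammetry_at_samples, dtype=float))
    correction = np.interp(query_x, sample_x, delta)
    return np.asarray(photogrammetry_at_query, dtype=float) + correction

# LIDAR scanned at x=0 and x=10 reads 102 m and 106 m where
# photogrammetry estimated 100 m; at x=5 the interpolated correction
# (+4 m) is applied to the photogrammetric estimate.
corrected = correct_depths([0.0, 10.0], [100.0, 100.0],
                           [102.0, 106.0], 5.0, 100.0)
```

A full implementation would interpolate in 3D over the scan grid rather than along one axis, but the correction-then-interpolate structure is the same.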
[0106] While the subject technology has been described with respect to preferred embodiments, those skilled in the art will readily appreciate that various changes and/or modifications can be made to the subject technology without departing from the spirit or scope of the subject technology. For example, each claim may depend from any or all claims in a multiple dependent manner even though such has not been originally claimed.

Claims

1. A multi-camera imaging system comprising: a cylindrical frame; a first camera kinematically mounted to the cylindrical frame, the first camera comprising a first housing and a plurality of first optical elements disposed in the housing, the first optical elements including a first outer optical element that is truncated to have at least a pair of first nominally parallel edges, the first camera being configured to capture a first field of view having first angular edges nominally parallel to the first nominally parallel edges; and a second camera kinematically mounted to the cylindrical frame, the second camera comprising a second housing and a plurality of second optical elements disposed in the second housing, the second optical elements including a second outer optical element that is truncated to have at least a pair of second nominally parallel edges and capturing a second field of view having second angular edges nominally parallel to the second nominally parallel edges, wherein: the first camera is optically designed with nominally low parallax along the first nominally parallel edges and the second camera is optically designed with nominally low parallax along the second nominally parallel edges by controlling a position of an entrance pupil and extent, the first camera is disposed adjacent to the second camera such that the first field of view overlaps the second field of view at an overlap region along an optical gap width between an edge of the first nominally parallel edges and an edge of the second nominally parallel edges, and image artifacts in the overlap region are reduced by both reduced perspective differences and reduced dynamic misalignments between the first camera and the second camera.
2. The multi-camera imaging system of claim 1, further comprising one or more kinematic elements configured to kinematically mount at least one of the first camera or the second camera to the cylindrical frame, wherein the kinematic elements comprise one or more of balls, cylinders, vees, tetrahedrons, cones, or flats.
3. The multi-camera imaging system of claim 1 wherein the holding forces are provided by springs, spring pins, flexures, magnets, elastics, adhesives, or combinations thereof, to support mounting and alignment of the cameras to the cylindrical frame.
4. The multi-camera imaging system of claim 1 wherein first image content captured by the first camera in the overlap region and second image content captured by the second camera in the overlap region are blended to provide a smooth transition when combining the first image content and the second image content.
5. The multi-camera imaging system of claim 4 wherein the blending is provided dynamically within the overlap region when a region of interest associated with specific captured image content traverses at least a portion of the overlapped field of view.
6. The multi-camera imaging system of claim 4 wherein the blending is provided dynamically within the overlap region based on data acquired by a sensor measuring change in distance between the first housing and the second housing.
7. The multi-camera imaging system of claim 1 wherein parallax for the first camera is determined based at least in part by a distance from an entrance pupil of the first camera or center of perspective (COP) associated with the first camera to a device center of the multi-camera imaging system.
8. The multi-camera imaging system of claim 7, wherein parallax for the first camera or for the second camera is further determined by a distance between the COP of the first camera and the COP of the second camera.
9. The multi-camera imaging system of claim 7, wherein parallax for the first camera is further determined by a length of a low-parallax volume that includes the projection of both paraxial and non-paraxial chief rays incident to the first camera to the entrance pupil or the COP of the first camera.
10. The multi-camera imaging system of claim 1 wherein the first camera or the second camera is subject to thermal defocus, the multi-camera imaging system further comprising: an image sensor; and a structure comprising a negative CTE material coupling the image sensor to the first housing.
11. The multi-camera imaging system of claim 1 wherein the first camera and the second camera comprise a first row of cameras, the multi-camera imaging system further comprising: a second row of cameras comprising at least two adjacent cameras adjacent to the first row of cameras.
12. The multi-camera imaging system of claim 11, wherein a seam between the first camera and the second camera is offset relative to a seam between adjacent cameras in the second row of cameras.
13. The multi-camera imaging system of claim 11, wherein the first field of view and the second field of view are offset horizontally relative to fields of views of cameras in the second row of cameras.
14. The multi-camera imaging system of claim 1, wherein the first camera and the second camera are optically designed to control parallax, such that, in combination, image artifacts from residual parallax differences and dynamic misalignments between the first camera and the second camera are limited.
15. The multi-camera imaging system of claim 1, wherein the first housing or the second housing is nominally maintained in position and orientation in 6 degrees of freedom (DOF) relative to the cylindrical frame by a set of kinematic features and complementary holding force mechanisms.
PCT/US2024/037826 2023-07-14 2024-07-12 Visor type camera array systems Pending WO2025122199A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202480043426.7A CN121444469A (en) 2023-07-14 2024-07-12 Mask-type camera array system
PCT/US2025/011731 WO2026015170A1 (en) 2024-07-12 2025-01-15 Image compositing for adjacent cameras with low-parallax imaging

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202363513721P 2023-07-14 2023-07-14
US202363513707P 2023-07-14 2023-07-14
US63/513,721 2023-07-14
US63/513,707 2023-07-14

Publications (2)

Publication Number Publication Date
WO2025122199A2 true WO2025122199A2 (en) 2025-06-12
WO2025122199A3 WO2025122199A3 (en) 2025-08-14

Family

ID=94211292

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2024/037872 Pending WO2025116985A2 (en) 2023-07-14 2024-07-12 Image compositing with adjacent low-parallax cameras
PCT/US2024/037826 Pending WO2025122199A2 (en) 2023-07-14 2024-07-12 Visor type camera array systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2024/037872 Pending WO2025116985A2 (en) 2023-07-14 2024-07-12 Image compositing with adjacent low-parallax cameras

Country Status (3)

Country Link
US (1) US20250022103A1 (en)
CN (1) CN121444469A (en)
WO (2) WO2025116985A2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4348587A1 (en) * 2021-06-02 2024-04-10 Dolby Laboratories Licensing Corporation Method, encoder, and display device for representing a three-dimensional scene and depth-plane data thereof
ES3029225T3 (en) * 2022-08-17 2025-06-23 Contemporary Amperex Technology Hong Kong Ltd Calibration ruler, calibration method and apparatus, and detection method and apparatus
WO2026015170A1 (en) 2024-07-12 2026-01-15 Circle Optics, Inc. Image compositing for adjacent cameras with low-parallax imaging
CN120510349B (en) * 2025-07-22 2025-10-10 天目山实验室 Space alignment method based on traditional camera and event camera combined camera device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5703604A (en) * 1995-05-22 1997-12-30 Dodeca Llc Immersive dodecaherdral video viewing system
JP2005128286A (en) * 2003-10-24 2005-05-19 Olympus Corp Superwide angle lens optical system, and imaging device and display device equipped with the same
WO2012136388A1 (en) * 2011-04-08 2012-10-11 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Capturing panoramic or semi-panoramic 3d scenes
CA2890174A1 (en) * 2012-11-05 2014-05-08 360 Heros, Inc. 360 degree camera mount and related photographic and video system
US20190306385A1 (en) * 2014-01-31 2019-10-03 Digimarc Corporation Concerning digital marking and reading of plastic items, useful in recycling
CN107637060B (en) * 2015-05-27 2020-09-29 谷歌有限责任公司 Camera rig and stereoscopic image capture
US10038887B2 (en) * 2015-05-27 2018-07-31 Google Llc Capture and render of panoramic virtual reality content
US9792709B1 (en) * 2015-11-23 2017-10-17 Gopro, Inc. Apparatus and methods for image alignment
US11449061B2 (en) * 2016-02-29 2022-09-20 AI Incorporated Obstacle recognition method for autonomous robots
WO2017182789A1 (en) * 2016-04-18 2017-10-26 Argon Design Ltd Blending images
US9838614B1 (en) * 2016-06-20 2017-12-05 Amazon Technologies, Inc. Multi-camera image data generation
WO2019079398A1 (en) * 2017-10-18 2019-04-25 Gopro, Inc. Chrominance denoising
WO2019135163A2 (en) * 2018-01-08 2019-07-11 Scandit Ag Mobile device case and techniques for multi-view imaging
EP3987344A4 (en) * 2019-06-24 2023-08-09 Circle Optics, Inc. Lens design for low parallax panoramic camera systems
EP4423730A1 (en) * 2021-10-28 2024-09-04 Mobileye Vision Technologies Ltd. Stereo-assist network for determining an object's location

Also Published As

Publication number Publication date
CN121444469A (en) 2026-01-30
US20250022103A1 (en) 2025-01-16
WO2025116985A2 (en) 2025-06-05
WO2025116985A3 (en) 2025-08-14
WO2025122199A3 (en) 2025-08-14

Similar Documents

Publication Publication Date Title
WO2025122199A2 (en) Visor type camera array systems
US10057509B2 (en) Multiple-sensor imaging system
US9182228B2 (en) Multi-lens array system and method
JP7753229B2 (en) Panoramic camera system for enhanced detection
US6304285B1 (en) Method and apparatus for omnidirectional imaging
US4527055A (en) Apparatus for selectively viewing either of two scenes of interest
US20110134249A1 (en) Optical Detection and Ranging Sensor System For Sense and Avoid, and Related Methods
US9200966B2 (en) Dual field of view telescope
EP3004958B1 (en) Optical configuration for a compact integrated day/night viewing and laser range finding system
US20110164108A1 (en) System With Selective Narrow FOV and 360 Degree FOV, And Associated Methods
US9671616B2 (en) Optics system with magnetic backlash reduction
US9025256B2 (en) Dual field of view refractive optical system for GEO synchronous earth orbit
US9121758B2 (en) Four-axis gimbaled airborne sensor having a second coelostat mirror to rotate about a third axis substantially perpendicular to both first and second axes
US9500518B2 (en) Advanced optics for IRST sensor having afocal foreoptics positioned between a scanning coelostat mirror and focal imaging optics
EP1916838A1 (en) Integrated multiple imaging device
CA2140681C (en) Wide area coverage infrared search system
US12566320B2 (en) Panoramic MWIR lens for cooled detectors
US12526526B1 (en) Image compositing with adjacent low parallax cameras
Bates et al. Foveated imager providing reduced time-to-threat detection for micro unmanned aerial system
Gerken et al. Multispectral optical zoom camera system using two fix-focus lenses
US20210274096A1 (en) Optronic sight and associated platform
Fritze et al. Innovative optronics for the new PUMA tank
KR20250137184A (en) Optical temperature-independent infrared reimaging lens assembly
Hoefft et al. Multispectral EO LOROP camera
Raizman Phaseone 190 MP aerial system: camera design principles and productivity analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24901260

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE
