US20180072226A1 - Extended perception system - Google Patents

Extended perception system

Info

Publication number
US20180072226A1
US20180072226A1 (application US15/330,973)
Authority
US
United States
Prior art keywords
exp
sensor
subsystem
user
data
Prior art date
Legal status
Abandoned
Application number
US15/330,973
Inventor
Jesse Clement Bunch
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US15/330,973
Publication of US20180072226A1
Priority to US16/271,614 (patent US11247607B1)
Current legal status: Abandoned

Classifications

    • B60R1/27: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, for viewing an area outside the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60K35/00: Arrangement or adaptations of instruments
    • B60R1/31: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles, providing stereoscopic vision
    • B60R2300/102: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using a 360 degree surveillance camera system
    • B60R2300/105: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using multiple cameras
    • B60R2300/205: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used, using a head-up display
    • B60R2300/30: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R2300/301: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, combining image information with other obstacle sensor information, e.g. using RADAR/LIDAR/SONAR sensors for estimating risk of collision
    • B60R2300/302: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, combining image information with GPS information or vehicle data, e.g. vehicle speed, gyro, steering angle data
    • B60R2300/50: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the display information being shared, e.g. external display, data transfer to other traffic participants or centralised traffic controller

Definitions

  • This invention relates to an Extended Perception System (EXP) that extends the perception of an object's surroundings.
  • Three closely related primary sets of embodiments of this invention include: one set of embodiments is mounted on a set of one or more vehicles; a second set of embodiments is worn by a set of one or more persons and/or animals; and a third set of embodiments which can be in a location fixed with respect to terrestrial and/or other features. Improvements to head-mounted displays are disclosed.
  • Art related to certain elements of this invention includes security systems with multiple cameras and one display, radio telescope arrays, array microphones, Esurance's DriveSense, OnStar, black boxes on aircraft, elements of self-driving car technology, 360° car camera systems, dashboard camera systems (such as the Falcon Zero HD Car Rear-View Dash Cam), and head-mounted displays, such as: Google Glass, Oculus Rift, Microsoft HoloLens, Carl Zeiss Cinemizer, and Sony SmartEyeGlass.
  • perception comprises sensing, processing, monitoring, storage of data, provision of intervention regarding the results of any operation(s) on the data and/or any combination thereof.
  • an "object" can comprise a single object (for example, a vehicle and/or a person), a set of objects, or sets of objects.
  • surroundings comprises any area and/or group of areas relevant to said object. As such, said “object” might be local or remote from said “surroundings”.
  • a “view” can comprise a representation and/or representations of the surroundings resulting from any form of radiant energy emanating from, reflected from, refracted through, and/or diffracted about said surroundings and/or any combination thereof and/or other types of information, including but not limited to other types of data described herein.
  • “view” comprises view as specified in the previous sentence, in addition to representations derived from previously stored data about the surroundings.
  • “view” also comprises state data about the object, such as its location and orientation in a coordinate system.
  • the perception can be local, remote, and/or any combination thereof.
  • the sensing of the data, the processing of the data, the monitoring of the data, the provision of intervention, and/or the recording of the data can be done locally, remotely, and/or any combination thereof.
  • sensor refers to a set of one or more sensors and/or virtual sensors.
  • EXP Systems can comprise: one or more Sensor Subsystem(s); and/or one or more Processor Subsystem(s); and/or one or more Display Subsystem(s); and/or one or more Monitor Subsystem(s); and/or one or more Intervention Subsystem(s); and/or one or more Storage Subsystem(s).
  • Some EXP embodiments may comprise a single one of the elements listed in the previous sentence.
  • FIG. 1 illustrates an overview of the Sensor-based embodiments of the instant invention.
  • FIG. 2 illustrates an overview of the Platform-based embodiments where the Display Subsystem(s) get the information to display from a Platform such as a Gaming device and/or a computer that generate video display information.
  • FIG. 3 illustrates a high lens-to-camera ratio array.
  • FIG. 4 illustrates an Extended Perception System on the exterior of a car.
  • FIGS. 5A-5D illustrate an Extended Perception System for use in the interior of a car.
  • FIGS. 6A and 6B illustrate an alternative Extended Perception System for use in the interior of a car.
  • FIGS. 7A and 7B illustrate a second alternative Extended Perception System for use in the interior of a car.
  • FIGS. 8A-8C illustrate an Extended Perception System embodiment that can be worn by a person.
  • FIGS. 9A-9C illustrate an Extended Perception System with a head-mounted display and an integrated sensor subsystem.
  • FIGS. 10A-10C illustrate an alternative Extended Perception System with a head-mounted display and an integrated sensor subsystem.
  • FIGS. 11A-11C illustrate an Extended Perception System with a head-mounted display.
  • FIG. 12 illustrates an Extended Perception System with a head-mounted display and an integrated sensor subsystem employing a rotating focusing mirror.
  • FIGS. 13A-13B illustrate an Extended Perception System Sensor Subsystem 2900 employing a rotating pair of cameras.
  • FIG. 1 illustrates an overview of the Sensor-based embodiments of the instant invention.
  • At least one Sensor Subsystem feeds sensor data to at least one Processor Subsystem.
  • the Processor Subsystem(s) optionally can regulate the Sensor Subsystem(s).
  • the Processor Subsystem(s) can include a short-term loop that temporarily stores a record of the data from the Sensor Subsystem(s).
  • the Display Subsystem(s) potentially can display information from a subset of the other subsystems to the User. In a subset of embodiments, the User can select what information is displayed by the Display Subsystem(s) at a given time.
  • the Monitor Subsystem(s) comprises one or more non-user observers of the situation. There are cases where the Monitor Subsystem comprises the User.
  • the Monitor Subsystem(s), the User, and/or the Processor Subsystem(s) can activate the Intervention Subsystem(s) to intervene so as to influence the outcome of the situation.
  • the Storage Subsystem(s) stores data for a useful period of time.
  • Shown data paths represent potential, not necessarily essential, data paths. They represent logical data flow; in practice, data paths might bypass particular subsystems. For example, the User might be able to communicate directly with the Intervention Subsystem(s) without first passing through a Processor Subsystem.
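  • As a non-limiting illustration of the FIG. 1 data flow, the following sketch wires the subsystems together in software. All class names, fields, and the run_once() helper are illustrative assumptions, not elements of the invention; a real EXP would substitute actual sensing, analysis, display, and communication code.

```python
# Minimal sketch of the FIG. 1 data flow (all names are illustrative, not from the patent).
from dataclasses import dataclass, field
from collections import deque

@dataclass
class SensorSubsystem:
    def read(self) -> dict:
        # Placeholder: a real subsystem would return camera frames, audio, GPS, etc.
        return {"frame": None, "audio": None, "location": (0.0, 0.0)}

@dataclass
class ProcessorSubsystem:
    # Short-term loop that temporarily stores recent sensor records.
    short_term_loop: deque = field(default_factory=lambda: deque(maxlen=300))

    def process(self, sample: dict) -> dict:
        self.short_term_loop.append(sample)
        # Placeholder analysis: flag an "event" so downstream subsystems can react.
        return {"sample": sample, "event_detected": False}

class DisplaySubsystem:
    def show(self, result: dict) -> None:
        print("displaying:", list(result["sample"].keys()))

class MonitorSubsystem:
    def observe(self, result: dict) -> bool:
        # A non-user observer decides whether intervention is needed.
        return result["event_detected"]

class InterventionSubsystem:
    def intervene(self) -> None:
        print("intervention triggered")

class StorageSubsystem:
    def __init__(self):
        self.records = []
    def store(self, result: dict) -> None:
        self.records.append(result)

def run_once(sensor, processor, display, monitor, intervention, storage):
    result = processor.process(sensor.read())
    display.show(result)
    storage.store(result)
    if monitor.observe(result):
        intervention.intervene()

if __name__ == "__main__":
    run_once(SensorSubsystem(), ProcessorSubsystem(), DisplaySubsystem(),
             MonitorSubsystem(), InterventionSubsystem(), StorageSubsystem())
```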
  • FIG. 2 illustrates an overview of the Platform-based embodiments where the Display Subsystem(s) get the information to display from a Platform such as a Gaming device and/or a computer that generates videos. User feedback to the Platform can alter what information is displayed by the Display Subsystem(s).
  • the source of information to the Display Subsystem(s) can be switched between a set of Platforms, between a set of Sensor Subsystem(s), from a set of Sensor Subsystems to a set of Platforms, and/or from a set of Platforms to a set of Sensor Subsystems.
  • Input to sensors can comprise transverse waves such as electromagnetic radiation, comprising, for example, one or more of X-ray, UV, visible, IR, microwave, terahertz, and/or radio waves.
  • Sensor input can comprise longitudinal waves such as sound, comprising, for example, one or more of infrasound, audible sound, ultrasound, seismic waves, sound passing through the ground or another solid, a liquid, a gas, a plasma, and/or any mixture or solution of any combination of these.
  • Sensor input can comprise olfactory (identification of chemical composition and/or gradients dissolved in a gas and/or liquid) information.
  • Sensor input can comprise haptic data.
  • Other sensor input can comprise "state of the user (or another object)" information, such as his or its: location, acceleration, velocity, orientation in space, and/or the values of the user's health and/or functionality, comprising variables normally under homeostatic control (e.g. temperature, CO2 concentration in blood, heart rate, etc.) or any other detectable information.
  • Sensor input can be real and/or virtual.
  • FIG. 2 illustrates a very simple EXP where the Display Subsystem receives the data to be displayed from a Platform Subsystem, said Platform Subsystem comprising, for example, a gaming platform such as a PS4 or Xbox One.
  • Such data to be displayed in this embodiment typically represents data generated by a video game.
  • the Display Subsystem can include controls that can provide input to the Platform Subsystem to affect what it displays.
  • FIG. 3 illustrates 300, a high lens-to-camera ratio array (HLCRA).
  • Device 300 comprises a linear array of lenses 310. Some of those lenses 310, designated 315, focus on a light-sensing array, such as a CCD, to form a camera. Many of the lenses 310 do not focus on a light-sensing array and thus are not components of actual cameras.
  • the ratio of total lenses 310 to those 315 that focus onto a light-sensing array is typically high (over 2:1), often at least 10:1. There is typically no obvious difference in the casual appearance of a lens that does not focus to a camera and a lens 315 that does.
  • the lens array 300 can be molded as a single unit or the lenses can be made individually.
  • the location of the actual cameras on an HLCRA 300 can be random, pseudo-random, or some other pattern that makes it difficult for an observer to determine which lenses are parts of actual cameras and which are pseudo-cameras.
  • the purpose of HLCRA 300 is to have a minimum number of actual cameras (to reduce cost) while making it difficult for someone to defeat the system by covering specific lenses. It is much more difficult to cover an extended array of lenses than a small number of easy-to-identify lenses. If someone were to cover an entire array of lenses, it would be clear that they were attempting to prevent the monitoring and/or recording of the situation, a clear indication of negative intent. If someone could cover the entire array, their image would very likely already have been recorded.
  • the HLCRA cameras can be connected to an EXP Processor Subsystem by wires in the back of the array or wirelessly via Bluetooth or another wireless communication system.
  • the normals (the centers of the fields of view) of the cameras can be parallel to one another and perpendicular to the overall surface of HLCRA 300.
  • the normals of one or more of the actual cameras can be tilted vertically to provide a greater vertical field of view, forward or backward to provide a greater horizontal field of view, or any combination thereof. In this case, the normals of pseudo-cameras will also be tilted so the actual cameras cannot be discerned from the pseudo-cameras.
  • the HLCRA 300 can be on a flexible substrate that contains the communication wires (when present).
  • the back can have a peel-off strip revealing an adhesive surface for easy application to the surface on which HLCRA 300 is to be mounted.
  • the cameras are shown as fixed with respect to the array.
  • the array can be moved relative to the surface it is mounted on and/or individual subsets of the cameras and/or pseudo-cameras can be moved relative to one another.
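  • A minimal sketch of the pseudo-random camera placement described for HLCRA 300 follows. The function and parameter names are illustrative assumptions; the seed-based approach is one possible way to make the layout reproducible for the installer while remaining opaque to an observer.

```python
# Sketch: pseudo-random placement of real cameras within a high lens-to-camera
# ratio array (HLCRA).  Parameter names are illustrative only.
import random

def place_cameras(total_lenses: int, real_cameras: int, seed: int = 0) -> list[bool]:
    """Return a mask where True marks a lens backed by a real light-sensing array."""
    if real_cameras > total_lenses:
        raise ValueError("cannot have more cameras than lenses")
    rng = random.Random(seed)   # reproducible for the installer, opaque to an observer
    camera_slots = set(rng.sample(range(total_lenses), real_cameras))
    return [i in camera_slots for i in range(total_lenses)]

# Example: a 40-lens strip with a 10:1 lens-to-camera ratio (4 real cameras).
mask = place_cameras(total_lenses=40, real_cameras=4, seed=42)
print("".join("C" if m else "." for m in mask))
```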
  • FIG. 4 illustrates the use of an Extended Perception System on the exterior of car 410.
  • HLCRA 420 is shown mounted on the front region of the driver's side of the car.
  • HLCRA 430 is shown mounted on the middle upper region of the driver's side of the car.
  • HLCRA 440 is shown mounted on the rear region of the driver's side of the car.
  • One, two or all of these HLCRAs can be used, as long as a combination of the cameras 315 in the HLCRA(s) can view the entire region on its side of the car.
  • similar HLCRAs can be mounted on the passenger's side of the car and on the front and rear of the car.
  • HLCRA(s) can also be mounted on any subset of the sides of the car.
  • the HLCRA(s) can be built into a new car 410 or added on later. Similar HLCRAs can be mounted in the trunk or other locations in or on the car 410.
  • FIGS. 5A-5D illustrate an Extended Perception System 500 for use in the interior of car 510.
  • FIG. 5D illustrates Extended Perception System (EXP) 500 installed in car 510.
  • EXP 500 can be mounted such that it is between the driver and front passenger, preferably above the level of the tops of their heads such that at least some of EXP 500's sensors have a clear line of sight above their heads. It is not required that EXP 500's line of sight be above the heads of the driver and passengers, because the extended length of the sensor array provides sufficient parallax such that EXP 500 can view and/or monitor all, or nearly all, of the regions to the sides of the car.
  • Visual sensor array 530 can view the space on the driver's side of car 510.
  • Another visual sensor array can view the space on the front passenger's side of car 510.
  • Visual sensor array 532 can view the space in front of car 510.
  • Visual sensor array 534 can view the space behind car 510.
  • Four other visual sensor arrays can view the regions between those of visual sensor array 530, the passenger-side array, 532, and 534.
  • Each of the visual sensor arrays might also be configured to view part of the interior of car 510.
  • Each sensor array can be an HLCRA or just an array of one or more sensors. Audio sensor arrays 540, 542, 544 (and the five others) are shown below their respective visual sensor arrays.
  • Mounting 520 is affixed to the interior roof of car 510 and supports the rest of EXP 500.
  • FIGS. 5B-5C illustrate 550, an optional inner surface of EXP 500.
  • Mounting 520 supports arms 525 that support the rest of 550.
  • Inner Surface 550 views downward with normals oriented between vertical and horizontal.
  • Cameras 560, 562, 564, 566, and the four between each consecutive pair of these, provide EXP 500 views of the interior of car 510 including, but not limited to, what the driver and passengers are doing. These views can be transmitted to another observer (as described elsewhere herein) and can, for example, be used to assure that observer that the driver and passengers of car 510 are not a threat to said observer or to others.
  • microphones can be mounted on the outside of said vehicle and/or a sound detection system inside the car can be employed that reflects laser light off the windows to detect the sound vibrations in the glass (as is known in the art). By reflecting the laser light off the surfaces of multiple windows and/or multiple locations on a single window, the reflected light information can be used to capture the sound and, by measuring the delays in the same sounds reaching those different locations, that information can be used to locate the source of that sound.
  • One or more glass (or other materials) laser targets can be used with synthetic aperture hardware and/or software to achieve the benefits of that technology.
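  • The delay-based sound localization described above can be sketched as a time-difference-of-arrival search. The sensing-point geometry, the grid search, and the speed-of-sound value below are illustrative assumptions, not a prescribed implementation.

```python
# Sketch: locating a sound source from arrival-time differences at several
# sensing points (e.g. laser-vibrometry returns from different windows).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def locate_source(sensor_xy: np.ndarray, arrival_times: np.ndarray,
                  search_extent: float = 30.0, step: float = 0.25) -> tuple[float, float]:
    """Brute-force search for the 2D point whose predicted time differences
    best match the measured ones (relative to the first sensor)."""
    measured_tdoa = arrival_times - arrival_times[0]
    best_xy, best_err = (0.0, 0.0), np.inf
    for x in np.arange(-search_extent, search_extent, step):
        for y in np.arange(-search_extent, search_extent, step):
            dists = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
            predicted_tdoa = (dists - dists[0]) / SPEED_OF_SOUND
            err = np.sum((predicted_tdoa - measured_tdoa) ** 2)
            if err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy

# Example: four sensing points at the corners of a car-sized rectangle,
# with arrival times synthesized for a source at (10, 5).
sensors = np.array([[0.0, 0.0], [1.8, 0.0], [0.0, 4.5], [1.8, 4.5]])
true_source = np.array([10.0, 5.0])
times = np.hypot(*(sensors - true_source).T) / SPEED_OF_SOUND
print(locate_source(sensors, times))
```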
  • vehicular embodiments can provide cameras in the trunk and/or other areas so that it would be harder for others to plant contraband in the trunk area and/or those other areas.
  • FIGS. 6A and 6B illustrate an alternative Extended Perception System (EXP) 900 for use in the interior of car 910.
  • FIG. 6B illustrates Extended Perception System 900 installed in car 910. These embodiments are designed to reduce the EXP's cost by employing a mobile device to collect the sensor data and transmit it.
  • FIG. 6A illustrates an embodiment where the mobile device is embedded in the EXP 900.
  • Mounting 920 is affixed to the interior roof of car 910 and supports housing 934. Opening 936 in housing 934 supports adapter tray 938.
  • Adapter tray 938 supports the mobile device in such a way that the mobile device will be held securely in opening 936 and aligns one of the mobile device's camera lenses with EXP 900's terminal optics 950.
  • EXP 900 optics 970 redirect light rays 962 from EXP 900's surroundings to the paths 964.
  • An app on the mobile device can transform the image, distorted by EXP's optics, into standard images. This stream can then be input to an app on the mobile device such as Meerkat or Periscope to broadcast images of the surroundings via a cellular network. Alternatively, the broadcast app can transmit the distorted image that can then be corrected after transmission.
  • the image stream can be used for perception (such as, displayed locally and/or sent to an EXP Monitoring Subsystem and/or a storage subsystem).
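  • One way an app might perform the transformation of the distorted image into standard images is a calibrated undistortion step. The sketch below uses OpenCV with placeholder calibration values, since the patent does not specify the optics; the camera matrix and distortion coefficients are assumptions.

```python
# Sketch: undoing known lens/optics distortion before streaming, assuming the
# camera matrix and distortion coefficients were obtained by a prior calibration
# (the patent does not specify the optics, so these numbers are placeholders).
import cv2
import numpy as np

camera_matrix = np.array([[800.0, 0.0, 640.0],
                          [0.0, 800.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_frame(frame: np.ndarray) -> np.ndarray:
    """Return a rectified frame suitable for display or broadcast."""
    return cv2.undistort(frame, camera_matrix, dist_coeffs)

# Example with a synthetic frame (a real EXP would pull frames from the device camera).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
rectified = undistort_frame(frame)
print(rectified.shape)
```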
  • EXP 900 can be linked to the electrical system of car 910 to keep the mobile device that is being used by EXP 900 charged.
  • the mobile device can be controlled or used for other functions via voice control.
  • EXP 900's optional EXP display subsystem can have a mode where it displays the image of the mobile device's display for any of its functions.
  • FIGS. 7A and 7B illustrate a second alternative Extended Perception System (EXP) 1000 for use in the interior of car 1010.
  • FIG. 7B illustrates Extended Perception System 1000 installed in car 1010.
  • this embodiment is designed to reduce the EXP's cost by employing a mobile device to collect the sensor data and transmit it.
  • FIG. 7A illustrates an embodiment where the mobile device is located separately from EXP 1000.
  • Mounting 1020 is affixed to the interior roof of car 1010 and supports housing 1034.
  • EXP 1000 optics 1070 redirect light rays 1062 to the paths 1064.
  • Light rays 1064 are directed to EXP 1000's camera 1050.
  • a separate unit mounted on, or near, car 1010's dash supports the mobile device.
  • Said separate unit can be connected to car 1010's electrical system, such that when the mobile device is plugged therein, its battery can be charged.
  • Camera 1050's output is transmitted to the mobile device via wire or wirelessly.
  • An app on the mobile device can transform the image, distorted by EXP's optics, to normal images.
  • This stream can then be input to an app, such as Meerkat or Periscope, on the mobile device to broadcast images of car 1010's surroundings via a cellular network to a Monitor Subsystem.
  • the broadcast app can transmit the distorted image that can then be corrected after transmission.
  • the image stream can be used for perception (such as, displayed locally and/or sent to an EXP Monitoring Subsystem and/or a storage subsystem).
  • the mobile device's display can be used as the entirety, or part of, the EXP 1000 display subsystem.
  • One of multiple optional lasers 1080 reflects off a window (or windshield) to collect sound information to be processed, for example, in the mobile device, then made audible to the driver and/or passengers of car 1010 and/or uploaded to the Monitor Subsystem.
  • data from car 1010's computer can be accessed via its OBD II port and sent wirelessly or via wire to the mobile device.
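  • A sketch of one way the OBD II data path could be read in software follows. It assumes an ELM327-style serial adapter and uses the standard OBD-II request for vehicle speed (mode 01, PID 0D); the serial port name is a placeholder.

```python
# Sketch: reading vehicle speed from the OBD II port through an ELM327-style
# serial adapter.  The serial port name is a placeholder; mode 01 / PID 0D
# (vehicle speed in km/h) is part of the standard OBD-II PID set.
from typing import Optional
import serial  # pyserial

def read_speed_kmh(port: str = "/dev/rfcomm0", baud: int = 38400) -> Optional[int]:
    with serial.Serial(port, baud, timeout=2) as link:
        link.write(b"010D\r")                  # request vehicle speed
        reply = link.read(64).decode("ascii", errors="ignore")
        # A typical reply contains "41 0D XX", where XX is the speed in hex.
        tokens = reply.replace("\r", " ").split()
        for i, tok in enumerate(tokens):
            if tok == "41" and i + 2 < len(tokens) and tokens[i + 1] == "0D":
                return int(tokens[i + 2], 16)
    return None

if __name__ == "__main__":
    print("speed:", read_speed_kmh(), "km/h")
```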
  • Said Monitor Subsystem, for example, can be one or more individuals in a distant location, observing and/or recording what the EXP sensing subsystem is perceiving.
  • EXP 1000 can be temporarily assigned to a vehicle.
  • Mounting 1020 can contain a magnet powerful enough to secure EXP 1000 to the interior ceiling of a vehicle, as the roofs of most vehicles are mostly steel, which is ferromagnetic.
  • the data can be wirelessly transmitted to a Monitoring Subsystem such that safely located monitors can observe the actions in the interior of the vehicle, hear what is occurring therein, and observe where the vehicle is and where it is going. This function can be enhanced by the incorporation of a GPS unit in EXP 1000.
  • EXP 900 and EXP 1000 observe 360° (as viewed from above) around car 910 or 1010, respectively. Because the EXP System observes the rear of the vehicle, the EXP System can replace the rear-view mirror.
  • FIGS. 8A-8C illustrate EXP 600, an embodiment that can be worn by a person.
  • visual sensors are mounted on a hat.
  • said sensors can be mounted on a helmet, glasses, a flexible net, and/or any other structure worn on the head and/or any other part of the body.
  • Visual sensors 620, 630, 632, and 644 can view the direction in front of the person's head.
  • Visual sensors 622, 632, 634, and 636 can view to the left side of the person's face.
  • Visual sensors 626, 640, 642, and 644 can view to the right side of the person's face.
  • Visual sensors 624, 636, 638, and 640 can view the direction behind the person's head. This allows the user to monitor what is going on behind him, like having eyes in the back of one's head.
  • Visual sensors 620, 622, 624, and 626 can view the space approximately normal to the plane defined by the eyes and ears. Thus, when a user has his head in the position typically associated with walking, visual sensors 620, 622, 624, and 626 view the space above the user's head.
  • a subset of the sensors in all EXP embodiments can be movable and/or fixed depending on what is most cost-effective in a specific application. This includes, but is not limited to, vehicle-mounted, head-mounted sensors and/or recruited sensors.
  • Directional sensors can be directed towards the target of interest.
  • the sensor output can be displayed visually, auditorily, olfactorily, gustatorily, haptically, kinesthetically, and/or by any means perceivable by the user.
  • the user can be human, animal, other natural organism and/or artificially intelligent.
  • Data displayed as olfactory information can be very useful to dogs, and even more useful to sharks, whose brain structures for evaluating olfactory information are comparable to those that humans use to evaluate visual information.
  • each sensor in EXP 600 can be a vibration device that informs the user when there is a stimulus of potential importance in the field of view of that sensor. Said vibration can inform the user that he should consider looking in that direction to get more information on the stimulus.
  • an EXP Processor Subsystem built into brim 650 can deploy different response algorithms depending upon the nature of said stimulus. For example, a stimulus deemed to be of moderate importance might only trigger vibration in the sensor that has that visual stimulus closest to the center of its "field of view", or in multiple sensors if it is relatively equidistant from said multiple sensors. A stimulus deemed to be of greater importance might trigger a vibration under all of the visual sensors that can view it.
  • Different simultaneous stimuli of interest can be distinguished by different frequencies of vibration. Different degrees of potential importance of stimuli can be mapped to different intensities of the vibrations. As the user moves his head, the locus of the vibrations changes to indicate the direction of the stimulus relative to the user's current head orientation.
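  • The vibration-feedback mapping described above (direction selects the vibrator, importance selects intensity and how many vibrators fire, and simultaneous stimuli get distinct frequencies) can be sketched as follows; the thresholds, angles, and frequencies are illustrative assumptions only.

```python
# Sketch of a vibration-feedback mapping: each stimulus is routed to the
# vibrator under the sensor whose field-of-view centre is closest to the
# stimulus bearing, with intensity scaled by importance and a distinct
# frequency per simultaneous stimulus.  All numbers are illustrative.
def choose_vibrations(stimuli, sensor_bearings_deg, high_importance=0.8):
    """stimuli: list of (bearing_deg, importance in 0..1).
    Returns a list of (sensor_index, intensity, frequency_hz) commands."""
    commands = []
    base_freq = 150.0
    for n, (bearing, importance) in enumerate(stimuli):
        freq = base_freq + 50.0 * n          # distinct frequency per stimulus
        if importance >= high_importance:
            # High importance: vibrate every sensor that can "see" it (here: within 60 degrees).
            targets = [i for i, b in enumerate(sensor_bearings_deg)
                       if abs((bearing - b + 180) % 360 - 180) <= 60]
        else:
            # Moderate importance: only the sensor closest to the stimulus bearing.
            targets = [min(range(len(sensor_bearings_deg)),
                           key=lambda i: abs((bearing - sensor_bearings_deg[i] + 180) % 360 - 180))]
        commands.extend((i, importance, freq) for i in targets)
    return commands

# Eight brim sensors spaced 45 degrees apart; one moderate stimulus behind-left, one urgent stimulus ahead.
print(choose_vibrations([(200, 0.4), (5, 0.9)], [0, 45, 90, 135, 180, 225, 270, 315]))
```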
  • Another means to inform the user of a stimulus of interest is by auditory feedback.
  • the EXP Processor Subsystem can create auditory signals, fed to the user's ears separately and tailored to cause the human auditory system to perceive the direction of the combined signal as the same as the stimulus of interest. Different intensities, frequencies and/or patterns in the auditory feedback can be used to signal different types of information about the stimulus to the user.
  • a means to provide an EXP user information from one or more EXP sensors is to display that information.
  • said information can be displayed on a display in a fixed location within a car.
  • Different camera views can be displayed in different windows on the display so as to potentially monitor the entire surrounding space simultaneously.
  • Such wearable displays can include watches such as the Apple Watch and/or head-mounted devices, such as Google Glass, Oculus Rift, Microsoft HoloLens, Carl Zeiss Cinemizer, and Sony SmartEyeGlass. There are numerous ways known to the art to display such data on head-mounted displays.
  • FIGS. 9A-9C illustrate a head-mounted display 2200 with an integrated sensor subsystem similar to 600.
  • One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2204.
  • Display housing 2204 is secured to the user's head using display housing band 2202.
  • FIGS. 9A-9C also illustrate the optional Amplified Glance Angle Mode (AGAM), a user interface mode for example, for the display of very wide angle visual data and/or visual representations of non-visual data.
  • AGAM can be used to display EXP Sensor Subsystem data, including EXP Sensor Subsystem data from vehicular-based EXP Sensor Subsystems, worn EXP Sensor Subsystems, fixed EXP Sensor Subsystems and/or other Sensor Subsystems, including but not limited to virtual data sensor data.
  • AGAM can be used for the visual display of any type of data, such as data produced from business software, 2D or 3D Views software (see other patent applications by Jesse Clement Bunch), and/or video games.
  • the Glance Angle Detection Subsystem (GADS) can comprise a central camera 2292 that provides an image of one or both of the user's eyes to an EXP Processor 2295. All or part of Processor 2295 can be attached to, or built into, display housing 2204 and/or band 2202, and/or located elsewhere. Processor 2295 has eye movement detection software that determines the viewing angle of one or both eyes with respect to straight ahead. Said angle of deflection of the eye (glance angle) is mapped into a deflection of the field of view on the display.
  • a glance angle of 15° to the right can cause the display to show the center of the field of view at a deflection of 90° from the user's face.
  • the user sees in the center of his current field of view the center of what he would be seeing if his head were rotated 90° to the right, via the view through camera 2242.
  • Embodiments of 2200 can provide a stereoscopic view by feeding different virtual and/or image information to the user's right eye than is fed to the user's left eye.
  • a glance angle of 15° to the right can cause the display to show a stereoscopic view of what is 90° to the user's right by displaying the data from camera 2244 to his left eye and camera 2240 to his right eye.
  • a glance angle of 15° to the left can cause the display to show a stereoscopic view of what is 90° to the user's left by displaying the data from camera 2236 to his left eye and camera 2232 to his right eye.
  • a glance angle of 30° to the right (or left) can result in a stereoscopic view of what is directly behind the user. This does not alert observers to the direction the user is looking in.
  • the image from camera 2240 is fed to the user's left eye and the image of camera 2236 is fed to the user's right eye.
  • a glance angle of 15° vertically can cause the display to view the center of the field of view at a vertical deflection of 45° (upper portions of camera 2232's field of view to the left eye and upper portions of camera 2244 to the right eye), or 30° vertically to cause the display to view the center of the field of view at a vertical deflection of 90° (camera 2222 to the left eye and camera 2226 to the right eye).
  • the mappings of glance angle to viewing angle herein are meant to be examples.
  • the relationships between horizontal glance angle and displayed field of view and between vertical glance angle and displayed field of view need not be the same. Those relationships need not be limited to being linear or fixed. In some embodiments, those relationships can be adjustable by the user.
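  • The amplified mapping from glance angle to displayed view direction, and the selection of a camera pair for a stereoscopic view, can be sketched as below. The linear gain, camera bearings, and pairing rule are illustrative assumptions; as noted above, the mapping need not be linear or fixed.

```python
# Sketch of an Amplified Glance Angle Mode (AGAM) mapping: a small glance angle
# is amplified into a large view deflection (e.g. 15 degrees of glance -> 90
# degrees of view), and the deflected view direction selects which pair of
# head-mounted cameras feeds the left and right eyes.  All numbers illustrative.
def glance_to_view_deflection(glance_deg: float, gain: float = 6.0,
                              limit_deg: float = 180.0) -> float:
    return max(-limit_deg, min(limit_deg, gain * glance_deg))

def stereo_pair_for(view_deg: float, camera_bearings_deg: list[float]) -> tuple[int, int]:
    """Pick the camera just counter-clockwise of the view direction for the left
    eye and the camera just clockwise of it for the right eye."""
    def signed_offset(bearing: float) -> float:
        return (bearing - view_deg + 180) % 360 - 180
    ccw = [i for i, b in enumerate(camera_bearings_deg) if signed_offset(b) <= 0]
    cw = [i for i, b in enumerate(camera_bearings_deg) if signed_offset(b) > 0]
    left = max(ccw, key=lambda i: signed_offset(camera_bearings_deg[i]))
    right = min(cw, key=lambda i: signed_offset(camera_bearings_deg[i]))
    return left, right

cameras = [0, 45, 90, 135, 180, 225, 270, 315]   # bearings of brim-mounted cameras
view = glance_to_view_deflection(15.0)           # 15 deg of glance -> 90 deg of view
print(view, stereo_pair_for(view, cameras))      # indices of left-eye and right-eye cameras
```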
  • the stereoscopic image can be constructed from two or more images in a processor before being displayed to the user. This applies to all statements in this disclosure regarding feeding one image to one eye and a different image to a second eye.
  • a “horizontal” glance means a movement of the eyes caused substantially by the medial rectus and lateral rectus muscles of the eye.
  • a “vertical” glance means a movement of the eyes caused substantially by the superior rectus and inferior rectus muscles of the eye.
  • the glance angle detection subsystem can comprise camera 2291, which provides an image of the user's right eye to processor 2295, and camera 2293, which provides an image of the user's left eye to processor 2295.
  • Another optional mode is the Convergent Glance Angle Mode (CGAM), in which the GADS uses the convergence of the user's eyes to determine the location of interest.
  • the user can signal the GADS that he wants to magnify the visual and/or audio display from that location by, for example, rapidly blinking twice or tensing the facial muscles around his eyes. Multiple applications of said signal can cause repeated magnifications in the displayed image.
  • a different signal such as three rapid blinks can signal a desired decrease in magnification, i.e. to increase the field of view that is displayed.
  • the user's signals to the EXP Display Subsystem can be by a verbal command such as "magnify 10×", and/or other movements and/or by the tensing of given muscle groups and/or by the detection of the firing of particular neurons and/or sets of neurons in the central and/or peripheral nervous systems.
  • Forward looking visual sensors can be used to improve or even replace the direct forward view of the user's unaided eyes.
  • Combinations of the front-facing sensors 2220, 2230, 2232, and 2244 can replace or augment the direct view forward through the partially transparent head-up display.
  • the transparency of the head-up display may be incrementally or continuously variable from complete transparency (direct view only, no image from the forward sensors) to partial transparency (a mix of direct forward view and image from forward sensors) to no transparency (no direct forward view, total image from forward sensors).
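  • The variable-transparency mixing of the direct forward view with the forward-sensor image can be sketched as a simple weighted blend; the resolution and pixel values below are placeholders.

```python
# Sketch: variable-transparency head-up display mixing, where alpha = 1.0 gives
# the direct view only and alpha = 0.0 gives the forward-sensor image only.
import numpy as np

def blend_views(direct_view: np.ndarray, sensor_view: np.ndarray, alpha: float) -> np.ndarray:
    """alpha in [0, 1]: fraction of the direct (see-through) view in the mix."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return (alpha * direct_view.astype(np.float32)
            + (1.0 - alpha) * sensor_view.astype(np.float32)).astype(np.uint8)

direct = np.full((720, 1280, 3), 40, dtype=np.uint8)    # stand-in for the optical view
sensor = np.full((720, 1280, 3), 200, dtype=np.uint8)   # stand-in for the forward-camera image
half_mix = blend_views(direct, sensor, alpha=0.5)
print(half_mix[0, 0])   # -> [120 120 120]
```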
  • FIGS. 10A-10C illustrate a head-mounted display 2300 with an integrated sensor subsystem that is similar to 2200.
  • all but the forward-facing sensors 2390, 2391, 2393, and 2394 are mounted on display housing band 2302.
  • the glance angle detection subsystem can comprise a central camera 2392 that provides an image of one or both of the user's eyes to an EXP processor 2395.
  • Glance location can be used to point to a menu location and a signal from the user can trigger the selection of the menu item.
  • Said signal can, for example, be verbal and/or visual (such as a sequence of blinks).
  • a specific signal can directly trigger a command.
  • FIGS. 11A-11C illustrate a head-mounted display 2500, similar to the display subsystem of EXP 2200.
  • the glance angle detection subsystem can comprise camera 2591, which provides an image of the user's right eye to processor 2595, and camera 2592, which provides an image of the user's left eye to processor 2595, or a central camera 2593 that provides an image of one or both of the user's eyes to an EXP processor 2595.
  • One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2504.
  • Display housing 2504 is secured to the user's head using display housing band 2502.
  • Glance location can be used to point to a menu location and a signal from the user can trigger the selection of the menu item.
  • Said signal can, for example, be verbal and/or visual (such as a sequence of blinks).
  • a specific signal can directly trigger a command.
  • the user can change the orientation of his head to indicate the view angle that he wants to be displayed.
  • accelerometers in the head-mounted display or in a device separately connected to the user's head, can be used to determine the roll and pitch of the user's head relative to the vertical.
  • Data from an accelerometer mounted to the torso of the user can be compared with data from one or more accelerometers in the head-mounted display, or in a device separately connected to the user's head, to determine yaw motion.
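  • A sketch of estimating roll and pitch relative to vertical from a head-mounted accelerometer follows; the axis conventions and the example reading are illustrative assumptions, and yaw would additionally require comparison with a torso-mounted sensor or another reference, as noted above.

```python
# Sketch: estimating head roll and pitch relative to vertical from a head-mounted
# accelerometer's gravity vector (axis conventions are illustrative).  Yaw needs
# an additional reference such as a torso-mounted sensor.
import math

def roll_pitch_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Return (roll, pitch) in degrees for a static accelerometer reading in g."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Example: head tilted forward so gravity has a component along +x.
print(roll_pitch_from_accel(0.26, 0.0, 0.97))   # approximately (0.0, -15.0)
```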
  • Other means known to the art, such as Pointing and Identification Device technologies, can also be used to determine changes in head orientation.
  • these changes can be used to signal what the EXP Display Subsystem should display.
  • This mode is the Amplified Head Orientation Mode (AHOM).
  • a change in head orientation by an angle of 15° to the right can cause the display to show the center of the field of view at a deflection of 90° from the user's face. Without a side glance, the user sees in the center of his current field of view (the center of the display) the center of what he would be seeing if his head were rotated 90° to the right, via the view through camera 2242.
  • AGAM and AHOM can be implemented separately or combined to function simultaneously.
  • a standard display such as an LED or LCD display can be used to display the sensor data from a vehicle's EXP Sensor Subsystem.
  • a standard 3D display can be used to display stereoscopic AGAM.
  • a standard display such as an LED or LCD display can be used to display the sensor data from an AGAM or AHOM.
  • an array of cameras on a hat can provide different views of a space.
  • an array of cameras can cover nearly all of the head.
  • Cameras on other body parts can also provide useful views. For example, cameras on the front and/or tops of shoes can view upwards to see under cars for security officers to look for bombs and/or contraband mounted under vehicles.
  • a camera mounted between the shoulder blades of a jacket or vest can provide a view behind the user.
  • a camera mounted on each side of said jacket or vest, below the elbows and behind the normal locations of the arms when the arms are at the sides of the body, can view the directions lateral to the user.
  • the display can be divided into areas. In each area, a view from a different combination of sensors is displayed.
  • AGAM and AHOM can be combined such that a change in head orientation and/or glance angle, individually or in combination, can be used to signal what the EXP Display Subsystem should display.
  • FIG. 12 illustrates an Extended Perception System with a head-mounted display 2700 and an integrated sensor subsystem 2750 employing a rotating focusing mirror, said mirror focusing to an imaging sensor.
  • the head-mounted display 2700 is similar to the display subsystem of EXP System 2500.
  • the glance angle detection subsystem can comprise camera 2791 (not shown, but analogous to 2591), which provides an image of the user's right eye to processor 2795, and camera 2792, which provides an image of the user's left eye to processor 2795, or a central camera 2793 (not shown, but analogous to 2593) that provides an image of one or both of the user's eyes to an EXP processor 2795.
  • One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2704.
  • Display housing 2704 is secured to the user's head using display housing band 2702.
  • EXP Sensor Subsystem 2750 employs a rotating focusing mirror, said mirror focusing to an imaging sensor.
  • the Sensor Base 2751 of 2750 is affixed to the top of helmet 2724.
  • Sensor Base 2751 is illustrated in cross-section so that Imaging Sensor 2752 can be illustrated.
  • the center of Mirror Base 2760 rotates about the center of Sensor Base 2751.
  • Sensor Base 2751 is illustrated in cross-section so that Radiation Aperture 2762 can be illustrated.
  • Dome 2766, affixed to helmet 2764 and/or Sensor Base 2751, protects all or part of the remainder of the EXP Sensor Subsystem from contamination and damage. In the illustrated orientation, radiation from behind helmet 2764 passes through Dome 2766 (which is substantially transparent to said radiation) to the interior surface of Mirror Unit 2762.
  • EXP Sensor Subsystem 2750 can be integrated with a display subsystem as illustrated in FIG. 12 or can be a separate EXP Sensor Subsystem. A benefit of EXP Sensor Subsystem 2750 is that it permits a large area of radiation to be imaged.
  • FIGS. 13A-13B illustrate an Extended Perception System Sensor Subsystem 2900 employing a rotating pair of cameras.
  • the Sensor Base 2903 of 2900 is affixed to the top of a helmet like 2724.
  • Cylindrical Platform 2907 is affixed to the center of Sensor Base 2903.
  • Sensor Unit Support Axis 2911 rotates in the center of Cylindrical Platform 2907.
  • Cylindrical Platform 2907 can be an electric motor with Sensor Unit Support Axis 2911 being the rotating portion of said electric motor.
  • Sensor Unit Arm 2913 is affixed to Sensor Unit Support Axis 2911.
  • Substantially identical Sensor Collector Units 2928 and 2929 are affixed to opposite ends of Sensor Unit Arm 2913.
  • Collector Unit Body 2920 is a shell, illustrated in cross-section so that the radiation path can be illustrated.
  • Dome 2935, affixed to helmet 2764 and/or Sensor Base 2903, is substantially transparent to the radiation to be imaged on Imaging Sensor 2925 and protects all or part of the remainder of EXP Sensor Subsystem 2900 from contamination and damage. Radiation from the left is focused by Focusing Element 2927 to produce an image on Imaging Sensor 2925.
  • the output of Collector Units 2928 and 2929 can be fed to the right and left eyes of the user(s) respectively to provide a stereoscopic view. It is easy for those knowledgeable in the art to adapt 2900 such that Sensor Collector Units 2928 and 2929 can converge to match the convergence of a user's eyes.
  • the EXP Processor Subsystem uses image processing software to stabilize the image from each camera and adapt the images so that they can be superimposed into a clear, stable stereoscopic image by the user's brain. The experience is as if the user had eyes spaced the distance between the cameras. This improves depth perception at greater distances. Distances between converging cameras greater than and less than the distance between eyes can be profoundly useful. Cameras such as tiny sensors close to one another with very short focal lengths can be input to the EXP display subsystem to generate stereoscopic microscopic views.
  • Aircraft with motor-driven, double gimbal-mounted sensors (e.g. cameras and/or microphones) at the front and back of the aircraft and/or on the wingtips can, at the user's direction or based on feedback from the EXP Processor Subsystem and/or the Monitor Subsystem, converge on a location of interest by means of CGAM, for example, to provide useful stereoscopic images at great distances.
  • Data from sensors in or on a vehicle being fed to a user and/or remote or local operator can help that user and/or operator to “become one” with the vehicle.
  • Data from any Sensor Subsystem (such as the one illustrated in FIGS. 5A-5D) can be fed (for example via a Processor Subsystem as illustrated in FIG. 1) to the EXP Display Subsystem illustrated in FIGS. 11A-11C.
  • the EXP display can have multiple display modes. For example, in one mode, a subset of the display can show a constant sweep of the views and provide an indicator as to the direction that the sweep is showing. In a second mode, a subset of the display can show one or more views that the EXP Processor Subsystem has determined to contain events that the user needs to see.
  • Part of the display can be a map showing which part of the visual field is being displayed in a given part of the display. For example, an image of the user's head can be displayed with one or more rays pointing towards the center of the head, each of said rays emanating from an area of interest.
  • Sensors (e.g. cameras and/or microphones, etc.) at a set of stationary locations and/or in or on a set of moving objects can work together to collect real-time sensory data about a set of locations and/or a set and/or range of view angles of a set of locations.
  • a Sensor Database can store or access the present locations of sensors, the present direction of their fields of view and other data necessary and/or useful to determine which sensor and/or sensors will be useful when there is a need to collect sensory data on a set of objects and/or locations from the desired point of view.
  • said Sensor Database can store or access the range of freedom of motion of the field of view of sensors to determine if one or more nearby sensors can be directed to view the set of objects and/or locations of interest. Additionally, said Sensor Database can store or access the speeds and availability of sufficiently proximal mobile sensor platforms relative to said set of objects and/or locations.
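  • A sketch of a Sensor Database query that recruits sensors able to cover a target location follows; the record fields (position, heading, field of view, freedom of motion, range) mirror the data described above, but the names and numbers are illustrative assumptions.

```python
# Sketch of a Sensor Database query: given a target location, return the sensors
# whose position, range, and field of view (current pointing plus freedom of
# motion) allow them to cover it.  The record fields are illustrative.
import math
from dataclasses import dataclass

@dataclass
class SensorRecord:
    name: str
    x: float
    y: float
    heading_deg: float      # current centre of the field of view
    fov_deg: float          # instantaneous field of view
    slew_deg: float         # how far the field of view can be re-aimed
    max_range: float        # useful sensing range

def can_cover(s: SensorRecord, tx: float, ty: float) -> bool:
    dist = math.hypot(tx - s.x, ty - s.y)
    if dist > s.max_range:
        return False
    bearing = math.degrees(math.atan2(ty - s.y, tx - s.x))
    off_axis = abs((bearing - s.heading_deg + 180) % 360 - 180)
    return off_axis <= s.fov_deg / 2 + s.slew_deg

def recruit(sensors: list[SensorRecord], tx: float, ty: float) -> list[str]:
    return [s.name for s in sensors if can_cover(s, tx, ty)]

db = [SensorRecord("pole-cam-3", 0, 0, 45, 60, 90, 200),
      SensorRecord("bus-1214", 150, 40, 180, 90, 0, 80),
      SensorRecord("drone-7", 90, 90, 270, 50, 180, 500)]
print(recruit(db, 120.0, 60.0))
```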
  • CGAM input can be used in the recruitment of multiple sensors (recruited sensors) to provide a stereoscopic view of the location of interest.
  • “access” refers to the process of rapidly finding that information if it is not already stored in the Sensor Database.
  • Tiny sensors floating in the wind, or self-propelled, or dropped from vehicles are additional examples of potentially proximal mobile sensors and/or sensor arrays.
  • a CGAM-based display can be used to “view” that space by moving about an area or volume designated to represent the space. As the user moves about that area or volume, data from single sensors or pairs (or other groups) of sensors can be directed to one or both of the user's eyes to produce a changing 2D image or a changing stereoscopic image of the view from that direction and the selected range. AGAM is also useful for displaying such a space. This space can be actual or virtual or a combination thereof.
  • each element of a subset of such effectors can be recruited to act as a heliostat to redirect energy from a separate source (such as the sun) to one or more areas.
  • Each member of a subset of such effectors can be recruited to direct energy from itself to one or more areas.
  • energy can include, but is not limited to, electromagnetic radiation of one or more frequencies (such as radio, microwave, IR, visible, or UV), collimated or uncollimated (such as from a laser or LED), or acoustic radiation.
  • Coordination of a group of effectors including, but not limited to recruited effectors can result in a device that can behave as a solar concentrator, a display, a microphone and/or many other devices.
  • Each effector can produce a different and varying electromagnetic field strength and/or direction in order for the group to implement complex static or dynamic electromagnetic fields.
  • the embodiments described herein represent mostly visual sensors and effectors. Sound (infrasound, audible sound, and/or ultrasound—through any medium, e.g. air, water, and/or ground) analog embodiments can be quite useful.
  • arrays of directional microphones or array microphones can work together as described herein for visual sensors to localize the source of sound events, to correlate sound events with visual events, and/or to work in reconfigurable groups to “visualize” different locations from different angles and ranges.
  • AGAM analogs can be used to have “ears in the back of the head.”
  • a sound sensor array on the bottom(s) of a set of shoes can be used to detect and locate the origin of one or more sources of sound that has traveled through the ground. This can result in a modern and greatly improved version of the “ears down to the ground” technique.
  • AGAM and CGAM will be useful interfaces for video games with displays worn by gamers like those described herein or displayed on conventional screens.
  • Functions of the EXP Processor Subsystem include (but are not limited to) one or more of the following, and/or combinations thereof:
  • the view of an event of interest can automatically be displayed in a display window.
  • the type of event can be determined and displayed as well as its direction and/or location and the view in which the source is currently best observable.
  • the user's observation of the event of interest or its source can be enhanced by highlighting its vicinity on the display or otherwise directing the user to its location in the displayed view(s).
  • Image enhancement can improve its observation.
  • the integration of data from multiple sensors. For example, visual data from two cameras can be "aligned" for presentation to a user's eyes so as to create a 3D stereoscopic image.
  • the EXP Processor Subsystem can correlate a flash of light and a particular sound signature to determine that a shot has been fired and the type of weapon that fired the shot, and sound and/or light localization algorithms can verify the location and determine the range of the shots.
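  • Once a muzzle flash and its sound signature have been correlated, the flash-to-bang delay gives a range estimate, since the light arrives essentially instantaneously compared with the sound; a minimal worked example follows (the speed-of-sound value is assumed for air at roughly 20 C).

```python
# Sketch: estimating range from the delay between a muzzle flash (seen in a
# camera frame) and its sound signature (heard later at a microphone).
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def range_from_flash_to_bang(flash_time_s: float, bang_time_s: float) -> float:
    """Return estimated distance to the source in metres."""
    delay = bang_time_s - flash_time_s
    if delay <= 0:
        raise ValueError("sound cannot arrive before the flash")
    return SPEED_OF_SOUND * delay

# Example: the sound arrives 0.70 s after the flash -> roughly 240 m away.
print(round(range_from_flash_to_bang(12.30, 13.00), 1))
```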
  • Input to an EXP Processor Subsystem comprises data from one or more, local and/or distant sensors, and/or computed and/or stored information from one or more local and/or distant sources.
  • Output from the EXP Processor Subsystem can be displayed locally to the user of a wearable sensor array and/or other users of wearable displays (local and/or remote) and/or others simply viewing displays (local and/or remote).
  • Output from the EXP Processor Subsystem can be displayed locally to the driver and/or passengers of a vehicular-based sensor array and/or to other users of vehicular-based sensor arrays local and/or remote and/or others.
  • Others local and/or remote can comprise security personnel and/or systems who can determine more about the motives and/or activities of the user of a wearable or vehicular sensor array by seeing what said user is seeing and/or is available from the user's sensor array.
  • the view inside of a vehicle can be transmitted to a police car that has stopped the vehicle. This can help the police officer to determine the risk or lack thereof of approaching the vehicle. Said views can be transmitted to the police officers' supervisors so that they can assist the police officer to make better decisions. Another example is for these views to be transmitted to a security checkpoint so that the checkpoint personnel can view the actions of the vehicle's occupants.
  • Others local and/or remote can comprise personnel and/or systems that can monitor views that the user is not viewing, evaluate the views available to his sensor array, and/or use information from other sources such as other sensor arrays to: advise the user; intervene in the situation; and/or direct the user to change his situation.
  • All, or parts, of the EXP Processor Subsystem can be local with respect to a subset of the sensor array. All, or parts, of the EXP Processor Subsystem can move as a subset of the sensor array moves.
  • All, or parts, of the EXP Processor Subsystem can be local and/or remote with respect to a subset of the sensor array.
  • All, or parts, of the EXP Processor Subsystem can be mobile.
  • a subset of the EXP Processor Subsystem can comprise a mobile device, such as a mobile phone or tablet.
  • a subset of the EXP Processor Subsystem can be worn or carried by a person.
  • a subset of the EXP Processor Subsystem can be built into or carried in a vehicle.
  • a subset of the sensor array's communication with a subset of the EXP Processor Subsystem can be routed by direct wiring.
  • a subset of the sensor array's communication with a subset of the EXP Processor Subsystem can be routed wirelessly via short-range communications (e.g. Bluetooth, IR, WiFi, etc.), longer-range communications such as cell tower communications, and/or long-range communications such as satellite, ULF, or radio communications.
  • Aperture synthesis processing can treat a subset of a visual or auditory array as one big antenna.
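  • One concrete form of array processing in this spirit is a delay-and-sum beamformer, which steers a microphone array toward a chosen direction; the geometry and sample rate in the sketch are illustrative assumptions, and this is only one of many aperture-synthesis-style techniques.

```python
# Sketch: a simple delay-and-sum beamformer, one way of treating a microphone
# array as a single larger aperture steered toward a chosen direction.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals: np.ndarray, mic_x: np.ndarray,
                  steer_deg: float, fs: float) -> np.ndarray:
    """signals: (num_mics, num_samples) array from a linear array along x.
    Returns the array output steered toward steer_deg (0 = broadside)."""
    delays = mic_x * np.sin(np.radians(steer_deg)) / SPEED_OF_SOUND   # seconds per mic
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, shift in zip(signals, shifts):
        out += np.roll(sig, -shift)            # integer-sample alignment for simplicity
    return out / len(signals)

# Example: 4 microphones 5 cm apart, 16 kHz sampling, steered 30 degrees off broadside.
fs = 16000.0
mic_x = np.arange(4) * 0.05
signals = np.random.default_rng(0).normal(size=(4, 1024))
print(delay_and_sum(signals, mic_x, steer_deg=30.0, fs=fs).shape)
```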
  • the user and/or any of the observers of EXP Sensor Subsystem data (processed or raw), the EXP Processor Subsystem data, and/or the EXP Monitoring Subsystem data that are suitably authorized can trigger an intervention. Said intervention can be real time.
  • Said intervention can comprise providing advice, commands, and/or information to the user and/or any of the observers of EXP Sensor Subsystem data (processed or raw), EXP Processor Subsystem data, and/or an EXP Monitoring Subsystem.
  • Said intervention can comprise providing advice, commands, and/or information to others. For example, police, fire, rescue, ambulance, military, news and/or others might be informed to intervene in a situation.
  • the intervention can comprise information through a local loud speaker and/or a mobile device audible and/or visible to a party or parties local and/or remote who can intervene in the situation. Otherwise uninvolved persons and/or systems that are nearby can be recruited to intervene.
  • Intervention can comprise the activation of an automated response by a user and/or any of the observers of EXP Sensor Subsystem data (processed or raw), the EXP Processor Subsystem data, and/or the EXP Monitoring Subsystem.
  • EXP 1000 in car 1051 and is intended primarily to protect the rights of the driver.
  • Video and audio data of car 1051 's surroundings and interior are captured by EXP 1000 's sensors and transmitted to the EXP Processor of EXP 1000 .
  • the data from the EXP 1000 can be streamed to a remote monitoring service (Monitor Subsystem) via the mobile device's cellular connection.
  • EXP 1000 can detect, document and report to a monitoring service and/or car 1051 's user(s) and/or insurance company and/or authorities a crash or other damage accidental or intentional to car 1051 .
  • EXP 1000 can also detect, document and report to a monitoring service and/or car 1051 's user(s) and/or insurance company and/or authorities any unauthorized entry into the vehicle. As such, EXP 1000 can serve as an alarm system. This can help reduce car theft and hijackings and expedite the recovery of car 1051 in the unlikely event that it is stolen or hijacked.
  • a device plugged into car 1051 's OBD II port can wirelessly, or by wire, transmit information about car 1051 including speed information to the mobile device acting as the EXP Processor for EXP 1000 .
  • GPS and acceleration data can be captured by functions internal to that mobile device. This data from the EXP 1000 can be streamed to a remote monitoring service via the mobile device's cellular connection.
  • this data can be used to verify or deny the veracity of an accusation by a police officer.
  • Data from car 1051 's state of registration can be downloaded via the mobile device's cellular connection from that state's department of motor vehicles indicating its current registration status and emissions test status to further confirm or deny possible charges regarding the status of those variables.
  • the monitoring service can, in real time, observe and record the interaction between the police officer and the driver and passengers of car 1051 .
  • An agent of the monitoring service (with the streamed data in front of him), for example a lawyer, can then discuss the situation with the police officer and potentially provide data that confirms or denies the officer's accusation(s) via the EXP Processor's mobile device or microphone/loud speaker 1052 .
  • Microphone/loud speaker can be temporarily or of a longer duration affixed to car 1051 or can be integrated into 1034 . This embodiment can help reduce inaccurate traffic stops and help to maintain a civil interaction between the police officer and driver of car 1051 .
  • EXP 1000 can also document and record road rage occurrences. Car 1051 if clearly marked as having an EXP system such as this is much less likely to be involved in theft, hijacking, road rage events, or inaccurate traffic stops.
  • the view of the interior of car 1051 can be transmitted to a display viewed by the police officer and/or his supervisor(s) to make the police officer's job safer.
  • The Intervention Subsystem can be used to monitor and report on other vehicles in the view of the EXP System's Sensor Subsystem.
  • An EXP Sensor Subsystem can be used to sense the tags of vehicles within its view.
  • The EXP Processor Subsystem can interpret the sensor data to recognize the tag numbers of said vehicles.
  • The tag number information can be compared with data downloaded locally, or said tag data can be uploaded to another system to be compared with information located there.
  • Said information can comprise information about tag numbers of interest to authorities, such as those of: stolen vehicles, vehicles involved in crimes (including, but not limited to, Amber Alerts), vehicles with unpaid parking tickets, vehicles with expired registrations, and vehicles owned by people with outstanding warrants.
  • The EXP System can compare its vehicle's speed data (from said vehicle's computer) to visual information about other vehicles' tags to determine the speed of said other vehicles (see the sketch following this list). That data can be combined with GPS data to determine the legal speed limit at the location of those other vehicles. In this way, the EXP System can determine if viewed vehicles are exceeding the speed limit. When a violator has been detected, that information, plus information regarding the time, location and direction of the offending vehicle, can be transmitted to the police for them to take the appropriate action. Alternatively, the EXP System can transmit its evidence of the offense to the appropriate authorities so that they can automatically generate a ticket.
  • EXP Processor Subsystem software can detect other offenses such as reckless driving, road rage, carjackings, impaired driving, running stop signs or lights, violations of school bus laws, etc.
  • EXP Systems can be in police cruisers or other vehicles. When such an EXP System is in an individual's vehicle, it can be useful for the authorities to reward the EXP System's owner with a bounty or a reduction in the owner's vehicle registration fees.
  • An EXP System can record crashes to determine who was responsible.
  • An EXP System can also record activities inside a car to provide an objective record of what has occurred inside a vehicle, for example to determine the veracity of claims such as allegations of date rape.
  • An EXP System in a car might utilize elements of a self-driving or otherwise automated car as EXP subsystems.
  • An EXP System in a car might utilize elements of a car equipped with one or more built-in cameras.
  • Data from more than one vehicle can be integrated to provide a “view” greater than is possible with data from just one vehicle.
  • Real-time data integration from multiple vehicles can track a given vehicle in real time, and non-real-time data from multiple vehicles can provide information about the location history of an otherwise non-cooperating vehicle at any given time in the past. This can be used to provide a list of suspects in a crime or to prove the innocence of others.
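  • A minimal sketch (Python; the function names and constants are illustrative, and a flat-earth approximation stands in for proper geodetic math) of one way the speed of an observed vehicle could be estimated from two timestamped tag sightings combined with the observing vehicle's own GPS fixes, as described above:

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # flat-earth approximation, adequate over short ranges

def tag_position(own_lat, own_lon, range_m, bearing_deg):
    """Project a range/bearing sighting of a licence tag from our own GPS fix
    to an absolute latitude/longitude."""
    d_north = range_m * math.cos(math.radians(bearing_deg))
    d_east = range_m * math.sin(math.radians(bearing_deg))
    return (own_lat + d_north / METERS_PER_DEG_LAT,
            own_lon + d_east / (METERS_PER_DEG_LAT * math.cos(math.radians(own_lat))))

def estimate_target_speed_kmh(sighting_a, sighting_b):
    """Each sighting is (t_seconds, own_lat, own_lon, range_m, bearing_deg) for the
    same tag. Returns the observed vehicle's ground speed in km/h."""
    t1, lat1, lon1, r1, b1 = sighting_a
    t2, lat2, lon2, r2, b2 = sighting_b
    p1, p2 = tag_position(lat1, lon1, r1, b1), tag_position(lat2, lon2, r2, b2)
    d_north = (p2[0] - p1[0]) * METERS_PER_DEG_LAT
    d_east = (p2[1] - p1[1]) * METERS_PER_DEG_LAT * math.cos(math.radians(p1[0]))
    return math.hypot(d_north, d_east) / (t2 - t1) * 3.6

# Two sightings of the same tag, 2 s apart: the target covered ~50 m, i.e. ~90 km/h.
print(estimate_target_speed_kmh((0.0, 38.9000, -77.03, 30.0, 10.0),
                                (2.0, 38.9004, -77.03, 35.0, 5.0)))
```

  • In practice the range and bearing of the tag would come from the EXP Processor Subsystem's image analysis of the Sensor Subsystem data.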

Abstract

This invention relates to an Extended Perception System that extends the perception of an object's surroundings. Three closely related primary sets of embodiments of this invention include: a first set of embodiments mounted on a set of one or more vehicles; a second set of embodiments worn by a set of one or more persons and/or animals; and a third set of embodiments located in a position fixed with respect to terrestrial and/or other features. Improvements to head-mounted displays are disclosed.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to U.S. Provisional Application No. 62/177,498, filed Mar. 16, 2015, by Jesse Clement Bunch, entitled “Extended Perception System 1.0”, the disclosure in which is incorporated herein in its entirety by this reference.
  • FIELD OF THE INVENTION
  • This invention relates to an Extended Perception System that extends the perception of an object's surroundings. Three closely related primary sets of embodiments of this invention include: a first set of embodiments mounted on a set of one or more vehicles; a second set of embodiments worn by a set of one or more persons and/or animals; and a third set of embodiments located in a position fixed with respect to terrestrial and/or other features. Improvements to head-mounted displays are disclosed.
  • Art related to certain elements of this invention includes security systems with multiple cameras and one display, radio telescope arrays, array microphones, Esurance's DriveSense, OnStar, black boxes on aircraft, elements of self-driving car technology, 360° car camera systems, dashboard camera systems (such as the Falcon Zero HD Car Rear-View Dash Cam), and head-mounted displays, such as: Google Glass, Oculus Rift, Microsoft HoloLens, Carl Zeiss Cinemizer, and Sony SmartEyeGlass.
  • SUMMARY OF THE INVENTION
  • This invention relates to an Extended Perception System (EXP) that extends the perception of an object's surroundings. Herein “perception” comprises sensing, processing, monitoring, storage of data, provision of intervention regarding the results of any operation(s) on the data and/or any combination thereof. Herein, an “object” can comprise a single object, (for example a vehicle and/or a person), a set, or sets of objects. Herein “surroundings” comprises any area and/or group of areas relevant to said object. As such, said “object” might be local or remote from said “surroundings”.
  • Some embodiments provide a multidirectional view of said object's surroundings. Herein, a “view” can comprise a representation and/or representations of the surroundings resulting from any form of radiant energy emanating from, reflected from, refracted through, and/or diffracted about said surroundings and/or any combination thereof and/or other types of information, including but not limited to other types of data described herein. Herein, “view” comprises view as specified in the previous sentence, in addition to representations derived from previously stored data about the surroundings. Herein, “view” also comprises state data about the object, such as its location and orientation in a coordinate system.
  • Relative to the object, the perception can be local, remote, and/or any combination thereof. For example, the sensors of the data, the processing of the data, the monitoring of the data, the provision of intervention, and/or the recording of the data can be done locally, remotely and/or any combination thereof. Herein “sensor” refers to a set of one or more sensors and/or virtual sensors.
  • Three closely related primary sets of embodiments of this invention include: a first set of embodiments mounted on a set of one or more vehicles; a second set of embodiments worn by a set of one or more persons and/or animals; and a third set of embodiments located in a position fixed with respect to terrestrial and/or other features.
  • EXP Systems can comprise: one or more Sensor Subsystem(s); and/or one or more Processor Subsystem(s); and/or one or more Display Subsystem(s); and/or one or more Monitor Subsystem(s); and/or one or more Intervention Subsystem(s); and/or one or more Storage Subsystem(s). Some EXP embodiments may comprise only a single one of the elements listed in the previous sentence.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 illustrates an overview of the Sensor-based embodiments of the instant invention.
  • FIG. 2 illustrates an overview of the Platform-based embodiments where the Display Subsystem(s) get the information to display from a Platform such as a Gaming device and/or a computer that generate video display information.
  • FIG. 3 illustrates a high lens-to-camera ratio array.
  • FIG. 4 illustrates an Extended Perception System on the exterior of a car.
  • FIGS. 5A-5D illustrate an Extended Perception System for use in the interior of a car.
  • FIGS. 6A and 6B illustrate an alternative Extended Perception System for use in the interior of a car.
  • FIGS. 7A and 7B illustrate a second alternative Extended Perception System for use in the interior of a car.
  • FIGS. 8A-8C illustrate an Extended Perception System embodiment that can be worn by a person.
  • FIGS. 9A-9C illustrate an Extended Perception System with a head-mounted display and an integrated sensor subsystem.
  • FIGS. 10A-10C illustrate an alternative Extended Perception System with a head-mounted display and an integrated sensor subsystem.
  • FIGS. 11A-11C illustrate an Extended Perception System with a head-mounted display.
  • FIG. 12 illustrates an Extended Perception System with a head-mounted display and an integrated sensor subsystem employing a rotating focusing mirror.
  • FIGS. 13A-13B illustrate an Extended Perception System Sensor Subsystem 2900 employing a rotating pair of cameras.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an overview of the Sensor-based embodiments of the instant invention. At least one Sensor Subsystem feeds sensor data to at least one Processor Subsystem. The Processor Subsystem(s) optionally can regulate the Sensor Subsystem(s). The Processor Subsystem(s) can include a short-term loop that temporarily stores a record of the data from the Sensor Subsystem(s). The Display Subsystem(s) potentially can display information from a subset of the other subsystems to the User. In a subset of embodiments, the User can select what information is displayed by the Display Subsystem(s) at a given time. The Monitor Subsystem(s) comprises one or more non-user observers of the situation. There are cases where the Monitor Subsystem comprises the User. The Monitor Subsystem(s), the User, and/or the Processor Subsystem(s) can activate the Intervention Subsystem(s) to intervene so as to influence the outcome of the situation. The Storage Subsystem(s) stores data for a useful period of time.
  • The data paths shown represent potential data paths, not necessarily essential data paths. They represent logical data flow; in practice, data paths might bypass particular subsystems. For example, the User might be able to communicate directly with the Intervention Subsystem(s) without first passing through a Processor Subsystem.
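  • A minimal sketch (Python; all class and method names are hypothetical) of the logical data path of FIG. 1, including the Processor Subsystem's short-term loop that temporarily stores recent sensor data:

```python
from collections import deque

class SensorSubsystem:
    def read(self):
        # Placeholder: a real system would return camera frames, audio, etc.
        return {"frame": "raw-sensor-frame"}

class ProcessorSubsystem:
    def __init__(self, loop_seconds=30, fps=1):
        # Short-term loop: a bounded buffer that temporarily stores recent data.
        self.loop = deque(maxlen=loop_seconds * fps)

    def process(self, sample):
        self.loop.append(sample)
        return {"display": sample, "alert": False}  # trivial pass-through

class DisplaySubsystem:
    def show(self, view):
        print("displaying:", view)

class InterventionSubsystem:
    def intervene(self, message):
        print("intervention:", message)

# One pass through the logical data path of FIG. 1.
sensor, proc = SensorSubsystem(), ProcessorSubsystem()
display, intervention = DisplaySubsystem(), InterventionSubsystem()
result = proc.process(sensor.read())
display.show(result["display"])
if result["alert"]:
    intervention.intervene("operator attention required")
```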
  • FIG. 2 illustrates an overview of the Platform-based embodiments where the Display Subsystem(s) get the information to display from a Platform such as a Gaming device and/or a computer that generates videos. User feedback to the Platform can alter what information is displayed by the Display Subsystem(s). The source of information to the Display Subsystem(s) can be switched between a set of Platforms, between a set of Sensor Subsystem(s), from a set of Sensor Subsystems to a set of Platforms, and/or from a set of Platforms to a set of Sensor Subsystems.
  • EXP Sensor Subsystem
  • Input to sensors can comprise transverse waves such as electromagnetic radiation, comprising, for example, one or more of X-ray, UV, visible, IR, microwave, terahertz, and/or radio waves. Sensor input can comprise longitudinal waves such as sound, comprising, for example, one or more of infrasound, audible sound, ultrasound, seismic waves, and/or sound passing through the ground or another solid, a liquid, a gas, a plasma, and/or any mixture or solution of any combination of these. Sensor input can comprise olfactory information (identification of chemical composition and/or gradients dissolved in a gas and/or liquid). Sensor input can comprise haptic data. Other sensor input can comprise “state of the user (or another object)” information, such as his or its: location, acceleration, velocity, orientation in space, and/or the values of the user's health and/or functionality, comprising variables normally under homeostatic control (e.g., temperature, CO2 concentration in blood, heart rate, etc.) or any other detectable information. Some information regarding the state of a vehicle can be accessed, for example, by plugging into the OBD-II port of said vehicle. Sensor input can be real and/or virtual.
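  • One possible way to represent such heterogeneous sensor input in software is a single record type that carries the modality, the payload, and any “state of the user (or another object)” variables; the following Python sketch is illustrative only and its field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SensorReading:
    modality: str              # e.g. "visible", "IR", "ultrasound", "olfactory", "haptic"
    timestamp: float           # seconds since epoch
    payload: Any               # image frame, waveform, chemical gradient, etc.
    state: Dict[str, float] = field(default_factory=dict)  # location, velocity, heart rate, ...

reading = SensorReading(
    modality="visible",
    timestamp=1_700_000_000.0,
    payload=b"...jpeg bytes...",
    state={"lat": 38.9, "lon": -77.03, "speed_kmh": 42.0, "heart_rate": 71.0},
)
```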
  • FIG. 2 illustrates a very simple EXP where the Display Subsystem receives the data to be displayed from a Platform Subsystem, said Platform Subsystem comprising, for example, a gaming platform such as a PS4 or an Xbox One. Such data to be displayed in this embodiment typically represents data generated by a video game. The Display Subsystem can include controls that can provide input to the Platform Subsystem to affect what it displays.
  • FIG. 3 illustrates 300, a high lens-to-camera ratio array (HLCRA). Device 300 comprises a linear array of lenses 310. Some of those lenses 310, designated 315, focus on a light-sensing array, such as a CCD, to form a camera. Many of the lenses 310 do not focus on a light-sensing array and thus are not components of actual cameras. The ratio of total lenses 310 to those focusing to a light-sensing array 315 is typically high (over 2:1) often at least 10:1. There is typically no obvious difference in the casual appearance of a lens that does not focus to a camera and a lens 315 that does. The lens array 300 can be molded as a single unit or the lenses can be made individually. The location of the actual cameras on an HLCRA 300 is random or pseudo-random or some other pattern that makes it difficult for an observer to determine which lens (or lenses) is a part of an actual camera and which are pseudo-cameras. The purpose of HLCRA 300 is to have a minimum number of actual cameras (to reduce the cost) while making it difficult for someone to defeat the system by covering specific lenses. It will be much more difficult to cover an extended array of lenses than a small number of easy-to-identify lenses. If someone were to cover an entire array of lenses, it would be clear that they were attempting to prevent the monitoring and/or recording of the situation . . . a clear indication of negative intent. If someone could cover the entire array, their image would very likely already have been recorded. The HLCRA cameras can be connected to an EXP Processor Subsystem by wires in the back of the array or wirelessly via Bluetooth or another wireless communication system. The normals (the centers of the fields of view) of the cameras can be parallel to one another and perpendicular to the overall surface of HLCRA 300. The normals of one or more of the actual cameras can be tilted vertically to provide a greater vertical field of view, forward or backward to provide a greater horizontal field of view, or any combination thereof. In this case, the normals of pseudo-cameras will also be tilted so the actual cameras cannot be discerned from the pseudo-cameras. The HLCRA 300 can be on a flexible substrate that contains the communication wires (when present). The back can have a peel-off strip revealing an adhesive surface for easy application to the surface on which HLCRA 300 is to be mounted. In this embodiment, the cameras are shown as fixed with respect to the array. In general, the array can be moved relative to the surface it is mounted on and/or individual subsets of the cameras and/or pseudo-cameras can be moved relative to one another.
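  • A minimal sketch (Python; the function name and the 10:1 ratio are illustrative) of assigning real cameras to pseudo-random positions within an HLCRA so that they cannot easily be distinguished from the pseudo-cameras:

```python
import random

def assign_real_cameras(n_lenses, n_cameras, seed=None):
    """Choose which lens positions in an HLCRA hold real cameras. The high
    lens-to-camera ratio keeps cost down, while the pseudo-random placement
    makes the real cameras hard for an observer to identify and cover."""
    rng = random.Random(seed)
    real = set(rng.sample(range(n_lenses), n_cameras))
    return [i in real for i in range(n_lenses)]

# 40 lenses with 4 real cameras gives a 10:1 lens-to-camera ratio.
layout = assign_real_cameras(40, 4, seed=7)
print([i for i, is_real in enumerate(layout) if is_real])
```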
  • FIG. 4 illustrates the use of an Extended Perception System on the exterior of car 410. HLCRA 420 is shown mounted on the front region of the driver's side of the car. HLCRA 430 is shown mounted on the middle upper region of the driver's side of the car. HLCRA 440 is shown mounted on the rear region of the driver's side of the car. One, two or all of these HLCRAs can be used, as long as a combination of the cameras 315 in the HLCRA(s) can view the entire region on its side of the car. Likewise, similar HLCRAs can be mounted on the passenger's side of the car and on the front and rear of the car. Herein, “side” refers to the 6 directions defined by the outfacing normals of a rectangular prism resting on the ground: front, back, left and right laterals, top and bottom. HLCRA(s) can also be mounted on any subset of the sides of the car. The HLCRA(s) can be built into a new car 410 or added on later. Similar HLCRAs can be mounted in the trunk or other locations in or on the car 410.
  • FIGS. 5A-5D illustrate an Extended Perception System 500 for use in the interior of car 510. FIG. 5D illustrates Extended Perception System (EXP) 500 installed in car 510. As illustrated in FIG. 5A, EXP 500 can be mounted such that it is between the driver and front passenger, preferably above the level of the tops of their heads such that at least some of EXP 500's sensors have a clear line of sight above their heads. It is not required that EXP 500's line of sight be above the heads of the driver and passengers because the extended length of the sensor array provides sufficient parallax such that EXP 500 can view and/or monitor all, or nearly all, of the regions to the sides of the car. Visual sensor array 530 can view the space on the driver's side of car 510. Another visual sensor array can view the space on the front passenger's side of the car 510. Visual sensor array 532 can view the space in the front of car 510. Visual sensor array 534 can view the space behind car 510. Four other visual sensor arrays can view the regions between those of visual sensor arrays 530, the passenger side array, 532, and 534. Each of the visual sensor arrays might also be configured to view part of the interior of car 510. Each sensor array can be an HLCRA or just an array of one or more sensors. Audio sensor arrays 540, 542, 544 (and the five others) are shown below their respective visual sensor arrays. Mounting 520 is affixed to the interior roof of car 510 and supports the rest of EXP 500.
  • FIGS. 5B-5C illustrate 550, an optional inner surface of EXP 500. Mounting 520 supports arms 525 that support the rest of 550. Inner Surface 550 views downward with normals oriented between vertical and horizontal. Cameras 560, 562, 564, 566 and the four between each consecutive pair of these provide EXP 500 views of the interior of car 510 including, but not limited to what the driver and passengers are doing. These views can be transmitted to another observer (as described elsewhere herein) and can, for example, be used to assure that observer that the driver and passengers of car 510 are not a threat to said observer or to others.
  • To better detect sounds outside of a closed vehicle, microphones can be mounted on the outside of said vehicle and/or a sound detection system inside the car can be employed that reflects laser light off the windows to detect the sound vibrations in the glass (as is known in the art). By reflecting the laser light off the surfaces of multiple windows and/or multiple locations on a single window, the reflected light information can be used to capture the sound and, by measuring the delays between the same sound reaching those different windows and/or locations, that information can be used to locate the source of that sound. One or more glass (or other material) laser targets can be used with synthetic-aperture hardware and/or software to achieve the benefits of that technology.
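  • A minimal sketch (Python; a brute-force grid search is used only for clarity, and the pickup geometry is illustrative) of locating a sound source from the arrival-time differences measured at several laser pickup points, as described above:

```python
import math
from itertools import product

SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def locate_source(pickups, arrival_times, grid_step=0.25, extent=30.0):
    """Brute-force time-difference-of-arrival fit in 2D. `pickups` are the known
    (x, y) positions of the laser pickup points (e.g. spots on different windows);
    `arrival_times` are the measured arrival times of the same sound at each pickup."""
    ref_t = arrival_times[0]
    best, best_err = None, float("inf")
    steps = int(2 * extent / grid_step) + 1
    for i, j in product(range(steps), repeat=2):
        x, y = -extent + i * grid_step, -extent + j * grid_step
        d0 = math.hypot(x - pickups[0][0], y - pickups[0][1])
        err = 0.0
        for (px, py), t in zip(pickups[1:], arrival_times[1:]):
            predicted_delay = (math.hypot(x - px, y - py) - d0) / SPEED_OF_SOUND
            err += (predicted_delay - (t - ref_t)) ** 2
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Four pickup points and arrival times consistent with a source near (10, 5) metres.
pickups = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.5), (2.0, 1.5)]
src = (10.0, 5.0)
times = [math.hypot(src[0] - px, src[1] - py) / SPEED_OF_SOUND for px, py in pickups]
print(locate_source(pickups, times))
```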
  • Other vehicular embodiments can provide cameras in the trunk and/or other areas so that it would be harder for others to plant contraband in the trunk area and/or those other areas.
  • FIGS. 6A and 6B illustrate an alternative Extended Perception System (EXP) 900 for use in the interior of car 910. FIG. 6B illustrates Extended Perception System 900 installed in car 910. These embodiments are designed to reduce the EXP's cost by employing a mobile device to collect the sensor data and transmit it. FIG. 6A illustrates an embodiment where the mobile device is embedded in the EXP 900. Mounting 920 is affixed to the interior roof of car 910 and supports housing 934. Opening 936 in housing 934 supports adapter tray 938. Adapter tray 938 supports the mobile device in such a way that the mobile device will be held securely in opening 936 and aligns one of the mobile device's camera lenses with EXP 900's terminal optics 950. EXP 900 optics 970 redirect light rays 962 from EXP 900's surroundings to the paths 964. An app on the mobile device can transform the image, distorted by EXP's optics, into standard images. This stream can then be input to an app on the mobile device such as Meerkat or Periscope to broadcast images of the surroundings via a cellular network. Alternatively, the broadcast app can transmit the distorted image that can then be corrected after transmission. The image stream can be used for perception (such as, displayed locally and/or sent to an EXP Monitoring Subsystem and/or a storage subsystem). EXP 900 can be linked to the electrical system of car 910 to keep the mobile device that is being used by EXP 900 charged. The mobile device can be controlled or used for other functions via voice control. EXP 900's optional EXP display subsystem can have a mode where it displays the image of the mobile device's display for any of its functions.
  • FIGS. 7A and 7B illustrate a second alternative Extended Perception System (EXP) 1000 for use in the interior of car 1010. FIG. 7B illustrates Extended Perception System 1000 installed in car 1010. Like EXP 900, this embodiment is designed to reduce the EXP's cost by employing a mobile device to collect the sensor data and transmit it. FIG. 7 illustrates an embodiment where the mobile device is located separately from EXP 1000. Mounting 1020 is affixed to the interior roof of car 1010 and supports housing 1034. EXP 1000 optics 1070 redirect light rays 1062 to the paths 1064. Light rays 1064 are redirected via EXP 1000's camera 1050. A separate unit mounted on, or near, car 1010's dash supports the mobile device. Said separate unit can be connected to car 1010's electrical system, such that when the mobile device is plugged therein, its battery can be charged. Camera 1050's output is transmitted to the mobile device via wire or wirelessly. An app on the mobile device can transform the image, distorted by EXP's optics, to normal images. This stream can then be input to an app, such as Meerkat or Periscope, on the mobile device to broadcast images of car 1010's surroundings via a cellular network to a Monitor Subsystem. Alternatively, the broadcast app can transmit the distorted image that can then be corrected after transmission. The image stream can be used for perception (such as, displayed locally and/or sent to an EXP Monitoring Subsystem and/or a storage subsystem). The mobile device's display can be used as the entirety, or part of, the EXP 1000 display subsystem. One of multiple optional lasers 1080 reflects off a window (or windshield) to collect sound information to be processed, for example, in the mobile device then made audible to the driver and/or passengers of car 1010 and/or to be uploaded to the Monitor Subsystem. Likewise, data from car 1010's computer can be accessed via its OBD II port and sent wirelessly or via wire to the mobile device. Said Monitor System, for example, can be one or more individuals in a distant location, observing and/or recording what the EXP sensing subsystem is perceiving.
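  • A minimal sketch of the kind of remapping such an app could perform to turn the annular image produced by the EXP optics into a flat panorama; it assumes the OpenCV and NumPy libraries are available, and the function name, image geometry, and radii are hypothetical:

```python
import numpy as np
import cv2  # OpenCV is assumed here; any library with an image remap would do

def unwrap_annular_frame(img, center, r_inner, r_outer, out_w=1440, out_h=240):
    """Unwrap an annular (mirror-distorted) 360-degree capture into a flat
    panorama: output columns correspond to azimuth, rows to radius/elevation."""
    angles = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_inner, r_outer, out_h)
    # For each output pixel, compute where to sample the distorted source image.
    map_x = (center[0] + np.outer(radii, np.cos(angles))).astype(np.float32)
    map_y = (center[1] + np.outer(radii, np.sin(angles))).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

# Example (hypothetical file and geometry):
# frame = cv2.imread("annular_frame.jpg")
# panorama = unwrap_annular_frame(frame, center=(960, 540), r_inner=120, r_outer=500)
```

  • The corrected panorama could then be handed to the broadcast app, or the broadcast app could transmit the distorted frame and this correction could be applied after transmission, as described above.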
  • An embodiment of EXP 1000 can be temporarily assigned to a vehicle. Mount 1020, for example, can contain a magnet powerful enough to secure EXP 1000 to the interior ceiling of a vehicle as the roofs of most vehicles are mostly steel, which is ferromagnetic. In this case, the data can be wirelessly transmitted to a Monitoring Subsystem such that safely located monitors can observe the actions in the interior of the vehicle and hear what is occurring therein and observe where the vehicle is and where it is going. This function can be enhanced by the incorporation of a GPS unit in EXP 1000.
  • EXP 900 and EXP 1000 observe 360° (as viewed from above) around the car 910 or 1010 respectively. Because the EXP System observes the rear of the vehicle, the EXP System can replace the rear view mirror.
  • FIGS. 8A-8C illustrate EXP 600, an embodiment that can be worn by a person. In this embodiment, visual sensors are mounted on a hat. Alternatively, said sensors can be mounted on a helmet, glasses, a flexible net, and/or any other structure worn on the head and/or any other part of the body. Visual sensors 620, 630, 632, and 644 can view the direction in front of the person's head. Visual sensors 622, 632, 634, and 636 can view to the side left of the person's face. Visual sensors 626, 640, 642, and 644 can view to the side right of the person's face. Visual sensors 624, 636, 638, and 640 can view the direction behind the person's face. This allows the user to monitor what is going on behind him, like having eyes in the back of one's head. Visual sensors 620, 622, 624, and 626 can view the space approximately normal to the plane defined by the eyes and ears. Thus, when a user has his head in the position typically associated with walking, visual sensors 620, 622, 624, and 626 view the space above the user's head.
  • Note that a subset of the sensors in all EXP embodiments can be movable and/or fixed depending on what is most cost-effective in a specific application. This includes, but is not limited to, vehicle-mounted, head-mounted sensors and/or recruited sensors. Directional sensors can be directed towards the target of interest.
  • EXP Display Subsystem
  • Independent of the sensor input type, the sensor output can be displayed visually, auditorily, olfactorily, gustatorily, haptically, kinesthetically, and/or by any means perceivable by the user. The user can be human, animal, another natural organism, and/or artificially intelligent. Data displayed as olfactory information can be very useful to dogs, and even more useful to sharks, whose brain structures for evaluating olfactory information are comparable to those that humans use to evaluate visual information.
  • Below each sensor in EXP 600 can be a vibration device that informs the user when there is a stimulus of potential importance in the field of view of that sensor. Said vibration can inform the user that he should consider looking in that direction to get more information on the stimulus. If said stimulus is in the field of view of multiple sensors, an EXP Processor Subsystem built into brim 650 can deploy different response algorithms depending upon the nature of said stimulus. For example, a stimulus deemed to be of moderate importance might only trigger vibration in the sensor which has that visual stimulus closest to the center of its “field of view”, or multiple sensors if it is relatively equidistant from said multiple sensors. A stimulus deemed to be of greater importance might trigger a vibration under all of the visual sensors that can view it. Different simultaneous stimuli of interest can be distinguished by different frequencies of vibration. Different degrees of potential importance of stimuli can be mapped to different intensities of the vibrations. As the user moves his head, the locus of the vibrations changes to indicate the direction of the stimulus relative to the user's current head orientation.
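  • A minimal sketch (Python; the thresholds, field-of-view assumption, and importance scale are illustrative) of mapping a stimulus bearing and importance to per-sensor vibration intensities, in the spirit of the response algorithms described above:

```python
def vibration_pattern(stimulus_bearing_deg, importance, sensor_bearings_deg):
    """Map a stimulus bearing and importance (0..1) to per-sensor vibration
    intensities. Moderate stimuli vibrate only the sensor(s) nearest the
    stimulus; important stimuli vibrate every sensor that can see it."""
    def angular_gap(a, b):
        return abs((a - b + 180) % 360 - 180)

    gaps = [angular_gap(stimulus_bearing_deg, b) for b in sensor_bearings_deg]
    if importance < 0.5:
        nearest = min(gaps)
        active = [g <= nearest + 5 for g in gaps]   # near-equidistant sensors share it
    else:
        active = [g <= 90 for g in gaps]            # assume roughly 180-degree fields of view
    intensity = min(1.0, max(0.0, importance))
    return [intensity if a else 0.0 for a in active]

# Eight sensors spaced every 45 degrees; a moderately important stimulus at 100 degrees.
print(vibration_pattern(100, 0.4, [0, 45, 90, 135, 180, 225, 270, 315]))
```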
  • Another means to inform the user of a stimulus of interest is by auditory feedback. For example, the EXP Processor Subsystem can create auditory signals, fed to the user's ears separately and tailored to cause the human auditory system to perceive the direction of the combined signal as the same as the stimulus of interest. Different intensities, frequencies and/or patterns in the auditory feedback can be used to signal different types of information about the stimulus to the user.
  • A means to provide an EXP user with information from one or more EXP sensors is to display that information. For example, said information can be displayed on a display in a fixed location within a car. Different camera views can be displayed in different windows on the display so as to potentially monitor the entire surrounding space simultaneously.
  • Amplified Glance Angle Mode
  • Information from an EXP Sensor Subsystem can be shown on a display that is worn. Such wearable displays can include watches such as the Apple Watch and/or head-mounted devices, such as Google Glass, Oculus Rift, Microsoft HoloLens, Carl Zeiss Cinemizer, and Sony SmartEyeGlass. There are numerous ways known to the art to display such data on head-mounted displays.
  • FIGS. 9A-9C illustrate a head-mounted display 2200 with an integrated sensor subsystem similar to 600. One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2204. Display housing 2204 is secured to the user's head using display housing band 2202.
  • FIGS. 9A-9C also illustrate the optional Amplified Glance Angle Mode (AGAM), a user interface mode, for example, for the display of very wide angle visual data and/or visual representations of non-visual data. AGAM can be used to display EXP Sensor Subsystem data, including EXP Sensor Subsystem data from vehicular-based EXP Sensor Subsystems, worn EXP Sensor Subsystems, fixed EXP Sensor Subsystems and/or other Sensor Subsystems, including but not limited to virtual sensor data. AGAM can be used for the visual display of any type of data, such as data produced from business software, 2D or 3D Views software (see other patent applications by Jesse Clement Bunch), and/or video games. In AGAM, the deflection of the user's glance with respect to the direction immediately in front of the head is detected by a glance angle detection subsystem (GADS). The GADS can comprise a central camera 2292 that provides an image of one or both of the user's eyes to an EXP Processor 2295. All or part of Processor 2295 can be attached to, or built into, display housing 2204 and/or band 2202 and/or located elsewhere. Processor 2295 has eye movement detection software that determines the viewing angle of one or both eyes with respect to straight ahead. Said angle of deflection of the eye (glance angle) is mapped into a deflection of the field of view on the display. For example, a glance angle of 15° to the right can cause the display to show the center of the field of view at a deflection of 90° from the user's face. The user sees in the center of his current field of view the center of what he would be seeing if his head were rotated 90° to the right via the view through camera 2242.
  • Embodiments of 2200 can provide a stereoscopic view by feeding different virtual and/or image information to the user's right eye than is fed to the user's left eye. In FIGS. 9A-9C, glance angle of 15° to the right can cause the display to show a stereoscopic view of what is 90° to the user's right by displaying the data from camera 2244 to his left eye and camera 2240 to his right eye. A glance angle of 15° to the left can cause the display to show a stereoscopic view of what is 90° to the user's left by displaying the data from camera 2236 to his left eye and camera 2232 to his right eye. A glance angle of 30° to the right (or left) can result in a stereoscopic view of what is directly behind the user. This does not alert observers to the direction the user is looking in. In this case, the image from camera 2240 is fed to the user's left eye and the image of camera 2236 is fed to the user's right eye. A glance angle of 15° vertically can cause the display to view the center of the field of view at a vertical deflection of 45° (upper portions of camera 2232's field of view to left eye and upper portions of camera 2244 to right eye) or 30° vertically to cause the display to view the center of the field of view at a vertical deflection of 90° (camera 2222 to left eye and camera 2226 to right eye). As the range of easy motion of each eye is roughly circular, not rectilinear, a combination of smaller glance angles in the vertical and horizontal can provide a vertical view as if the user were facing upwards and backwards. The mappings of glance angle to viewing angle herein are meant to be examples. The relationships between horizontal glance angle and displayed field view and vertical glance angle and displayed field of view need not be the same. Those relationships need not be limited to being linear or fixed. In some embodiments, those relationships can be adjustable by the user. Alternatively, the stereoscopic image can be constructed from two or more images in a processor before being displayed to the user. This applies to all statements in this disclosure regarding feeding one image to one eye and a different image to a second eye. Herein, a “horizontal” glance means a movement of the eyes caused substantially by the medial rectus and lateral rectus muscles of the eye. Herein, a “vertical” glance means a movement of the eyes caused substantially by the superior rectus and inferior rectus muscles of the eye.
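  • A minimal sketch (Python; the gain of 6, the clamp, and the camera layout are illustrative, since the mapping need not be linear or fixed) of the AGAM mapping from glance angle to displayed view angle, together with one simple way of choosing a camera pair for a stereoscopic view:

```python
def agam_view_angle(glance_deg, gain=6.0, max_view_deg=180.0):
    """Amplified Glance Angle Mode: a small eye deflection is amplified into a
    large deflection of the displayed field of view. With gain=6, a 15-degree
    glance centres the display 90 degrees away, and a 30-degree glance shows
    the view directly behind the user."""
    return max(-max_view_deg, min(max_view_deg, glance_deg * gain))

def pick_stereo_pair(view_deg, camera_bearings_deg, baseline_deg=20.0):
    """Choose the two cameras whose bearings best straddle the requested view
    direction; which of the pair feeds which eye depends on the physical layout."""
    left = min(camera_bearings_deg, key=lambda b: abs(b - (view_deg - baseline_deg)))
    right = min(camera_bearings_deg, key=lambda b: abs(b - (view_deg + baseline_deg)))
    return left, right

print(agam_view_angle(15.0))                                        # -> 90.0
print(pick_stereo_pair(90.0, [0, 30, 60, 90, 120, 150, 180]))       # -> (60, 120)
```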
  • The glance angle detection subsystem can comprise camera 2291, which provides an image of the user's right eye to processor 2295, and camera 2293, which provides an image of the user's left eye to processor 2295. When GADS has access to orientation data for both eyes, it can determine both the direction and the range at which the user is directing his attention. This Convergent Glance Angle Mode (CGAM) can be used to determine the location in space that the user wants to focus on. The user can signal the GADS that he wants to magnify the visual and/or audio display from that location by, for example, rapidly blinking twice or tensing the facial muscles around his eyes. Multiple applications of said signal can cause repeated magnifications in the displayed image. A different signal, such as three rapid blinks, can signal a desired decrease in magnification, i.e., to increase the field of view that is displayed. Alternatively, the user's signals to the EXP Display Subsystem can be by a verbal command such as “magnify 10×”, and/or other movements and/or by the tensing of given muscle groups and/or by the detection of the firing of particular neurons and/or sets of neurons in the central and/or peripheral nervous systems.
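  • A minimal sketch (Python; the interpupillary distance and angles are illustrative) of how CGAM could triangulate the fixation point, and hence the range, from the two eyes' glance angles:

```python
import math

def cgam_fixation_point(theta_left_deg, theta_right_deg, ipd_m=0.063):
    """Convergent Glance Angle Mode: intersect the two gaze rays to estimate
    where (direction and range) the user is looking. Angles are measured from
    straight ahead; positive means toward the user's right."""
    xl, xr = -ipd_m / 2.0, ipd_m / 2.0
    tl = math.tan(math.radians(theta_left_deg))
    tr = math.tan(math.radians(theta_right_deg))
    # Each gaze ray: x = x_eye + tan(theta) * y.  Solve xl + tl*y = xr + tr*y.
    if abs(tl - tr) < 1e-9:
        return None  # gaze rays are parallel: looking at (effectively) infinity
    y = (xr - xl) / (tl - tr)
    x = xl + tl * y
    return x, y  # metres: x to the user's right, y straight ahead

# Eyes converging slightly inward on a point roughly 1 m straight ahead.
print(cgam_fixation_point(theta_left_deg=1.8, theta_right_deg=-1.8))
```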
  • Forward looking visual sensors can be used to improve or even replace the direct forward view of the user's unaided eyes. Combinations of the front facing sensors 2220, 2230, 2232 and 2244 can replace or augment the direct view forward through the heads up partially transparent display. The transparency of the heads up display may be incrementally or continuously variable from complete transparency (direct view only, no image from the forward sensors) to partial transparency (a mix of direct view forward and image from forward sensors) to no transparency (no direct forward view, total image from forward sensors).
  • FIGS. 10A-10C illustrate a head-mounted display 2300 with an integrated sensor subsystem that is similar to 2200. In FIGS. 10A-10C, all but the forward facing sensors 2390, 2391, 2393, 2394 are mounted on display housing band 2302. Other cameras comprising the optional sensor subsystem illustrated are cameras 2312, 2322, 2332, 2342, 2352, 2362, 2372, 2382, 2390, 2391 facing horizontally and cameras 2310, 2320, 2330, 2340, 2350, 2360, 2370, 2380 facing vertically upwards and cameras 2314, 2324, 2334, 2344, 2354, 2364, 2374, 2384 facing vertically downwards. The glance angle detection subsystem can comprise a central camera 2392 that provides an image of one or both of the user's eyes to an EXP processor 2395.
  • Glance location can be used to point to a menu location and a signal from the user can trigger the selection of the menu item. Said signal can, for example, be verbal and/or visual (such as a sequence of blinks). A specific signal can directly trigger a command.
  • FIGS. 11A-11C illustrate a head-mounted display 2500, similar to the display subsystem of EXP 2200. The glance angle detection subsystem can comprise camera 2591 which provides an image of the user's right eye to processor 2595 and camera 2592 which provides an image of the user's left eye to processor 2595 or central camera 2593 that provides an image of one or both of the user's eyes to an EXP processor 2595. One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2504. Display housing 2504 is secured to the user's head using display housing band 2502.
  • Glance location can be used to point to a menu location and a signal from the user can trigger the selection of the menu item. Said signal can, for example, be verbal and/or visual (such as a sequence of blinks). A specific signal can directly trigger a command.
  • Amplified Head Orientation Mode (AHOM)
  • Similar to AGAM, the user can change the orientation of his head to indicate the view angle that he wants to be displayed. For example, accelerometers in the head-mounted display, or in a device separately connected to the user's head, can be used to determine the roll and pitch of the user's head relative to the vertical. Data from an accelerometer mounted to the torso of the user can be compared with data from one or more accelerometers in the head-mounted display, or in a device separately connected to the user's head, to determine yaw motion. Other means known to the art (such as Pointing and Identification Device technologies) can be used to determine relative changes in the orientation of the user's head. Just as in AGAM, these changes can be used to signal what the EXP Display Subsystem should display. Analogous to AGAM, this is referred to as Amplified Head Orientation Mode (AHOM). For example, a change in head orientation by angle of 15° to the right can cause the display to show the center of the field of view at a deflection of 90° from the user's face. Without a side glance, the user sees in the center of his current field of view (the center of the display) the center of what he would be seeing if his head were rotated 90° to the right via the view through camera 2242. AGAM and AHOM can be implemented separately or combined to function simultaneously.
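  • A minimal sketch (Python; the gains are illustrative and, as noted above, could be user-adjustable) of AHOM, deriving the relative head yaw from head-mounted and torso-mounted orientation data and amplifying it to steer the displayed view:

```python
def ahom_view_angle(head_yaw_deg, torso_yaw_deg, head_pitch_deg=0.0,
                    yaw_gain=6.0, pitch_gain=3.0):
    """Amplified Head Orientation Mode: the yaw of the head relative to the
    torso (head-mounted vs torso-mounted sensor data) and the pitch relative
    to vertical are amplified to steer the displayed view."""
    relative_yaw = (head_yaw_deg - torso_yaw_deg + 180) % 360 - 180
    view_yaw = relative_yaw * yaw_gain
    view_pitch = head_pitch_deg * pitch_gain
    return view_yaw, view_pitch

# Turning the head 15 degrees right of the torso centres the display 90 degrees to the right.
print(ahom_view_angle(head_yaw_deg=15.0, torso_yaw_deg=0.0))
```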
  • A standard display, such as an LED or LCD display, can be used to display the sensor data from a vehicle's EXP Sensor Subsystem, including sensor data selected via AGAM or AHOM. A standard 3D display can be used to display stereoscopic AGAM views.
  • Just as an array of cameras on a hat can provide different views of a space, an array of cameras can cover nearly all of the head. Cameras on other body parts can also provide useful views. For example, cameras on the front and/or tops of shoes can view upwards to see under cars for security officers to look for bombs and/or contraband mounted under vehicles. Alternatively or together, a camera mounted between the shoulder blades of a jacket or vest can provide a view behind the user. Additionally, a camera mounted on each side of said jacket or vest below the elbows when the arms are to the side of the body and behind the normal locations of the arms can view the directions lateral to the user.
  • The display can be divided into areas. In each area, a view from a different combination of sensors is displayed.
  • AGAM and AHOM can be combined such that a change in head orientation and/or a change in glance angle, separately or in combination, can be used to signal what the EXP Display Subsystem should display.
  • FIG. 12 illustrates an Extended Perception System with a head-mounted display 2700 and an integrated sensor subsystem 2750 employing a rotating focusing mirror, said mirror focusing to an imaging sensor. The head-mounted display 2700 is similar to the display subsystem of EXP System 2500. The glance angle detection subsystem can comprise camera 2791 (not shown, but analogous to 2591), which provides an image of the user's right eye to processor 2795, and camera 2792, which provides an image of the user's left eye to processor 2795, or a central camera (not shown, but analogous to 2593) that provides an image of one or both of the user's eyes to an EXP processor 2795. One or more real image(s) and/or virtual image(s) and/or holographic image(s) can be projected onto, or by means of, all or part of display housing 2704. Display housing 2704 is secured to the user's head using display housing band 2702.
  • EXP Sensor Subsystem 2750 employs a rotating focusing mirror, said mirror focusing to an imaging sensor. The Sensor Base 2751 of 2750 is affixed to the top of helmet 2724. Sensor Base 2751 is illustrated in cross-section so that Imaging Sensor 2752 can be illustrated. The center of Mirror Base 2760 rotates about the center of Sensor Base 2751. Sensor Base 2751 is illustrated in cross-section so that Radiation Aperture 2762 can be illustrated. Dome 2766, affixed to helmet 2764 and/or Sensor Base 2751, protects all or part of the remainder of EXP Sensor Subsystem from contamination and damage. In the illustrated orientation, radiation from behind helmet 2764 passes through Dome 2766 (which is substantially transparent to said radiation) to the interior surface of Mirror Unit 2762. Said radiation is reflected by said interior surface of Mirror Unit 2762, passing through Radiation Aperture 2762 to produce an image on Imaging Sensor 2752. EXP Sensor Subsystem 2750 can be integrated with a display subsystem as illustrated in FIG. 12 or can be a separate EXP Sensor Subsystem. A benefit of EXP Sensor Subsystem 2750 is that it permits a large area of radiation to be imaged.
  • FIGS. 13A-13B illustrate an Extended Perception System Sensor Subsystem 2900 employing a rotating pair of cameras. The Sensor Base 2903 of 2900 is affixed to the top of a helmet like 2724. Cylindrical Platform 2907 is affixed to the center of Sensor Base 2903. Sensor Unit Support Axis 2911 rotates in the center of Cylindrical Platform 2907. For example, Cylindrical Platform 2907 can be an electric motor with Sensor Unit Support Axis 2911 being the rotating portion of said electric motor. The center of Mirror Base 2760 rotates about the center of Sensor Base 2751. Sensor Unit Arm 2913 is affixed to Sensor Unit Support Axis 2911. Substantially identical Sensor Collector Units 2928 and 2929 are affixed to opposite ends of Sensor Unit Arm 2913. Collector Unit Body 2920 is a shell, illustrated in cross-section so that the radiation path can be illustrated. Dome 2935, affixed to helmet 2764 and/or Sensor Base 2903, is substantially transparent to the radiation to be imaged on Imaging Sensor 2925 and protects all or part of the remainder of EXP Sensor Subsystem 2900 from contamination and damage. Radiation from the left is focused by Focusing Element 2927 to produce an image on Imaging Sensor 2925. The output of Collector Units 2928 and 2929 can be fed to the right and left eyes of the user(s) respectively to provide a stereoscopic view. It is easy for those knowledgeable in the art to adapt 2900 such that Sensor Collector Units 2928 and 2929 can converge to match the convergence of a user's eyes.
  • Camera Hands
  • When the data from a camera mounted on one hand (via a glove for example) is fed to one eye of the user and the data from a camera mounted on a second hand, via a glove for example, is fed to the user's other eye, theoretically the user could move his hands so that the images would superimpose in the brain to form a single stereoscopic view. Muscle jitter and imprecision of muscle control might make that difficult. In this embodiment, the EXP Processor Subsystem uses image processing software to stabilize the image from each camera and adapt the images so that they can be superimposed into a clear stable stereoscopic image by the user's brain. The experience is as if the user had eyes spaced the distance between the cameras. This improves depth perception at greater distances. Distances between converging cameras greater and less than the distance between eyes can be profoundly useful. Cameras, such as tiny sensors close to one another with very short focal lengths can be input to the EXP display subsystem to generate stereoscopic microscopic views.
  • As is the case with EXP embodiments herein, this can be useful in the world or simulated in a video game.
  • Likewise, aircraft with motor-driven, double gimbal-mounted sensors (e.g., cameras and/or microphones) at the front and back of the aircraft and/or on the wingtips can, at the user's direction or based on feedback from the EXP Processor Subsystem and/or the Monitor Subsystem, converge on a location of interest by means of CGAM, for example, to provide useful stereoscopic images at great distances.
  • Data from sensors in or on a vehicle being fed to a user and/or remote or local operator can help that user and/or operator to “become one” with the vehicle. Data from any Sensor Subsystem (such as the one illustrated in FIGS. 5A-5D) can be fed (for example via a Processor Subsystem as illustrated in FIG. 1) to the EXP Display Subsystem illustrated in FIGS. 11A-11C.
  • The EXP display can have multiple display modes. For example, in one mode, a subset of the display can show a constant sweep of the views and provide an indicator as to the direction that the sweep is showing. In a second mode, a subset of the display can show one or more views that the EXP Processor Subsystem has determined contain events that the user needs to see.
  • Part of the display can be a map showing what part of the visual field is being displayed on a given part of the display. For example, an image of the user's head can be displayed with one or more rays pointing towards the center of the head, each of said rays emanating from an area of interest.
  • Sensor Database
  • Sensors (e.g., cameras and/or microphones) in a set of stationary locations and/or in or on a set of moving objects (e.g., UAVs, robots, cars, trucks, people with mobile devices and/or sensor arrays) can work together to collect real-time sensory data about a set of locations and/or a set and/or range of view angles of a set of locations. A Sensor Database can store or access the present locations of sensors, the present direction of their fields of view, and other data necessary and/or useful to determine which sensor and/or sensors will be useful when there is a need to collect sensory data on a set of objects and/or locations from the desired point of view. Additionally, said Sensor Database can store or access the range of freedom of motion of the field of view of sensors to determine if one or more nearby sensors can be directed to view the set of objects and/or locations of interest. Additionally, said Sensor Database can store or access the speeds and availability of sufficiently proximal mobile sensor platforms relative to said set of objects and/or locations. CGAM input can be used in the recruitment of multiple sensors (recruited sensors) to provide a stereoscopic view of the location of interest. Herein, “access” refers to the process of rapidly finding that information if it is not already stored in the Sensor Database.
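  • A minimal sketch (Python; the record fields and the 2D geometry are simplified assumptions) of a Sensor Database query that returns the sensors that already view, or can be re-aimed to view, a location of interest:

```python
from dataclasses import dataclass
import math

@dataclass
class SensorRecord:
    sensor_id: str
    x: float
    y: float                 # current position (metres, local frame)
    heading_deg: float       # centre of the current field of view
    fov_deg: float           # angular width of the field of view
    slew_range_deg: float    # how far the field of view can be re-aimed
    max_range_m: float

def sensors_that_can_view(db, target_x, target_y):
    """Return sensors that either already see the target or can be re-aimed to it."""
    hits = []
    for s in db:
        dx, dy = target_x - s.x, target_y - s.y
        if math.hypot(dx, dy) > s.max_range_m:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        off_axis = abs((bearing - s.heading_deg + 180) % 360 - 180)
        if off_axis <= s.fov_deg / 2:
            hits.append((s.sensor_id, "in view"))
        elif off_axis <= s.fov_deg / 2 + s.slew_range_deg:
            hits.append((s.sensor_id, "can be re-aimed"))
    return hits

db = [SensorRecord("cam-A", 0, 0, 45, 60, 90, 200),
      SensorRecord("cam-B", 150, 0, 180, 40, 0, 100)]
print(sensors_that_can_view(db, 100, 100))
```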
  • Tiny sensors floating in the wind, or self-propelled, or dropped from vehicles, such as unmanned vehicles (including but not limited to UAVs), are additional examples of potentially proximal mobile sensors and/or sensor arrays.
  • As the Internet of Things grows, many more sensors will be available for access. Data from properly located sensors, a high density of disorganized sensors, and/or numerous mobile sensors can create a detailed 3D representation of a space. A CGAM-based display can be used to “view” that space by moving about an area or volume designated to represent the space. As the user moves about that area or volume, data from single sensors or pairs (or other groups) of sensors can be directed to one or both of the user's eyes to produce a changing 2D image or a changing stereoscopic image of the view from that direction and the selected range. AGAM is also useful for displaying such a space. This space can be actual or virtual or a combination thereof.
  • Groups of analogous devices with efferent capabilities can be used for a wide variety of applications. For example, each element of a subset of such effectors (devices used to produce a desired change in an object in response to input) can be recruited to act as a heliostat to redirect energy from a separate source (such as the sun) to one or more areas. Each member of a subset of such effectors can be recruited to direct energy from itself to one or more areas. Such energy can include, but is not limited to, electromagnetic radiation of one or more frequencies (such as radio, microwave, IR, visible, or UV), collimated or uncollimated (such as from a laser or LED), or acoustic radiation. Coordination of a group of effectors, including, but not limited to, recruited effectors, can result in a device that can behave as a solar concentrator, a display, a microphone and/or many other devices. Each effector can produce a different and varying electromagnetic field strength and/or direction in order for the group to implement complex static or dynamic electromagnetic fields.
  • The embodiments described herein represent mostly visual sensors and effectors. Sound (infrasound, audible sound, and/or ultrasound—through any medium, e.g. air, water, and/or ground) analog embodiments can be quite useful.
  • For example, arrays of directional microphones or array microphones can work together as described herein for visual sensors to localize the source of sound events, to correlate sound events with visual events, and/or to work in reconfigurable groups to “visualize” different locations from different angles and ranges.
  • For example, AGAM analogs can be used to have “ears in the back of the head.” A sound sensor array on the bottom(s) of a set of shoes can be used to detect and locate the origin of one or more sources of sound that has traveled through the ground. This can result in a modern and greatly improved version of the “ears down to the ground” technique.
  • As is the case with other EXP embodiments, AGAM and CGAM will be useful interfaces for video games, whether presented on displays worn by gamers like those described herein or on conventional screens.
  • EXP Processor Subsystem
  • Functions of the EXP Processor Subsystem include (but are not limited to) one or more of the following, and/or combinations thereof:
  • 1. the processing of sensor data for display to the user and/or to others (local and/or remote) and/or transfer to others. Others can comprise the EXP Monitoring Subsystem. This can be as simple as transferring the sensor data to a local display.
  • 2. the processing of sensor data to make it more useful to display and/or transfer.
  • 3. looking for patterns in one or multiple sensor views simultaneously to algorithmically determine the most relevant data to display and/or transfer. Many types of events may be of interest, such as the presence of potential game or dangerous animals or the presence of, or suspicious movement of one or more vehicles. This can be as simple as the detection of movement relative to the surroundings and thus can be analogous to the function of rods in the retina. This can be very useful in EXP displays where the sensor array provides data to the EXP Processor about what is outside the view of the user. When movement is detected, the User is notified of said movement and he can then observe the relevant sensor view via AGAM, for example.
  • 4. The view of an event of interest can automatically be displayed in a display window. The type of event can be determined and displayed as well as its direction and/or location and the view in which the source is currently best observable. The user's observation of the event of interest or its source can be enhanced by highlighting its vicinity on the display or otherwise directing the user to its location in the displayed view(s). Image enhancement can improve its observation.
  • 5. the integration of data from multiple sensors. For example, visual data from two cameras can be “aligned” for presentation to a user's eyes so as to create a 3D stereoscopic image. As another example, the EXP Processor Subsystem can correlate a flash of light and a particular sound signature to determine that a shot has been fired and the type of weapon that fired the shot, and sound and/or light localization algorithms can verify the location and determine the range of the shot (see the sketch following this list).
  • 6. the integration of sensor data, processed data, and/or augmented data and/or information from other sources.
  • 7. the storage of sensor data and/or transfer of sensor data for storage elsewhere. This stored data can provide a multiview representation of a situation that can serve as an objective witness to that situation. The knowledge that this record is being kept will tend to prevent those with that knowledge from committing wrongful acts.
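  • A minimal sketch (Python) of the flash-to-bang range estimate mentioned in item 5 above; the speed-of-sound constant is approximate and the timestamps are illustrative:

```python
SPEED_OF_SOUND = 343.0  # m/s in air near 20 C; the flash arrives effectively instantly

def shot_range_from_flash_and_sound(t_flash_s, t_sound_s):
    """Estimate the range to a muzzle flash by correlating the visual flash with
    the later arrival of its sound signature ('flash-to-bang')."""
    delay_s = t_sound_s - t_flash_s
    if delay_s <= 0:
        raise ValueError("the sound must arrive after the flash")
    return delay_s * SPEED_OF_SOUND

# A 0.8 s flash-to-bang delay puts the source roughly 274 m away.
print(round(shot_range_from_flash_and_sound(10.00, 10.80), 1))
```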
  • Input to an EXP Processor Subsystem comprises data from one or more, local and/or distant sensors, and/or computed and/or stored information from one or more local and/or distant sources.
  • Output from the EXP Processor Subsystem can be displayed locally to the user of a wearable sensor array and/or other users of wearable displays (local and/or remote) and/or others simply viewing displays (local and/or remote). Output from the EXP Processor Subsystem can be displayed locally to the driver and/or passengers of a vehicular-based sensor array and/or to other users of vehicular-based sensor arrays local and/or remote and/or others. Others local and/or remote can comprise security personnel and/or systems who can determine more about the motives and/or activities of the user of a wearable or vehicular sensor array by seeing what said user is seeing and/or is available from the user's sensor array. For example, the view inside of a vehicle, including the actions of the occupants, can be transmitted to a police car that has stopped the vehicle. This can help the police officer to determine the risk or lack thereof of approaching the vehicle. Said views can be transmitted to the police officers' supervisors so that they can assist the police officer to make better decisions. Another example is for these views to be transmitted to a security checkpoint so that the checkpoint personnel can view the actions of the vehicle's occupants. Others local and/or remote can comprise personnel and/or systems that can monitor views that the user is not viewing, evaluate the views available to his sensor array, and/or use information from other sources such as other sensor arrays to: advise the user; intervene in the situation; and/or direct the user to change his situation.
  • All, or parts, of the EXP Processor Subsystem can be local with respect to a subset of the sensor array. All, or parts, of the EXP Processor Subsystem can move as a subset of the sensor array moves.
  • All, or parts, of the EXP Processor Subsystem can be local and/or remote with respect to a subset of the sensor array.
  • All, or parts, of the EXP Processor Subsystem can be mobile. A subset of the EXP Processor Subsystem can comprise a mobile device, such as a mobile phone or tablet. A subset of the EXP Processor Subsystem can be worn or carried by a person. A subset of the EXP Processor Subsystem can be built into or carried in a vehicle.
  • All, or parts, of the EXP Processor Subsystem can be fixed.
  • A subset of the sensor array's communication with a subset of the EXP Processor Subsystem can be routed by direct wiring. A subset of the sensor array's communication with a subset of the EXP Processor Subsystem can be routed wirelessly via short-range communications (e.g., Bluetooth, IR, WiFi), longer-range communications such as cell tower communications, and/or long-range communications such as satellite, ULF, or other radio communications.
  • Aperture synthesis processing can treat a subset of a visual or auditory sensor array as a single, much larger antenna.
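The movement detection described in item 3 can be illustrated with a minimal sketch, assuming each sensor view is delivered to the EXP Processor as a grayscale frame; the names used here (detect_motion, MOTION_THRESHOLD, notify_user) are hypothetical and chosen only for illustration.

```python
# Minimal illustrative sketch (hypothetical names): flag which sensor views show
# movement by simple frame differencing, analogous to the rod-like peripheral
# sensitivity described in item 3.
import numpy as np

MOTION_THRESHOLD = 12.0  # assumed mean absolute change, in 8-bit gray levels, that counts as movement

def detect_motion(previous_frames, current_frames):
    """Return indices of sensor views whose content changed appreciably.

    previous_frames, current_frames: lists of 2-D numpy arrays (grayscale),
    one per sensor view in the array.
    """
    moving_views = []
    for view_index, (prev, curr) in enumerate(zip(previous_frames, current_frames)):
        change = np.mean(np.abs(curr.astype(float) - prev.astype(float)))
        if change > MOTION_THRESHOLD:
            moving_views.append(view_index)
    return moving_views

# The EXP Processor could then notify the user about movement in views that lie
# outside the user's current field of view, e.g.:
#   for view in detect_motion(prev_frames, curr_frames):
#       notify_user(f"Movement detected in sensor view {view}")  # notify_user is hypothetical
```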
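The flash-and-sound correlation in item 5 can likewise be illustrated with a minimal range-estimation sketch, assuming the EXP Processor receives timestamps for a detected muzzle flash and the corresponding acoustic signature; the function name and fixed speed-of-sound constant are illustrative assumptions.

```python
# Minimal illustrative sketch: because the optical flash arrives essentially
# instantaneously, the flash-to-sound delay multiplied by the speed of sound
# approximates the range to the shot's source.

SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at 20 degrees C

def estimate_shot_range_m(flash_time_s, sound_time_s):
    """Return an approximate range, in meters, to the source of the shot."""
    delay_s = sound_time_s - flash_time_s
    if delay_s <= 0:
        raise ValueError("the sound signature must arrive after the flash")
    return delay_s * SPEED_OF_SOUND_M_PER_S

# Example: a 0.5 s delay corresponds to roughly 170 m.
print(estimate_shot_range_m(flash_time_s=10.00, sound_time_s=10.50))  # ~171.5
```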
  • EXP Intervention Subsystem
  • The user and/or any suitably authorized observer of EXP Sensor Subsystem data (processed or raw), EXP Processor Subsystem data, and/or EXP Monitoring Subsystem data can trigger an intervention. Said intervention can be in real time.
  • Said intervention can comprise providing advice, commands, and/or information to the user and/or any of the observers of EXP Sensor Subsystem data (processed or raw), EXP Processor Subsystem data, and/or an EXP Monitoring Subsystem. Said intervention can comprise providing advice, commands, and/or information to others. For example, police, fire, rescue, ambulance, military, news, and/or other services might be informed so that they can intervene in a situation. For example, the intervention can comprise information delivered through a local loudspeaker and/or a mobile device, audible and/or visible to a party or parties, local and/or remote, who can intervene in the situation. Otherwise uninvolved persons and/or systems that are nearby can be recruited to intervene. Intervention can comprise the activation of an automated response by a user and/or any of the observers of EXP Sensor Subsystem data (processed or raw), the EXP Processor Subsystem data, and/or the EXP Monitoring Subsystem.
  • One embodiment employs EXP 1000 in car 1051 and is intended primarily to protect the rights of the driver. Video and audio data of car 1051's surroundings and interior are captured by EXP 1000's sensors and transmitted to the EXP Processor of EXP 1000. The data from EXP 1000 can be streamed to a remote monitoring service (Monitor Subsystem) via the mobile device's cellular connection. Thus, EXP 1000 can detect, document, and report a crash or other damage to car 1051, accidental or intentional, to a monitoring service and/or car 1051's user(s) and/or insurance company and/or authorities. EXP 1000 can also detect, document, and report to a monitoring service and/or car 1051's user(s) and/or insurance company and/or authorities any unauthorized entry into the vehicle. As such, EXP 1000 can serve as an alarm system. This can help reduce car theft and hijackings and expedite the recovery of car 1051 in the unlikely event that it is stolen or hijacked. A device plugged into car 1051's OBD II port can, wirelessly or by wire, transmit information about car 1051, including speed information, to the mobile device acting as the EXP Processor for EXP 1000. GPS and acceleration data can be captured by functions internal to that mobile device. This data from EXP 1000 can likewise be streamed to the remote monitoring service via the mobile device's cellular connection (see the telemetry-streaming sketch following this list). If the driver is pulled over by a police officer, this data can be used to confirm or refute an accusation by the police officer. Data from car 1051's state of registration, indicating its current registration status and emissions test status, can be downloaded from that state's department of motor vehicles via the mobile device's cellular connection to further confirm or refute possible charges regarding the status of those variables. When a subscriber to the monitoring service is pulled over, the monitoring service can, in real time, observe and record the interaction between the police officer and the driver and passengers of car 1051. An agent of the monitoring service (with the streamed data in front of him), for example a lawyer, can then discuss the situation with the police officer and potentially provide data that confirms or refutes the officer's accusation(s) via the EXP Processor's mobile device or microphone/loudspeaker 1052. Microphone/loudspeaker 1052 can be affixed to car 1051 temporarily or for a longer duration, or can be integrated into 1034. This embodiment can help reduce inaccurate traffic stops and help maintain a civil interaction between the police officer and the driver of car 1051. EXP 1000 can also document and record road rage occurrences. Car 1051, if clearly marked as having an EXP system such as this, is much less likely to be involved in theft, hijacking, road rage events, or inaccurate traffic stops. As described elsewhere herein, the view of the interior of car 1051 can be transmitted to a display viewed by the police officer and/or his supervisor(s) to make the police officer's job safer.
  • In EXP embodiments, the Intervention Subsystem can be used to monitor and report on other vehicles in the view of the EXP System's Sensor Subsystem. For example, an EXP Sensor Subsystem can be used to sense the tags of vehicles within its view. The EXP Processor Subsystem can interpret the sensor data to recognize the tag numbers of said vehicles. The tag number information can be compared with data downloaded locally, or said tag data can be uploaded to another system to be compared with information located there. Said information can comprise information about tag numbers of interest to authorities, such as: stolen vehicles, vehicles involved in crimes (including, but not limited to, Amber Alerts), vehicles with unpaid parking tickets, vehicles with expired registrations, and vehicles owned by people with outstanding warrants. More actively, the EXP System can compare its vehicle's speed data (from said vehicle's computer) with visual information about other vehicles' tags to determine the speed of said other vehicles. That data can be combined with GPS data to determine the legal speed limit at the location of those other vehicles. In this way, the EXP System can determine whether viewed vehicles are exceeding the speed limit (see the speed-check sketch following this list). When a violator has been detected, that information, plus information regarding the time, location, and direction of the offending vehicle, can be transmitted to the police for them to take the appropriate action. Alternatively, the EXP System can transmit its evidence of the offense to the appropriate authorities so that they can automatically generate a ticket. Likewise, EXP Processor Subsystem software can detect other offenses such as reckless driving, road rage, carjackings, impaired driving, running stop signs or lights, violations of school bus laws, etc. Such EXP Systems can be in police cruisers or other vehicles. When such an EXP System is in an individual's vehicle, it can be useful for the authorities to reward the EXP System's owner with a bounty or a reduction in the owner's vehicle registration fees. An EXP System can record crashes to determine who was responsible. An EXP System can also record activities inside a car to provide an objective record of what has occurred inside a vehicle, for example to determine the veracity of claims such as date rape allegations.
  • An EXP System in a car might utilize elements of a self-driving or otherwise automated car as EXP subsystems. An EXP System in a car might utilize elements of a car equipped with one or more built-in cameras.
  • Data from more than one vehicle can be integrated to provide a “view” greater than is possible with data from just one vehicle. For example, real-time integration of data from multiple vehicles can track a given vehicle in real time, and non-real-time data from multiple vehicles can provide information about the location history of an otherwise non-cooperating vehicle at any given time in the past. This can be used to provide a list of suspects for a crime or to prove the innocence of others (see the sighting-index sketch following this list).
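The telemetry streaming described in the car 1051 embodiment can be sketched as follows, assuming the OBD II speed, GPS position, and acceleration readings are exposed as simple callables on the mobile device acting as the EXP Processor; the callable names and the monitoring endpoint URL are hypothetical placeholders.

```python
# Minimal illustrative sketch: bundle vehicle speed (from an OBD II dongle), GPS
# position, and acceleration into a timestamped record and stream it to a remote
# monitoring service over the mobile device's cellular connection.
import json
import time
import urllib.request

MONITOR_URL = "https://monitoring.example.com/ingest"  # hypothetical endpoint

def build_telemetry_record(read_obd_speed_kph, read_gps, read_acceleration_g):
    """Assemble one timestamped telemetry record from the supplied callables."""
    lat, lon = read_gps()
    return {
        "timestamp": time.time(),
        "speed_kph": read_obd_speed_kph(),
        "lat": lat,
        "lon": lon,
        "accel_g": read_acceleration_g(),
    }

def stream_record(record):
    """POST one telemetry record to the monitoring service as JSON."""
    request = urllib.request.Request(
        MONITOR_URL,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status
```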
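The other-vehicle speed check described above can be sketched as follows, assuming the EXP Processor can estimate the range to a recognized tag at two points in time (for example, from the tag's apparent size) and can look up the posted limit for a GPS position; the range source and lookup_speed_limit_kph are hypothetical.

```python
# Minimal illustrative sketch: estimate another vehicle's speed from two
# timestamped range estimates to its tag plus the EXP vehicle's own speed,
# then compare against the posted limit at the current GPS location.

def estimate_other_vehicle_speed_kph(own_speed_kph, range1_m, t1_s, range2_m, t2_s):
    """Estimate the other vehicle's speed; a growing range means it is pulling away."""
    dt_s = t2_s - t1_s
    if dt_s <= 0:
        raise ValueError("t2_s must be later than t1_s")
    relative_speed_kph = ((range2_m - range1_m) / dt_s) * 3.6  # m/s -> km/h
    return own_speed_kph + relative_speed_kph

def check_speeding(tag, own_speed_kph, range1_m, t1_s, range2_m, t2_s,
                   lat, lon, lookup_speed_limit_kph):
    """Return a violation report if the estimated speed exceeds the limit, else None."""
    estimated_kph = estimate_other_vehicle_speed_kph(own_speed_kph, range1_m, t1_s, range2_m, t2_s)
    limit_kph = lookup_speed_limit_kph(lat, lon)  # hypothetical lookup keyed by GPS position
    if estimated_kph > limit_kph:
        return {"tag": tag, "estimated_kph": round(estimated_kph, 1),
                "limit_kph": limit_kph, "lat": lat, "lon": lon, "time_s": t2_s}
    return None
```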
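The multi-vehicle data integration described in the preceding item can be sketched as a simple index of tag sightings reported by many EXP-equipped vehicles; the sighting schema (tag, timestamp, position, reporting vehicle) is an assumed convention rather than one defined in the specification.

```python
# Minimal illustrative sketch: merge timestamped tag sightings from many EXP
# vehicles into a per-tag location history that can be queried for a time window.
from collections import defaultdict

class SightingIndex:
    def __init__(self):
        self._by_tag = defaultdict(list)

    def add_sighting(self, tag, timestamp, lat, lon, reporter_id):
        """Record one sighting of a tag by one reporting EXP vehicle."""
        self._by_tag[tag].append((timestamp, lat, lon, reporter_id))

    def location_history(self, tag, start_time, end_time):
        """Return the tag's sightings within [start_time, end_time], in time order."""
        return sorted(s for s in self._by_tag[tag] if start_time <= s[0] <= end_time)
```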

Claims (5)

What is claimed is:
1. A sensor array system comprising:
an array of sensors implemented to detect electromagnetic radiation;
an array of lenses wherein at least one of said lenses creates an image on at least one of said sensors; and
wherein the ratio of said lenses to said sensors is greater than or equal to 3:2.
2. The sensor array system as claimed in claim 1 wherein said array of sensors is mounted on a vehicle.
3. The sensor array system as claimed in claim 1 wherein said array of lenses is mounted on a vehicle.
4. The sensor array system as claimed in claim 2 wherein the data from said array of sensors is displayed to an occupant of said vehicle.
5. The sensor array system as claimed in claim 2 wherein the data from said array of sensors is displayed to an observer who is not an occupant of said vehicle.
US15/330,973 2016-03-16 2016-03-16 Extended perception system Abandoned US20180072226A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/330,973 US20180072226A1 (en) 2016-03-16 2016-03-16 Extended perception system
US16/271,614 US11247607B1 (en) 2016-03-16 2019-02-08 Extended perception system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/330,973 US20180072226A1 (en) 2016-03-16 2016-03-16 Extended perception system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/271,614 Division US11247607B1 (en) 2016-03-16 2019-02-08 Extended perception system

Publications (1)

Publication Number Publication Date
US20180072226A1 true US20180072226A1 (en) 2018-03-15

Family

ID=61559447

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/330,973 Abandoned US20180072226A1 (en) 2016-03-16 2016-03-16 Extended perception system
US16/271,614 Active US11247607B1 (en) 2016-03-16 2019-02-08 Extended perception system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/271,614 Active US11247607B1 (en) 2016-03-16 2019-02-08 Extended perception system

Country Status (1)

Country Link
US (2) US20180072226A1 (en)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6351273B1 (en) * 1997-04-30 2002-02-26 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
US20110270135A1 (en) * 2009-11-30 2011-11-03 Christopher John Dooley Augmented reality for testing and training of human performance
JP2013521576A (en) * 2010-02-28 2013-06-10 オスターハウト グループ インコーポレイテッド Local advertising content on interactive head-mounted eyepieces
WO2012083415A1 (en) * 2010-11-15 2012-06-28 Tandemlaunch Technologies Inc. System and method for interacting with and analyzing media on a display using eye gaze tracking
US20130083003A1 (en) * 2011-09-30 2013-04-04 Kathryn Stone Perez Personal audio/visual system
US9788714B2 (en) * 2014-07-08 2017-10-17 Iarmourholdings, Inc. Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance
JP2014157466A (en) * 2013-02-15 2014-08-28 Sony Corp Information processing device and storage medium
JP6539672B2 (en) * 2013-11-25 2019-07-03 テッセランド・エルエルシーTesseland Llc Immersive compact display glass
US10459254B2 (en) * 2014-02-19 2019-10-29 Evergaze, Inc. Apparatus and method for improving, augmenting or enhancing vision
US10567641B1 (en) * 2015-01-19 2020-02-18 Devon Rueckner Gaze-directed photography
NZ773819A (en) * 2015-03-16 2022-07-01 Magic Leap Inc Methods and systems for diagnosing and treating health ailments
US11609427B2 (en) * 2015-10-16 2023-03-21 Ostendo Technologies, Inc. Dual-mode augmented/virtual reality (AR/VR) near-eye wearable displays
US10116873B1 (en) * 2015-11-09 2018-10-30 Ambarella, Inc. System and method to adjust the field of view displayed on an electronic mirror using real-time, physical cues from the driver in a vehicle
US10061352B1 (en) * 2017-08-14 2018-08-28 Oculus Vr, Llc Distributed augmented reality system

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190109938A1 (en) * 2016-04-29 2019-04-11 Joyfun Inc., Message display method according to event occurrence in vr device and apparatus therefor
US11415980B2 (en) * 2016-07-29 2022-08-16 Nec Solution Innovators, Ltd. Moving object operation system, operation signal transmission system, moving object operation method, program, and recording medium
US11467572B2 (en) 2016-07-29 2022-10-11 NEC Solution Innovations, Ltd. Moving object operation system, operation signal transmission system, moving object operation method, program, and recording medium
US11000952B2 (en) * 2017-06-23 2021-05-11 Casio Computer Co., Ltd. More endearing robot, method of controlling the same, and non-transitory recording medium
US10412315B1 (en) * 2018-01-09 2019-09-10 Timothy Rush Jacket camera
US11343462B2 (en) * 2018-09-10 2022-05-24 Jvckenwood Corporation Recording reproduction apparatus, recording reproduction method, and non-transitory computer readable medium
US20220256110A1 (en) * 2018-09-10 2022-08-11 Jvckenwood Corporation Recording reproduction apparatus, recording reproduction method, and non-transitory computer readable medium
US11917326B2 (en) * 2018-09-10 2024-02-27 Jvckenwood Corporation Recording reproduction apparatus, recording reproduction method, and non-transitory computer readable medium
CN109345829A (en) * 2018-10-29 2019-02-15 百度在线网络技术(北京)有限公司 Monitoring method, device, equipment and the storage medium of unmanned vehicle
US20210326563A1 (en) * 2019-06-20 2021-10-21 Christopher Gordon Kossor Electronic fingerprint device for identifying perpetrators and witnesses of a crime and method thereof

Also Published As

Publication number Publication date
US11247607B1 (en) 2022-02-15

Similar Documents

Publication Publication Date Title
US11247607B1 (en) Extended perception system
US11828945B2 (en) Personal electronic target vision system, device and method
US11210873B2 (en) Safety for vehicle users
US11270538B2 (en) Control, monitoring, and/or security, apparatus and method for premises, vehicles, and/or articles
JP6535382B2 (en) Method and system for determining the position of an unmanned aerial vehicle
JP6524545B2 (en) Geo-fencing device and method of providing a set of flight restrictions
CN110290945A (en) Record the video of operator and around visual field
CN104620259B (en) Use the Vehicle security system of audio/visual clue
CN109572702A (en) Controller of vehicle and vehicle including the controller of vehicle
US20150298654A1 (en) Control, monitoring, and/or security, apparatus and method for premises, vehicles, and/or articles
ES2705848T3 (en) Systems and methods to provide emergency resources
CN106167045A (en) Human pilot auxiliary device and control method thereof
US20160251081A1 (en) Apparatus and method employing autonomous vehicles to reduce risk
CN108028873B (en) Vehicles camera chain
CN107329478A (en) A kind of life detection car, wearable device and virtual reality detection system
CN107436491A (en) The threat caution system and its threat alarming method for power of virtual reality display device
US20210383688A1 (en) Traffic monitoring and evidence collection system
US20210272437A1 (en) Public Safety Smart Belt
WO2020246251A1 (en) Information processing device, method, and program
Matviienko et al. QuantiBike: Quantifying Perceived Cyclists' Safety via Head Movements in Virtual Reality and Outdoors
US20210133406A1 (en) Control, monitoring, and/or security, apparatus and method for premises, vehicles, and/or articles
Gerasimova USE OF SCIENTIFIC AND TECHNICAL ACHIEVEMENTS TO FIGHT CRIME
CN114290998B (en) Skylight display control device, method and equipment
GB2563342A (en) Personal electronic target vision system, device and method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION