WO2023235093A1 - A method and apparatus for a stereoscopic smart phone - Google Patents

A method and apparatus for a stereoscopic smart phone

Info

Publication number
WO2023235093A1
WO2023235093A1 (PCT/US2023/020758)
Authority
WO
WIPO (PCT)
Prior art keywords
camera
stereoscopic
pointing direction
location
convergence point
Prior art date
Application number
PCT/US2023/020758
Other languages
French (fr)
Inventor
Robert Edwin DOUGLAS
David Byron DOUGLAS
Kathleen Mary DOUGLAS
Original Assignee
Douglas Robert Edwin
Douglas David Byron
Douglas Kathleen Mary
Priority date
Filing date
Publication date
Priority claimed from US17/829,256 external-priority patent/US11627299B1/en
Priority claimed from US18/120,422 external-priority patent/US11877064B1/en
Application filed by Douglas Robert Edwin, Douglas David Byron, Douglas Kathleen Mary
Publication of WO2023235093A1 publication Critical patent/WO2023235093A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/128: Adjusting depth or disparity

Definitions

  • US patent application 17/829,256 filed on 5/31/2022 is also a continuation-in-part of US 17/558,606 filed on 12/22/2021 (issued as US11,445,322 on 9/13/22), which is a continuation-in-part of US 17/225,610 filed on 04/08/2021 (issued as 11,366,319 on 6/21/22). All of these are incorporated by reference in their entirety.
  • aspects of this disclosure are generally related to three-dimensional imaging.
  • This patent provides a novel stereoscopic imaging system.
  • the improved stereoscopic imaging system would be incorporated onto a smart phone, which is called the stereoscopic smart phone (SSP).
  • SSP stereoscopic smart phone
  • SHDUs stereoscopic head display units
  • An SSP can be set up near a roly poly and object tracking of the roly poly can occur.
  • Each camera of the stereoscopic camera system on the SSP can track the roly poly, and the stereo distance can be increased or decreased based on the distance of the roly poly from the SSP.
  • The roly poly can climb onto a rock and the convergence point of the stereo cameras moves upward; as it climbs downward into a hole, the convergence point of the stereo cameras moves downward. While all of this is happening, the stereoscopic imagery can be passed via a wired or wireless connection to a stereoscopic head display unit (SHDU).
  • SHDU stereoscopic head display unit
  • A child can view the roly poly in near real time while wearing the SHDU.
  • The digital images of the roly poly can be enlarged so the roly poly appears the size of a black lab puppy.
  • A wide array of other objects and situations can also be imaged using this system, which overall enhances the viewing experience.
  • The preferred embodiment is a method of stereoscopic imaging comprising: using a left camera and a right camera of a stereoscopic camera system to perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera's location and said right camera's location are separated by a stereoscopic distance, wherein said left camera has a first pointing direction, and wherein said right camera has a first pointing direction; changing said left camera's first pointing direction to a second pointing direction wherein said left camera's second pointing direction is different than said left camera's first pointing direction, and wherein said left camera's second pointing direction points towards a convergence point; changing said right camera's first pointing direction to a second pointing direction wherein said right camera's second pointing direction is different than said right camera's first pointing direction, and wherein said right camera's second pointing direction points towards said convergence point;
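  • As a rough illustration of the convergence geometry above, the following Python sketch computes the yaw angle each camera would need so that its pointing direction passes through a chosen convergence point. The function name, coordinate convention, and baseline value are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: compute the yaw each camera needs so that its
# pointing direction passes through a chosen convergence point.
import math

def pointing_direction(camera_x: float, convergence: tuple[float, float]) -> float:
    """Return the yaw angle (radians, 0 = straight ahead) for a camera
    located at (camera_x, 0) so that it points at the convergence point."""
    cx, cz = convergence          # x = lateral offset, z = depth in front of phone
    return math.atan2(cx - camera_x, cz)

baseline = 0.12                   # assumed 12 cm stereoscopic distance
left_x, right_x = -baseline / 2, baseline / 2
target = (0.0, 0.5)               # convergence point 0.5 m straight ahead

left_yaw = pointing_direction(left_x, target)    # toes in to the right (+)
right_yaw = pointing_direction(right_x, target)  # toes in to the left (-)
print(math.degrees(left_yaw), math.degrees(right_yaw))
```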
  • Some embodiments comprise wherein said convergence point is positioned such that a distance from said left camera's location to said convergence point is not equal to a distance from said right camera's location to said convergence point. Some embodiments comprise wherein said convergence point is positioned such that a distance from said left camera's location to said convergence point is equal to a distance from said right camera's location to said convergence point.
  • Some embodiments comprise wherein said left camera’s first pointing direction points towards a second convergence point and said left camera’s second pointing direction points towards said convergence point; and wherein said second convergence point is different from said convergence point.
  • Some embodiments comprise wherein said initial stereoscopic imaging has a first zoom setting; wherein said subsequent stereoscopic imaging has a second zoom setting; and wherein said second zoom setting has greater magnification than said first zoom setting.
  • Some embodiments comprise wherein said left camera’s first pointing direction is determined based on said left camera’s orientation; and wherein said right camera’s first pointing direction is determined based on said right camera’s orientation.
  • Some embodiments comprise displaying left eye imagery and right eye imagery from said initial stereoscopic imaging of said area on a stereoscopic head display unit (SHDU); and
  • SHDU comprises a virtual reality display, an augmented reality display or a mixed reality display.
  • Some embodiments comprise wherein said stereoscopic system is used on a smart phone; and wherein said SHDU and said smart phone communicate via a wired connection, a wireless connection via BlueTooth or a wireless connection via an Internet. Some embodiments comprise wherein automatic object recognition is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU. Some embodiments comprise wherein artificial intelligence is performed in conjunction with said automatic object recognition to alert a user regarding findings in said area. Some embodiments comprise wherein stereoscopic image stabilization is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU.
  • Some embodiments comprise determining a spatial relationship between said stereoscopic camera system and an object of interest; and reconfiguring said stereoscopic cameras based on said spatial relationship wherein reconfiguring said stereoscopic cameras comprises changing said stereoscopic distance to a subsequent stereoscopic distance wherein said subsequent stereoscopic distance is different than said stereoscopic distance.
  • Some embodiments comprise wherein said subsequent stereoscopic imaging of said area is performed using a second stereoscopic distance; and wherein said second stereoscopic distance is smaller than said first stereoscopic distance.
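  • A minimal sketch of one way the stereoscopic distance could be reconfigured from the measured range to the object of interest. The 1/30 ratio and the clamp limits are assumptions for illustration only, not values from the patent.

```python
# Hypothetical sketch: pick a stereoscopic distance (baseline) from the
# measured range to the object of interest.
def choose_baseline(range_to_object_m: float,
                    min_baseline_m: float = 0.02,
                    max_baseline_m: float = 0.15) -> float:
    """Return a camera separation that scales with object distance,
    clamped to what the phone's movable cameras can physically provide."""
    desired = range_to_object_m / 30.0       # assumed rule-of-thumb ratio
    return max(min_baseline_m, min(max_baseline_m, desired))

print(choose_baseline(0.3))   # close object -> small separation (0.02 m)
print(choose_baseline(6.0))   # distant object -> larger separation (0.15 m)
```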
  • Some embodiments comprise wherein said stereoscopic camera system is placed on a smart phone, a tablet or a laptop.
  • Some embodiments comprise wherein said convergence point is determined based on an object’s location in said area.
  • Some embodiments comprise wherein said convergence point is determined based on eye tracking metrics of a user.
  • Some embodiments comprise wherein said convergence point is determined based on an artificial intelligence algorithm.
  • Some embodiments comprise wherein a sensor system of said stereoscopic camera system comprises a composite sensor array.
  • A stereoscopic head display unit (SHDU) comprising: a head display unit with a left eye display and a right eye display wherein said SHDU is configured to: receive initial stereoscopic imagery from a stereoscopic imaging system wherein said initial stereoscopic imagery comprises initial left eye imagery and initial right eye imagery; display said initial left eye imagery on said left eye display; display said initial right eye imagery on said right eye display; receive subsequent stereoscopic imagery from said stereoscopic imaging system wherein said subsequent stereoscopic imagery comprises subsequent left eye imagery and subsequent right eye imagery; display said subsequent left eye imagery on said left eye display; and display said subsequent right eye imagery on said right eye display; wherein said stereoscopic imaging system comprises a left camera and a right camera; and wherein said stereoscopic camera system is configured to: use said left camera and said right camera of said stereoscopic camera system to perform said initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right
  • Some embodiments comprise a stereoscopic smart phone comprising: a smart phone; and a stereoscopic imaging system operably connected to said smart phone comprising a left camera and a right camera wherein said stereoscopic camera system is configured to: perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera's location and said right camera's location are separated by a stereoscopic distance, wherein said left camera has a first pointing direction, and wherein said right camera has a first pointing direction; change said left camera's first pointing direction to a second pointing direction wherein said left camera's second pointing direction is different than said left camera's first pointing direction, and wherein said left camera's second pointing direction points towards a convergence point; change said right camera's first pointing direction to a second pointing direction wherein said right camera's second pointing direction is different than said right camera's
  • a very high-resolution camera pair(s) could be used in connection with the type pairs described above. In some embodiments, there could be a disparity between the camera resolution and that of the display system wherein the camera resolution was greater (i.e., provided 'better resolution') than that of the display system. These very high-resolution camera pair(s) could be used in connection with changes in camera field of view (FOV). In some embodiments, the field of view could change (e.g., decrease in size) with a corresponding change of image resolution (e.g., increase in resolution).
  • FOV camera field of view
  • this embodiment could be used with a feedback mechanism wherein a user could, for example, start with a large FOV and then, through an interactive cursor, indicate an area of interest and desired FOV. Then, the center point of the FOV would change to that point, and the image area corresponding to that FOV and the resolution corresponding to that FOV would create the image to be displayed.
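  • A minimal sketch of the feedback mechanism described above, assuming the high-resolution sensor frame is available as a NumPy array: the user-selected point becomes the center of a smaller FOV, and the displayed image is a crop of the full-resolution frame around that point.

```python
# Hypothetical sketch: crop a narrower FOV from the high-resolution sensor
# frame around a user-selected point of interest.
import numpy as np

def crop_fov(frame: np.ndarray, center_xy: tuple[int, int],
             out_w: int, out_h: int) -> np.ndarray:
    """Return an out_h x out_w crop of `frame` centred on center_xy,
    shifted as needed so the crop stays inside the sensor image."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    x0 = min(max(cx - out_w // 2, 0), w - out_w)
    y0 = min(max(cy - out_h // 2, 0), h - out_h)
    return frame[y0:y0 + out_h, x0:x0 + out_w]

sensor = np.zeros((3000, 4000, 3), dtype=np.uint8)   # assumed 12 MP sensor frame
zoomed = crop_fov(sensor, (2500, 900), 1920, 1080)    # narrower FOV, full pixel detail
print(zoomed.shape)                                   # (1080, 1920, 3)
```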
  • the stereoscopic camera system has a variety of components, which include: aperture(s); lens(es); shutter(s); detector(s); mirror(s); and, display(s).
  • Aperture diameter would be consistent with the different lenses described below. Lenses could be changeable or fixed, or the type of lens could be selected by the user. The current set of lenses within smart devices is one option. Another option is multi-shaped lenses (analogous to reading glasses with variable portions of the lens based on look angle, e.g., a top portion for looking straight forward and a bottom portion for reading); here, however, the bottom portions of the left lens and the right lens would be canted differently to allow convergence. Differing pointing angles can be based on the particular portion of the lens. Differing zoom can be based on the particular portion of the lens. Fisheye-type lenses with high resolution are a further option.
  • The idea is that different portions of the digital collection array would be associated with corresponding look angles through the fisheye lens.
  • the portion of the array which is used could be user specified.
  • automatic selection of the portion of the array selected could be based on input data from an inclinometer.
  • Some embodiments comprise using a variable/ differing radius of curvature.
  • Some embodiments comprise using a variable/ differing horizontal fields of view (FOV).
  • Shutter timelines, etc., would be in accordance with the technology chosen for the particular type of detector array.
  • The collection array could include, for example but not limited to: charge-coupled devices (CCD) and complementary metal-oxide semiconductor (CMOS) arrays.
  • CCD charge-coupled devices
  • CMOS complementary metal-oxide semiconductor
  • Options would include, but are not limited to: low-light-level TVs; and infrared detector arrays such as mercury cadmium telluride (MCT), indium gallium arsenide (InGaAs), and quantum well infrared photodetector (QWIP) arrays.
  • Composite collection geometries of the collection array would be based on the desired viewing mode of the user. This would include, but would not be limited to: user selection of straight ahead for general viewing of the scene and of specific objects at ranges greater than 20-30 feet (where stereoscopic viewing becomes possible); variable convergence angles based upon the proximity of the object being viewed; or pointing to the left or to the right to provide a panoramic view (or, alternatively, for scanning, e.g., left, to straight ahead, then to the right). Some embodiments comprise wherein a portion of the collection array would be facing straight ahead. Some embodiments comprise wherein a portion of the collection array would be constructed with different convergence angles. Some embodiments comprise wherein a portion of the collection array would be constructed with different look angles (left/right; far left/far right).
  • Left eye and right eye imagery could be merged and displayed on the smart device display. This composite image could then be viewed with polarized glasses.
  • the smart phone could be placed into a HDU with lenses to be converted into a virtual reality unit.
  • the encasing framework of each of the lenses could be rotated along 2 degrees of freedom (i.e., left/right and up/down).
  • the encasing framework of the detector array could be rotated along 2 degrees of freedom (i.e., left/right and up/down).
  • a mirror(s) or reflective surface
  • the user could rotate the mirror such that the desired viewing angles focused on the area/ objects selected by the user.
  • mechanical turning of the collection arrays or lenses or the entire camera would correspond with the user's desired viewing area (i.e., straight ahead or converged at some nearby location). This turning could be done electronically or by a physical mechanical linkage.
  • The first option would be for the person who collected the left and right eye data/imagery to view the stereo imagery on his/her stereo head display unit (HDU). This could be accomplished, for example, by a wire connection between the stereo phone and the stereo HDU. The user could also choose to send the stereo imagery to other persons.
  • the transmission could be for single stereo pair(s) or streaming stereo video.
  • the data transmitted could be interleaved (i.e., alternating between left eye data/imagery and right eye data/imagery).
  • the data/imagery could be transmitted via multi-channel with separate channels for left and right eye data/imagery.
  • the left and right eye imagery frames could be merged for transmission.
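  • A minimal sketch of the interleaved transmission option, assuming frames are already encoded as byte buffers: left-eye and right-eye frames alternate on a single stream, each tagged so the receiving HDU can route it to the correct eye display.

```python
# Hypothetical sketch: interleave left-eye and right-eye frames on a single
# stream, tagging each frame so the receiver can route it correctly.
from typing import Iterable, Iterator, Tuple

def interleave(left_frames: Iterable[bytes],
               right_frames: Iterable[bytes]) -> Iterator[Tuple[str, bytes]]:
    """Yield ('L', frame), ('R', frame), ('L', frame), ... for streaming."""
    for left, right in zip(left_frames, right_frames):
        yield ("L", left)
        yield ("R", right)

# Usage: the HDU side inspects the tag and sends the payload to the
# left-eye or right-eye display.
stream = interleave([b"L0", b"L1"], [b"R0", b"R1"])
for eye, frame in stream:
    print(eye, frame)
```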
  • the HDU could use polarization or anaglyph techniques to ensure proper stereo display to the user.
  • A further option would be to store the left and right eye data/imagery. This storage could be accomplished by, but is not limited to, the following: on the smart device; on a removable device such as a memory stick; or on a portion of the cloud set aside for the user's storage. At some later time, the stereo imagery could be downloaded to a device (e.g., a computer) and subsequently displayed on an HDU.
  • Example modes of operation would include, but are not limited to, the following: stereo snapshot; scanning; staring; tracking; and record-then-playback.
  • a very high-resolution camera pair(s) could be used in connection with the type pairs described above. In some embodiments, there could be a disparity between the camera resolution and that of the display system wherein the camera resolution was greater (i.e., provided ‘better resolution’) than that of the display system. These very high-resolution camera pair(s) could be used in connection with changes in camera field of view (FOV). In some embodiments, the field of view could change (e.g., decrease in size) with a corresponding change of image resolution (e.g., increase in resolution).
  • FOV camera field of view
  • this embodiment could be used with a feedback mechanism wherein a user could, for example, start with a large FOV and then, through an interactive cursor, indicate an area of interest and desired FOV. Then, the center point of the FOV would change to that point, and the image area corresponding to that FOV and the resolution corresponding to that FOV would create the image to be displayed.
  • the type of control of the stereo camera pairs would be smart device dependent.
  • the principal screen could display a stereo camera icon.
  • HDU Stereoscopic Head Display Unit
  • Types of displays include both immersive and mixed reality. Immersive options could include, but would not be limited to: a very dark visor that can be brought down on the far side of the display to block viewing of the external scene; or an electronic shutter, external to the display and coincident with the HDU eyepieces, of varying opacity (which could be initiated by the person wearing the head display unit). A mixed reality option would use a relative intensity/brightness of the stereoscopic display relative to the external scene.
  • a computer and memory would be integral to the HDU.
  • a power supply would be integral to the HDU.
  • the communications componentry would include, but is not limited to, the following: communications port(s) (e.g., USB, HDMI, or a composite wire to connect to a power source or smart device); antenna and receiver; and associated circuitry.
  • the audio componentry would include, but is not limited to: speakers, microphone, or both.
  • A laser range finder (LRF) is integral to the smart device and is used to determine the range from the smart device to the location of the object selected by the user, in order to calculate convergence angles for the left and right viewing angles and provide proper stereoscopic images.
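  • A minimal sketch, assuming a symmetric camera layout, of how an LRF range and the camera separation could be converted into convergence (toe-in) angles; the baseline value and sign convention are illustrative assumptions.

```python
# Hypothetical sketch: convert the LRF range and the camera separation into
# symmetric toe-in angles for the left and right cameras.
import math

def convergence_angles(range_m: float, baseline_m: float) -> tuple[float, float]:
    """Return (left_toe_in, right_toe_in) in degrees for an object centred
    between the two cameras at the measured range."""
    half_angle = math.degrees(math.atan2(baseline_m / 2.0, range_m))
    return half_angle, -half_angle   # left cants right (+), right cants left (-)

print(convergence_angles(0.5, 0.12))   # nearby object -> roughly 6.8 degrees each
print(convergence_angles(10.0, 0.12))  # distant object -> roughly 0.3 degrees each
```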
  • a pseudo-GPS system can be integrated as described in US 10,973,485, which is incorporated by reference in its entirety.
  • stereoscopic image processing will be performed on the images produced by the two stereoscopic cameras.
  • One of the image processing techniques is image enhancement. These enhancement techniques include but are not limited to the following: noise reduction, deblurring, sharpening and softening the images, filtering, etc.
  • For noise reduction, there would be two separate images, each of which would undergo a separate noise reduction process. (Note that noise is random in nature and, therefore, a different set of random noise would occur in the left camera image than in the right camera image. And, after the consequent reduction, a different set of pixels would remain in the two images.) Given these images were taken beyond the stereoscopic range, the two images could be merged, resulting in a more comprehensive, noise-free image.
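  • A minimal sketch of the per-eye noise-reduction idea, assuming grayscale NumPy images and using a simple box filter as a stand-in for whatever noise-reduction process is actually chosen; the two independently denoised views are then averaged for scenes beyond stereoscopic range.

```python
# Hypothetical sketch: denoise left and right images independently, then
# average them (valid when the scene is beyond stereoscopic range and the
# two views are nearly identical).
import numpy as np

def box_denoise(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Very small mean filter as a placeholder noise-reduction step."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def merge_beyond_stereo_range(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Average the two independently denoised views into one image."""
    return (box_denoise(left) + box_denoise(right)) / 2.0

left = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
right = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
print(merge_beyond_stereo_range(left, right).shape)   # (480, 640)
```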
  • stereoscopic image processing will include segmentation.
  • Segmentation enhancement techniques include but are not limited to the following: edge detection methods; histogram-based methods; tree/graph-based methods; neural network based segmentation; thresholding; clustering methods; graph partitioning methods; watershed transformation; and probabilistic and Bayesian approaches. Given these images were taken beyond the stereoscopic range, a different technique for the left and right images could be invoked. If the segmentation produced identical results, then there would be higher confidence in the results. If the results were different, however, then a third segmentation method could be invoked, and an adjudication process could resolve the segmentation.
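  • A minimal sketch of the adjudication idea, with simple intensity thresholds standing in for the segmentation methods named above; if the first two methods disagree, a third is invoked and a per-pixel majority vote resolves the segmentation.

```python
# Hypothetical sketch: run different segmentation methods on the left and
# right images and adjudicate with a third method when they disagree.
import numpy as np

def threshold_seg(img: np.ndarray, t: int = 128) -> np.ndarray:
    return img > t

def mean_seg(img: np.ndarray) -> np.ndarray:
    return img > np.mean(img)          # crude stand-in for a second method

def adjudicate(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    a = threshold_seg(left)
    b = mean_seg(right)
    if np.array_equal(a, b):
        return a                        # both methods agree: high confidence
    c = left > np.median(left)          # third method invoked for adjudication
    return (a.astype(int) + b.astype(int) + c.astype(int)) >= 2   # majority vote

img = np.random.randint(0, 256, (120, 160)).astype(np.uint8)
mask = adjudicate(img, img)
print(mask.shape, mask.dtype)
```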
  • a set of left and right images would be produced over time.
  • the user could identify an object(s) of interest which could be tracked over time.
  • Stereoscopic image processing consisting of background suppression could be applied to both left and right images which could enhance stereoscopic viewing of the object(s) of interest.
  • false color could be added to the scene and/ or object(s) of interest within the scene.
  • An example of stereoscopic image processing would be to use opposing anaglyph colors for left and right eye images.
  • a further example would be to use color figures to provide augmented reality of stereoscopic images.
  • In some embodiments, stereoscopic image processing would include image compression for data storage and transmission.
  • One approach to stereoscopic image compression processing would be: for portions of an image beyond stereoscopic ranges, apply image compression (including but not limited to run-length encoding) to that region in only one of the left or right images, but not both. For the region that is within stereoscopic ranges, apply image compression to both the left and right images.
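  • A minimal sketch of this region-aware compression scheme, using a simple run-length encoder and an assumed mask marking the stereoscopic-range region; the far region is encoded once (from the left image only) while the near region is encoded for both eyes.

```python
# Hypothetical sketch: run-length encode the shared far region once and the
# stereoscopic-range region separately for each eye.
import numpy as np

def rle_encode(values: np.ndarray) -> list[tuple[int, int]]:
    """Simple run-length encoding of a 1-D array: (value, run_length) pairs."""
    runs = []
    flat = values.ravel()
    start = 0
    for i in range(1, len(flat) + 1):
        if i == len(flat) or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

left = np.random.randint(0, 4, (100, 100)).astype(np.uint8)
right = left.copy()
near_mask = np.zeros_like(left, dtype=bool)
near_mask[30:70, 30:70] = True        # assumed stereoscopic-range region

payload = {
    "far_once": rle_encode(left[~near_mask]),    # shared background, one copy
    "near_left": rle_encode(left[near_mask]),    # stereo region kept per eye
    "near_right": rle_encode(right[near_mask]),
}
print({k: len(v) for k, v in payload.items()})
```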
  • Morphological processing would include, but is not limited to: dilation, erosion, boundary extraction, and region filling.
  • An example of morphological processing as it applies to stereoscopic image processing would be to perform erosion for the left image but not for the right. Left and right images could be alternated to permit the user to evaluate whether this type of processing was desirable.
  • stereoscopic object recognition would be invoked.
  • Techniques for stereoscopic object recognition include but are not limited to: convolutional neural networks (CNNs); and support vector machines (SVM).
  • A number of support features include, but are not limited to: feature extraction; pattern recognition; edge detection; and corner detection.
  • Examples of automated object recognition (AOR) processing as it applies to stereoscopic image processing would be to: recognize brands of cars; recognize types of animals; perform optical character reading; and perform optical character reading coupled with a translation dictionary.
  • CNN AOR could be performed on left camera imagery and SVM on right camera imagery. If both agree on the type of object, that type of object is presented to the user. However, if agreement is not reached by the CNN and SVM, then a third type of recognition methodology, such as feature recognition, would be invoked.
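  • A minimal sketch of this dual-recognizer adjudication, with placeholder callables standing in for the CNN, SVM, and fallback recognizers; no real models are implied.

```python
# Hypothetical sketch: a CNN labels the left image and an SVM labels the
# right image; agreement is reported directly, disagreement falls back to a
# third recognizer. The recognizers here are placeholder callables.
from typing import Callable

def recognize_object(left_img, right_img,
                     cnn: Callable, svm: Callable,
                     fallback: Callable) -> str:
    cnn_label = cnn(left_img)
    svm_label = svm(right_img)
    if cnn_label == svm_label:
        return cnn_label                    # both recognizers agree
    return fallback(left_img, right_img)    # third methodology adjudicates

# Usage with trivial stand-ins:
label = recognize_object(
    "left.jpg", "right.jpg",
    cnn=lambda img: "black lab puppy",
    svm=lambda img: "black lab puppy",
    fallback=lambda l, r: "unknown",
)
print(label)
```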
  • In some embodiments, stereoscopic image processing would include image stabilization. If a single stereoscopic image was desired by the user and, upon review of the image, the image was blurred due to vibration or movement(s) of the stereoscopic cameras during the imaging interval, then an option would be to decrease the shutter interval and repeat the stereoscopic image collection.
  • Stereoscopic image processing would include, but not be limited to: selection of three or more reference points within the stereoscopic images (i.e., visible in both left and right images) and, from frame to sequential frame, adjusting the sequential frame(s) to align the reference points; a border surrounding the displayed stereoscopic images could be invoked to reduce the overall area of the stereoscopic images.
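  • A minimal sketch of reference-point stabilization, assuming a pure translation between frames: the mean shift of the tracked reference points is undone and a border is trimmed; the same routine would be applied independently to the left-eye and right-eye streams.

```python
# Hypothetical sketch: estimate frame-to-frame translation from reference
# points, shift the subsequent frame back into alignment, then crop a border.
import numpy as np

def stabilize(frame: np.ndarray,
              ref_pts_initial: np.ndarray,
              ref_pts_current: np.ndarray,
              border: int = 16) -> np.ndarray:
    """Align `frame` to the initial frame using the mean shift of N>=3
    reference points, then trim a border to hide the empty edges."""
    shift = np.mean(ref_pts_initial - ref_pts_current, axis=0)   # (dx, dy)
    dx, dy = int(round(shift[0])), int(round(shift[1]))
    aligned = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return aligned[border:-border, border:-border]

frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
pts0 = np.array([[100.0, 50.0], [300.0, 200.0], [500.0, 400.0]])
pts1 = pts0 + np.array([4.0, -2.0])          # camera vibrated by (+4, -2) pixels
print(stabilize(frame, pts0, pts1).shape)    # (448, 608)
```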
  • stereoscopic viewing of the virtual 3D mannequin is performed on an extended reality display unit, which is described in US Patent 8,384,771, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety.
  • This patent teaches image processing techniques including volume generation, filtering, rotation, and zooming.
  • stereoscopic viewing of the virtual 3D mannequin is performed with convergence, which is described in US Patent 9,349,183, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety.
  • This patent teaches shifting of convergence. This feature can be used in combination with filtering.
  • stereoscopic viewing can be performed using a display unit, which incorporates polarized lenses, which is described in US Patent 9,473,766, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety.
  • advancements to display units can be incorporated for viewing the virtual 3D mannequin, which are taught in US Patent Application 16/828,352, SMART GLASSES SYSTEM and US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS, which are both incorporated by reference in their entirety.
  • Some embodiments comprise utilizing an improved field of view on an extended reality head display unit, which is taught in US Patent Application 16/893,291, A METHOD AND APPARATUS FOR A HEAD DISPLAY UNIT WITH A MOVABLE HIGH-RESOLUTION FIELD OF VIEW, which is incorporated by reference in its entirety.
  • image processing steps can be performed using a 3D volume cursor, which is taught in US Patent 9,980,691, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, and US Patent 10,795,457, INTERACTIVE 3D CURSOR, both of which are incorporated by reference in their entirety.
  • a precision sub-volume can be utilized in conjunction with the virtual 3D mannequin, which is taught in US Patent Application 16/927,886, A METHOD AND APPARATUS FOR GENERATING A PRECISION SUB-VOLUME WITHIN THREE- DIMENSIONAL IMAGE DATASETS, which is incorporated by reference in its entirety.
  • viewing of a structure at two different time points can be performed using a ghost imaging technique, which is taught in US Patent 10,864,043, INTERACTIVE PLACEMENT OF A 3D DIGITAL REPRESENTATION OF A SURGICAL DEVICE OR ANATOMIC FEATURE INTO A 3D RADIOLOGIC IMAGE FOR PREOPERATIVE PLANNING, which is incorporated by reference in its entirety.
  • Some embodiments comprise selecting a specific surgical device for pre-operative planning, which is taught in US Patent Application 17/093,322, A METHOD OF SELECTING A SPECIFIC SURGICAL DEVICE FOR PREOPERATIVE PLANNING, which is incorporated by reference in its entirety.
  • Some embodiments comprise, generating the virtual 3D mannequin using techniques described in US Patent Application 16/867,102, METHOD AND APPARATUS OF CREATING A COMPUTER-GENERATED PATIENT SPECIFIC IMAGE, which is incorporated by reference in its entirety.
  • Key techniques include using patient factors (e.g., history, physical examination findings, etc.) to generate a volume.
  • Some embodiments comprise advanced image processing techniques available to the user of the virtual 3D mannequin, which are taught in US Patent 10,586,400, PROCESSING 3D MEDICAL IMAGES TO ENHANCE VISUALIZATION, and US Patent 10,657,731, PROCESSING 3D MEDICAL IMAGES TO ENHANCE VISUALIZATION, both of which are incorporated by reference in their entirety.
  • Some embodiments comprise performing voxel manipulation techniques so that portions of the virtual 3D mannequin can be deformed and move in relation to other portions of the virtual 3D mannequin, which is taught in US Patent application 16/195,251, INTERACTIVE VOXEL MANIPULATION IN VOLUMETRIC MEDICAL IMAGING FOR VIRTUAL MOTION, DEFORMABLE TISSUE, AND VIRTUAL RADIOLOGICAL DISSECTION, which is incorporated by reference in its entirety.
  • Some embodiments comprise generating at least some portions of the virtual 3D mannequin through artificial intelligence methods and performing voxel manipulation thereof, which is taught in US patent application 16/736,731, RADIOLOGIST-ASSISTED MACHINE LEARNING WITH INTERACTIVE, VOLUME SUBTENDING 3D CURSOR, which is incorporated by reference in its entirety.
  • Some embodiments comprise wherein at least some components of the inserted 3D dataset into the virtual 3D mannequin are derived from cross-sectional imaging data fine-tuned with phantoms, which is taught in US Patent application 16/752,691, IMPROVING IMAGE QUALITY BY INCORPORATING DATA UNIT ASSURANCE MARKERS, which is incorporated by reference in its entirety.
  • Some embodiments comprise utilizing halo-type segmentation techniques, which are taught in US Patent Application 16/785,606, IMPROVING IMAGE PROCESSING VIA A MODIFIED SEGMENTED STRUCTURE, which is incorporated by reference in its entirety.
  • Some embodiments comprise using techniques for advanced analysis of the virtual 3D mannequin taught in US Patent Application 16/939,192, RADIOLOGIST ASSISTED MACHINE LEARNING, which is incorporated by reference in its entirety.
  • Some embodiments comprise performing smart localization from a first virtual 3D mannequin to a second virtual 3D mannequin, such as in an anatomy lab, which is performed via techniques taught in US Patent Application 17/100,902, METHOD AND APPARATUS FOR AN IMPROVED LOCALIZER FOR 3D IMAGING, which is incorporated by reference in its entirety.
  • Some embodiments comprise performing a first imaging examination with a first level of mechanical compression and a second imaging examination with a second level of mechanical compression and analyzing differences therein, which is taught in US Patent Application 16/594,139, METHOD AND APPARATUS FOR PERFORMING 3D IMAGING EXAMINATIONS OF A STRUCTURE UNDER DIFFERING CONFIGURATIONS AND ANALYZING MORPHOLOGIC CHANGES, which is incorporated by reference in its entirety.
  • Some embodiments comprise displaying the virtual 3D mannequin in an optimized image refresh rate, which is taught in US Patent Application 16/842,631, A SMART SCROLLING SYSTEM, which is incorporated by reference in its entirety.
  • Some embodiments comprise displaying the virtual 3D mannequin using priority volume rendering, which is taught in US Patent 10,776,989, A METHOD AND APPARATUS FOR PRIORITIZED VOLUME RENDERING, which is incorporated by reference in its entirety.
  • Some embodiments comprise displaying the virtual 3D mannequin using tandem volume rendering, which is taught in US Patent Application 17/033,892, A METHOD AND APPARATUS FOR TANDEM VOLUME RENDERING, which is incorporated by reference in its entirety.
  • Some embodiments comprise displaying images in an optimized fashion by incorporating eye tracking, which is taught in US Patent Application 16/936,293, IMPROVING VISUALIZATION OF IMAGES VIA AN ENHANCED EYE TRACKING SYSTEM, which is incorporated by reference in its entirety.
  • Some embodiments comprise enhancing collaboration for analysis of the virtual 3D mannequin by incorporating teachings from US Patent Application 17/072,350, OPTIMIZED IMAGING CONSULTING PROCESS FOR RARE IMAGING FINDINGS, which is incorporated by reference in its entirety.
  • Some embodiments comprise improving multi-user viewing of the virtual 3D mannequin by incorporating teachings from US Patent Application 17/079,479, AN IMPROVED MULTIUSER EXTENDED REALITY VIEWING TECHNIQUE, which is incorporated by reference in its entirety.
  • Some embodiments comprise improving analysis of images through use of geo-registered tools, which is taught in US Patent 10,712,837, USING GEO-REGISTERED TOOLS TO MANIPULATE THREE-DIMENSIONAL MEDICAL IMAGES, which is incorporated by reference in its entirety.
  • Some embodiments comprise integration of virtual tools with geo-registered tools, which is taught in US Patent Application 16/893,291, A METHOD AND APPARATUS FOR THE INTERACTION OF VIRTUAL TOOLS AND GEO-REGISTERED TOOLS, which is incorporated by reference in its entirety.
  • blood flow is illustrated in the virtual 3D mannequin, which is taught in US Patent Application 16/506,073, A METHOD FOR ILLUSTRATING DIRECTION OF BLOOD FLOW VIA POINTERS, which is incorporated by reference in its entirety and US Patent 10,846,911, 3D IMAGING OF VIRTUAL FLUIDS AND VIRTUAL SOUNDS, which is also incorporated by reference in its entirety.
  • Some embodiments also involve incorporation of 3D printed objects to be used in conjunction with the virtual 3D mannequin.
  • Some embodiments also involve a 3D virtual hand, which can be geo-registered to the virtual 3D mannequin.
  • Techniques herein are disclosed in US Patent Application 17/113,062, A METHOD AND APPARATUS FOR A GEO-REGISTERED 3D VIRTUAL HAND, which is incorporated by reference in its entirety.
  • Some embodiments comprise utilizing images obtained from US Patent Application 16/654,047, METHOD TO MODIFY IMAGING PROTOCOLS IN REAL TIME THROUGH IMPLEMENTATION OF ARTIFICIAL, which is incorporated by reference in its entirety.
  • Some embodiments comprise utilizing images obtained from US Patent Application 16/597,910, METHOD OF CREATING AN ARTIFICIAL INTELLIGENCE GENERATED DIFFERENTIAL DIAGNOSIS AND MANAGEMENT RECOMMENDATION TOOLBOXES DURING MEDICAL PERSONNEL ANALYSIS AND REPORTING, which is incorporated by reference in its entirety.
  • Some embodiments comprise a method comprising using a smart phone wherein said smart phone contains a first camera and a second camera wherein said first camera has a first location on said smart phone, wherein said second camera has a second location on said smart phone, wherein said second location is different from said first location, and wherein said first location and said second location are separated by a first stereo distance. Some embodiments comprise acquiring a first set of stereoscopic imagery using said first camera at said first location on said smart phone and said second camera at said second location on said smart phone.
  • Some embodiments comprise changing a spatial relationship by at least one of the group of: moving said first camera from said first location on said smart phone to a third location on said smart phone wherein said third location is different from said first location; and moving said second camera from said second location on said smart phone to a fourth location on said smart phone wherein said fourth location is different from said second location.
  • Some embodiments comprise after said changing said spatial relationship, acquiring a second set of stereoscopic imagery using said first camera and second camera.
  • Some embodiments comprise wherein said smart phone tracks an object's location in an area.
  • Some embodiments comprise wherein said first camera's first location and said second camera's second location is based on said object's initial location in said area.
  • Some embodiments comprise wherein said initial location is a first distance from said smart phone.
  • Some embodiments comprise wherein said first camera's third location and said second camera's fourth location is based on said object's subsequent location in said area. Some embodiments comprise wherein said subsequent location is different from said initial location. Some embodiments comprise wherein said subsequent location is a second distance from said smart phone. Some embodiments comprise wherein said second distance is different from said first distance.
  • Some embodiments comprise wherein said first set of stereoscopic imagery and said second set of stereoscopic imagery comprise enhanced stereoscopic video imagery.
  • Some embodiments comprise wherein said enhanced stereoscopic video imagery comprises wherein said third location and said fourth location are separated by said first stereo distance.
  • said enhanced stereoscopic video imagery comprises wherein said third location and said fourth location are separated by a second stereo distance wherein said second stereo distance is different from said first stereo distance.
  • Some embodiments comprise wherein successive frames of said enhanced stereoscopic imagery have different stereo distances.
  • Some embodiments comprise wherein said first set of stereoscopic imagery has a first zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has said first zoom setting for said first camera and said second camera.
  • Some embodiments comprise wherein said first set of stereoscopic imagery has a first zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has a second zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second zoom setting is different from said first zoom setting.
  • Some embodiments comprise wherein said first set of stereoscopic imagery has a first aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has said first aperture setting for said first camera and said second camera.
  • Some embodiments comprise wherein said first set of stereoscopic imagery has a first aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has a second aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second aperture setting is different from said first aperture setting.
  • Some embodiments comprise wherein said first set of stereoscopic imagery comprises wherein said first camera has a first cant angle and said second camera has a second cant angle and wherein said second set of stereoscopic imagery comprises wherein said first camera has said first cant angle and said second camera has said second cant angle.
  • Some embodiments comprise wherein said first set of stereoscopic imagery comprises wherein said first camera has a first cant angle and said second camera has a second cant angle and wherein said second set of stereoscopic imagery comprises wherein said first camera has a third cant angle different from said first cant angle and said second camera has a fourth cant angle different from said second cant angle.
  • Some embodiments comprise performing stereoscopic image stabilization wherein said stereoscopic image stabilization comprises: using said first camera to acquire imagery of an area containing a tangible object; using said second camera to acquire imagery of said area containing said tangible object; selecting at least one point on said tangible object in said area to be used as stable reference point(s); for an initial frame of said acquired imagery of said area from said first camera, identifying at least one point within said initial frame of said acquired imagery of said area from said first camera that correspond to said stable reference point; for an initial frame of said acquired imagery of said area from said second camera, identifying at least one point within said initial frame of said acquired imagery of said area from said second camera that correspond to said stable reference point; for a subsequent frame of said acquired imagery of said area from said first camera, identifying at least one point within said subsequent frame of said acquired imagery of said area from said first camera that correspond to said stable reference point; for a subsequent frame of said acquired imagery of said area from said second camera, identifying at least one point within said subsequent frame of said acquired imagery of said area from said second camera,
  • Some embodiments comprise selecting a portion of said initial frame of said acquired imagery of said area from said first camera. Some embodiments comprise selecting a portion of said initial frame of said acquired imagery of said area from said second camera. Some embodiments comprise selecting a portion of said subsequent frame of said acquired imagery of said area from said first camera. Some embodiments comprise selecting a portion of said subsequent frame of said acquired imagery of said area from said second camera.
  • Some embodiments comprise displaying imagery with said first alignment comprising said selected portion of said initial frame of said acquired imagery of said area from said first camera and said selected portion of said subsequent frame of said acquired imagery of said area from said first camera on a left eye display of an extended reality head display unit. Some embodiments comprise displaying imagery with said second alignment comprising said selected portion of said initial frame of said acquired imagery of said area from said second camera and said selected portion of said subsequent frame of said acquired imagery of said area from said second camera on a right eye display of said extended reality head display unit.
  • Some embodiments comprise wherein said camera bar design comprises wherein said first camera and said second camera are restricted to moving along a line.
  • Some embodiments comprise a uni-planar camera system wherein said uni-planar camera system comprises wherein said first camera's positions are restricted to a plane on said smart phone's surface and said second camera's positions are restricted to said plane.
  • Some embodiments comprise wherein said first camera and said second camera are on said smart phone's back. Some embodiments comprise wherein said smart phone's face contains a third camera and a fourth camera wherein said third camera and said fourth camera are separated by a stereo distance ranging from 0.25 inch to 1.25 inches. Some embodiments comprise wherein the third camera and the fourth camera are separated by a stereo distance ranging from 0.1 inch to 2.0 inches.
  • Some embodiments comprise a smart phone comprising a first camera wherein said first camera has a first location on said smart phone, a second camera wherein said second camera has a second location on said smart phone and wherein said second location is different from said first location, a third camera wherein said third camera has a third location on said smart phone and wherein said third location is different from said first location and said second location, and an imaging system configured to track an object's location in an area.
  • Some embodiments comprise wherein said first camera and said second camera are separated by a first stereo distance.
  • Some embodiments comprise wherein said second camera and said third camera are separated by a second stereo distance.
  • Some embodiments comprise wherein said third camera and said first camera are separated by a third stereo distance.
  • Some embodiments comprise wherein said first stereo distance is smaller than said second stereo distance. Some embodiments comprise wherein said third stereo distance is larger than said first stereo distance and said second stereo distance. Some embodiments comprise wherein said smart phone is configured to use data from said imaging system, said first camera, said second camera and said third camera to acquire enhanced stereoscopic imagery of said object in said area comprising: if said object is a first distance from said smart phone, using said first camera and said second camera to generate a first set of stereoscopic imagery; if said object is a second distance from said smart phone wherein said second distance is larger than said first distance, using said second camera and said third camera to generate a second set of stereoscopic imagery; and if said object is a third distance from said smart phone wherein said third distance is larger than said second distance, using said first camera and said third camera to generate a third set of stereoscopic imagery.
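  • A minimal sketch of the camera-pair selection logic, with illustrative (assumed) range thresholds deciding which stereo distance is used for a given object range; the camera names are placeholders.

```python
# Hypothetical sketch: select which camera pair (and thus which stereo
# distance) to use based on the tracked object's range from the phone.
def select_camera_pair(object_range_m: float) -> tuple[str, str]:
    if object_range_m < 1.0:          # first (shortest) stereo distance
        return ("camera_1", "camera_2")
    if object_range_m < 5.0:          # second, larger stereo distance
        return ("camera_2", "camera_3")
    return ("camera_1", "camera_3")   # third, largest stereo distance

print(select_camera_pair(0.5))    # ('camera_1', 'camera_2')
print(select_camera_pair(3.0))    # ('camera_2', 'camera_3')
print(select_camera_pair(20.0))   # ('camera_1', 'camera_3')
```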
  • Some embodiments comprise an extended reality head display unit (HDU) comprising: a left eye display configured to display left eye images from acquired enhanced stereoscopic imagery; a right eye display configured to display right eye images from acquired enhanced stereoscopic imagery; and wherein said enhanced stereoscopic imagery is acquired on a smart phone comprising: a first camera wherein said first camera has a first location on said smart phone; a second camera wherein said second camera has a second location on said smart phone and wherein said second location is different from said first location; a third camera wherein said third camera has a third location on said smart phone and wherein said third location is different from said first location and said second location; an imaging system configured to track an object's location in an area; wherein said first camera and said second camera are separated by a first stereo distance; wherein said second camera and said third camera are separated by a second stereo distance; wherein said third camera and said first camera are separated by a third stereo distance; wherein said first stereo distance is smaller than said second stereo distance; wherein said third stereo
  • Still other embodiments include a computerized device, configured to process all the method operations disclosed herein as embodiments of the invention.
  • The computerized device includes a memory system, a processor, and a communications interface in an interconnection mechanism connecting these components.
  • The memory system is encoded with a process that provides steps explained herein that, when performed (e.g., when executing) on the processor, operates as explained herein within the computerized device to perform all of the method embodiments and operations explained herein as embodiments of the invention.
  • any computerized device that performs or is programmed to perform processing explained herein is an embodiment of the invention.
  • a computer program product is one embodiment that has a computer-readable medium including computer program logic encoded thereon that when performed in a computerized device provides associated operations providing steps as explained herein.
  • the computer program logic when executed on at least one processor with a computing system, causes the processor to perform the operations (e.g., the methods) indicated herein as embodiments of the invention.
  • Such arrangements of the invention are typically provided as software, code and/or other data structures arranged or encoded on a computer readable medium such as an optical medium (e.g., CD-ROM), floppy or hard disk, or another medium such as firmware or microcode in one or more ROM or RAM or PROM chips, or as an Application Specific Integrated Circuit (ASIC), or as downloadable software images in one or more modules, shared libraries, etc.
  • The software or firmware or other such configurations can be installed onto a computerized device to cause one or more processors in the computerized device to perform the techniques explained herein as embodiments of the invention.
  • Software processes that operate in a collection of computerized devices, such as in a group of data communications devices or other entities can also provide the system of the invention.
  • the system of the invention can be distributed between many software processes on several data communications devices, or all processes could run on a small set of dedicated computers, or on one computer alone.
  • The embodiments of the invention can be embodied strictly as a software program, as software and hardware, or as hardware and/or circuitry alone, such as within a data communications device.
  • The features of the invention, as explained herein, may be employed in data processing devices and/or software systems for such devices.
  • each of the different features, techniques, configurations, etc. discussed in this disclosure can be executed independently or in combination.
  • the present invention can be embodied and viewed in many different ways.
  • this Summary section herein does not specify every embodiment and/or incrementally novel aspect of the present disclosure or claimed invention. Instead, this Summary only provides a preliminary discussion of different embodiments and corresponding points of novelty over conventional techniques. For additional details, elements, and/or possible perspectives (permutations) of the invention, the reader is directed to the Detailed Description section and corresponding figures of the present disclosure as further discussed below.
  • Figure 1 A illustrates the back of a smart phone with stereoscopic camera capability.
  • Figure 1B illustrates the front of a smart phone with stereoscopic camera capability.
  • Figure 2A illustrates key components of the smart phone with stereoscopic camera capability.
  • Figure 2B illustrates a top-down view of the smart phone with stereoscopic camera capability.
  • Figure 2C illustrates a top-down view of the smart phone with stereoscopic camera capability with convergence.
  • Figure 3A illustrates curved lens concept for smart phone with stereoscopic camera capability.
  • Figure 3B illustrates fish-eye type of lens concept for smart phone with stereoscopic camera capability.
  • Figure 3C illustrates a progressive type of lens concept for smart phone with stereoscopic camera capability.
  • Figure 4A illustrates a front view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
  • Figure 4B illustrates a top view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
  • Figure 5A illustrates a front view of a flat mirror concept for smart phone with stereoscopic camera capability.
  • Figure 5B illustrates a top-down view of the flat mirror concept for smart phone with stereoscopic camera capability.
  • Figure 5C illustrates a front view of a curved mirror concept for smart phone with stereoscopic camera capability.
  • Figure 5D illustrates a top-down view of the curved mirror concept for smart phone with stereoscopic camera capability.
  • Figure 5E illustrates a front view of a deformable mirror concept for smart phone with stereoscopic camera capability.
  • Figure 5F illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 1.
  • Figure 5G illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 2.
  • Figure 6A illustrates a movable lens for smart phone with stereoscopic camera capability.
  • Figure 6B illustrates the composite sensor array concept.
  • Figure 6C illustrates a switching out of cameras from Day 1 to Day 2.
  • FIG. 7A illustrates the Stereoscopic Head Display Unit (SHDU).
  • SHDU Stereoscopic Head Display Unit
  • Figure 7B shows a side view of a transformable SHDU display unit with an eyepiece cover with augmented reality mode and virtual reality mode.
  • Figure 7C shows a side view of a transformable SHDU display unit with an electronic eye piece with augmented reality mode and virtual reality mode.
  • Figure 8 A illustrates wired connectivity means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • Figure 8B illustrates wireless connectivity via BlueTooth means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • Figure 8C illustrates wireless connectivity via the Internet means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • Figure 9A illustrates system operation using the stereoscopic smart phone (SSP).
  • Figure 9B illustrates near real-time stereo mode at time N.
  • Figure 9C illustrates near real-time stereo mode at time N+1.
  • Figure 9D illustrates convergence mode at time N+1.
  • Figure 10A illustrates before application of automatic object recognition as displayed on the stereoscopic head display unit.
  • Figure 10B illustrates after application of automatic object recognition as displayed on the stereoscopic head display unit.
  • Figure 10C illustrates another unique stereo camera technology.
  • Figure 11 A illustrates integration of image stabilization for a user in a scene where there is vibration.
  • Figure 11B illustrates selection of points within the image to use for image stabilization.
  • Figure 11C illustrates selection of points within the scene to use for image stabilization.
  • Figure 11D illustrates stereoscopic image stabilization on the SHDU.
  • Figure 12A illustrates determining a stabilization point and field of view (FOV) for the left eye imagery and right eye imagery.
  • FOV field of view
  • Figure 12B illustrates displaying stabilized stereoscopic imagery on a SHDU.
  • Figure 13A illustrates a stereoscopic smart phone (SSP) with its stereoscopic cameras in a first position, which is wide.
  • SSP stereoscopic smart phone
  • Figure 13B illustrates the SSP with its stereoscopic cameras in a second position, which is narrow.
  • Figure 13C illustrates the SSP with its stereoscopic cameras in a third position, which is also narrow, but shifted in position as compared to Figure 13B.
  • Figure 14 illustrates optimizing stereoscopic imaging.
  • Figure 15 illustrates adjusting stereoscopic camera componentry based on eye tracking parameters of a user.
  • Figure 16 illustrates a three-camera stereoscopic smart phone.
  • Some aspects, features and implementations described herein may include machines such as computers, electronic components, optical components, and processes such as computer-implemented steps. It will be apparent to those of ordinary skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a non-transitory computer-readable medium. Furthermore, it will be understood by those of ordinary skill in the art that the computer-executable instructions may be executed on a variety of tangible processor devices. For ease of exposition, not every step, device or component that may be part of a computer or data storage system is described herein. Those of ordinary skill in the art will recognize such steps, devices and components in view of the teachings of the present disclosure and the knowledge generally available to those of ordinary skill in the art. The corresponding machines and processes are therefore enabled and within the scope of the disclosure.
  • FIG. 1 A illustrates the back of a smart phone with stereoscopic camera capability.
  • This is called a stereoscopic smart phone (SSP).
  • 101 illustrates a smart phone with stereoscopic camera capability.
  • 102 illustrates a first outward facing camera on the back of the smart phone 101.
  • 103 illustrates a second outward facing camera on the back of the smart phone 101.
  • the first camera 102 and the second camera 103 are separated by a stereoscopic distance. In the preferred embodiment, these would be maximally separated to enhance stereo separation to achieve stereoscopic vision at distances greater than a user could with his/her own stereoscopic distance.
  • Some embodiments comprise placing mechanical extensions to further increase the stereo separation.
  • two phones would work in conjunction to further increase stereo distance.
  • A first smart phone could be placed 10, 20 or 30 feet from a second smart phone.
  • the video imagery could be synchronized and video imagery from a camera from the first phone and video imagery from a camera from the second phone can be used together to generate stereoscopic imagery.
  • these phones are to be manufactured with differing stereoscopic distances.
  • the stereoscopic distance can match that of a person. Some designs will therefore have a larger stereoscopic distance and other designs will have a smaller stereoscopic distance.
  • Figure 1B illustrates the front of a smart phone with stereoscopic camera capability.
  • 104 illustrates a third outward facing camera on the front of the smart phone 101.
  • 105 illustrates a fourth outward facing camera on the front of the smart phone 101.
  • the third camera 104 and the fourth camera 105 are separated by a stereoscopic distance.
  • 106 illustrates the display portion of the smart phone. As with today’s smart phones, different sizes of phones are available with the consequent increases in the surface area of the larger phones. 107 represents a location on the smart phone for wired communication, such as for power (recharging the battery) and/or external input/output of digital data.
  • 108 would provide the port where a wire connection could communicate with the stereoscopic head display unit.
  • 108 illustrates an example location of an ON/OFF switch.
  • 109 illustrates a volume up control button.
  • 110 illustrates a volume down control button.
  • 111 illustrates the input/output antenna. This antenna would support communications with the Stereoscopic Head Display Unit.
  • Figure 2A illustrates key components of the smart phone with stereoscopic camera capability.
  • 200 illustrates a listing of the key components of the smart phone with stereoscopic cameras capability. These components are the same for both cameras.
  • there is a shutter with a variable speed and there is a lens, which will be described in some detail in subsequent figures.
  • the mirror is a deformable mirror.
  • the mirror is capable of working with adaptive optics.
  • the mirror is a curved mirror. In some embodiments, the mirror is a flat mirror.
  • a processor and software support the operation of the camera system. For example, changing the camera viewing angle from looking straight forward to convergence mode, where the left camera would look down and to the right and the right camera would look down and to the left. These viewing angles would intersect at a convergence point on the object of interest. These angles would be based on the proximity of the object of interest to the cameras. Depending on the design option selected, there may be mechanical elements that interact with either individual components or the camera as a whole.
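  • As an illustrative aid only (not part of the original disclosure), the relationship between object proximity and the inward cant of each camera can be sketched with basic trigonometry. The Python sketch below assumes a symmetric convergence point on the midline at the object's distance and a fixed camera baseline; the function name and the example values are hypothetical.

```python
import math

def toe_in_angle_deg(stereo_baseline_m: float, object_distance_m: float) -> float:
    """Inward cant (in degrees) for each camera of a stereo pair, assuming a
    symmetric convergence point on the midline at the object's distance."""
    # Each camera sits half the baseline off the midline; the convergence
    # point lies straight ahead at object_distance_m.
    return math.degrees(math.atan2(stereo_baseline_m / 2.0, object_distance_m))

# Example: with a 120 mm baseline, converging on an object 0.5 m away requires
# roughly a 6.8 degree inward cant per camera; at 5 m it drops to about 0.7 degrees.
if __name__ == "__main__":
    for distance_m in (0.5, 1.0, 5.0):
        print(distance_m, round(toe_in_angle_deg(0.12, distance_m), 2))
```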
  • FIG. 2B illustrates a top-down view of the smart phone with stereoscopic camera capability.
  • 200 illustrates a smart phone.
  • 201F illustrates the front face of a smart phone.
  • 201B illustrates the back of the smart phone.
  • 202 illustrates a first camera of a stereoscopic camera pair, which is pointed orthogonal to the front of the smart phone 201F.
  • 202A illustrates the center angle of the first camera 202, which is the direction to the center of the field of view of the first camera 202.
  • 203 illustrates a second camera of a stereoscopic camera pair, which is pointed orthogonal to the front of the smart phone 201F.
  • 203A illustrates the center angle of the second camera 203, which is the direction to the center of the field of view of the second camera 203.
  • Figure 2C illustrates a top-down view of the smart phone with stereoscopic camera capability with convergence.
  • this is performed using the same smart phone and the cameras alter the direction from which imagery is obtained.
  • this is performed based on altering configuration of the componentry of the camera.
  • this is performed digitally, as is taught in US 17/225,610, AN IMPROVED IMMERSIVE VIEWING EXPERIENCE filed on 4/8/2021 and US Patent Application 17/237,152, AN IMPROVED IMMERSIVE VIEWING EXPERIENCE, filed on 4/22/2021, which are incorporated by reference in their entirety.
  • 200 illustrates a smart phone.
  • 201F illustrates the front face of a smart phone.
  • 201B illustrates the back of the smart phone.
  • 202 illustrates a first camera of a stereoscopic camera pair.
  • 202B illustrates the center angle of the first camera 202, which is the direction to the center of the field of view of the first camera 202.
  • 203 illustrates a second camera of a stereoscopic camera pair.
  • 203B illustrates the center angle of the second camera 203, which is the direction to the center of the field of view of the second camera 203. Note that the center angle 202B of the first camera 202 and the center angle 203B of the second camera 203 are both canted inward towards each other towards a convergence point “204”.
  • the convergence point is in the midline (along the plane orthogonal to the front of the smart phone 201F, extending from a point halfway in between the first camera 202 and the second camera 203).
  • the convergence point can be: at the level of the first camera 202 and the second camera 203; above the level of the first camera 202 and the second camera 203; below the level of the first camera 202 and the second camera 203; or, off of midline.
  • the convergence point can be any point in (x, y, z) space seen by the first camera 202 and second camera 203.
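  • Because the convergence point can be any point in (x, y, z) space, each camera’s pointing direction can be described as a vector from that camera’s location to the chosen point. The following Python sketch is an illustrative aid only, under an assumed phone-centered coordinate frame (x along the camera bar, y up, z out of the back of the phone); the function and example coordinates are not taken from the disclosure.

```python
import math

def pointing_direction(camera_xyz, convergence_xyz):
    """Unit vector from a camera's location to a convergence point, plus the
    corresponding yaw (left/right) and pitch (up/down) cant angles in degrees.
    Assumed frame: x along the camera bar, y up, z out of the back of the phone."""
    dx = convergence_xyz[0] - camera_xyz[0]
    dy = convergence_xyz[1] - camera_xyz[1]
    dz = convergence_xyz[2] - camera_xyz[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    unit = (dx / norm, dy / norm, dz / norm)
    yaw = math.degrees(math.atan2(dx, dz))                     # inward/outward cant
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # upward/downward cant
    return unit, yaw, pitch

# Convergence point 0.4 m out, 0.1 m above camera level, slightly off midline.
left_cam, right_cam = (-0.06, 0.0, 0.0), (0.06, 0.0, 0.0)
point = (0.02, 0.10, 0.40)
print(pointing_direction(left_cam, point))    # left camera cants inward (right) and up
print(pointing_direction(right_cam, point))   # right camera cants inward (left) and up
```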
  • a camera from the first smart device and a camera from the second smart device can also be used to yield stereoscopic imagery, as taught in this patent and in the patents incorporated by reference in their entirety.
  • Figure 3A illustrates a curved lens concept for the smart phone with stereoscopic camera capability. This figure illustrates the first of three concepts for lenses for the smart phone with stereoscopic camera capability. 301 illustrates a curved lens with an angular field of regard 302 and with a radius of curvature 303. The camera would also have a variable field of view which would be pointed somewhere within the field of regard.
  • Figure 3B illustrates fish-eye type of lens concept for smart phone with stereoscopic camera capability.
  • 304 illustrates a fish-eye type of lens. This lens could provide hemispherical or near hemispherical coverage 305.
  • Figure 3C illustrates a progressive type of lens concept for smart phone with stereoscopic camera capability.
  • 306 illustrates a first progressive type of lens, which can be used for the first camera of a stereoscopic camera pair.
  • 307 illustrates a portion of the lens that optimizes image acquisition in a straight-forward direction.
  • 308 illustrates a portion of the lens that provides an increasingly focused capability wherein there is a greater magnification as compared to 307.
  • 308 is directly associated with convergence at shorter and shorter distances. Note that 308 is located towards the bottom and towards the second camera of a stereoscopic camera pair.
  • 310 illustrates a second progressive type of lens, which can be used for the second camera of a stereoscopic camera pair.
  • 311 illustrates a portion of the lens that optimizes image acquisition in a straightforward direction.
  • 312 illustrates a portion of the lens that provides an increasingly focused capability wherein there is a greater magnification as compared to 311.
  • 312 is directly associated with convergence at shorter and shorter distances. Note that 312 is located towards the bottom and towards the first camera of a stereoscopic camera pair. Together, these provide a unique capability to the smart phone with stereoscopic cameras.
  • a progressive lens has a corrective factor to provide for looking straight forward and, in a ‘progressive’ manner, gradually increases magnification in a downward and inward direction through the progressive lens. Note that electronic correction can occur to account for refraction through the lens.
  • the amount of magnification can be increased in the medial or inferior-medial direction. This can be performed for eyeglasses, which is referred to as “convergence type progressive eyeglasses”.
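  • The “convergence type progressive” idea above can be summarized as magnification that ramps up toward the inferior-medial portion of each lens. The minimal Python sketch below uses a simple linear ramp with placeholder values purely for illustration; it is not an optical prescription from the disclosure.

```python
def progressive_magnification(down_frac: float, medial_frac: float,
                              base_mag: float = 1.0, max_extra: float = 1.5) -> float:
    """Magnification at a point on a 'convergence type progressive' lens.

    down_frac:   0.0 at the top of the lens, 1.0 at the bottom.
    medial_frac: 0.0 at the outer (temporal) edge, 1.0 at the inner (medial) edge.
    Magnification ramps up toward the inferior-medial corner; the linear ramp
    and the max_extra value are illustrative assumptions, not a lens design.
    """
    ramp = max(0.0, min(1.0, down_frac)) * max(0.0, min(1.0, medial_frac))
    return base_mag + max_extra * ramp

print(progressive_magnification(0.0, 0.5))  # straight-ahead zone: 1.0x
print(progressive_magnification(1.0, 1.0))  # inferior-medial corner: 2.5x
```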
  • Figure 4A illustrates a front view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
  • FIG. 4B illustrates a top view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
  • Figure 5A illustrates a flat mirror concept for the smart phone with stereoscopic camera capability. This is the first of three mirror (or reflective surface) concepts which could be inserted into the stereoscopic cameras. A mirror, if included in the design, would in the preferred embodiment be placed between the lens and the detection sensors to redirect the light which has passed through the lens to the sensor array.
  • This front view illustrates a typical flat mirror 501. 502 is the mirror frame.
  • Figure 5B illustrates a top-down view of a flat mirror concept for smart phone with stereoscopic camera capability.
  • 502 shows the outline of the frame.
  • 501 shows the mirror which is hidden by the frame.
  • Figure 5C illustrates a front view of a curved mirror concept for smart phone with stereoscopic camera capability.
  • 503 illustrates a curved mirror encased by a frame 504.
  • Figure 5D illustrates a top-down view of the curved mirror concept for smart phone with stereoscopic camera capability.
  • 503 illustrates the curved mirror encased by a frame 504.
  • a spherical-type curvature is used.
  • a cylindrical-type curvature is used.
  • the curvature can include both spherical-type curvature and cylindrical-type curvatures.
  • multiple (at least two) different curved mirrors can be used. For example, for long-range zooming, a first type of curved mirror can be utilized. For medium-range zooming, a second type of curved mirror can be utilized. These can be used one at a time, so the first type of curved mirror and second type of curved mirror can be swapped out depending on what object is being viewed.
  • Figure 5E illustrates a front view of a deformable mirror concept for smart phone with stereoscopic camera capability.
  • 505 illustrates a deformable mirror encased by a frame 506.
  • An inventive aspect of this patent is to use a deformable mirror in conjunction with a stereoscopic camera system.
  • Figure 5F illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 1.
  • 505 illustrates a deformable mirror encased by a frame 506. Note the contour of the deformable mirror at this first time point.
  • Figure 5G illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 2.
  • 505 illustrates a deformable mirror encased by a frame 506. Note the contour of the deformable mirror at this second time point, which is different from the contour at the first time point.
  • the curvature of the first deformable mirror has a first focal point.
  • the curvature of the first deformable mirror has a second focal point, which is different from the first focal point.
  • the curvature of the second deformable mirror has a first focal point.
  • the curvature of the second deformable mirror has a second focal point, which is different from the first focal point.
  • the first deformable mirror and second deformable mirror would optimize imagery at a first location. This could be an object in the scene or a point in space. This could be a first convergence point.
  • the first deformable mirror and second deformable mirror would optimize imagery at a second location. This could be an object in the scene or a point in space. This could be a second convergence point.
  • the second convergence point would be different from the first convergence point.
  • the deformable system is inventive because it maintains high light collection and can rapidly alter the projection of this light from the scene onto the detector.
  • eye tracking of a user is performed. Based on eye tracking metrics of the user, the deformable mirrors deform to optimize image acquisition from those areas. This includes a location where a user is looking. Thus, imagery collected can be optimized based on where a user is looking. As the user looks at new objects via saccadic movements of his/her eyes, the stereoscopic deformable mirror system can deform to adapt to the new areas where the user is looking. Similarly, as the user tracks an object as it moves, with smooth tracking movements of his/her eyes, the stereoscopic deformable mirror system can deform to continue to optimize imagery of the tracked object.
  • Figure 6A illustrates a movable lens for the smart phone with stereoscopic camera capability. This figure is the first of three different potential configurations of the smart phone with stereoscopic camera capability.
  • Figure 6A depicts a movable lens which could be used to change the look angle.
  • Light 601 is shown as a series of parallel wavy lines entering the lens 602.
  • the lens is the progressive type lens described in Figure 3C.
  • the position of the lens at Time 1 is such that the light enters the top portion of the lens, which provides imagery looking straight forward, and no convergence point for the left and right viewing angles is used.
  • at Time 2, the position of the lens has shifted upward and the light 601 now enters the bottom portion of the lens 604.
  • the bottom right portion of the left lens causes the look angle to shift down and to the right.
  • the bottom left portion of the right lens causes the look angle to shift down and to the left.
  • the viewing angles of the left and right lenses intersect (i.e., converge) and, collectively, provide a stereoscopic view.
  • Figure 6B illustrates the composite sensor array concept.
  • the light has already passed through the lens and is converging on one of the sensor arrays of the composite sensor arrays described for Figure 4.
  • the light enters the center array 608 and provides looking straight ahead imagery.
  • the array has rotated, and the light impinges, in this case, on sensor array 609.
  • the viewing angles from the left and right cameras intersect and provide stereoscopic imagery.
  • Figure 6C illustrates a switching out of cameras from Day 1 to Day 2. Note this could be analogous to switching lenses on today’s digital cameras.
  • 611 shows the camera box; 612 the aperture, 613 the shutter; 614 the lens; 615 the sensor array and 616 the communications port.
  • the user selects a different camera; takes out the camera used on Day 1 and inserts a totally different camera 617 of the same size and shape as the camera used on Day 1.
  • the aperture 618 is larger and the lens 621 has a curvature.
  • 620 illustrates the shutter. 622 the sensor array and 623 the communications port.
  • the cameras can be stored on the smart phone and swapped out through mechanical processes on the smart phone. In other embodiments, swapping out can be through manual processes, as is done with today’s SLR cameras.
  • FIG. 7A illustrates the Stereoscopic Head Display Unit (SHDU). This figure illustrates key aspects of the Stereoscopic Head Display Unit.
  • Figure 7A shows example placements of key components of the SHDU.
  • 701 illustrates the overall headset.
  • 702 illustrates the left eye display.
  • 702A illustrates the left eye tracking device.
  • 703 illustrates the right eye display.
  • 703A illustrates the right eye tracking device.
  • 704 illustrates the processor.
  • 705 illustrates the left ear speaker.
  • 706 illustrates the power supply.
  • 707 illustrates the antenna.
  • 708 illustrates the right ear speaker.
  • 709 illustrates the inclinometer (or inertial measurement unit).
  • 710 illustrates the microphone.
  • 711 illustrates the communications port.
  • 712 illustrates a scene sensing device.
  • Figure 7B shows a side view of a transformable SHDU display unit with an eyepiece cover with augmented reality mode and virtual reality mode.
  • 713A illustrates the side of the frame of the SHDU 701 at Time Point #1.
  • 714A illustrates an eye piece cover.
  • the eyepiece cover is connected to the SHDU 701 via a hinge.
  • This is called a transformable SHDU because it can transform from a virtual reality type display (where the real world is blocked out) to an augmented reality type display (where the user can see both the virtual world and the real world).
  • the transformable SHDU is in augmented reality mode.
  • the eye piece cover is in the elevated position and does not block the wearer of the SHDU from seeing the external area around him/ her.
  • 713B illustrates the side of the frame of the SHDU 701 at Time Point #2.
  • 714B illustrates an eye piece cover, which is now in position over the front of the SHDU, so it is in virtual reality mode at Time Point #2.
  • the eye piece cover has rotated down and now covers the eye piece. The wearer of the SHDU would be blocked from seeing the external area around him/ her. If the wearer is concurrently viewing stereoscopic imagery from smart phone with stereoscopic cameras, this would constitute virtual reality.
  • Figure 7C shows a side view of a transformable SHDU display unit with an electronic eye piece with augmented reality mode and virtual reality mode.
  • 713C illustrates the side of the frame of the SHDU 701 at Time Point #1.
  • 714C illustrates an electronic eye piece affixed to the SHDU 701 at Time Point #1.
  • the setting for the electronic eye piece is transparent and light 715C is able to pass through unfiltered to the SHDU display units.
  • this is augmented reality mode because the user can see both the real world and the virtual world.
  • 713D illustrates the side of the frame of the SHDU 701 at Time Point #2.
  • 714D illustrates an electronic eye piece affixed to the SHDU 701 at Time Point #2.
  • the setting for the electronic eye piece is opaque and light 715D is not able to pass through to the SHDU display units.
  • this is virtual reality mode because the user can only see the virtual world.
  • the opacity ranges in varying degrees of opacity.
  • the electronic eye piece can provide a range of realities, from mixed reality to virtual reality.
  • Figure 8A illustrates wired connectivity means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • Figure 8A is the first of three principal connectivity means between the stereoscopic smart phone (SSP) and stereoscopic head display unit (SHDU).
  • Figure 8A shows the front side of the SSP 800A connected to the SHDU 801A via a wire 802.
  • the stereoscopic imagery would be available on both the smart phone display 800A and the SHDU 801A.
  • the SSP could be worn on a first user’s head and imagery obtained from the stereoscopic cameras on the back of the SSP as in Figure 1A.
  • the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time.
  • stereoscopic imagery obtained from the SHDU could be sent to the SSP.
  • Figure 8B illustrates wireless connectivity via BlueTooth means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • the stereoscopic imagery would be transmitted from the SSP 800B via a BlueTooth connection 803 to the SHDU 801B; it would be received through the SHDU’s antenna and subsequently routed to the processor and thence to the SHDU’s left and right displays.
  • the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU 801B worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time.
  • stereoscopic imagery obtained from the SHDU could be sent to the SSP for display, as discussed in Figure 8A.
  • Figure 8C illustrates wireless connectivity via the Internet means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
  • the stereoscopic imagery would be transmitted from the SSP 800C via an Internet connection 804 to the SHDU 801C; it would be received through the SHDU antenna and subsequently routed to the processor and thence to the SHDU left and right displays.
  • the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU 801C worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time.
  • stereoscopic imagery obtained from the SHDU could be sent to the SSP for display, as discussed in Figure 8A.
  • FIG. 9A illustrates system operation using the stereoscopic smart phone (SSP).
  • 901 illustrates the SSP.
  • 902A illustrates a digital icon symbolizing an app, which would be installed and appear on the general display containing multiple apps 903 (including settings, clock, messages).
  • the user could touch his/ her finger 904A to the icon which would enable the SSP to receive commands for setting for the mode of operation of the stereoscopic cameras.
  • These commands could be issued through, but not limited to: default settings; voice commands by the user into the smart phone via a microphone; the antenna 905; via wire 906 into the data port 907; or an electronic message from the SHDU received at the smart phone.
  • a pull-down menu could appear on the smart phone display once the starting process has been initiated or by pull down menu on the SHDU.
  • Figure 9B illustrates near real-time stereo mode at time N.
  • At Time N, the user is enjoying a day at the beach and the stereoscopic cameras are pointed straight ahead at a small ship passing by.
  • the smart phone display shows a split screen of left viewing perspective imagery 908 taken by the left camera and right viewing perspective imagery 909 taken by the right camera.
  • FIG. 9C illustrates system operation using the stereoscopic smart phone (SSP) with convergence mode.
  • 901 illustrates the SSP.
  • 902B illustrates a digital icon symbolizing the convergence option, which would be installed within the app. The user could touch his/ her finger 904B to the convergence icon. The cameras will adjust according to the convergence point.
  • a command “Converge Near” is issued and the mode of operation changes to convergence at short range
  • eye tracking can be used to determine a location where in the user’s environment a user is looking. Then, the stereoscopic cameras will adjust the convergence and zoom settings in order to optimize viewing of the location where the user is looking.
  • Figure 9D illustrates convergence mode at time N+1.
  • At Time N+1, the user wearing the SHDU decides to read a book using the SSP under convergence mode.
  • the book is also shown on the smart phone display.
  • Split screen is illustrated 908B and 909B for left and right eyes respectively.
  • the stereo distance is adjusted so that it is different in Figure 9D as compared to Figure 9B. This could be done by using either the stereoscopic cameras on the front of the SSP or the stereoscopic cameras on the back of the SSP.
  • a novel stereoscopic camera setup with adjustable stereo distances is disclosed.
  • Figure 10A illustrates of before application of automatic object recognition as displayed on the stereoscopic head display unit.
  • Figure 10A depicts the cluttered forest scene with trees, bush and grass, and some objects of possible interest which are difficult to distinguish among all the forest vegetation.
  • the scene as it appears at Time 1 is shown on the SHDU 1001 using a left eye display 1002 and right eye display 1003.
  • the user issues the command “Run AOR” and, in a very short time interval, Figure 10B appears on the SHDU displays.
  • Figure 10B illustrates after application of automatic object recognition as displayed on the stereoscopic head display unit.
  • the smart phone is worn like a body camera and in the scanning mode (at least a 60 degree arc covering the path and 30 degrees to either side) alternating between looking straight ahead and looking down at close range.
  • the SHDU is in the mixed reality mode and the AOR and AI are operating together.
  • there was an item of interest (e.g., a snake, which is dangerous)
  • the AOR would detect and classify the snake and rapidly pass the information to the AI.
  • FIG. 10B thus illustrates wherein the AOR has correctly classified the man and the deer in the scene.
  • 1004A illustrates a man visible through the left see-through portion of the SHDU.
  • an image of a man from a left stereoscopic camera can be displayed on the left eye display of the SHDU. 1004B illustrates a man visible through the right see-through portion of the SHDU.
  • an image from a right stereoscopic camera of a man can be displayed on the right eye display of the SHDU.
  • 1005 A illustrates a deer visible through the see-through portion of the SHDU.
  • an image from a left stereoscopic camera of a deer can be displayed on the left eye display of the SHDU.
  • 1005B illustrates a deer visible through the see-through portion of the SHDU.
  • an image of a deer from a right stereoscopic camera can be displayed on the right eye display of the SHDU. Note that in some embodiments, some items within the scene have been filtered (subtracted from the scene). This improves understanding of the items of interest. Note that in some embodiments, a novel augmented reality feature is to label an item within the field of view; for example, the man and the deer are labeled for easy user recognition.
  • Figure 10C invokes another technology unique to the stereo camera. Specifically, since the cameras are of higher resolution than the SHDU displays, the pixels on the display are down sampled in order to get the full scene in the field of view (FOV) onto the SHDU displays. At the user command “Focus deer”, the stereoscopic cameras change the FOV to the narrow field of view (NFOV). For the NFOV, the down sampling is discontinued and, in a very short time interval, the left image 1006A of the deer from the left stereoscopic camera is displayed in full resolution on the left eye display and the right image 1006B of the deer from the right stereoscopic camera is displayed in full resolution on the right eye display. These images are displayed at Time 1++.
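  • One way to picture the wide-FOV down-sampling versus NFOV full-resolution behavior described above is the following Python/NumPy sketch, offered as an illustrative aid only. The per-eye display resolution, camera frame size and object center are assumed placeholder values, and nearest-neighbor striding stands in for whatever resampling the actual camera pipeline uses.

```python
import numpy as np

DISPLAY_H, DISPLAY_W = 1080, 1200   # assumed per-eye SHDU display resolution

def downsample_full_scene(frame: np.ndarray) -> np.ndarray:
    """Wide-FOV mode: stride-sample the high-resolution camera frame so the
    whole scene fits on the per-eye display (nearest-neighbor for brevity)."""
    step_y = max(1, frame.shape[0] // DISPLAY_H)
    step_x = max(1, frame.shape[1] // DISPLAY_W)
    return frame[::step_y, ::step_x][:DISPLAY_H, :DISPLAY_W]

def narrow_fov_crop(frame: np.ndarray, center_yx) -> np.ndarray:
    """NFOV mode ('Focus deer'): crop a display-sized window at native camera
    resolution around the object center, so no down-sampling is needed."""
    cy, cx = center_yx
    y0 = int(np.clip(cy - DISPLAY_H // 2, 0, frame.shape[0] - DISPLAY_H))
    x0 = int(np.clip(cx - DISPLAY_W // 2, 0, frame.shape[1] - DISPLAY_W))
    return frame[y0:y0 + DISPLAY_H, x0:x0 + DISPLAY_W]

# Hypothetical 4000 x 6000 camera frame; the object center would come from the AOR step.
left_frame = np.zeros((4000, 6000, 3), dtype=np.uint8)
wide = downsample_full_scene(left_frame)           # whole scene, reduced detail
nfov = narrow_fov_crop(left_frame, (2100, 3300))   # full-resolution window on the deer
```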
  • the scene depicted in Figure 11A is that of a user 1101 riding in a subway along with other passengers labeled 1104.
  • the user 1101 is wearing a SHDU 1102 and holding a stereoscopic smart phone wherein the phone is linked into the internet.
  • the user spots an article of interest and downloads the article.
  • the subway environment is such that there is considerable vibration, thereby making reading difficult.
  • the user decides to invoke stereoscopic image stabilization.
  • the user could invoke the SHDU setting for mixed reality (MR), wherein the user could read the article and simultaneously watch the ongoing activities in the subway car.
  • the user could invoke the option on the SHDU for virtual reality (VR) and obscure the external scene and focus solely on the article of interest.
  • the user could easily change from the VR mode to the MR mode if there were announcements in the subway communication system.
  • Figure 11B shows selection of points within the image to use for image stabilization.
  • the SHDU 1105 displays are shown wherein the stereoscopic image stabilization process for the left eye display has selected three words (1106, 1107 and 1108) within the displayed image (text in this example) as reference points to adjust each left eye image frame display to the sequential next left eye image frame display.
  • 1109, 1110, and 1111 are reference points for the right eye display and are used to adjust each right eye image frame display to the sequential next right eye image frame display. This adjustment process from frame to frame would continue until the user proceeded to other parts of the article, at which time the stereoscopic image stabilization process would select new reference points.
  • Figure 11C illustrates selection of points within the scene to use for image stabilization.
  • the user, 1112 is wearing a SHDU 1113 and has a smart stereoscopic camera 1114 in a body wear position to operate in the continuous mode operation of streaming stereoscopic imagery from the stereoscopic cameras.
  • Within the area within which the user is walking, there are structures including a first building 1115, a second building 1116 and a fountain 1117, each of which has distinctive features which could be used as reference points for stereoscopic image stabilization. Note that the user may choose to use the microphone on the SHDU to record observations during the walk through the area of interest.
  • Figure 11D illustrates stereoscopic image stabilization on the SHDU.
  • Figure 11D shows the SHDU 1118 wherein the stereoscopic image stabilization process for the left eye display has selected three key points within the area (1119, 1120 and 1121) as reference points to adjust each left eye image frame display to the sequential next left eye image frame display.
  • 1122, 1123, and 1124 are reference points for the right eye display and are used to adjust each right eye image frame display to the sequential next right eye image frame display. This adjustment process from frame to frame would continue until the user proceeded to other parts of the area of interest, at which time the stereoscopic image stabilization process would select new reference points.
  • In some embodiments, for the left image displayed in the SHDU 1118, a first set of points is used for image stabilization and, for the right image displayed in the SHDU 1118, the same first set of points is used for image stabilization.
  • In some embodiments, for the left image displayed in the SHDU, a first set of points is used for image stabilization and, for the right image displayed in the SHDU, a second set of points is used for image stabilization wherein the first set of points is different from the second set of points.
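  • As an illustrative aid only, one plausible use of the per-eye reference points is to estimate a translational frame-to-frame offset (the median displacement of the tracked points) and shift the displayed window to cancel it. The Python sketch below assumes the point tracking itself is provided by an external step; the coordinates are hypothetical.

```python
from statistics import median

def stabilization_offset(prev_points, curr_points):
    """Estimate the frame-to-frame image shift (dy, dx) as the median
    displacement of the tracked reference points for one eye's imagery."""
    dys = [c[0] - p[0] for p, c in zip(prev_points, curr_points)]
    dxs = [c[1] - p[1] for p, c in zip(prev_points, curr_points)]
    return median(dys), median(dxs)

def stabilized_window_origin(origin_yx, offset_yx):
    """Shift the displayed window opposite to the measured jitter so the
    reference points stay fixed on the SHDU display."""
    return origin_yx[0] - offset_yx[0], origin_yx[1] - offset_yx[1]

# Three left-eye reference points (e.g., 1119, 1120, 1121) located in two
# consecutive frames by an external tracking step (assumed available).
prev_pts = [(400, 610), (420, 900), (760, 750)]
curr_pts = [(405, 604), (425, 894), (764, 744)]
offset = stabilization_offset(prev_pts, curr_pts)    # (5, -6)
print(stabilized_window_origin((300, 500), offset))  # (295, 506)
```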
  • FIG. 12A illustrates determining a stabilization point and field of view (FOV) for the left eye imagery and right eye imagery.
  • 1200A illustrates the field of view (FOV) from the left eye imagery.
  • 1200B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user.
  • 1200B is a subset of 1200A, which is determined based on point 1200.
  • 1200A is based on using the point 1200 as the center of 1200A.
  • 1201A illustrates the field of view (FOV) from the left eye imagery.
  • 1201B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user.
  • 1201B is a subset of 1201A, which is determined based on point 1201.
  • 1201A is based on using the point 1201 as the center of 1201A.
  • 1202A illustrates the field of view (FOV) from the left eye imagery.
  • 1202B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user.
  • 1202B is a subset of 1202A, which is determined based on point 1202. In the preferred embodiment, 1202A is based on using the point 1202 as the center of 1202A.
  • 1203A illustrates the field of view (FOV) from the left eye imagery.
  • 1203B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user.
  • 1203B is a subset of 1203A, which is determined based on point 1203.
  • 1203A is based on using the point 1203 as the center of 1203A.
  • a characteristic feature which can be easily distinguished in the image is used as a stabilizing point for both the left eye imagery and the right eye imagery.
  • the stabilizing point for the left eye imagery is different from the stabilizing point for the right eye imagery.
  • image stabilization is used for close range imagery, but not for longer range imagery. In some embodiments, a delay of less than 5 seconds is performed.
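  • The relationship between a captured field of view (e.g., 1200A) and the displayed portion (e.g., 1200B) centered on a stabilization point can be sketched as a simple crop-window computation. The Python sketch below, offered as an illustrative aid only, also shows the option of using the same stabilizing point for both eyes or a separate point per eye; the sizes and coordinates are placeholders.

```python
def display_window(full_fov_wh, window_wh, stab_point_xy):
    """Return (x0, y0, x1, y1) of the displayed portion (e.g., 1200B) inside the
    captured field of view (e.g., 1200A), centered on the stabilization point."""
    fw, fh = full_fov_wh
    ww, wh = window_wh
    x0 = min(max(stab_point_xy[0] - ww // 2, 0), fw - ww)
    y0 = min(max(stab_point_xy[1] - wh // 2, 0), fh - wh)
    return x0, y0, x0 + ww, y0 + wh

def stereo_windows(full_fov_wh, window_wh, left_point, right_point=None):
    """Per-eye display windows. If right_point is None, the same stabilizing
    feature is used for both eyes; otherwise each eye uses its own point."""
    left = display_window(full_fov_wh, window_wh, left_point)
    right = display_window(full_fov_wh, window_wh, right_point or left_point)
    return left, right

# 6000 x 4000 captured FOV, 1200 x 1080 displayed window, shared stabilizing point.
print(stereo_windows((6000, 4000), (1200, 1080), (3100, 1900)))
```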
  • Figure 12B illustrates displaying stabilized stereoscopic imagery on a SHDU.
  • 1204 illustrates the stereoscopic head display unit (SHDU).
  • Figure 13A illustrates the SSP with its stereoscopic cameras in a first position, which is wide.
  • 1301A illustrates a first camera of the SSP’s stereoscopic camera system, which is located at a first position on the SSP.
  • 1302A illustrates a second camera of the SSP’s stereoscopic camera system, which is located at a second position on the SSP. Note that the first camera 1301A is separated from the second camera 1302A by a first stereo distance.
  • Figure 13B illustrates the SSP with its stereoscopic cameras in a second position, which is narrow.
  • 1301B illustrates the first camera of the SSP’s stereoscopic camera system, which is located at a third position on the SSP.
  • 1302B illustrates the second camera of the SSP’s stereoscopic camera system, which is located at a fourth position on the SSP.
  • the first camera 1301B is now separated from the second camera 1302B by a second stereo distance, which is different from the first stereo distance. Note that in this example, the second stereo distance is smaller than the first stereo distance.
  • Figure 13C illustrates the SSP with its stereoscopic cameras in a third position, which is also narrow, but shifted in position as compared to Figure 13B.
  • 1301C illustrates the first camera of the SSP’s stereoscopic camera system, which is located at a fifth position on the SSP.
  • 1302C illustrates the second camera of the SSP’s stereoscopic camera system, which is located at a sixth position on the SSP. Note that the first camera 1301C is separated from the second camera 1302C by the second stereo distance, but the position of the cameras is shifted.
  • This novel design improves tracking of small objects that move, such as a roily polly.
  • the cameras are shown to move along a line. This is called a camera bar design.
  • the cameras could move up, down, forward, back, left or right.
  • the cameras’ orientations can also be adjusted in roll, pitch and yaw.
  • the movements of the cameras can be coordinated with convergence as shown in Figure 2C.
  • a novel aspect of this system is that the SSP’s stereoscopic cameras can be reconfigured into different positions and orientations on the SSP. This reconfigurable design would allow depth at various distances and could achieve satisfactory depth at ranges of approximately 2 inches.
  • a first set of lenses for the stereoscopic camera system could be used for a first stereo distance and a second set of lenses for the stereoscopic camera system could be used for a second stereo distance wherein the first set of lenses are nearer focus lenses than the second set of lenses and the first stereo distance is smaller than the second stereo distance.
  • Figure 14 illustrates optimizing stereoscopic imaging.
  • 1400 illustrates determining an object of interest.
  • the preferred embodiment is to use automatic object recognition (AOR).
  • Some embodiments comprise using eye tracking, as taught in US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS, filed on 8/19/2020.
  • Some embodiments comprise wherein a user selects an object of interest via a graphical user interface or voice commands.
  • 1401 illustrates determining the distance and angle from the stereoscopic cameras to the object of interest.
  • the preferred embodiment is to use a laser range finder, as taught by US Patent 11,006,100, SMART GLASSES SYSTEM. Other distance measurement technologies are also possible.
  • the colors and brightness of the images are also determined.
  • 1402 illustrates reconfiguring the stereoscopic cameras based on the distance from the stereoscopic camera, wherein reconfiguring the stereoscopic cameras comprises adjusting one or more of the following: changing the stereo separation of the left stereoscopic camera from the right stereoscopic camera; moving the left stereoscopic camera and the right stereoscopic camera; changing the convergence of the left stereoscopic camera and the right stereoscopic camera; changing the zoom setting of the left stereoscopic camera and the right stereoscopic camera; and changing the ISO setting of the left stereoscopic camera and the right stereoscopic camera.
  • a table for optimized stereoscopic camera settings is generated based on distance. This table can be referenced as a look up table. Once the distance and angle are determined, the camera settings can be looked up and automatically implemented. For X distance, stereo distance would be looked up and found to be Y. And the stereoscopic cameras would be set at a stereo distance of Y.
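  • The look-up-table behavior described above could be sketched as follows, purely as an illustrative aid. The distance bands and the setting values in this Python sketch are placeholders, not optimized values from the disclosure.

```python
from bisect import bisect_left

# Hypothetical distance-banded settings: (max_distance_m, stereo_separation_m,
# convergence_toe_in_deg, zoom_x, iso). All numbers are placeholders only.
SETTINGS_TABLE = [
    (0.5,  0.04, 2.3, 3.0, 400),
    (2.0,  0.08, 1.1, 2.0, 200),
    (10.0, 0.12, 0.3, 1.5, 100),
    (1e9,  0.14, 0.0, 1.0, 100),   # beyond roughly 10 m: effectively parallel cameras
]

def lookup_settings(object_distance_m: float):
    """Return the pre-computed camera configuration for a measured distance
    (e.g., from a laser range finder) by searching the distance bands."""
    bands = [row[0] for row in SETTINGS_TABLE]
    idx = bisect_left(bands, object_distance_m)
    return SETTINGS_TABLE[min(idx, len(SETTINGS_TABLE) - 1)]

print(lookup_settings(1.2))   # -> (2.0, 0.08, 1.1, 2.0, 200)
```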
  • Figure 15 illustrates adjusting stereoscopic camera componentry based on eye tracking parameters of a user.
  • 1500 illustrates performing eye tracking via a headset, per US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS (which is incorporated by reference in its entirety), to determine which object a user is looking at.
  • 1501 illustrates adjusting camera componentry to optimize viewing of the location determined by eye tracking, which includes: adjusting the shutter speed; adjusting the focal length; adjusting the ISO; adjusting the field of view; adjusting the detector position/orientation; adjusting the camera position/orientation; adjusting the convergence angle; and adjusting a deformable mirror. (An illustrative sketch of one such adjustment loop appears below.)
  • eye tracking is used to adjust the camera componentry to optimize imaging using a single camera system.
  • eye tracking is used to adjust the camera componentry to optimize imaging using a stereoscopic camera system.
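  • The adjustment loop referenced above could, under assumed mappings, look like the following Python sketch. The gaze-distance and luminance inputs, the threshold values, and the mapping to convergence, focal length, ISO and shutter speed are all hypothetical illustrations; real values would come from the SHDU eye trackers and the camera pipeline.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraCommand:
    convergence_deg: float   # inward cant of each camera
    focal_length_mm: float
    iso: int
    shutter_s: float

def command_from_gaze(gaze_distance_m: float, scene_luminance: float) -> CameraCommand:
    """Map an eye-tracking fixation (distance to the fixated object) and a crude
    luminance estimate to camera settings. All mappings are illustrative only."""
    convergence = math.degrees(math.atan2(0.06, max(gaze_distance_m, 0.1)))
    focal = 26.0 if gaze_distance_m > 1.0 else 50.0    # zoom in for near work
    iso = 100 if scene_luminance > 0.5 else 800        # brighten dim scenes
    shutter = 1 / 500 if scene_luminance > 0.5 else 1 / 60
    return CameraCommand(round(convergence, 2), focal, iso, shutter)

# One pass of the loop: the gaze distance would come from the SHDU eye trackers
# (702A / 703A) and the luminance from the current frame; both are assumed here.
print(command_from_gaze(gaze_distance_m=0.45, scene_luminance=0.2))
```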
  • Figure 16 illustrates a three-camera stereoscopic smart phone.
  • 1600 illustrates the three-camera stereoscopic smart phone.
  • 1601A illustrates a first camera.
  • 1601B illustrates a second camera.
  • 1601C illustrates a third camera.
  • the first camera 1601A has a first location on said smart phone.
  • the second camera 1601B has a second location on said smart phone, which is different from said first location.
  • the third camera 1601C has a third location on said smart phone, which is different from said first location and the second location.
  • the first camera 1601A location is fixed.
  • the first camera 1601A location is movable.
  • the second camera 1601B location is fixed.
  • the second camera 1601B location is movable.
  • the third camera 1601C location is fixed.
  • the third camera 1601C location is movable.
  • the third camera 1601C has a third location on said smart phone, wherein said third location is different from said first location and said second location.
  • the three-camera stereoscopic smart phone 1600 has an imaging system configured to track an object's location in an area. This can be based on using the first camera 1601A, the second camera 1601B, the third camera 1601C or another device including a LIDAR device or infrared device.
  • the first camera 1601A and the second camera 1601B are separated by a first stereo distance.
  • the second camera 1601B and the third camera 1601C are separated by a second stereo distance.
  • the third camera 1601C and the first camera are separated by a third stereo distance.
  • the first stereo distance is smaller than the second stereo distance.
  • the third stereo distance is larger than the first stereo distance and the second stereo distance.
  • the smart phone is configured to use data from the imaging system, the first camera 1601A, the second camera 1601B and said third camera 1601C to acquire enhanced stereoscopic imagery of the object in the area. If the object is a first distance from the smart phone, using the first camera 1601A and the second camera 1601B to acquire the enhanced stereoscopic imagery.
  • the object is a second distance from the smart phone wherein the second distance is larger than the first distance, using the second camera 1601B and said third camera 1601C to acquire said enhanced stereoscopic imagery. If the object is a third distance from said smart phone wherein the third distance is larger than the second distance, using said first camera 1601A and said third camera 1601C to acquire said enhanced stereoscopic imagery.
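  • The pair-selection logic described above can be sketched as a simple distance-banded choice of cameras, offered here as an illustrative aid only. In the Python sketch below the 1 m and 5 m thresholds are assumptions; only the ordering (nearer objects use the narrower pair, farther objects the wider pair) follows the paragraphs above.

```python
def select_stereo_pair(object_distance_m: float):
    """Choose which two of the three cameras (1601A, 1601B, 1601C) form the stereo
    pair, using wider baselines for more distant objects. The 1 m and 5 m
    thresholds are illustrative assumptions, not values from the disclosure."""
    if object_distance_m < 1.0:       # first (nearest) distance: smallest baseline
        return ("1601A", "1601B")
    elif object_distance_m < 5.0:     # second distance: medium baseline
        return ("1601B", "1601C")
    else:                             # third (farthest) distance: largest baseline
        return ("1601A", "1601C")

for d in (0.4, 2.0, 12.0):
    print(d, select_stereo_pair(d))
```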
  • the device(s) or computer systems that integrate with the processor(s) may include, for example, a personal computer(s), workstation(s) (e.g., Sun, HP), personal digital assistant(s) (PDA(s)), handheld device(s) such as cellular telephone(s), laptop(s), handheld computer(s), or another device(s) capable of being integrated with a processor(s) that may operate as provided herein. Accordingly, the devices provided herein are not exhaustive and are provided for illustration and not limitation.
  • references to “a microprocessor” and “a processor”, or “the microprocessor” and “the processor”, may be understood to include one or more microprocessors that may communicate in a stand-alone and/or a distributed environment(s), and may thus be configured to communicate via wired or wireless communications with other processors, where such one or more processor may be configured to operate on one or more processor-controlled devices that may be similar or different devices.
  • Use of such “microprocessor” or “processor” terminology may thus also be understood to include a central processing unit, an arithmetic logic unit, an application-specific integrated circuit (IC), and/or a task engine, with such examples provided for illustration and not limitation.
  • references to memory may include one or more processor-readable and accessible memory elements and/or components that may be internal to the processor-controlled device, external to the processor-controlled device, and/or may be accessed via a wired or wireless network using a variety of communications protocols, and unless otherwise specified, may be arranged to include a combination of external and internal memory devices, where such memory may be contiguous and/or partitioned based on the application.
  • references to a database may be understood to include one or more memory associations, where such references may include commercially available database products (e.g., SQL, Informix, Oracle) and also include proprietary databases, and may also include other structures for associating memory such as links, queues, graphs, and trees, with such structures provided for illustration and not limitation.
  • References to a network may include one or more intranets and/or the Internet, as well as a virtual network. References herein to microprocessor instructions or microprocessor-executable instructions, in accordance with the above, may be understood to include programmable hardware.
  • the software included as part of the invention may be embodied in a computer program product that includes a computer useable medium.
  • a computer usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD ROM, or a computer diskette, having computer readable program code segments stored thereon.
  • the computer readable medium can also include a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog signals.

Abstract

This patent provides a novel stereoscopic imaging system. In the preferred embodiment, the improved stereoscopic imaging system would be incorporated onto a smart phone, which is called the stereoscopic smart phone (SSP). The SSP would work in conjunction with one or more stereoscopic head display units (SHDUs). Once this system is operational, it will allow significant improvements to obtaining stereoscopic imagery. One novel aspect of this patent comprises wherein the stereoscopic cameras on the SSP can move in position to alter stereo separation distance and change the convergence to optimally image a scene.

Description

A METHOD AND APPARATUS FOR A STEREOSCOPIC SMART PHONE
CROSS-REFERENCES TO RELATED APPLICATIONS
[001] This patent application is PCT of US 18/120,422 filed on 3/12/2023, which is a continuation-in-part of US 17/829,256 filed on 5/31/2022 (issued as US 11,627,299 on 4/11/2023), which is a continuation-in-part of US 16/997,830 filed on 8/19/2020 (issued as US 11,380,065 on 7/5/22), which claims benefit of US Provisional Application 62/889,169 filed on 08/20/2019. US patent application 17/829,256 filed on 5/31/2022 is also a continuation-in-part of US 16/936,293 filed on 07/22/2020 (issued as US11,442,538 on 9/13/2022). US patent application 17/829,256 filed on 5/31/2022 is also a continuation-in-part of US 17/558,606 filed on 12/22/2021 (issued as US11,445,322 on 9/13/22), which is a continuation-in-part of US 17/225,610 filed on 04/08/2021 (issued as 11,366,319 on 6/21/22). All of these are incorporated by reference in their entirety.
TECHNICAL FIELD
[002] Aspects of this disclosure are generally related to three-dimensional imaging.
BACKGROUND
[003] Many people use smart phones.
SUMMARY
[004] All examples, aspects and features mentioned in this document can be combined in any technically possible way.
[005] This patent provides a novel stereoscopic imaging system. In the preferred embodiment, the improved stereoscopic imaging system would be incorporated onto a smart phone, which is called the stereoscopic smart phone (SSP). This is also referred to as a 3D smart phone. The SSP would work in conjunction with one or more stereoscopic head display units (SHDUs). Once this system is operational, it will allow significant improvements to video imagery. For example, the stereoscopic cameras on the SSP can move in position to alter stereo separation distance and change the convergence to optimally image a scene. These changes can occur in near real time, so as the scene changes, the cameras change accordingly. Consider the following situation with a roily polly bug. A SSP can be set up near the SSP and object tracking of the roily polly can occur. Each camera of the stereoscopic camera system on the SSP can track the roily polly and the stereo distance can change closer and farther away based on the distance of the roily polly. The roily polly can climb onto a rock and the convergence point of the stereo cameras moves upward and as it climbs downward into a hole, the convergence point of the stereo cameras moves downwards. While all of this is happening, the stereoscopic imagery can be passed via a wired or wireless connection to a stereoscopic head display unit (SHDU). A child in near real time can view the roily polly while wearing the SHDU. The digital images of the roily polly can be enlarged so the roily polly appears the size of a black lab puppy. A wide array of other objects and situations can also be imaged using this system, which overall enhance the viewing experience.
[006] The preferred embodiment is a method of stereoscopic imaging comprising: using a left camera and a right camera of a stereoscopic camera system to perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera's location and said right camera's location are separated by a stereoscopic distance, wherein said left camera has a first pointing direction, and wherein said right camera has a first pointing direction; changing said left camera's first pointing direction to a second pointing direction wherein said left camera's second pointing direction is different than said left camera's first pointing direction, and wherein said left camera's second pointing direction points towards a convergence point; changing said right camera's first pointing direction to a second pointing direction wherein said right camera's second pointing direction is different than said right camera's first pointing direction, and wherein said right camera's second pointing direction points towards said convergence point; and using said left camera and said right camera of said stereoscopic camera system to perform subsequent stereoscopic imaging of said area with said left camera's second pointing direction and said right camera's second pointing direction.
[007] Some embodiments comprise wherein said convergence point is positioned such that a distance from said left camera's location to said convergence point is not equal to a distance from said right camera's location to said convergence point.
[008] Some embodiments comprise wherein said convergence point is positioned such that a distance from said left camera's location to said convergence point is equal to a distance from said right camera's location to said convergence point.
[009] Some embodiments comprise wherein said left camera’s first pointing direction points towards a second convergence point and said left camera’s second pointing direction points towards said convergence point; and wherein said second convergence point is different from said convergence point.
[0010] Some embodiments comprise wherein said initial stereoscopic imaging has a first zoom setting; wherein said subsequent stereoscopic imaging has a second zoom setting; and wherein said second zoom setting has greater magnification than said first zoom setting.
[0011] Some embodiments comprise wherein said left camera’s first pointing direction is determined based on said left camera’s orientation; and wherein said right camera’s first pointing direction is determined based on said right camera’s orientation.
[0012] Some embodiments comprise displaying left eye imagery and right eye imagery from said initial stereoscopic imaging of said area on a stereoscopic head display unit (SHDU); and
[0013] displaying left eye imagery and right eye imagery from said subsequent stereoscopic imaging of said area on said SHDU wherein said SHDU comprises a virtual reality display, an augmented reality display or a mixed reality display.
[0014] Some embodiments comprise wherein said stereoscopic system is used on a smart phone; and wherein said SHDU and said smart phone communicate via a wired connection, a wireless connection via BlueTooth or a wireless connection via an Internet. Some embodiments comprise wherein automatic object recognition is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU. Some embodiments comprise wherein artificial intelligence is performed in conjunction with said automatic object recognition to alert a user regarding findings in said area. Some embodiments comprise wherein stereoscopic image stabilization is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU. Some embodiments comprise determining a spatial relationship between said stereoscopic camera system and an object of interest; and reconfiguring said stereoscopic cameras based on said spatial relationship wherein reconfiguring said stereoscopic cameras comprises changing said stereoscopic distance to a subsequent stereoscopic distance wherein said subsequent stereoscopic distance is different than said stereoscopic distance.
[0015] Some embodiments comprise wherein said subsequent stereoscopic imaging of said area is performed using a second stereoscopic distance; and wherein said second stereoscopic distance is smaller than said first stereoscopic distance.
[0016] Some embodiments comprise wherein said stereoscopic camera system is placed on a smart phone, a tablet or a laptop.
[0017] Some embodiments comprise wherein said convergence point is determined based on an object’s location in said area.
[0018] Some embodiments comprise wherein said convergence point is determined based on eye tracking metrics of a user.
[0019] Some embodiments comprise wherein said convergence point is determined based on an artificial intelligence algorithm.
[0020] Some embodiments comprise wherein a sensor system of said stereoscopic camera system comprises a composite sensor array.
[0021] Some embodiments comprise a stereoscopic head display unit (SHDU) comprising: a head display unit with a left eye display and a right eye display wherein said SHDU is configured to: receive initial stereoscopic imagery from a stereoscopic imaging system wherein said initial stereoscopic imagery comprises initial left eye imagery and initial right eye imagery; display said initial left eye imagery on said left eye display; display said initial right eye imagery on said right eye display; receive subsequent stereoscopic imagery from said stereoscopic imaging system wherein said subsequent stereoscopic imagery comprises subsequent left eye imagery and subsequent right eye imagery; display said subsequent left eye imagery on said left eye display; and display said subsequent right eye imagery on said right eye display; wherein said stereoscopic imaging system comprises a left camera and a right camera; and wherein said stereoscopic camera system is configured to: use said left camera and said right camera of said stereoscopic camera system to perform said initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera's location and said right camera's location are separated by a stereoscopic distance, wherein said left camera has a first pointing direction, and wherein said right camera has a first pointing direction; change said left camera's first pointing direction to a second pointing direction wherein said left camera's second pointing direction is different than said left camera's first pointing direction, and wherein said left camera's second pointing direction points towards a convergence point; change said right camera's first pointing direction to a second pointing direction wherein said right camera's second pointing direction is different than said right camera's first pointing direction, and wherein said right camera's second pointing direction points towards said convergence point; and use said left camera and said right camera of said stereoscopic camera system to perform said subsequent stereoscopic imaging of said area with said left camera's second pointing direction and said right camera's second pointing direction.
[0022] Some embodiments comprise a stereoscopic smart phone comprising: a smart phone; and a stereoscopic imaging system operably connected to said smart phone comprising a left camera and a right camera wherein said stereoscopic camera system is configured to: perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera's location and said right camera's location are separated by a stereoscopic distance, wherein said left camera has a first pointing direction, and wherein said right camera has a first pointing direction; change said left camera's first pointing direction to a second pointing direction wherein said left camera's second pointing direction is different than said left camera's first pointing direction, and wherein said left camera's second pointing direction points towards a convergence point; change said right camera's first pointing direction to a second pointing direction wherein said right camera's second pointing direction is different than said right camera's first pointing direction, and wherein said right camera's second pointing direction points towards said convergence point; use said left camera and said right camera of said stereoscopic camera system to perform subsequent stereoscopic imaging of said area with said left camera's second pointing direction and said right camera's second pointing direction.
[0023] In some embodiments, a very high-resolution camera pair(s) could be used in connection with the type pairs described above. In some embodiments, there could be a disparity between the camera resolution and that of the display system wherein the camera resolution was greater (i.e., provided ‘better resolution’) than that of the display system. These very high-resolution camera pair(s) could be used in connection with changes in camera field of view (FOV). In some embodiments, the field of view could change (e.g., decrease in size) with a corresponding change of image resolution (e.g., increase in resolution). As an example, this embodiment could be used with a feedback mechanism wherein a user could, for example, start with a large FOV and then, through an interactive cursor, indicate an area of interest and desired FOV. Then, the center point of the FOV would change to that point and the image area corresponding to that FOV and the resolution corresponding to that FOV would create the image to be displayed.
[0024] The stereoscopic camera system has a variety of components, which include: aperture(s); lens(es); shutter(s); detector(s); mirror(s); and, display(s).
[0025] Aperture diameter would be consistent with the different lenses described below. Lenses could be of a changeable or fixed type, with the type of lens chosen by the user. The current set of lenses within smart devices is one option. Another option is multi-shaped lenses (note: this is analogous to reading glasses with variable portions of the lens based on look angle, e.g., a top portion for looking straight forward and a bottom portion for reading); this would be adapted to allow convergence (i.e., the left lens and right lens in the bottom portion would be canted differently). Differing pointing angles can be based on the particular portion of the lens. Differing zoom can be based on the particular portion of the lens. Another option is fisheye type lenses with high resolution; the idea is that different portions of the digital collection array would be associated with corresponding look angles through the fisheye lens. The portion of the array which is used could be user specified. Alternatively, in some embodiments, automatic selection of the portion of the array could be based on input data from an inclinometer. Some embodiments comprise using a variable/differing radius of curvature. Some embodiments comprise using variable/differing horizontal fields of view (FOV). Shutter timelines, etc. would be in accordance with the technology chosen for the particular type of detector array.
[0026] The standard types of detector arrays would be used for stereo camera pairs. The collection array could include, for example, but not be limited to: charge-coupled devices (CCD) and complementary metal-oxide semiconductor (CMOS) arrays. For camera systems needing a nighttime capability of operation, options would include, but are not limited to: low light level TVs; and infrared detector arrays such as mercury cadmium telluride (MCT), indium gallium arsenide (InGaAs), and quantum well infrared photodetector (QWIP) arrays.
[0027] Composite collection geometries of the collection array would be based on the desired viewing mode of the user. This would include, but would not be limited to: user selection of straight ahead for general viewing of the scene and of specific objects at ranges greater than 20-30 feet (where stereoscopic viewing becomes possible); variable convergence angles based upon the proximity of the object being viewed; and pointing to the left or to the right to provide a panoramic view (or, alternatively, scanning, e.g., from the left, to straight ahead, then to the right). Some embodiments comprise wherein a portion of the collection array would be facing straight ahead. Some embodiments comprise wherein a portion of the collection array would be constructed with different convergence angles. Some embodiments comprise wherein a portion of the collection array would be constructed with different look angles (left/right; far left/far right).
[0028] Left eye and right eye imagery could be merged and displayed on the smart device display. This composite image could then be viewed with polarized glasses. Alternatively, the smart phone could be placed into an HDU with lenses to be converted into a virtual reality unit.
[0029] In some embodiments, the encasing framework of each of the lenses could be rotated along 2 degrees of freedom (i.e., left/right and up/down). Alternatively, the encasing framework of the detector array could be rotated along 2 degrees of freedom (i.e., left/right and up/down). In some embodiments, a mirror(s) (or reflective surface) could be inserted into the stereo camera system. In such a system configuration, the user could rotate the mirror such that the desired viewing angles focus on the area/objects selected by the user.
[0030] In some embodiments, mechanical turning of the collection arrays, the lenses, or the entire camera is performed. This mechanical turning would correspond with the user’s desired viewing area (i.e., straight ahead, or converged at some nearby location). This turning could be done electronically or by a physical mechanical linkage.
[0031] With respect to transmission, the first option would be for the person who collected the left and right eye data/imagery to view the stereo imagery on his/her head stereo display unit (HDU). This could be accomplished, for example, by a wire connection between the stereo phone and the stereo HDU. The user could also choose to send the stereo imagery to other persons. The transmission could be for single stereo pair(s) or streaming stereo video. The data transmitted could be interleaved (i.e., alternating between left eye data/imagery and right eye data/imagery). Alternatively, the data/imagery could be transmitted via multi-channel with separate channels for left and right eye data/imagery. Alternatively, the left and right eye imagery frames could be merged for transmission. Using this technique, the HDU could use polarization or anaglyph techniques to ensure proper stereo display to the user. A further option would be to store the left and right eye data/imagery. This storage could be accomplished by, but not limited to, the following: on the smart device; on a removable device such as a memory stick; or on a portion of the cloud set aside for the user’s storage, and at some later time the stereo imagery could be downloaded to a device (e.g., a computer) and subsequently displayed on an HDU.
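As an illustrative sketch only (the tagging scheme, transport abstraction, and function names are assumptions, not part of the disclosure), interleaving left and right eye frames onto a single stream and splitting them back apart could look like this:

```python
def interleave_stereo(left_frames, right_frames):
    """Yield (eye_tag, frame) tuples alternating left/right so that a single
    channel carries both eyes' imagery in a fixed L, R, L, R, ... order."""
    for left, right in zip(left_frames, right_frames):
        yield ("L", left)
        yield ("R", right)

def deinterleave_stereo(stream):
    """Split an interleaved stream back into separate left and right frame lists."""
    left, right = [], []
    for eye, frame in stream:
        (left if eye == "L" else right).append(frame)
    return left, right

# Placeholder byte buffers standing in for encoded video frames.
l, r = deinterleave_stereo(interleave_stereo([b"L0", b"L1"], [b"R0", b"R1"]))
assert l == [b"L0", b"L1"] and r == [b"R0", b"R1"]
```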
[0032] There are a variety of modes of operations. Example modes of operation would include, but are not limited to, the following: stereo snapshot; scanning; staring; tracking; and record then playback.
[0033] In some embodiments a very high-resolution camera pair(s) could be used in connection with the types of camera pairs described above. In some embodiments, there could be a disparity between the camera resolution and that of the display system wherein the camera resolution was greater (i.e., provided ‘better resolution’) than that of the display system. These very high-resolution camera pair(s) could be used in connection with changes in camera field of view (FOV). In some embodiments, the field of view could change (e.g., decrease in size) with a corresponding change of image resolution (e.g., increase in resolution). As an example, this embodiment could be used with a feedback mechanism wherein a user could, for example, start with a large FOV and then, through an interactive cursor, indicate an area of interest and desired FOV. Then, the center point of the FOV would change to that point, and the image area corresponding to that FOV and the resolution corresponding to that FOV would create the image to be displayed.
[0034] With respect to the user interface, the type of control of the stereo camera pairs would be smart device dependent. For a smart phone device, for example, the principal screen could display a stereo camera icon.
[0035] Next, the Stereoscopic Head Display Unit (SHDU) will be discussed. Types of displays include both immersive displays (this could include, but would not be limited to: a very dark visor that can be brought down on the far side of the display to block viewing of the external scene; or an electronic shutter of varying opacity, external to the display and coincident with the HDU eyepieces, which could be initiated by the person wearing the head display unit) and mixed reality displays with a relative intensity/brightness of the stereoscopic display relative to the external scene. A computer and memory would be integral to the HDU. A power supply would be integral to the HDU. The communications componentry would include, but is not limited to, the following: communications port(s) (e.g., USB, HDMI, composite wire to connect to a power source or smart device); antenna and receiver; and associated circuitry. The audio componentry would include, but is not limited to: speakers, a microphone, or both. A laser range finder (LRF) is integral to the smart device and is used to determine the range from the smart device to the location of the object selected by the user, in order to calculate convergence angles for left and right viewing angles and provide proper stereoscopic images. In addition, a pseudo-GPS system can be integrated as described in US 10,973,485, which is incorporated by reference in its entirety.
[0036] In some embodiments, stereoscopic image processing will be performed on the images produced by the two stereoscopic cameras. One of the image processing techniques is image enhancement. These enhancement techniques include, but are not limited to, the following: noise reduction, deblurring, sharpening and softening the images, filtering, etc. In an example of noise reduction, there would be two separate images, each of which would undergo a separate noise reduction process. (Note that noise is random in nature and, therefore, a different set of random noise would occur in the left camera image than in the right camera image. And, after the consequent reduction, a different set of pixels would remain in the two images.) Given that these images were taken beyond the stereoscopic range, the two images could be merged, resulting in a more comprehensive, noise-free image. In some embodiments, stereoscopic image processing will include segmentation. These segmentation techniques include, but are not limited to, the following: edge detection methods; histogram-based methods; tree/graph-based methods; neural network based segmentation; thresholding; clustering methods; graph partitioning methods; watershed transformation; probabilistic approaches; and Bayesian approaches. Given that these images were taken beyond the stereoscopic range, a different technique for left and right images could be invoked. If the segmentation produced identical results, then there would be higher confidence in the results. If the results were different, however, then a third segmentation method could be invoked, and an adjudication process could resolve the segmentation.
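A minimal sketch of the adjudication idea follows, assuming simple thresholding methods as stand-ins for the two independent segmentation techniques and a pixel-agreement score as the comparison metric; the methods, threshold values, and agreement criterion are illustrative assumptions, not the disclosed algorithms.

```python
import numpy as np

def segment_by_fixed_threshold(img, level=0.5):
    """First stand-in method: threshold at a fixed level."""
    return img > level

def segment_by_mean(img):
    """Second stand-in method: threshold at the image mean."""
    return img > img.mean()

def adjudicate(mask_left, mask_right, third_method, img, agreement=0.95):
    """Accept the segmentation if the two independently produced masks agree
    on enough pixels; otherwise invoke a third method as the tie-breaker."""
    overlap = float(np.mean(mask_left == mask_right))
    if overlap >= agreement:
        return mask_left, "high confidence (methods agree)"
    return third_method(img), "resolved by third method"

# Beyond stereoscopic range the left and right images are nearly identical,
# so one synthetic image stands in for both here.
image = np.random.rand(64, 64)
mask, note = adjudicate(segment_by_fixed_threshold(image),
                        segment_by_mean(image),
                        lambda im: im > np.median(im),
                        image)
print(note)
```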
[0037] In some embodiments of stereoscopic image processing, a set of left and right images would be produced over time. The user could identify an object(s) of interest which could be tracked over time. Stereoscopic image processing consisting of background suppression could be applied to both left and right images, which could enhance stereoscopic viewing of the object(s) of interest. In some embodiments of stereoscopic image processing, false color could be added to the scene and/or object(s) of interest within the scene. An example of stereoscopic image processing would be to use opposing anaglyph colors for left and right eye images. A further example would be to use color figures to provide augmented reality of stereoscopic images. [0038] In some embodiments, stereoscopic image processing would include image compression for data storage and transmission. As an example of compression as it applies to stereoscopic image processing: for portions of an image beyond stereoscopic ranges, apply image compression processing (including, but not limited to, run-length encoding) to that region in only one of the left or right images, but not both; for the region that is within stereoscopic ranges, apply image compression processing to both the left and right images.
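The following is a hedged sketch of the region-dependent compression idea, assuming a boolean mask marking pixels beyond stereoscopic range and a simple run-length encoder as the compression method; the data layout and function names are illustrative assumptions.

```python
import numpy as np

def run_length_encode(values):
    """Very simple run-length encoder: a list of (value, run_length) pairs."""
    encoded, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                encoded.append((prev, count))
            prev, count = v, 1
    if prev is not None:
        encoded.append((prev, count))
    return encoded

def compress_stereo(left, right, far_mask):
    """Encode the far (beyond-stereoscopic-range) region from only the left
    image, and the near region from both left and right images."""
    return {
        "far": run_length_encode(left[far_mask]),
        "near_left": run_length_encode(left[~far_mask]),
        "near_right": run_length_encode(right[~far_mask]),
    }

# Toy 4x4 images; the top two rows are flagged as beyond stereoscopic range.
left = np.zeros((4, 4), dtype=np.uint8)
right = np.ones((4, 4), dtype=np.uint8)
far_mask = np.zeros((4, 4), dtype=bool)
far_mask[:2, :] = True
print(compress_stereo(left, right, far_mask))
```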
[0039] In some embodiments, stereoscopic image processing would include morphological processing, which would include, but not be limited to: dilation, erosion, boundary extraction, and region filling. An example of morphological processing as it applies to stereoscopic image processing would be to perform erosion for the left image but not for the right. Left and right images could be alternated to permit the user to evaluate whether this type of processing was desirable.
[0040] In some embodiments of stereoscopic image processing, stereoscopic object recognition would be invoked. Techniques for stereoscopic object recognition include, but are not limited to: convolutional neural networks (CNNs); and support vector machines (SVMs). A number of supporting features include, but are not limited to: feature extraction; pattern recognition; edge detection; and corner detection. Examples of automated object recognition (AOR) processing as it applies to stereoscopic image processing would be to: recognize brands of cars; recognize types of animals; perform optical character reading; and perform optical character reading coupled with a translation dictionary. For example, CNN AOR could be performed on left camera imagery and SVM AOR on right camera imagery. If both agree on the object type, that object type is presented to the user. However, if agreement is not reached by the CNN and SVM, then a third type of recognition methodology, such as feature recognition, would be invoked.
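A minimal sketch of the agreement/adjudication logic described above is given below, with placeholder classifiers standing in for the CNN, SVM, and third recognition method; the function signatures and labels are assumptions made for illustration.

```python
def recognize_stereo_object(left_patch, right_patch, cnn, svm, third_method):
    """Run one recognizer per eye and accept the label only if both agree;
    otherwise fall back to a third recognition method."""
    label_left = cnn(left_patch)
    label_right = svm(right_patch)
    if label_left == label_right:
        return label_left
    return third_method(left_patch, right_patch)

# Placeholder classifiers standing in for trained CNN/SVM/feature recognizers.
label = recognize_stereo_object("left_image_patch", "right_image_patch",
                                cnn=lambda patch: "sedan",
                                svm=lambda patch: "sedan",
                                third_method=lambda a, b: "unknown")
print(label)  # "sedan", because both recognizers agreed
```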
[0041] In some embodiments, stereoscopic image processing would include image stabilization. If a single stereoscopic image was desired by the user and, upon review of the image, the image was blurred due to vibration or movement(s) of the stereoscopic cameras during the imaging interval, then an option would be to decrease the shutter interval and repeat the stereoscopic image collection. If, however, a stereoscopic video collection was to be obtained by the user, then stereoscopic image processing would include, but not be limited to: selection of three or more reference points within the stereoscopic images (i.e., visible in both left and right images) and, from frame to sequential frame, adjustment of the sequential frame(s) to align the reference points; additionally, a border surrounding the displayed stereoscopic images could be invoked to reduce the overall area of the stereoscopic images.
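As an illustrative sketch of the reference-point alignment idea only (a translation-only model with made-up reference point coordinates; the disclosed approach is not limited to this), each camera's frames could be aligned to its own initial frame as follows, with left and right streams processed independently:

```python
import numpy as np

def stabilize_frame(frame, ref_points_initial, ref_points_current):
    """Shift a frame so its tracked reference points line up with their
    positions in the initial frame (translation-only model, for brevity)."""
    shift = np.mean(np.asarray(ref_points_initial, dtype=float) -
                    np.asarray(ref_points_current, dtype=float), axis=0)
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return np.roll(frame, (dy, dx), axis=(0, 1))

# Toy frame whose three reference points drifted down-right by (2, 3) pixels;
# the left and right video streams would each be stabilized independently,
# each against its own initial-frame reference points.
frame = np.zeros((100, 100))
refs_initial = [(10, 10), (50, 40), (80, 70)]
refs_current = [(12, 13), (52, 43), (82, 73)]
stabilized = stabilize_frame(frame, refs_initial, refs_current)
print(stabilized.shape)  # (100, 100)
```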
[0042] In some embodiments, stereoscopic viewing of the virtual 3D mannequin is performed on an extended reality display unit, which is described in US Patent 8,384,771, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety. This patent teaches image processing techniques including volume generation, filtering, rotation, and zooming.
[0043] In some embodiments, stereoscopic viewing of the virtual 3D mannequin is performed with convergence, which is described in US Patent 9,349,183, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety. This patent teaches shifting of convergence. This feature can be used in combination with filtering.
[0044] In some embodiments, stereoscopic viewing can be performed using a display unit, which incorporates polarized lenses, which is described in US Patent 9,473,766, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety.
[0045] In some embodiments, advancements to display units can be incorporated for viewing the virtual 3D mannequin, which are taught in US Patent Application 16/828,352, SMART GLASSES SYSTEM and US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS, which are both incorporated by reference in their entirety.
[0046] In some embodiments, advancements in display units are taught in US Patent Application 17/120,109, ENHANCED VOLUME VIEWING, which is incorporated by reference in its entirety. Included herein is a head display unit, which is improved by incorporating georegistration.
[0047] Some embodiments comprise utilizing an improved field of view on an extended reality head display unit, which is taught in US Patent Application 16/893,291, A METHOD AND APPARATUS FOR A HEAD DISPLAY UNIT WITH A MOVABLE HIGH-RESOLUTION FIELD OF VIEW, which is incorporated by reference in its entirety.
[0048] In some embodiments, image processing steps can be performed using a 3D volume cursor, which is taught in US Patent 9,980,691, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, and US Patent 10,795,457, INTERACTIVE 3D CURSOR, both of which are incorporated by reference in their entirety.
[0049] In some embodiments, a precision sub-volume can be utilized in conjunction with the virtual 3D mannequin, which is taught in US Patent Application 16/927,886, A METHOD AND APPARATUS FOR GENERATING A PRECISION SUB-VOLUME WITHIN THREE- DIMENSIONAL IMAGE DATASETS, which is incorporated by reference in its entirety.
[0050] In some embodiments, viewing of a structure at two different time points can be performed using a ghost imaging technique, which is taught in US Patent 10,864,043, INTERACTIVE PLACEMENT OF A 3D DIGITAL REPRESENTATION OF A SURGICAL DEVICE OR ANATOMIC FEATURE INTO A 3D RADIOLOGIC IMAGE FOR PREOPERATIVE PLANNING, which is incorporated by reference in its entirety.
[0051] Some embodiments comprise selecting a specific surgical device for pre-operative planning, which is taught in US Patent Application 17/093,322, A METHOD OF SELECTING A SPECIFIC SURGICAL DEVICE FOR PREOPERATIVE PLANNING, which is incorporated by reference in its entirety.
[0052] Some embodiments comprise generating the virtual 3D mannequin using techniques described in US Patent Application 16/867,102, METHOD AND APPARATUS OF CREATING A COMPUTER-GENERATED PATIENT SPECIFIC IMAGE, which is incorporated by reference in its entirety. Key techniques include using patient factors (e.g., history, physical examination findings, etc.) to generate a volume.
[0053] Some embodiments comprise advanced image processing techniques available to the user of the virtual 3D mannequin, which are taught in US Patent 10,586,400, PROCESSING 3D MEDICAL IMAGES TO ENHANCE VISUALIZATION, and US Patent 10,657,731, PROCESSING 3D MEDICAL IMAGES TO ENHANCE VISUALIZATION, both of which are incorporated by reference in their entirety.
[0054] Some embodiments comprise performing voxel manipulation techniques so that portions of the virtual 3D mannequin can be deformed and move in relation to other portions of the virtual 3D mannequin, which is taught in US Patent application 16/195,251, INTERACTIVE VOXEL MANIPULATION IN VOLUMETRIC MEDICAL IMAGING FOR VIRTUAL MOTION, DEFORMABLE TISSUE, AND VIRTUAL RADIOLOGICAL DISSECTION, which is incorporated by reference in its entirety. [0055] Some embodiments comprise generating at least some portions of the virtual 3D mannequin through artificial intelligence methods and performing voxel manipulation thereof, which is taught in US patent application 16/736,731, RADIOLOGIST-ASSISTED MACHINE LEARNING WITH INTERACTIVE, VOLUME SUBTENDING 3D CURSOR, which is incorporated by reference in its entirety.
[0056] Some embodiments comprise wherein at least some component of the inserted 3D dataset into the virtual 3D mannequin are derived from cross-sectional imaging data fine tuned with phantoms, which is taught in US Patent application 16/752,691, IMPROVING IMAGE QUALITY BY INCORPORATING DATA UNIT ASSURANCE MARKERS, which is incorporated by reference in its entirety.
[0057] Some embodiments comprise utilizing halo-type segmentation techniques, which are taught in US Patent Application 16/785,606, IMPROVING IMAGE PROCESSING VIA A MODIFIED SEGMENTED STRUCTURE, which is incorporated by reference in its entirety.
[0058] Some embodiments comprise using techniques for advanced analysis of the virtual 3D mannequin taught in US Patent Application 16/939,192, RADIOLOGIST ASSISTED MACHINE LEARNING, which is incorporated by reference in its entirety.
[0059] Some embodiments comprise performing smart localization from a first virtual 3D mannequin to a second virtual 3D mannequin, such as in an anatomy lab, which is performed via techniques taught in US Patent Application 17/100,902, METHOD AND APPARATUS FOR AN IMPROVED LOCALIZER FOR 3D IMAGING, which is incorporated by reference in its entirety.
[0060] Some embodiments comprise performing a first imaging examination with a first level of mechanical compression and a second imaging examination with a second level of mechanical compression and analyzing differences therein, which is taught in US Patent Application 16/594,139, METHOD AND APPARATUS FOR PERFORMING 3D IMAGING EXAMINATIONS OF A STRUCTURE UNDER DIFFERING CONFIGURATIONS AND ANALYZING MORPHOLOGIC CHANGES, which is incorporated by reference in its entirety.
[0061] Some embodiments comprise displaying the virtual 3D mannequin in an optimized image refresh rate, which is taught in US Patent Application 16/842,631, A SMART SCROLLING SYSTEM, which is incorporated by reference in its entirety. [0062] Some embodiments comprise displaying the virtual 3D mannequin using priority volume rendering, which is taught in US Patent 10,776,989, A METHOD AND APPARATUS FOR PRIORITIZED VOLUME RENDERING, which is incorporated by reference in its entirety.
[0063] Some embodiments comprise displaying the virtual 3D mannequin using tandem volume rendering, which is taught in US Patent 17/033,892, A METHOD AND APPARATUS FOR TANDEM VOLUME RENDERING, which is incorporated by reference in its entirety.
[0064] Some embodiments comprise displaying images in an optimized fashion by incorporating eye tracking, which is taught in US Patent Application 16/936,293, IMPROVING VISUALIZATION OF IMAGES VIA AN ENHANCED EYE TRACKING SYSTEM, which is incorporated by reference in its entirety.
[0065] Some embodiments comprise enhancing collaboration for analysis of the virtual 3D mannequin by incorporating teachings from US Patent Application 17/072,350, OPTIMIZED IMAGING CONSULTING PROCESS FOR RARE IMAGING FINDINGS, which is incorporated by reference in its entirety.
[0066] Some embodiments comprise improving multi-user viewing of the virtual 3D mannequin by incorporating teachings from US Patent Application 17/079,479, AN IMPROVED MULTIUSER EXTENDED REALITY VIEWING TECHNIQUE, which is incorporated by reference in its entirety.
[0067] Some embodiments comprise improving analysis of images through use of geo-registered tools, which is taught in US Patent 10,712,837, USING GEO-REGISTERED TOOLS TO MANIPULATE THREE-DIMENSIONAL MEDICAL IMAGES, which is incorporated by reference in its entirety.
[0068] Some embodiments comprise integration of virtual tools with geo-registered tools, which is taught in US Patent Application 16/893,291, A METHOD AND APPARATUS FOR THE INTERACTION OF VIRTUAL TOOLS AND GEO-REGISTERED TOOLS, which is incorporated by reference in its entirety.
[0069] In some embodiments blood flow is illustrated in the virtual 3D mannequin, which is taught in US Patent Application 16/506,073, A METHOD FOR ILLUSTRATING DIRECTION OF BLOOD FLOW VIA POINTERS, which is incorporated by reference in its entirety and US Patent 10,846,911, 3D IMAGING OF VIRTUAL FLUIDS AND VIRTUAL SOUNDS, which is also incorporated by reference in its entirety. [0070] Some embodiments also involve incorporation of 3D printed objects to be used in conjunction with the virtual 3D mannequin. Techniques herein are disclosed in US Patent 17/075,799, OPTIMIZING ANALYSIS OF A 3D PRINTED OBJECT THROUGH INTEGRATION OF GEO-REGISTERED VIRTUAL OBJECTS, which is incorporated by reference in its entirety.
[0071] Some embodiments also involve a 3D virtual hand, which can be geo-registered to the virtual 3D mannequin. Techniques herein are disclosed in US Patent Application 17/113,062, A METHOD AND APPARATUS FOR A GEO-REGISTERED 3D VIRTUAL HAND, which is incorporated by reference in its entirety.
[0072] Some embodiments comprise utilizing images obtained from US Patent Application 16/654,047, METHOD TO MODIFY IMAGING PROTOCOLS IN REAL TIME THROUGH IMPLEMENTATION OF ARTIFICIAL, which is incorporated by reference in its entirety.
[0073] Some embodiments comprise utilizing images obtained from US Patent Application 16/597,910, METHOD OF CREATING AN ARTIFICIAL INTELLIGENCE GENERATED DIFFERENTIAL DIAGNOSIS AND MANAGEMENT RECOMMENDATION TOOLBOXES DURING MEDICAL PERSONNEL ANALYSIS AND REPORTING, which is incorporated by reference in its entirety.
[0074] Some embodiments comprise a method comprising using a smart phone wherein said smart phone contains a first camera and a second camera wherein said first camera has a first location on said smart phone, wherein said second camera has a second location on said smart phone, wherein said second location is different from said first location, and wherein said first location and said second location are separated by a first stereo distance. Some embodiments comprise acquiring a first set of stereoscopic imagery using said first camera at said first location on said smart phone and said second camera at said second location on said smart phone. Some embodiments comprise changing a spatial relationship by at least one of the group of: moving said first camera from said first location on said smart phone to a third location on said smart phone wherein said third location is different from said first location; and moving said second camera from said second location on said smart phone to a fourth location on said smart phone wherein said fourth location is different from said second location. Some embodiments comprise after said changing said spatial relationship, acquiring a second set of stereoscopic imagery using said first camera and second camera. [0075] Some embodiments comprise wherein said smart phone tracks an object's location in an area. Some embodiments comprise wherein said first camera's first location and said second camera's second location is based on said object's initial location in said area. Some embodiments comprise wherein said initial location is a first distance from said smart phone.
[0076] Some embodiments comprise wherein said first camera's third location and said second camera's fourth location is based on said object's subsequent location in said area. Some embodiments comprise wherein said subsequent location is different from said initial location. Some embodiments comprise wherein said subsequent location is a second distance from said smart phone. Some embodiments comprise wherein said second distance is different from said first distance.
[0077] Some embodiments comprise wherein said first set of stereoscopic imagery and said second set of stereoscopic imagery comprise enhanced stereoscopic video imagery.
[0078] Some embodiments comprise wherein said enhanced stereoscopic video imagery comprises wherein said third location and said fourth location are separated by said first stereo distance.
[0079] Some embodiments comprise wherein said enhanced stereoscopic video imagery comprises wherein said third location and said fourth location are separated by a second stereo distance wherein said second stereo distance is different from said first stereo distance. Some embodiments comprise wherein successive frames of said enhanced stereoscopic imagery have different stereo distances.
[0080] Some embodiments comprise wherein said first set of stereoscopic imagery has a first zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has said first zoom setting for said first camera and said second camera.
[0081] Some embodiments comprise wherein said first set of stereoscopic imagery has a first zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has a second zoom setting for said first camera and said second camera. Some embodiments comprise wherein said second zoom setting is different from said first zoom setting.
[0082] Some embodiments comprise wherein said first set of stereoscopic imagery has a first aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has said first aperture setting for said first camera and said second camera.
[0083] Some embodiments comprise wherein said first set of stereoscopic imagery has a first aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second set of stereoscopic imagery has a second aperture setting for said first camera and said second camera. Some embodiments comprise wherein said second aperture setting is different from said first aperture setting.
[0084] Some embodiments comprise wherein said first set of stereoscopic imagery comprises wherein said first camera has a first cant angle and said second camera has a second cant angle and wherein said second set of stereoscopic imagery comprises wherein said first camera has said first cant angle and said second camera has said second cant angle.
[0085] Some embodiments comprise wherein said first set of stereoscopic imagery comprises wherein said first camera has a first cant angle and said second camera has a second cant angle and wherein said second set of stereoscopic imagery comprises wherein said first camera has a third cant angle different from said first cant angle and said second camera has a fourth cant angle different from said second cant angle.
[0086] Some embodiments comprise performing stereoscopic image stabilization wherein said stereoscopic image stabilization comprises: using said first camera to acquire imagery of an area containing a tangible object; using said second camera to acquire imagery of said area containing said tangible object; selecting at least one point on said tangible object in said area to be used as stable reference point(s); for an initial frame of said acquired imagery of said area from said first camera, identifying at least one point within said initial frame of said acquired imagery of said area from said first camera that corresponds to said stable reference point(s); for an initial frame of said acquired imagery of said area from said second camera, identifying at least one point within said initial frame of said acquired imagery of said area from said second camera that corresponds to said stable reference point(s); for a subsequent frame of said acquired imagery of said area from said first camera, identifying at least one point within said subsequent frame of said acquired imagery of said area from said first camera that corresponds to said stable reference point(s); for a subsequent frame of said acquired imagery of said area from said second camera, identifying at least one point within said subsequent frame of said acquired imagery of said area from said second camera that corresponds to said stable reference point(s); performing a first alignment comprising aligning said identified at least one point within said initial frame of said acquired imagery of said area from said first camera with said identified at least one point within said subsequent frame of said acquired imagery of said area from said first camera; performing a second alignment comprising aligning said identified at least one point within said initial frame of said acquired imagery of said area from said second camera with said identified at least one point within said subsequent frame of said acquired imagery of said area from said second camera; and wherein said performing said first alignment is performed independent from said performing said second alignment.
[0087] Some embodiments comprise selecting a portion of said initial frame of said acquired imagery of said area from said first camera. Some embodiments comprise selecting a portion of said initial frame of said acquired imagery of said area from said second camera. Some embodiments comprise selecting a portion of said subsequent frame of said acquired imagery of said area from said first camera. Some embodiments comprise selecting a portion of said subsequent frame of said acquired imagery of said area from said second camera.
[0088] Some embodiments comprise displaying imagery with said first alignment comprising said selected portion of said initial frame of said acquired imagery of said area from said first camera and said selected portion of said subsequent frame of said acquired imagery of said area from said first camera on a left eye display of an extended reality head display unit. Some embodiments comprise displaying imagery with said second alignment comprising said selected portion of said initial frame of said acquired imagery of said area from said second camera and said selected portion of said subsequent frame of said acquired imagery of said area from said second camera on a right eye display of said extended reality head display unit.
[0089] Some embodiments comprise a camera bar design wherein said first camera and said second camera are restricted to moving along a line.
[0090] Some embodiments comprise a uni-planar camera system wherein said uni-planar camera system comprises wherein said first camera's positions are restricted to a plane on said smart phone's surface and said second camera's positions are restricted to said plane.
[0091] Some embodiments comprise wherein said first camera and said second camera are on said smart phone's back. Some embodiments comprise wherein said smart phone's face contains a third camera and a fourth camera wherein said third camera and said fourth camera are separated by a stereo distance ranging from 0.25 inch to 1.25 inches. Some embodiments comprise wherein the third camera and the fourth camera are separated by a stereo distance ranging from 0.1 inch to 2.0 inches.
[0092] Some embodiments comprise a smart phone comprising a first camera wherein said first camera has a first location on said smart phone, a second camera wherein said second camera has a second location on said smart phone and wherein said second location is different from said first location, a third camera wherein said third camera has a third location on said smart phone and wherein said third location is different from said first location and said second location, and an imaging system configured to track an object's location in an area. Some embodiments comprise wherein said first camera and said second camera are separated by a first stereo distance. Some embodiments comprise wherein said second camera and said third camera are separated by a second stereo distance. Some embodiments comprise wherein said third camera and said first camera are separated by a third stereo distance. Some embodiments comprise wherein said first stereo distance is smaller than said second stereo distance. Some embodiments comprise wherein said third stereo distance is larger than said first stereo distance and said second stereo distance. Some embodiments comprise wherein said smart phone is configured to use data from said imaging system, said first camera, said second camera and said third camera to acquire enhanced stereoscopic imagery of said object in said area comprising: if said object is a first distance to said smart phone, using said first camera and said second camera to generate a first set of stereoscopic imagery; if said object is a second distance to said smart phone wherein said second distance is larger than said first distance, using said second camera and said third camera to generate a second set of stereoscopic imagery; and if said object is a third distance to said smart phone wherein said third distance is larger than said second distance, using said first camera and said third camera to generate a third set of stereoscopic imagery.
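A hedged sketch of the distance-based camera-pair selection described above follows, assuming the farthest-range case uses the widest pair (first and third cameras); the distance thresholds and camera names are placeholders, not values from the disclosure.

```python
def select_stereo_pair(object_distance_m, near_limit=2.0, mid_limit=6.0):
    """Pick the camera pair whose stereo distance suits the tracked object's
    range: the narrowest pair for near objects and progressively wider pairs
    as the range increases. Thresholds here are placeholders only."""
    if object_distance_m <= near_limit:
        return ("first camera", "second camera")   # smallest stereo distance
    if object_distance_m <= mid_limit:
        return ("second camera", "third camera")   # intermediate stereo distance
    return ("first camera", "third camera")        # largest stereo distance

print(select_stereo_pair(1.0))   # ('first camera', 'second camera')
print(select_stereo_pair(4.0))   # ('second camera', 'third camera')
print(select_stereo_pair(10.0))  # ('first camera', 'third camera')
```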
[0093] Some embodiments comprise an extended reality head display unit (HDU) comprising: a left eye display configured to display left eye images from acquired enhanced stereoscopic imagery; a right eye display configured to display right eye images from acquired enhanced stereoscopic imagery; and wherein said enhanced stereoscopic imagery is acquired on a smart phone comprising: a first camera wherein said first camera has a first location on said smart phone; a second camera wherein said second camera has a second location on said smart phone and wherein said second location is different from said first location; a third camera wherein said third camera has a third location on said smart phone and wherein said third location is different from said first location and said second location; an imaging system configured to track an object's location in an area; wherein said first camera and said second camera are separated by a first stereo distance; wherein said second camera and said third camera are separated by a second stereo distance; wherein said third camera and said first camera are separated by a third stereo distance; wherein said first stereo distance is smaller than said second stereo distance; wherein said third stereo distance is larger than said first stereo distance and said second stereo distance; and wherein said smart phone is configured to use data from said imaging system, said first camera, said second camera and said third camera to acquire enhanced stereoscopic imagery of said object in said area comprising: if said object is a first distance to said smart phone, using said first camera and said second camera to generate a first set of stereoscopic imagery; if said object is a second distance to said smart phone wherein said second distance is larger than said first distance, using said second camera and said third camera to generate a second set of stereoscopic imagery; and if said object is a third distance to said smart phone wherein said third distance is larger than said second distance, using said first camera and said third camera to generate a third set of stereoscopic imagery.
[0094] Still other embodiments include a computerized device, configured to process all the method operations disclosed herein as embodiments of the invention. In such embodiments, the computerized device includes a memory system, a processor, and a communications interface in an interconnection mechanism connecting these components. The memory system is encoded with a process that provides steps explained herein that, when performed (e.g., when executing) on the processor, operates as explained herein within the computerized device to perform all of the method embodiments and operations explained herein as embodiments of the invention. Thus, any computerized device that performs or is programmed to perform the processing explained herein is an embodiment of the invention.
[0095] Other arrangements of embodiments of the invention that are disclosed herein include software programs to perform the method embodiment steps and operations summarized above and disclosed in detail below. More particularly, a computer program product is one embodiment that has a computer-readable medium including computer program logic encoded thereon that, when performed in a computerized device, provides associated operations providing the steps explained herein. [0096] The computer program logic, when executed on at least one processor within a computing system, causes the processor to perform the operations (e.g., the methods) indicated herein as embodiments of the invention. Such arrangements of the invention are typically provided as software, code and/or other data structures arranged or encoded on a computer readable medium such as an optical medium (e.g., CD-ROM), floppy or hard disk, or another medium such as firmware or microcode in one or more ROM or RAM or PROM chips, or as an Application Specific Integrated Circuit (ASIC), or as downloadable software images in one or more modules, shared libraries, etc. The software or firmware or other such configurations can be installed onto a computerized device to cause one or more processors in the computerized device to perform the techniques explained herein as embodiments of the invention. Software processes that operate in a collection of computerized devices, such as in a group of data communications devices or other entities, can also provide the system of the invention. The system of the invention can be distributed between many software processes on several data communications devices, or all processes could run on a small set of dedicated computers, or on one computer alone.
[0097] It is to be understood that the embodiments of the invention can be embodied strictly as a software program, as software and hardware, or as hardware and/or circuitry alone, such as within a data communications device. The features of the invention, as explained herein, may be employed in data processing devices and/or software systems for such devices. Note that each of the different features, techniques, configurations, etc. discussed in this disclosure can be executed independently or in combination. Accordingly, the present invention can be embodied and viewed in many different ways. Also, note that this Summary section herein does not specify every embodiment and/or incrementally novel aspect of the present disclosure or claimed invention. Instead, this Summary only provides a preliminary discussion of different embodiments and corresponding points of novelty over conventional techniques. For additional details, elements, and/or possible perspectives (permutations) of the invention, the reader is directed to the Detailed Description section and corresponding figures of the present disclosure as further discussed below.
BRIEF DESCRIPTION OF THE FIGURES
[0098] The flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required in accordance with the present invention. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables, are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention. Thus, unless otherwise stated the steps described below are unordered meaning that, when possible, the steps can be performed in any convenient or desirable order.
[0099] The foregoing will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
[00100] Figure 1A illustrates the back of a smart phone with stereoscopic camera capability.
[00101] Figure 1B illustrates the front of a smart phone with stereoscopic camera capability.
[00102] Figure 2A illustrates key components of the smart phone with stereoscopic camera capability.
[00103] Figure 2B illustrates a top-down view of the smart phone with stereoscopic camera capability.
[00104] Figure 2C illustrates a top-down view of the smart phone with stereoscopic camera capability with convergence.
[00105] Figure 3A illustrates curved lens concept for smart phone with stereoscopic camera capability.
[00106] Figure 3B illustrates fish-eye type of lens concept for smart phone with stereoscopic camera capability.
[00107] Figure 3C illustrates a progressive type of lens concept for smart phone with stereoscopic camera capability.
[00108] Figure 4A illustrates a front view of a composite sensor array concept for a smart phone with stereoscopic camera capability. [00109] Figure 4B illustrates a top view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
[00110] Figure 5A illustrates a front view of a flat mirror concept for smart phone with stereoscopic camera capability.
[00111] Figure 5B illustrates a top-down view of the flat mirror concept for smart phone with stereoscopic camera capability.
[00112] Figure 5C illustrates a front view of a curved mirror concept for smart phone with stereoscopic camera capability.
[00113] Figure 5D illustrates a top-down view of the curved mirror concept for smart phone with stereoscopic camera capability.
[00114] Figure 5E illustrates a front view of a deformable mirror concept for smart phone with stereoscopic camera capability.
[00115] Figure 5F illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 1.
[00116] Figure 5G illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 2.
[00117] Figure 6A illustrates a movable lens for smart phone with stereoscopic camera capability.
[00118] Figure 6B illustrates the composite sensor array concept.
[00119] Figure 6C illustrates a switching out of cameras from Day 1 to Day 2.
[00120] Figure 7A illustrates the Stereoscopic Head Display Unit (SHDU).
[00121] Figure 7B shows a side view of a transformable SHDU display unit with an eyepiece cover with augmented reality mode and virtual reality mode.
[00122] Figure 7C shows a side view of a transformable SHDU display unit with an electronic eye piece with augmented reality mode and virtual reality mode.
[00123] Figure 8A illustrates wired connectivity means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
[00124] Figure 8B illustrates wireless connectivity via Bluetooth means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
[00125] Figure 8C illustrates wireless connectivity via the Internet means between smart phone with stereoscopic camera capability and stereoscopic head display unit.
[00126] Figure 9A illustrates system operation using the stereoscopic smart phone (SSP). [00127] Figure 9B illustrates near real-time stereo mode at time N.
[00128] Figure 9C illustrates near real-time stereo mode at time N+1.
[00129] Figure 9D illustrates convergence mode at time N+1.
[00130] Figure 10A illustrates before application of automatic object recognition as displayed on the stereoscopic head display unit.
[00131] Figure 10B illustrates after application of automatic object recognition as displayed on the stereoscopic head display unit.
[00132] Figure 10C illustrates another unique stereo camera technology being invoked.
[00133] Figure 11A illustrates integration of image stabilization for a user in a scene where there is vibration.
[00134] Figure 11B illustrates selection of points within the image to use for image stabilization.
[00135] Figure 11C illustrates selection of points within the scene to use for image stabilization.
[00136] Figure 11D illustrates stereoscopic image stabilization on the SHDU.
[00137] Figure 12A illustrates determining a stabilization point and field of view (FOV) for the left eye imagery and right eye imagery.
[00138] Figure 12B illustrates displaying stabilized stereoscopic imagery on a SHDU.
[00139] Figure 13A illustrates a stereoscopic smart phone (SSP) with its stereoscopic cameras in a first position, which is wide.
[00140] Figure 13B illustrates the SSP with its stereoscopic cameras in a second position, which is narrow.
[00141] Figure 13C illustrates the SSP with its stereoscopic cameras in a third position, which is also narrow, but shifted in position as compared to Figure 13B.
[00142] Figure 14 illustrates optimizing stereoscopic imaging.
[00143] Figure 15 illustrates adjusting stereoscopic camera componentry based on eye tracking parameters of a user.
[00144] Figure 16 illustrates a three-camera stereoscopic smart phone.
DETAILED DESCRIPTION
[00145] Some aspects, features and implementations described herein may include machines such as computers, electronic components, optical components, and processes such as computer-implemented steps. It will be apparent to those of ordinary skill in the art that the computer-implemented steps may be stored as computer-executable instructions on a non-transitory computer-readable medium. Furthermore, it will be understood by those of ordinary skill in the art that the computer-executable instructions may be executed on a variety of tangible processor devices. For ease of exposition, not every step, device or component that may be part of a computer or data storage system is described herein. Those of ordinary skill in the art will recognize such steps, devices and components in view of the teachings of the present disclosure and the knowledge generally available to those of ordinary skill in the art. The corresponding machines and processes are therefore enabled and within the scope of the disclosure.
[00146] Figure 1A illustrates the back of a smart phone with stereoscopic camera capability. This is called a stereoscopic smart phone (SSP). 101 illustrates a smart phone with stereoscopic camera capability. 102 illustrates a first outward facing camera on the back of the smart phone 101. 103 illustrates a second outward facing camera on the back of the smart phone 101. The first camera 102 and the second camera 103 are separated by a stereoscopic distance. In the preferred embodiment, these would be maximally separated to enhance stereo separation and achieve stereoscopic vision at distances greater than a user could with his/her own stereoscopic distance. Some embodiments comprise placing mechanical extensions to further increase the stereo separation. In some embodiments, two phones would work in conjunction to further increase the stereo distance. For example, a first smart phone could be placed 10, 20 or 30 feet from a second smart phone. The video imagery could be synchronized, and video imagery from a camera of the first phone and video imagery from a camera of the second phone can be used together to generate stereoscopic imagery. In some embodiments, these phones are to be manufactured with differing stereoscopic distances. In some embodiments, the stereoscopic distance can match that of a person. Some designs will therefore have a larger stereoscopic distance and other designs will have a smaller stereoscopic distance.
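For the two-phone configuration mentioned above, frame synchronization could be as simple as pairing frames by nearest capture timestamp; the following sketch assumes timestamped frame lists and a tolerable skew value, all of which are illustrative assumptions rather than part of the disclosure.

```python
def pair_synchronized_frames(frames_a, frames_b, max_skew_s=0.010):
    """Pair frames from two phones by nearest capture timestamp (in seconds),
    keeping only pairs whose skew is within max_skew_s. Each input is a list
    of (timestamp, frame) tuples sorted by timestamp."""
    pairs, j = [], 0
    for t_a, frame_a in frames_a:
        # Advance j to the frame in stream B closest in time to t_a.
        while (j + 1 < len(frames_b) and
               abs(frames_b[j + 1][0] - t_a) <= abs(frames_b[j][0] - t_a)):
            j += 1
        t_b, frame_b = frames_b[j]
        if abs(t_b - t_a) <= max_skew_s:
            pairs.append((frame_a, frame_b))
    return pairs

phone_1 = [(0.000, "A0"), (0.033, "A1"), (0.066, "A2")]
phone_2 = [(0.002, "B0"), (0.034, "B1"), (0.070, "B2")]
print(pair_synchronized_frames(phone_1, phone_2))
# [('A0', 'B0'), ('A1', 'B1'), ('A2', 'B2')]
```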
[00147] Figure 1B illustrates the front of a smart phone with stereoscopic camera capability. 104 illustrates a third outward facing camera on the front of the smart phone 101. 105 illustrates a fourth outward facing camera on the front of the smart phone 101. The third camera 104 and the fourth camera 105 are separated by a stereoscopic distance. 106 illustrates the display portion of the smart phone. As with today’s smart phones, different sizes of phones are available with the consequent increases in the surface area of the larger phones. 107 represents a location on the smart phone for wired communication, such as for power (recharging the battery) and/or external input/output of digital data. In the case of the smart phone with stereoscopic camera capability, this port is where a wire connection could communicate with the stereoscopic head display unit. 108 is an example location of the ON/OFF switch. 109 illustrates a volume up control button. 110 illustrates a volume down control button. 111 is the input/output antenna. This antenna would support communications with the Stereoscopic Head Display Unit.
[00148] Figure 2A illustrates key components of the smart phone with stereoscopic camera capability. 200 illustrates a listing of the key components of the smart phone with stereoscopic camera capability. These components are the same for both cameras. First, there is the aperture, which looks outward through the two holes in the back of the phone and the front of the phone as described in Figure 1. Next, there is a shutter with a variable speed. This is followed by a lens, which will be described in some detail in subsequent figures. Depending on the detailed layout of the components, it is novel to insert a mirror to redirect the light which has passed through the lens to the sensor array, which will be illustrated in subsequent figures. In some embodiments, the mirror is a deformable mirror. In some embodiments, the mirror is capable of working with adaptive optics. In some embodiments, the mirror is a curved mirror. In some embodiments, the mirror is a flat mirror. A processor and software support the operation of the camera system, for example, changing the camera viewing angle from looking straight forward to convergence mode, where the left camera would look down and to the right and the right camera would look down and to the left. These viewing angles would intersect at a convergence point on the object of interest. These angles would be based on the proximity of the object of interest to the cameras. Depending on the design option selected, there may be mechanical elements that interact with either individual components or the camera as a whole. There is the display, which is part of the phone. Finally, there is a communications link which transports data in for the mode of operation and data out to the display. Note that the concepts taught in this patent are preferred to be placed in a smart phone; however, they could be used in other smart devices.
[00149] Figure 2B illustrates a top-down view of the smart phone with stereoscopic camera capability. 200 illustrates a smart phone. 201F illustrates the front face of the smart phone. 201B illustrates the back of the smart phone. 202 illustrates a first camera of a stereoscopic camera pair, which is pointed orthogonal to the front of the smart phone 201F. 202A illustrates the center angle of the first camera 202, which is the direction to the center of the field of view of the first camera 202. 203 illustrates a second camera of a stereoscopic camera pair, which is pointed orthogonal to the front of the smart phone 201F. 203A illustrates the center angle of the second camera 203, which is the direction to the center of the field of view of the second camera 203.
[00150] Figure 2C illustrates a top-down view of the smart phone with stereoscopic camera capability with convergence. In the preferred embodiment, this is performed using the same smart phone, and the cameras alter the direction from which imagery is obtained. In the preferred embodiment, this is performed by altering the configuration of the componentry of the camera. In other embodiments, this is performed digitally, as is taught in US 17/225,610, AN IMPROVED IMMERSIVE VIEWING EXPERIENCE, filed on 4/8/2021, and US Patent Application 17/237,152, AN IMPROVED IMMERSIVE VIEWING EXPERIENCE, filed on 4/22/2021, which are incorporated by reference in their entirety. 200 illustrates a smart phone. 201F illustrates the front face of the smart phone. 201B illustrates the back of the smart phone. 202 illustrates a first camera of a stereoscopic camera pair. 202B illustrates the center angle of the first camera 202, which is the direction to the center of the field of view of the first camera 202. 203 illustrates a second camera of a stereoscopic camera pair. 203B illustrates the center angle of the second camera 203, which is the direction to the center of the field of view of the second camera 203. Note that the center angle 202B of the first camera 202 and the center angle 203B of the second camera 203 are both canted inward towards each other, towards a convergence point 204. In the preferred embodiment, the convergence point is in the midline (along the plane orthogonal to the front of the smart phone 201F, extending from a point halfway in between the first camera 202 and the second camera 203). In some embodiments, the convergence point can be: at the level of the first camera 202 and the second camera 203; above the level of the first camera 202 and the second camera 203; below the level of the first camera 202 and the second camera 203; or off of midline. The convergence point can be any point in (x, y, z) space seen by the first camera 202 and second camera 203. In some embodiments, if the relative positions of two smart devices are known, a camera from the first smart device and a camera from the second smart device can also be used to yield stereoscopic imagery, as taught in this patent and in the patents incorporated by reference in their entirety.
[00151] Figure 3A illustrates curved lens concept for smart phone with stereoscopic camera capability. This figure illustrates the first of three concepts for lenses for the smart phone with stereoscopic cameras capability. 301 illustrates a curved lens with an angular field of regard 302 and with a radius of curvature of 303. The camera would also have a variable field of view which would be pointed somewhere within the field of regard.
[00152] Figure 3B illustrates fish-eye type of lens concept for smart phone with stereoscopic camera capability. 304 illustrates a fish-eye type of lens. This lens could provide hemispherical or near hemispherical coverage 305.
[00153] Figure 3C illustrates a progressive type of lens concept for smart phone with stereoscopic camera capability. 306 illustrates a first progressive type of lens, which can be used for the first camera of a stereoscopic camera pair. 307 illustrates a portion of the lens that optimizes image acquisition in a straight-forward direction. 308 illustrates a portion of the lens that provides an increasingly focused capability wherein there is a greater magnification as compared to 307. 308 is directly associated with convergence at shorter and shorter distances. Note that 308 is located towards the bottom and towards the second camera of a stereoscopic camera pair. 310 illustrates a second progressive type of lens, which can be used for the second camera of a stereoscopic camera pair. 311 illustrates a portion of the lens that optimizes image acquisition in a straightforward direction. 312 illustrates a portion of the lens that provides an increasingly focused capability wherein there is a greater magnification as compared to 311. 312 is directly associated with convergence at shorter and shorter distances. Note that 312 is located towards the bottom and towards the first camera of a stereoscopic camera pair. Together, these provide a unique capability to the smart phone with stereoscopic cameras. A progressive lens has a corrective factor to provide looking straight forward and in a ‘progressive’ manner gradually increases magnification in a downward and inward direction through the progressive lens. Note that electronic correction can occur to account for refraction through the lens. In some embodiments, the amount of magnification can be increased in the medial or inferior-medial direction. This can be performed for eyeglasses, which is referred to as “convergence type progressive eyeglasses”.
[00154] Figure 4A illustrates a front view of a composite sensor array concept for a smart phone with stereoscopic camera capability.
[00155] This figure illustrates a novel arrangement of individual sensor arrays 401. This array would be for the left camera - a mirror image of this arrangement would be used in the right camera. Note that there could be an upper layer(s) and a lower layer(s) of composite sensor arrays. Going from left to right, there are five different arrays, array #1 402, array #2 403, array #3 404, array #4 405, array #5 406, and collectively, this creates the composite sensor array. [00156] Figure 4B illustrates a top view of a composite sensor array concept for a smart phone with stereoscopic camera capability. Nominally, 402 points 60 degrees left of center; 403 points 30 degrees left of center; 404 points straight forward; 405 points 30 degrees to the right of center; and 406 points 60 degrees to the right of center. If there were a second, lower row, these arrays could be canted downward. Similarly, if there were an additional row in an upper position, these arrays could be canted upward.
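As a minimal, non-limiting sketch of how the composite sensor array might be driven, the following Python fragment selects whichever of the five arrays is pointed closest to a desired look angle. The nominal pointing angles mirror those stated for Figure 4B; the dictionary keys, the sign convention (left of center is negative) and the nearest-angle selection rule are illustrative assumptions rather than disclosed requirements.

```python
# Nominal pointing angles of the five arrays of Figure 4A/4B, in degrees
# (negative = left of center). Keys reuse the figure labels 402-406.
ARRAY_POINTING_DEG = {
    "array_1_402": -60.0,
    "array_2_403": -30.0,
    "array_3_404": 0.0,
    "array_4_405": 30.0,
    "array_5_406": 60.0,
}

def select_array(desired_look_angle_deg):
    """Pick the sensor array whose nominal pointing direction lies closest
    to the desired look angle within the composite field of regard."""
    return min(ARRAY_POINTING_DEG.items(),
               key=lambda item: abs(item[1] - desired_look_angle_deg))[0]

print(select_array(-20.0))  # -> 'array_2_403', the array pointed 30 degrees left of center
```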
[00157] Figure 5A illustrates a flat mirror concept for smart phone with stereoscopic camera capability. This is the first of three mirror (or reflective surface) concepts which could be inserted into the stereoscopic cameras. A mirror, if included in the design, would in the preferred embodiment be placed between the lens and the detection sensors to redirect the light which has passed through the lens to the sensor array. This front view illustrates a typical flat mirror 501. 502 is the mirror frame.
[00158] Figure 5B illustrates a top-down view of a flat mirror concept for smart phone with stereoscopic camera capability. 502 shows the outline of the frame. 501 shows the mirror which is hidden by the frame.
[00159] Figure 5C illustrates a front view of a curved mirror concept for smart phone with stereoscopic camera capability. 503 illustrates a curved mirror encased by a frame 504.
[00160] Figure 5D illustrates a front view of a curved mirror concept for smart phone with stereoscopic camera capability. 503 illustrates the curved mirror encased by a frame 504. In the preferred embodiment, a spherical-type curvature is used. In some embodiments, a cylindrical-type curvature is used. In some embodiments, the curvature can include both spherical-type and cylindrical-type curvatures. In some embodiments, multiple (at least two) different curved mirrors can be used. For example, for long range zooming, a first type of curved mirror can be utilized; for medium range zooming, a second type of curved mirror can be utilized. These can be used one at a time, so the first type of curved mirror and second type of curved mirror can be swapped out depending on what object is being viewed.
[00161] Figure 5E illustrates a front view of a deformable mirror concept for smart phone with stereoscopic camera capability. 505 illustrates a deformable mirror encased by a frame 506. An inventive aspect of this patent is to use a deformable mirror in conjunction with a stereoscopic camera system. [00162] Figure 5F illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 1. 505 illustrates a deformable mirror encased by a frame 506. Note the contour of the deformable mirror at this first time point.
[00163] Figure 5G illustrates a top-down view of the deformable mirror concept for smart phone with stereoscopic camera capability at time equals 2. 505 illustrates a deformable mirror encased by a frame 506. Note the contour of the deformable mirror at this second time point, which is different from the contour at the first time point. At a first time point, for the first camera the curvature of the first deformable mirror has a first focal point. At a second time point, for the first camera the curvature of the first deformable mirror has a second focal point, which is different from the first focal point. At a first time point, for the second camera the curvature of the second deformable mirror has a first focal point. At a second time point, for the second camera the curvature of the second deformable mirror has a second focal point, which is different from the first focal point. Thus, at the first time point the first deformable mirror and second deformable mirror would optimize imagery at a first location. This could be an object in the scene or a point in space. This could be a first convergence point. Thus, at the second time point the first deformable mirror and second deformable mirror would optimize imagery at a second location. This could be an object in the scene or a point in space. This could be a second convergence point. The second convergence point would be different from the first convergence point. The deformable system is inventive because it maintains high light collection and can rapidly alter the projection of this light from the scene onto the detector. In some embodiments, eye tracking of a user is performed. Based on eye tracking metrics of a user, the deformable mirrors deform to optimize image acquisition from those areas, including the location where a user is looking. Thus, imagery collected can be optimized based on where a user is looking. Thus, as the user looks at new objects via saccadic movements of his/her eyes, the stereoscopic deformable mirror system can deform to adapt to the new areas where the user is looking. Similarly, as the user tracks a moving object with smooth pursuit movements of his/her eyes, the stereoscopic deformable mirror system can deform to follow the area where the user is looking.
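A toy model of the refocusing between the two time points can be sketched as follows. The spherical-mirror approximation (R = 2f with 1/f = 1/d_o + 1/d_i), the function name and the example distances are all assumptions made for illustration; actual deformable-mirror control involves commanding many actuators and is not specified here.

```python
def mirror_radius_for_focus(object_distance_m, detector_distance_m):
    """Return the radius of curvature (spherical approximation, R = 2f)
    that focuses a point at object_distance_m onto a detector located
    detector_distance_m from the mirror, per 1/f = 1/d_o + 1/d_i."""
    focal_length = 1.0 / (1.0 / object_distance_m + 1.0 / detector_distance_m)
    return 2.0 * focal_length

# Time point 1: the tracked object (or gaze point) is 2 m away.
# Time point 2: the user's gaze shifts to an object 0.5 m away.
print(mirror_radius_for_focus(2.0, 0.02))   # detector assumed ~2 cm from the mirror
print(mirror_radius_for_focus(0.5, 0.02))
```

In an eye-tracked embodiment, the object distance fed to such a calculation would come from the location where the user is looking.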
[00164] Figure 6A illustrates a movable lens for smart phone with stereoscopic camera capability. This figure is the first of three different potential configurations of the smart phone with stereoscopic camera capability. Figure 6A depicts a movable lens which could be used to change the look angle. Light 601 is shown as a series of parallel wavy lines entering the lens 602. In this case, the lens is the progressive type lens described in Figure 3C. The position of the lens at Time 1 is such that the light enters the top portion of the lens, which provides imagery looking straight forward, and no convergence point for the left and right viewing angles is used. At Time 2, the position of the lens has shifted upward and the light 601 now enters the bottom portion of the lens 604. The bottom right portion of the left lens causes the look angle to shift down and to the right. The bottom left portion of the right lens causes the look angle to shift down and to the left. Under this configuration, the viewing angles of the left and right lenses intersect (i.e., converge) and, collectively, provide a stereoscopic view.
[00165] Figure 6B illustrates the composite sensor array concept. The light has already passed through the lens and is converging on one of the sensor arrays of the composite sensor arrays described for Figure 4. At Time 1, the light enters the center array 608 and provides looking straight ahead imagery. At Time 2, the array has rotated, and the light impinges, in this case, on sensor array 609. Thus, the viewing angles from the left and right cameras intersect and provide stereoscopic imagery.
[00166] Figure 6C illustrates a switching out of cameras from Day 1 to Day 2. Note this could be analogous to switching lenses on today's digital cameras. On Day 1, 611 shows the camera box; 612 the aperture; 613 the shutter; 614 the lens; 615 the sensor array and 616 the communications port. On Day 2, the user selects a different camera; takes out the camera used on Day 1 and inserts a totally different camera 617 of the same size and shape as the camera used on Day 1. On Day 2, the aperture 618 is larger and the lens 621 has a curvature. 620 illustrates the shutter, 622 the sensor array and 623 the communications port. In some embodiments, the cameras can be stored on the smart phone and swapped out through mechanical processes on the smart phone. In other embodiments, swapping out can be through manual processes, as is done with today's SLR cameras. Collectively, Figure 6A, Figure 6B and Figure 6C demonstrate the flexibility of the smart phone with stereoscopic cameras.
[00167] Figure 7A illustrates the Stereoscopic Head Display Unit (SHDU). This figure illustrates key aspects of the Stereoscopic Head Display Unit. Figure 7A shows example placements of key components of the SHDU. 701 illustrates the overall headset. 702 illustrates the left eye display. 702A illustrates the left eye tracking device. 703 illustrates the right eye display. 703A illustrates the right eye tracking device. 704 illustrates the processor. 705 illustrates the left ear speaker. 706 illustrates the power supply. 707 illustrates the antenna. 708 illustrates the right ear speaker. 709 illustrates the inclinometer (or inertial measurement unit). 710 illustrates the microphone. 711 illustrates the communications port. 712 illustrates a scene sensing device.
[00168] Figure 7B shows a side view of a transformable SHDU display unit with an eyepiece cover with augmented reality mode and virtual reality mode. 713A illustrates the side of the frame of the SHDU 701 at Time Point #1. 714A illustrates an eye piece cover. In the preferred embodiment, the eyepiece cover is connected to the SHDU 701 via a hinge. This is called a transformable SHDU because it can transform from a virtual reality type display (where the real world is blocked out) to an augmented reality type display (where the user can see both the virtual world and the real world). Thus, at Time Point #1, the transformable SHDU is in augmented reality mode. The eye piece cover is in the elevated position and does not block the wearer of the SHDU from seeing the external area around him/ her. If the wearer is concurrently viewing virtual imagery (e.g., stereoscopic imagery from smart phone with stereoscopic cameras), this would constitute augmented mixed reality. 713B illustrates the side of the frame of the SHDU 701 at Time Point #2. 714B illustrates an eye piece cover, which is now in position over the front of the SHDU, so it is in virtual reality mode at Time Point #2. At Time Point #2, the eye piece cover has rotated down and now covers the eye piece. The wearer of the SHDU would be blocked from seeing the external area around him/ her. If the wearer is concurrently viewing stereoscopic imagery from smart phone with stereoscopic cameras, this would constitute virtual reality.
[00169] Figure 7C shows a side view of a transformable SHDU display unit with an electronic eye piece with augmented reality mode and virtual reality mode. 713C illustrates the side of the frame of the SHDU 701 at Time Point #1. 714C illustrates an electronic eye piece affixed to the SHDU 701 at Time Point #1. At Time Point #1, the setting for the electronic eye piece is transparent and light 715C is able to pass through unfiltered to the SHDU display units. Thus, this is augmented reality mode because the user can see both the real world and the virtual world. 713D illustrates the side of the frame of the SHDU 701 at Time Point #2. 714D illustrates an electronic eye piece affixed to the SHDU 701 at Time Point #2. At Time Point #2, the setting for the electronic eye piece is opaque and light 715D is not able to pass through unfiltered to the SHDU display units. Thus, this is virtual reality mode because the user can only see the virtual world. In some embodiments, the opacity ranges in varying degrees of opacity. Thus, the electronic eye piece can provide a range of realities - mixed reality to virtual reality.
[00170] Figure 8A illustrates wired connectivity means between smart phone with stereoscopic camera capability and stereoscopic head display unit. Figure 8A is the first of three principal connectivity means between the stereoscopic smart phone (SSP) and stereoscopic head display unit (SHDU). Figure 8A shows the front side of the SSP 800A connected to the SHDU 801A via a wire 802. The stereoscopic imagery would be available on both the smart phone display 800A and the SHDU 801A. There are multiple options of what to display on the smart phone display: a single left eye image or right eye image filling the entire display; both left eye image and right eye image on a split screen display; or a merged left and right eye image to be viewed by a user wearing polarized or anaglyph glasses. In some embodiments, the SSP could be worn on a first user's head and imagery obtained from the stereoscopic cameras on the back of the SSP as in Figure 1A. In addition, the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time. In some embodiments, stereoscopic imagery obtained from the SHDU could be sent to the SSP.
[00171] Figure 8B illustrates wireless connectivity via BlueTooth means between smart phone with stereoscopic camera capability and stereoscopic head display unit. In this embodiment, the stereoscopic imagery would be transmitted from the SSP 800B via a BlueTooth connection 803 to the SHDU 801B; the imagery would be received through the SHDU's antenna and subsequently routed to the processor and thence to the SHDU's left and right displays. In addition, the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU 801B worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time. In some embodiments, stereoscopic imagery obtained from the SHDU could be sent to the SSP for display, as discussed in Figure 8A.
[00172] Figure 8C illustrates wireless connectivity via the Internet means between smart phone with stereoscopic camera capability and stereoscopic head display unit. In this embodiment, the stereoscopic imagery would be transmitted from the SSP 800C via an Internet connection 804 to the SHDU 801C; the imagery would be received through the SHDU antenna and subsequently routed to the processor and thence to the SHDU left and right displays. In addition, the imagery obtained from the stereoscopic cameras on the back of the SSP could be displayed on the SHDU 801C worn by a second user. This would allow the second user to view the same imagery as the first user, which can be displayed in near real time. In some embodiments, stereoscopic imagery obtained from the SHDU could be sent to the SSP for display, as discussed in Figure 8A.
[00173] Figure 9A illustrates system operation using the stereoscopic smart phone (SSP). 901 illustrates the SSP. 902A illustrates a digital icon symbolizing an app, which would be installed and appear on the general display containing multiple apps 903 (including settings, clock, messages). The user could touch his/her finger 904A to the icon, which would enable the SSP to receive commands for setting the mode of operation of the stereoscopic cameras. These commands could be issued through, but are not limited to: default settings; voice commands by the user into the smart phone via a microphone; the antenna 905; via wire 906 into the data port 907; or an electronic message from the SHDU received at the smart phone. Note that a pull-down menu could appear on the smart phone display once the starting process has been initiated, or by pull-down menu on the SHDU.
[00174] Figure 9B illustrates near real-time stereo mode at time N. At Time N, the user is enjoying a day at the beach and the stereoscopic cameras are pointed straight ahead at a small ship passing by. The smart phone display shows a split screen of left viewing perspective imagery 908 taken by the left camera and right viewing perspective imagery 909 taken by the right camera.
[00175] Figure 9C illustrates system operation using the stereoscopic smart phone (SSP) with convergence mode. 901 illustrates the SSP. 902B illustrates a digital icon symbolizing the convergence option, which would be installed within the app. The user could touch his/ her finger 904B to the convergence icon. The cameras will adjust according to the convergence point.
[00176] The user touches the display and the current mode of operation of the stereoscopic cameras is indicated by an icon 902B. A command "Converge Near" is issued and the mode of operation changes to convergence at short range. The user can then read his/her book and see it in stereo on the SHDU. In some embodiments, eye tracking can be used to determine a location in the user's environment where the user is looking. Then, the stereoscopic cameras will adjust the convergence and zoom settings in order to optimize viewing of the location where the user is looking.
[00177] Figure 9D illustrates convergence mode at time N+1. At a later time, Time N+1, the user wearing the SHDU decides to read a book using the SSP under convergence mode. The book is also shown on the smart phone display. Split screen is illustrated 908B and 909B for left and right eyes respectively. Note that in some embodiments, the stereo distance is adjusted so that it is different in Figure 9D as compared to Figure 9B. This could be done by using either the stereoscopic cameras on the front of the SSP or the stereoscopic cameras on the back of the SSP. In some embodiments, a novel stereoscopic camera setup with adjustable stereo distances is disclosed.
[00178] Figure 10A illustrates before application of automatic object recognition as displayed on the stereoscopic head display unit.
[00179] The purpose of these figures is to illustrate before and after application of automatic object recognition (AOR) as displayed on the SHDU. The context for this figure is that the wearer is taking a hike in the forest. The smart phone with stereoscopic cameras could be worn as a body camera, running continuously. The body camera is operating in a look-forward mode scanning back and forth covering a hemispherical or near hemispherical field of regard. The SHDU could be operating in the mixed reality mode. Figure 10A depicts the cluttered forest scene with trees, bushes and grass, and some objects of possible interest which are difficult to distinguish among all the forest vegetation. The scene as it appears at Time 1 is shown on the SHDU 1001 using a left eye display 1002 and right eye display 1003. The user issues the command "Run AOR" and, in a very short time interval, Figure 10B appears on the SHDU displays.
[00180] Figure 10B illustrates after application of automatic object recognition as displayed on the stereoscopic head display unit. Again, the user is in the forest enjoying the daytime hike. The smart phone is worn like a body camera and is in the scanning mode (at least a 60 degree arc covering the path and 30 degrees to either side), alternating between looking straight ahead and looking down at close range. The SHDU is in the mixed reality mode and the AOR and AI are operating together. As circumstances would happen, there was an item of interest (e.g., a snake, which is dangerous) near the path. The AOR would detect and classify the snake and rapidly pass the information to the AI. The AI would assess the situation as 'danger close' and immediately provide feedback to the user (e.g., flash 'STOP, STOP, STOP' in big red letters on the SHDU displays). The SHDU speakers would sound the alarm and repeat STOP. There are numerous situations wherein the combination of AOR and AI would be beneficial to the user. Figure 10B thus illustrates wherein the AOR has correctly classified the man and the deer in the scene. 1004A illustrates a man visible through the left see-through portion of the SHDU. Note that in some embodiments, an image of a man from a left stereoscopic camera can be displayed on the left eye display of the SHDU. 1004B illustrates a man visible through the right see-through portion of the SHDU. Note that in some embodiments, an image of a man from a right stereoscopic camera can be displayed on the right eye display of the SHDU. 1005A illustrates a deer visible through the left see-through portion of the SHDU. Note that in some embodiments, an image of a deer from a left stereoscopic camera can be displayed on the left eye display of the SHDU. 1005B illustrates a deer visible through the right see-through portion of the SHDU. Note that in some embodiments, an image of a deer from a right stereoscopic camera can be displayed on the right eye display of the SHDU. Note that in some embodiments, some items within the scene have been filtered (subtracted from the scene). This improves understanding of the items of interest. Note that in some embodiments, a novel augmented reality feature is to label items within the field of view, such as the man and deer being labeled for easy user recognition.
[00181] Figure 10C invokes another stereo camera unique technology. Specifically, since the cameras are of higher resolution than the SHDU displays, the pixels on the display are down sampled in order to get the full scene in the field of view (FOV) onto the SHDU displays. At the user command "Focus deer", the stereoscopic cameras change the FOV to the narrow field of view (NFOV). For the NFOV, the down sampling is discontinued and, in a very short time interval, the left image 1006A of the deer from the left stereoscopic camera is displayed in full resolution on the left eye display and the right image 1006B of the deer from the right stereoscopic camera is displayed in full resolution on the right eye display. These images are displayed at Time 1++.
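A minimal sketch of the wide-FOV versus narrow-FOV display logic is given below, assuming NumPy arrays for the sensor image. The nearest-neighbor down-sampling, the function name, the example resolutions and the centering-and-clamping choices are illustrative assumptions, not the disclosed processing chain.

```python
import numpy as np

def render_for_display(sensor_image, display_h, display_w, nfov_center=None):
    """Full FOV: down-sample the higher-resolution sensor image to the
    display size. NFOV (e.g., after 'Focus deer'): stop down-sampling and
    return a display-sized crop, at native resolution, centered on the object."""
    h, w = sensor_image.shape[:2]
    if nfov_center is None:
        rows = np.linspace(0, h - 1, display_h).astype(int)
        cols = np.linspace(0, w - 1, display_w).astype(int)
        return sensor_image[np.ix_(rows, cols)]
    r, c = nfov_center
    top = int(np.clip(r - display_h // 2, 0, h - display_h))
    left = int(np.clip(c - display_w // 2, 0, w - display_w))
    return sensor_image[top:top + display_h, left:left + display_w]

left_sensor = np.zeros((3000, 4000, 3), dtype=np.uint8)        # camera resolution (example)
wide = render_for_display(left_sensor, 1080, 1200)             # down-sampled full scene
deer = render_for_display(left_sensor, 1080, 1200, nfov_center=(1500, 2600))  # full-res NFOV crop
```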
[00182] At each of the pre-planned FOVs the AOR is run, and objects of potential interest are identified and displayed in the SHDU displays.
[00183] For example, consider this scenario: as the walk progresses, it becomes necessary for the user to cross a road. A very quiet electric vehicle (EV) is approaching. The AOR recognizes the vehicle and passes it to the AI. The AI self-tasks the question 'is it safe for the user to cross the road'. The SHDU is equipped with a laser range finder (LRF) which is used to determine the range from the user to the EV and the range rate of change of the EV (i.e., the AI problem is "how fast is the EV going?" and "when will the vehicle get to the user location?"). If the expected arrival time of the EV is such that the EV might intersect with the user, then the SHDU displays would flash in large letters 'DANGER, DANGER, DANGER'. The speakers in the SHDU would sound a warning siren. This could be performed in a variety of scenarios.
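A minimal sketch of the 'is it safe to cross' check is shown below, assuming the laser range finder supplies a range and a range rate (negative when the EV is closing). The function name, the crossing-time input and the 3-second safety margin are illustrative assumptions; the disclosure does not prescribe a specific formula.

```python
def crossing_alert(range_m, range_rate_m_per_s, time_to_cross_s, margin_s=3.0):
    """Return True if the SHDU should flash 'DANGER' because the EV could
    reach the user before the crossing (plus a safety margin) is complete."""
    if range_rate_m_per_s >= 0:          # vehicle is not closing on the user
        return False
    time_to_arrival_s = range_m / -range_rate_m_per_s
    return time_to_arrival_s < (time_to_cross_s + margin_s)

# EV at 40 m closing at 8 m/s arrives in ~5 s; a 6 s crossing is not safe.
print(crossing_alert(range_m=40.0, range_rate_m_per_s=-8.0, time_to_cross_s=6.0))  # True
```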
[00184] Figure 11A illustrates integration of image stabilization for a user in a scene where there is vibration. [00185] The scene depicted in Figure 11A is that of a user 1101 riding in a subway along with other passengers labeled 1104. The user 1101 is wearing a SHDU 1102 and holding a stereoscopic camera wherein the phone is linked into the internet. The user spots an article of interest and downloads the article. The subway environment is such that there is considerable vibration, thereby making reading difficult. The user decides to invoke stereoscopic image stabilization. Note that the user, in this case, could invoke the SHDU setting for mixed reality (MR), wherein the user could read the article and simultaneously watch the ongoing activities in the subway car. Alternatively, the user could invoke the option on the SHDU for virtual reality (VR) and obscure the external scene and focus solely on the article of interest. Note that the user could easily change from the VR mode to the MR mode if there were announcements in the subway communication system.
[00186] Figure 11B shows selection of points within the image to use for image stabilization. The SHDU 1105 displays are shown, wherein the stereoscopic image stabilization for the left eye display has selected three words (1106, 1107 and 1108) within the displayed image (text in this example) as reference points to adjust the left eye image frame display to the sequential next left eye image frame display. Similarly, 1109, 1110, and 1111 are reference points for the right eye display and are used to adjust the right eye image frame display to the sequential next right eye image frame display. This adjustment process from frame to frame would continue until the user proceeded to other parts of the article, at which time the stereoscopic image stabilization process would select new reference points.
[00187] Figure 11C illustrates selection of points within the scene to use for image stabilization. A user 1112 is on a walk through an area of interest which, in this case, is an urban area. The user 1112 is wearing a SHDU 1113 and has a smart stereoscopic camera 1114 in a body-wear position to operate in the continuous mode of operation of streaming stereoscopic imagery from the stereoscopic cameras. Within the area within which the user is walking there are structures including a first building 1115, a second building 1116 and a fountain 1117, each of which has distinctive features which could be used as reference points for stereoscopic image stabilization. Note that the user may choose to use the microphone on the SHDU to record observations during the walk through the area of interest. [00188] Figure 11D illustrates stereoscopic image stabilization on the SHDU. Figure 11D shows the SHDU 1118 wherein the stereoscopic image stabilization for the left eye display has selected three key points within the area (1119, 1120 and 1121) as reference points to adjust the left eye image frame display to the sequential next left eye image frame display. Similarly, 1122, 1123, and 1124 are reference points for the right eye display and are used to adjust the right eye image frame display to the sequential next right eye image frame display. This adjustment process from frame to frame would continue until the user proceeded to other parts of the area, at which time the stereoscopic image stabilization process would select new reference points. In some embodiments, for the left image displayed in the SHDU 1118, a first set of points is used for image stabilization and for the right image displayed in the SHDU 1118, the same first set of points is used for image stabilization. In other embodiments, for the left image displayed in the SHDU 1118, a first set of points is used for image stabilization and for the right image displayed in the SHDU, a second set of points is used for image stabilization wherein the first set of points is different from the second set of points.
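The frame-to-frame adjustment described for Figures 11B and 11D can be sketched as estimating a translation from the chosen reference points and shifting the next frame to cancel it. The mean-displacement estimate, the integer-pixel roll and the point coordinates below are illustrative assumptions; a full implementation would also handle rotation, scale and outliers.

```python
import numpy as np

def stabilization_offset(ref_points_prev, ref_points_curr):
    """Estimate the (row, col) shift to apply to the current frame so its
    reference points line up with the previous frame's reference points."""
    prev = np.asarray(ref_points_prev, dtype=float)
    curr = np.asarray(ref_points_curr, dtype=float)
    return (prev - curr).mean(axis=0)

def stabilize(frame, offset_rc):
    """Apply an integer-pixel shift that counteracts the measured motion."""
    dy, dx = int(round(offset_rc[0])), int(round(offset_rc[1]))
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

# Three reference words/features drifted down-right by 2 pixels between frames.
prev_pts = [(100, 200), (150, 220), (130, 260)]
curr_pts = [(102, 202), (152, 222), (132, 262)]
print(stabilization_offset(prev_pts, curr_pts))   # -> [-2. -2.]
```

In a stereoscopic embodiment, the same procedure would be run independently (or with a shared set of points) for the left eye stream and the right eye stream.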
[00189] FIG. 12A illustrates determining a stabilization point and field of view (FOV) for the left eye imagery and right eye imagery. Left eye imagery and right eye imagery at time = N and time = N + 1 are shown. 1200 illustrates a point in the left eye imagery, which is used for image stabilization at time = N. 1200A illustrates the field of view (FOV) from the left eye imagery. 1200B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user. Note that 1200B is a subset of 1200A, which is determined based on point 1200. In the preferred embodiment, 1200A is based on using the point 1200 as the center of 1200A. 1201 illustrates a point in the right eye imagery, which is used for image stabilization at time = N. 1201A illustrates the field of view (FOV) from the right eye imagery. 1201B illustrates a portion of the field of view (FOV) from the right eye imagery, which is the portion to be displayed to a user. Note that 1201B is a subset of 1201A, which is determined based on point 1201. In the preferred embodiment, 1201A is based on using the point 1201 as the center of 1201A. 1202 illustrates the point in the left eye imagery, which is used for image stabilization at time = N + 1. 1202A illustrates the field of view (FOV) from the left eye imagery. 1202B illustrates a portion of the field of view (FOV) from the left eye imagery, which is the portion to be displayed to a user. Note that 1202B is a subset of 1202A, which is determined based on point 1202. In the preferred embodiment, 1202A is based on using the point 1202 as the center of 1202A. 1203 illustrates the point in the right eye imagery, which is used for image stabilization at time = N + 1. 1203A illustrates the field of view (FOV) from the right eye imagery. 1203B illustrates a portion of the field of view (FOV) from the right eye imagery, which is the portion to be displayed to a user. Note that 1203B is a subset of 1203A, which is determined based on point 1203. In the preferred embodiment, 1203A is based on using the point 1203 as the center of 1203A. In some embodiments, a characteristic feature which can be easily distinguished in the image is used as a stabilizing point for both the left eye imagery and the right eye imagery. In some embodiments, the stabilizing point for the left eye imagery is different from the stabilizing point for the right eye imagery. In some embodiments, image stabilization is used for close range imagery, but not for longer range imagery. In some embodiments, a delay of less than 5 seconds is performed.
[00190] Figure 12B illustrates displaying stabilized stereoscopic imagery on a SHDU. 1204 illustrates the stereoscopic head display unit (SHDU). The top image illustrates what is displayed in a SHDU at time = N + delay, which includes the portion of the field of view (FOV) from the left eye imagery 1200B for the left eye display and the portion of the field of view (FOV) from the right eye imagery 1201B for the right eye display. The bottom image illustrates what is displayed in a SHDU at time = N + 1 + delay, which includes the portion of the field of view (FOV) from the left eye imagery 1202B for the left eye display and the portion of the field of view (FOV) from the right eye imagery 1203B for the right eye display.
[00191] Figure 13A illustrates the SSP with its stereoscopic cameras in a first position, which is wide. 1300A illustrates the SSP in a first configuration at time = N. 1301A illustrates a first camera of the SSP’s stereoscopic camera system, which is located at a first position on the SSP. 1302A illustrates a second camera of the SSP’s stereoscopic camera system, which is located at a second position on the SSP. Note that the first camera 1301A is separated from the second camera 1302A by a first stereo distance.
[00192] Figure 13B illustrates the SSP with its stereoscopic cameras in a second position, which is narrow. 1300B illustrates the SSP in a second configuration at time = N + 1. Note that the second configuration is different from the first configuration. 1301B illustrates the first camera of the SSP’s stereoscopic camera system, which is located at a third position on the SSP. 1302B illustrates the second camera of the SSP’s stereoscopic camera system, which is located at a fourth position on the SSP. Note that the first camera 1301B is now separated from the second camera 1302B by a second stereo distance, which is different from the first stereo distance. Note that in this example, the second stereo distance is smaller than the first stereo distance.
[00193] Figure 13C illustrates the SSP with its stereoscopic cameras in a third position, which is also narrow, but shifted in position as compared to Figure 13B. 1300C illustrates the SSP in a third configuration at time = N + 2. Note that the third configuration is different from the first configuration and also different from the second configuration. 1301C illustrates the first camera of the SSP’s stereoscopic camera system, which is located at a fifth position on the SSP. 1302C illustrates the second camera of the SSP’s stereoscopic camera system, which is located at a sixth position on the SSP. Note that the first camera 1301C is separated from the second camera 1302C by the second stereo distance, but the position of the cameras is shifted. This novel design improves tracking of small objects that move, such as a roily polly. In this example, the cameras are shown to move along a line. This is called a camera bar design. In some embodiments, the cameras could move up, down, forward, back, left or right. In some embodiments, the cameras’ orientations can also be adjusted in roll, pitch and yaw. In some embodiments, the movements of the cameras can be coordinated with convergence as shown in Figure 2C. Thus, a novel aspect of this system is that the SSP’s stereoscopic cameras can be reconfigured into different positions and orientations on the SSP. This reconfigurable design would allow depth perception at various distances and could achieve satisfactory depth at ranges of < 2 inches. In some embodiments, a first set of lenses for the stereoscopic camera system could be used for a first stereo distance and a second set of lenses for the stereoscopic camera system could be used for a second stereo distance, wherein the first set of lenses are nearer focus lenses than the second set of lenses and the first stereo distance is smaller than the second stereo distance.
[00194] Figure 14 illustrates optimizing stereoscopic imaging. 1400 illustrates determining an object of interest. The preferred embodiment is to use automatic object recognition (AOR). Some embodiments comprise using eye tracking, as taught in US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS, filed on 8/19/2020. Some embodiments comprise wherein a user selects an object of interest via a graphical user interface or voice commands. 1401 illustrates determining the distance and angle from the stereoscopic cameras to the object of interest. The preferred embodiment is to use a laser range finder, as taught by US Patent 11,006,100, SMART GLASSES SYSTEM. Other distance measurement technologies are also possible. In some embodiments, the colors and brightness of the images are also determined. 1402 illustrates reconfiguring the stereoscopic cameras based on the distance from the stereoscopic cameras to the object of interest, wherein reconfiguring the stereoscopic cameras comprises: changing the stereo separation of the left stereoscopic camera from the right stereoscopic camera; moving the left stereoscopic camera and the right stereoscopic camera; changing the convergence of the left stereoscopic camera and the right stereoscopic camera; changing the zoom setting of the left stereoscopic camera and the right stereoscopic camera; and changing the ISO setting of the left stereoscopic camera and the right stereoscopic camera. Other adjustments include: adjusting the shutter speed; adjusting the focal length; adjusting the field of view; adjusting the detector position / orientation; adjusting the camera position / orientation; adjusting the convergence angle; and adjusting a deformable mirror. In some embodiments, these adjustments are also based on the colors and brightness of the images. In some embodiments, a table of optimized stereoscopic camera settings is generated based on distance. This table can be referenced as a look up table. Once the distance and angle are determined, the camera settings can be looked up and automatically implemented. For X distance, the stereo distance would be looked up and found to be Y, and the stereoscopic cameras would be set at a stereo distance of Y.
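A minimal sketch of the look-up-table step is shown below. The distance bins, stereo separations and zoom values are placeholders invented for illustration, not calibrated settings from the disclosure.

```python
# Illustrative table: (upper bound of distance bin in meters, camera settings).
STEREO_SETTINGS_BY_DISTANCE = [
    (0.05, {"stereo_distance_m": 0.005, "zoom": 1.0}),   # very close range (< ~2 inches)
    (0.5,  {"stereo_distance_m": 0.02,  "zoom": 1.0}),
    (5.0,  {"stereo_distance_m": 0.06,  "zoom": 2.0}),
    (50.0, {"stereo_distance_m": 0.10,  "zoom": 5.0}),
    (float("inf"), {"stereo_distance_m": 0.12, "zoom": 10.0}),
]

def settings_for_distance(distance_m):
    """For a measured distance X to the object of interest, look up the
    stereo distance Y (and other settings) to apply to the cameras."""
    for upper_bound, settings in STEREO_SETTINGS_BY_DISTANCE:
        if distance_m <= upper_bound:
            return settings
    return STEREO_SETTINGS_BY_DISTANCE[-1][1]

print(settings_for_distance(1.2))   # -> {'stereo_distance_m': 0.06, 'zoom': 2.0}
```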
[00195] Figure 15 illustrates adjusting stereoscopic camera componentry based on eye tracking parameters of a user. 1500 illustrates performing eye tracking via a headset to determine which object a user is looking at, per US Patent Application 16/997,830, ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS, which is incorporated by reference in its entirety. 1501 illustrates adjusting camera componentry to optimize viewing based on the eye tracking, which includes: adjusting the shutter speed; adjusting the focal length; adjusting the ISO; adjusting the field of view; adjusting the detector position / orientation; adjusting the camera position / orientation; adjusting the convergence angle; and adjusting a deformable mirror. In some embodiments, eye tracking is used to adjust the camera componentry to optimize imaging using a single camera system. In some embodiments, eye tracking is used to adjust the camera componentry to optimize imaging using a stereoscopic camera system.
[00196] Figure 16 illustrates a three-camera stereoscopic smart phone. 1600 illustrates the three-camera stereoscopic smart phone. 1601A illustrates a first camera. 1601B illustrates a second camera. 1601C illustrates a third camera. The first camera 1601A has a first location on said smart phone. The second camera 1601B has a second location on said smart phone, which is different from said first location. The third camera 1601C has a third location on said smart phone, which is different from said first location and the second location. In some embodiments, the first camera 1601A location is fixed. In some embodiments, the first camera 1601A location is movable. In some embodiments, the second camera 1601B location is fixed. In some embodiments, the second camera 1601B location is movable. In some embodiments, the third camera 1601C location is fixed. In some embodiments, the third camera 1601C location is movable. The third camera 1601C has a third location on said smart phone, and said third location is different from said first location and said second location. The three-camera stereoscopic smart phone 1600 has an imaging system configured to track an object's location in an area. This can be based on using the first camera 1601A, the second camera 1601B, the third camera 1601C or another device including a LIDAR device or infrared device. The first camera 1601A and the second camera 1601B are separated by a first stereo distance. The second camera 1601B and the third camera 1601C are separated by a second stereo distance. The third camera 1601C and the first camera are separated by a third stereo distance. The first stereo distance is smaller than the second stereo distance. The third stereo distance is larger than the first stereo distance and the second stereo distance. The smart phone is configured to use data from the imaging system, the first camera 1601A, the second camera 1601B and the third camera 1601C to acquire enhanced stereoscopic imagery of the object in the area. If the object is a first distance from the smart phone, the first camera 1601A and the second camera 1601B are used to acquire the enhanced stereoscopic imagery. If the object is a second distance from the smart phone, wherein the second distance is larger than the first distance, the second camera 1601B and the third camera 1601C are used to acquire said enhanced stereoscopic imagery. If the object is a third distance from said smart phone, wherein the third distance is larger than the second distance, the first camera 1601A and the third camera 1601C are used to acquire said enhanced stereoscopic imagery.
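The distance-based pair selection described for Figure 16 can be sketched as follows; the two threshold values and the identifier strings are illustrative assumptions rather than disclosed parameters.

```python
def select_camera_pair(object_distance_m, near_threshold_m=1.0, far_threshold_m=10.0):
    """Choose which two of the three cameras form the stereo pair: the
    closest-spaced pair (1601A/1601B) for near objects, the intermediate
    pair (1601B/1601C) for mid-range, and the widest pair (1601A/1601C) for far."""
    if object_distance_m <= near_threshold_m:
        return ("camera_1601A", "camera_1601B")
    if object_distance_m <= far_threshold_m:
        return ("camera_1601B", "camera_1601C")
    return ("camera_1601A", "camera_1601C")

print(select_camera_pair(0.4))    # near object    -> first and second cameras
print(select_camera_pair(25.0))   # distant object -> first and third cameras
```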
[00197] Throughout the entirety of the present disclosure, use of the articles "a" or "an" to modify a noun may be understood to be used for convenience and to include one, or more than one of the modified noun, unless otherwise specifically stated. Elements, components, modules, and/or parts thereof that are described and/or otherwise portrayed through the figures to communicate with, be associated with, and/or be based on, something else, may be understood to so communicate, be associated with, and/or be based on in a direct and/or indirect manner, unless otherwise stipulated herein. The device(s) or computer systems that integrate with the processor(s) may include, for example, a personal computer(s), workstation(s) (e.g., Sun, HP), personal digital assistant(s) (PDA(s)), handheld device(s) such as cellular telephone(s), laptop(s), handheld computer(s), or another device(s) capable of being integrated with a processor(s) that may operate as provided herein. Accordingly, the devices provided herein are not exhaustive and are provided for illustration and not limitation. References to "a microprocessor" and "a processor", or "the microprocessor" and "the processor," may be understood to include one or more microprocessors that may communicate in a stand-alone and/or a distributed environment(s), and may thus be configured to communicate via wired or wireless communications with other processors, where such one or more processors may be configured to operate on one or more processor-controlled devices that may be similar or different devices. Use of such "microprocessor" or "processor" terminology may thus also be understood to include a central processing unit, an arithmetic logic unit, an application-specific integrated circuit (IC), and/or a task engine, with such examples provided for illustration and not limitation. Furthermore, references to memory, unless otherwise specified, may include one or more processor-readable and accessible memory elements and/or components that may be internal to the processor-controlled device, external to the processor-controlled device, and/or may be accessed via a wired or wireless network using a variety of communications protocols, and unless otherwise specified, may be arranged to include a combination of external and internal memory devices, where such memory may be contiguous and/or partitioned based on the application. Accordingly, references to a database may be understood to include one or more memory associations, where such references may include commercially available database products (e.g., SQL, Informix, Oracle) and also include proprietary databases, and may also include other structures for associating memory such as links, queues, graphs, trees, with such structures provided for illustration and not limitation. References to a network, unless provided otherwise, may include one or more intranets and/or the Internet, as well as a virtual network. References herein to microprocessor instructions or microprocessor-executable instructions, in accordance with the above, may be understood to include programmable hardware.
[00198] Unless otherwise stated, use of the word "substantially" may be construed to include a precise relationship, condition, arrangement, orientation, and/or other characteristic, and deviations thereof as understood by one of ordinary skill in the art, to the extent that such deviations do not materially affect the disclosed methods and systems. Throughout the entirety of the present disclosure, use of the articles "a" or "an" to modify a noun may be understood to be used for convenience and to include one, or more than one of the modified noun, unless otherwise specifically stated. Elements, components, modules, and/or parts thereof that are described and/or otherwise portrayed through the figures to communicate with, be associated with, and/or be based on, something else, may be understood to so communicate, be associated with, and/or be based on in a direct and/or indirect manner, unless otherwise stipulated herein. Although the methods and systems have been described relative to a specific embodiment thereof, they are not so limited. Obviously, many modifications and variations may become apparent in light of the above teachings. Many additional changes in the details, materials, and arrangement of parts, herein described and illustrated, may be made by those skilled in the art. Having described preferred embodiments of the invention it will now become apparent to those of ordinary skill in the art that other embodiments incorporating these concepts may be used. Additionally, the software included as part of the invention may be embodied in a computer program product that includes a computer useable medium. For example, such a computer usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, or a computer diskette, having computer readable program code segments stored thereon. The computer readable medium can also include a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog signals. Accordingly, it is submitted that the invention should not be limited to the described embodiments but rather should be limited only by the spirit and scope of the appended claims.
[00199] Several features, aspects, embodiments and implementations have been described. Nevertheless, it will be understood that a wide variety of modifications and combinations may be made without departing from the scope of the inventive concepts described herein. Accordingly, those modifications and combinations are within the scope of the following claims.

Claims

CLAIMS What is claimed is:
1. A method of stereoscopic imaging comprising: using a left camera and a right camera of a stereoscopic camera system to perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera’s location and said right camera’s location are separated by a first stereoscopic distance, wherein said left camera has a first pointing direction, wherein said right camera has a first pointing direction, and wherein said left camera's first pointing direction is different from said right camera's first pointing direction, and wherein said left camera's first pointing direction and said right camera's first pointing direction are towards a first convergence point; changing said left camera’s first pointing direction to a second pointing direction wherein said left camera’s second pointing direction is different than said left camera’s first pointing direction, wherein said left camera’s second pointing direction points towards a second convergence point, wherein said second convergence point is different from said first convergence point, and wherein either said left camera's location on said stereoscopic camera system changes based on said second convergence point or said left camera's zoom setting changes based on said second convergence point; changing said right camera’s first pointing direction to a second pointing direction wherein said right camera’s second pointing direction is different than said right camera’s first pointing direction, wherein said right camera’s second pointing direction points towards said second convergence point, wherein said left camera's second pointing direction is different from said right camera's second pointing direction, and wherein either said right camera's location on said stereoscopic camera system changes based on said second convergence point or said right camera's zoom setting changes based on said second convergence point; and using said left camera and said right camera of said stereoscopic camera system to perform subsequent stereoscopic imaging of said area with said left camera’s second pointing direction and said right camera’s second pointing direction.
2. The method of claim 1 further comprising: wherein said first convergence point is positioned such that a distance from said left camera’s location to said first convergence point is not equal to a distance from said right camera’s location to said first convergence point; and wherein said left camera's zoom setting does not equal said right camera's zoom setting.
3. The method of claim 1 further comprising wherein said second convergence point is positioned such that a distance from said left camera’s location to said second convergence point is equal to a distance from said right camera’s location to said second convergence point.
4. The method of claim 1 further comprising: wherein if said left camera's location changes to a new location and if said right camera's location changes to a new location, said left camera's new location and said right camera's new location are separated by a second stereoscopic distance; and wherein said second stereoscopic distance is different from said first stereoscopic distance.
5. The method of claim 1 further comprising: wherein a distance from said stereoscopic camera system to said first convergence point is smaller than a distance from said stereoscopic camera system to said second convergence point; wherein said initial stereoscopic imaging has a first zoom setting; wherein said subsequent stereoscopic imaging has a second zoom setting; and wherein said second zoom setting has greater magnification than said first zoom setting.
6. The method of claim 1 further comprising: wherein said left camera’s first pointing direction is determined based on said left camera’s orientation; and wherein said right camera’s first pointing direction is determined based on said right camera’s orientation.
7. The method of claim 1 further comprising: displaying left eye imagery and right eye imagery from said initial stereoscopic imaging of said area on a stereoscopic head display unit (SHDU); and displaying left eye imagery and right eye imagery from said subsequent stereoscopic imaging of said area on said SHDU wherein said SHDU comprises a virtual reality display, an augmented reality display or a mixed reality display.
8. The method of claim 7 further comprising: wherein said stereoscopic camera system is acquired on a smart phone; wherein said first convergence point and second convergence point are determined based on object tracking in said area; and wherein said SHDU and said smart phone communicate via a wired connection, a wireless connection via BlueTooth or a wireless connection via an Internet.
9. The method of claim 7 further comprising wherein automatic object recognition is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU.
10. The method of claim 9 further comprising wherein artificial intelligence is performed in conjunction with said automatic object recognition to alert a user regarding findings in said area.
11. The method of claim 7 further comprising wherein stereoscopic image stabilization is performed on said left eye imagery and said right eye imagery from said initial stereoscopic imaging of said area on said SHDU.
12. The method of claim 1 further comprising: determining a spatial relationship between said stereoscopic camera system and an object of interest; and reconfiguring said stereoscopic cameras based on said spatial relationship wherein reconfiguring said stereoscopic cameras comprises changing said stereoscopic distance to a subsequent stereoscopic distance wherein said subsequent stereoscopic distance is different than said stereoscopic distance.
13. The method of claim 1 further comprising: wherein said subsequent stereoscopic imaging of said area is performed using a second stereoscopic distance; and wherein said second stereoscopic distance is smaller than said first stereoscopic distance.
14. The method of claim 1 further comprising wherein said stereoscopic camera system is placed on a smart phone, a tablet or a laptop.
15. The method of claim 1 further comprising: wherein a scene sensing device maps said area; wherein tracking of an object in said area is performed, wherein said first convergence point is determined based on said object’s location in said area at a first time point and said second convergence point is determined based on said object’s location in said area at a second time point.
16. The method of claim 1 further comprising wherein said first convergence point and said second convergence point are determined based on eye tracking metrics of a user.
17. The method of claim 1 further comprising wherein said first convergence point and said second convergence point are determined based on an artificial intelligence algorithm.
18. The method of claim 1 further comprising wherein a sensor system of said stereoscopic camera system comprises a composite sensor array.
19. A stereoscopic head display unit (SHDU) comprising: a head display unit with a left eye display and a right eye display wherein said SHDU is configured to: receive initial stereoscopic imagery from a stereoscopic imaging system wherein said initial stereoscopic imagery comprises initial left eye imagery and initial right eye imagery; display said initial left eye imagery on said left eye display; display said initial right eye imagery on said right eye display; receive subsequent stereoscopic imagery from said stereoscopic imaging system wherein said subsequent stereoscopic imagery comprises subsequent left eye imagery and subsequent right eye imagery; display said subsequent left eye imagery on said left eye display; display said subsequent right eye imagery on said right eye display; wherein said stereoscopic imaging system comprises a left camera and a right camera; and wherein said stereoscopic camera system image is configured to: use said left camera and said right camera of said stereoscopic camera system to perform said initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera’s location and said right camera’s location are separated by a first stereoscopic distance, wherein said left camera has a first pointing direction, wherein said right camera has a first pointing direction, and wherein said left camera's first pointing direction is different from said right camera's first pointing direction, and wherein said left camera's first pointing direction and said right camera's first pointing direction are towards a first convergence point; change said left camera’s first pointing direction to a second pointing direction wherein said left camera’s second pointing direction is different than said left camera’s first pointing direction, wherein said left camera’s second pointing direction points towards a second convergence point, and wherein said left camera's location on said stereoscopic camera changes based on said second convergence point; change said right camera’s first pointing direction to a second pointing direction wherein said right camera’s second pointing direction is different than said right camera’s first pointing direction, and wherein said right camera’s second pointing direction points towards said second convergence point, wherein said left camera's second pointing direction is different from said right camera's second pointing direction, wherein said right camera's location on said stereoscopic camera setting changes based on said second convergence point, wherein said left camera's location and said right camera's location are separated by a second stereoscopic distance, and wherein said second stereoscopic distance is different from said first stereoscopic distance; and use said left camera and said right camera of said stereoscopic camera system to perform said subsequent stereoscopic imaging of said area with said left camera’s second pointing direction and said right camera’s second pointing direction.
20. A stereoscopic smart phone comprising: a smart phone; and a stereoscopic imaging system operably connected to said smart phone comprising a left camera and a right camera wherein said stereoscopic camera system image is configured to: perform initial stereoscopic imaging of an area wherein said left camera has a location on said stereoscopic camera system, wherein said right camera has a location on said stereoscopic camera system, wherein said left camera’s location and said right camera’s location are separated by a stereoscopic distance, and wherein said left camera has a first pointing direction, wherein said right camera has a first pointing direction, wherein said left camera's first pointing direction is different from said right camera's first pointing direction, and wherein said left camera's first pointing direction and said right camera's first pointing direction are towards a first convergence point; change said left camera’s first pointing direction to a second pointing direction wherein said left camera’s second pointing direction is different than said left camera’ s first pointing direction, wherein said left camera’s second pointing direction points towards a second convergence point, and wherein said left camera's zoom setting changes based on said second convergence point; change said right camera’ s first pointing direction to a second pointing direction, wherein said right camera’s second pointing direction is different than said right camera’s first pointing direction, wherein said right camera’s second pointing direction points towards said second convergence point, wherein said left camera's second pointing direction is different from said right camera's second pointing direction, and wherein said right camera's zoom setting changes based on said second convergence point; and use said left camera and said right camera of said stereoscopic camera system to perform subsequent stereoscopic imaging of said area with said left camera’s second pointing direction and said right camera’s second pointing direction.
PCT/US2023/020758 2022-05-31 2023-05-03 A method and apparatus for a stereoscopic smart phone WO2023235093A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US17/829,256 2022-05-31
US17/829,256 US11627299B1 (en) 2019-08-20 2022-05-31 Method and apparatus for a stereoscopic smart phone
US18/120,422 US11877064B1 (en) 2019-08-20 2023-03-12 Method and apparatus for a stereoscopic smart phone
US18/120,422 2023-03-12

Publications (1)

Publication Number Publication Date
WO2023235093A1 (en) 2023-12-07

Family

ID=89025447

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/020758 WO2023235093A1 (en) 2022-05-31 2023-05-03 A method and apparatus for a stereoscopic smart phone

Country Status (1)

Country Link
WO (1) WO2023235093A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150116202A1 (en) * 2012-03-07 2015-04-30 Sony Corporation Image processing device and method, and program
US20160086379A1 (en) * 2014-09-22 2016-03-24 Samsung Electronics Company, Ltd. Interaction with three-dimensional video
US20200349846A1 (en) * 2018-01-08 2020-11-05 Foresight Automotive Ltd. A multi-spectral system for providing precollision alerts

Similar Documents

Publication Publication Date Title
CN112771539B (en) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
JP6444886B2 (en) Reduction of display update time for near eye display
CN106255978B (en) Facial expression tracking
US20220113814A1 (en) Smart ring for manipulating virtual objects displayed by a wearable device
US20150262424A1 (en) Depth and Focus Discrimination for a Head-mountable device using a Light-Field Display System
US20130241805A1 (en) Using Convergence Angle to Select Among Different UI Elements
WO2018118538A1 (en) Interactive virtual objects in mixed reality environments
US9442292B1 (en) Directional array sensing module
CN110573996A (en) Multi-view eye tracking for VR/AR systems
US20230400913A1 (en) Virtual models for communications between user devices and external observers
US11843758B2 (en) Creation and user interactions with three-dimensional wallpaper on computing devices
CN106484116A (en) The treating method and apparatus of media file
US10764556B2 (en) Depth sculpturing of three-dimensional depth images utilizing two-dimensional input selection
US20210295587A1 (en) Stylized image painting
JPWO2019031005A1 (en) Information processing apparatus, information processing method, and program
KR20200082109A (en) Feature data extraction and application system through visual data and LIDAR data fusion
CN115668105A (en) Eye-wear including clustering
US11656471B2 (en) Eyewear including a push-pull lens set
US20210390882A1 (en) Blind assist eyewear with geometric hazard detection
EP4042230A1 (en) Multi-dimensional rendering
US20230412779A1 (en) Artistic effects for images and videos
US11627299B1 (en) Method and apparatus for a stereoscopic smart phone
US11877064B1 (en) Method and apparatus for a stereoscopic smart phone
WO2023235093A1 (en) A method and apparatus for a stereoscopic smart phone
KR102169885B1 (en) Night vision system displaying thermal energy information and its control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23816530

Country of ref document: EP

Kind code of ref document: A1