US11410330B2 - Methods, devices, and systems for determining field of view and producing augmented reality - Google Patents
Classifications
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T11/60—Editing figures and text; Combining figures or text
- G06T2207/30244—Camera pose
Definitions
- the invention generally relates to image processing techniques, photography, and augmented reality.
- embodiments of the invention relate to image processing and field of view determinations that permit or improve augmented reality methods and systems.
- Multipurpose mobile electronic devices such as smartphones and tablets frequently have one or more built-in cameras alongside other hardware such as a display and speakers.
- the market for multipurpose devices like smartphones is considerable, and many companies use proprietary cameras or specialized camera configurations as a means for differentiating their own products from those of competitors.
- the consumer marketplace contains hundreds of different cameras with their own separate and unique specifications. Camera specifications differ not only between different companies/brands (e.g., Apple and Samsung), but also between different product series of the same company/brand (e.g., Samsung Galaxy and Samsung Note), and between different product versions within the same series (e.g., Samsung Galaxy S6 and Samsung Galaxy S7).
- because any one single company controls only a fraction of the mobile device market, and because any one specific mobile device model of that single company represents only a fraction of that fraction, it is not economically sensible for software developers to develop software products which cater only to a specific mobile device model, specific mobile device series, or even a specific company. Rather, the common industry trend is for software developers to create software products which are compatible with a wide variety of disparate hardware devices.
- Augmented reality products for instance, can be heavily dependent on camera hardware. Augmented reality attempts to augment a real world experience, and to do this the devices or systems providing the augmentation must have an “understanding” of real world surroundings.
- Such “understanding” is most often obtained using one or more cameras which capture images or videos of the surroundings.
- Other sensors, such as location sensors (e.g., GPS devices), may also be used to collect data used as input to the logic decisions which ultimately produce an augmented reality experience. Because of the diversity of camera specifications, however, it becomes problematic for an augmented reality device to “understand” or characterize image or video data received from the camera.
- Exemplary embodiments provide methods and apparatuses which are configured to determine the field of view of a camera for which the field of view was not previously known.
- Field of view may be determined using a limited set of inputs, including a feed from the camera, referential data describing the locations of recognizable real world objects like the Empire State Building or a street sign, and location information for the camera (e.g., GPS data).
- Orientation information may also be used as an input to describe the direction a camera is facing (e.g., north or south; towards the sky or towards the ground).
- this field of view may be applied to a virtual world which creates a virtual landscape corresponding with a real world landscape.
- a virtual model of New York City could have many of the same general dimensions, proportions, and other qualities of the real New York City.
- the virtual model also contains virtual objects which may or may not correspond with the real world.
- a virtual model of New York City may have a virtual building that resembles the Empire State Building, and it may also have a virtual hot dog stand at the corner of Broadway and 7th Street which does not correspond with any real hot dog stand.
- the virtual model may simply be virtual model data stored in a memory repository.
- Embodiments of the invention, using the field of view that was determined, select particular virtual objects to represent as augmentations to a user. So, for instance, just the virtual hot dog stand may be shown to a user if the user is looking at the real world corner of Broadway and 7th Street. In this manner, augmented realities with accurately delivered (e.g., positioned) augmentations are made possible despite initially lacking the field of view of the camera being used to understand the real world surroundings of a user.
- FIG. 1 is a diagram of a frustum.
- FIG. 2 is a diagram illustrating a relatively wide field of view that encompasses two real objects and a nearby virtual object.
- FIG. 3 is a diagram illustrating a relatively narrow field of view that encompasses two real objects but not a nearby virtual object.
- FIG. 4 is a diagram of an augmented reality (AR) device that may have either the relatively wide field of view from FIG. 2 or the relatively narrow field of view from FIG. 3 .
- FIG. 5 is an image corresponding with the relatively wide field of view in FIG. 2 .
- FIG. 6 is an image corresponding with the relatively narrow field of view in FIG. 3 .
- FIG. 7 is a diagram of an exemplary system for determining the field of view of a camera and providing an augmented reality.
- FIG. 8A is a camera image that includes at least two detectable real world objects. The image is labeled with distance measurements used during the process of FIG. 7 .
- FIG. 8B is a top down view showing the spatial arrangement of the camera and real world objects discussed in connection with FIGS. 7 and 8A .
- FIG. 9 is a top down view showing the same spatial arrangement as in FIG. 8B but with an alternative system of measurement.
- the labels correspond with distance and angle measures used during the process of FIG. 7 .
- FIG. 10A is a camera image that includes at least one detectable real world object.
- FIG. 10B is a top down view showing the spatial arrangement of the camera and real world object discussed in connection with FIG. 10A .
- FIG. 11 is a top down view showing the same spatial arrangement as in FIG. 10B but with an alternative system of measurement.
- Augmented reality is a direct or indirect experience of a physical, real-world environment in which one or more elements are augmented by computer-generated sensory output such as but not limited to sound, video, graphics, or haptic feedback.
- Augmented reality is frequently but not necessarily live/in substantially real time. It is related to a more general concept called “mediated reality”, in which a view of reality is modified (e.g., diminished or augmented) by a computer. The general intent is to enhance one's natural perception of reality (e.g., as perceived by their senses without external devices).
- in contrast to mediated reality, “virtual reality” replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements.
- For example, many Americans are accustomed to augmented reality when watching American football on a television. A football game as captured by video cameras is a real world view. However, the broadcasting company frequently augments the recorded image of the real world view with the line of scrimmage and first down markers on the field. The line and markers do not exist in reality; rather, they are virtual augmentations that are added to the real world view. As another example, in televised Olympic races, moving virtual lines can be superimposed on tracks and swimming pools to represent the position of a runner or swimmer keeping pace with the world record in the event. Augmented reality that is not in real-time can be, for example, superimposing the line of scrimmage over the image of a football match that is being displayed after the match has already taken place. Augmented reality permits otherwise imperceptible information about the environment and its objects to supplement (e.g., be overlaid on) a view or image of the real world.
- Augmented reality differs from a heads-up display, or HUD.
- a HUD displays virtual objects overlaid onto a view of the real world, but the virtual objects are not associated visually with elements of that real world view. Instead, the HUD objects are associated with the physical device that is used to display the HUD, such as a reflective window or a smartphone.
- a HUD moves with the display and not with the real world view. As a result, the virtual objects of the HUD are not perceived as being integrated into the real world view as much as purely being an overlay.
- Embodiments of the invention are primarily concerned with augmented reality as opposed to HUDs, although HUDs may be used in conjunction with augmented reality.
- when a line of scrimmage is shown as an augmentation (augmented reality), the line appears in relation to the field and the players within the real world view. If a camera pans left to look at a coach on a sideline, the center of the field, the players, and the virtual scrimmage line all move off to the right hand side of the view, where they will eventually exit the field of view if the camera pans sufficiently to the left.
- scores of the competing teams are also usually displayed on televisions. The scores are typically superimposed on the view of the game in a top or bottom corner of the television screen. The scores always maintain a corner position in the television.
- when a camera pans left from the players in the center of the field to a coach on the sideline, the scores in essence move left along with the field of view, so that they maintain the exact same position on the display.
- the positions of the scores have no associative relationship to the positions of objects in the real world view. In this way, the scores behave like the virtual objects of a HUD as opposed to “augmentations” as generally used herein.
- a camera includes at least one lens and an image sensor.
- the lens focuses light, aligns it, and produces a round area of light on an image sensor.
- Image sensors are typically rectangular in shape, with the result that the round area of light from the lens is cropped to a standard image format.
- a lens may be a zoom lens or a fixed focal length lens. At the time of writing this application, most mobile multipurpose electronic devices have fixed focal length lenses. However, embodiments of the invention may be suited for either type of lens. Lenses may be categorized according to the range of their focal length. Three standard classifications are wide angle, normal, and telephoto. Categorization depends on focal length (or focal length range) and lens speed.
- Field of view is the extent of the observable world seen at a given moment, e.g., by a person or by a camera.
- Angle of view is related to focal length. Smaller focal lengths allow wider angles of view. Conversely, larger focal lengths result in narrower angles of view. For a 35 mm format system, an 8 mm focal length may correspond with an AOV of 180°, while a 400 mm focal length corresponds with an AOV of 5°, for example. As an example between these two extremes, a 35 mm focal length corresponds with an AOV of 68°. Unaided vision of a human tends to have an AOV of about 45°. “Normal” lenses are intended to replicate the qualities of natural vision and therefore also tend to have an AOV of about 45°.
- Angle of view is also dependent on sensor size. Sensor size and angle of view are positively correlated. A larger sensor size means a larger angle of view. A smaller sensor size means a smaller angle of view.
- Field of view may be computed as FOV = 2·tan⁻¹(d / (2f)), where d is the sensor size and f is the focal length.
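As an illustrative sketch (not part of the patent text), the relation above can be evaluated directly; the example sensor and focal-length values are hypothetical:

```python
import math

def field_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """FOV = 2 * atan(d / (2f)), returned in degrees."""
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

# A sensor dimension equal to twice the focal length gives a 90-degree FOV.
print(field_of_view_deg(36.0, 18.0))  # 90.0
```

Note that the FOV depends on which sensor dimension is used: the sensor width, height, and diagonal give the horizontal, vertical, and diagonal FOV, respectively.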
- FIG. 1 shows an example of a viewing frustum 100 , referred to herein simply as “frustum.”
- the frustum is usually a truncated four-sided (e.g., rectangular) pyramid.
- the frustum may have a different base shape (e.g., a cone).
- a frustum 100 may be defined according to a vertical field of view 101 (an angle, usually expressed in degrees), a horizontal field of view (an angle, usually expressed in degrees), a near limit (a distance or position), and a far limit (a distance or position).
- the near limit is given by a near clip plane 103 of the frustum.
- the far limit is given by a far clip plane 104 of the frustum.
- a frustum also generally includes position and orientation.
- an exemplary frustum includes position, orientation, field of view (horizontal, vertical, and/or diagonal), and near and far limits. Position and orientation may be referred to collectively as “pose.”
- the frustum 100 of FIG. 1 corresponds with the view from a camera or viewpoint 111 .
- a real world setting will involve a camera, whereas a virtual world setting will involve a viewpoint (e.g., a virtual camera).
- virtual objects falling in the region 120 between the viewpoint 111 and the near clip plane 103 are not displayed.
- virtual objects falling in the region 140 which are beyond the far clip plane 104 are not displayed.
- Only virtual objects within the frustum 100 , that is to say within the region between the near and far clip planes 103 and 104 and within the horizontal FOV 102 and vertical FOV 101 , are candidates for representation by augmentation.
- a near clip plane 103 may be set to zero (i.e., at the viewpoint) and a far clip plane 104 may be set to infinity or substantially infinite distance in order to approximate the view from a camera looking upon the real world.
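A minimal containment test for such a frustum might look like the following sketch, assuming a camera at the origin looking along the +z axis; the function and parameter names are not from the patent:

```python
import math

def in_frustum(x: float, y: float, z: float,
               hfov_deg: float, vfov_deg: float,
               near: float, far: float) -> bool:
    """Candidate test: the point must lie between the near and far clip
    planes and within the horizontal and vertical half-angles."""
    if not (near <= z <= far):
        return False
    h_ok = abs(math.degrees(math.atan2(x, z))) <= hfov_deg / 2
    v_ok = abs(math.degrees(math.atan2(y, z))) <= vfov_deg / 2
    return h_ok and v_ok

# A point 10 m straight ahead is inside; one beyond the far plane is not.
print(in_frustum(0, 0, 10, 60, 45, 0.1, 100))   # True
print(in_frustum(0, 0, 500, 60, 45, 0.1, 100))  # False
```

Only virtual objects passing such a test would be candidates for augmentation; setting `near=0` and `far` to a very large value approximates a real camera's view, as described above.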
- Augmented reality involves defining spatial relationships between virtual objects and real objects, and then making the virtual objects apparent to a user of the augmented reality system in such a way as to combine real and virtual objects.
- a visual augmented reality display could use virtual and real objects, and their defined spatial relationships, to generate a combined visual display in the form of a live streaming video (presenting real objects) overlaid with representations of the virtual objects.
- a spatial relationship between two objects may involve one or more of a topological relation, a distance relation, and a directional relation.
- a topological relation between an object A and an object B may be, for example, A is within B, A is touching B, A is crossing B, A is overlapping B, or A is adjacent to B.
- Precise spatial relationships between real and virtual objects allow an augmented reality system to generate perceptual experiences in which real and virtual objects are apparently combined seamlessly, e.g. for visual systems the combined presentation is apparently in the correct visual proportions, perspectives, and arrangement. Without correct reckoning of the spatial relationships in such a system, errors in the presentation of the system's output to the user can cause the system to be unusable, e.g. virtual objects appear out of place and therefore are not useful.
- An example is a virtual visual label that should label one building, but is erroneously shown overlaid onto a different building.
- the visual perspective into the real world must be matched to the effective visual perspective into the virtual world.
- the determination of which virtual objects are eligible for visual presentation to the user depends on the perspective in the virtual world, which must be matched to the real world perspective of a real world camera in order to take advantage of carefully determined spatial relationships among virtual and real objects.
- the perspective of the camera includes the position of the camera, the orientation of the camera, and its field of view.
- the need for a correctly matched perspective between virtual and real worlds means that in order to provide an accurate spatial relationship between virtual objects and real objects in an augmented reality output, it is necessary to determine the field of view of the real camera so that the virtual field of view can be matched to the real field of view.
- AR augmented reality
- existing augmented reality (AR) products in the marketplace today make use of dedicated hardware for which the camera field of view is known a priori before the augmented reality software algorithm is written.
- these solutions cannot be used on any hardware other than the dedicated hardware for which the AR system is pre-calibrated.
- a traditional AR system which does not have a priori knowledge of a camera's field of view has a problem of determining which virtual objects are candidates for presenting augmentation (e.g., display). To illustrate this problem, consider the following scenario discussed in connection with FIGS. 2-6 .
- FIG. 2 shows a camera 201 with a relatively wide field of view, FOV w .
- At the left and right boundaries of the FOV w of camera 201 are real objects R 1 and R 2 . Object X is a virtual object which would not appear to the camera 201 but which is desirable to show in an augmented reality output with the spatial relationship (relative to R 1 , R 2 , and camera 201 ) depicted in FIG. 2 .
- the camera, at its most basic level of functionality, is not “aware” of the physical presence of R 1 or R 2 despite having visibility of these objects. It is also not “aware” that, based on its position, viewing direction, and field of view, it would be appropriate to also perceive virtual object X.
- FIG. 3 shows a camera 301 with a relatively narrow field of view, FOV n .
- cameras 201 and 301 have the exact same positions and orientations.
- virtual object X has the same position. Accordingly, it is safe to assume that an identical spatial relationship exists between camera 201 and virtual object X as exists between camera 301 and virtual object X.
- At the left and right boundaries of the FOV n of camera 301 are real objects R 1 ′ and R 2 ′.
- Camera 301 just like camera 201 , is assumed for this illustration to have no detection or object recognition capabilities and therefore has no “awareness” of the physical presence of R 1 ′ and R 2 ′. It is also not “aware” that, based on its position, orientation, and field of view, it would be inappropriate to perceive virtual object X, since virtual object X lies outside of the field of view of the camera 301 .
- Consider next, with reference to FIG. 4 , an augmented reality (AR) device 404 that includes a camera 401 which is identical in all characteristics to either camera 201 or 301 , but the AR device 404 is not preconfigured with a priori knowledge of camera 401 's configurations such as its field of view. Even if there are only two possibilities, FOV w or FOV n , the system still needs a way to determine which field of view actually applies to camera 401 in order to provide accurate augmentations.
- FIG. 4 shows the combination of the two possibilities, that is to say a FOV w configuration and a FOV n configuration.
- If the virtual object X is within the actual field of view, the AR device 404 should represent the virtual object as an augmentation perceptible to a user. If the virtual object X is outside the actual field of view, the AR device should not represent the virtual object as an augmentation perceptible to the user. In short, the AR device must make a determination as to whether or not to represent virtual object X with an augmentation, and such determination is dependent on determining the actual field of view of the AR device's camera 401 .
- FIGS. 5 and 6 show visual outputs that may be displayed by the AR device 404 .
- FIG. 5 shows a proper image or video 500 if the actual field of view of camera 401 is FOV w .
- Objects R 1 , R 2 , R 1 ′, and R 2 ′ are shown by default because they are real world objects.
- the real world objects may be displayed as part of a live video stream from the camera 401 .
- if the AR device 404 is a see-through HMD, the real world objects may be visible simply by light reflecting off of the objects and passing through the HMD to the user's eyes.
- the virtual object X can only be represented to the user as an augmentation.
- FIG. 5 shows a virtual object X augmentation represented accurately, meaning it has the correct spatial relationships (e.g., topological relation, distance relation, and directional relation) with the real world objects in the image or video 500 .
- Exemplary embodiments of the invention comprise the necessary means, structural and/or functional, for an AR device or system such as AR device 404 of FIG. 4 to be configured to determine the actual FOV of the camera (or cameras) used for the augmented reality experience without a priori knowledge of the camera's field of view.
- FIG. 7 shows an exemplary system 700 used for determining a field of view of a camera 701 and providing an augmented reality with an output device 704 .
- The system 700 comprises a camera 701 , database 702 , processor 703 , and output device 704 .
- each of these and other hardware elements may be referred to in the singular in this description. It should be understood, however, that alternative embodiments may include one or multiple of each hardware element with one or multiple of each other hardware element while maintaining substantially the same core functionality that will now be described. In short, the number of cameras, databases, processors, and output devices may vary, and reference to such hardware elements in the singular should not necessarily be construed as limiting. It should also be understood that other hardware may also be included in a system 700 , for instance a power converter, wiring, and input/output interfaces (e.g., keyboard, touchscreen, buttons, toggles, etc.), among others.
- a camera includes at least one lens and an image sensor, generally represented in FIG. 7 as optical elements 710 .
- the optical elements 710 produce images or video(s) 711 .
- the camera 701 further includes other sensors such as but not limited to a GPS unit 713 that gives a camera location 712 , and a gyroscope and/or digital compass 714 that provide the camera's orientation 715 .
- Exemplary embodiments of the invention use a limited set of inputs to resolve the problem of determining the field of view of a camera.
- inputs include the images or videos 711 captured by the camera 701 , the camera's location 712 , and referential real object locations that are stored in database 702 a .
- an additional input is virtual object information stored in database 702 b .
- the databases 702 a and 702 b may be subparts of database 702 or, alternatively, they may be independent databases.
- the databases which store data such as real object locations and virtual object data may be accessible over a network, as indicated by the cloud in FIG. 7 .
- blocks 721 - 728 illustrate an exemplary process for determining the field of view of the camera 701 using a processor 703 .
- the steps in FIG. 7 may be performed by a single processor or divided among multiple processors, any one or multiple of which may be part of an AR device itself, part of a remote server, or in some other device that is communicatively coupled with the system 700 (e.g., over a network). While FIG. 7 is described with respect to “a processor 703 ” for ease of discussion, such processor 703 may be representative of a single processor or multiple processors according to different embodiments.
- the field of view of the camera 701 is referred to as FOV final and is given at block 728 .
- the image/video data 711 captured by the camera 701 first undergoes image processing at block 721 for object recognition.
- a minimum of two real objects are detected by the object recognition algorithm(s).
- Any of a variety of image processing software may be used for object recognition of block 721 .
- image processing is conducted in some exemplary embodiments using a convolutional neural network.
- a convolutional neural network comprises computer-implemented neurons that have learnable weights and biases.
- a convolutional neural network employs a plurality of layers and combines information from across an image to detect an object in the image.
- the image processing at block 721 may include, for example, targeting, windowing, and/or classification with a decision tree of classifiers.
- At least two real objects, R 1 and R 2 , are detected and preferably identified (or are at least identifiable). If three or more real objects are detected, it is preferable to select two real objects from among these which lie on distinctly different viewing axes. In other words, the two selected objects and the camera do not form three points on a single line, but rather they are spread apart with respect to one another such that they form a clear triangle if viewed from above.
- the system is configured such that the depth of the respective objects, that is to say the camera-to-object distances, need not be of particular importance or explicitly taken into consideration.
- Detectable and identifiable real world objects may take essentially any form of object that is essentially stationary, including but not limited to monuments, recognizable or iconic buildings, street signs, stores or restaurants (e.g., McDonalds or Wendy's), location signs specific to their buildings or locations (e.g., store signs posted on building exteriors or mile markers along a highway), among other things.
- FIG. 8A shows an image or video 800 in which real objects R 1 and R 2 have been detected. From this information, the processor 703 is able to measure a distance d between the two real objects, R 1 and R 2 , at block 722 . The distance d is the separation of the objects in the image and may be measured in pixels. Also at block 722 , the processor 703 measures the distance w that is the width of the entire image, which may also be measured in pixels.
- FIG. 8B gives a top down view of the simplified spatial relationships assumed to exist among R 1 , R 2 , and the camera 701 . A value is assigned as an initial “best guess” value for FOV of the real camera 701 .
- This initial value, FOV initial may be set to the value that is empirically measured as correct, e.g., for the most common smartphone on the market.
- the initial value is not critical to the functioning of the invention. Generally, it is adequate that the initial value be set within 50% of the correct value. Accordingly, an initial value such as 40° is generally adequate for smartphones.
- the measured distances d and w and the initial value of FOV initial are then used to calculate a first estimate of the angle that is subtended by R 1 -camera-R 2 . This angle estimate is referred to as A 2 .
- an equation such as the following proportional relation, for example, may be used to determine A 2 , which is given at block 723 : A 2 = FOV initial ×(d/w).
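The estimate of block 723 can be sketched as follows; the linear scaling of pixel separation to angle is an assumed small-angle approximation, not necessarily the patent's exact equation:

```python
def estimate_subtended_angle(d_pixels: float, w_pixels: float,
                             fov_initial_deg: float) -> float:
    """First estimate A2 of the angle R1-camera-R2, assuming the angle
    subtended in the image scales linearly with pixel separation."""
    return fov_initial_deg * (d_pixels / w_pixels)

# Two objects 500 px apart in a 1000 px wide image, with a 40-degree guess.
print(estimate_subtended_angle(500, 1000, 40.0))  # 20.0
```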
- the angle subtended by R 1 -camera-R 2 is calculated a second time by a different approach, the resulting angle value being referred to as A 3 .
- This process occurs at blocks 724 and 725 in FIG. 7 .
- the objects which were detected as a result of the image processing of block 721 are checked against a database 702 a . It is desirable that at least two objects detected at block 721 have corresponding location information available in database 702 a . In this example, these two objects are R 1 and R 2 . Location information for these real world objects is retrieved from the database 702 a by the processor 703 .
- the actual real world angle A 3 that is subtended by R 1 -camera-R 2 is calculated.
- Angle A 3 is illustrated in FIG. 9 .
- A 3 is determined using trigonometry, such as the law of cosines.
- the three vertices of the triangle shown in FIG. 9 all have known values: two of the vertices are the looked up locations of the real objects R 1 and R 2 obtained from database 702 a , and the third vertex is the camera location 712 (e.g., obtained from the GPS unit 713 ).
- the lengths x, y, and z of the three sides of the triangle are determined from the three vertex positions.
- the law of cosines is then used to determine A 3 using these lengths: A 3 = arccos((x² + y² − z²) / (2xy)), where x and y are the camera-to-object side lengths and z is the side between R 1 and R 2 (the side opposite the camera angle).
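The computation of blocks 724 - 725 can be sketched as follows; here the three positions are treated as planar coordinates, and z is taken to be the side between R 1 and R 2 (the side opposite the camera angle). The coordinate values are hypothetical:

```python
import math

def angle_a3_deg(camera, r1, r2):
    """Real-world angle A3 subtended at the camera by R1 and R2,
    computed from three known 2-D positions via the law of cosines."""
    x = math.dist(camera, r1)  # camera-to-R1 side
    y = math.dist(camera, r2)  # camera-to-R2 side
    z = math.dist(r1, r2)      # R1-to-R2 side, opposite the camera angle
    return math.degrees(math.acos((x**2 + y**2 - z**2) / (2 * x * y)))

# Right angle: R1 lies along one axis from the camera, R2 along the other.
print(angle_a3_deg((0, 0), (1, 0), (0, 1)))  # 90.0
```

In practice the two object vertices would come from the referential locations in database 702 a and the camera vertex from the GPS unit 713, projected into a common local coordinate system.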
- Correcting FOV initial based on the error between A 2 and A 3 gives a final measure of the FOV of the camera, FOV final , at block 728 .
- the final determined value for the FOV of the camera, FOV final can be output as data at block 729 (e.g., output to memory storage).
- output at block 729 may comprise or consist of initiating a signal that controls an output (e.g., auditory, visual, and/or tactile output) of an output device based on FOV final .
- the field of view value, FOV final , given at block 728 is but one characteristic that describes the perspective of camera 701 .
- a second characteristic is the camera's position (block 712 ).
- a third characteristic is the camera's orientation (block 715 ).
- the perspective of the camera includes the field of the view of the camera, the position of the camera, and the orientation of the camera. Position and orientation may be referred to collectively as the pose of the camera.
- the orientation of the camera 715 may be obtained from sensors such as a gyroscope and digital compass 714 .
- Typical mobile devices on the market in 2017 are equipped by their manufacturers with GPS, gyroscopic sensors, and a digital compass. These sensors are typically readily available to software applications including third party applications running on the mobile devices.
- the FOV, however, is a characteristic which is typically not exposed to software applications. This deficiency gives rise to the problem addressed by exemplary embodiments herein, which determine FOV in order to complete the assessment of augmented reality perspective.
- a 3D real world frustum is determined at block 741 .
- This real world frustum is applied to a virtual world at block 742 using virtual world data from database 702 b .
- Virtual objects which are inside the frustum are found as candidates for augmentation.
- the selection of augmentations based on the virtual object candidates occurs at block 743 and may involve one or more criteria including, for example, user option selections and the relationships between different virtual objects.
- the processor 703 may determine which of the virtual objects obscure parts of each other based on the frustum in the virtual world.
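A simplified top-down frustum test conveys the idea of finding virtual object candidates; the 2D geometry, depth limits, and names here are illustrative stand-ins for the full 3D frustum described above:

```python
import math

def in_frustum(obj, cam, heading_deg, fov_deg, near=1.0, far=5000.0):
    """Rough top-down (2D) frustum test: is a virtual object's map
    position within the camera's horizontal FOV and depth range?"""
    dx, dy = obj[0] - cam[0], obj[1] - cam[1]
    dist = math.hypot(dx, dy)
    if not (near <= dist <= far):
        return False
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = +y ("north")
    # signed angular offset from the camera heading, wrapped to [-180, 180)
    off = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(off) <= fov_deg / 2.0

# Camera at the origin facing north with a 60-degree FOV: the first
# object is a candidate for augmentation, the second falls outside.
candidates = [o for o in [(10, 100), (-300, 50)]
              if in_frustum(o, (0, 0), 0.0, 60.0)]
```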
- augmentations that are or include auditory and tactile elements still involve virtual objects that need to be identified with accurate spatial relationships with respect to real world objects.
- a VR device that is an HMD may be used to give a guided tour of a real place like New York City.
- the device may announce through a speaker “You are looking at the Empire State Building.” This announcement is an auditory augmentation corresponding with a virtual object that has a location in the virtual world which matches the location of the actual Empire State Building in the real world.
- the VR device 404 must accurately determine the FOV (be it FOV_n or FOV_w) in order to make a correct decision as to whether or not to output the announcement to the user.
- Finding virtual object candidates and selecting corresponding augmentations for output may be performed according to what is disclosed in U.S. patent application Ser. No. 15/436,154, which is incorporated herein by reference.
- Viewing directions other than a centerline may also be used, although a centerline is in many cases a straightforward and therefore preferred choice. Determining a viewing direction of the camera may be achieved in a number of ways. For example, a mobile electronic device's digital compass for the horizontal orientation angle and the device's accelerometers for the vertical orientation angle may be used. By determining the direction of L 1 and knowing the location of the camera (e.g., from GPS), the system can identify the specific geographic location of any point along L 1 .
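Identifying points along L1 from the camera position and heading can be sketched with a small-distance equirectangular approximation (an assumption for illustration; a production system would use a proper geodetic library):

```python
import math

def point_along_l1(cam_lat, cam_lon, heading_deg, dist_m):
    """Geographic point dist_m metres from the camera along the viewing
    direction L1. Uses a flat-earth approximation valid for short
    distances; function and parameter names are illustrative."""
    m_per_deg = 111_320.0  # metres per degree of latitude, approximately
    dlat = dist_m * math.cos(math.radians(heading_deg)) / m_per_deg
    dlon = dist_m * math.sin(math.radians(heading_deg)) / (
        m_per_deg * math.cos(math.radians(cam_lat)))
    return cam_lat + dlat, cam_lon + dlon
```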
- the processor 703 is able to measure an apparent distance d′ between the real object R1 and line L1, which is a centerline of the live video and of the virtual frustum.
- the distance d′ may be measured in pixels.
- the processor 703 also measures the distance w′ that is half the width of the entire image and which may also be measured in pixels. In other words, w′ is the distance within the image between the edge E of the frame and L1.
- FIG. 10B gives a top-down view of the simplified spatial relationships assumed to exist among R1, L1, E, and the camera 1001. A value is assigned as an initial "best guess" value for the FOV of the real camera 1001.
- This initial value, FOV_initial, may be set to the value that is empirically measured as correct, e.g., for the most common smartphone on the market.
- the initial value is not critical to the functioning of the invention. Generally, it is adequate that the initial value be set within 50% of the correct value. Accordingly, an initial value such as 40° is generally adequate for smartphones.
- the measured distances d′ and w′ and the initial value FOV_initial are then used to calculate a first estimate of the angle that is subtended by R1-camera-L1. This angle estimate is referred to as A4.
- the following equation may be used to determine A4, assuming image offsets proportional to the tangent of the viewing angle (a pinhole-projection model): A4 = arctan((d′ / w′) · tan(FOV_initial / 2)).
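Under that pinhole-projection assumption, the pixel-ratio estimate could be computed as follows (a sketch; the names are illustrative):

```python
import math

def estimate_a4(d_px, w_px, fov_initial_deg):
    """First estimate of the angle R1-camera-L1 from pixel measurements:
    d_px is the pixel offset of R1 from the centerline L1, w_px is half
    the image width in pixels, and fov_initial_deg is the initial FOV
    guess. Assumes image offset is proportional to tan(angle)."""
    half = math.radians(fov_initial_deg) / 2.0
    return math.degrees(math.atan((d_px / w_px) * math.tan(half)))
```

A sanity check: an object at the image edge (d′ = w′) subtends exactly half the assumed FOV.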
- the angle subtended by R1-camera-L1 is calculated a second time by a different approach, the resulting angle value being referred to as A5.
- the detected real object R1 is checked against a database 702a.
- location information for the real world object R1 is retrieved from the database 702a by the processor 703.
- the actual real world angle A5 that is subtended by R1-camera-L1 is now calculated.
- Angle A5 is illustrated in FIG. 11.
- A5 is determined using trigonometry. Two of the vertices of the triangle in FIG. 11 are already known: the camera location and the looked-up location of R1.
- the remaining vertex, P1, is determinable by comparing the location of R1 with points along L1, all of which are known as already discussed above. P1 is selected from other points along L1 by selecting the point along L1 which results in the right angle indicated in FIG. 11. At this stage the locations of all three vertices of the triangle shown in FIG. 11 have known values. The lengths x′ and z of the sides of the triangle are determined from the vertex locations. Basic trigonometry may then be used to determine A5 from these lengths, with x′ the side opposite A5 and z the hypotenuse from the camera to R1: A5 = arcsin(x′ / z).
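The right-triangle computation might look like the following sketch, assuming 2D coordinates, with x′ taken opposite A5 and z as the camera-to-R1 hypotenuse (labeling assumed for illustration):

```python
import math

def estimate_a5(r1, cam, p1):
    """Real-world angle R1-camera-L1 from known coordinates, where P1
    is the foot of the perpendicular from R1 onto the centerline L1.
    r1, cam, p1 are (x, y) coordinates; names are illustrative."""
    x_prime = math.dist(r1, p1)  # side opposite A5
    z = math.dist(cam, r1)       # hypotenuse, camera to R1
    return math.degrees(math.asin(x_prime / z))
```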
- the processor 703 now has two values for the same physical angle. Namely, it has value A4 and value A5, both of which describe the angle subtended by R1-camera-L1.
- the values A4 and A5 are then compared. If FOV_initial were perfectly accurate, A4 and A5 would be equal. Generally, however, A4 and A5 will disagree by some amount of error. FOV_initial is therefore corrected using the magnitude of the error between A4 and A5. The correction is the negative of the error. For example, if A4 is 10% larger than A5, then FOV_initial is reduced by 10%. Correcting FOV_initial based on the error between A4 and A5 gives a final measure of the FOV of the camera, FOV_final.
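Read literally, the proportional correction in the example (error measured relative to A5, correction applied as its negative) can be expressed as a one-liner; this is one plausible reading, not the only possible one:

```python
def correct_fov(fov_initial, a4, a5):
    """Apply the negative of the relative error between the image-based
    estimate a4 and the database-based value a5 to the initial FOV
    guess: if a4 is 10% larger than a5, the FOV is reduced by 10%."""
    return fov_initial * (1.0 - (a4 - a5) / a5)
```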
- virtual objects are stored, updated, and manipulated as data within one or more databases 702 b .
- the virtual objects have their own existence separate from how they are displayed, visualized, haptically buzzed, or otherwise output by an output device. So, generally speaking, a virtual object has its own characteristics, and then, based on those characteristics and on the real and the virtual environment, an exemplary augmented reality system determines what is presented to the user. An augmentation may be displayed (or otherwise provided) if, and only if, the system determines that a given virtual object should be apparent to the user given the viewing device's pose and field of view in the real world and therefore its pose and field of view in the virtual world.
- the characteristics of those virtual objects determine what baseline augmentation to provide and what markers, indicators, or tweaks may be applied to the baseline augmentation.
- the augmentation that is output depends on all of the virtual characteristics of the virtual objects that are made perceptible given the current perspective of the current image.
- a car may give haptic feedback (vibration) to the steering wheel when the operator drives over the centerline of the road without using a turn signal.
- virtual objects may obscure other virtual objects in the current real world perspective.
- the obscuring object may cause the obscured object to not be represented via augmentations, even if the obscuring object is itself not shown with any augmentations.
- a user may see a real world image in which no augmentations are shown at all, despite the fact that two virtual objects are contained geometrically within the field of view.
- a first virtual object (which for illustrative purposes will be called virtual object A) would be shown with an augmentation if not otherwise obscured.
- a second virtual object (which will be called virtual object B) entirely obscures A given the field of view, but virtual object B may itself not be currently shown as an augmentation.
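The obscuring behaviour can be sketched with a toy data model; the dict-based representation and the `obscures` relation are hypothetical stand-ins for the database-backed virtual world:

```python
def select_augmentations(candidates, obscures):
    """Drop any candidate that is entirely obscured by another virtual
    object in the frustum -- even when the obscuring object itself has
    no augmentation to show. `obscures` maps an object id to the set
    of ids it fully blocks from the current perspective."""
    blocked = set()
    for obj in candidates:
        blocked |= obscures.get(obj["id"], set())
    return [o for o in candidates
            if o["id"] not in blocked and o.get("augmentation")]

# Virtual object B fully obscures A; B itself has no augmentation,
# so nothing at all is shown despite both being in the frustum:
shown = select_augmentations(
    [{"id": "A", "augmentation": "label"}, {"id": "B"}],
    {"B": {"A"}})
```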
- the virtual objects that represent a virtual world suitable for augmenting a real world view consist of two basic classes of objects.
- the first class is associated with augmentations.
- the second class is not associated with augmentations but still interacts with the other virtual objects, either by obscuring them visually or through other possible interactions (e.g., an augmentation of an object of the first class might be a different color if the first class virtual object is near a virtual object of the second class).
- systems include user interactive features which can contribute to the determination of field of view.
- an output instruction may be provided to a user to pan the camera to the side in order to bring additional real world objects into view.
- the system can then in effect use a stitched panoramic view as the input to object recognition (e.g., at block 721 of FIG. 7 ).
- a single frame or multiple frames may be used to determine a field of view of a camera according to the exemplary methods disclosed herein.
- Some devices may alternatively have automated features and electronic devices (e.g., servo motors) which provide for camera panning.
- Other user interactive features may also be provided.
- exemplary embodiments may implement these elements with two separate systems: the image recognition system and the database that maps an image label to a set of coordinates.
- the recognition system, already described above as being, for example, a convolutional neural network, may be implemented locally on the electronic device (e.g., smartphone) or via the cloud (e.g., the Google Cloud Machine Learning Engine).
- the output of the recognition system is an identifier or label. Once the identifier or label is produced, the system that looks up the coordinates is next.
- An example of a commonly used key-value database in the cloud is the Google Cloud Datastore, or alternatively, Amazon Web Services' DynamoDB. Embodiments do not need an external service that combines both of these systems (recognition and lookup of coordinates). A combined system is acceptable if commercially available, or alternatively the two steps may simply be performed separately with separate systems/databases.
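The two-stage pipeline (recognition, then key-value coordinate lookup) reduces to a sketch like the following, with an in-memory dict standing in for a cloud key-value store and a stub standing in for the recognizer; all names and values here are illustrative:

```python
def recognise(image_bytes):
    """Placeholder for a CNN or cloud recognition call that maps an
    image to an identifier/label (stage one of the pipeline)."""
    return "empire_state_building"

# Stand-in for a key-value database mapping labels to coordinates.
COORDS = {"empire_state_building": (40.748440, -73.985664)}

def locate(image_bytes):
    """Stage two: look up the recognized label's real-world location."""
    label = recognise(image_bytes)
    return COORDS.get(label)  # None if the label is unknown
```

The two stages need not come from a single combined service; as noted above, they may simply be performed separately with separate systems/databases.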
- the databases 702 may be or comprise computer readable storage media that are tangible devices that can retain and store instructions for use by an instruction execution device like processor 703 .
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network (LAN), a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. This may have the effect of making a general purpose computer a special purpose computer or machine.
- a “processor” as frequently used in this disclosure may refer in various embodiments to one or more general purpose computers, special purpose computers, or some combination thereof.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the display device passively permits viewing of the real world without reproducing details of a captured real world image feed on a screen.
- with a see-through HMD, it is generally only the augmentations that are actively shown or output by the device.
- Visual augmentations are in any case superimposed on the direct view of the real world environment, without necessarily involving the display of any of the original video input to the system.
- Output devices and viewing devices may include or be accompanied by input devices (e.g., buttons, touchscreens, menus, keyboards, data ports, etc.) for receiving user inputs. Some devices may be configured for both input and output (I/O).
- Position may be expressed as a location.
- Location information may be absolute (e.g., latitude, longitude, elevation, and a geodetic datum together may provide an absolute geo-coded position requiring no additional information in order to identify the location), relative (e.g., "2 blocks north of latitude 30.39, longitude −97.71" provides position information relative to a separately known absolute location), or associative (e.g., "right next to the copy machine" provides location information if one already knows where the copy machine is; the location of the designated reference, in this case the copy machine, may itself be absolute, relative, or associative).
- "user" typically refers to a human interacting with or using an embodiment of the invention.
Description
FOV = 2 arctan(d / (2f)), where d is the sensor size and f is the focal length.
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/992,804 US11410330B2 (en) | 2017-05-30 | 2018-05-30 | Methods, devices, and systems for determining field of view and producing augmented reality |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762512240P | 2017-05-30 | 2017-05-30 | |
| US15/992,804 US11410330B2 (en) | 2017-05-30 | 2018-05-30 | Methods, devices, and systems for determining field of view and producing augmented reality |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20180350103A1 US20180350103A1 (en) | 2018-12-06 |
| US11410330B2 true US11410330B2 (en) | 2022-08-09 |
Family
ID=64458939
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/992,804 Active US11410330B2 (en) | 2017-05-30 | 2018-05-30 | Methods, devices, and systems for determining field of view and producing augmented reality |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US11410330B2 (en) |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10891753B2 (en) * | 2019-02-28 | 2021-01-12 | Motorola Solutions, Inc. | Device, system and method for notifying a person-of-interest of their location within an estimated field-of-view of a camera |
| CN114175631B (en) * | 2019-07-31 | 2025-07-01 | 佳能株式会社 | Image processing device, image processing method, program and storage medium |
| CN112435300B (en) * | 2019-08-26 | 2024-06-04 | 华为云计算技术有限公司 | Positioning method and device |
| CN110726534B (en) * | 2019-09-27 | 2022-06-14 | 西安大医集团股份有限公司 | Visual field range testing method and device for visual device |
| US11816757B1 (en) * | 2019-12-11 | 2023-11-14 | Meta Platforms Technologies, Llc | Device-side capture of data representative of an artificial reality environment |
| US12361661B1 (en) | 2022-12-21 | 2025-07-15 | Meta Platforms Technologies, Llc | Artificial reality (XR) location-based displays and interactions |
| US20250054244A1 (en) * | 2023-08-11 | 2025-02-13 | Meta Platforms Technologies, Llc | Application Programming Interface for Discovering Proximate Spatial Entities in an Artificial Reality Environment |
Citations (33)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030016285A1 (en) * | 2001-04-30 | 2003-01-23 | Drost Jeffrey D. | Imaging apparatus and method |
| US20100214414A1 (en) * | 2006-10-19 | 2010-08-26 | Carl Zeiss Ag | Hmd apparatus for user with restricted field of vision |
| US8059155B2 (en) * | 2006-12-01 | 2011-11-15 | Altus Technology Inc. | System and method for measuring field of view of digital camera modules |
| US20120001938A1 (en) * | 2010-06-30 | 2012-01-05 | Nokia Corporation | Methods, apparatuses and computer program products for providing a constant level of information in augmented reality |
| US20120127327A1 (en) * | 2010-11-24 | 2012-05-24 | Samsung Electronics Co., Ltd. | Digital photographing apparatus and methods of providing pictures thereof |
| US20120226437A1 (en) * | 2011-03-01 | 2012-09-06 | Mitac International Corp. | Navigation device with augmented reality navigation functionality |
| US20130150124A1 (en) * | 2011-12-08 | 2013-06-13 | Samsung Electronics Co., Ltd. | Apparatus and method for content display in a mobile terminal |
| US20150110398A1 (en) * | 2013-10-22 | 2015-04-23 | Canon Kabushiki Kaisha | Image processing apparatus and image processing method |
| US20150234508A1 (en) * | 2014-02-17 | 2015-08-20 | Lg Electronics Inc. | Display system for displaying augmented reality image and control method for the same |
| US20160012643A1 (en) * | 2014-07-10 | 2016-01-14 | Seiko Epson Corporation | HMD Calibration with Direct Geometric Modeling |
| US20160278861A1 (en) * | 2013-03-19 | 2016-09-29 | Lutronic Corporation | Treatment method using multiple energy sources |
| US9495783B1 (en) * | 2012-07-25 | 2016-11-15 | Sri International | Augmented reality vision system for tracking and geolocating objects of interest |
| US20160350921A1 (en) * | 2015-05-29 | 2016-12-01 | Accenture Global Solutions Limited | Automatic camera calibration |
| US20170041553A1 (en) * | 2015-07-10 | 2017-02-09 | SZ DJI Technology Co., Ltd | Dual lens system having a light splitter |
| US20170042416A1 (en) * | 2015-08-13 | 2017-02-16 | Jand, Inc. | Systems and methods for displaying objects on a screen at a desired visual angle |
| US20170076497A1 (en) * | 2015-09-14 | 2017-03-16 | Colopl, Inc. | Computer program for directing line of sight |
| US20170091571A1 (en) * | 2015-09-25 | 2017-03-30 | Datalogic IP Tech, S.r.l. | Compact imaging module with range finder |
| US20170213387A1 (en) * | 2016-01-21 | 2017-07-27 | International Business Machines Corporation | Augmented reality overlays based on an optically zoomed input |
| US20170269366A1 (en) * | 2016-03-16 | 2017-09-21 | Samsung Electronics Co., Ltd. | See-through type display apparatus |
| US20170289533A1 (en) * | 2016-03-30 | 2017-10-05 | Seiko Epson Corporation | Head mounted display, control method thereof, and computer program |
| US9794542B2 (en) * | 2014-07-03 | 2017-10-17 | Microsoft Technology Licensing, Llc. | Secure wearable computer interface |
| US20170344808A1 (en) * | 2016-05-28 | 2017-11-30 | Samsung Electronics Co., Ltd. | System and method for a unified architecture multi-task deep learning machine for object recognition |
| US20180032844A1 (en) * | 2015-03-20 | 2018-02-01 | Intel Corporation | Object recognition based on boosting binary convolutional neural network features |
| US20180070018A1 (en) * | 2016-09-07 | 2018-03-08 | Multimedia Image Solution Limited | Method of utilizing wide-angle image capturing element and long-focus image capturing element for achieving clear and precise optical zooming mechanism |
| US20180075592A1 (en) * | 2016-09-15 | 2018-03-15 | Sportsmedia Technology Corporation | Multi view camera registration |
| US20180189532A1 (en) * | 2016-12-30 | 2018-07-05 | Accenture Global Solutions Limited | Object Detection for Video Camera Self-Calibration |
| US20180210542A1 (en) * | 2017-01-25 | 2018-07-26 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Emitting a visual indicator from the position of an object in a simulated reality emulation |
| US20180213217A1 (en) * | 2017-01-23 | 2018-07-26 | Multimedia Image Solution Limited | Equipment and method for promptly performing calibration and verification of intrinsic and extrinsic parameters of a plurality of image capturing elements installed on electronic device |
| US20180300880A1 (en) * | 2017-04-12 | 2018-10-18 | Here Global B.V. | Small object detection from a large image |
| US20190073553A1 (en) * | 2016-02-17 | 2019-03-07 | Intel Corporation | Region proposal for image regions that include objects of interest using feature maps from multiple layers of a convolutional neural network model |
| US20190197710A1 (en) * | 2016-07-12 | 2019-06-27 | SZ DJI Technology Co., Ltd. | Processing images to obtain environmental information |
| US20190221184A1 (en) * | 2016-07-29 | 2019-07-18 | Mitsubishi Electric Corporation | Display device, display control device, and display control method |
| US20190347862A1 (en) * | 2016-12-21 | 2019-11-14 | Pcms Holdings, Inc. | Systems and methods for selecting spheres of relevance for presenting augmented reality information |
2018
- 2018-05-30 US US15/992,804 patent/US11410330B2/en active Active
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240036582A1 (en) * | 2022-07-26 | 2024-02-01 | Objectvideo Labs, Llc | Robot navigation |
| US20240342614A1 (en) * | 2023-04-11 | 2024-10-17 | Roblox Corporation | In-environment reporting of abuse in a virtual environment |
| US12515137B2 (en) * | 2023-04-11 | 2026-01-06 | Roblox Corporation | In-environment reporting of abuse in a virtual environment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| AS | Assignment |
Owner name: EDX TECHNOLOGIES, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EDX WIRELESS, INC.;REEL/FRAME:051345/0779 Effective date: 20170803 Owner name: EDX WIRELESS, INC., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SKIDMORE, ROGER RAY;REIFSNIDER, ERIC;SIGNING DATES FROM 20170603 TO 20170803;REEL/FRAME:051345/0698 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |