WO2018023042A1 - Method and system for creating invisible real-world links to computer-aided tasks with camera - Google Patents
- Publication number
- WO2018023042A1 (PCT/US2017/044455)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- image
- location
- link
- physical
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Description
- This disclosure relates to systems and methods for user interfaces. More specifically, this disclosure relates to systems and methods for gesture-based user interfaces.
- non-augmented-reality user interface (UI) systems can provide convenience and safety to occupants of smart houses and other engineered spaces by allowing the occupants to select devices using body gestures, thereby maintaining their natural first person point of view.
- Each of these systems has individual flaws, plus shared ones such as the need to either: associate user-invocable computer-activating link locations (hot-spots) with the current location of physical devices (e.g., IoT devices reporting their location), or
- Described herein are systems and methods related to creating invisible real-world links to computer-aided tasks with a camera.
- a method of invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface comprising: invoking a computer instruction by detecting, with a body gesture sensor, a user gesture towards a region on a physical surface.
- the instruction and region may have been established earlier by: matching an image of the physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space; and causing information to be stored that maps a location within the image with the corresponding physical surface and the computer instruction.
- the disclosed systems and methods create and record new real-world Invisible Link locations using user selected images, the location of the camera producing the image, and a data model of the space in the user's immediate vicinity.
- a general block diagram of one embodiment of the present systems and methods is illustrated in FIG. 1.
- a link placement module 1010 may receive inputs, such as a user selected image 1030, a camera location 1020, and a space map 1040 of the user's vicinity. After processing and analysis, the module may output and/or store a created invisible link 1050.
- the disclosed system and method creates and records new real-world Invisible Link locations (e.g., areas in the real world which a user can select and invoke from their current POV and thereby cause a computer-aided task to occur) using:
- at least one user selected image;
- the location of the camera producing the image; and
- a data model of the space that includes object surfaces visible in the camera image (e.g., walls, windows and doors).
- an image obtained from a user's digital device is analyzed.
- a space associated with a location of the user is also analyzed.
- a correlation value between the analyzed image and analyzed space is determined.
- an invisible link is stored, the invisible link at a location based at least in part on the image obtained from the user's digital device.
- FIG. 1 is a block diagram of one embodiment of the present systems and methods for creating invisible links.
- FIGS. 2A-2B illustrate one embodiment of a user aligning a camera so that a superimposed link location target indicates where the user wants a new link to be established.
- FIG. 2C illustrates one embodiment of where a camera image, its location, and optionally the camera's orientation may be compared to a model or map of space in the user's vicinity.
- FIG. 2D illustrates one embodiment of displaying to a user a new link which has been recorded for future use.
- FIG. 2E illustrates one embodiment of how a user may interact with a previously created button or link by gesturing towards the region in relation to which the link was previously stored.
- FIG. 3A illustrates a message flow diagram of one embodiment of the systems and methods disclosed herein.
- FIG. 3B illustrates a message flow diagram of another embodiment of the systems and methods disclosed herein.
- FIG. 4 illustrates a block diagram of one embodiment of creating an invisible link and utilizing a created invisible link.
- FIG. 5 illustrates one embodiment of space map wall vertices, shown as a wireframe viewed from the front and above.
- FIG. 6 illustrates a block diagram of one embodiment of the logic for a link placement module.
- FIG. 7 illustrates an expanded block diagram of one embodiment of the link placement module.
- FIGS. 8A-8B illustrate one embodiment of analyzing the user's space and establishing the user's POV relative to a space map.
- FIG. 9 illustrates one embodiment of rendering space map wall vertices from a user POV.
- FIG. 10 illustrates an exemplary embodiment of three vertices used to establish a link location relative to a rendered space map.
- FIG. 11 illustrates a block diagram for one embodiment of a second matching attempt.
- FIG. 12 illustrates a block diagram for one embodiment of a third matching attempt.
- FIGS. 13A-13B represent two example invisible links, with different apparent shapes as the POV they are viewed from changes, where FIG. 13A illustrates an exemplary POV above a ceiling, in front of a wall, and FIG. 13B illustrates an exemplary POV above a ceiling, but to the side of a wall.
- FIG. 14 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed as a user's digital device in some embodiments.
- FIG. 15 illustrates an exemplary network entity that may be employed in some embodiments.
- modules that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules.
- a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
- Each described module may also include executable instructions for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as media commonly referred to as RAM, ROM, etc.
- the disclosed systems and methods create and record new real-world Invisible Link locations using user selected images, the location of the camera producing the image, and a data model of the space in the user's immediate vicinity.
- A general block diagram of one embodiment of the present systems and methods is illustrated in FIG. 1.
- a link placement module may receive inputs, such as a user selected image, a camera location, and a space map of the user's vicinity. After processing and analysis, the module may output and/or store a created invisible link.
- FIGS. 2A-2E illustrate one embodiment of a user 205 aligning a camera of the user's digital device 210 (which is displaying a rendered image or model of the physical space) so that a superimposed link location target 227 indicates where on a real-world surface (e.g., desired target location 225 in FIG. 2A) the user wants a new link to be established.
- the superimposed target may be an overlay on a camera image preview screen, or the like.
- FIG. 2C illustrates one embodiment of where a camera image, its location, and optionally the camera's orientation may be compared to a model or map of space in the user's vicinity, and if a match is found, the new link location may be translated into map coordinates.
- FIG. 2C shows two space map vertices (B2 and C3) (rendered from a POV similar to the user's) associated with matching vertices (230, 232) in the camera image (shown on the user's device 210).
- FIG. 2D illustrates one embodiment of displaying to a user a new link which has been recorded for future use.
- links may be displayed to the user as an augmented reality overlay of the image on the user's device 210, for example as an indicated new link 240 and a previously stored link 245.
- the displayed links may in some embodiments display or otherwise indicate a stored associated function; for example, a stored link 250 may indicate that it enables home security, a stored link 255 may indicate that it toggles blinds, etc.
- FIG. 2E illustrates one embodiment of how a user 270 (either the original user or a later authorized user) may interact with a previously created button or link, such as a link 255, for example by gesturing 280 towards the region (e.g., window) in relation to which the link 255 was previously stored.
- a gesture may be detected by a body gesture sensor and cause invocation of an instruction which was previously stored in association with the link 255, such as toggling the blinds of the window open or closed.
- Exemplary embodiments provide a simple method of placing computer-invocable UI links in real-world locations.
- Exemplary embodiments allow a user to maintain a natural first person POV.
- Exemplary embodiments use physical surfaces to place links, allowing users to behave naturally by pointing to a link from various locations and having the system reliably determine whether they intended to select the link. In contrast, an arbitrary XYZ location in a room is much harder to recall and indicate from different room locations.
- Exemplary embodiments can be implemented using common handheld or body-mounted networked cameras, such as those in smartphones or other digital devices.
- the systems and methods are well suited to spaces with a mix of complex and simple-to-model shapes, such as is typical for illuminated building exteriors and interiors, or the like.
- One embodiment of the systems and methods disclosed herein is set forth in relation to FIG. 3A, which illustrates a sequence chart for that embodiment.
- With a first message 325, when the user invokes link creation, the system generates a message from the user's hand-held or body-mounted device(s) 305, containing at least the user's current location.
- a link placement module 310 may generate a request for space maps of the user's vicinity, such as from a Space Maps database 315.
- the link placement module 310 may receive the requested space maps, such as from the space maps database 315.
- the link placement module 310 may request the device and/or beacon locations in the user's vicinity, such as from a Device and Beacon Locations database 317.
- the link placement module 310 may receive the device and/or beacon locations from the Device and Beacon Locations database 317.
- the user's personal device 305 may indicate a new link location by selecting an image in its camera display.
- the message may include, but is not limited to: camera image; (optional) camera orientation; (optional) multiple camera images; (optional) multiple camera positions; and/or the like.
- the link placement module 310 may store the Link Location, such as in an "Invisible Link Locations" database 320, by saving at least: a Link ID; a Link Location; optionally a Link Shape; and/or the like.
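- For illustration only, the following is a minimal sketch of the kind of record such a message might persist in an "Invisible Link Locations" database. The field names, the in-memory store, and the optional task field (suggested by the later description of the invisible link locations datastore) are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional
import uuid

@dataclass
class InvisibleLinkRecord:
    """One stored Invisible Link: an ID, a location in space-map coordinates,
    and optional shape and task descriptions."""
    link_id: str
    location: tuple[float, float, float]   # XYZ in the space map's frame of reference
    shape: Optional[dict] = None           # e.g. {"type": "circle", "radius_m": 0.2}
    task: Optional[str] = None             # identifier of the associated computer-aided task

# A trivial in-memory stand-in for the Invisible Link Locations database.
LINK_DB: dict[str, InvisibleLinkRecord] = {}

def store_link(location, shape=None, task=None) -> str:
    """Save a new link and return its generated Link ID."""
    link_id = str(uuid.uuid4())
    LINK_DB[link_id] = InvisibleLinkRecord(link_id, location, shape, task)
    return link_id

# Example: a circular link on a window surface that toggles powered blinds.
new_id = store_link((2.4, 1.6, 0.0), {"type": "circle", "radius_m": 0.2}, task="toggle_blinds")
```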
- FIG. 3B illustrates a message flow diagram of another embodiment of the systems and methods disclosed herein.
- a user's personal device 305 may have its camera pointed at a surface region 340.
- a captured picture (or video) and the user's location may be communicated 342 to a link management system 312, which may request space maps 344 at the user's location from a space maps database 315.
- the space maps database 315 may retrieve 346 space maps, and communicate the retrieved space maps 348 to the link management system 312.
- the link management system 312 may match the received picture or video with a received space map 350, and confirm 352 to the user's device that a corresponding space map was found.
- the user may then touch the screen of their device at a location of a desired invisible button 354, and the selected button location with regard to the picture or video may be sent 356 to the link management system 312.
- the link management system 312 may map the button image location to a corresponding space map location 358, and store 360 the button space map location and an identifier in an invisible link locations database 320.
- the user may point or otherwise gesture at a previously created invisible button 362, such as with a body gesture sensor 307.
- a detected gesture may be communicated to the link management system 312, which may communicate the detected gesture 364 to the space maps database 315 to retrieve 366 the location pointed to or gestured at by the user.
- the link management system 312 may then communicate 368 with the link locations database 320 to determine whether an invisible button was previously created at this determined location, and if so may retrieve 370 the corresponding invisible button identifier.
- the link management system 312 may then execute 372 an instruction associated with the pointed to or gestured at invisible button or link.
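- A hedged sketch of this invocation path (steps 362-372) follows, assuming simple callable stand-ins for the gesture-to-location resolver, the link database lookup, and the task runner; none of these helper names come from the disclosure.

```python
def handle_point_gesture(gesture_ray, space_maps, link_db, task_runner):
    """Rough outline of steps 362-372: resolve a pointing gesture to a space-map
    location, look up any invisible link stored there, and run its task."""
    # 1. Resolve the gesture to a location in space-map coordinates.
    target_location = space_maps.intersect(gesture_ray)          # assumed helper
    if target_location is None:
        return None

    # 2. Ask the link-locations database whether a link was stored at (or near) that spot.
    link = link_db.find_near(target_location, tolerance_m=0.25)  # assumed helper
    if link is None:
        return None

    # 3. Execute the instruction previously associated with the link.
    return task_runner.execute(link.task)                        # assumed helper
```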
- FIG. 4 illustrates an overall block diagram of one embodiment of the present systems and methods.
- a link creation component 401 may include a link creator subcomponent 430.
- link creation and link invocation may be carried out by separate systems, and in other embodiments they may be carried out by a single system.
- there is a camera location module 405. The user device provides the system an XYZ point that is translatable to the frame of reference used by the Space Map. The purpose of the camera location is to identify the approximate area of the space map in which the user is placing a new link, and thereby minimize the area of the space map retrieved and analyzed.
- the camera location module may be embodied in the user's personal device, such as a smartphone.
- data sufficient to specify a precise direction the camera was pointing when the image was made is available.
- limited information, such as an approximate elevation angle or cardinal direction, increases the speed and accuracy of the image processing by further restricting (e.g., in addition to restrictions based solely on location) the area retrieved and searched in the space map (e.g., it filters space-defining data).
- the camera orientation module may be embodied in the user's personal device, such as a smartphone.
- there is a user selected image module 410. When the user indicates that the camera image corresponds to the location desired for a new link, one or more camera images are stored for analysis. For example, the user may have selected a "Create gesture link here" button within a GUI, or may speak "Make a new Invisible Link" into a voice UI system of the user's personal device. Such camera images may optionally contain a Link Location Target visually superimposed on the camera's display (see FIG. 2B, discussed above). The link location target may be automatically placed in a default portion of the image (e.g., the center), or at any location manually indicated by the user (e.g., by touching the display screen).
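- As a minimal illustration of the manual placement case, the sketch below maps a touch on the camera preview to a Link Location Target in captured-image pixel coordinates. The function names and the no-letterboxing assumption are illustrative only.

```python
def touch_to_image_target(touch_xy, screen_size, image_size):
    """Map a touch on the camera preview (screen pixels) to a Link Location Target
    expressed in captured-image pixel coordinates. Assumes the preview shows the
    full image scaled to the screen, with no letterboxing."""
    tx, ty = touch_xy
    sw, sh = screen_size
    iw, ih = image_size
    return (tx / sw * iw, ty / sh * ih)

def default_target(image_size):
    """Default placement when the user gives no touch: the image center."""
    iw, ih = image_size
    return (iw / 2, ih / 2)

# Example: a touch at (540, 720) on a 1080x1440 preview of a 3024x4032 photo.
print(touch_to_image_target((540, 720), (1080, 1440), (3024, 4032)))   # (1512.0, 2016.0)
```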
- there is a link-task association module 434. For example, the user selects which computer-aided task is executed when the link is selected, either before or after the link location is determined.
- One example of such a module may be the Internet based service "If This Then That".
- Examples of computer aided tasks may include, but are not limited to: for a link associated with a front door, an instruction to enable a home security system; for a link associated with a window, an instruction to toggle powered blinds installed on the window; toggling lights; and/or the like.
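- One possible (and purely illustrative) realization of such a link-task association is a registry mapping Link IDs to task callables, as sketched below; the concrete functions and IDs are placeholders, not part of the disclosure.

```python
# Illustrative registry: each Link ID maps to a callable computer-aided task.
def enable_home_security():
    print("Arming home security system")

def toggle_blinds():
    print("Toggling powered blinds")

def toggle_lights():
    print("Toggling lights")

LINK_TASKS = {
    "front-door-link": enable_home_security,
    "window-link": toggle_blinds,
    "ceiling-link": toggle_lights,
}

def invoke(link_id: str) -> None:
    """Run the task associated with a selected link, if any."""
    task = LINK_TASKS.get(link_id)
    if task is not None:
        task()

invoke("window-link")   # prints "Toggling powered blinds"
```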
- there is a link modification module 436.
- a link modification module may comprise UI and logic allowing the user to modify the shape, size, location, orientation, task, or other attributes of existing or to-be-created Invisible Links.
- there is a link placement module 432.
- the logic of the link placement module 432 is set forth in an example embodiment for determining and storing an Invisible Link Location, including how a system can respond when it is unable to match a camera image to a space map.
- there is a space map datastore module 440.
- such a module may record multiple XYZ points in a shared frame of reference indicating the relative position of surface vertices of objects, such as walls, doors, windows, and/or the like.
- the points may be stored using 3D modeling, or the like.
- FIG. 5 illustrates one embodiment of space map wall vertices, shown as a wireframe viewed from the front and above.
- stationary location sensors and beacons can be employed as part of and/or representing link surfaces, especially when they are immobile and/or disclose their shape dimensions and orientation to the system.
- a beacon may be Apple's iBeacon, or the like.
- there is an invisible link locations datastore module 450.
- such a module uses standard data storage methods to maintain correlations between a Link ID and its location. This information may be kept separate, or integrated into more complex data structures, such as ones enumerating the computer-assisted task associated with the link.
- link invocation 455 may be performed.
- a live user body monitoring module 460 may comprise wearable devices or environmental sensor nets capable of quickly and accurately monitoring user pointing gestures. For example, the Myo, Bird, Microsoft Band, etc.
- there is a link selection module 470.
- a link selection module may comprise logic for differentiating between pointing gestures and other body motions.
- the logic may be the same as or similar to that disclosed in U.S. Provisional Patent Application No. 62/351,534, filed June 17, 2016, entitled METHOD AND SYSTEM FOR SELECTING IOT DEVICES USING SEQUENTIAL POINT AND NUDGE GESTURES, and PCT/US2017/036891 filed Jun 9, 2017 entitled METHOD AND SYSTEM FOR SELECTING IOT DEVICES USING SEQUENTIAL POINT AND NUDGE GESTURES, each of which is hereby incorporated by reference in its entirety.
- there is a link task invocation module 480.
- a link task invocation module 480 may comprise logic initiating system signals that cause a computer or the system to execute a task associated with an Invisible Link. It may be desirable in some embodiments to combine this module with the link selection module, such as for common and non-hazardous tasks like turning a light on and off. In other circumstances, such as activating or deactivating a security or safety-related system, having multiple distinct user intimations (e.g., intimations performed serially or simultaneously using different body parts), implemented as separate modules and/or process steps, is desirable.
- intimations that correspond to further task instructions can follow Link Selection, which may be useful for devices such as lights with controllable intensity and hue.
- Some non-limiting examples of the link task invocation module or step include, but are not limited to: Myo, Bird, Provisional Patent Application No. 62/351,534, and the like.
- link invocation 455 may comprise invoking a computer instruction based on a user gesturing towards an invisible button located on a physical surface. This may comprise the live user body monitoring 460 or a body gesture sensor detecting a user gesture towards a region on a physical surface. The appropriate link may then be selected 470, and an associated link task or computer instruction invoked 480. As discussed above, the task or computer instruction may have been associated with an invisible button or invisible link at a physical region during link creation 401, such as by matching an image of a physical surface within a physical space, captured by a device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space. This may then have caused information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
- the logic of the link placement module is set forth in an example embodiment for determining and storing an Invisible Link Location, including how a system can respond when it is unable to match a camera image to a space map.
- a single camera image is compared to data representing the user's vicinity to determine desired link location.
- Link placement 605 may be performed by: analyzing an image 610; analyzing the user's space 615; and matching the image and the user's space 620. If a match is found, the Link Location is stored 635. Otherwise, link placement 605 may try to match with additional image(s), e.g., in a 2nd Matching Attempt 625, a 3rd Matching Attempt 630, etc.
- analyzing the image 610 may comprise one or more subprocesses.
- analyzing the image may comprise the user selecting an image 732, detecting image edges within the user selected image 734, and determining vertices 736.
- Image edges may be detected by any of various well-known methods. Vertices within an image may be determined by any of various known techniques, including but not limited to: if a detected edge deviates by more than 15 degrees, generating a vertex on the edge at the center of the deviation.
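- A minimal sketch of the 15-degree deviation rule follows, assuming detected edges are already available as polylines (ordered lists of 2-D points); the edge detector itself (e.g., Canny plus contour tracing) is outside the sketch and the function name is an assumption.

```python
import math

def polyline_vertices(points, deviation_deg=15.0):
    """Walk an ordered polyline and emit a vertex wherever the edge direction
    deviates by more than `deviation_deg` degrees, placed at the bend point
    (the centre of the deviation)."""
    vertices = []
    for prev, bend, nxt in zip(points, points[1:], points[2:]):
        a1 = math.atan2(bend[1] - prev[1], bend[0] - prev[0])
        a2 = math.atan2(nxt[1] - bend[1], nxt[0] - bend[0])
        deviation = abs(math.degrees(a2 - a1))
        deviation = min(deviation, 360.0 - deviation)   # wrap to [0, 180]
        if deviation > deviation_deg:
            vertices.append(bend)
    return vertices

# Example: an L-shaped edge yields one vertex at the corner.
edge = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(polyline_vertices(edge))   # [(2, 0)]
```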
- analyzing the user's space 615 may comprise one or more subprocesses.
- analyzing the space 615 may comprise obtaining a camera location, such as from a camera location module 705.
- the camera location module 705 may provide the system an XYZ point that is translatable to the frame of reference used by the space map.
- the purpose of the camera location is to identify the approximate area of the space map in which the user is placing a new link, and thereby minimize the area of the space map retrieved and analyzed.
- the camera orientation module 708 may increase the speed and accuracy of analyzing the space by further restricting (e.g., in addition to restrictions based solely on location) the area retrieved and searched in the space map 710 (i.e., it filters space-defining data). The appropriate space map 710, or a portion thereof, may then be obtained.
- the subprocesses may also comprise establishing the user's POV relative to a map 715, rendering the map from the user's perspective 720, and in some embodiments may comprise determining vertices 725.
- the system may use the obtained and/or received information to determine the user's POV 715 relative to the space map 710.
- An embodiment of a user's position for establishing their POV relative to a space map is shown in FIGS. 8A and 8B, from two different angles.
- the system may render the map from the user's POV 720, as shown in FIG. 9.
- the vertices may be indicated to the user, while in other embodiments the vertices are not indicated to the user.
- Determining the vertices 725 may optionally be performed if the space map 710 being referenced uses a coordinate system that does not enumerate vertices, such as one based on photos rather than vectors. If vertices need to be determined, methods such as those discussed above for analyzing the image may be used.
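- As one concrete (and assumed) way to realize steps 715-720, the sketch below projects space-map vertices into a 2-D view from the user's POV using a basic pinhole camera model; the matrix conventions and focal length are illustrative choices.

```python
import numpy as np

def render_vertices_from_pov(map_vertices_xyz, cam_pos, cam_rotation, focal_px=1000.0):
    """Project 3-D space-map vertices into 2-D image coordinates as seen from the
    user's POV, using a simple pinhole camera model.

    map_vertices_xyz : (N, 3) vertices in the space map's frame of reference
    cam_pos          : (3,) camera position in the same frame
    cam_rotation     : (3, 3) rotation from world to camera coordinates
    Returns an (M, 2) array of projected vertices that lie in front of the camera.
    """
    pts = np.asarray(map_vertices_xyz, dtype=float) - np.asarray(cam_pos, dtype=float)
    cam = pts @ np.asarray(cam_rotation, dtype=float).T   # world -> camera frame
    in_front = cam[:, 2] > 1e-6                           # keep points ahead of the camera
    cam = cam[in_front]
    return focal_px * cam[:, :2] / cam[:, 2:3]            # perspective divide

# Example: two wall vertices seen from a camera at the origin looking down +Z.
walls = [(1.0, 0.5, 4.0), (-1.0, 0.5, 4.0)]
print(render_vertices_from_pov(walls, cam_pos=(0, 0, 0), cam_rotation=np.eye(3)))
```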
- matching the image and space 620 may comprise one or more subprocesses.
- the matching may comprise determining the highest correlation between vertices in the image and the map 742, and evaluating whether the match is sufficient 744 (e.g., exceeds a threshold).
- determining the correlation 742 may comprise: testing multiple alignments between image and map data; calculating a quantity and proximity rating for vertices alignment, for example "SUM(1 /distance between vertices)"; and selecting the alignment with the highest rating.
- a determined Correlation Rating Value between the image and the space is compared to a minimum threshold (e.g., is the match sufficient 744), where the threshold is intended to eliminate all but close matches (e.g., at least three vertices must be within ~2% angular deviation from the user POV, or five vertices within ~5% deviation, etc.).
- testing an alignment may comprise rendering a first test alignment of the 3-D model or space map, and evaluating the correlation of the test alignment to the image. If a match is insufficient, the system may subsequently test a second (or further) test alignment of the 3-D model to the image.
- in some embodiments, if a match is sufficient, the link location may be stored 635. In some embodiments, if the match is not sufficient, a second attempt 746 may be performed. After a successful second attempt 748, a link location may be stored 635.
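- The sketch below illustrates the quantity-and-proximity rating and threshold test described above (742-744), with the "SUM(1/distance)" rating taken from the text; the alignment generator, the nearest-vertex pairing, and the threshold value are assumptions.

```python
import math

def proximity_rating(image_vertices, map_vertices_2d):
    """Quantity-and-proximity rating for one alignment: pair each image vertex with
    its nearest rendered map vertex and accumulate SUM(1 / distance)."""
    rating = 0.0
    for ix, iy in image_vertices:
        d = min(math.dist((ix, iy), (mx, my)) for mx, my in map_vertices_2d)
        rating += 1.0 / max(d, 1e-6)     # guard against a zero distance
    return rating

def best_alignment(image_vertices, candidate_alignments, threshold=1.0):
    """Test multiple candidate alignments (each a list of rendered 2-D map vertices),
    keep the highest-rated one, and report whether it clears the match threshold."""
    rated = [(proximity_rating(image_vertices, cand), cand) for cand in candidate_alignments]
    best_rating, best = max(rated, key=lambda rc: rc[0])
    return best, best_rating, best_rating >= threshold

# Example: the first candidate alignment is close to the image vertices, the second is not.
img_v = [(100, 100), (300, 120), (200, 400)]
cands = [[(101, 102), (299, 121), (201, 398)], [(400, 400), (500, 500), (600, 600)]]
print(best_alignment(img_v, cands))
```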
- storing the link location 635 may comprise one or more subprocesses.
- storing the link location 635 may comprise: calculating map coordinates for the link target reference point(s) 752.
- three vertices may be used to establish a link location relative to the map.
- for example, a link target reference point may be vertex B2 (e.g., the upper right corner of a door frame).
- the calculated map coordinates may then be used to store the link location in an invisible link locations database 754.
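- One assumed way to calculate such map coordinates is to express the link target as barycentric (affine) weights of three matched image vertices and apply the same weights to the corresponding space-map vertices, consistent with the three-vertex example of FIG. 10; the specific math below is an illustration, not the claimed method.

```python
import numpy as np

def target_to_map_coords(target_px, image_tri, map_tri):
    """Express the link target as barycentric weights of three matched image
    vertices, then apply the same weights to the corresponding space-map vertices.

    target_px : (x, y) target location in the image
    image_tri : three matched image vertices, shape (3, 2)
    map_tri   : the corresponding space-map vertices, shape (3, 3)
    """
    a, b, c = (np.asarray(p, dtype=float) for p in image_tri)
    # Solve target = a + u*(b - a) + v*(c - a) for the weights (u, v).
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, np.asarray(target_px, dtype=float) - a)
    ma, mb, mc = (np.asarray(p, dtype=float) for p in map_tri)
    return ma + u * (mb - ma) + v * (mc - ma)

# Example: a target midway between two door-frame vertices maps midway between them in 3-D.
img = [(100, 100), (300, 100), (100, 400)]
space = [(0, 0, 2.0), (1.0, 0, 2.0), (0, 0, 0.5)]
print(target_to_map_coords((200, 100), img, space))   # -> [0.5 0.  2. ]
```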
- FIG. 11 illustrates a block diagram of one embodiment of a second matching attempt.
- in some cases, the first matching attempt, as illustrated in relation to FIG. 7, does not successfully determine a match.
- a second matching attempt 746 may occur.
- a second attempt 625 is initiated. The user is prompted to select the target again 1110, ideally but not necessarily from further away, with the desired link location remaining in view of the camera.
- the process may analyze the second image 1115, as discussed above in relation to FIG. 7.
- the process may then analyze the second user's space 1120, as discussed above in relation to FIG. 7.
- the process may then attempt to match the analyzed second image with the second user's space 1125, as discussed above in relation to FIG. 7. From the attempted matching, the process may, as discussed above in relation to FIG. 7, determine that the correlation of the match is sufficient 1130, or not.
- the process may calculate map coordinates for the vertices in the second image 1135, as the map coordinates for the vertices were calculated for the first image in relation to FIG. 7.
- the map coordinates for the vertices of the second image may be determined prior to the correlation analysis.
- the process may attempt to match 1140 the vertices of the second image with the vertices of the first image.
- the same logic as the first matching attempt, discussed previously in relation to FIG. 7, may be used.
- the second matching attempt uses the new image, with the space map data replaced by the original image data, thereby creating a chain of images each at least partially matched to the previous image.
- the process may then return 1145 to the first attempt.
- a third matching attempt 1132 may be made, as discussed below in relation to FIG. 12.
- FIG. 12 illustrates a block diagram of one embodiment of a third matching attempt 630.
- in some cases, the second matching attempt, as illustrated in relation to FIG. 11, does not successfully determine a match.
- a third matching attempt 1205 may occur.
- the process for a third matching attempt may be comparable to a second matching attempt, but with different images.
- the user is prompted to select the target again 1210, ideally but not necessarily from further away, with the desired link location remaining in view of the camera.
- the process may analyze the third image 1215, as discussed above in relation to FIG. 7.
- the process may then analyze the third user's space 1220, as discussed above in relation to FIG. 7.
- the process may then attempt to match the analyzed third image with the third user's space 1225, as discussed above in relation to FIG. 7.
- the process may, as discussed above in relation to FIG. 7, determine that the correlation of the match is sufficient 1230, or not.
- the process may calculate map coordinates for the vertices in the third image 1235, as the map coordinates for the vertices were calculated for the first image in relation to FIG. 7.
- the map coordinates for the vertices of the third image may be determined prior to the correlation analysis.
- the process may attempt to match the vertices of the third image with the vertices of the second image 1240.
- the same logic as the first matching attempt, discussed previously in relation to FIG. 7, may be used.
- the third matching attempt uses the new image, with the space map data replaced by the second image data, thereby creating a chain of images each at least partially matched to the previous image.
- the process may then return 1245 to the second attempt.
- the process may terminate. In some embodiments (not shown), a fourth attempt may be made, and so on.
- any arbitrary number of images may be used to produce a single link location.
- the matching algorithm may be implemented as a recursive function, with the ability to skip processing images which are determined not to form a continuous chain of at least partially matching images.
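- A hedged sketch of that chained idea follows: the link target was selected in the first image, but only a later image (taken from further away) may match the space map, so the target is transferred image-to-image along the chain and then into map coordinates. The helper callables `match_image_to_map` (image to map transform, or None) and `transfer_point` (carrying a point between two overlapping images) are assumptions standing in for the matching logic described above.

```python
def resolve_target_via_chain(images, target_in_first, match_image_to_map, transfer_point):
    """Sketch of chained matching: find the first image in the capture chain that
    matches the space map, carry the target point forward through each pair of
    partially matching images, then convert it to map coordinates.

    images             : images in capture order (images[0] holds the selected target)
    target_in_first    : (x, y) target location in images[0]
    match_image_to_map : image -> callable mapping image points to map coords, or None
    transfer_point     : (point, from_img, to_img) -> point in to_img
    """
    for idx, candidate in enumerate(images):
        map_transform = match_image_to_map(candidate)
        if map_transform is None:
            continue
        # Carry the target forward through the chain of partially matching images.
        point = target_in_first
        for a, b in zip(images, images[1:idx + 1]):
            point = transfer_point(point, a, b)
        return map_transform(point)
    return None   # no image in the chain matched the space map
```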
- the location of a link location target (such as in FIGS. 2 A and 2B) can be determined by the user touching a location on a touch-sensitive screen displaying a camera image.
- the size and shape of invisible links may be implemented in various ways.
- the size of invisible links may be arbitrary, and may be set before or after placement by the user.
- invisible links may also automatically decrease in size to ensure they do not overlap with other Selection Regions from the user's current POV.
- invisible links may change size depending on the user's proximity to them (e.g., when the user is further from an invisible link, the link may expand to facilitate their selection-by-pointing; or conversely further away links may shrink).
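- A purely illustrative sketch of such a proximity-dependent sizing rule is shown below; the growth rule, the reference distance, and the clamp are arbitrary assumptions, not values from the disclosure.

```python
def effective_link_radius(base_radius_m, user_distance_m,
                          reference_distance_m=2.0, max_scale=3.0):
    """Illustrative proximity rule: beyond a reference distance the link grows
    proportionally (to ease selection-by-pointing), clamped to a maximum scale."""
    scale = max(1.0, user_distance_m / reference_distance_m)
    return base_radius_m * min(scale, max_scale)

# A 0.2 m link viewed from 6 m away behaves as if it were 0.6 m in radius.
print(effective_link_radius(0.2, 6.0))   # 0.6
```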
- an invisible link controlling an overhead hallway light may correspond to the shape of the hallway ceiling, which may be created by the user combining multiple images, such as by having a camera record a video as the user points the camera at the ceiling and walks along the hallway.
- having invisible link placement based on physical surfaces may allow such surfaces to behave as a 2D surface in a 3D real world environment. That is, they can maintain a single orientation based on the surface chosen in the camera image, and change apparent shape appropriate to a viewer's change in relative position to the surface (e.g., change in viewer's POV).
- FIGS. 13A and 13B represent two example links, as previously described, changing apparent shape as the position from which they are viewed changes (the positions shown are not ones expected of users, and are selected for ease of explanation).
- FIG. 13A illustrates an exemplary POV above a ceiling, in front of a wall. As shown, the invisible links 1350 and 1355 are seen as being round from this POV.
- FIG. 13B illustrates an exemplary POV above a ceiling, but to the side of a wall.
- the invisible links 1353 and 1357 are no longer round, but rather ovoid (or the like).
- Other shapes of invisible link may similarly appear as different shapes from different POVs (e.g., a square invisible link in one perspective appearing as a rhombus from another perspective, etc.).
- these illustrated POVs may not be an actual user's POV, but rather are meant to illustrate that a link may act as a 2D surface in a 3D real world environment, where a link's apparent shape may change when interacted with from different positions.
- an invisible link may be related to task lighting. For example, a user may capture a single image of the center of a top surface of a kitchen counter. The user may place a link in that location on the counter, such that when the link is invoked, it causes the kitchen's illumination system to adjust the lighting for that particular area (e.g., over the top surface of the counter, such as by brightening a light directly over the counter).
- an invisible link may be related to ambient lighting.
- a user may capture multiple images covering all of a hallway's ceiling. The user may use these images to create a single large link that may be selected when in the hallway, or near a doorway leading into the hallway.
- the link may be such that when invoked, the system may adjust the lighting to illuminate the entire hallway.
- in one embodiment, a system or method creates and records real-world Invisible Link locations using (a) at least one user selected image, (b) the location of a camera producing the at least one image, and (c) a data model of a space in the user's immediate vicinity.
- a method comprising: analyzing an image obtained from a user's digital device; analyzing a space associated with a location of the user; determining a correlation value between the analyzed image and the analyzed space; and in response to a determination that the determined correlation value exceeds a correlation threshold: storing, at a memory location, information associating an invisible link with a physical location based at least in part on the image obtained from the user's digital device.
- the method may include wherein analyzing the image further comprises: obtaining data indicating a target location in the obtained image for the invisible link; detecting edges in the obtained image; and determining vertices in the obtained image.
- the method may include wherein determining vertices in the obtained image further comprises, where a detected edge deviates by greater than a deviation threshold, generating a vertex on the detected edge at a center of the deviation.
- the method may include wherein the deviation threshold is 15 degrees.
- the method may include wherein analyzing the space associated with the location of the user further comprises: receiving, from the user's digital device, at least a camera orientation and a camera location; obtaining, based at least in part on the camera orientation and camera location, a space map of the user's vicinity; determining a point-of-view (POV) of the user relative to the obtained space map; rendering an intermediate map, based at least in part on the determined POV and the obtained space map; and obtaining vertices of the rendered intermediate map.
- the method may include wherein obtaining the vertices of the rendered intermediate map further comprises: detecting edges in the rendered intermediate map; and where a detected edge deviates by greater than a deviation threshold, generating a vertex on the detected edge at a center of the deviation.
- the method may include wherein the deviation threshold is 15 degrees.
- the method may include wherein obtaining vertices of the rendered intermediate map comprises extracting data indicative of vertices from the obtained space map of the user's vicinity.
- the method may include wherein determining the correlation value between the analyzed image and analyzed space further comprises: determining a correlation value for at least a first test alignment between the determined vertices in the obtained image and the obtained vertices of the rendered intermediate map; and selecting a best alignment based on a highest correlation value of the at least first test alignment, wherein the determined correlation value is the correlation value of the selected best alignment.
- determining a correlation value for the first test alignment comprises calculating a quantity and proximity value for the alignment of related vertices between the obtained image and the rendered intermediate map.
- the correlation threshold comprises at least three vertices being within a 2% angular deviation from the user's POV.
- the method may include wherein the correlation threshold comprises at least five vertices being within a 5% angular deviation, from the user's POV.
- the method may include wherein storing the invisible link further comprises: calculating a set of map coordinates relative to the selected best alignment based at least in part on the target location in the obtained image for the invisible link; and storing a data element associating together at least i) the set of map coordinates for the invisible link, ii) a user-selected computer-aided task associated with the invisible link, and iii) an invisible link ID.
- there is a method comprising: aligning a camera of a user's digital device such that a superimposed target indicates a location of an invisible link to be created by the user; capturing an image with the camera of the location of the invisible link to be created; obtaining data representing the user's location; associating an executable computer-aided task with the invisible link to be created; and in response to a determination that the image matches a retrieved 3D map of a space in the user's vicinity: translating the location of the invisible link to be created into coordinates within the retrieved 3D map of the space in the user's vicinity; and recording the associated executable computer-aided task and the translated coordinates of the invisible link for later reference.
- the method may include wherein the captured image includes at least one physical surface.
- the method may include wherein determining that the image matches the retrieved 3D map is based at least in part on a value determined from matching edge-detected features of the image to features of the retrieved 3D map.
- there is a method comprising: receiving an invisible link creation message at a Link Placement module from a user's digital device, wherein the invisible link creation message comprises at least the user's location; obtaining a 3D space map of the user's vicinity; receiving a new link location message at the Link Placement module from the user's digital device, wherein the new link location message comprises at least a first image; recording an invisible link comprising at least a computer instruction and a link location.
- the method may further comprise obtaining at least one device or beacon location in the user's vicinity.
- the method may include wherein the new link location message further comprises an orientation of a camera of the user's digital device associated with the first image.
- the method may include wherein the new link location message further comprises at least a second image.
- the method may include wherein the new link location message further comprises a first orientation of a camera of the user's digital device associated with the first image, and a second orientation associated with the second image.
- the method may include wherein the invisible link further comprises a link shape.
- the method may include wherein recording the invisible link comprises communicating the invisible link to an Invisible Link Locations module.
- a method comprising: obtaining an image of a surface region and a user's location from a user's device; obtaining a space map related to the user's location from a space maps database; matching the obtained image with the obtained space map; obtaining a desired invisible link location from the user's device relative to the obtained image; and mapping the invisible link location relative to the obtained image to a space-map location of the obtained space map.
- the method may further comprise storing the mapped invisible link location at a location database, wherein the mapped invisible link location is stored with an identifier.
- the method may further comprise associating an executable computer-aided task with the invisible link.
- a method comprising: obtaining an image of a physical surface within a physical space; matching the image of the physical surface within the physical space to a 3-D model of the physical space; and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
- the method may include wherein the image is obtained from a user's mobile device.
- the method may include wherein the image is captured by the user's mobile device at a physical location relative to the physical surface.
- the stored mapping information includes a unique identifier for later retrieval responsive to data captured by a body gesture sensor.
- a method comprising: obtaining sensor data based on a user's point gesture towards a location; determining, based at least in part on the obtained sensor data, the location pointed to by the user; obtaining from an invisible link location database, based at least in part on the determined location pointed to by the user, at least a previously stored instruction associated with the location pointed to by the user; and executing the previously stored instruction associated with the location pointed to by the user.
- the method may include wherein the location pointed to by the user is determined in a coordinate system of a space map retrieved from a space maps database.
- a method of invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface comprising: invoking a computer instruction by detecting with a body gesture sensor a user gesture towards a region on a physical surface, said instruction and region having been established earlier by: matching an image of a physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space, and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
- the method may include wherein mapping of the location within the image to the surface region is based on matching edge-detected features of the image to features of the 3-D model.
- the method may include wherein the user's mobile device orientation is received to reduce the search space between the image and the 3-D model.
- the method may include wherein the user indicates placement of an invisible button by touching the device display at the desired location within the image.
- the method may include wherein the stored mapping information includes a unique identifier for later retrieval responsive to data captured by the body gesture sensor.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: creating and recording real-world Invisible Link locations using at least one user selected image, the location of a camera producing the at least one image, and a data model of a space in the user's immediate vicinity.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: analyzing an image obtained from a user's digital device; analyzing a space associated with a location of the user; determining a correlation value between the analyzed image and the analyzed space; and in response to a determination that the determined correlation value exceeds a correlation threshold: storing, at a memory location, information associating an invisible link with a physical location based at least in part on the image obtained from the user's digital device.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: aligning a camera of a user's digital device such that a superimposed target indicates a location of an invisible link to be created by the user; capturing an image with the camera of the location of the invisible link to be created; obtaining data representing the user's location; associating an executable computer-aided task with the invisible link to be created; and in response to a determination that the image matches a retrieved 3D map of a space in the user's vicinity: translating the location of the invisible link to be created into coordinates within the retrieved 3D map of the space in the user's vicinity; and recording the associated executable computer-aided task and the translated coordinates of the invisible link for later reference.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: receiving an invisible link creation message at a Link Placement module from a user's digital device, wherein the invisible link creation message comprises at least the user's location; obtaining a 3D space map of the user's vicinity; receiving a new link location message at the Link Placement module from the user's digital device, wherein the new link location message comprises at least a first image; and recording an invisible link comprising at least a computer instruction and a link location.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining an image of a surface region and a user's location from a user's device; obtaining a space map related to the user's location from a space maps database; matching the obtained image with the obtained space map; obtaining a desired invisible link location from the user's device relative to the obtained image; and mapping the invisible link location relative to the obtained image to a space-map location of the obtained space map.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining an image of a physical surface within a physical space; matching the image of the physical surface within the physical space to a 3-D model of the physical space; and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining sensor data based on a user's point gesture towards a location; determining, based at least in part on the obtained sensor data, the location pointed to by the user; obtaining from an invisible link location database, based at least in part on the determined location pointed to by the user, at least a previously stored instruction associated with the location pointed to by the user; and executing the previously stored instruction associated with the location pointed to by the user.
- a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions for invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface, including: invoking a computer instruction by detecting with a body gesture sensor a user gesture towards a region on a physical surface, said instruction and region having been established earlier by: matching an image of a physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space, and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
- Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
- FIG. 14 is a system diagram of an exemplary WTRU 102, which may be employed as a user's digital device in embodiments described herein.
- the WTRU 102 may include a processor 118, a communication interface 119 including a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and sensors 138.
- the processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
- the processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.
- the processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122.
- the transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 116.
- the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals.
- the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples.
- the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
- the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
- the transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122.
- the WTRU 102 may have multi-mode capabilities.
- the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
- the processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
- the processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128.
- the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132.
- the non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
- the removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
- the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
- the processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102.
- the power source 134 may be any suitable device for powering the WTRU 102.
- the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel- zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li -ion), and the like), solar cells, fuel cells, and the like.
- the processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102.
- the WTRU 102 may receive location information over the air interface 116 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
- the processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
- the peripherals 138 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
- FIG. 15 depicts an exemplary network entity 190 that may be used in embodiments of the present disclosure.
- network entity 190 includes a communication interface 192, a processor 194, and non-transitory data storage 196, all of which are communicatively linked by a bus, network, or other communication path 198.
- Communication interface 192 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 192 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 192 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 192 may be equipped at a scale and with a configuration appropriate for acting on the network side— as opposed to the client side— of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like).
- communication interface 192 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
- Processor 194 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
- Data storage 196 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non- transitory data storage deemed suitable by those of skill in the relevant art could be used.
- data storage 196 contains program instructions 197 executable by processor 194 for carrying out various combinations of the various network-entity functions described herein.
- Examples of computer-readable storage media include a read-only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
- a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Computer instructions may be invoked by a user gesturing towards an invisible button located on a physical surface. The gesture by the user towards a region on a physical surface may be detected with a body gesture sensor, where the instruction and region have been established earlier. The instruction and region may be established earlier by: matching an image of the physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space; and causing information to be stored that maps a location within the image with the corresponding physical surface and the computer instruction.
Description
METHOD AND SYSTEM FOR CREATING INVISIBLE REAL-WORLD LINKS TO COMPUTER-AIDED TASKS WITH CAMERA
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is a non-provisional filing of, and claims benefit under 35 U.S.C. §119(e) from, U.S. Provisional Patent Application Serial No. 62/368,809, filed July 29, 2016, entitled "METHOD AND SYSTEM FOR CREATING INVISIBLE REAL-WORLD LINKS TO COMPUTER-AIDED TASKS WITH CAMERA", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] This disclosure relates to systems and methods for user interfaces. More specifically, this disclosure relates to systems and methods for gesture-based user interfaces.
BACKGROUND
[0003] Theoretically, non-augmented-reality user interface (UI) systems can provide convenience and safety to occupants of smart houses and other engineered spaces by allowing the occupants to select devices using body gestures, thereby maintaining their natural first person point of view.
[0004] Current gesture-based UI systems are close to providing users with convenient control of devices as they move about their homes and offices. For example: Myo, Bird, World or Air Pointing (see "Air pointing: Design and evaluation of spatial target acquisition with and without visual feedback." A. Cockburn et al., Int. J. Human-Computer Studies 69 (2011) 401-414), and Virtual Shelves (see "Virtual Shelves: Interactions with Orientation Aware Devices." F. Chun Yat Li et al., 2009, in Proceedings of the 22nd annual ACM symposium on User interface software and technology (UIST '09), 125-128), and/or the like.
[0005] Each of these systems has individual flaws, as well as shared ones, such as the need to either: associate user-invocable, computer-activating link locations (hot-spots) with the current location of physical devices (e.g., IoT devices reporting their location), or indicate new link locations using a dedicated GUI displaying a diagram of the real-world area (a 2D or 3D rendering of a space map), with that diagram being displayed from other than the user's current real-world point of view.
[0006] Systems using a user's point of view (POV) to control objects and tasks (e.g., gesture-based UIs) do not allow users to easily associate an arbitrary real-world space with a computer-aided task (e.g., placement of Invisible Links).
SUMMARY
[0007] Described herein are systems and methods related to creating invisible real-world links to computer-aided tasks with a camera.
[0008] In one embodiment, there is a method of invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface, comprising: invoking a computer instruction by detecting, with a body gesture sensor, a user gesture towards a region on a physical surface. The instruction and region may have been established earlier by: matching an image of the physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space; and causing information to be stored that maps a location within the image with the corresponding physical surface and the computer instruction.
[0009] In some embodiments, the disclosed systems and methods create and record new real- world Invisible Link locations using user selected images, the location of the camera producing the image, and a data model of the space in the user's immediate vicinity. A general block diagram of one embodiment of the present systems and methods is illustrated in FIG. 1. A link placement module 1010 may receive inputs, such as a user selected image 1030, a camera location 1020, and a space map 1040 of the user's vicinity. After processing and analysis, the module may output and/or store a created invisible link 1050.
[0010] In some embodiments, the disclosed system and method creates and records new real- world Invisible Link locations (e.g., areas in the real-world which a user can select and invoke from their current POV and thereby cause a computer-aided task to occur) using:
- user-selected image(s),
- the location of the camera producing the image, and
- a data model of the space that includes object surfaces visible in the camera image (e.g., walls, windows and doors).
[0011] In one embodiment, disclosed are systems and methods related to creating and recording real-world Invisible Link locations using at least one user-selected image, the location of a camera producing the at least one image, and a data model of a space in the user's immediate vicinity.
[0012] In one embodiment, an image obtained from a user's digital device is analyzed. A space associated with a location of the user is also analyzed. A correlation value between the analyzed image and analyzed space is determined. In response to a determination that the determined correlation value exceeds a correlation threshold, an invisible link is stored, the invisible link at a location based at least in part on the image obtained from the user's digital device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] A more detailed understanding may be had from the following description, presented by way of example in conjunction with the accompanying drawings, wherein:
[0014] FIG. 1 is a block diagram of one embodiment of the present systems and methods for creating invisible links.
[0015] FIGS. 2A-2B illustrate one embodiment of a user aligning a camera so that a superimposed link location target indicates where the user wants a new link to be established.
[0016] FIG. 2C illustrates one embodiment of where a camera image, its location, and optionally the camera's orientation may be compared to a model or map of space in the user's vicinity.
[0017] FIG. 2D illustrates one embodiment of displaying to a user a new link which has been recorded for future use.
[0018] FIG. 2E illustrates one embodiment of how a user may interact with a previously created button or link by gesturing towards the region in relation to which the link was previously stored.
[0019] FIG. 3A illustrates a message flow diagram of one embodiment of the systems and methods disclosed herein.
[0020] FIG. 3B illustrates a message flow diagram of another embodiment of the systems and methods disclosed herein.
[0021] FIG. 4 illustrates a block diagram of one embodiment of creating an invisible link and utilizing a created invisible link.
[0022] FIG. 5 illustrates one embodiment of space map wall vertices, shown as a wireframe viewed from the front and above.
[0023] FIG. 6 illustrates a block diagram of one embodiment of the logic for a link placement module.
[0024] FIG. 7 illustrates an expanded block diagram of one embodiment of the link placement module.
[0025] FIGS. 8A-8B illustrate one embodiment of analyzing the user's space and establishing the user's POV relative to a space map.
[0026] FIG. 9 illustrates one embodiment of rendering space map wall vertices from a user POV.
[0027] FIG. 10 illustrates an exemplary embodiment of three vertices used to establish a link location relative to a rendered space map.
[0028] FIG. 11 illustrates a block diagram for one embodiment of a second matching attempt.
[0029] FIG. 12 illustrates a block diagram for one embodiment of a third matching attempt.
[0030] FIGS. 13A-13B represent two example invisible links, with different apparent shapes as the POV they are viewed from changes, where FIG. 13A illustrates an exemplary POV above a ceiling, in front of a wall, and FIG. 13B illustrates an exemplary POV above a ceiling, but to the side of a wall.
[0031] FIG. 14 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be employed as a user's digital device in some embodiments.
[0032] FIG. 15 illustrates an exemplary network entity that may be employed in some embodiments.
DETAILED DESCRIPTION
[0033] A detailed description of illustrative embodiments will now be provided with reference to the various Figures. Although this description provides detailed examples of possible implementations, it should be noted that the provided details are intended to be by way of example and in no way limit the scope of the application.
[0034] Note that various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules. As used herein, a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation. Each described module may also include executable instructions for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
[0035] In some embodiments, the disclosed systems and methods create and record new real- world Invisible Link locations using user selected images, the location of the camera producing the image, and a data model of the space in the user's immediate vicinity. A general block diagram of one embodiment of the present systems and methods is illustrated in FIG. 1. A link placement module may receive inputs, such as a user selected image, a camera location, and a space map of the user's vicinity. After processing and analysis, the module may output and/or store a created invisible link.
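By way of illustration only, the inputs and output of such a link placement module might be modeled as in the following Python sketch; the class and field names are assumptions introduced here for readability and do not appear in the disclosure.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class LinkPlacementInputs:
    """Hypothetical container for the module's inputs."""
    user_selected_image: bytes                      # image chosen by the user
    camera_location: Point3D                        # XYZ in the space-map frame of reference
    camera_orientation: Optional[Tuple[float, float, float]] = None  # optional pose hint
    space_map_region: Optional[object] = None       # portion of the space map near the user

@dataclass
class InvisibleLink:
    """Hypothetical record produced when a new link is created."""
    link_id: str
    map_coordinates: Point3D                        # link location in space-map coordinates
    link_shape: Optional[List[Point3D]] = None      # optional outline on the physical surface
    task_id: Optional[str] = None                   # computer-aided task to invoke later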
[0036] One embodiment of the systems and methods disclosed herein relates to FIGS. 2A-2E, where a user desires to place a new "invisible link". FIGS. 2A-2B illustrate one embodiment of a user 205 aligning a camera of the user's digital device 210 (which is displaying a rendered image or model of the physical space) so that a superimposed link location target 227 indicates where on a real-world surface (e.g., desired target location 225 in FIG. 2A) the user wants a new link to be established. In some embodiments, the superimposed target may be an overlay on a camera image preview screen, or the like.
[0037] FIG. 2C illustrates one embodiment of where a camera image, its location, and optionally the camera's orientation may be compared to a model or map of space in the user's vicinity, and if a match is found, the new link location may be translated into map coordinates. FIG. 2C shows two space map vertices (B2 and C3) (rendered from a POV similar to the user's) associated with matching vertices (230, 232) in the camera image (shown on the user's device 210).
[0038] FIG. 2D illustrates one embodiment of displaying to a user a new link which has been recorded for future use. As shown in FIG. 2D, once links are recorded, they may be displayed to the user as an augmented reality overlay of the image on the user's device 210, for example as an indicated new link 240 and a previously stored link 245. The displayed links may in some embodiments display or otherwise indicate a stored associated function, such as a stored link 250 indicating that it may enable home security, or a stored link 255 indicating that it may toggle blinds, etc.
[0039] FIG. 2E illustrates one embodiment of how a user 270 (either the original user or a later authorized user) may interact with a previously created button or link, such as a link 255, for example by gesturing 280 towards the region (e.g., window) in relation to which the link 255 was previously stored. Such a gesture may be detected by a body gesture sensor and cause invocation of an instruction which was previously stored in association with the link 255, such as toggling the window's blinds open or closed.
[0040] Some advantages of the disclosed systems and methods include, but are not limited to the following.
- Exemplary embodiments provide a simple method of placing computer-invocable UI links in real-world locations.
- Exemplary embodiments allow a user to maintain a natural first person POV.
- Exemplary embodiments use physical surfaces to place links, allowing users to behave naturally by pointing to a link from various locations and having the system reliably determine whether they intended to select the link. In contrast, an arbitrary XYZ location in a room is much harder to recall and indicate from different room locations.
- Exemplary embodiments can be implemented using common handheld or body-mounted networked cameras, such as those in smartphones or other digital devices.
[0041] In some embodiments, the systems and methods are well suited to spaces with a mix of complex and simple-to-model shapes, such as is typical for illuminated building exteriors and interiors, or the like.
[0042] Additionally, there are various and increasing ways to build space maps, especially for building interiors with standard features, such as rectilinear walls, windows, doors, furniture, and/or the like. For example, photogrammetry or other techniques well known to those of ordinary skill in the art may be used.
[0043] One embodiment of the systems and methods disclosed herein is set forth in relation to FIG. 3A, illustrating a sequence chart for one embodiment. With a first message 325, when the user invokes link creation, the system generates a message from the user's hand-held or body-mounted device(s) 305, containing at least: the user's current location. With a message 327, a link placement module 310 may generate a request for space maps of the user's vicinity, such as from a Space Maps database 315. With a message 329, the link placement module 310 may receive the requested space maps, such as from the space maps database 315. With a message 331, optionally in some embodiments, the link placement module 310 may request the device and/or beacon locations in the user's vicinity, such as from a Device and Beacon Locations database 317. With an optional message 333, the link placement module 310 may receive the device and/or beacon locations from the Device and Beacon Locations database 317. With a message 335, the user's personal device 305 may indicate a new link location by selecting an image in its camera display. The message may include, but is not limited to: camera image; (optional) camera orientation; (optional) multiple camera images; (optional) multiple camera positions; and/or the like.
[0044] With a message 337, the link placement module 310 may store the Link Location, such as in an "Invisible Link Locations" database 320, by saving at least: a Link ID; a Link Location; optionally a Link Shape; and/or the like.
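A hedged sketch of how the record saved by message 337 could be persisted, assuming a simple SQLite-backed "Invisible Link Locations" store; the schema, table, and function names here are illustrative assumptions rather than part of the disclosure.

import json
import sqlite3
from typing import Optional, Sequence, Tuple

def store_link_location(db_path: str,
                        link_id: str,
                        link_location: Tuple[float, float, float],
                        link_shape: Optional[Sequence[Tuple[float, float, float]]] = None) -> None:
    """Persist a newly created invisible link (illustrative schema only)."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS invisible_links "
        "(link_id TEXT PRIMARY KEY, location TEXT NOT NULL, shape TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO invisible_links VALUES (?, ?, ?)",
        (link_id, json.dumps(link_location), json.dumps(list(link_shape)) if link_shape else None),
    )
    conn.commit()
    conn.close()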
[0045] FIG. 3B illustrates a message flow diagram of another embodiment of the systems and methods disclosed herein. A user's personal device 305 may have its camera pointed at a surface region 340. A captured picture (or video) and the user's location may be communicated 342 to a link management system 312, which may request space maps 344 at the user's location from a space maps database 315. The space maps database 315 may retrieve 346 space maps, and communicate the retrieved space maps 348 to the link management system 312. The link management system 312 may match the received picture or video with a received space map 350, and confirm 352 to the user's device that a corresponding space map was found. The user may then touch the screen of their device at a location of a desired invisible button 354, and the selected button location with regard to the picture or video may be sent 356 to the link management system 312. The link management system 312 may map the button image location to a corresponding space map location 358, and store 360 the button space map location and an identifier in an invisible link locations database 320.
[0046] At some later time, the user may point or otherwise gesture at a previously created invisible button 362, such as with a body gesture sensor 307. A detected gesture may be communicated to the link management system 312, which may communicate the detected gesture 364 to the space maps database 315 to retrieve 366 the location pointed to or gestured at by the user. The link management system 312 may then communicate 368 with the link locations database 320 to determine whether an invisible button was previously created at this determined location, and if so may retrieve 370 the corresponding invisible button identifier. The link management system 312 may then execute 372 an instruction associated with the pointed to or gestured at invisible button or link.
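The lookup half of this flow, from detected gesture to stored link, could be sketched as below, assuming the gesture sensor and space map together yield the space-map location the user pointed at; the function and parameter names are hypothetical. The returned identifier would then be used to retrieve and execute the instruction stored for that link, as described above.

from typing import Dict, Optional, Tuple

Point3D = Tuple[float, float, float]

def lookup_invisible_link(pointed_location: Point3D,
                          stored_links: Dict[str, Point3D],
                          tolerance_m: float = 0.25) -> Optional[str]:
    """Return the identifier of the closest previously stored link within
    `tolerance_m` of the pointed-at space-map location, or None on a miss."""
    best_id, best_dist = None, tolerance_m
    for link_id, location in stored_links.items():
        dist = sum((a - b) ** 2 for a, b in zip(pointed_location, location)) ** 0.5
        if dist <= best_dist:
            best_id, best_dist = link_id, dist
    return best_id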
[0047] FIG. 4 illustrates an overall block diagram of one embodiment of the present systems and methods. In some embodiments, there may be a link creation component 401 and a link invocation component 455. The link creation component may include a link creator subcomponent 430. In some embodiments, link creation and link invocation may be carried out by separate systems, and in other embodiments they may be carried out by a single system.
[0048] In some embodiments, there is a camera location module 405. The user device provides the system with an XYZ point that is translatable to the frame of reference used by the Space Map. The purpose of the camera location is to identify the approximate area of the space map in which the user is placing a new link, and thereby minimize the area of the space map retrieved and analyzed. In some embodiments, the camera location module may be embodied in the user's personal device, such as a smartphone.
[0049] In some embodiments, there is a camera orientation module 407. Ideally, data sufficient to specify a precise direction the camera was pointing when the image was made is available. However, even limited information, such as an approximate elevation angle or cardinal direction, increases the speed and accuracy of the image processing by further restricting (e.g., in addition to restrictions based solely on location) the area retrieved and searched in the space map (e.g., filtering space-defining data). In some embodiments, the camera orientation module may be embodied in the user's personal device, such as a smartphone.
[0050] In some embodiments, there is a user-selected image module 410. When the user indicates that the camera image corresponds to the location desired for a new link, one or more camera images are stored for analysis. For example, the user may have selected a "Create gesture link here" button within a GUI, or may have spoken "Make a new Invisible Link" into a voice UI system of the user's personal device. Such camera images may optionally contain a Link Location Target visually superimposed on the camera's display (see FIG. 2B, discussed above). The link location target may be automatically placed in a default portion of the image (e.g., the center), or at any location manually indicated by the user (e.g., by touching the display screen).
[0051] In some embodiments, there is a link-task association module 434. For example, the user selects which computer-aided task is executed when the link is selected either before or after the link location is determined. One example of such a module may be the Internet based service "If This Then That". Examples of computer aided tasks (or computer instructions) may include, but are not limited to: for a link associated with a front door, an instruction to enable a home security system; for a link associated with a window, an instruction to toggle powered blinds installed on the window; toggling lights; and/or the like.
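One plausible, purely illustrative way to hold such link-task associations is a small registry keyed by link identifier; the class and task names below are assumptions, not taken from the disclosure.

from typing import Callable, Dict

class LinkTaskRegistry:
    """Associates an invisible link ID with the computer-aided task it invokes."""

    def __init__(self) -> None:
        self._tasks: Dict[str, Callable[[], None]] = {}

    def associate(self, link_id: str, task: Callable[[], None]) -> None:
        """Bind a task to a link, before or after the link location is determined."""
        self._tasks[link_id] = task

    def invoke(self, link_id: str) -> None:
        """Run the task bound to the given link, if any."""
        task = self._tasks.get(link_id)
        if task is not None:
            task()

# Example usage with placeholder tasks:
registry = LinkTaskRegistry()
registry.associate("front_door_link", lambda: print("Enabling home security system"))
registry.associate("window_link", lambda: print("Toggling powered blinds"))
registry.invoke("window_link")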
[0052] In some embodiments, there is a link modification module 436. In some embodiments, such a module may comprise a UI and logic allowing a user to modify the shape, size, location, orientation, task, or other attributes of existing or to-be-created Invisible Links.
[0053] In some embodiments, there is a link placement module 432. As discussed below in relation to FIG. 6, the logic of the link placement module 432 is set forth in an example embodiment for determining and storing an Invisible Link Location, including how a system can respond when it is unable to match a camera image to a space map.
[0054] In some embodiments, there is a space map datastore module 440. In some embodiments, such a module may record multiple XYZ points in a shared frame of reference indicating the relative position of surface vertices of objects, such as walls, doors, windows, and/or
the like. In some embodiments, the points may be stored using 3D modeling, or the like. FIG. 5 illustrates one embodiment of space map wall vertices, shown as a wireframe viewed from the front and above.
[0055] In some embodiments, there is a device and beacon locations module 445. In some embodiments, stationary location sensors and beacons can be employed as part of and/or representing link surfaces, especially when they are immobile and/or disclose their shape dimensions and orientation to the system. One example of a beacon may be Apple's iBeacon, or the like.
[0056] In some embodiments, there is an invisible link locations datastore module 450. In some embodiments, such a module uses standard data storage methods to maintain correlations between a Link ID and its location. This information may be kept separate, or integrated into more complex data structures, such as enumerating the computer-assisted task associated with the link.
[0057] From stored links, link invocation 455 may be performed. In some embodiments, there is a live user body monitoring module 460. In some embodiments, such a module may comprise wearable devices or environmental sensor nets capable of quickly and accurately monitoring user pointing gestures. For example, the Myo, Bird, Microsoft Band, etc.
[0058] In some embodiments, there is a link selection module 470. In some embodiments, such a module may comprise logic differentiating between pointing gestures and other body motions. In some embodiments, the logic may be the same as or similar to that disclosed in U.S. Provisional Patent Application No. 62/351,534, filed June 17, 2016, entitled METHOD AND SYSTEM FOR SELECTING IOT DEVICES USING SEQUENTIAL POINT AND NUDGE GESTURES, and PCT/US2017/036891 filed Jun 9, 2017 entitled METHOD AND SYSTEM FOR SELECTING IOT DEVICES USING SEQUENTIAL POINT AND NUDGE GESTURES, each of which is hereby incorporated by reference in its entirety.
[0059] In some embodiments, there is a link task invocation module 480. In some embodiments, such a module may comprise logic initiating system signals that cause a computer or the system to execute a task associated with an Invisible Link. It may be desirable in some embodiments to combine this module with the link selection module, such as for common and non-hazardous tasks like turning a light on and off. In other circumstances, such as activating or deactivating a security or safety-related system, having multiple distinct user intimations (e.g., intimations performed serially or simultaneously using different body parts) is desirable, as with separate modules and/or process steps. Also, instead of or in addition to this module and/or step, intimations corresponding to further task instructions can follow Link Selection, which may be useful for devices such as lights with controllable intensity and hue. Some non-limiting
examples of the link task invocation module or step include, but are not limited to: Myo, Bird, Provisional Patent Application No. 62/351,534, and the like.
[0060] Generally, link invocation 455 may comprise invoking a computer instruction based on a user gesturing towards an invisible button located on a physical surface. This may comprise the live user body monitoring 460 or a body gesture sensor detecting a user gesture towards a region on a physical surface. The appropriate link may then be selected 470, and an associated link task or computer instruction invoked 480. As discussed above, the task or computer instruction may have been associated with an invisible button or invisible link at a physical region during link creation 401, such as by matching an image of a physical surface within a physical space captured by a device of the user at a physical location relative to the physical surface to a 3-D model of the physical space. This may then have caused information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
[0061] In relation to FIGS. 6-7, the logic of the link placement module is set forth in an example embodiment for determining and storing an Invisible Link Location, including how a system can respond when it is unable to match a camera image to a space map. In one embodiment, as illustrated in FIG. 6, at a minimum, a single camera image is compared to data representing the user's vicinity to determine the desired link location. Link placement 605 may be performed by: analyzing an image 610; analyzing the user's space 615; and matching the image and the user's space 620. If a match is found, the Link Location is stored 635. Otherwise, link placement 605 may try to match with additional image(s), e.g., in a 2nd Matching Attempt 625, a 3rd Matching Attempt 630, etc.
[0062] A more detailed embodiment of the link placement module described in relation to FIG. 6 is set forth in relation to FIG. 7. In one embodiment, analyzing the image 610 may comprise one or more subprocesses. For example, analyzing the image may comprise the user selecting an image 732, detecting image edges within the user selected image 734, and determining vertices 736. Image edges may be detected by any of various well-known methods. Vertices within an image may be determined by any of various known techniques, including but not limited to: if a detected edge deviates by more than 15 degrees, generating a vertex on the edge at the center of the deviation.
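As an illustration of the 15-degree rule mentioned above, the Python sketch below walks along a detected edge (represented as a polyline of image points) and emits a vertex wherever the edge direction changes by more than the threshold. This is one assumed implementation, not necessarily the one intended by the disclosure.

import math
from typing import List, Tuple

Point2D = Tuple[float, float]

def detect_vertices(edge_points: List[Point2D], threshold_deg: float = 15.0) -> List[Point2D]:
    """Generate a vertex wherever the edge deviates by more than `threshold_deg`."""
    vertices: List[Point2D] = []
    for prev_pt, mid_pt, next_pt in zip(edge_points, edge_points[1:], edge_points[2:]):
        heading_in = math.atan2(mid_pt[1] - prev_pt[1], mid_pt[0] - prev_pt[0])
        heading_out = math.atan2(next_pt[1] - mid_pt[1], next_pt[0] - mid_pt[0])
        deviation = abs(math.degrees(heading_out - heading_in))
        deviation = min(deviation, 360.0 - deviation)  # wrap the angle into [0, 180]
        if deviation > threshold_deg:
            vertices.append(mid_pt)                    # vertex at the center of the deviation
    return vertices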
[0063] In one embodiment, analyzing the user's space 615 may comprise one or more subprocesses. For example, analyzing the space 615 may comprise obtaining a camera location
705 and/or a camera orientation 707. The camera location module 705 may provide the system with an XYZ point that is translatable to the frame of reference used by the Space Map 710. The purpose of the camera location is to identify the approximate area of the space map in which the user is placing a new link, and thereby minimize the area of the space map retrieved and analyzed. The camera orientation module 708 may increase the speed and accuracy of analyzing the space by further restricting (e.g., in addition to restrictions based solely on location) the area retrieved and searched in the space map 710 (filtering space-defining data). The appropriate space map 710 or portion thereof may then be obtained.
[0064] From the space map 710 or portion thereof, the subprocesses may also comprise establishing the user's POV relative to a map 715, rendering the map from the user's perspective 720, and in some embodiments may comprise determining vertices 725.
[0065] The system may use the obtained and/or received information to determine the user's POV 715 relative to the space map 710. An embodiment of a user's position for establishing their POV relative to a space map is shown in FIGS. 8A and 8B, from two different angles. With the user's position and/or POV (from their position) established, the system may render the map from the user's POV 720, as shown in FIG. 9. In some embodiments, the vertices may be indicated to the user, while in other embodiments the vertices are not indicated to the user.
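Rendering the space map from the user's POV can be approximated by projecting the map's 3-D vertices through a pinhole camera model placed at the user's position and orientation. The sketch below ignores lens distortion, and the parameter names are assumptions.

import numpy as np

def render_vertices_from_pov(vertices_xyz: np.ndarray,
                             camera_position: np.ndarray,
                             world_to_camera: np.ndarray,
                             focal_length: float = 1.0) -> np.ndarray:
    """Project space-map vertices (N x 3, world frame) into 2-D view coordinates
    for a camera at `camera_position` with rotation matrix `world_to_camera` (3 x 3)."""
    # Transform world-frame vertices into the camera frame.
    camera_frame = (world_to_camera @ (vertices_xyz - camera_position).T).T
    # Keep only vertices in front of the camera (positive depth along +Z).
    camera_frame = camera_frame[camera_frame[:, 2] > 1e-6]
    # Perspective divide to obtain normalized image-plane coordinates.
    return focal_length * camera_frame[:, :2] / camera_frame[:, 2:3]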
[0066] Determining the vertices 725 may optionally be performed if the space map 710 being referenced uses a coordinate system that does not enumerate vertices, such as one based on photos rather than vectors. If vertices need to be determined, methods for determining them, such as those used during analysis of the image as discussed above, may be used.
[0067] From the analyzed space and analyzed image, the system may match the image and space 620. In one embodiment, matching the image and space 620 may comprise one or more subprocesses. For example, the matching may comprise determining the highest correlation between vertices in the image and the map 742, and evaluating whether the match is sufficient 744 (e.g., exceeds a threshold). In one embodiment, determining the correlation 742 may comprise: testing multiple alignments between image and map data; calculating a quantity and proximity rating for vertex alignment, for example "SUM(1/distance between vertices)"; and selecting the alignment with the highest rating. In some embodiments, a determined Correlation Rating Value between the image and the space is compared to a minimum threshold (e.g., is the match sufficient 744), where the threshold is intended to eliminate all but close matches (e.g., at least three vertices must be within ~2% angular deviation from the user's POV, or five vertices within ~5% deviation, etc.).
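The rating described above could be sketched as follows, here testing only candidate 2-D translations for brevity (a fuller implementation would also search over rotation and scale); the function names and thresholds are illustrative assumptions.

import math
from typing import Sequence, Tuple

Point2D = Tuple[float, float]

def alignment_rating(image_vertices: Sequence[Point2D],
                     map_vertices: Sequence[Point2D],
                     offset: Point2D) -> float:
    """Quantity-and-proximity rating: SUM(1 / distance between paired vertices)."""
    rating = 0.0
    for ix, iy in image_vertices:
        shifted = (ix + offset[0], iy + offset[1])
        nearest = min(math.dist(shifted, m) for m in map_vertices)
        rating += 1.0 / max(nearest, 1e-6)   # guard against division by zero on exact hits
    return rating

def best_alignment(image_vertices: Sequence[Point2D],
                   map_vertices: Sequence[Point2D],
                   candidate_offsets: Sequence[Point2D],
                   min_rating: float) -> Tuple[Point2D, float, bool]:
    """Test each candidate alignment and report whether the best one is sufficient."""
    scored = [(alignment_rating(image_vertices, map_vertices, off), off)
              for off in candidate_offsets]
    best_score, best_offset = max(scored)
    return best_offset, best_score, best_score >= min_rating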
[0068] In some embodiments, testing an alignment may comprise rendering a first test alignment of the 3-D model or space map, and evaluating the correlation of the test alignment to the image. If a match is insufficient, the system may subsequently test a second (or further) test alignment of the 3-D model to the image.
[0069] In some embodiments, if a match is sufficient the link location may be stored 635. In some embodiments, if the match is not sufficient a second attempt 746 may be performed. After a successful second attempt 748, a link location may be stored 635.
[0070] In some embodiments, storing the link location 635 may comprise one or more subprocesses. For example, storing the link location 635 may comprise: calculating map coordinates for the link target reference point(s) 752. As illustrated in FIG. 10, in an exemplary embodiment, three vertices may be used to establish a link location relative to the map. For example, a link target reference point B2 (e.g., the upper right corner of a door frame) may be related to three reference point offsets from the space map vertices, for example A1, A4, and C3. The calculated map coordinates may then be used to store the link location in an invisible link locations database 754.
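The reference-point-offset idea of FIG. 10 can be sketched as storing the link target as an offset from each of a few named space-map vertices, so the target can later be reconstructed in map coordinates. The averaging used for reconstruction below is an assumption, not necessarily the disclosed approach.

from typing import Dict, Tuple

Point3D = Tuple[float, float, float]

def offsets_from_vertices(target: Point3D,
                          reference_vertices: Dict[str, Point3D]) -> Dict[str, Point3D]:
    """Record the link target as an offset from each chosen space-map vertex
    (e.g., vertices named 'A1', 'A4' and 'C3')."""
    return {name: tuple(t - v for t, v in zip(target, vertex))
            for name, vertex in reference_vertices.items()}

def reconstruct_target(offsets: Dict[str, Point3D],
                       reference_vertices: Dict[str, Point3D]) -> Point3D:
    """Recover the target by averaging vertex + offset over all reference vertices."""
    estimates = [tuple(v + o for v, o in zip(reference_vertices[name], off))
                 for name, off in offsets.items()]
    return tuple(sum(coords) / len(estimates) for coords in zip(*estimates))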
[0071] FIG. 11 illustrates a block diagram of one embodiment of a second matching attempt. In some cases, the first matching attempt, as illustrated in relation to FIG. 7, does not successfully determine a match. In such cases, a second matching attempt 746 may occur.
[0072] In one embodiment, after a failed first matching attempt 1105, a second attempt 625 is initiated. The user is prompted to select the target again 1110, ideally but not necessarily from further away, with the desired link location remaining in view of the camera.
[0073] From the newly captured image, the process may analyze the second image 1115, as discussed above in relation to FIG. 7. The process may then analyze the second user's space 1120, as discussed above in relation to FIG. 7. The process may then attempt to match the analyzed second image with the second user's space 1125, as discussed above in relation to FIG. 7. From the attempted matching, the process may, as discussed above in relation to FIG. 7, determine that the correlation of the match is sufficient 1130, or not.
[0074] If the correlation is sufficient, the process may calculate map coordinates for the vertices in the second image 1135, as the map coordinates for the vertices were calculated for the first image in relation to FIG. 7. In some embodiments, the map coordinates for the vertices of the second image may be determined prior to the correlation analysis.
[0075] With sufficient correlation and calculated map coordinates for the vertices in the second image, the process may attempt to match 1140 the vertices of the second image with the vertices of the first image. The same logic as the first matching attempt, discussed previously in relation to FIG. 7, may be used. However, the second matching attempt uses the new image, and with the space map data replaced by the original image data, thereby creating a chain of images each at least partially matched to the previous image. The process may then return 1145 to the first attempt.
[0076] If the correlation is not sufficient, a third matching attempt 1132 may be made, as discussed below in relation to FIG. 12.
[0077] Although not shown in FIG. 11, in some embodiments, there may be a logic path for when a subsequent image matches a former image, but the current one does not. For example, if the second image does not sufficiently match the first image, but the third image does match the first image. In this case the process would use the third image in place of the second 1137.
[0078] FIG. 12 illustrates a block diagram of one embodiment of a third matching attempt 630. In some cases, the second matching attempt, as illustrated in relation to FIG. 11, does not successfully determine a match. In such cases, a third matching attempt 1205 may occur.
[0079] Generally, the process for a third matching attempt may be comparable to a second matching attempt, but with different images. The user is prompted to select the target again 1210, ideally but not necessarily from further away, with the desired link location remaining in view of the camera. From the newly captured image, the process may analyze the third image 1215, as discussed above in relation to FIG. 7. The process may then analyze the third user's space 1220, as discussed above in relation to FIG. 7. The process may then attempt to match the analyzed third image with the third user's space 1225, as discussed above in relation to FIG. 7. From the attempted matching, the process may, as discussed above in relation to FIG. 7, determine that the correlation of the match is sufficient 1230, or not.
[0080] If the correlation is sufficient, the process may calculate map coordinates for the vertices in the third image 1235, as the map coordinates for the vertices were calculated for the first image in relation to FIG. 7. In some embodiments, the map coordinates for the vertices of the third image may be determined prior to the correlation analysis.
[0081] With sufficient correlation and calculated map coordinates for the vertices in the third image, the process may attempt to match the vertices of the third image with the vertices of the second image 1240. The same logic as the first matching attempt, discussed previously in relation to FIG. 7, may be used. However, the third matching attempt uses the new image, and with the space map data replaced by the second image data, thereby creating a chain of images each at least partially matched to the previous image. The process may then return 1245 to the second attempt.
[0082] In some embodiments, if the correlation of the third image with the third space is not sufficient, the process may terminate. In some embodiments (not shown), a fourth attempt may be attempted, etc.
[0083] In some embodiments, any arbitrary number of images may be used to produce a single link location. For example, the matching algorithm may be implemented as a recursive function,
with the ability to skip processing images which are determined not to form a continuous chain of at least partially matching images.
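A hedged sketch of this chained matching is given below, assuming a `match(reference, image)` primitive that returns transferred coordinates when the correlation is sufficient and None otherwise; the control flow is one reading of FIGS. 11 and 12, with later captures tried as anchors when the chain back to the original image breaks.

from typing import Callable, Optional, Sequence

def locate_link_via_chain(images: Sequence[object],
                          space_map: object,
                          match: Callable[[object, object], Optional[dict]]) -> Optional[dict]:
    """images[0] is the capture containing the desired link target; later images
    are fallback captures (e.g., taken from further away). Find a capture that
    matches the space map, then transfer coordinates back towards images[0]
    through pairwise image-to-image matches."""
    for anchor in range(len(images)):
        coords = match(space_map, images[anchor])
        if coords is None:
            continue                             # this capture cannot be matched to the map
        for j in range(anchor - 1, -1, -1):      # walk back towards the original capture
            coords = match(images[j + 1], images[j])
            if coords is None:
                break                            # chain broken; try the next anchor
        else:
            return coords                        # reached images[0] with coordinates intact
    return None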
[0084] In some embodiments, the location of a link location target (such as in FIGS. 2A and 2B) can be determined by the user touching a location on a touch-sensitive screen displaying a camera image.
[0085] In some embodiments, the size and shape of invisible links may be implemented in various ways. In some embodiments, the size of invisible links may be arbitrary, and may be set before or after placement by the user. In some embodiments, invisible links may also automatically decrease in size to ensure they do not overlap with other Selection Regions from the user's current POV. In some embodiments, invisible links may change size depending on the user's proximity to them (e.g., when the user is further from an invisible link, the link may expand to facilitate their selection-by-pointing; or conversely further away links may shrink).
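As a small numeric illustration of proximity-dependent sizing, a link's effective selection radius could grow with the user's distance so that pointing from across a room stays forgiving, capped so neighboring links do not overlap; the linear rule and constants below are assumptions.

def effective_link_radius(base_radius_m: float,
                          user_distance_m: float,
                          reference_distance_m: float = 2.0,
                          max_radius_m: float = 1.5) -> float:
    """Expand the selection radius linearly beyond a reference distance (illustrative rule)."""
    if user_distance_m <= reference_distance_m:
        return base_radius_m
    return min(base_radius_m * user_distance_m / reference_distance_m, max_radius_m)

# Example: a 0.3 m link viewed from 6 m away is treated as 0.9 m wide for selection.
print(effective_link_radius(0.3, 6.0))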
[0086] In some embodiments, the shape of invisible links is arbitrary. For example, and without limitation, an invisible link controlling an overhead hallway light may correspond to the shape of the hallway ceiling, which may be created by the user combining multiple images, such as by having a camera record a video as the user points the camera at the ceiling and walks along the hallway.
[0087] In some embodiments, having invisible link placement based on physical surfaces may allow such surfaces to behave as a 2D surface in a 3D real-world environment. That is, they can maintain a single orientation based on the surface chosen in the camera image, and change apparent shape appropriate to a viewer's change in relative position to the surface (e.g., change in viewer's POV). FIGS. 13A and 13B represent two example links, as previously described, changing apparent shape as the position from which they are viewed changes (the positions shown are not ones expected of users, and are selected for ease of explanation). FIG. 13A illustrates an exemplary POV above a ceiling, in front of a wall. As shown, the invisible links 1350 and 1355 are seen as being round from this POV. FIG. 13B illustrates an exemplary POV above a ceiling, but to the side of a wall. As shown, for the same links 1350 and 1355 as in FIG. 13A, from the POV in FIG. 13B the invisible links 1353 and 1357 are no longer round, but rather ovoid (or the like). Other shapes of invisible link may similarly appear as different shapes from different POVs (e.g., a square invisible link in one perspective appearing as a rhombus from another perspective, etc.). It should be noted that these illustrated POVs may not be an actual user's POV, but rather are meant to illustrate that a link may act as a 2D surface in a 3D real-world environment, where a link's apparent shape may change when interacted with from different positions.
[0088] In one embodiment, an invisible link may be related to task lighting. For example, a user may capture a single image of the center of a top surface of a kitchen counter. The user may place a link in that location on the counter, such that when the link is invoked, it causes the kitchen's illumination system to adjust the lighting for that particular area (e.g., over the top surface of the counter, such as by brightening a light directly over the counter).
[0089] In one embodiment, an invisible link may be related to ambient lighting. For example, a user may capture multiple images covering all of a hallway's ceiling. The user may use these images to create a single large link that may be selected when in the hallway, or near a doorway leading into the hallway. The link may be such that when invoked, the system may adjust the lighting to illuminate the entire hallway.
[0090] In an embodiment, there is a method comprising: creating and recording real-world
Invisible Link locations using (a) at least one user selected image, (b) the location of a camera producing the at least one image, and (c) a data model of a space in the user's immediate vicinity.
[0091] In an embodiment, there is a method comprising: analyzing an image obtained from a user's digital device; analyzing a space associated with a location of the user; determining a correlation value between the analyzed image and the analyzed space; and in response to a determination that the determined correlation value exceeds a correlation threshold: storing, at a memory location, information associating an invisible link with a physical location based at least in part on the image obtained from the user's digital device. The method may include wherein analyzing the image further comprises: obtaining data indicating a target location in the obtained image for the invisible link; detecting edges in the obtained image; and determining vertices in the obtained image. The method may include wherein determining vertices in the obtained image further comprises, where a detected edge deviates by greater than a deviation threshold, generating a vertex on the detected edge at a center of the deviation. The method may include wherein the deviation threshold is 15 degrees. The method may include wherein analyzing the space associated with the location of the user further comprises: receiving, from the user's digital device, at least a camera orientation and a camera location; obtaining, based at least in part on the camera orientation and camera location, a space map of the user's vicinity; determining a point-of-view (POV) of the user relative to the obtained space map; rendering an intermediate map, based at least in part on the determined POV and the obtained space map; and obtaining vertices of the rendered intermediate map. The method may include wherein obtaining the vertices of the rendered intermediate map further comprises: detecting edges in the rendered intermediate map; and where a detected edge deviates by greater than a deviation threshold, generating a vertex on the detected edge at a center of the deviation. The method may include wherein the deviation threshold is 15
degrees. The method may include wherein obtaining vertices of the rendered intermediate map comprises extracting data indicative of vertices from the obtained space map of the user's vicinity. The method may include wherein determining the correlation value between the analyzed image and analyzed space further comprises: determining a correlation value for at least a first test alignment between the determined vertices in the obtained image and the obtained vertices of the rendered intermediate map; and selecting a best alignment based on a highest correlation value of the at least first test alignment, wherein the determined correlation value is the correlation value of the selected best alignment. The method may include wherein determining a correlation value for the first test alignment comprises calculating a quantity and proximity value for the alignment of related vertices between the obtained image and the rendered intermediate map. The method may include wherein the correlation threshold comprises at least three vertices being within a 2% angular deviation, from the user's POV. The method may include wherein the correlation threshold comprises at least five vertices being within a 5% angular deviation, from the user's POV. The method may include wherein storing the invisible link further comprises: calculating a set of map coordinates relative to the selected best alignment based at least in part on the target location in the obtained image for the invisible link; and storing a data element associating together at least i) the set of map coordinates for the invisible link, ii) a user-selected computer-aided task associated with the invisible link, and iii) an invisible link ID.
[0092] In an embodiment, there is a method comprising: aligning a camera of a user's digital device such that a superimposed target indicates a location of an invisible link to be created by the user; capturing an image with the camera of the location of the invisible link to be created; obtaining data representing the user's location; associating an executable computer-aided task with the invisible link to be created; and in response to a determination that the image matches a retrieved 3D map of a space in the user's vicinity: translating the location of the invisible link to be created into coordinates within the retrieved 3D map of the space in the user's vicinity; and recording the associated executable computer-aided task and the translated coordinates of the invisible link for later reference. The method may include wherein the captured image includes at least one physical surface. The method may include wherein determining that the image matches the retrieved 3D map is based at least in part on a value determined from matching edge-detected features of the image to features of the retrieved 3D map.
[0093] In an embodiment, there is a method comprising: receiving an invisible link creation message at a Link Placement module from a user's digital device, wherein the invisible link creation message comprises at least the user's location; obtaining a 3D space map of the user's vicinity; receiving a new link location message at the Link Placement module from the user's
digital device, wherein the new link location message comprises at least a first image; recording an invisible link comprising at least a computer instruction and a link location. The method may further comprise obtaining at least one device or beacon location in the user's vicinity. The method may include wherein the new link location message further comprises an orientation of a camera of the user's digital device associated with the first image. The method may include wherein the new link location message further comprises at least a second image. The method may include wherein the new link location message further comprises a first orientation of a camera of the user's digital device associated with the first image, and a second orientation associated with the second image. The method may include wherein the invisible link further comprises a link shape. The method may include wherein recording the invisible link comprises communicating the invisible link to an Invisible Link Locations module.
[0094] In an embodiment, there is a method comprising: obtaining an image of a surface region and a user's location from a user's device; obtaining a space map related to the user's location from a space maps database; matching the obtained image with the obtained space map; obtaining a desired invisible link location from the user's device relative to the obtained image; and mapping the invisible link location relative to the obtained image to a space-map location of the obtained space map. The method may further comprise storing the mapped invisible link location at a location database, wherein the mapped invisible link location is stored with an identifier. The method may further comprise associating an executable computer-aided task with the invisible link.
[0095] In an embodiment, there is a method comprising: obtaining an image of a physical surface within a physical space; matching the image of the physical surface within the physical space to a 3-D model of the physical space; and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface. The method may include wherein the image is obtained from a user's mobile device. The method may include wherein the image is captured by the user's mobile device at a physical location relative to the physical surface. The method may include wherein the stored mapping information includes a unique identifier for later retrieval responsive to data captured by a body gesture sensor.
[0096] In an embodiment, there is a method comprising: obtaining sensor data based on a user's point gesture towards a location; determining, based at least in part on the obtained sensor data, the location pointed to by the user; obtaining from an invisible link location database, based at least in part on the determined location pointed to by the user, at least a previously stored instruction associated with the location pointed to by the user; and executing the previously stored
instruction associated with the location pointed to by the user. The method may include wherein the location pointed to by the user is determined in a coordinate system of a space map retrieved from a space maps database.
[0097] In an embodiment, there is a method of invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface, comprising: invoking a computer instruction by detecting with a body gesture sensor a user gesture towards a region on a physical surface, said instruction and region having been established earlier by: matching an image of a physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space, and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface. The method may include wherein mapping of the location within the image to the surface region is based on matching edge-detected features of the image to features of the 3-D model. The method may include wherein the user's mobile device orientation is received to reduce the search space between the image and the 3-D model. The method may include wherein the user indicates placement of an invisible button by touching the device display at the desired location within the image. The method may include wherein the stored mapping information includes a unique identifier for later retrieval responsive to data captured by the body gesture sensor.
[0098] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: creating and recording real-world Invisible Link locations using at least one user selected image, the location of a camera producing the at least one image, and a data model of a space in the user's immediate vicinity.
[0099] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: analyzing an image obtained from a user's digital device; analyzing a space associated with a location of the user; determining a correlation value between the analyzed image and the analyzed space; and in response to a determination that the determined correlation value exceeds a correlation threshold: storing, at a memory location, information associating an invisible link with a physical location based at least in part on the image obtained from the user's digital device.
[0100] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: aligning a camera of a user's digital device such that a superimposed target
indicates a location of an invisible link to be created by the user; capturing an image with the camera of the location of the invisible link to be created; obtaining data representing the user's location; associating an executable computer-aided task with the invisible link to be created; and in response to a determination that the image matches a retrieved 3D map of a space in the user's vicinity: translating the location of the invisible link to be created into coordinates within the retrieved 3D map of the space in the user's vicinity; and recording the associated executable computer-aided task and the translated coordinates of the invisible link for later reference.
[0101] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: receiving an invisible link creation message at a Link Placement module from a user's digital device, wherein the invisible link creation message comprises at least the user's location; obtaining a 3D space map of the user's vicinity; receiving a new link location message at the Link Placement module from the user's digital device, wherein the new link location message comprises at least a first image; and recording an invisible link comprising at least a computer instruction and a link location.
[0102] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining an image of a surface region and a user's location from a user's device; obtaining a space map related to the user's location from a space maps database; matching the obtained image with the obtained space map; obtaining a desired invisible link location from the user's device relative to the obtained image; and mapping the invisible link location relative to the obtained image to a space-map location of the obtained space map.
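This embodiment can be read as a linear pipeline; the sketch below strings its steps together, with matcher and mapper standing in for the image-to-map matching and coordinate-mapping steps (routines like those sketched above could serve). Every interface here is an assumption for illustration only.

```python
def place_invisible_link(device, space_maps_db, link_db, matcher, mapper):
    """Illustrative composition of the placement flow; every interface used
    here (device, space_maps_db, link_db, matcher, mapper) is an assumption."""
    # Obtain an image of the surface region and the user's location from the device.
    image, user_location = device.capture_image_and_location()
    # Obtain a space map related to the user's location from the space maps database.
    space_map = space_maps_db.get_map_near(user_location)
    # Match the obtained image with the obtained space map (e.g., returns a pose).
    alignment = matcher(image, space_map)
    if alignment is None:
        raise ValueError("image could not be matched to the space map")
    # Obtain the desired invisible link location relative to the image (e.g., a tap).
    desired_pixel = device.get_desired_link_pixel(image)
    # Map the image-relative location to a location within the space map.
    map_location = mapper(desired_pixel, alignment, space_map)
    return link_db.record(location=map_location)
```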
[0103] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining an image of a physical surface within a physical space; matching the image of the physical surface within the physical space to a 3-D model of the physical space; and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
[0104] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions including: obtaining sensor data based on a user's point gesture towards a location; determining, based at least in part on the obtained sensor data, the location pointed to by the user; obtaining from an invisible link location database, based at least in part on the determined location pointed to by the user, at least a previously stored instruction associated with the location pointed
to by the user; and executing the previously stored instruction associated with the location pointed to by the user.
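The invocation side of this embodiment might, purely as an illustrative sketch, look like the following, where gesture_to_location turns raw body-gesture sensor data into a pointed-at 3-D location (for example by extending the forearm vector until it meets a surface), link_db.nearest looks up the closest stored invisible link, and the 0.25 m tolerance is an arbitrary assumed value.

```python
def handle_point_gesture(sensor_sample, gesture_to_location, link_db, executor,
                         tolerance_m: float = 0.25):
    """Illustrative invocation flow: sensor sample -> pointed-at location ->
    nearest stored link -> execute its instruction.  All interfaces are assumed."""
    pointed_at = gesture_to_location(sensor_sample)   # e.g., extend the arm vector to a surface
    if pointed_at is None:
        return False
    link = link_db.nearest(pointed_at)                # closest previously stored invisible link
    if link is None or link.distance_to(pointed_at) > tolerance_m:
        return False                                  # nothing close enough to the pointed spot
    executor(link.instruction)                        # run the previously stored instruction
    return True
```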
[0105] In an embodiment, there is a system comprising a processor and a non-transitory storage medium storing instructions operative, when executed on the processor, to perform functions for invoking a computer instruction by a user gesturing towards an invisible button located on a physical surface, including: invoking a computer instruction by detecting with a body gesture sensor a user gesture towards a region on a physical surface, said instruction and region having been established earlier by: matching an image of a physical surface within a physical space, having been captured by a mobile device of the user at a physical location relative to the physical surface, to a 3-D model of the physical space, and causing information to be stored that maps a location within the image, indicated by or presented to a user, with the corresponding physical surface.
[0106] Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
[0107] FIG. 14 is a system diagram of an exemplary WTRU 102, which may be employed as a user's digital device in embodiments described herein. As shown in FIG. 14, the WTRU 102 may include a processor 118, a communication interface 119 including a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, a non-removable memory 130, a removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and sensors 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
[0108] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuit (ASIC) circuits, Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 14 depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.
[0109] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
[0110] In addition, although the transmit/receive element 122 is depicted in FIG. 14 as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.
[0111] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
[0112] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
[0113] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. As examples, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-
zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
[0114] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
[0115] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
[0116] FIG. 15 depicts an exemplary network entity 190 that may be used in embodiments of the present disclosure. As depicted in FIG. 15, network entity 190 includes a communication interface 192, a processor 194, and non-transitory data storage 196, all of which are communicatively linked by a bus, network, or other communication path 198.
[0117] Communication interface 192 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 192 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 192 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 192 may be equipped at a scale and with a configuration appropriate for acting on the network side, as opposed to the client side, of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 192 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
[0118] Processor 194 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
[0119] Data storage 196 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non- transitory data storage deemed suitable by those of skill in the relevant art could be used. As depicted in FIG. 15, data storage 196 contains program instructions 197 executable by processor 194 for carrying out various combinations of the various network-entity functions described herein.
[0120] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
Claims
1. A method of invoking a computer instruction by a user gesture towards a physical
surface, comprising:
generating an invisible button by:
matching an image of a physical surface within a physical space to a 3-D model of the physical space, the image captured by a mobile device of a user at a physical location relative to the physical surface; and
responsive to selection by the user of a location within the image, storing information associating a computer instruction with a region of the 3-D model of the physical space corresponding to the user selected location within the image;
detecting, with a body gesture sensor, a user gesture towards the region on the physical surface; and
invoking the computer instruction associated with the region on the physical surface.
2. The method of claim 1, wherein invoking the computer instruction comprises:
obtaining from a database, based at least in part on the detected user gesture, the computer instruction previously associated with the determined region; and
executing the computer instruction associated with the determined region.
3. The method of any of claims 1-2, wherein the region of the 3-D model of the physical space corresponding to the user selected location within the image is determined based on matching edge-detected features of the image to features of the 3-D model.
4. The method of claim 3, wherein an orientation of the user's mobile device is received and utilized to align the 3-D model with the image during matching.
5. The method of any of claims 1-4, wherein the 3-D model of the physical space is
retrieved from a database based at least in part on the physical location.
6. The method of any of claims 1-5, wherein selection by the user of the location within the image comprises the user touching a desired location within the image on a touch-sensitive screen of the mobile device displaying the image.
7. The method of any of claims 1-6, wherein storing information further comprises storing a unique identifier for the computer instruction and corresponding region of the 3-D model.
8. The method of any of claims 1-7, wherein matching the image to the 3-D model comprises:
determining vertices in the image;
obtaining, based at least in part on the physical location, the 3-D model of the physical space;
determining a point-of-view (POV) of the user relative to the obtained 3-D model of the physical space;
rendering a first test alignment of the 3-D model, based at least in part on the determined POV and the obtained 3-D model;
obtaining vertices of the rendered first test alignment;
determining a correlation value for the match between the image and the rendered first test alignment of the 3-D model by calculating a quantity and proximity value for the alignment of related vertices between the image and the rendered first test alignment; and
responsive to a determination that the correlation value exceeds a correlation threshold, selecting the rendered first test alignment for subsequent use in determining the region of the 3-D model of the physical space corresponding to the user selected location within the image.
9. The method of claim 8, wherein obtaining the vertices of the rendered first test alignment further comprises:
detecting edges in the rendered first test alignment; and
where a detected edge deviates by greater than a deviation threshold, generating a vertex on the detected edge at a center of the deviation.
10. The method of claim 8, wherein the correlation threshold comprises at least three vertices being within a 2% angular deviation, from the user's POV.
11. The method of claim 8, wherein the correlation threshold comprises at least five vertices being within a 5% angular deviation, from the user's POV.
12. The method of claim 8, wherein the region of the 3-D model of the physical space
corresponding to the user selected location within the image is determined based at least in part on the rendered first test alignment.
13. The method of any of claims 1-12, wherein invoking the computer instruction comprises toggling a connected light on or off.
14. The method of any of claims 1-12, wherein invoking the computer instruction comprises opening or closing powered window blinds.
15. A system comprising a processor and a non-transitory storage medium storing
instructions operative, when executed on the processor, to perform functions including: generating an invisible button by:
matching an image of a physical surface within a physical space to a 3-D model of the physical space, the image captured by a mobile device of a user at a physical location relative to the physical surface; and
responsive to selection by the user of a location within the image, storing information associating a computer instruction with a region of the 3-D model of the physical space corresponding to the user selected location within the image;
detecting, with a body gesture sensor, a user gesture towards the region on the physical surface; and
invoking the computer instruction associated with the region on the physical surface.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662368809P | 2016-07-29 | 2016-07-29 | |
US62/368,809 | 2016-07-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018023042A1 (en) | 2018-02-01 |
Family
ID=59579931
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2017/044455 WO2018023042A1 (en) | 2016-07-29 | 2017-07-28 | Method and system for creating invisible real-world links to computer-aided tasks with camera |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018023042A1 (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016009016A1 (en) * | 2014-07-17 | 2016-01-21 | Koninklijke Philips N.V. | Method of obtaining gesture zone definition data for a control system based on user input |
Non-Patent Citations (2)
Title |
---|
A. COCKBURN ET AL.: "Air pointing: Design and evaluation of spatial target acquisition with and without visual feedback", INT. J. HUMAN-COMPUTER STUDIES, vol. 69, 2011, pages 401-414, XP028202133, DOI: 10.1016/j.ijhcs.2011.02.005 |
F. CHUN YAT LI ET AL.: "Virtual Shelves: Interactions with orientation aware devices", Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST '09), 2009, pages 125-128 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022027435A1 (en) | 2020-08-06 | 2022-02-10 | Huawei Technologies Co., Ltd. | Activating cross-device interaction with pointing gesture recognition |
EP4185939A4 (en) * | 2020-08-06 | 2023-08-30 | Huawei Technologies Co., Ltd. | Activating cross-device interaction with pointing gesture recognition |
Similar Documents
Publication | Title |
---|---|
US10846938B2 (en) | User device augmented reality based item modeling | |
JP6469706B2 (en) | Modeling structures using depth sensors | |
JP6416290B2 (en) | Tracking data provided by devices for augmented reality | |
US10068373B2 (en) | Electronic device for providing map information | |
KR102479495B1 (en) | Mobile terminal and method for operating thereof | |
EP2972973B1 (en) | Context aware localization, mapping, and tracking | |
US9756261B2 (en) | Method for synthesizing images and electronic device thereof | |
US20150187137A1 (en) | Physical object discovery | |
EP2930631B1 (en) | Method and apparatus for content output | |
CN107646109B (en) | Managing feature data for environment mapping on an electronic device | |
US20190197788A1 (en) | Method and system for synchronizing a plurality of augmented reality devices to a virtual reality device | |
US20190122423A1 (en) | Method and Device for Three-Dimensional Presentation of Surveillance Video | |
KR101680667B1 (en) | Mobile device and method for controlling the mobile device | |
US9628706B2 (en) | Method for capturing and displaying preview image and electronic device thereof | |
KR20170136797A (en) | Method for editing sphere contents and electronic device supporting the same | |
WO2019196871A1 (en) | Modeling method and related device | |
WO2015068447A1 (en) | Information processing device, information processing method, and information processing system | |
KR102401641B1 (en) | Mobile device and method for controlling the mobile device | |
WO2018023042A1 (en) | Method and system for creating invisible real-world links to computer-aided tasks with camera | |
CN117152393A (en) | Augmented reality presentation method, system, device, equipment and medium | |
KR20160011419A (en) | A mobile device, a method for controlling the mobile device, and a control system having the mobile device | |
KR101549027B1 (en) | Mobile device and method for controlling the mobile device | |
CN104750248B (en) | Electronic installation matching method | |
KR101556179B1 (en) | Mobile device and method for controlling the mobile device | |
Makita et al. | Photo-shoot localization of a mobile camera based on registered frame data of virtualized reality models |
Legal Events
Code | Title | Description |
---|---|---|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17751187; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 17751187; Country of ref document: EP; Kind code of ref document: A1 |