EP3953793A1 - Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments

Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments

Info

Publication number
EP3953793A1
Authority
EP
European Patent Office
Prior art keywords
computer
user
generated image
virtual element
menu
Prior art date
Legal status
Pending
Application number
EP19809884.0A
Other languages
German (de)
French (fr)
Inventor
Aditya PREMI
Bilawal HUSSAIN
Kshitij TOMAR
Avinash PRATAP
Current Assignee
Ainak Oy
Original Assignee
Ainak Oy
Priority date
Filing date
Publication date
Application filed by Ainak Oy filed Critical Ainak Oy
Publication of EP3953793A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus

Definitions

  • the invention is generally related to the technical field of visualizing augmented reality (AR) and/or virtual reality (VR) environments and objects to a human user.
  • in particular, the invention is related to advantageous features of visualizing items like menus and dimensions, and to user interface features that facilitate an intuitive user experience.
  • Another objective is that the user interface features should be applicable in AR- and/or VR-based systems that utilize different kinds of display technologies.
  • the user interface features can be implemented with only relatively little intrusiveness in the visual field of the user of an AR- and/or VR-based system.
  • the user interface features allow streamlining routines and using efficiently the time when working in an AR and/or VR environment.
  • a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user.
  • the visualization arrangement is configured to repeatedly obtain an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and respond to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction.
  • the visualization arrangement is configured to respond to a menu-displaying command received through user controls of said visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user.
  • the visualization arrangement is configured to maintain said three-dimensional symbols that constitute said menu at the location at which the menu was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.
  • the visualization arrangement is configured to display, in said menu, only a subset of a larger number of interrelated options available to said user through said menu, and respond to a menu-scrolling command received through said user controls by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options and each time displaying only those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded.
  • the visualization arrangement is configured to display those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded in the form of slices of a displayed three-dimensional part of a pie.
  • the visualization arrangement is configured to respond to said menu-scrolling command by rotating said displayed slices in said computer-generated image around a center point of said displayed three-dimensional part of the pie, so that in each case that slice that was furthest in the direction of rotation before said rotating is faded out of view and a new slice, representing another one of said interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
  • the visualization arrangement is configured to display, in said menu, a number of three-dimensional symbols corresponding to options available to said user through said menu in a rotationally symmetric planar array, the planar form of which is oriented parallel to a plane displayed at or close to the location of said menu in said computer-generated image, and also configured to respond to a menu-scrolling command received through said user controls by rotating said rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to the front in said computer-generated image.
  • a method for displaying a computer-generated image of a three-dimensional environment to a human user comprises repeatedly obtaining an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and responding to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction.
  • the method comprises responding to a menu-displaying command received through user controls of said visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user, and maintaining said three-dimensional symbols that constitute said menu at the location at which the menu was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.
  • the method comprises displaying, in said menu, only a subset of a larger number of interrelated options available to said user through said menu, and responding to a menu-scrolling command received through said user controls by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options and each time displaying only those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded.
  • the method comprises displaying those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded in the form of slices of a displayed three-dimensional part of a pie.
  • the method comprises responding to said menu-scrolling command by rotating said displayed slices in said computer-generated image around a center point of said displayed three-dimensional part of the pie, so that in each case that slice that was furthest in the direction of rotation before said rotating is faded out of view and a new slice, representing another one of said interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
  • the method comprises displaying, in said menu, a number of three-dimensional symbols corresponding to options available to said user through said menu in a rotationally symmetric planar array, the planar form of which is oriented parallel to a plane displayed at or close to the location of said menu in said computer-generated image, and responding to a menu-scrolling command received through said user controls by rotating said rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to the front in said computer-generated image.
  • a computer program product comprising one or more sets of one or more machine-readable instructions that, when executed by one or more processors, cause the implementation of any of the methods described in this text.
  • a computer-readable storage medium having stored thereupon a computer program product of the kind described above.
  • Fig. 1 illustrates a schematic diagram of a visualization arrangement
  • fig. 2 illustrates certain concepts of displaying computer-generated images of three-dimensional environments
  • fig. 3 illustrates an embodiment of displaying a menu
  • fig. 4 illustrates a concept according to which the menu of fig. 3 may work
  • fig. 5 illustrates an embodiment of displaying a menu
  • fig. 6 illustrates an embodiment of indicating a direction and distance
  • fig. 7 illustrates a concept according to which the embodiment of fig. 6 may work
  • fig. 8 illustrates a concept according to which the embodiment of fig. 6 may work
  • fig. 9 illustrates an embodiment of moving a virtual element
  • fig. 10 illustrates an embodiment of indicating distances in real time,
  • fig. 11 illustrates concepts according to which the embodiment of fig. 10 may work
  • fig. 12 illustrates an early phase of an embodiment of creating virtual elements
  • fig. 13 illustrates a later phase of the embodiment of fig. 12
  • fig. 14 illustrates an embodiment of working with point-fixing commands
  • fig. 15 illustrates a later phase of the embodiment of fig. 14, and
  • fig. 16 illustrates an embodiment of working with stored and restored virtual elements.
  • Fig. 1 illustrates a schematic block diagram of a system in which embodiments of the invention may be implemented.
  • common general features of VR systems are a display arrangement 101, a processing engine 102, and user controls 103.
  • An AR system may additionally require an image acquisition subsystem 104, although it is also possible to build a kind of an AR system according to the head-up display principle so that the "image" of the real world is what the user actually sees with his or her own eyes, and only the virtual elements augmenting it are added for example by projecting their images onto the surface of a transparent layer through which the user is looking.
  • the system may also comprise external interfaces 105.
  • a wide variety of display arrangements 101 are known, ranging from mere standalone display screens to multiple kinds of VR headsets or head-mounted-displays that can show a tailored projection of the image to each eye or even project the image directly onto the retina of the eye.
  • Display arrangements working according to the head-up display principle mentioned above are yet another alternative.
  • the present invention does not place requirements on what kind of display technology is used, although some display technologies may have certain specific implications on how embodiments of the invention may be used. These are described in more detail later in this text.
  • User controls 103 also come in widely different forms, as this general definition covers all possible ways in which the user may generate direct or indirect inputs to the system.
  • the most conventional forms of user controls include keyboard and mouse type devices and touch-sensitive screens.
  • the user controls may comprise means for voice and gesture control, as well as detectors of movement and direction. These can reside in a common device with the other functionalities, like a smartphone for example, but additionally or alternatively there may be external user control devices like tactile gloves or sensors built into the structures of the surrounding environment.
  • Some embodiments of the invention described here may involve preferences of certain kinds of user interface devices, but in general the invention is not restricted to any kind of user control technology.
  • the image acquisition subsystem 104 may include one or more digital cameras capable of producing video streams and/or sequences of consecutive still images.
  • the simultaneous use of two or more cameras with at least partly overlapping fields of view involves the advantage of stereo imaging, and even more versatile imaging is possible with panoramic and/or spherical imaging.
  • the image acquisition subsystem 104 also covers other ways in which the system can automatically obtain information about the surrounding space, like distance sensors, object detectors, and inertial navigation systems.
  • the external interfaces block 105 may comprise interfaces for either or both of short-distance and long-distance communications.
  • Interfaces for short-distance communications may comprise for example wired interfaces to nearby auxiliary devices; NFC (near field communications) interfaces for wireless communications with other devices in the immediate vicinity; or Bluetooth, ZigBee, WiFi, infrared, ultrasound, or other wireless interfaces for wireless communications within distances of meters or tens of meters.
  • Interfaces for long-distance communications, if present, may comprise for example 3G, 4G, or 5G mobile communications interfaces; satellite communications interfaces; or any kind of Internet interfaces for communications with devices that may be anywhere.
  • the processing engine 102 is a computer or a networked system of two or more computers that performs all the data processing that is needed to acquire information about the environment, compose images to be displayed to the user, receive and analyze user inputs, and otherwise maintain the virtual three-dimensional environment, projections of which the user needs to see.
  • the processing engine 102 comprises one or more processors that execute one or more sets of one or more machine-readable instructions that, when executed, cause the implementation of what is described as the method embodiments of the invention.
  • the processing engine 102 also comprises the required memory means, like a program memory for storing said machine-readable instructions, a data memory for storing the data that defines the virtual three-dimensional environment and its contents, firmware storage for storing the firmware defining the low-level automatic operations of the system, and so on.
  • the interfaces between the various parts shown in fig. 1 may be internal interfaces within an integrated device, or at least some of them may involve interfacing between devices over short or long distances.
  • the system may be a standalone VR or AR device, and it may be portable or wearable.
  • the case of interfaces between devices relates to the possibility of dividing the functionalities of a VR or AR system so that resources are taken into use from where they are most practically and advantageously available.
  • a central task of a system of the kind shown in fig. 1 is to provide the user with visual information about either a virtual three-dimensional environment or an actual three-dimensional environment augmented with virtual elements.
  • the system may be described as a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a user.
  • the word "image" may be understood to cover a still image, a sequence of still images, a video stream, or a frame of a video stream that the arrangement updates frequently and regularly so that the video stream is generated as a result.
  • in a VR system the whole image may consist of virtual elements.
  • in an AR system the image may comprise images of actual elements in a real-life three-dimensional environment augmented with virtual elements or - in the case of head-up displays or the like - virtual elements that in the eyes of the user augment the real-life elements that the user sees through an at least partially transparent or translucent display apparatus.
  • since the processing engine 102 can be made to perform a wide variety of tasks by programming it appropriately, saying that the visualization arrangement is configured to do something is essentially synonymous with saying that the tasks to be performed and the steps to be executed are written in the form of one or more computer programs that are compiled into machine-readable instructions and stored in a program memory that is available to the processing engine 102.
  • the visualization arrangement is configured to repeatedly obtain an indication of a viewing direction into which the user is currently looking in the three-dimensional environment.
  • the user controls 103 are used for such obtaining.
  • the arrangement is additionally configured to respond to such obtained indications of the viewing direction by centering a displayed portion of the computer-generated image on the indicated viewing direction.
  • the arrangement may obtain the indications and perform the centering of the image frequently enough and rapidly enough so that the user experiences it as if actually looking around in a real-life three-dimensional environment augmented with the displayed virtual elements. This is not an obligatory feature however; for example if there is not enough processing power available for updating the image in real time, the arrangement may perform the centering of the image with a certain delay.
  • Fig. 2 illustrates some of the concepts mentioned above.
  • the rounded rectangle 201 delimits the displayed portion of a computer-generated image of a three-dimensional environment.
  • the three-dimensional environment is a room. It may be an actual room, in which case what the user sees of the room is either a computer-generated reproduction of what an image acquisition system recorded, or the real-world view of the actual room that the user sees through the transparent layer of a HUD-type display.
  • the elements comprised in the computer-generated image (here: a table 202 and two chairs 203 and 204) may be real-life elements or virtual elements.
  • here the table 202 is a real-life element that actually exists in the room, while the two chairs 203 and 204 are virtual elements that do not exist in real life but appear only in the computer-generated image.
  • the user has a viewing direction, and the displayed portion 201 of the computer-generated image is centered on the viewing direction.
  • the viewing direction focuses on point 205, and the centering is illustrated with the dash-dot lines. Centering is to be understood as placing into the field of view that the user can conveniently perceive with his or her sense of sight, so it does not need to mean exact centering in the mathematical sense.
  • in practice the visualization arrangement must often maintain in its memory a larger image than what is currently in the displayed portion, because the user may turn his or her viewing direction at any moment, and the computer must then be ready to produce a new displayed portion of the image, centered on the new viewing direction.
  • this is illustrated so that the corners between the floor and the walls, the corners between the ceiling and the walls, and a portion of the second chair 204 are shown with dashed lines: the user does not currently see this portion of the second chair 204, but it exists in the computer-generated image that the visualization arrangement has in its memory.
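As a rough, non-authoritative illustration of keeping a larger image in memory than what is displayed, consider the following Python sketch; the panorama layout, the sizes, and the names are assumptions of this sketch, not taken from the patent:

```python
# Minimal sketch only: the arrangement keeps a full 360-degree image in memory
# and clips out the portion centered on the current viewing direction.
FULL_WIDTH = 3600    # stored panorama columns covering 360 degrees of yaw
VIEW_WIDTH = 900     # displayed columns covering a 90-degree field of view

def displayed_columns(yaw_deg: float) -> list[int]:
    """Panorama columns inside the displayed portion for a viewing direction."""
    center = int((yaw_deg % 360.0) * FULL_WIDTH / 360.0)
    half = VIEW_WIDTH // 2
    # Wrap around the 0/360-degree seam of the stored panorama.
    return [(center + offset) % FULL_WIDTH for offset in range(-half, half)]

cols = displayed_columns(10.0)   # user looks at yaw 10 degrees
print(cols[0], cols[-1])         # 3250 ... 549: the portion wraps over the seam
```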
  • a menu is a list from which the user may select an operation to be performed.
  • the options displayed in a menu are typically interrelated in some way, for example so that these are the options that are available at the particular stage of doing something, and/or so that all these options represent the same sub-class of alternatives like changing the visual characteristics of a displayed virtual element.
  • at the very least, the options displayed in the same menu are interrelated by the fact that they all appear in this menu.
  • conventional graphical user interfaces often hide menus behind headers, keywords, or symbols, typically in a row at or close to the upper edge of a displayed window, so that said row of headers, keywords, or symbols actually constitutes the highest-level menu.
  • By clicking with the mouse on one of them the user may open one or more lower-level menus and select the desired alternative from there.
  • a major advantage of menus is that the user does not need to remember all available options by heart and call them with keyed-in commands, but he or she may just make the available options visible and select from there.
  • a balance should be found between easy availability of options and non-intrusiveness.
  • the last-mentioned means that the user experience should not be disturbed with something that the user does not need at the moment or something that otherwise makes it difficult to work within the displayed environment.
  • a simple example of non-intrusiveness is that the field of view that the user has into the displayed portion of the computer-generated image of the three-dimensional environment should not be unnecessarily clogged with displayed elements that do not actually belong to said three-dimensional environment.
  • the user may find it awkward to make the selection: it would be much easier to see how a particular luminaire fits together with the rest of the room if the field of view were not covered to such a large extent by the menu.
  • the visualization arrangement may be configured to respond to a menu-displaying command received from the user by displaying a menu as a number of displayed three-dimensional symbols within the displayed portion of the computer-generated image.
  • a menu is a symbolic representation of a number of interrelated options available to the user. It could also be described as a list from which the user may select an operation to be performed, but it should be emphasized that the graphical way in which a menu is here presented to the user would probably not be associated with the word "list" in the first place.
  • the visualization arrangement is also configured to maintain said three-dimensional symbols that constitute said menu at the location at which the menu was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment. All this is done while continuing to obtain new indications of the viewing direction and responding to such obtained indications by centering the displayed portion of the computer-generated image on the respective viewing direction.
  • Maintaining the three-dimensional symbols that constitute the menu at their original location within the computer-generated image involves a number of advantages.
  • One of them is non-intrusiveness. Since the menu appears just like any other virtual element in the three-dimensional environment, the user can "move around it" or "step aside"; in other words, change the way in which he or she is looking at the computer-generated image, so that the menu either is or is not in sight.
  • the displayed menu takes only as large a portion of the field of view as the user wants.
  • the menu becomes part of the three-dimensional environment, instead of being part of a user interface through which the user looks at the three-dimensional environment.
  • the user may interact with the three-dimensional symbols of the menu in a very similar way in which he or she interacts with other virtual elements in the three-dimensional environment. This makes it very intuitive for the user to use a menu of the kind described here.
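To make this behaviour concrete, here is a minimal, purely illustrative Python sketch; all names (ViewState, MenuSymbol, render_frame) are this sketch's own, not from the patent. The displayed portion follows the viewing direction while the menu symbols keep fixed world coordinates:

```python
from dataclasses import dataclass

@dataclass
class ViewState:
    yaw: float    # horizontal viewing direction, radians
    pitch: float  # vertical viewing direction, radians

@dataclass
class MenuSymbol:
    label: str
    world_pos: tuple  # fixed (x, y, z) position in the 3-D environment

def render_frame(view: ViewState, menu: list[MenuSymbol]) -> None:
    # The displayed portion is re-centered on the latest viewing direction...
    print(f"display centered on yaw={view.yaw:.2f}, pitch={view.pitch:.2f}")
    # ...but the menu symbols keep their world coordinates, so they behave
    # like ordinary objects in the environment and may drift out of view.
    for symbol in menu:
        print(f"  {symbol.label} stays at {symbol.world_pos}")

menu = [MenuSymbol("prism", (1.0, 0.0, 2.0)), MenuSymbol("cylinder", (1.2, 0.0, 2.0))]
for yaw in (0.0, 0.5, 1.0):        # the user turns; the menu does not follow the view
    render_frame(ViewState(yaw, 0.0), menu)
```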
  • Another advantage is the possibility of context-specificity.
  • the user may have given the menu-displaying command while he or she was at a particular point within the three-dimensional environment, for the reason that some very specific task needed to be done at just that point.
  • the user might have considered adding a virtual element representing a flower on a table. If the menu was displayed on or close to the table, the user may want to leave it there while he or she is doing something else at some other location within the three-dimensional environment. The displayed menu could be waiting for the user on the table, so that when he or she comes back to the table next time, the selection of the flower may continue from where it was.
  • How long the displayed menu will remain visible and accessible may be decided according to need.
  • One possibility is that a menu, once displayed, remains visible and accessible as long as the user does not interact with any other objects displayed in the three-dimensional environment. The user could look somewhere else within the three-dimensional environment, and the menu would be there waiting at its original location for the user to look back into its direction and to interact with the menu. However, if the user moves to another location and/or interacts with any other object in the three-dimensional environment, the menu would disappear.
  • Such an embodiment involves the advantage that the user does not experience the three-dimensional environment as cluttered by unnecessarily displayed menu items.
  • menu is displayed until the user gives an explicit command to close the menu.
  • Such an embodiment involves the advantage that the user may decide exactly, how many menus should be displayed and made accessible at any given time.
  • menu will automatically disappear if a time longer than a predetermined threshold has passed since the user has last interacted with the menu.
  • Such an embodiment involves the advantage that the user does not need to separately remember to close menus that are not used any more.
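The three menu-lifetime policies just described can be summarized in a small sketch. The following Python fragment is illustrative only; the enum values, the 30-second threshold, and the function name are hypothetical:

```python
import time
from enum import Enum, auto

class MenuLifetime(Enum):
    UNTIL_OTHER_INTERACTION = auto()  # disappears when the user acts elsewhere
    UNTIL_EXPLICIT_CLOSE = auto()     # stays until an explicit close command
    TIMEOUT = auto()                  # disappears after a period of inactivity

def menu_should_close(policy, last_interaction, interacted_elsewhere,
                      close_commanded, timeout_s=30.0):
    if policy is MenuLifetime.UNTIL_OTHER_INTERACTION:
        return interacted_elsewhere
    if policy is MenuLifetime.UNTIL_EXPLICIT_CLOSE:
        return close_commanded
    return time.monotonic() - last_interaction > timeout_s

print(menu_should_close(MenuLifetime.TIMEOUT, time.monotonic() - 60, False, False))
# True: more than 30 s have passed since the last interaction with the menu
```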
  • the menu 301 has the geometric appearance of a part of a pie, i.e. a part of a relatively flat cylinder such that, if the bottom plane of the cylinder defines the horizontal plane, the part is cut with one or more vertical planes.
  • in fig. 3 the menu 301 has the geometric appearance of a half of a pie: the part is cut with a single vertical plane that includes the central axis of symmetry of the cylinder.
  • a prism 302, a cylinder 303, and a pin 304 are shown as simplified examples of three-dimensional symbols that constitute the menu in fig. 3. What the actual options are that are represented by these symbols is irrelevant to the present description. In order to provide intuitiveness to the user experience it is recommendable that the three-dimensional symbols are such that the user can easily associate them with the options that they represent. As an example, if the menu 301 is displayed in order to offer the user the possibility of adding more pieces of furniture as virtual elements into the computer-generated image, it is recommendable that the three-dimensional symbols are miniature-sized versions of the actual pieces of furniture that they represent. It is also possible to augment the three-dimensional symbols with displayed words or other character strings that further clarify their meaning.
  • in fig. 3 the three-dimensional symbols 302, 303, and 304 are displayed in the form of three slices of the displayed three-dimensional part of a pie. In many cases a menu should contain (many) more options than three.
  • Fig. 4 illustrates how such a larger number of interrelated options can be made available to the user, using the basic graphical approach shown in fig. 3.
  • the larger number of interrelated options may be thought of as populating a long list, as in the schematic representation on the left in fig. 4. Only a limited number (here: three) of the options are visible to the user at a time, as is illustrated with the solid lines around the middle three listed items in fig. 4. These are the three options, the three-dimensional symbols of which are visible in the slices of the displayed three-dimensional part of the pie.
  • the currently invisible upper and lower ends of the list in the left part of fig. 4 may be thought of as corresponding to infinite extensions of a three-dimensional bar that is bent by 180 degrees in the middle, as in the figurative illustration on the right in fig. 4.
  • the visualization arrangement may be configured to respond to a "menu scrolling command" received from the user by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options, and each time displaying only those three-dimensional symbols in said menu 301 that represent those of said larger number of interrelated options to which said scrolling had proceeded.
  • the effect of scrolling is shown with two-ended arrows in each part of figs. 3 and 4.
  • the displayed slices may be rotated (in response to the scrolling command) around a center point of the displayed three-dimensional part of the pie. In each case that slice that was furthest in the direction of rotation before said rotating is faded out of view. A new slice, representing another one of the interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
  • when scrolling in one direction, the leftmost slice (the one with the prism 302) would be faded out of view and a new slice would be brought into view on the right side. If the contents of the menu 301 were ordered as in fig. 4, the newly displayed slice would contain the hourglass-formed three-dimensional symbol. This way the user is given the possibility of scrolling through a practically unlimited number of interrelated options by just displaying a limited subset of them at each time.
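A minimal sketch of this windowed scrolling, assuming a six-option list ordered as in fig. 4; the option names and window size are this sketch's own:

```python
# Only a three-slice window of the full option list is shown; each scroll step
# shifts the window by one, dropping the slice furthest in the scroll direction
# and revealing the next option on the other side.
OPTIONS = ["prism", "cylinder", "pin", "hourglass", "cone", "sphere"]
WINDOW = 3

def visible_slices(scroll_index: int) -> list[str]:
    # Clamp so the window always stays inside the option list.
    start = max(0, min(scroll_index, len(OPTIONS) - WINDOW))
    return OPTIONS[start:start + WINDOW]

print(visible_slices(0))  # ['prism', 'cylinder', 'pin']
print(visible_slices(1))  # ['cylinder', 'pin', 'hourglass']  (prism faded out)
```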
  • How the user gives the menu-scrolling command is of little importance. It will be determined by the number and nature of user controls comprised in the visualization arrangement. Examples include but are not limited to voice commands, swiping on a touch-sensitive display, clicking on arrows or other symbols, pressing some keys, winking the left or the right eye, making a gesture in the air with a finger or a hand, or moving some other part of the body.
  • one way of giving selection commands may involve voice commands with which the user designates the symbol that he or she wants to select: for example, the user may say "left", "middle", or "right". If the symbols have names that are also displayed, the user may read aloud the name of the symbol that is to be selected. If a touch screen, mouse, or similar user control is used in which the user can point and click, the sensitive field on which the click must hit may be the symbol itself, or the whole slice of the pie in the form in which the menu is displayed.
  • Fig. 5 illustrates a slightly different alternative approach.
  • the menu 501 is a symbolic representation of a number of interrelated options available to said user.
  • the visualization arrangement is configured to maintain the three-dimensional symbols that constitute the menu 501 at the location at which it was displayed within the computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of the viewing direction and responding to such obtained indications by centering the displayed portion of said computer-generated image on the respective viewing direction.
  • the visualization arrangement is configured to display the whole menu at once.
  • the menu 501 there are a number (here: six) of the three-dimensional symbols corresponding to options available to the user through the menu. They appear in a rotationally symmetric planar array.
  • the array being "planar" means that the three-dimensional symbols appear as if they were placed on a planar, rotating, round tray 502.
  • the planar form of the array is oriented parallel to a plane that is displayed at or close to the location of the menu 501 in the computer-generated image. In fig. 5 this plane is the floor of the room.
  • the visualization arrangement is configured to respond to a menu scrolling command received from the user by rotating the rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to the front in said computer-generated image.
  • the symbol in the front is the easiest to select, so rotating the rotationally symmetric planar array of three-dimensional symbols is essentially synonymous with looking for the option that the user would prefer to select.
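As a purely illustrative sketch of the rotating tray, the following Python function computes the angle of each symbol after a scroll that brings a chosen symbol to the front; six symbols are assumed, as in fig. 5, and the names are hypothetical:

```python
def tray_angles(n_symbols: int, front_index: int) -> list[float]:
    """Angle of each symbol (degrees) after rotating symbol `front_index`
    to the front position at 0 degrees."""
    step = 360.0 / n_symbols
    return [((i - front_index) * step) % 360.0 for i in range(n_symbols)]

print(tray_angles(6, 0))  # [0.0, 60.0, 120.0, 180.0, 240.0, 300.0]
print(tray_angles(6, 2))  # symbol 2 now sits at the front (0 degrees)
```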
  • a menu-driven user interface should also include the necessary navigation means with which the user may move back and forth between the displayed menus that may have a number of levels.
  • the navigation means should also include means with which the user may terminate the current session of using menus, i.e. close all menus that are open at the moment.
  • Navigation means need not involve any kind of displayed symbols, if the navigation can take place through simple commands of another kind, such as voice commands or gestures.
  • a major task of designing three-dimensional environments using VR or AR is the placing of virtual elements.
  • consider, for example, a decorator tasked with furnishing a three-dimensional environment like a room using AR.
  • the decorator could use a visualization arrangement that acquired sufficient information about the room to display a computer-generated image of it, and then start selecting and placing pieces of furniture as virtual elements within the computer-generated image.
  • Known user interfaces of AR visualization arrangements are notoriously slow and clumsy if the user wants to place a number of similar virtual elements in a row or array, or to move an existing virtual element by some desired distance in a desired direction.
  • if the decorator wanted to try placing a number of chairs in a row along a wall, he or she would typically need to go to the desired location of the first chair, open a menu structure, navigate through the menu structure to select the virtual element representing the desired chair, give a command that made the visualization arrangement place the virtual element, then move to the next desired location, repeat all these steps, and keep doing this over and over again until the whole row of virtual chairs was in place.
  • a visualization arrangement configured to display, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment.
  • a visualization arrangement is configured to respond to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element.
  • Such a visualization arrangement is configured to respond to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction.
  • Such a visualization arrangement is configured to respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.
  • the visualization arrangement is configured to, as a response to said second command, place and display said second virtual element as a copy of said first virtual element in said computer-generated image with the same orientation as the first virtual element.
  • the visualization arrangement is configured to either leave said first virtual element as it was in the computer-generated image, thus causing the first virtual element to be duplicated by the second virtual element, or delete said first virtual element from the computer-generated image, thus causing the first virtual element to be replaced by the second virtual element.
  • the visualization arrangement is configured to, as a response to the direction indicated by said graphical indicator being parallel with a first existing linear element in the computer-generated image, display a graphical highlighter of said first existing linear element.
  • the visualization arrangement is configured to respond to an alignment-guide-selecting command received through said user controls - said alignment-guide-selecting command being one that identifies a second existing linear element in said computer-generated image - by changing the direction from said first virtual element that said graphical indicator indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.
  • the visualization arrangement is configured to, as a response to the distance indicated by said graphical indicator being equal to a first dimension of a first existing element in the computer-generated image, display a graphical highlighter of said first existing element.
  • the visualization arrangement is configured to respond to a distance-guide-selecting command received through said user controls - said distance-guide-selecting command being one that identifies a second existing element in said computer-generated image that has a second dimension - by changing the distance from said first virtual element that said graphical indicator indicates so that it coincides with said second dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
  • the visualization arrangement is configured to respond to a third command, if received through said user controls before said second command, by displaying within said computer-generated image a list of operations available for performing, one of said operations being an operation of placing the second virtual element, and perform said placing and displaying of the second virtual element only if said second command indicates a selection by the user of said operation of placing the second virtual element.
  • a method that comprises displaying, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment.
  • Such a method comprises responding to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element.
  • Such a method comprises responding to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction.
  • Such a method comprises responding to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and responding to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.
  • Fig. 6 illustrates an example of a computer-generated image displayed by a visualization arrangement to a human user.
  • the computer-generated image represents a three-dimensional environment.
  • the visualization arrangement may be configured to repeatedly obtain an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and respond to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction.
  • the visualization arrangement is configured to display, at a first location within said computer-generated image, a computer-generated first virtual element 602 as if said first virtual element was an object (here: chair) located at the corresponding location within said three-dimensional environment.
  • the visualization arrangement comprises user controls of some kind; examples of these have been considered earlier in this text.
  • the visualization arrangement is configured to respond to a first command received through the user controls by displaying within said computer-generated image a graphical indicator 603.
  • the graphical indicator 603 indicates a direction from the first virtual element 602 and a distance from the first virtual element 602.
  • Exemplary parts of the graphical indicator 603 are a direction arrow 604 and a distance pane 605.
  • Other exemplary parts are an origin indicator 606 and a target location indicator 607.
  • the direction arrow 604 indicates the direction from the first virtual element 602
  • the value (here: 0.95 m) in the distance pane 605 indicates a distance from the first virtual element 602.
  • the direction and distance are measured along a basic horizontal surface (here: the floor of the three-dimensional environment) from the projection of the geometrical center point of the first virtual element 602 on that surface, but other possibilities exist, like measuring from that edge of the first virtual element 602 that is closest in the indicated direction.
  • the origin indicator 606 may indicate either or both of the point from which the indicated direction and distance are measured and the first virtual element 602 that constitutes an origin of the measurement.
  • the target location indicator 607 may highlight and indicate the location within the three-dimensional environment (or actually: within the computer-generated image of the three-dimensional environment) where the measurement currently ends.
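The direction and distance indicated by the graphical indicator together determine the target location. A minimal sketch, assuming the measurement along the horizontal floor plane described above; the function and parameter names are hypothetical:

```python
import math

def target_location(origin_x, origin_z, direction_deg, distance_m):
    """Point at `distance_m` from (origin_x, origin_z) along `direction_deg`,
    measured on the horizontal floor plane."""
    rad = math.radians(direction_deg)
    return (origin_x + distance_m * math.cos(rad),
            origin_z + distance_m * math.sin(rad))

print(target_location(0.0, 0.0, 90.0, 0.95))
# approximately (0.0, 0.95): 0.95 m away, matching the distance pane in fig. 6
```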
  • the visualization arrangement may be configured to display the graphical indicator 603 automatically, immediately after the user has placed the first virtual element 602, or as a response to a command received through the user controls.
  • the first-mentioned alternative involves the advantage that the user can very conveniently begin duplicating and/or moving any virtual element in the computer-generated image as soon as such a virtual element has been placed.
  • the second alternative involves the advantage that the user has more freedom to decide for him- or herself in which ways he or she wants to work within the computer-generated image.
  • in the first-mentioned alternative the "first command" referred to above is the command to place the first virtual element 602, while in the second alternative the "first command" is an explicit command given by the user, indicating that he or she wants the graphical indicator 603 to be displayed.
  • the visualization arrangement may be configured to respond to a direction-changing command received through said user controls by changing the direction from said first virtual element 602 that said graphical indicator 603 indicates and displaying the graphical indicator as indicating the changed direction.
  • the direction-changing command may be for example a swipe made by the user on the surface of the touch screen in the direction to which he or she wants to change the indicated direction.
  • Other possibilities include but are not limited to voice commands and keyed-in commands.
  • the visualization arrangement may also be configured to respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that the graphical indicator 603 indicates and displaying the graphical indicator as indicating the changed distance. Similar considerations apply to the distance-changing command as to the direction-changing commands above. The aim is that the user can eventually place a second virtual element at the target location. Therefore the visualization arrangement is configured to respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element 602 that were indicated by the graphical indicator 603.
  • the visualization arrangement is configured to, as a response to said second command, place and display said second virtual element 701 as a copy of the first virtual element 602 in said computer-generated image.
  • the second virtual element 701 is automatically placed with the same orientation as the first virtual element 602.
  • An operation of this kind may involve either duplicating or moving the first virtual element 602.
  • the user may have at least two kinds of second commands at his or her disposal.
  • the visualization arrangement is then configured to either leave the first virtual element 602 as it was in the computer-generated image, or delete the first virtual element 602 from the computer-generated image.
  • the first alternative means causing the first virtual element 602 to be duplicated by the second virtual element 701, while the second alternative means causing the first virtual element 602 to be replaced by the second virtual element 701.
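A hedged sketch of this duplicate-versus-move behavior, modeling the scene as a plain Python list purely for illustration; the dictionary fields and function name are this sketch's own:

```python
import copy

def place_second_element(scene, first, target_pos, keep_original):
    second = copy.deepcopy(first)         # same model, same orientation
    second["pos"] = target_pos
    scene.append(second)
    if not keep_original:                 # "move": the second element replaces the first
        scene.remove(first)
    return second

scene = [{"model": "chair", "pos": (0.0, 0.0), "heading": 90.0}]
place_second_element(scene, scene[0], (0.0, 0.95), keep_original=True)
print(len(scene))  # 2: the chair was duplicated 0.95 m away
```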
  • the visualization arrangement can be configured to assist the user with the possibility of automatic aligning of this kind.
  • the visualization arrangement can be configured to automatically examine the current direction indicated by the graphical indicator 603 and compare it to the directions of other linear elements in the computer-generated image.
  • if the indicated direction is found to be parallel with a first existing linear element, the visualization arrangement may display a graphical highlighter of said first existing linear element.
  • the term "first existing linear element" is used here only for the ease of unambiguous reference.
  • a second possibility is that the user may intentionally cause the graphical indicator 603 to assume a direction that is parallel to the direction of an existing element in the computer-generated image.
  • Such an existing element may be called a second existing linear element, and it may be the same or some other than the first existing linear element referred to above.
  • the user may for example click or tap on a displayed linear element to give an alignment-guide-selecting command, i.e. to indicate the desired second existing linear element.
  • the visualization arrangement may be configured to respond to such an alignment-guide-selecting command by changing the direction from said first virtual element that said graphical indicator 603 indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.
  • the first and/or second existing linear element in the computer-generated image is one of the corner lines 608 between the floor and the walls of the room.
  • the graphical highlighter may be for example a different colour and/or a blinking representation of the existing linear element that acts as the alignment guide.
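A minimal sketch of such an automatic alignment guide, assuming a small angular tolerance; the tolerance value and all names are this sketch's own, not the patent's:

```python
def parallel_elements(indicator_deg, linear_elements, tolerance_deg=2.0):
    """Return the linear elements whose direction is parallel (either way)
    with the indicator's current direction."""
    hits = []
    for name, element_deg in linear_elements:
        delta = abs(indicator_deg - element_deg) % 180.0
        if min(delta, 180.0 - delta) <= tolerance_deg:
            hits.append(name)            # these would be drawn highlighted
    return hits

corners = [("north wall corner", 0.0), ("east wall corner", 90.0)]
print(parallel_elements(89.2, corners))  # ['east wall corner']
```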
  • the visualization arrangement can be configured to assist the user with the possibility of automatic distributing of this kind.
  • the visualization arrangement can be configured to automatically examine the current distance indicated by the graphical indicator 603 and compare it to the dimensions of other elements in the computer-generated image.
  • if the indicated distance is found to be equal to a dimension of a first existing element, the visualization arrangement may display a graphical highlighter of said first existing element.
  • a second possibility is that the user may intentionally cause the graphical indicator 603 to assume a distance that is equal to a dimension of an existing element in the computer-generated image.
  • Such an existing element may be called a second existing element, and it may be the same or some other than the first existing element referred to above.
  • the user may for example click or tap on a displayed element to give a distance-guide-selecting command, i.e. to indicate the desired second existing element.
  • the visualization arrangement may be configured to respond to such a distance-guide-selecting command by changing the distance from said first virtual element that said graphical indicator 603 indicates so that it is equal to the dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
  • the first and/or second existing element in the computer-generated image is one of edges of the table 601.
  • the graphical highlighter may be for example a different colour and/or a blinking representation of the existing element that acts as the distance guide.
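The distance guide can be sketched analogously; again the tolerance and names are assumptions of this illustration:

```python
def matching_dimensions(indicated_m, elements, tolerance_m=0.01):
    """Existing elements whose dimension equals the currently indicated
    distance; these would be drawn highlighted as distance guides."""
    return [name for name, dim_m in elements
            if abs(dim_m - indicated_m) <= tolerance_m]

print(matching_dimensions(0.95, [("table edge", 0.95), ("shelf", 1.20)]))
# ['table edge']
```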
  • the visualization arrangement may be configured to display a menu at the target location.
  • the visualization arrangement is configured to respond to a third command by displaying within said computer-generated image a menu 801
  • the third command should be received through the user controls before said second command.
  • the menu 801 is a list of operations available for performing. One of said operations may be an operation of placing the second virtual element, and it may be represented in the menu by a miniature copy of the second virtual element that is to be placed. In such a case the placing and displaying of the second virtual element is performed only if said second command indicates a selection by the user of an operation of placing the second virtual element.
  • the menu 801 is displayed at the target location. However, in particular if this would lead to a poor visibility of other important features at or near the target location, the menu 801 may also be displayed elsewhere within the computer-generated image. In fig. 8 it is also assumed that the menu 801 is displayed as a collection of three-dimensional symbols. This is not an essential requirement; the menu could also be displayed in some other form, like an array of text items.
  • NP1 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
  • the visualization arrangement is configured to:
  • NP4 A visualization arrangement according to any of the numbered paragraphs NP1 to NP3, configured to:
  • NP5 A visualization arrangement according to any of the numbered paragraphs NP1 to NP4, configured to:
  • - respond to an alignment-guide-selecting command received through said user controls - said alignment-guide-selecting command being one that identifies a second existing linear element in said computer-generated image - by changing the direction from said first virtual element that said graphical indicator indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.
  • NP6 A visualization arrangement according to any of the numbered paragraphs NP1 to NP5, configured to:
  • NP7 A visualization arrangement according to any of the numbered paragraphs NP1 to NP6, configured to:
  • - respond to a distance-guide-selecting command received through said user controls - said distance-guide-selecting command being one that identifies a second existing element in said computer-generated image that has a second dimension - by changing the distance from said first virtual element that said graphical indicator indicates so that it coincides with said second dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
  • NP8 A visualization arrangement according to any of the numbered paragraphs NP1 to NP7, configured to:
  • a touch-sensitive screen, often layered together with a display screen to form a touch-sensitive display, is a commonly used user interface device in visualization arrangements of the kind described above.
  • the dual use of a touch-sensitive display for both outputting visual information and receiving control commands from the user may give rise to mutually contradicting needs.
  • the touch-sensitive areas offered for the user should be large enough, and within easy enough reach for the fingers of a user operating the device, so that the operating position would be ergonomic and intuitive.
  • a known solution for allowing a user to move and rotate virtual elements consists of two touch- sensitive joystick-type areas at or close to the edges of the touch-sensitive display.
  • a touch- sensitive joystick-type area is a form of a touch-sensitive user control in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
• the touches work in a way that is analogous to a physical joystick, in which bending the joystick in different directions and by different amounts gives rise to different commands.
• a typical touch-sensitive joystick-type area is illustrated on the touch-sensitive display as a circular patch, an array of concentric rings, or another graphical element from which the user can easily conceive a center and edges.
• Two such areas are typically used, so that the user uses one of them to move a selected virtual element to different locations and the other to change the orientation, i.e. to rotate the virtual element around a rotational axis in the computer-generated image.
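What follows is a minimal sketch of how touches on such a joystick-type area could be decoded into a direction and a deflection amount, analogous to bending a physical joystick. The function and parameter names are illustrative assumptions, not taken from the document.

```python
import math

def decode_joystick_touch(touch_x, touch_y, center_x, center_y, radius):
    """Map a touch on a joystick-type area to a direction (radians) and a
    deflection in 0..1, analogous to bending a physical joystick."""
    dx, dy = touch_x - center_x, touch_y - center_y
    distance = math.hypot(dx, dy)
    if distance > radius:
        return None                      # touch landed outside the control
    direction = math.atan2(dy, dx)       # which way the "stick" is bent
    deflection = distance / radius       # how far it is bent (0..1)
    return direction, deflection
```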
• a drawback of two touch-sensitive joystick-type areas is that they reserve space and hide from view a relatively large portion of what the user would otherwise see of the displayed computer-generated image of the three-dimensional environment.
  • a visualization arrangement that is configured to display, at a first location within said computer-generated image, a computer-generated virtual element as if said virtual element was an object located at the corresponding location within said three-dimensional environment.
• Such a visualization arrangement is configured to provide the user with a touch-sensitive user control for moving said virtual element within said computer-generated image, and respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which in said first control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and in said second control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
• the visualization arrangement is configured to provide said touch-sensitive user control within a same display device as said computer-generated image.
  • the visualization arrangement is configured to provide one or more touch-sensitive selectors at or close to the touch-sensitive user control, and configured to receive said selection commands in the form of touches of said one or more touch-sensitive selectors.
• the visualization arrangement is configured to provide said touch-sensitive user control as a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
  • the visualization arrangement is configured to give visual feedback to the user at or close to the location of said touch-sensitive user control indicating which of said first and second control modes is currently active.
  • the visualization arrangement is configured to give visual feedback to the user at or close to the location of said virtual element indicating which of said first and second control modes is currently active. According to an embodiment the visualization arrangement is configured to give visual feedback to the user at or close to the location of said virtual element indicating at least one default moving direction of said virtual element when said first control mode is active.
  • a method that comprises displaying, at a first location within said computer-generated image, a computer-generated virtual element as if said virtual element was an object located at the corresponding location within said three- dimensional environment.
• Such a method comprises providing the user with a touch-sensitive user control for moving said virtual element within said computer-generated image, and responding to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which said first control mode involves responding to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and said second control mode involves responding to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
• Fig. 9 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user.
• the visualization arrangement is configured to display, at a first location within the computer-generated image, a computer-generated virtual element 901 as if said virtual element was an object located at the corresponding location within said three-dimensional environment.
  • the visualization arrangement is also configured to provide the user with a touch-sensitive user control 902 for moving the virtual element 901 within the computer-generated image.
• the concept of moving is understood to mean all kinds of movements, including but not limited to translational movements (moving to a different location) and rotational movements (rotating around a rotational axis).
• the user may decide whether the touch-sensitive user control 902 is used for translational or rotational movements at a given time.
  • the visualization arrangement is configured to respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode. Of these, in the first control mode the visualization arrangement is configured to respond to touch commands received through the touch-sensitive user control 902 by moving the virtual element 901 to a different location within the computer-generated image. In said second control mode the visualization arrangement is configured to respond to touch commands received through the touch-sensitive user control 902 by rotating the virtual element 901 around a rotational axis within the computer-generated image.
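The toggling logic just described can be summarized in code. The sketch below assumes a hypothetical virtual-element API with move_by and rotate_by methods; it is illustrative only, not the arrangement's actual implementation.

```python
class JoystickController:
    """Toggles between a translational and a rotational control mode in
    response to consecutive selection commands (cf. selectors 904 and 905)."""
    TRANSLATE, ROTATE = 0, 1

    def __init__(self, element):
        self.element = element           # the selected virtual element
        self.mode = self.TRANSLATE       # first control mode active initially

    def on_selection_command(self):
        # consecutive selection commands toggle between the two modes
        self.mode = self.ROTATE if self.mode == self.TRANSLATE else self.TRANSLATE

    def on_touch(self, direction, deflection):
        if self.mode == self.TRANSLATE:
            # move the virtual element within the computer-generated image,
            # e.g. touching the upper edge moves it in its front direction
            self.element.move_by(direction, deflection)   # hypothetical API
        else:
            # rotate the virtual element around a rotational axis
            self.element.rotate_by(deflection)            # hypothetical API
```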
  • a particular feature of the embodiment shown in fig. 9 is that the visualization arrangement is configured to provide the touch-sensitive user control 902 within the same display device 903 as the computer-generated image.
• As an alternative, if the visualization arrangement comprised for example a smartphone and a smart watch, the computer-generated image could be shown on the display of the smartphone and the touch-sensitive user control on the display of the smart watch.
  • the use of the same display for both involves the advantage that the user may only need a single device for performing all operations described here.
  • the visualization arrangement is configured to provide one or more touch-sensitive selectors 904 and 905 at or close to the touch-sensitive user control 902.
  • the visualization arrangement is configured to receive said selection commands in the form of touches of the one or more touch-sensitive selectors 904 and 905. In this example, touching the first touch-sensitive selector 904 makes the visualization arrangement activate the first control mode, and touching the second touch-sensitive selector 905 makes the visualization arrangement activate the second control mode.
• Showing the touch-sensitive selectors within the same display as the touch-sensitive user control 902 involves many advantages, like them being easily available for the user, and it being possible to use the displaying capabilities of the touch-sensitive screen to illustrate the touch-sensitive selectors in an intuitive way.
  • the touch-sensitive user control 902 is a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
  • Particular directions may be emphasized; for example in fig. 9 the visualization arrangement is configured to graphically emphasize the upper and lower edges of the touch-sensitive user control 902. This may refer to particular directions of moving the virtual element 901, like touching the upper edge for moving the virtual element 901 into its current front direction and touching the lower edge for moving the virtual element 901 into its current back direction. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902.
  • the visualization arrangement may be configured to give visual feedback to the user at or close to the location of the touch-sensitive user control 902, indicating which of said first and second control modes is currently active.
  • Such visual feedback could comprise for example some visual emphasis of that one of the touch-sensitive selectors 904 and 905 that corresponds to the currently active control mode.
  • the way in which particular directions or parts of the actual touch-sensitive user control 902 are emphasized may serve as an indicator of the currently active control mode. For example, when the first control mode is active, the touch-sensitive user control 902 might be provided with some emphasis of clearly translational directions, while when the second control mode is active it could be provided with some emphasis of rotational directions. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902.
  • the visualization arrangement could additionally or alternatively be configured to give visual feedback to the user at or close to the location of the virtual element 901, indicating which of said first and second control modes is currently active.
  • a graphical symbol 906 appears under the virtual element 901, with the same outline as the first touch-sensitive selector 904. This tells the user that the currently active mode is the one that was selected with the first touch-sensitive selector 904.
  • Activating the second control mode by touching the second touch-sensitive selector 905 could change the outline of the graphical symbol 906 to resemble that of the second touch-sensitive selector 905. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902.
  • the arrows in the graphical symbol 906 shown in fig. 9 have also another purpose.
  • the visualization arrangement is configured to give visual feedback to the user at or close to the location of the virtual element 901, indicating at least one default moving direction of the virtual element 901 when said first control mode is active. That is, the virtual element 901 has a "front" direction, into which it will move if the user gives a command to move the virtual element 901 forward, and this "front" direction is the one into which the arrows are pointing in the graphical symbol 906.
• a front or other direction is typically bound to the overall appearance of the virtual element in question, because many virtual elements have a distinct forwards direction. As an example, if the virtual element 907 representing a chair were moved, its intuitive front direction would be towards the lower left in fig. 9.
• NP11 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
• the visualization arrangement is configured to: - respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which in said first control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and in said second control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
• NP12 A visualization arrangement according to numbered paragraph NP11, configured to provide said touch-sensitive user control within a same display device as said computer-generated image.
  • NP13 A visualization arrangement according to any of the numbered paragraphs NP11 or NP12, configured to provide one or more touch-sensitive selectors at or close to the touch-sensitive user control, and configured to receive said selection commands in the form of touches of said one or more touch-sensitive selectors.
  • NP14 A visualization arrangement according to any of the numbered paragraphs NP11 to NP13, configured to provide said touch-sensitive user control as a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
• NP15 A visualization arrangement configured to give visual feedback to the user at or close to the location of said touch-sensitive user control indicating which of said first and second control modes is currently active.
• NP16 A visualization arrangement configured to give visual feedback to the user at or close to the location of said virtual element indicating which of said first and second control modes is currently active.
• NP17 A visualization arrangement according to numbered paragraph NP16, configured to give visual feedback to the user at or close to the location of said virtual element indicating at least one default moving direction of said virtual element when said first control mode is active.
• A feature particular to AR is that the user sees both real-world objects and virtual elements in the same view.
  • One possible application of AR is AR-assisted interior design, in which the user considers what kind of additional objects he or she would like to place into a given environment. For example there may be a partially furnished room, so that the task of the interior designer is to select some additional pieces of furniture and find optimal locations for them in the room.
• It is possible to use AR for the purpose defined above so that the user views a computer-generated image of the three-dimensional environment in question and tries placing therein virtual elements that have been made to look like the possible additional real-world objects.
  • the colour of objects affects the way in which users perceive their space requirements. This effect becomes even more manifest if the objects are transparent or translucent, because the user may fail to appropriately consider their dimensions in a displayed image.
  • a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user.
• Such a visualization arrangement is configured to display, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment, and at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment.
  • the visualization arrangement is configured to respond to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location.
  • the visualization arrangement is also configured to repeatedly update said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
• the visualization arrangement is configured to calculate said real-world dimension as a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location.
  • the visualization arrangement is configured to respond to an anchor-selecting command from the user by selecting one of a plurality of real-world objects, images of which are displayed within said computer-generated image, as an anchor object to act as said first real-world object to which said first distance is measured.
• said anchor-selecting command may pertain to a point of said one of said plurality of real-world objects, in which case said visualization arrangement is configured to respond to said anchor-selecting command by selecting said point as a fixed endpoint from which said first distance is measured.
  • a method for displaying a computer-generated image of a three-dimensional environment to a human user comprises displaying, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment, as well as displaying, at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment.
  • the method comprises responding to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location. Additionally the method comprises repeatedly updating said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
• Fig. 10 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user.
• the visualization arrangement is configured to display, at a first location within said computer-generated image, an image 1001 of a first real-world object (a table) located at a corresponding location in the three-dimensional environment.
• the visualization arrangement is configured to display, at a second location within said computer-generated image, a computer-generated virtual element 1002 (three-dimensional image of a chair) as if said virtual element 1002 was a second real-world object (a chair) located at the corresponding apparent location within said three-dimensional environment.
  • the user utilizes the user controls at his or her disposal to give the visualization arrangement a command to display visual indications of one or more real-world dimensions.
  • the visualization arrangement responds to the command it received from the user by displaying, within the computer-generated image, at least one visual indication of a real-world dimension. Two of these are shown in fig. 10 as examples; of these the one on the left is discussed first, and therefore called here the first visual indication 1003.
• the first visual indication 1003 represents a distance that would prevail between a first real-world object (the wall 1004) at its current location and the second real-world object (chair), if the last-mentioned was located at the location made apparent by the virtual element 1002.
• In the example of fig. 10 said distance is 108 cm.
  • the location of the chair is called "apparent" location because in reality there is no chair there in the imaged three-dimensional environment; the user just sees a virtual element 1002 in the computer-generated image as if it was a real-world object in the three-dimensional environment.
  • the command to display visual indications may contain or be associated with selection commands with which the user selects an image of a real-world object (the wall 1004) and the virtual element 1002 that define the endpoints of the distance to be represented.
  • a selection command of this kind may be called an anchor-selecting command.
• If the user controls comprise a mouse, the user may give an anchor-selecting command for example by clicking a mouse button when the cursor is on top of the displayed image of the wall 1004. This makes the visualization arrangement select the wall 1004 as an anchor object, to act as the real-world object to which the distance (here 108 cm) is measured.
  • the anchor-selecting command may pertain to the whole wall 1004.
  • the anchor-selecting command may pertain to a point 1005 of the larger real-world object (the wall 1004), in which case the visualization arrangement is configured to respond to said anchor-selecting command by selecting point 1005 as a fixed endpoint to which said distance is measured.
  • the visualization arrangement calculates the real-world dimension as a shortest linear distance between points closest to each other.
  • one endpoint is a point of the (first) real-world object (i.e. the wall 1004) at its current location
  • the other endpoint is a point of that (second) real-world object that the virtual element 1002 illustrates, if located at its apparent location in the three-dimensional environment.
  • the wall 1004 is a planar surface, so in this case the real-world dimension (108 cm) is the shortest distance measured at a right angle against the surface of the wall 1004.
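As an illustration, the two distance computations discussed above (the shortest distance between closest points of two objects, and the perpendicular distance to a planar surface such as the wall 1004) could be sketched as follows. The point-sampled object representation and all names here are assumptions, not the arrangement's actual implementation.

```python
import numpy as np

def closest_point_distance(points_a, points_b):
    """Shortest linear distance between points closest to each other of two
    objects, each represented by sampled surface points (brute force)."""
    a = np.asarray(points_a, float)[:, None, :]   # shape (Na, 1, 3)
    b = np.asarray(points_b, float)[None, :, :]   # shape (1, Nb, 3)
    return np.sqrt(((a - b) ** 2).sum(axis=-1)).min()

def distance_to_plane(points, plane_point, plane_normal):
    """Shortest distance from an object to a planar surface such as a wall,
    measured at a right angle against the surface."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    offsets = np.asarray(points, float) - np.asarray(plane_point, float)
    return np.abs(offsets @ n).min()
```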
  • Fig. 11 illustrates some other alternatives, with the character-string-type explicit distance indications omitted to enhance graphical clarity.
• In fig. 11 the real-world object selected as the anchor, i.e. the real-world object to which the distance is measured, is the table 1001.
  • the alternative shown as 1101 comprises calculating the real-world dimension as the shortest linear distance between selected points of the first and second real-world objects, said selected points corresponding to points 1102 and 1103 selected by the user in the computer-displayed image.
  • the user may have given specific selection commands to select the points 1102 and 1103.
• the alternative shown as 1104 comprises calculating the real-world dimension as the shortest distance along a selected surface (the floor 1105) of said three-dimensional environment.
  • the distance is thus between such points closest to each other of said first real-world object (table 1001; at its current location) and said second real-world object (chair, if located at its apparent location) that are located on the surface (on the floor 1105).
  • these points are the points of the respective leg ends that are closest to each other.
  • the alternative shown as 1106 comprises calculating the real-world dimension as the shortest linear distance between points closest to each other of said first real-world object (table 1001; at its current location) and said second real-world object (chair, if located at its apparent location) when projected onto a selected surface of said three-dimensional environment.
• Here the selected surface is the floor 1105, which is a horizontal plane.
• the real-world dimension according to alternative 1106 is thus the narrowest gap, measured horizontally, between the table 1001 and the chair.
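The alternatives 1104 and 1106 could be sketched in the same spirit; again the point-sampled representation and all function names are assumptions.

```python
import numpy as np

def min_pairwise_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    if len(a) == 0 or len(b) == 0:
        return None                       # an object does not touch the surface
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min()

def projected_gap(points_a, points_b, plane_normal):
    """Alternative 1106: narrowest gap between two objects when projected
    onto a selected surface, e.g. the horizontal floor plane."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    def project(p):
        p = np.asarray(p, float)
        return p - np.outer(p @ n, n)     # drop the component along the normal
    return min_pairwise_distance(project(points_a), project(points_b))

def on_surface_gap(points_a, points_b, plane_point, plane_normal, tol=1e-3):
    """Alternative 1104: shortest distance between only those points of the
    two objects that lie on the surface itself, e.g. the closest leg ends."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    def on_plane(p):
        p = np.asarray(p, float)
        return p[np.abs((p - np.asarray(plane_point, float)) @ n) < tol]
    return min_pairwise_distance(on_plane(points_a), on_plane(points_b))
```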
  • Real-world objects may move or be moved in the three- dimensional environment
  • virtual elements may move or be moved in the displayed computer-generated image.
  • the user may use the user controls of the visualization arrangement to move the virtual element around.
  • the virtual element may even have automotive characteristics, so that the user may put it into motion and watch what happens when it moves within the displayed computer-generated image.
  • the user or an assisting person may also move real-world objects, or they may have automotive characteristics.
• It is advantageous to configure the visualization arrangement to repeatedly update said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment. How frequently such updating is performed is a design choice and depends e.g. on the processing resources that are available to the visualization arrangement. If the processing resources allow, it is advantageous to perform said updating so frequently that the user perceives the indications of real-world dimensions as being continuously updated in real time.
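A per-frame update could look roughly like the following sketch; the scene.distance and set_label calls are hypothetical placeholders for whatever scene-query and labelling facilities the visualization arrangement offers.

```python
def on_frame(scene, indications):
    """Called once per rendered frame: recompute every displayed distance so
    that the user perceives the indications as continuously updated."""
    for indication in indications:
        d = scene.distance(indication.anchor, indication.virtual_element)
        indication.set_label(f"{d * 100:.0f} cm")   # e.g. "108 cm"
```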
  • FIG. 10 illustrates how the visualization arrangement is configured to display, in addition to the images explained above, an image of a third real-world object (wall 1006) at a third location within the computer-generated image. Said third location naturally corresponds to the actual location of the wall 1006 in the three-dimensional environment.
  • the user may give as many commands of displaying visual indications as he or she wants.
  • the visualization arrangement received from the user a command to display the second visual indication 1007 between the image of the third real-world object (wall 1006) and the computer generated virtual element 1002.
  • the second visual indication 1007 indicates another real-world dimension, called here the second real-world dimension that represents a second distance.
  • the distance 13 cm shown in fig. 10 would prevail between the third real-world object (the wall 1006) and the second real-world object (chair) if located at its apparent location.
• It is advantageous to configure the visualization arrangement to repeatedly update the second visual indication 1007 to represent the most up-to-date value of said second distance when the virtual element 1002 moves within the computer-generated image and/or the third real-world object (wall 1006) moves within said three-dimensional environment.
• NP21 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
  • the visualization arrangement is configured to:
• NP22 A visualization arrangement according to numbered paragraph NP21, configured to calculate said real-world dimension as a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location.
  • NP23 A visualization arrangement according to numbered paragraph NP21, configured to calculate said real-world dimension as one of:
• NP24 A visualization arrangement configured to respond to an anchor-selecting command from the user by selecting one of a plurality of real-world objects, images of which are displayed within said computer-generated image, as an anchor object to act as said first real-world object to which said first distance is measured.
• NP25 A visualization arrangement according to numbered paragraph NP24, wherein said anchor-selecting command pertains to a point of said one of said plurality of real-world objects, and said visualization arrangement is configured to respond to said anchor-selecting command by selecting said point as a fixed endpoint to which said first distance is measured.
  • NP26 A visualization arrangement according to any of numbered paragraphs NP21 to NP25, configured to:
• NP27 A method for displaying a computer-generated image of a three-dimensional environment to a human user comprising:
• REAL-TIME BLOCK OBJECT BUILDING WITH POINTS
• When the user uses VR or AR to design a furnished room or other kind of three-dimensional environment with three-dimensional objects in it, a preferred way to proceed is often to select new virtual elements from a library or menu and to place them at such locations in the computer-generated image where they appear to serve their intended purpose best.
• As a simplified example we may consider the task of designing a dining room in which a dining table and a number of chairs should be located. The dimensions of the room may be given, for example because it is an actual room in an existing building or a building to be built according to an accepted and confirmed plan.
  • One of the tasks of the interior designer is then to find the most appropriately sized table and chairs that fit harmoniously in the room and offer enough space for everyone.
  • a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user.
• the visualization arrangement is configured to: a) respond to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) respond to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) respond to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) respond to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which has its corners at said first, second, and third placed points.
  • the visualization arrangement is configured to respond to one or more additional point-placing commands received from the user between steps c) and d), which additional point-placing commands indicate points in said computer-generated image located within said plane, by displaying respective one or more additional placed points, so that in step d) said first side face has corners also at said additional placed points.
  • said plane coincides with a planar surface displayed in said computer-generated image.
  • the visualization arrangement is configured to selectively operate in one of two modes, of which a first mode is a planar mode for making said user place at least one of said first, second, third, and possible additional placed points on said plane within said computer-generated image, and a second mode is a non-planar mode for making said user place said fourth placed point out of said plane within said computer-generated image.
  • the visualization arrangement is configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
  • the visualization arrangement is configured to display within said computer-generated image, as a response to any of said point-placing commands, a draft connector line connecting a previously placed point to a displayed cursor available for the user to indicate a location for the next point to be placed.
  • the visualization arrangement is configured to display within said computer-generated image completed connector lines between consecutively placed points.
  • the visualization arrangement is configured to display one or more drawing aids within said computer-generated image, said drawing aids representing graphical regularities for placing any of said points, and also configured to attract a cursor available for the user to indicate a location for the next point to be placed towards said drawing aids, wherein said graphical regularities involve at least one of: a direction parallel with an existing direction in the computer-generated image; a direction at a predetermined angle to an existing direction in the computer-generated image; a distance from a previously placed point equal to an existing distance in the computer-generated image; a predetermined shortest distance from an existing planar surface in the computer-generated image; a predetermined shortest distance from a displayed element in said computer-generated image.
• a method for displaying a computer-generated image of a three-dimensional environment to a human user comprises: a) responding to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) responding to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) responding to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) responding to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which has its corners at said first, second, and third placed points.
• Fig. 12 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user.
• the visualization arrangement is configured to respond to a particular kind of commands received from the user, here called point-placing commands, by displaying so-called placed points at those locations of the computer-generated image where a cursor or other indication means was located when the point-placing command was received.
  • the way in which the user moves the cursor in the computer-generated image is not important; the user may use e.g. a mouse, or control gestures, or his or her viewing direction, or speech commands, or other means.
  • the way in which the user gives the actual point-placing command is immaterial to this description. Any known way of giving commands can be used, including those listed elsewhere in this description.
  • the visualization arrangement received a first point-placing command from the user when the cursor was at the location shown as 1201, so consequently the visualization arrangement displays a first placed point at that location.
  • the second, subsequent point-placing command was received from the user when the cursor was at the location shown as 1202, so consequently a second placed point is displayed at that location.
  • a third, subsequent point-placing command was received from the user when the cursor was located at the location 1203, and consequently a third placed point is displayed at that location.
  • the arrows show how the user moved the cursor between giving the point-placing commands. Any three points define a plane in a three-dimensional space. Thus in the computer-generated image shown in fig. 12 the displayed first, second, and third placed points also define a plane.
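The plane defined by the three placed points, and the test of whether a further point lies within it, follow from elementary vector geometry. A short sketch (the names are illustrative):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Any three non-collinear placed points define a plane; return a point
    on the plane and the plane's unit normal."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    return p1, normal / np.linalg.norm(normal)

def in_plane(point, plane_point, plane_normal, tol=1e-6):
    """True if a further placed point lies within the plane."""
    return abs((np.asarray(point, float) - plane_point) @ plane_normal) < tol
```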
  • the visualization arrangement may be configured so that when it receives a point-placing command with the cursor close to a planar element in the computer generated image, the resulting placed point is automatically made to be located on the plane defined by said planar element.
  • the visualization arrangement may be configured to selectively operate in one of two modes.
  • a first mode is a planar mode for making the user place at least one of the first, second, third, and possible additional placed points on an existing plane within the computer-generated image.
  • a second mode is a non-planar mode for making the user place at least the fourth placed point out of said plane within the computer-generated image.
  • the visualization arrangement may be configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
  • the user may want the newly created virtual element to have some regular shape.
  • One frequently encountered kind of regular shapes, particularly in indoor spaces that consist of rooms, is such where at least one side face of the virtual element is a rectangle.
  • the visualization arrangement receives one additional point-placing command from the user, which additional point-placing command indicates a further point 1204 located in the same plane as the previously placed points 1201, 1202, and 1203.
  • a cursor 1205 is shown in fig. 12 as an example.
  • the visualization arrangement may be configured to execute assistive functions.
  • the visualization arrangement may be configured to display draft connector lines 1206 and 1207 within the computer-generated image, as a response to any of the point-placing commands discussed above.
• A draft connector line, like 1206 and 1207, is a displayed linear graphical element that connects a previously placed point to the displayed cursor 1205 that is available for the user to indicate a location for the next point to be placed.
  • one draft connector line 1206 connects the most recently placed point 1203 to the cursor 1205, while another draft connector line 1207 connects the first placed point 1201 to the cursor 1205.
  • Another example of a feature that may serve as a drawing aid is completed connector lines between points that were already placed.
• the visualization arrangement may be configured to display completed connector lines between consecutively placed points within said computer-generated image.
• fig. 12 shows the completed connector line 1208 between the second and third placed points 1202 and 1203, as well as the completed connector line 1209 between the first and second placed points 1201 and 1202. It may be advantageous to make completed connector lines have a different visual appearance than draft connector lines, so that it is easy for the user to grasp which connector lines are which.
• Yet another example of possible drawing aids is support patterns that project some already existing feature to another place in the computer-generated drawing.
  • Examples of support patterns in fig. 12 are the indications 1210 and 1211 of directions parallel with the existing directions of the completed connector lines 1208 and 1209 respectively.
  • the visualization arrangement may be configured to display one or more drawing aids within said computer-generated image.
• Said drawing aids represent graphical regularities for placing any of the points for which the user may want to give point-placing commands. It may be advantageous to make the drawing aids attract the cursor 1205 that is available for the user to indicate a location for the next point to be placed. Such attracting works like a "gravitational pull", i.e. a force that tends to move an approaching cursor towards said drawing aids, so that the cursor may "snap to" a drawing aid when close to it in the computer-generated image.
• the graphical regularities that are considered in association with drawing aids may involve at least one of: a direction parallel with an existing direction in the computer-generated image; a direction at a predetermined angle to an existing direction in the computer-generated image; a distance from a previously placed point equal to an existing distance in the computer-generated image; a predetermined shortest distance from an existing planar surface in the computer-generated image; a predetermined shortest distance from a displayed element in said computer-generated image.
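The "gravitational pull" towards drawing aids could be sketched as a snap operation that moves the cursor onto the nearest guide line whenever one passes within a snap radius. All names below are assumptions, not the arrangement's actual implementation.

```python
import numpy as np

def snap_cursor(cursor, guides, snap_radius=0.05):
    """Attract the cursor towards the nearest drawing aid: each guide is a
    line (origin, direction); the cursor snaps to the closest point on a
    guide if one passes within snap_radius of it."""
    c = np.asarray(cursor, float)
    best, best_d = c, snap_radius
    for origin, direction in guides:
        o = np.asarray(origin, float)
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        foot = o + ((c - o) @ d) * d          # closest point on the guide line
        dist = np.linalg.norm(c - foot)
        if dist < best_d:
            best, best_d = foot, dist
    return best
```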
  • Fig. 13 illustrates the step in which the new virtual element that the user is creating becomes three-dimensional.
  • the visualization arrangement receives from the user a further, subsequent point-placing command. It indicates a point 1301 in the computer-generated image that is not located within said plane.
  • the visualization arrangement responds to this "fourth" point-placing command by displaying, within the computer-generated image, a three-dimensional virtual element 1302 in the form of a prism.
• the new virtual element 1302 has the form of a regular rectangular prism. This is a consequence of two factors. First, the "fourth" placed point 1301 is located on a line that is perpendicular to the plane of the bottom 1303 and goes through one of its corner points. Second, for every other corner point of the bottom 1303 there is also a corresponding, and correspondingly located, corner point of the top 1304. Placing the "fourth" point 1301 perpendicularly above one of the corner points of the bottom 1303 is often what the user desires, and the visualization arrangement may help by displaying a corresponding vertical drawing aid, which in fig. 13 is the line 1305. In order to help make the new virtual element 1302 exactly as high as some other, previously existing element in the computer-generated image, there could also be one or more horizontal drawing aids, examples of which are shown as 1306 and 1307 in fig. 13.
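The step of forming the prism, with the top face copying the bottom face and all top corners moving along with the "fourth" point, could be sketched as follows (illustrative names, not the arrangement's actual implementation):

```python
import numpy as np

def extrude_prism(base_corners, fourth_point):
    """Form a prism from the planar bottom face: the top face is a copy of
    the bottom, translated so that its first corner lands on the 'fourth'
    placed point; all other top corners move along with it."""
    base = np.asarray(base_corners, float)            # corners of the bottom
    offset = np.asarray(fourth_point, float) - base[0]
    top = base + offset                               # copied, shifted bottom
    return base, top
```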
  • the new virtual element 1302 does not need to have the form of a rectangular prism. If the user did not want to utilize a vertical drawing aid 1305, he or she could have placed the "fourth" point 1301 at a location that is not directly above any of the corners of the bottom 1303. Since at this stage the three-dimensional form is essentially formed by copying the form of the bottom 1303 onto another level to make a top 1304, at least at this stage it is intuitive that the visualization arrangement moves all other corner points of the top 1304 along with the "fourth" point 1301. Later the user may be given a possibility to change the exact location of any corner point of the new virtual element 1302, so that the final form of the new virtual element 1302 does not need to be regular in any respect.
  • Fig. 13 shows the connector lines between corner points (other than the periphery of the bottom 1303) still as draft connector lines. This may mean that the user is still considering the height of the new virtual element to be formed, with the cursor 1205 moving up and down (and possibly also sideways) in response to moving commands received from the user. Once the user has actually given the point-placing command for the "fourth" point 1301, it is advantageous to make the visualization arrangement display all edges of the new virtual element 1302 as completed connector lines.
• NP31 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to: a) respond to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) respond to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) respond to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) respond to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which has its corners at said first, second, and third placed points.
• NP32 A visualization arrangement according to numbered paragraph NP31, configured to respond to one or more additional point-placing commands received from the user between steps c) and d), which additional point-placing commands indicate points in said computer-generated image located within said plane, by displaying respective one or more additional placed points, so that in step d) said first side face has corners also at said additional placed points.
  • NP33 A visualization arrangement according to any of numbered paragraphs NP31 or NP32, wherein said plane coincides with a planar surface displayed in said computer-generated image.
• NP34 A visualization arrangement configured to selectively operate in one of two modes, of which a first mode is a planar mode for making said user place at least one of said first, second, third, and possible additional placed points on said plane within said computer-generated image, and a second mode is a non-planar mode for making said user place said fourth placed point out of said plane within said computer-generated image.
  • NP35 A visualization arrangement according to numbered paragraph NP34, configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
• NP36 A visualization arrangement according to any of numbered paragraphs NP31 to NP35, configured to display within said computer-generated image, as a response to any of said point-placing commands, a draft connector line connecting a previously placed point to a displayed cursor available for the user to indicate a location for the next point to be placed.
  • NP37 A visualization arrangement according to any of numbered paragraphs NP31 to NP36, configured to display within said computer-generated image completed connector lines between consecutively placed points.
• NP38 A visualization arrangement configured to display one or more drawing aids within said computer-generated image, said drawing aids representing graphical regularities for placing any of said points, and also configured to attract a cursor available for the user to indicate a location for the next point to be placed towards said drawing aids, wherein said graphical regularities involve at least one of: a direction parallel with an existing direction in the computer-generated image; a direction at a predetermined angle to an existing direction in the computer-generated image; a distance from a previously placed point equal to an existing distance in the computer-generated image; a predetermined shortest distance from an existing planar surface in the computer-generated image; a predetermined shortest distance from a displayed element in said computer-generated image.
• NP39 A method for displaying a computer-generated image of a three-dimensional environment to a human user comprising: a) responding to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) responding to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) responding to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) responding to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which has its corners at said first, second, and third placed points.
• It is advantageous if an AR application displays the real-world objects and the virtual elements in a computer-generated image so that they appear at "natural" locations.
• For example, a virtual element representative of a piece of furniture should appear to be located on the floor of the room, and another virtual element representative of a painting should appear to hang on a wall.
• For this purpose the visualization arrangement should be aware of the coordinates, in some coordinate system it uses to make the calculations, of both the planar surfaces that make up the three-dimensional environment (like floor, walls, and ceiling) and the displayed virtual elements.
  • the visualization arrangement may comprise, or be capable of communicating with, one or more cameras and/or corresponding image-acquisition systems.
• If the surfaces of the three-dimensional environment lack distinctive, visually recognizable features, a mapping algorithm in the visualization arrangement cannot easily recognize them.
• As an example, consider a completely white room (white floor, white walls, white ceiling) that is illuminated with relatively even and smooth lighting.
• the scarcity of distinctive, visually recognizable points in the room may mean that when an imaging device of the visualization arrangement acquires digital images, a mapping algorithm may not easily find out where, for example, the borderline between the ceiling and a wall is.
  • a surface-scanning algorithm may fail to appropriately recognize a piece of surface, because it does not find recognizable texture on it.
  • the mapping algorithm may fail to produce feasible coordinates of the planar surfaces of the room, and/or the coordinates it produces may be erroneous.
  • Many AR engines do not allow placing further virtual elements on "non-existing" surfaces, i.e. ones that have not been correctly recognized as surfaces.
  • the problem may pertain to the whole room or to a part of it, for example so that even if the floor was recognized appropriately, the ceiling and the upper parts of the walls were not.
  • a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user.
  • the visualization arrangement is configured to use an image acquisition subsystem to obtain a digital image of said three-dimensional environment, and display said digital image to a user as a first computer-generated image of said three-dimensional environment.
  • the visualization arrangement is configured to respond to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system.
  • the visualization arrangement is configured to construct a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and display said second computer-generated image to the user.
  • the visualization arrangement is configured to perform said determining of said coordinates so that the determined coordinates are located on a limited set of planar surfaces in said three-dimensional coordinate system.
  • the visualization arrangement is configured to perform said determining of said coordinates so that the determined coordinates are located on a set of mutually perpendicular planar surfaces in said three-dimensional coordinate system.
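Constraining the determined coordinates to a limited set of mutually perpendicular planes could be sketched, for the axis-aligned case, as snapping each fixed point onto the nearest candidate plane. The data layout and names are assumptions, not the arrangement's actual implementation.

```python
import numpy as np

def constrain_to_axis_planes(point, plane_offsets):
    """Snap a determined coordinate onto the nearest plane of a limited set
    of mutually perpendicular, axis-aligned planes. plane_offsets[axis] is a
    list of plane positions along that axis, e.g. walls at x=0 and x=4."""
    p = np.asarray(point, float)
    best = None
    for axis, offsets in enumerate(plane_offsets):
        for off in offsets:
            candidate = p.copy()
            candidate[axis] = off                 # project onto the plane
            d = abs(p[axis] - off)
            if best is None or d < best[0]:
                best = (d, candidate)
    return best[1]
```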
  • the visualization arrangement is configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image. According to an embodiment the visualization arrangement is configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
• According to an embodiment the visualization arrangement comprises a spatial information subsystem configured to identify viewing directions of said user within said three-dimensional environment, and the visualization arrangement is configured to receive and process point-fixing commands as each containing indications of at least two identified viewing directions, so that the point identified by such a point-fixing command is a point at which the identified viewing directions intersect.
  • the visualization arrangement is configured to display to the user one or more virtual elements located in a fixed spatial relationship with reference to said planar surfaces that intersect at one or more of said fixed points.
  • the visualization arrangement is configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background, so that the virtual elements that have a fixed spatial relationship with reference to said planar surfaces appear to be within said displayed first computer-generated image.
• According to a fourteenth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user.
• the method comprises: - using an image acquisition subsystem to obtain a digital image of said three-dimensional environment, - displaying said digital image to a user as a first computer-generated image of said three-dimensional environment, - responding to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system, - constructing a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and - displaying said second computer-generated image to the user.
  • Fig. 14 illustrates a digital image of a three-dimensional environment.
  • a visualization arrangement may have used an image acquisition subsystem, such as one or more digital cameras, to obtain the digital image in fig. 14.
  • the digital image may be a still image, or it may be a piece of a video stream so that what is seen in fig. 14 is a representative frame of the video stream.
  • the visualization arrangement may be configured to display the digital image of fig. 14 to the user.
• This image is called here the first computer-generated image of the three-dimensional environment. It does not need to contain any three-dimensional information; it may be a simple set of digital image information read from the recording CCD sensor of a digital camera.
  • a characteristic feature of the first computer-generated image of fig. 14 is the scarcity of visually discernible details or texture on the planar surfaces of the three-dimensional environment.
  • the floor, walls, and ceiling of the room are all white, with only the window 1401 on one wall offering any details on the otherwise continuous, white surfaces.
  • Another feature is that there is a chair 1402 in the middle, partially obstructing the view to the point 1403 where the two walls and the floor of the room intersect. As a result it may be a difficult task for a mapping and/or scanning algorithm to produce any reasonably accurate digital model of the room.
  • the visualization arrangement is configured to receive point-fixing commands from the user.
• a point-fixing command is one that identifies a point that is or represents a real-life point of the three-dimensional environment.
• the visualization arrangement is configured to respond to every point-fixing command by determining and storing corresponding coordinates of fixed points in a three-dimensional coordinate system. In a way, the user can tell the visualization arrangement where certain important points actually are, if the visualization arrangement is not capable of recognizing them in a purely automatic manner, using its scanning and/or mapping algorithms.
• An example of a point of the three-dimensional environment that could be identified with a point-fixing command is the point 1404 where the two walls and the ceiling intersect in fig. 14. Other similar points are shown as 1405, 1406, 1407, and 1408. Also the point 1403 mentioned above could be identified with a point-fixing command.
• the visualization arrangement is configured to construct another computer-generated image of the three-dimensional environment.
  • This image is called here the second computer-generated image.
• It is a purely virtual construction in the sense that - at least initially - it does not even try to convey a true visual image of what the three-dimensional environment actually looks like. Rather, it conveys a conceptual image of what kind of planar surfaces there are that delimit the three-dimensional environment, a digital image of which originally appeared as the first computer-generated image.
  • the planar surfaces in fig. 15 are the right wall 1501, the left wall 1502, the floor 1503, and the ceiling 1504.
• It is possible that the visualization arrangement could have been capable of properly scanning and mapping the extremities of the window 1401, so that it also appears as a gap in the left wall 1502 of fig. 15.
• Alternatively, the left wall 1502 may be modelled as just a continuous surface, without paying any attention to the existence of the window 1401.
  • the second computer-generated image can be displayed to the user, so that the user is able to check that it has been successfully generated and models the actual surfaces of the three- dimensional environment with reasonable accuracy.
• Since the visualization arrangement now has exact knowledge of the coordinates of each planar surface in the second computer-generated image, the latter can thereafter be used as the spatial frame of reference in which virtual elements can be placed and viewed.
• the visualization arrangement may be configured to perform the determining of coordinates (in response to receiving the point-fixing commands from the user) so that the determined coordinates are located on a limited set of planar surfaces in the three-dimensional coordinate system, such as a set of mutually perpendicular planar surfaces. If there is enough processing power in the visualization arrangement this restriction can be partly or completely lifted, so that even more complicated three-dimensional spaces can be modelled accurately.
  • the visualization arrangement is configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image; see the use of a cursor 1409 to indicate a point 1404 in fig. 14.
  • the first computer-generated image is just a two-dimensional distribution of colour and brightness information values.
  • the visualization arrangement may be configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
  • the visualization arrangement may comprise a spatial information subsystem that is configured to identify viewing directions of the user within the three-dimensional environment.
  • the visualization arrangement may be additionally configured to receive and process the point-fixing commands as each containing indications of at least two identified viewing directions.
  • the point identified by such a point-fixing command is a point at which the identified viewing directions intersect.
  • the spatial information subsystem may comprise for example spatial and directional sensors that tell at which location (with reference to a three-dimensional coordinate system) the head of the user (or a smartphone of the user, or any other device used by the user) currently is and in which direction the user (or a camera contained in the device) is currently looking.
  • the user may first go to a first location and look at a fixed point, and thereafter go to a second location and look at the same fixed point, and the visualization arrangement can calculate where in the current coordinate system that fixed point is.
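One possible way to perform that calculation is to take the midpoint of the shortest segment between the two viewing rays, which is where the rays (nearly) intersect. The following is a minimal sketch of that computation; the helper name, and the assumption that observer positions and direction vectors are directly available from the spatial information subsystem, are illustrative only:

    import numpy as np

    def fix_point_from_rays(p1, d1, p2, d2):
        # p1, p2: observer positions for the two sightings (3-vectors)
        # d1, d2: the corresponding viewing directions (3-vectors)
        # Returns the midpoint of the shortest segment between the two rays.
        p1, d1, p2, d2 = (np.asarray(v, dtype=float) for v in (p1, d1, p2, d2))
        n = np.cross(d1, d2)
        denom = np.dot(n, n)
        if denom < 1e-12:
            raise ValueError("viewing directions are parallel; no unique fix")
        t1 = np.dot(np.cross(p2 - p1, d2), n) / denom
        t2 = np.dot(np.cross(p2 - p1, d1), n) / denom
        return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

    # Two sightings of the same corner point from different locations:
    print(fix_point_from_rays([0.0, 0.0, 0.0], [1.0, 1.0, 0.0],
                              [4.0, 0.0, 0.0], [-1.0, 1.0, 0.0]))  # -> [2. 2. 0.]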
  • the visualization arrangement is configured to display to the user one or more virtual elements 1505, 1506, and 1507 located in a fixed spatial relationship with reference to the planar surfaces 1501, 1502, 1503, and 1504 that intersect at the points 1403 to 1408 and thus delimit the representative three-dimensional space in fig. 15.
  • the visualization arrangement has good knowledge of the coordinates of said planar surfaces.
  • the user might be interested to see how the virtual elements 1505, 1506, and 1507 would look in the first computer-generated image; in other words, how the room would look if it additionally contained objects represented by the virtual elements 1505, 1506, and 1507.
  • the visualization arrangement may be configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background. This way the virtual elements 1505, 1506, and 1507 that actually have a fixed spatial relationship only with reference to said planar surfaces of the second computer-generated image appear to be within said displayed first computer-generated image.
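Such partial transparency can be realized for example as a per-pixel alpha blend of the two images. The sketch below is an illustrative simplification, not a description of any actual rendering pipeline:

    import numpy as np

    def blend(first_image, second_image, alpha=0.3):
        # out = alpha * foreground + (1 - alpha) * background, per pixel.
        # first_image: the first computer-generated image (the background)
        # second_image: the partially transparent second image (the foreground)
        out = alpha * second_image.astype(float) + (1.0 - alpha) * first_image.astype(float)
        return out.astype(np.uint8)

    background = np.full((2, 2, 3), 200, dtype=np.uint8)  # bright camera pixels
    foreground = np.zeros((2, 2, 3), dtype=np.uint8)      # dark model surfaces
    print(blend(background, foreground))                  # 140s: background shows through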
  • NP41 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
  • - display said digital image to a user as a first computer-generated image of said three-dimensional environment,
    - respond to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system,
    - construct a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and
  • NP42 A visualization arrangement according to numbered paragraph NP41, configured to perform said determining of said coordinates so that the determined coordinates are located on a limited set of planar surfaces in said three-dimensional coordinate system.
  • NP43 A visualization arrangement according to numbered paragraph NP42, configured to perform said determining of said coordinates so that the determined coordinates are located on a set of mutually perpendicular planar surfaces in said three-dimensional coordinate system.
  • NP44 A visualization arrangement according to any of the numbered paragraphs NP41 to NP43, configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image.
  • NP45 A visualization arrangement configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
  • NP46 A visualization arrangement according to numbered paragraph NP45, wherein:
  • the visualization arrangement comprises a spatial information subsystem configured to identify viewing directions of said user within said three-dimensional environment
  • the visualization arrangement is configured to receive and process point-fixing commands as each containing indications of at least two identified viewing directions, so that the point identified by such a point-fixing command is a point at which the identified viewing directions intersect.
  • a visualization arrangement configured to display to the user one or more virtual elements located in a fixed spatial relationship with reference to said planar surfaces that intersect at one or more of said fixed points.
  • a visualization arrangement configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background, so that the virtual elements that have a fixed spatial relationship with reference to said planar surfaces appear to be within said displayed first computer-generated image.
  • a method for displaying a computer-generated image of a three-dimensional environment to a human user comprising: - using an image acquisition subsystem to obtain a digital image of said three-dimensional environment,
  • a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user.
  • the visualization arrangement is configured to display to the user a computer-generated image acquired in real time of a real-life three-dimensional environment.
  • the visualization arrangement is configured to display within said computer-generated image at least one virtual element as if located at a particular location within said real-life three-dimensional environment, and to store a digital representation of the displayed virtual element along with location information indicative of said particular location within said real-life three-dimensional environment.
  • the visualization arrangement is configured to, at a later moment of time, display to the user a new computer-generated image acquired in real time of the same real-life three-dimensional environment, and retrieve from storage said digital representation of the previously displayed virtual element and, using said location information, display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
  • the visualization arrangement is configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, and also configured to store said location information as spatial coordinates in the same coordinate system.
  • the visualization arrangement is configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment.
  • the visualization arrangement is configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
  • according to a sixteenth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user.
  • the method comprises:
  • the visualization arrangement is configured to display to the user the computer-generated image 1601 of fig. 16, which is an image acquired in real time of the real-life three-dimensional environment.
  • the visualization arrangement is also configured to display, within the computer-generated image 1601, at least one virtual element as if located at a particular location within the real-life three-dimensional environment.
  • the three virtual elements 1505, 1506, and 1507 previously discussed with reference to fig. 15 appear in the computer-generated image 1601. Note the scaling between images; the computer-generated image in fig.
  • the visualization arrangement is configured to store a digital representation of each of the displayed virtual elements 1505, 1506, and 1507 along with location information indicative of their particular locations within the real-life three-dimensional environment.
  • the digital representation here means all information that is needed to completely re-generate a similar virtual element if needed. Concerning visual appearance, size, and such features the digital representation may be explicit, and/or it may contain a reference to a database in which information about the features of virtual elements is stored.
  • the visualization arrangement would display to the user a new computer-generated image, acquired in real time, of the same real-life three-dimensional environment.
  • the visualization arrangement may be configured to retrieve from storage the digital representation of any or all previously displayed virtual element(s). Using the (stored) location information, the visualization arrangement may display the same virtual element 1505, 1506, and/or 1507 as if again located at the same particular location within the real-life three-dimensional environment.
  • the visualization arrangement may be configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, such as that used by GPS, GLONASS, Galileo, and/or BeiDou.
  • the visualization arrangement may then be also configured to store said location information as spatial coordinates in the same coordinate system. By comparing the stored coordinates it is then relatively easy to later place the virtual element similarly at the same place again.
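A minimal sketch of that idea follows; the function names and the scene object are illustrative assumptions, not features of the described arrangement:

    import json

    def save_element(path, element_id, position, orientation):
        # Store the element's identity together with its location information,
        # expressed in the same coordinate system as the mapped environment.
        with open(path, "w") as f:
            json.dump({"id": element_id, "pos": position, "rot": orientation}, f)

    def restore_element(path, scene):
        # Re-reading the stored coordinates and instantiating the element at
        # them places it at the same real-world spot again, provided that the
        # coordinate frames of the two sessions coincide.
        with open(path) as f:
            record = json.load(f)
        scene.place(record["id"], record["pos"], record["rot"])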
  • the visualization arrangement may be configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment. Placing the same (or a similar) virtual element at the same location again would then be based on graphical comparison between images: only if placed correctly will the virtual element look the same in the new computer-generated image as in the old one.
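The graphical comparison could be as simple as a per-pixel difference between the stored image and a rendering of the candidate placement, a low score indicating a correct placement. The sketch below is illustrative only:

    import numpy as np

    def placement_error(stored_view, candidate_view):
        # Mean absolute per-pixel difference; a small value means the virtual
        # element looks the same in the new image as in the stored one.
        a = stored_view.astype(float)
        b = candidate_view.astype(float)
        return float(np.mean(np.abs(a - b)))

    print(placement_error(np.zeros((2, 2)), np.ones((2, 2))))  # -> 1.0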
  • the visualization arrangement may be configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
  • An example of such a trackability function is that included in the known Google ARCore software.
  • NP51 Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
  • NP52 A visualization arrangement configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, and also configured to store said location information as spatial coordinates in the same coordinate system.
  • NP53 A visualization arrangement according to any of numbered paragraphs NP51 or NP52, configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment.
  • a visualization arrangement configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
  • a method for displaying a computer-generated image of a three-dimensional environment to a human user comprising:

Abstract

A visualization arrangement is used to display a computer-generated image of a three-dimensional environment. The visualization arrangement obtains an indication of a viewing direction into which a user is currently looking. It responds to said obtained indication by centering a displayed portion of said computer-generated image on said viewing direction. It responds to a menu-displaying command by displaying a menu as a number of displayed three-dimensional symbols. Said menu is a symbolic representation of options available to said user. The arrangement maintains said three-dimensional symbols that constitute said menu at the location at which it was displayed, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment. At the same time it continues to obtain new indications of said viewing direction and to respond to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.

Description

METHOD, ARRANGEMENT, AND COMPUTER PROGRAM PRODUCT FOR THREE-DIMENSIONAL VISUALIZATION OF AUGMENTED REALITY AND VIRTUAL REALITY ENVIRONMENTS
TECHNICAL FIELD
The invention is generally related to the technical field of visualizing augmented reality (AR) and/or virtual reality (VR) environments and objects to a human user. In particular the invention is related to advantageous features of visualizing items like menus and dimensions, and to developed features of the user interface that facilitate an intuitive user experience.
BACKGROUND
Virtual reality (VR) is a general term that is used to describe computer-generated streams of sensory information that enable a human user to at least see and hear, and sometimes to obtain additional sensory information like tactile, haptic, or other, in an artificially created three-dimensional environment. Augmented reality (AR) means the combination of VR-type elements with live camera and microphone feed so that the user experiences the actual environment as if it also contained additional, computer-generated elements.
Both VR and AR have been subject to very active research and development. A major challenge in both is the development of user interface features that the user would find easy to learn, and intuitive and convenient to use. Tasks for which such user interface features would be direly needed include, but are not limited to: scrolling through menus and making menu selections; placing and moving virtual elements in the VR or AR environment; understanding the actual dimensional relations between virtual and real-world objects; creating virtual elements of arbitrary size and form in real time; placing virtual elements in environments of which the modelling computer has insufficient information; and saving and restoring previously placed objects in the VR or AR environment.
SUMMARY
It is an objective of the present invention to present methods, arrangements, systems, and computer program products for providing user interface features in AR and/or VR systems that are easy to learn, intuitive and convenient to use, and widely applicable in a large variety of different kinds of AR- and/or VR-based systems. Another objective is that the user interface features should be applicable in AR- and/or VR-based systems that utilize different kinds of display technologies. A further objective is that the user interface features can be implemented with only relatively little intrusiveness in the visual field of the user of an AR- and/or VR-based system. Yet another objective is that the user interface features allow streamlining routines and using efficiently the time when working in an AR and/or VR environment.
These and further advantageous objectives are achieved with the features recited in the independent claims. Preferred embodiments are described in the dependent claims.
According to a first aspect there is provided a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user. The visualization arrangement is configured to repeatedly obtain an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and respond to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction. The visualization arrangement is configured to respond to a menu-displaying command received through user controls of said visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user. The visualization arrangement is configured to maintain said three-dimensional symbols that constitute said menu at the location at which it was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.
According to an embodiment the visualization arrangement is configured to display, in said menu, only a subset of a larger number of interrelated options available to said user through said menu, and respond to a menu-scrolling command received through said user controls by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options and each time displaying only those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded.
According to an embodiment the visualization arrangement is configured to display those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded in the form of slices of a displayed three-dimensional part of a pie.
According to an embodiment the visualization arrangement is configured to respond to said menu-scrolling command by rotating said displayed slices in said computer-generated image around a center point of said displayed three-dimensional part of the pie, so that in each case that slice that was furthest in the direction of rotation before said rotating is faded out of view and a new slice, representing another one of said interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
According to an embodiment the visualization arrangement is configured to display, in said menu, a number of three-dimensional symbols corresponding to options available to said user through said menu in a rotationally symmetric planar array, the planar form of which is oriented parallel to a plane displayed at or close to the location of said menu in said computer-generated image, and also configured to respond to a menu-scrolling command received through said user controls by rotating said rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to front in said computer-generated image.
According to a second aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user. The method comprises repeatedly obtaining an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and responding to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction. The method comprises responding to a menu-displaying command received through user controls of said visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user, and maintaining said three-dimensional symbols that constitute said menu at the location at which it was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.

According to an embodiment the method comprises displaying, in said menu, only a subset of a larger number of interrelated options available to said user through said menu, and responding to a menu-scrolling command received through said user controls by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options and each time displaying only those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded.
According to an embodiment the method comprises displaying those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded in the form of slices of a displayed three-dimensional part of a pie.
According to an embodiment the method comprises responding to said menu-scrolling command by rotating said displayed slices in said computer-generated image around a center point of said displayed three-dimensional part of the pie, so that in each case that slice that was furthest in the direction of rotation before said rotating is faded out of view and a new slice, representing another one of said interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
According to an embodiment the method comprises displaying, in said menu, a number of three-dimensional symbols corresponding to options available to said user through said menu in a rotationally symmetric planar array, the planar form of which is oriented parallel to a plane displayed at or close to the location of said menu in said computer-generated image, and responding to a menu-scrolling command received through said user controls by rotating said rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to front in said computer-generated image.
According to a third aspect there is provided a computer program product, comprising one or more sets of one or more machine-readable instructions that, when executed by one or more processors, cause the implementation of any of the methods described in this text.
According to a fourth aspect there is provided a computer-readable storage medium, comprising, stored thereupon, a computer program product of the kind described above.
Further advantageous embodiments are introduced in the dependent claims and in the passages of description that explain concepts similar to those in said dependent claims.
BRIEF DESCRIPTION OF DRAWINGS
Fig. 1 illustrates a schematic diagram of a visualization arrangement,
fig. 2 illustrates certain concepts of displaying computer-generated images of three-dimensional environments,
fig. 3 illustrates an embodiment of displaying a menu,
fig. 4 illustrates a concept according to which the menu of fig. 3 may work,
fig. 5 illustrates an embodiment of displaying a menu,
fig. 6 illustrates an embodiment of indicating a direction and distance,
fig. 7 illustrates a concept according to which the embodiment of fig. 6 may work,
fig. 8 illustrates a concept according to which the embodiment of fig. 6 may work,
fig. 9 illustrates an embodiment of moving a virtual element,
fig. 10 illustrates an embodiment of indicating distances in real time,
fig. 11 illustrates concepts according to which the embodiment of fig. 10 may work,
fig. 12 illustrates an early phase of an embodiment of creating virtual elements,
fig. 13 illustrates a later phase of the embodiment of fig. 12,
fig. 14 illustrates an embodiment of working with point-fixing commands,
fig. 15 illustrates a later phase of the embodiment of fig. 14, and
fig. 16 illustrates an embodiment of working with stored and restored virtual elements.
DETAILED DESCRIPTION
Fig. 1 illustrates a schematic block diagram of a system in which embodiments of the invention may be implemented. Common general features of VR systems are a display arrangement 101, a processing engine 102, and user controls 103. An AR system may require additionally an image acquisition subsystem 104, although it is also possible to build a kind of an AR system according to the head-up display principle so that the "image" of the real world is what the user actually sees with his or her own eyes, and only the virtual elements augmenting it are added for example by projecting their images onto the surface of a transparent layer through which the user is looking. The system may also comprise external interfaces 105.
A wide variety of display arrangements 101 are known, ranging from mere standalone display screens to multiple kinds of VR headsets or head-mounted displays that can show a tailored projection of the image to each eye or even project the image directly onto the retina of the eye. Display arrangements working according to the head-up display principle mentioned above are yet another alternative. The present invention does not place requirements on what kind of display technology is used, although some display technologies may have certain specific implications on how embodiments of the invention may be used. These are described in more detail later in this text.
User controls 103 also come in widely different forms, as this general definition covers all possible ways in which the user may generate direct or indirect inputs to the system. The most conventional forms of user controls include keyboard- and mouse-type devices and touch-sensitive screens. The user controls may comprise means for voice and gesture control, as well as detectors of movement and direction. These can reside in a common device with the other functionalities, like a smartphone for example, but additionally or alternatively there may be external user control devices like tactile gloves or sensors built into the structures of the surrounding environment. Some embodiments of the invention described here may involve preferences for certain kinds of user interface devices, but in general the invention is not restricted to any kind of user control technology.
The image acquisition subsystem 104, if present, may include one or more digital cameras capable of producing video streams and/or sequences of consecutive still images. The simultaneous use of two or more cameras with at least partly overlapping fields of view involves the advantage of stereo imaging, and even more versatile imaging is possible with panoramic and/or spherical imaging. The image acquisition subsystem 104 also covers other ways in which the system can automatically obtain information about the surrounding space, like distance sensors, object detectors, and inertial navigation systems.
The external interfaces block 105, if present, may comprise interfaces for either or both of short-distance and long-distance communications. Interfaces for short-distance communications, if present, may comprise for example wired interfaces to nearby auxiliary devices; NFC (near field communications) interfaces for wireless communications with other devices in the immediate vicinity; or Bluetooth, ZigBee, WiFi, infrared, ultrasound, or other wireless interfaces for wireless communications within meters or at least tens of meters distances. Interfaces for long-distance communications, if present, may comprise for example 3G, 4G, or 5G mobile communications interfaces; satellite communications interfaces; or any kind of Internet interfaces for communications with devices that may be anywhere.
The processing engine 102 is a computer or a networked system of two or more computers that performs all the data processing that is needed to acquire information about the environment, compose images to be displayed to the user, receive and analyze user inputs, and otherwise maintain the virtual three-dimensional environment, projections of which the user needs to see. The processing engine 102 comprises one or more processors that execute one or more sets of one or more machine-readable instructions that, when executed, cause the implementation of what is described as the method embodiments of the invention. The processing engine 102 also comprises the required memory means, like a program memory for storing said machine-readable instructions, a data memory for storing the data that defines the virtual three-dimensional environment and its contents, firmware storage for storing the firmware defining the low-level automatic operations of the system, and so on.
The interfaces between the various parts shown in fig. 1 may be internal interfaces within an integrated device, or at least some of them may involve interfacing between devices over short or long distances. In the case of internal interfaces the system may be a standalone VR or AR device, and it may be portable or wearable. The case of interfaces between devices relates to the possibility of dividing the functionalities of a VR or AR system so that resources are taken into use from where they are most practically and advantageously available.
Since a central task of a system of the kind shown in fig. 1 is to provide the user with visual information about either a virtual three-dimensional environment or an actual three-dimensional environment augmented with virtual elements, the system may be described as a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a user. In this respect the term "image" may be understood to cover a still image, a sequence of still images, a video stream, or a frame of a video stream, which frame the arrangement frequently and regularly updates so that as a result the video stream is generated. In the case of VR the whole image may consist of virtual elements, while in the case of AR the image may comprise images of actual elements in a real-life three-dimensional environment augmented with virtual elements or - in the case of head-up displays or the like - virtual elements that in the eyes of the user augment the real-life elements that the user sees through an at least partially transparent or translucent display apparatus.
Since the processing engine 102 can be made to perform a wide variety of tasks by programming it appropriately, saying that the visualization arrangement is configured to do something is essentially synonymous with saying that the tasks to be performed and the steps to be executed are written in the form of one or more computer programs that are compiled into machine-readable instructions and stored in a program memory that is available to the processing engine 102.
According to the principles of VR and AR, the visualization arrangement is configured to repeatedly obtain an indication of a viewing direction into which the user is currently looking in the three-dimensional environment. The user controls 103 are used for such obtaining. The arrangement is additionally configured to respond to such obtained indications of the viewing direction by centering a displayed portion of the computer-generated image on the indicated viewing direction. The arrangement may obtain the indications and perform the centering of the image frequently enough and rapidly enough so that the user experiences it as if actually looking around in a real-life three-dimensional environment augmented with the displayed virtual elements. This is not an obligatory feature however; for example if there is not enough processing power available for updating the image in real time, the arrangement may perform the centering of the image with a certain delay.
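For a simple equirectangular rendering, for example, such centering amounts to selecting a pixel window of the full image around the current viewing direction. The sketch below is an illustrative simplification and does not describe any particular display technology:

    def displayed_window(yaw_deg, fov_deg, image_width_px):
        # Horizontal pixel range of the full computer-generated image that is
        # currently displayed, centered on the indicated viewing direction.
        center_px = (yaw_deg % 360.0) / 360.0 * image_width_px
        half_px = (fov_deg / 360.0) * image_width_px / 2.0
        return int(center_px - half_px), int(center_px + half_px)

    print(displayed_window(yaw_deg=90.0, fov_deg=100.0, image_width_px=3600))  # (400, 1400)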
Fig. 2 illustrates some of the concepts mentioned above. The rounded rectangle 201 delimits the displayed portion of a computer-generated image of a three-dimensional environment. In this case the three-dimensional environment is a room. It may be an actual room, in which case what the user sees of the room is either a computer-generated reproduction of what an image acquisition system recorded, or else the real-world view of the actual room that the user sees through the transparent layer of a HUD-type display. The elements (here: a table 202 and two chairs 203 and 204) comprised in the computer-generated image may be real-life elements or virtual elements. Here we may assume, as an example, that the table 202 is a real-life element that actually exists in the room, and the two chairs 203 and 204 are virtual elements that do not exist in real life but appear only in the computer-generated image.
The user has a viewing direction, and the displayed portion 201 of the computer-generated image is centered on the viewing direction. In the example of fig. 2 the viewing direction focuses on point 205, and the centering is illustrated with the dash-dot lines. Centering is to be understood as placing into the field of view that the user can conveniently perceive with his or her sense of sight, so it does not need to mean exact centering in the mathematical sense.
In practice the visualization arrangement must often maintain in its memory a larger image than what is currently in the displayed portion, because the user may turn his or her viewing direction at any moment, and the computer must then be ready to produce a new displayed portion of the image, centered on the new viewing direction. In fig. 2 this is illustrated so that the corners between the floor and the walls, the corners between the ceiling and the walls, and a portion of the second chair 204 are shown with dashed lines: the user does not currently see this portion of the second chair 204, but it exists in the computer-generated image that the visualization arrangement has in its memory.
Users of computers and computer-operated systems are used to working with menus. A menu is a list from which the user may select an operation to be performed. The options displayed in a menu are typically interrelated in some way, for example so that these are the options that are available at the particular stage of doing something, and/or so that all these options represent the same sub-class of alternatives like changing the visual characteristics of a displayed virtual element. In the simplest form the options displayed in the same menu are interrelated only by the fact that they all appear in this menu.
In two-dimensional graphical user interfaces it has become commonplace to arrange menus behind headers, keywords, or symbols, typically in a row at or close to the upper edge of a displayed window, so that said row of headers, keywords, or symbols actually constitutes the highest-level menu. By clicking with the mouse on one of them the user may open one or more lower-level menus and select the desired alternative from there. A major advantage of menus is that the user does not need to remember all available options by heart and call them with keyed-in commands, but he or she may just make the available options visible and select from there.
In a VR or AR visualization arrangement a balance should be found between easy availability of options and non-intrusiveness. The last-mentioned means that the user experience should not be disturbed with something that the user does not need at the moment or something that otherwise makes it difficult to work within the displayed environment. A simple example of non-intrusiveness is that the field of view that the user has into the displayed portion of the computer-generated image of the three-dimensional environment should not be unnecessarily clogged with displayed elements that do not actually belong to said three-dimensional environment. We may consider a user trying to select the best-fitting luminaire for a room. While looking at a displayed image of the room the user may open a menu that shows small images of the available luminaires. If the menu covers a large portion of the displayed image, the user may find it awkward to make the selection: it would be much easier to see how a particular luminaire fits together with the rest of the room if the field of view were not covered to such a large extent by the menu.
Consequently it is an objective to provide methods, arrangements, systems, and computer program products for providing the user with lists from which the user may select an operation to be performed, in a way that balances easy availability of options with non-intrusiveness. Another objective is that this approach is easily applicable to a very wide range of different kinds of selection situations.
These and other advantageous objectives are achieved by presenting the menu as a three-dimensional entity, containing a collection of virtual three-dimensional objects, that the user may place in the computer-generated image like any other virtual three-dimensional object.
The visualization arrangement may be configured to respond to a menu-displaying command received from the user by displaying a menu as a number of displayed three-dimensional symbols within the displayed portion of the computer-generated image. Here a menu is a symbolic representation of a number of interrelated options available to the user. It could also be described as a list from which the user may select an operation to be performed, but it should be emphasized that the graphical way in which a menu is here presented to the user would probably not be associated with the word "list" in the first place.
The visualization arrangement is also configured to maintain said three-dimensional symbols that constitute said menu at the location at which it was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment. All this is done while continuing to obtain new indications of the viewing direction and responding to such obtained indications by centering the displayed portion of the computer-generated image on the respective viewing direction.
Maintaining the three-dimensional symbols that constitute the menu at their original location within the computer-generated image involves a number of advantages. One of them is non-intrusiveness. Since the menu appears just like any other virtual element in the three-dimensional environment, the user can "move around it" or "step aside"; in other words, change the way in which he or she is looking at the computer-generated image, so that the menu either is or is not in sight. Thus the displayed menu takes only as large a portion of the field of view as the user wants. In a way the menu becomes part of the three-dimensional environment, instead of being part of a user interface through which the user looks at the three-dimensional environment. The user may interact with the three-dimensional symbols of the menu in a very similar way in which he or she interacts with other virtual elements in the three-dimensional environment. This makes it very intuitive for the user to use a menu of the kind described here.
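One way to see why this works: the menu symbols are stored in world coordinates and rendered through the same world-to-camera transform as every other virtual element, so only the camera changes when the user looks around. A minimal sketch with a yaw-only camera and illustrative names:

    import numpy as np

    def world_to_view(point_world, camera_pos, yaw):
        # Menu symbols pass through the same world-to-camera transform as any
        # other virtual object, which is what keeps them fixed in the
        # environment while the viewing direction changes.
        c, s = np.cos(yaw), np.sin(yaw)
        rotation = np.array([[c, 0.0, -s],
                             [0.0, 1.0, 0.0],
                             [s, 0.0, c]])
        return rotation @ (np.asarray(point_world, dtype=float)
                           - np.asarray(camera_pos, dtype=float))

    menu_symbol = [1.0, 0.8, 2.0]        # placed once, in world coordinates
    for yaw in (0.0, 0.3, 0.6):          # the user turns; the symbol stays put
        print(world_to_view(menu_symbol, camera_pos=[0.0, 1.6, 0.0], yaw=yaw))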
Another advantage is the possibility of context-specificity. The user may have given the menu-displaying command while he or she was at a particular point within the three-dimensional environment, for the reason that some very specific task needed to be done at just that point. As an example, the user might have considered adding a virtual element representing a flower on a table. If the menu was displayed on or close to the table, the user may want to leave it there while he or she is doing something else at some other location within the three-dimensional environment. The displayed menu could be waiting for the user on the table, so that when he or she comes back to the table next time, the selection of the flower may continue from where it was.
How long the displayed menu will remain visible and accessible may be decided according to need. One possibility is that a menu, once displayed, remains visible and accessible as long as the user does not interact with any other objects displayed in the three-dimensional environment. The user could look somewhere else within the three-dimensional environment, and the menu would be there waiting at its original location for the user to look back into its direction and to interact with the menu. However, if the user moves to another location and/or interacts with any other object in the three-dimensional environment, the menu would disappear. Such an embodiment involves the advantage that the user does not experience the three-dimensional environment as cluttered by unnecessarily displayed menu items.
Another possibility is that the menu is displayed until the user gives an explicit command to close the menu. Such an embodiment involves the advantage that the user may decide exactly how many menus should be displayed and made accessible at any given time.
Yet another possibility, which can be combined with any of the other possibilities described above, is that the menu will automatically disappear if a time longer than a predetermined threshold has passed since the user has last interacted with the menu. Such an embodiment involves the advantage that the user does not need to separately remember to close menus that are not used any more.
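Such a timeout could be implemented roughly as follows; the class name and the threshold value are illustrative only:

    import time

    class MenuVisibility:
        # The menu hides automatically once more than threshold_s seconds
        # have passed since the user last interacted with it.
        def __init__(self, threshold_s=30.0):
            self.threshold_s = threshold_s
            self.last_interaction = time.monotonic()

        def interact(self):
            self.last_interaction = time.monotonic()

        def should_hide(self):
            return time.monotonic() - self.last_interaction > self.threshold_s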
There are several possible approaches to the task of displaying a menu as a number of displayed three-dimensional symbols within the displayed portion of the computer-generated image. One such approach is shown schematically in figs. 3 and 4. A basic principle of this approach is that the visualization arrangement is configured to display in the menu only a subset of a larger number of interrelated options that are available to the user through this particular menu.
In fig. 3 the menu 301 has the geometric appearance of a part of a pie, i.e. a part of a relatively flat cylinder so that if the bottom plane of the cylinder defined the horizontal plane, the part is cut with one or more vertical planes. In particular in fig. 3 the menu 301 has the geometric appearance of a half of a pie, i.e. a part of a relatively flat cylinder so that if the bottom plane of the cylinder defined the horizontal plane, the part is cut with a single vertical plane that includes the central axis of symmetry of the cylinder.
A prism 302, a cylinder 303, and a pin 304 are shown as simplified examples of three-dimensional symbols that constitute the menu in fig. 3. What the actual options are that are represented by these symbols is irrelevant to the present description. In order to provide intuitiveness to the user experience it is recommendable that the three-dimensional symbols are such that the user can easily associate them with the options that they represent. As an example, if the menu 301 is displayed in order to offer the user the possibility of adding more pieces of furniture as virtual elements into the computer-generated image, it is recommendable that the three-dimensional symbols are miniature-sized versions of the actual pieces of furniture that they represent. It is also possible to augment the three-dimensional symbols with displayed words or other character strings that further clarify their meaning.
In the embodiment of fig. 3 the three-dimensional symbols 302, 303, and 304 are displayed in the form of three slices of the displayed three-dimensional part of a pie. In many cases a menu should contain (much) more options than three. Fig. 4 illustrates how such a larger number of interrelated options can be made available to the user, using the basic graphical approach shown in fig. 3.
The larger number of interrelated options may be thought of as populating a long list, as in the schematic representation on the left in fig. 4. Only a limited number (here: three) of the options are visible to the user at a time, as is illustrated with the solid lines around the middle three listed items in fig. 4. These are the three options, the three-dimensional symbols of which are visible in the slices of the displayed three-dimensional part of the pie. The currently invisible upper and lower ends of the list in the left part of fig. 4 may be thought of as corresponding to infinite extensions of a three-dimensional bar that is bent by 180 degrees in the middle, as in the figurative illustration on the right in fig. 4.
The visualization arrangement may be configured to respond to a "menu scrolling command" received from the user by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options, and each time displaying only those three-dimensional symbols in said menu 301 that represent those of said larger number of interrelated options to which said scrolling had proceeded. The effect of scrolling is shown with two-ended arrows in each part of figs. 3 and 4. In the computer-generated image the displayed slices may be rotated (in response to the scrolling command) around a center point of the displayed three-dimensional part of the pie. In each case that slice that was furthest in the direction of rotation before said rotating is faded out of view. A new slice, representing another one of the interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
So, for example if the user saw the displayed portion of the computer-generated image as in fig. 3 and gave a menu scrolling command to the left, the leftmost slice (the one with the prism 302) would be faded out of view and a new slice would be brought into view on the right side. If the contents of the menu 301 were ordered as in fig. 4, the newly displayed slice would contain the hourglass-formed three-dimensional symbol. This way the user is given the possibility of scrolling through a practically unlimited number of interrelated options while displaying only a limited subset of them at a time.
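In code, such scrolling can be realized as a sliding window over the full list of options. The following sketch, with illustrative class and option names, reproduces the behaviour described above for figs. 3 and 4:

    class PieMenu:
        # Shows a fixed-size window of slices into a longer list of
        # interrelated options; scrolling moves the window by one option.
        def __init__(self, options, visible=3):
            self.options = list(options)
            self.first = 0          # index of the leftmost visible slice
            self.visible = visible

        def shown(self):
            return [self.options[(self.first + i) % len(self.options)]
                    for i in range(self.visible)]

        def scroll_left(self):
            # The leftmost slice fades out; a new one comes into view on the right.
            self.first = (self.first + 1) % len(self.options)

    menu = PieMenu(["prism", "cylinder", "pin", "hourglass", "cone"])
    print(menu.shown())    # ['prism', 'cylinder', 'pin']
    menu.scroll_left()
    print(menu.shown())    # ['cylinder', 'pin', 'hourglass']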
How the user gives the menu-scrolling command is of little importance. It will be determined by the number and nature of user controls comprised in the visualization arrangement. Examples include but are not limited to voice commands, swiping on a touch-sensitive display, clicking on arrows or other symbols, pressing some keys, winking the left or the right eye, making a gesture in the air with a finger or a hand, or moving some other part of the body.
How the user gives selection commands that indicate his or her purpose to select an item in the menu has some significance to the intuitiveness and ease of use of the menu. Since the visible part of the menu consists of three-dimensional symbols that the user can see in the displayed portion of the computer-generated image, one way of giving selection commands may involve voice commands with which the user designates that symbol that he or she wants to select: for example, the user may say "left", "middle", or "right". If the symbols have names that are also displayed, the user may read aloud the name of the symbol that is to be selected. If a touch screen, mouse, or similar user control is used in which the user can point and click, the sensitive field on which the click must hit may be the symbol itself, or the whole slice of the pie in the form in which the menu is displayed.
Fig. 5 illustrates a slightly different alternative approach. With the embodiments described above it shares the principle of displaying a menu 501 as a number of displayed three-dimensional symbols within the displayed portion of the computer-generated image. Also here the menu 501 is a symbolic representation of a number of interrelated options available to said user. Additionally also here the visualization arrangement is configured to maintain the three-dimensional symbols that constitute the menu 501 at the location at which it was displayed within the computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of the viewing direction and responding to such obtained indications by centering the displayed portion of said computer-generated image on the respective viewing direction.
As a difference to the embodiment of figs. 3 and 4, in the embodiment of fig. 5 the visualization arrangement is configured to display the whole menu at once. In the menu 501 there are a number (here: six) of the three-dimensional symbols corresponding to options available to the user through the menu. They appear in a rotationally symmetric planar array. The array being "planar" means that the three-dimensional symbols appear as if they were placed on a planar, rotating, round tray 502. The planar form of the array is oriented parallel to a plane that is displayed at or close to the location of the menu 501 in the computer-generated image. In fig. 5 this plane is the floor of the room.
In the embodiment of fig. 5 the visualization arrangement is configured to respond to a menu scrolling command received from the user by rotating the rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to front in said computer-generated image. The symbol in the front is the easiest to select, so rotating the rotationally symmetric planar array of three-dimensional symbols is essentially synonymous with looking for the option that the user would prefer to select. When the user has made a decision, he or she may give a selection command to which the visualization arrangement may respond by performing the operation that was represented by the three-dimensional symbol in the front.
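Scrolling such a carousel can be realized by advancing a common rotation angle, the symbol positions following from the rotational symmetry of the array. The sketch below is illustrative only:

    import numpy as np

    def carousel_positions(n_symbols, radius, rotation):
        # Symbol positions on a flat, round tray; increasing `rotation`
        # brings each symbol in turn to the front (angle zero).
        angles = rotation + np.arange(n_symbols) * 2.0 * np.pi / n_symbols
        x = radius * np.sin(angles)      # sideways on the tray
        y = np.zeros(n_symbols)          # the array is planar
        z = -radius * np.cos(angles)     # the front of the tray is at angle zero
        return np.stack([x, y, z], axis=1)

    print(carousel_positions(n_symbols=6, radius=0.5, rotation=0.0))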
In the embodiment of fig. 5 one of the symbols in the menu is a go-back arrow 503. It is shown here as a reminder that a menu-driven user interface should include also the necessary navigation means with which the user may move back and forth between the displayed menus that may have a number of levels. The navigation means should also include means with which the user may terminate the current session of using menus, i.e. closing all menus that are open for the moment. Navigation means need not involve any kind of displayed symbols, if the navigation can take place through simple commands of other kind, such as voice commands or gestures.
SMART ARROW
A major task of designing three-dimensional environments using VR or AR is the placing of virtual elements. As an example we may consider a decorator tasked with furnishing a three-dimensional environment like a room using AR. The decorator could use a visualization arrangement that acquired sufficient information of the room to display a computer-generated image of it, and then start selecting and placing pieces of furniture as virtual elements within the computer-generated image. Known user interfaces of AR visualization arrangements are notoriously slow and clumsy if the user wants to place a number of similar virtual elements in a row or array, or to move an existing virtual element by some desired distance in a desired direction. For example if the decorator wanted to try placing a number of chairs in a row along a wall, he or she would typically need to go to the desired location of the first chair, open a menu structure, navigate through the menu structure to select the virtual element representing the desired chair, give a command that made the visualization arrangement place the virtual element, then move to the next desired location, repeat all these steps, and keep doing this over and over again until the whole row of virtual chairs was in place.
According to a fifth aspect there is provided a visualization arrangement that is configured to display, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment. Such a visualization arrangement is configured to respond to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element. Such a visualization arrangement is configured to respond to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction. Such a visualization arrangement is configured to respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.
According to an embodiment the visualization arrangement is configured to, as a response to said second command, place and display said second virtual element as a copy of said first virtual element in said computer-generated image with the same orientation as the first virtual element.
According to an embodiment, depending on the content of said second command the visualization arrangement is configured to either leave said first virtual element as it was in the computer-generated image, thus causing the first virtual element to be duplicated by the second virtual element, or delete said first virtual element from the computer-generated image, thus causing the first virtual element to be replaced by the second virtual element.
According to an embodiment the visualization arrangement is configured to, as a response to the direction indicated by said graphical indicator being parallel with a first existing linear element in the computer-generated image, display a graphical highlighter of said first existing linear element.
According to an embodiment the visualization arrangement is configured to respond to an alignment-guide-selecting command received through said user controls - said alignment-guide-selecting command being one that identifies a second existing linear element in said computer-generated image - by changing the direction from said first virtual element that said graphical indicator indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.

According to an embodiment the visualization arrangement is configured to, as a response to the distance indicated by said graphical indicator being equal to a first dimension of a first existing element in the computer-generated image, display a graphical highlighter of said first existing element.
According to an embodiment the visualization arrangement is configured to respond to a distance-guide-selecting command received through said user controls - said distance-guide-selecting command being one that identifies a second existing element in said computer-generated image that has a second dimension - by changing the distance from said first virtual element that said graphical indicator indicates so that it coincides with said second dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
According to an embodiment the visualization arrangement is configured to respond to a third command, if received through said user controls before said second command, by displaying within said computer-generated image a list of operations available for performing, one of said operations being an operation of placing the second virtual element, and perform said placing and displaying of the second virtual element only if said second command indicates a selection by the user of said operation of placing the second virtual element.
According to a sixth aspect there is provided a method that comprises displaying, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment. Such a method comprises responding to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element. Such a method comprises responding to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction. Such a method comprises responding to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and responding to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.
Fig. 6 illustrates an example of a computer-generated image displayed by a visualization arrangement to a human user. The computer-generated image represents a three-dimensional environment. As is common to VR and AR systems, the visualization arrangement may be configured to repeatedly obtain an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and respond to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction.
In fig. 6 we may assume, for the sake of example, that the real-life three-dimensional environment was a room with just a table 601, and what the user sees as other pieces of furniture in the computer-generated image are virtual elements. In other words the visualization arrangement is configured to display, at a first location within said computer-generated image, a computer-generated first virtual element 602 as if said first virtual element was an object (here: chair) located at the corresponding location within said three-dimensional environment.
We may further assume that the user wants to place another virtual element, representing another chair of the same kind, at a location that is displaced from the location of the first virtual element 602 by a certain distance in a certain direction. The visualization arrangement comprises user controls of some kind; examples of these have been considered earlier in this text. The visualization arrangement is configured to respond to a first command received through the user controls by displaying within said computer-generated image a graphical indicator 603. The graphical indicator 603 indicates a direction from the first virtual element 602 and a distance from the first virtual element 602.
Exemplary parts of the graphical indicator 603 are a direction arrow 604 and a distance pane 605. Other exemplary parts are an origin indicator 606 and a target location indicator 607. Of these, the direction arrow 604 indicates the direction from the first virtual element 602, and the value (here: 0.95 m) in the distance pane 605 indicates a distance from the first virtual element 602. In this example the direction and distance are measured along a basic horizontal surface (here: the floor of the three-dimensional environment) from the projection of the geometrical center point of the first virtual element 602 on that surface, but other possibilities exist, like measuring from that edge of the first virtual element 602 that is closest in the indicated direction. Of the exemplary parts of the graphical indicator 603 mentioned above, the origin indicator 606 may indicate either or both of the point from which the indicated direction and distance are measured and the first virtual element 602 that constitutes an origin of the measurement. The target location indicator 607 may highlight and indicate the location within the three-dimensional environment (or actually: within the computer-generated image of the three-dimensional environment) where the measurement currently ends.
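The geometry described above can be made concrete with a short sketch. The following Python fragment is only an illustration, not part of the disclosed arrangement: it assumes that the indicator state is held as a heading angle and a distance measured in the floor plane from the projection of the element's center, and all names and numeric values in it are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Indicator:
    # Origin: projection of the first element's geometrical center on the floor.
    origin_x: float
    origin_z: float
    heading: float    # direction of the arrow 604, radians in the floor plane
    distance: float   # value shown in the distance pane 605, meters

    def target(self):
        # Location where the measurement currently ends (indicator 607).
        return (self.origin_x + self.distance * math.cos(self.heading),
                self.origin_z + self.distance * math.sin(self.heading))

# Example: a target 0.95 m from the origin, as in fig. 6.
indicator = Indicator(origin_x=1.0, origin_z=2.0,
                      heading=math.radians(30.0), distance=0.95)
print(indicator.target())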
The visualization arrangement may be configured to display the graphical indicator 603 automatically, immediately after the user has placed the first virtual element 602, or as a response to a command received through the user controls. The first-mentioned alternative involves the advantage that the user can very conveniently begin duplicating and/or moving any virtual element in the computer-generated image as soon as such a virtual element has been placed. The second alternative involves the advantage that the user has more freedom to decide for him- or herself in which ways he or she wants to work within the computer-generated image. In the first-mentioned alternative the "first command" referred to above is then the command to place the first virtual element 602, while in the second alternative the "first command" is an explicit command given by the user, indicating that he or she wants the graphical indicator 603 to be displayed.
The user should have great freedom to decide easily in which direction and at which distance from the first virtual element 602 the second virtual element should be placed. Therefore the visualization arrangement may be configured to respond to a direction-changing command received through said user controls by changing the direction from said first virtual element 602 that said graphical indicator 603 indicates and displaying the graphical indicator as indicating the changed direction. If the user controls comprise a touch screen, the direction-changing command may be for example a swipe made by the user on the surface of the touch screen in the direction to which he or she wants to change the indicated direction. Other possibilities include but are not limited to voice commands and keyed-in commands.
The visualization arrangement may also be configured to respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that the graphical indicator 603 indicates and displaying the graphical indicator as indicating the changed distance. Similar considerations apply to the distance-changing command as to the direction-changing commands above. The aim is that the user can eventually place a second virtual element at the target location. Therefore the visualization arrangement is configured to respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element 602 that were indicated by the graphical indicator 603.
In fig. 7 it is assumed that the user wanted to place a second virtual element 701 that represents another chair at the target location. Thus in this case the visualization arrangement is configured to, as a response to said second command, place and display said second virtual element 701 as a copy of the first virtual element 602 in said computer-generated image. In particular the second virtual element 701 is automatically placed with the same orientation as the first virtual element 602.
An operation of this kind may involve either duplicating or moving the first virtual element 602. The user may have at least two kinds of second commands at his or her disposal. Depending on the content of the second command the visualization arrangement is then configured to either leave the first virtual element 602 as it was in the computer-generated image, or delete the first virtual element 602 from the computer-generated image. The first alternative means causing the first virtual element 602 to be duplicated by the second virtual element 701, while the second alternative means causing the first virtual element 602 to be replaced by the second virtual element 701.
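As an illustration of the duplicate-or-move behavior, the following Python sketch shows one possible bookkeeping, under the assumption that the scene is a plain dictionary from element identifiers to attribute dictionaries; this stand-in data structure and all names in it are hypothetical, not taken from the disclosure.

import copy

def place_second_element(scene, source_id, target_x, target_z, move=False):
    # Place the second element at the indicated target location as a copy
    # of the first, keeping the orientation (yaw) untouched, as in fig. 7.
    second = copy.deepcopy(scene[source_id])
    second["x"], second["z"] = target_x, target_z
    new_id = max(scene) + 1
    scene[new_id] = second
    if move:
        # "Move" semantics: the first element is deleted, so the second
        # element replaces it instead of duplicating it.
        del scene[source_id]
    return new_id

scene = {602: {"model": "chair", "x": 1.00, "z": 2.00, "yaw": 0.0}}
place_second_element(scene, 602, 1.82, 2.48)   # about 0.95 m away, duplicated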
In many cases it would be helpful to the user if the act of duplicating or moving a virtual element could involve aligning the desired direction with the direction of something else that already exists within the computer-generated image. The visualization arrangement can be configured to assist the user with the possibility of automatic aligning of this kind. As a first possibility, the visualization arrangement can be configured to automatically examine the current direction indicated by the graphical indicator 603 and compare it to the directions of other linear elements in the computer-generated image. As a response to the direction indicated by the graphical indicator 603 being parallel with a first existing linear element in the computer-generated image, the visualization arrangement may display a graphical highlighter of said first existing linear element. The designation "first" existing linear element is used here only for the ease of unambiguous reference.

The first possibility explained above is thus an aid that helps the user to notice when the currently indicated direction happens to be parallel to the direction of an existing element in the computer-generated image. A second possibility is that the user may intentionally cause the graphical indicator 603 to assume a direction that is parallel to the direction of an existing element in the computer-generated image. Such an existing element may be called a second existing linear element, and it may be the same as, or some other than, the first existing linear element referred to above. The user may for example click or tap on a displayed linear element to give an alignment-guide-selecting command, i.e. to indicate the desired second existing linear element. The visualization arrangement may be configured to respond to such an alignment-guide-selecting command by changing the direction from said first virtual element that said graphical indicator 603 indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction. A sketch of how the parallelism check of the first possibility could be implemented is given below.
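The following Python sketch shows one possible implementation of such a parallelism check, purely as an illustration: it assumes that directions are represented as angles in the floor plane and that a small snap tolerance is used; the element names and the two-degree tolerance are assumptions rather than anything specified in this text.

import math

def find_parallel_guides(heading, linear_elements, tol_deg=2.0):
    # Return the linear elements whose direction is parallel (or
    # antiparallel) to the indicated heading, within the tolerance.
    hits = []
    for name, angle in linear_elements:
        diff = (heading - angle) % math.pi     # fold out the 180 degree ambiguity
        if min(diff, math.pi - diff) <= math.radians(tol_deg):
            hits.append(name)                  # candidates for the highlighter
    return hits

# Corner lines between the floor and the walls, as in fig. 6 (angles assumed):
guides = [("corner_line_608a", 0.0), ("corner_line_608b", math.pi / 2)]
print(find_parallel_guides(math.radians(1.2), guides))   # ['corner_line_608a']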
In fig. 6 it is assumed that one or both of the first and second possibilities mentioned above was utilized concerning the selection of an alignment guide. The first and/or second existing linear element in the computer-generated image is one of the corner lines 608 between the floor and the walls of the room. The graphical highlighter may be for example a different colour and/or a blinking representation of the existing linear element that acts as the alignment guide.
In many cases it would be helpful to the user if the act of duplicating or moving a virtual element could involve making the desired distance equal to the dimension of something else that already exists within the computer-generated image. The visualization arrangement can be configured to assist the user with the possibility of automatic distance matching of this kind. As a first possibility, the visualization arrangement can be configured to automatically examine the current distance indicated by the graphical indicator 603 and compare it to the dimensions of other elements in the computer-generated image. As a response to the distance indicated by the graphical indicator 603 being equal to a first dimension of a first existing element in the computer-generated image, the visualization arrangement may display a graphical highlighter of said first existing element. The designations "first" dimension and "first" existing element are used here only for the ease of unambiguous reference.
The first possibility explained above is thus an aid that helps the user to notice when the currently indicated distance happens to be equal to the dimension of an existing element in the computer-generated image. A second possibility is that the user may intentionally cause the graphical indicator 603 to assume a distance that is equal to a dimension of an existing element in the computer-generated image. Such an existing element may be called a second existing element, and it may be the same as, or some other than, the first existing element referred to above. The user may for example click or tap on a displayed element to give a distance-guide-selecting command, i.e. to indicate the desired second existing element. The visualization arrangement may be configured to respond to such a distance-guide-selecting command by changing the distance from said first virtual element that said graphical indicator 603 indicates so that it is equal to the dimension of said second existing element and displaying the graphical indicator as indicating the changed distance. A sketch of the corresponding dimension comparison is given below.
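The dimension comparison can be sketched in the same illustrative spirit as the parallelism check above; the element names, dimensions, and the one-centimeter tolerance here are assumptions.

def find_distance_guides(distance, elements, tol=0.01):
    # Return existing elements one of whose dimensions equals the
    # indicated distance, within a snap tolerance given in meters.
    return [name for name, dims in elements
            if any(abs(distance - d) <= tol for d in dims)]

# Table 601 of fig. 6 with assumed edge lengths of 0.95 m and 1.80 m:
print(find_distance_guides(0.95, [("table_601", [0.95, 1.80])]))   # ['table_601']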
In fig. 6 it is assumed that one or both of the first and second possibilities mentioned above was utilized concerning the selection of a distance guide. The first and/or second existing element in the computer-generated image is one of the edges of the table 601. The graphical highlighter may be for example a different colour and/or a blinking representation of the existing element that acts as the distance guide.
Above it has been assumed that the user wants specifically to place a copy or replacement of the first virtual element 602 at the new location. However, the graphical indicator 603 can also be used for other purposes. As an example, the user might want to place some other virtual element, or perform some other operation at the target location. Therefore, according to an embodiment, the visualization arrangement may be configured to display a menu at the target location.
In fig. 8 it is assumed that the visualization arrangement is configured to respond to a third command by displaying within said computer-generated image a menu 801. In order to maintain consistency with the wording used earlier in this text, it may be assumed that for this purpose the third command should be received through the user controls before said second command. The menu 801 is a list of operations available for performing. One of said operations may be an operation of placing the second virtual element, and it may be represented in the menu by a miniature copy of the second virtual element that is to be placed. In such a case the placing and displaying of the second virtual element is performed only if said second command indicates a selection by the user of an operation of placing the second virtual element.
In fig. 8 it is assumed that the menu 801 is displayed at the target location. However, in particular if this would lead to a poor visibility of other important features at or near the target location, the menu 801 may be displayed also elsewhere within the computer-generated image. In fig. 8 it is also assumed that the menu 801 is displayed as a collection of three-dimensional symbols. This is not an essential requirement, but the menu could be displayed also in some other form, like an array of text items.
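The gating logic described in the last two paragraphs can be illustrated with a minimal sketch; the session class, the operation names, and the callback are all hypothetical stand-ins rather than anything specified by this text.

class PlacementSession:
    OPERATIONS = ("place_second_element", "measure", "cancel")

    def __init__(self):
        self.menu_shown = False

    def third_command(self):
        # Display the list of available operations (menu 801 in fig. 8).
        self.menu_shown = True
        return list(self.OPERATIONS)

    def second_command(self, selection, place_fn):
        # Without a preceding third command the second command places
        # directly; after a menu was shown, placing happens only if the
        # user actually selected the placing operation.
        if not self.menu_shown or selection == "place_second_element":
            place_fn()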
The embodiments that relate to the easy duplicating and moving of virtual elements in the computer-generated image may be described in a concise way as in the following consecutively numbered paragraphs.
NP1. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- display, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment; characterized in that the visualization arrangement is configured to:
- respond to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element,
- respond to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction,
- respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and
- respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.

NP2. A visualization arrangement according to numbered paragraph NP1, configured to:
- as a response to said second command, place and display said second virtual element as a copy of said first virtual element in said computer-generated image with the same orientation as the first virtual element.

NP3. A visualization arrangement according to numbered paragraph NP2, wherein depending on the content of said second command the visualization arrangement is configured to either leave said first virtual element as it was in the computer-generated image, thus causing the first virtual element to be duplicated by the second virtual element, or delete said first virtual element from the computer-generated image, thus causing the first virtual element to be replaced by the second virtual element.
NP4. A visualization arrangement according to any of the numbered paragraphs NP1 to NP3, configured to:
- as a response to the direction indicated by said graphical indicator being parallel with a first existing linear element in the computer-generated image, display a graphical highlighter of said first existing linear element.
NP5. A visualization arrangement according to any of the numbered paragraphs NP1 to NP4, configured to:
- respond to an alignment-guide-selecting command received through said user controls - said alignment-guide-selecting command being one that identifies a second existing linear element in said computer-generated image - by changing the direction from said first virtual element that said graphical indicator indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.
NP6. A visualization arrangement according to any of the numbered paragraphs NP1 to NP5, configured to:
- as a response to the distance indicated by said graphical indicator being equal to a first dimension of a first existing element in the computer-generated image, display a graphical highlighter of said first existing element.
NP7. A visualization arrangement according to any of the numbered paragraphs NP1 to NP6, configured to:
- respond to a distance-guide-selecting command received through said user controls - said distance-guide-selecting command being one that identifies a second existing element in said computer-generated image that has a second dimension - by changing the distance from said first virtual element that said graphical indicator indicates so that it coincides with said second dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
NP8. A visualization arrangement according to any of the numbered paragraphs NP1 to NP7, configured to:
- respond to a third command, if received through said user controls before said second command, by displaying within said computer-generated image a list of operations available for performing, one of said operations being an operation of placing the second virtual element, and
- perform said placing and displaying of the second virtual element only if said second command indicates a selection by the user of said operation of placing the second virtual element.
DUAL JOYSTICK
A touch-sensitive screen, often layered together with a display screen to form a touch-sensitive display, is a commonly used user interface device in visualization arrangements of the kind described above. However, the dual use of a touch-sensitive display for both outputting visual information and receiving control commands from the user may give rise to mutually contradicting needs. On one hand, as much of the display area as possible should be available for outputting visual information, so that the user can see as much of the computer-generated image of the three-dimensional environment as possible. On the other hand, the touch-sensitive areas offered to the user should be large enough, and within easy enough reach for the fingers of a user operating the device, so that the operating position is ergonomic and intuitive.
A known solution for allowing a user to move and rotate virtual elements consists of two touch-sensitive joystick-type areas at or close to the edges of the touch-sensitive display. A touch-sensitive joystick-type area is a form of touch-sensitive user control in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur. The touches work in a way that is analogous to a physical joystick, in which bending the joystick in different directions and by different amounts gives rise to different commands. A typical touch-sensitive joystick-type area is illustrated on the touch-sensitive display as a circular patch, an array of concentric rings, or some other graphical element from which the user can easily conceive a center and edges. Two of these are typically used, so that the user uses one of them to move a selected virtual element to different locations and the other to change the orientation, i.e. to rotate the virtual element around a rotational axis in the computer-generated image. A drawback of two touch-sensitive joystick-type areas is that they reserve space, and hide from view a relatively large portion of what the user would otherwise see of the displayed computer-generated image of the three-dimensional environment.
According to a seventh aspect there is provided a visualization arrangement that is configured to display, at a first location within said computer-generated image, a computer-generated virtual element as if said virtual element was an object located at the corresponding location within said three-dimensional environment. Such a visualization arrangement is configured to provide the user with a touch-sensitive user control for moving said virtual element within said computer-generated image, and respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which in said first control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and in said second control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
According to an embodiment the visualization arrangement is configured to provide said touch-sensitive user control within a same display device as said computer-generated image.
According to an embodiment the visualization arrangement is configured to provide one or more touch-sensitive selectors at or close to the touch-sensitive user control, and configured to receive said selection commands in the form of touches of said one or more touch-sensitive selectors.
According to an embodiment the visualization arrangement is configured to provide said touch-sensitive user control as a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
According to an embodiment the visualization arrangement is configured to give visual feedback to the user at or close to the location of said touch-sensitive user control indicating which of said first and second control modes is currently active.
According to an embodiment the visualization arrangement is configured to give visual feedback to the user at or close to the location of said virtual element indicating which of said first and second control modes is currently active.

According to an embodiment the visualization arrangement is configured to give visual feedback to the user at or close to the location of said virtual element indicating at least one default moving direction of said virtual element when said first control mode is active.
According to an eighth aspect there is provided a method that comprises displaying, at a first location within said computer-generated image, a computer-generated virtual element as if said virtual element was an object located at the corresponding location within said three-dimensional environment. Such a method comprises providing the user with a touch-sensitive user control for moving said virtual element within said computer-generated image, and responding to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which said first control mode involves responding to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and said second control mode involves responding to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
Fig. 9 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user. In particular, the visualization arrangement is configured to display, at a first location within the computer-generated image, a computer-generated virtual element 901 as if said virtual element was an object located at the corresponding location within said three-dimensional environment. The visualization arrangement is also configured to provide the user with a touch-sensitive user control 902 for moving the virtual element 901 within the computer-generated image. Here the concept of moving is understood to mean all kinds of movements, including but not being limited to translational movements (moving to a different location) and rotational movements (rotating around a rotational axis).
The user may decide whether the touch-sensitive user control 902 is used for translational or rotational movements at a given time. The visualization arrangement is configured to respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode. Of these, in the first control mode the visualization arrangement is configured to respond to touch commands received through the touch-sensitive user control 902 by moving the virtual element 901 to a different location within the computer-generated image. In said second control mode the visualization arrangement is configured to respond to touch commands received through the touch-sensitive user control 902 by rotating the virtual element 901 around a rotational axis within the computer-generated image.
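The toggling between the two control modes can be sketched as follows; the mode names, the gain factors, and the element representation are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Element:
    x: float = 0.0
    z: float = 0.0
    yaw: float = 0.0   # orientation around a vertical rotational axis, radians

class DualModeControl:
    MOVE, ROTATE = "move", "rotate"

    def __init__(self, element):
        self.element = element
        self.mode = self.MOVE

    def selection_command(self, mode=None):
        # A consecutive selection command toggles the mode; touching a
        # specific selector (904 or 905) selects that mode directly.
        if mode is None:
            self.mode = self.ROTATE if self.mode == self.MOVE else self.MOVE
        else:
            self.mode = mode

    def touch_command(self, dx, dy):
        # The same joystick-type area drives translation or rotation,
        # depending on the currently active mode.
        if self.mode == self.MOVE:
            self.element.x += 0.01 * dx
            self.element.z += 0.01 * dy
        else:
            self.element.yaw += 0.01 * dx

control = DualModeControl(Element())
control.touch_command(10.0, 0.0)    # translates in the first control mode
control.selection_command()         # toggle to the second control mode
control.touch_command(10.0, 0.0)    # now rotates instead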
A particular feature of the embodiment shown in fig. 9 is that the visualization arrangement is configured to provide the touch-sensitive user control 902 within the same display device 903 as the computer-generated image. This is not a requirement, because other kinds of embodiments are possible: for example, if the user has both a smartphone and a smart watch, the computer-generated image could be shown on the display of the smartphone and the touch-sensitive user control could be shown on the display of the smart watch. The use of the same display for both involves the advantage that the user may only need a single device for performing all operations described here.
There are several possible ways in which the user may give the selection commands. Voice commands are one option. Other ways include for example shaking or tilting the device, or pressing a button or flipping a switch located somewhere else within the device or system. One advantageous option is shown in fig. 9: here the visualization arrangement is configured to provide one or more touch-sensitive selectors 904 and 905 at or close to the touch-sensitive user control 902. The visualization arrangement is configured to receive said selection commands in the form of touches of the one or more touch-sensitive selectors 904 and 905. In this example, touching the first touch-sensitive selector 904 makes the visualization arrangement activate the first control mode, and touching the second touch-sensitive selector 905 makes the visualization arrangement activate the second control mode. Showing the touch-sensitive selectors within the same display as the touch-sensitive user control 902 involves many advantages: the selectors are easily available to the user, and the displaying capabilities of the touch-sensitive screen can be used to illustrate them in an intuitive way.
In the embodiment shown in fig. 9 the touch-sensitive user control 902 is a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur. Particular directions may be emphasized; for example in fig. 9 the visualization arrangement is configured to graphically emphasize the upper and lower edges of the touch-sensitive user control 902. This may refer to particular directions of moving the virtual element 901, like touching the upper edge for moving the virtual element 901 in its current front direction and touching the lower edge for moving the virtual element 901 in its current back direction. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902. One possible mapping from a touch to a command is sketched below.
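The following fragment illustrates how a touch on such a joystick-type area could be turned into a direction and magnitude relative to the area's center; the pixel coordinates and the dead-zone radius are assumed details, not taken from the disclosure.

import math

def joystick_command(touch_x, touch_y, center_x, center_y, dead_zone=5.0):
    # Interpret a touch relative to the center of the joystick-type area.
    dx, dy = touch_x - center_x, touch_y - center_y
    r = math.hypot(dx, dy)
    if r < dead_zone:
        return None                        # too close to the center to count
    return {"angle": math.atan2(dy, dx),   # e.g. the top edge maps to "front"
            "magnitude": r}

print(joystick_command(100.0, 60.0, 100.0, 100.0))   # a touch above the center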
The visualization arrangement may be configured to give visual feedback to the user at or close to the location of the touch-sensitive user control 902, indicating which of said first and second control modes is currently active. Such visual feedback could comprise for example some visual emphasis of that one of the touch-sensitive selectors 904 and 905 that corresponds to the currently active control mode. Also the way in which particular directions or parts of the actual touch-sensitive user control 902 are emphasized may serve as an indicator of the currently active control mode. For example, when the first control mode is active, the touch-sensitive user control 902 might be provided with some emphasis of clearly translational directions, while when the second control mode is active it could be provided with some emphasis of rotational directions. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902.
The visualization arrangement could additionally or alternatively be configured to give visual feedback to the user at or close to the location of the virtual element 901, indicating which of said first and second control modes is currently active. In the embodiment of fig. 9 a graphical symbol 906 appears under the virtual element 901, with the same outline as the first touch-sensitive selector 904. This tells the user that the currently active mode is the one that was selected with the first touch-sensitive selector 904. Activating the second control mode by touching the second touch-sensitive selector 905 could change the outline of the graphical symbol 906 to resemble that of the second touch-sensitive selector 905. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902.
The arrows in the graphical symbol 906 shown in fig. 9 also have another purpose. By displaying such arrows the visualization arrangement is configured to give visual feedback to the user at or close to the location of the virtual element 901, indicating at least one default moving direction of the virtual element 901 when said first control mode is active. That is, the virtual element 901 has a "front" direction, into which it will move if the user gives a command to move the virtual element 901 forward, and this "front" direction is the one into which the arrows are pointing in the graphical symbol 906. This involves the advantage of enhancing the intuitiveness of using the touch-sensitive user control 902. A front or other direction is typically bound to the overall appearance of the virtual element in question, because many virtual elements have a distinct forward direction. As an example, if the virtual element 907 representing a chair were moved, its intuitive front direction would be towards the lower left in fig. 9.
The embodiments that relate to the easy moving of virtual elements in the computer-generated image with a touch-sensitive user control that reserves only a little space on a touch-sensitive screen may be described in a concise way as in the following consecutively numbered paragraphs.
NP11. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- display, at a first location within said computer-generated image, a computer-generated virtual element as if said virtual element was an object located at the corresponding location within said three-dimensional environment, and
- provide the user with a touch-sensitive user control for moving said virtual element within said computer-generated image; characterized in that the visualization arrangement is configured to:

- respond to consecutive selection commands received from the user by toggling between a first control mode and a second control mode, of which in said first control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by moving said virtual element to a different location within said computer-generated image, and in said second control mode the visualization arrangement is configured to respond to touch commands received through said touch-sensitive user control by rotating said virtual element around a rotational axis within said computer-generated image.
NP12. A visualization arrangement according to numbered paragraph NP11, configured to provide said touch-sensitive user control within a same display device as said computer-generated image.

NP13. A visualization arrangement according to any of the numbered paragraphs NP11 or NP12, configured to provide one or more touch-sensitive selectors at or close to the touch-sensitive user control, and configured to receive said selection commands in the form of touches of said one or more touch-sensitive selectors.

NP14. A visualization arrangement according to any of the numbered paragraphs NP11 to NP13, configured to provide said touch-sensitive user control as a touch-sensitive joystick-type area in which touches cause different kinds of commands to be received depending on in which direction from a center of the touch-sensitive joystick-type area said touches occur.
NP15. A visualization arrangement according to any of the numbered paragraphs NP11 to NP14, configured to give visual feedback to the user at or close to the location of said touch-sensitive user control indicating which of said first and second control modes is currently active.
NP16. A visualization arrangement according to any of the numbered paragraphs NP11 to NP15, configured to give visual feedback to the user at or close to the location of said virtual element indicating which of said first and second control modes is currently active.
NP17. A visualization arrangement according to numbered paragraph NP16, configured to give visual feedback to the user at or close to the location of said virtual element indicating at least one default moving direction of said virtual element when said first control mode is active.
VIRTUAL REAL-WORLD MEASUREMENT IN REAL-TIME
A feature particular to AR is that the user sees both real-world objects and virtual elements in the same view. One possible application of AR is AR-assisted interior design, in which the user considers what kind of additional objects he or she would like to place into a given environment. For example there may be a partially furnished room, so that the task of the interior designer is to select some additional pieces of furniture and find optimal locations for them in the room.
It is possible to use AR for the purpose defined above so that the user views a computer-generated image of the three-dimensional environment in question and tries placing therein virtual elements that have been made to look like the possible additional real-world objects. However, by only looking at the computer-generated image it may be difficult for the user to get a full grasp of how the placing of a corresponding real-world object would actually work out. For example, the colour of objects affects the way in which users perceive their space requirements. This effect becomes even more manifest if the objects are transparent or translucent, because the user may fail to appropriately consider their dimensions in a displayed image.
According to a ninth aspect there is provided a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user. Such a visualization arrangement is configured to display, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment, and at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment. The visualization arrangement is configured to respond to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location. The visualization arrangement is also configured to repeatedly update said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
According to an embodiment the visualization arrangement is configured to calculate said real-world dimension as a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location.
According to an embodiment the visualization arrangement is configured to calculate said real-world dimension as one of:
- a shortest linear distance between selected points of said first and second real-world objects, said selected points corresponding to points selected by the user in the computer-displayed image
- a shortest distance along a selected surface of said three-dimensional environment between points closest to each other on said surface of said first real-world object at its current location and said second real-world object
- a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location when projected onto a selected surface of said three-dimensional environment.
According to an embodiment the visualization arrangement is configured to respond to an anchor-selecting command from the user by selecting one of a plurality of real-world objects, images of which are displayed within said computer-generated image, as an anchor object to act as said first real-world object to which said first distance is measured.
According to an embodiment said anchor-selecting command pertains to a point of said one of said plurality of real-world objects, and said visualization arrangement is configured to respond to said anchor-selecting command by selecting said point as a fixed endpoint to which said first distance is measured.
According to an embodiment the visualization arrangement is configured to:
- display, at a third location within said computer-generated image, an image of a third real-world object located at a corresponding location in the three-dimensional environment,
- respond to a command received from the user by displaying within said computer-generated image, between said image of the third real-world object and said computer-generated virtual element, a second visual indication of a second real-world dimension representative of a second distance that would prevail between said third real-world object at its current location and said second real-world object if located at its apparent location, and
- repeatedly update said second visual indication to represent the most up-to-date value of said second distance when the virtual element moves within said computer-generated image and/or when the third real-world object moves within said three-dimensional environment.
According to a tenth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user. The method comprises displaying, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment, as well as displaying, at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment. The method comprises responding to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location. Additionally the method comprises repeatedly updating said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
Fig. 10 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user. In particular, the visualization arrangement is configured to display, at a first location within said computer-generated image, an image 1001 of a first real-world object (a table) located at a corresponding location in the three-dimensional environment. Additionally the visualization arrangement is configured to display, at a second location within said computer-generated image, a computer-generated virtual element 1002 (a three-dimensional image of a chair) as if said virtual element 1002 was a second real-world object (a chair) located at the corresponding apparent location within said three-dimensional environment.
Here we assume that the user would be particularly interested in how far the chair would be from the walls of the displayed room if it was actually placed at the apparent location shown in the displayed image. The user utilizes the user controls at his or her disposal to give the visualization arrangement a command to display visual indications of one or more real-world dimensions. The visualization arrangement responds to the command it received from the user by displaying, within the computer-generated image, at least one visual indication of a real-world dimension. Two of these are shown in fig. 10 as examples; the one on the left is discussed first, and is therefore called here the first visual indication 1003.
The first visual indication 1003 represents a distance that would prevail between a first real-world object (the wall 1004) at its current location and the second real-world object (chair), if the last-mentioned was located at the location made apparent by the virtual element 1002. In this example said distance is 108 cm. The location of the chair is called "apparent" location because in reality there is no chair there in the imaged three-dimensional environment; the user just sees a virtual element 1002 in the computer-generated image as if it was a real-world object in the three-dimensional environment.
Some thought may be given to what distance(s) is/are represented by the visual indication(s). The command to display visual indications may contain or be associated with selection commands with which the user selects an image of a real-world object (the wall 1004) and the virtual element 1002 that define the endpoints of the distance to be represented. A selection command of this kind may be called an anchor-selecting command. If the user controls comprise a mouse, the user may give an anchor-selecting command for example by clicking a mouse button when the cursor is on top of the displayed image of the wall 1004. This makes the visualization arrangement select the wall 1004 as an anchor object, to act as the real-world object to which the distance (here 108 cm) is measured. The anchor-selecting command may pertain to the whole wall 1004. Alternatively the anchor-selecting command may pertain to a point 1005 of the larger real-world object (the wall 1004), in which case the visualization arrangement is configured to respond to said anchor-selecting command by selecting point 1005 as a fixed endpoint to which said distance is measured.
Some thought may also be given to how the real-world dimension in question is defined, i.e. how the represented distance is measured. One option, which is employed by the first visual indication 1003 in fig. 10, is that the visualization arrangement calculates the real-world dimension as a shortest linear distance between points closest to each other. Thus one endpoint is a point of the (first) real-world object (i.e. the wall 1004) at its current location, and the other endpoint is a point of that (second) real-world object that the virtual element 1002 illustrates, if located at its apparent location in the three-dimensional environment. The wall 1004 is a planar surface, so in this case the real-world dimension (108 cm) is the shortest distance measured at a right angle against the surface of the wall 1004.
Fig. 11 illustrates some other alternatives, with the character-string-type explicit distance indications omitted to enhance graphical clarity. Here the real-world object selected as the anchor, or the real-world object to which the distance is measured, is the table 1001. The alternative shown as 1101 comprises calculating the real-world dimension as the shortest linear distance between selected points of the first and second real-world objects, said selected points corresponding to points 1102 and 1103 selected by the user in the computer-displayed image. The user may have given specific selection commands to select the points 1102 and 1103. The alternative shown as 1104 comprises calculating the real-world dimension as a shortest distance along a selected surface (the floor 1105) of said three-dimensional environment. The distance is thus between such points closest to each other of said first real-world object (table 1001; at its current location) and said second real-world object (chair, if located at its apparent location) that are located on the surface (on the floor 1105). As both the table and chair have legs, these points are the points of the respective leg ends that are closest to each other. The alternative shown as 1106 comprises calculating the real-world dimension as the shortest linear distance between points closest to each other of said first real-world object (table 1001; at its current location) and said second real-world object (chair, if located at its apparent location) when projected onto a selected surface of said three-dimensional environment. In fig. 11 the selected surface is the floor 1105, which is a horizontal plane. Thus alternative 1106 is the narrowest gap, measured horizontally, between the table 1001 and the chair.
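For illustration, the following Python sketch computes the kind of gap measurement described above for two objects approximated by axis-aligned bounding boxes; the box approximation, the NumPy dependency, and all dimensions used here are assumptions, and a real implementation would query the actual object geometry.

import numpy as np

def aabb_gap(min_a, max_a, min_b, max_b):
    # Shortest linear distance between two axis-aligned boxes; zero
    # separation on an axis means the boxes overlap along that axis.
    min_a, max_a = np.asarray(min_a, float), np.asarray(max_a, float)
    min_b, max_b = np.asarray(min_b, float), np.asarray(max_b, float)
    gap = np.maximum(0.0, np.maximum(min_a - max_b, min_b - max_a))
    return float(np.linalg.norm(gap))

table = ((0.0, 0.0, 0.0), (1.6, 0.75, 0.9))   # assumed table bounds, meters
chair = ((2.0, 0.0, 0.2), (2.5, 0.9, 0.7))    # chair at its apparent location
print(aabb_gap(*table, *chair))               # full three-dimensional gap

# Alternative 1106: measure only horizontally by projecting onto the floor
# plane, i.e. dropping the vertical (y) coordinate before the comparison.
drop_y = lambda p: (p[0], p[2])
print(aabb_gap(drop_y(table[0]), drop_y(table[1]),
               drop_y(chair[0]), drop_y(chair[1])))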
Above, reference was repeatedly made to the "current" location of the first real-world object and the "apparent" location of the second real-world object (the latter being represented only by the displayed virtual element). Real-world objects may move or be moved in the three-dimensional environment, and virtual elements may move or be moved in the displayed computer-generated image. As an example, if the user is an interior designer trying to find the nicest location for a new real-world object, he or she may use the user controls of the visualization arrangement to move the virtual element around. The virtual element may even have automotive characteristics, so that the user may put it into motion and watch what happens when it moves within the displayed computer-generated image. The user or an assisting person may also move real-world objects, or they may have automotive characteristics.
For these purposes it is advantageous to configure the visualization arrangement to repeatedly update said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment. How frequently such updating is performed is a design choice and depends e.g. on the processing resources that are available to the visualization arrangement. If the processing resources allow, it is advantageous to perform said updating so frequently that the user perceives it as if the indications of real-world dimensions were continuously updated in real time.
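A minimal sketch of such repeated updating could look like the following; the 30 Hz rate is an assumption, and an actual implementation would more likely tie the update to the render loop and to the available processing resources.

import time

def run_update_loop(compute_distance, draw_indication, period_s=1.0 / 30.0):
    # Recompute and redraw the indicated distance at a fixed rate so that
    # the indication tracks moving virtual elements and moving real-world
    # objects.
    while True:
        start = time.monotonic()
        draw_indication(compute_distance())
        time.sleep(max(0.0, period_s - (time.monotonic() - start)))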
More than one distance can be represented by visual indications of real-world dimensions. Fig. 10 illustrates how the visualization arrangement is configured to display, in addition to the images explained above, an image of a third real-world object (wall 1006) at a third location within the computer-generated image. Said third location naturally corresponds to the actual location of the wall 1006 in the three-dimensional environment. The user may give as many commands of displaying visual indications as he or she wants. In fig. 10 it is assumed that the visualization arrangement received from the user a command to display the second visual indication 1007 between the image of the third real-world object (wall 1006) and the computer-generated virtual element 1002. The second visual indication 1007 indicates another real-world dimension, called here the second real-world dimension, which represents a second distance. The distance of 13 cm shown in fig. 10 would prevail between the third real-world object (the wall 1006) and the second real-world object (chair) if the latter were located at its apparent location.
As above, it is advantageous to configure the visualization arrangement to repeatedly update the second visual indication 1007 to represent the most up-to-date value of said second distance when the virtual element 1002 moves within the computer-generated image and/or the third real-world object (wall 1006) moves within said three-dimensional environment.
The embodiments that relate to the intuitive, real-time indicating of distances between real-world objects and virtual elements in the computer-generated image may be described in a concise way as in the following consecutively numbered paragraphs.

NP21. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- display, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment, and
- display, at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment, characterized in that the visualization arrangement is configured to:
- respond to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location, and
- repeatedly update said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
NP22. A visualization arrangement according to numbered paragraph NP21, configured to calculate said real-world dimension as a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location.

NP23. A visualization arrangement according to numbered paragraph NP21, configured to calculate said real-world dimension as one of:
- a shortest linear distance between selected points of said first and second real-world objects, said selected points corresponding to points selected by the user in the computer-displayed image
- a shortest distance along a selected surface of said three-dimensional environment between points closest to each other on said surface of said first real-world object at its current location and said second real-world object
- a shortest linear distance between points closest to each other of said first real-world object at its current location and said second real-world object if located at its apparent location when projected onto a selected surface of said three-dimensional environment.
NP24. A visualization arrangement according to any of numbered paragraphs NP21 to NP23, configured to respond to an anchor-selecting command from the user by selecting one of a plurality of real-world objects, images of which are displayed within said computer-generated image, as an anchor object to act as said first real-world object to which said first distance is measured.
NP25. A visualization arrangement according to numbered paragraph NP24, wherein said anchor-selecting command pertains to a point of said one of said plurality of real-world objects, and said visualization arrangement is configured to respond to said anchor-selecting command by selecting said point as a fixed endpoint to which said first distance is measured.
NP26. A visualization arrangement according to any of numbered paragraphs NP21 to NP25, configured to:
- display, at a third location within said computer-generated image, an image of a third real-world object located at a corresponding location in the three-dimensional environment,
- respond to a command received from the user by displaying within said computer-generated image, between said image of the third real-world object and said computer-generated virtual element, a second visual indication of a second real-world dimension representative of a second distance that would prevail between said third real-world object at its current location and said second real-world object if located at its apparent location, and

- repeatedly update said second visual indication to represent the most up-to-date value of said second distance when the virtual element moves within said computer-generated image and/or when the third real-world object moves within said three-dimensional environment.
NP27. A method for displaying a computer-generated image of a three-dimensional environment to a human user, the method comprising:
- displaying, at a first location within said computer-generated image, an image of a first real-world object located at a corresponding location in the three-dimensional environment,
- displaying, at a second location within said computer-generated image, a computer-generated virtual element as if said virtual element was a second real-world object located at the corresponding apparent location within said three-dimensional environment,
- responding to a command received from the user by displaying within said computer-generated image, between said image of the first real-world object and said computer-generated virtual element, a first visual indication of a first real-world dimension representative of a first distance that would prevail between said first real-world object at its current location and said second real-world object if located at its apparent location, and
- repeatedly updating said first visual indication to represent the most up-to-date value of said first distance when the virtual element moves within said computer-generated image and/or when the first real-world object moves within said three-dimensional environment.
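For illustration only (this sketch is not part of the claimed subject-matter, and all identifiers are hypothetical), the repeated updating recited in NP21 and NP27 above could be realised by recomputing, on every rendered frame, the shortest linear distance between axis-aligned bounding boxes of the first real-world object and the virtual element. This is one simple way to approximate the distance between points closest to each other on two objects:

```python
import numpy as np

def aabb_distance(min_a, max_a, min_b, max_b):
    """Shortest linear distance between two axis-aligned bounding boxes.

    Each argument is a length-3 array of world coordinates; the result is
    0.0 when the boxes overlap or touch.
    """
    # Per-axis gap between the boxes; negative values mean overlap on that axis.
    gap = np.maximum(np.asarray(min_b) - np.asarray(max_a),
                     np.asarray(min_a) - np.asarray(max_b))
    return float(np.linalg.norm(np.maximum(gap, 0.0)))

def update_distance_indicator(real_object, virtual_element, indicator):
    """Called once per rendered frame to keep the displayed dimension current."""
    d = aabb_distance(real_object.aabb_min, real_object.aabb_max,
                      virtual_element.aabb_min, virtual_element.aabb_max)
    indicator.set_label(f"{d:.2f} m")
```

Calling update_distance_indicator once per frame keeps the displayed dimension current while either the virtual element or the real-world object moves.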
REAL-TIME BLOCK OBJECT BUILDING WITH POINTS
When the user uses VR or AR to design a furnished room or another kind of three-dimensional environment with three-dimensional objects in it, a preferred way to proceed is often to select new virtual elements from a library or menu and to place them at such locations in the computer-generated image where they appear to serve their intended purpose best. As a simplified example we may consider the task of designing a dining room in which a dining table and a number of chairs should be located. The dimensions of the room may be given, for example because it is an actual room in an existing building or in a building to be built according to an accepted and confirmed plan. One of the tasks of the interior designer is then to find the most appropriately sized table and chairs that fit harmoniously in the room and offer enough space for everyone. A problem may occur if the library or menu of ready-made virtual elements does not contain a virtual element of just those dimensions that the interior designer would want. One solution would be to make the virtual elements scalable, so that the user who placed a new virtual element could also make it bigger or smaller, for example by using a mouse to grab a resizing handle and drag it closer to or farther from a center of the virtual element. However, if all features of the virtual elements scale automatically, as they typically do in computer-assisted drawing, this may give a distorted end result. It would be preferable if the user had large freedom to design at least preliminary models of new virtual elements that he or she could dimension exactly to the desired size in the displayed computer-generated image.
According to an eleventh aspect there is provided a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user. The visualization arrangement is configured to: a) respond to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) respond to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) respond to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) respond to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which is within said plane with corners at said first, second, and third placed points, and a second side face of which is parallel with said first side face and has one corner at said fourth placed point.
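As an illustration of steps a) to d) (not part of the claimed subject-matter; helper names are assumptions), the following Python sketch derives the plane from the first three placed points and, once a fourth point off that plane is given, builds the prism by translating the base polygon so that one top corner lands on the fourth placed point. A translated copy automatically keeps the two side faces parallel:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n . x = d through three points."""
    n = np.cross(p2 - p1, p3 - p1)
    length = np.linalg.norm(n)
    if length == 0.0:
        raise ValueError("collinear points do not define a plane")
    n = n / length
    return n, float(np.dot(n, p1))

def build_prism(base_points, fourth_point):
    """Coplanar base corners plus one off-plane point -> corners of a prism.

    The second side face is a translated copy of the first, which keeps the
    two faces parallel; the fourth point is assumed here to extend from the
    most recently placed base corner, as in the cursor movement of fig. 13.
    """
    base = [np.asarray(p, dtype=float) for p in base_points]
    n, d = plane_from_points(*base[:3])
    fourth = np.asarray(fourth_point, dtype=float)
    if abs(np.dot(n, fourth) - d) == 0.0:
        raise ValueError("fourth point lies in the base plane")
    offset = fourth - base[-1]
    top = [p + offset for p in base]
    return base, top
```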
According to an embodiment the visualization arrangement is configured to respond to one or more additional point-placing commands received from the user between steps c) and d), which additional point-placing commands indicate points in said computer-generated image located within said plane, by displaying respective one or more additional placed points, so that in step d) said first side face has corners also at said additional placed points.
According to an embodiment said plane coincides with a planar surface displayed in said computer-generated image.
According to an embodiment the visualization arrangement is configured to selectively operate in one of two modes, of which a first mode is a planar mode for making said user place at least one of said first, second, third, and possible additional placed points on said plane within said computer-generated image, and a second mode is a non-planar mode for making said user place said fourth placed point out of said plane within said computer-generated image.
According to an embodiment the visualization arrangement is configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
According to an embodiment the visualization arrangement is configured to display within said computer-generated image, as a response to any of said point-placing commands, a draft connector line connecting a previously placed point to a displayed cursor available for the user to indicate a location for the next point to be placed.
According to an embodiment the visualization arrangement is configured to display within said computer-generated image completed connector lines between consecutively placed points.
According to an embodiment the visualization arrangement is configured to display one or more drawing aids within said computer-generated image, said drawing aids representing graphical regularities for placing any of said points, and also configured to attract a cursor available for the user to indicate a location for the next point to be placed towards said drawing aids, wherein said graphical regularities involve at least one of: a direction parallel with an existing direction in the computer-generated image; a direction at a predetermined angle to an existing direction in the computer-generated image; a distance from a previously placed point equal to an existing distance in the computer-generated image; a predetermined shortest distance from an existing planar surface in the computer-generated image; a predetermined shortest distance from a displayed element in said computer-generated image.
According to a twelfth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user. The method comprises: a) responding to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) responding to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) responding to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) responding to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which is within said plane with corners at said first, second, and third placed points, and a second side face of which is parallel with said first side face and has one corner at said fourth placed point.
Fig. 12 illustrates an example of how a visualization arrangement displays a computer-generated image of a three-dimensional environment to a human user. In particular, the visualization arrangement is configured to respond to commands of a particular kind received from the user, here called point-placing commands, by displaying so-called placed points at those locations of the computer-generated image where a cursor or other indication means was located when the point-placing command was received. The way in which the user moves the cursor in the computer-generated image is not important; the user may use e.g. a mouse, or control gestures, or his or her viewing direction, or speech commands, or other means. Similarly the way in which the user gives the actual point-placing command is immaterial to this description. Any known way of giving commands can be used, including those listed elsewhere in this description.
The visualization arrangement received a first point-placing command from the user when the cursor was at the location shown as 1201, so consequently the visualization arrangement displays a first placed point at that location. The second, subsequent point-placing command was received from the user when the cursor was at the location shown as 1202, so consequently a second placed point is displayed at that location. A third, subsequent point-placing command was received from the user when the cursor was located at the location 1203, and consequently a third placed point is displayed at that location. The arrows show how the user moved the cursor between giving the point-placing commands. Any three points define a plane in a three-dimensional space. Thus in the computer-generated image shown in fig. 12 the displayed first, second, and third placed points also define a plane. Here it is assumed that the user is in the course of defining a new virtual object that appears to stand on the floor of the room illustrated in the computer-generated image, so said plane coincides with the floor plane that represents said floor in said computer-generated image. It is often desirable, when defining new virtual objects, to place them on (or at least in fixed relationship with reference to) an existing plane or other element of the displayed computer-generated image. Therefore the visualization arrangement may be configured so that when it receives a point-placing command with the cursor close to a planar element in the computer-generated image, the resulting placed point is automatically made to be located on the plane defined by said planar element.
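A minimal sketch of this automatic placement could look as follows in Python; the snap threshold and the plane representation n . x = d are illustrative assumptions, not part of the described arrangement:

```python
import numpy as np

SNAP_DISTANCE = 0.05  # metres; illustrative threshold for "close to a plane"

def snap_to_plane(point, plane_normal, plane_d):
    """Move a candidate placed point onto the plane n . x = d when the
    point-placing command arrives with the cursor close to that plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = float(np.dot(n, point) - plane_d)
    if abs(signed_dist) <= SNAP_DISTANCE:
        return point - signed_dist * n  # orthogonal projection onto the plane
    return point
```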
In other cases the user may want the freedom to place points so that they are not necessarily on the same plane. For this purpose the visualization arrangement may be configured to selectively operate in one of two modes. A first mode is a planar mode for making the user place at least one of the first, second, third, and possible additional placed points on an existing plane within the computer-generated image. A second mode is a non-planar mode for making the user place at least the fourth placed point out of said plane within the computer-generated image. The visualization arrangement may be configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
The user may want the newly created virtual element to have some regular shape. One frequently encountered kind of regular shape, particularly in indoor spaces that consist of rooms, is one where at least one side face of the virtual element is a rectangle. In fig. 12 it is assumed that the user is about to create a new virtual element that has a rectangular bottom face. Thus the visualization arrangement receives one additional point-placing command from the user, which additional point-placing command indicates a further point 1204 located in the same plane as the previously placed points 1201, 1202, and 1203. An example of a cursor 1205 is also shown in fig. 12.
In order to help the user place the further point 1204 so that the result is a regular pattern, for example a regular rectangle, the visualization arrangement may be configured to execute assistive functions. As an example, the visualization arrangement may be configured to display draft connector lines 1206 and 1207 within the computer-generated image, as a response to any of the point-placing commands discussed above. A draft connector line 1206, 1207 is a displayed linear graphical element that connects a previously placed point to a displayed cursor 1205 that is available for the user to indicate a location for the next point to be placed. In the example of fig. 12 one draft connector line 1206 connects the most recently placed point 1203 to the cursor 1205, while another draft connector line 1207 connects the first placed point 1201 to the cursor 1205.
Another example of a feature that may serve as a drawing aid is completed connector lines between points that were already placed. The visualization arrangement may be configured to display completed connector lines between consecutively placed points within said computer-generated image. As an example fig. 12 shows the completed connector line 1208 between the second and third placed points 1202 and 1203, as well as the completed connector line 1209 between the first and second placed points 1201 and 1202. It may be advantageous to make completed connector lines have a different visual appearance than draft connector lines, so that it is easy for the user to grasp which connector lines are which.
Yet another example of a feature that may serve as a drawing aid is support patterns that project some already existing feature to another place in the computer-generated drawing. Examples of support patterns in fig. 12 are the indications 1210 and 1211 of directions parallel with the existing directions of the completed connector lines 1208 and 1209 respectively.
As a general characterization of drawing aids, the visualization arrangement may be configured to display one or more drawing aids within said computer-generated image. Said drawing aids represent graphical regularities for placing any of the points for which the user may want to give point-placing commands. It may be advantageous to make the drawing aids attract the cursor 1205 that is available for the user to indicate a location for the next point to be placed. Such attracting is a "gravitational pull", i.e. a force that tends to move an approaching cursor towards said drawing aids, so that the cursor may "snap to" a drawing aid when close to it in the computer-generated image; a minimal sketch of such attraction follows the list below. The graphical regularities that are considered in association with drawing aids may involve at least one of:
- a direction parallel with an existing direction in the computer-generated image;
- a direction at a predetermined angle to an existing direction in the computer-generated image;
- a distance from a previously placed point equal to an existing distance in the computer-generated image;
- a predetermined shortest distance from an existing planar surface in the computer-generated image;
- a predetermined shortest distance from a displayed element in said computer-generated image; or
- other features.
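One possible way to realise the "gravitational pull" towards such drawing aids is sketched below in Python (for illustration only); representing each aid as a line with an origin and a direction, and the value of the pull radius, are assumptions made for the sketch:

```python
import numpy as np

ATTRACT_RADIUS = 0.08  # illustrative "gravitational pull" radius

def _closest_point_on_line(origin, direction, p):
    """Orthogonal projection of p onto the line origin + t * direction."""
    d = direction / np.linalg.norm(direction)
    return origin + np.dot(p - origin, d) * d

def attract_cursor(cursor, aid_lines):
    """Snap the cursor to the nearest drawing-aid line within the pull radius.

    aid_lines is a list of (origin, direction) pairs, e.g. lines parallel
    with the completed connector lines 1208 and 1209 of fig. 12.
    """
    best, best_dist = cursor, ATTRACT_RADIUS
    for origin, direction in aid_lines:
        candidate = _closest_point_on_line(origin, direction, cursor)
        dist = np.linalg.norm(candidate - cursor)
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best
```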
Fig. 13 illustrates the step in which the new virtual element that the user is creating becomes three-dimensional. After the point-placing commands for those points that are all in the same plane, the visualization arrangement receives from the user a further, subsequent point-placing command. It indicates a point 1301 in the computer-generated image that is not located within said plane. The visualization arrangement responds to this "fourth" point-placing command by displaying, within the computer-generated image, a three-dimensional virtual element 1302 in the form of a prism. A first side face (= bottom) 1303 of the new virtual element 1302 is within the plane mentioned above, with the corners at the first, second, and third placed points 1201, 1202, and 1203 respectively. Since the user added also a further point 1204 before moving out of the plane of said three points, the bottom 1303 of the new virtual element 1302 has a corner also there. A second side face (= top) 1304 of the new virtual element 1302 is parallel with the first side face 1303 and has one corner at the "fourth" placed point 1301.
In the example of fig. 13 the new virtual element 1302 has the form of a regular rectangular prism. This is a consequence of two factors. First, the "fourth" placed point 1301 is located on a line that is perpendicular to the plane of the bottom 1303 and goes through one of its corner points. Second, for every other corner point of the bottom 1303 there is a corresponding, correspondingly located corner point of the top 1304. Placing the "fourth" point 1301 perpendicularly above one of the corner points of the bottom 1303 is often what the user desires, and the visualization arrangement may help by displaying a corresponding vertical drawing aid, which in fig. 13 is the line 1305. In order to help make the new virtual element 1302 exactly as high as some other, previously existing element in the computer-generated image, there could also be one or more horizontal drawing aids, examples of which are shown as 1306 and 1307 in fig. 13.
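For illustration only, aids such as the vertical line 1305 and the horizontal height aids 1306 and 1307 could be generated as lines and fed to the same cursor-attraction sketch shown earlier; the y-up coordinate convention and the choice of the x axis for the horizontal aids are assumptions:

```python
import numpy as np

UP = np.array([0.0, 1.0, 0.0])     # assumed y-up world coordinates
X_DIR = np.array([1.0, 0.0, 0.0])

def vertical_aid(bottom_corner):
    """An aid like line 1305: the vertical line through a bottom corner."""
    return np.asarray(bottom_corner, dtype=float), UP

def height_aids(bottom_corner, existing_heights):
    """Horizontal aids like 1306 and 1307: one line per height of a
    previously existing element, passing above the given corner."""
    corner = np.asarray(bottom_corner, dtype=float)
    return [(corner + h * UP, X_DIR) for h in existing_heights]
```

Both kinds of aid lines can be passed directly to the attract_cursor routine sketched above.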
The new virtual element 1302 does not need to have the form of a rectangular prism. If the user did not want to utilize the vertical drawing aid 1305, he or she could have placed the "fourth" point 1301 at a location that is not directly above any of the corners of the bottom 1303. Since at this stage the three-dimensional form is essentially created by copying the form of the bottom 1303 onto another level to make a top 1304, it is intuitive that the visualization arrangement moves all other corner points of the top 1304 along with the "fourth" point 1301. Later the user may be given a possibility to change the exact location of any corner point of the new virtual element 1302, so that the final form of the new virtual element 1302 does not need to be regular in any respect.
Fig. 13 shows the connector lines between corner points (other than the periphery of the bottom 1303) still as draft connector lines. This may mean that the user is still considering the height of the new virtual element to be formed, with the cursor 1205 moving up and down (and possibly also sideways) in response to moving commands received from the user. Once the user has actually given the point-placing command for the "fourth" point 1301, it is advantageous to make the visualization arrangement display all edges of the new virtual element 1302 as completed connector lines.
The embodiments that relate to the fast and intuitive creation of new virtual elements in the computer-generated image may be described in a concise way as in the following consecutively numbered paragraphs.
NP31. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to: a) respond to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) respond to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) respond to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) respond to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which is within said plane with corners at said first, second, and third placed points, and a second side face of which is parallel with said first side face and has one corner at said fourth placed point.
NP32. A visualization arrangement according to numbered paragraph NP31, configured to respond to one or more additional point-placing commands received from the user between steps c) and d), which additional point-placing commands indicate points in said computer-generated image located within said plane, by displaying respective one or more additional placed points, so that in step d) said first side face has corners also at said additional placed points.
NP33. A visualization arrangement according to numbered paragraph NP31 or NP32, wherein said plane coincides with a planar surface displayed in said computer-generated image.
NP34. A visualization arrangement according to any of numbered paragraphs NP31 to NP33, configured to selectively operate in one of two modes, of which a first mode is a planar mode for making said user place at least one of said first, second, third, and possible additional placed points on said plane within said computer-generated image, and a second mode is a non-planar mode for making said user place said fourth placed point out of said plane within said computer-generated image.
NP35. A visualization arrangement according to numbered paragraph NP34, configured to enter any of said first and second modes as a response to a mode-selecting command received from the user.
NP36. A visualization arrangement according to any of numbered paragraphs NP31 to NP35, configured to display within said computer-generated image, as a response to any of said point-placing commands, a draft connector line connecting a previously placed point to a displayed cursor available for the user to indicate a location for the next point to be placed.
NP37. A visualization arrangement according to any of numbered paragraphs NP31 to NP36, configured to display within said computer-generated image completed connector lines between consecutively placed points.
NP38. A visualization arrangement according to any of numbered paragraphs NP31 to NP37, configured to display one or more drawing aids within said computer-generated image, said drawing aids representing graphical regularities for placing any of said points, and also configured to attract a cursor available for the user to indicate a location for the next point to be placed towards said drawing aids, wherein said graphical regularities involve at least one of: a direction parallel with an existing direction in the computer-generated image; a direction at a predetermined angle to an existing direction in the computer-generated image; a distance from a previously placed point equal to an existing distance in the computer-generated image; a predetermined shortest distance from an existing planar surface in the computer-generated image; a predetermined shortest distance from a displayed element in said computer-generated image.
NP39. A method for displaying a computer-generated image of a three-dimensional environment to a human user, the method comprising: a) responding to a first point-placing command received from a user by displaying a first placed point at a first location within said computer-generated image, b) responding to a second, subsequent point-placing command received from the user by displaying a second placed point at a second location within said computer-generated image, c) responding to a third, subsequent point-placing command received from the user by displaying a third placed point at a third location within said computer-generated image, so that said displayed first, second, and third placed points define a plane in said computer-generated image, and d) responding to a fourth, subsequent point-placing command received from the user, which fourth point-placing command indicates a point in said computer-generated image not located within said plane, by displaying, within said computer-generated image, a three-dimensional virtual element in the form of a prism, a first side face of which is within said plane with corners at said first, second, and third placed points, and a second side face of which is parallel with said first side face and has one corner at said fourth placed point.
MAKING A VIRTUAL ROOM FOR BETTER PLACEMENT
It is typically expected that an AR application displays the real-world objects and the virtual elements in a computer-generated image so that they appear at "natural" locations. For example, a virtual element representative of a piece of furniture should be located on the floor of the room, or another virtual element representative of a painting should hang on a wall. In order to successfully serve this purpose the visualization arrangement should be aware of the coordinates, in some coordinate system it uses to make the calculations, of both the planar surfaces that make up the three-dimensional environment (like floor, walls, and ceiling) and the displayed virtual elements. In order to become aware of the coordinates of said planar surfaces the visualization arrangement may comprise, or be capable of communicating with, one or more cameras and/or corresponding image-acquisition systems.
Problems may arise if the visual appearance of the surfaces that make up the three-dimensional environment is such that a mapping algorithm in the visualization arrangement cannot easily recognize them. As an example, consider a completely white room (white floor; white walls; white ceiling) that is illuminated with relatively even and smooth lighting. The scarcity of distinctive, visually recognizable points in the room may mean that when an imaging device of the visualization arrangement acquires digital images, a mapping algorithm may not easily find out where, for example, the borderline between a ceiling and a wall is. Or a surface-scanning algorithm may fail to appropriately recognize a piece of surface, because it does not find recognizable texture on it. As a result, the mapping algorithm may fail to produce feasible coordinates of the planar surfaces of the room, and/or the coordinates it produces may be erroneous. Many AR engines do not allow placing further virtual elements on "non-existing" surfaces, i.e. ones that have not been correctly recognized as surfaces. The problem may pertain to the whole room or to a part of it, for example so that even if the floor was recognized appropriately, the ceiling and the upper parts of the walls were not.
According to a thirteenth aspect there is provided a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user. The visualization arrangement is configured to use an image acquisition subsystem to obtain a digital image of said three-dimensional environment, and display said digital image to a user as a first computer-generated image of said three-dimensional environment. The visualization arrangement is configured to respond to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system. The visualization arrangement is configured to construct a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and display said second computer-generated image to the user.
According to an embodiment, the visualization arrangement is configured to perform said determining of said coordinates so that the determined coordinates are located on a limited set of planar surfaces in said three-dimensional coordinate system.
According to an embodiment the visualization arrangement is configured to perform said determining of said coordinates so that the determined coordinates are located on a set of mutually perpendicular planar surfaces in said three-dimensional coordinate system.
According to an embodiment the visualization arrangement is configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image.
According to an embodiment the visualization arrangement is configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
According to an embodiment there are the following features in the visualization arrangement:
- the visualization arrangement comprises a spatial information subsystem configured to identify viewing directions of said user within said three-dimensional environment,
- the visualization arrangement is configured to receive and process point-fixing commands as each containing indications of at least two identified viewing directions, so that the point identified by such a point-fixing command is a point at which the identified viewing directions intersect.
According to an embodiment the visualization arrangement is configured to display to the user one or more virtual elements located in a fixed spatial relationship with reference to said planar surfaces that intersect at one or more of said fixed points.
According to an embodiment the visualization arrangement is configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background, so that the virtual elements that have a fixed spatial relationship with reference to said planar surfaces appear to be within said displayed first computer-generated image.
According to a fourteenth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user. The method comprises:
- using an image acquisition subsystem to obtain a digital image of said three-dimensional environment,
- displaying said digital image to a user as a first computer-generated image of said three-dimensional environment,
- responding to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system,
- constructing a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and
- displaying said second computer-generated image to the user.
Fig. 14 illustrates a digital image of a three-dimensional environment. A visualization arrangement may have used an image acquisition subsystem, such as one or more digital cameras, to obtain the digital image in fig. 14. The digital image may be a still image, or it may be a piece of a video stream so that what is seen in fig. 14 is a representative frame of the video stream. The visualization arrangement may be configured to display the digital image of fig. 14 to the user. For purposes that will become clear later we call this image the first computer-generated image of the three-dimensional environment. It does not need to contain any three-dimensional information; it may be a simple set of digital image information read from the recording CCD sensor of a digital camera. It is nevertheless computer-generated in the sense that an image-processing computer was needed to read the raw information from the CCD hardware, to store it at least temporarily, and to direct it onto a display in a way that suits the displaying technology of that display so that a visually observable image results.
In this example a characteristic feature of the first computer-generated image of fig. 14 is the scarcity of visually discernible details or texture on the planar surfaces of the three-dimensional environment. The floor, walls, and ceiling of the room are all white, with only the window 1401 on one wall offering any details on the otherwise continuous, white surfaces. Another feature is that there is a chair 1402 in the middle, partially obstructing the view to the point 1403 where the two walls and the floor of the room intersect. As a result it may be a difficult task for a mapping and/or scanning algorithm to produce any reasonably accurate digital model of the room.
In order to enable more accurate digital modelling of the three-dimensional environment the visualization arrangement is configured to receive point-fixing commands from the user. A point-fixing command is one that identifies a point that is or represents a real-life point of the three-dimensional environment. The visualization arrangement is configured to respond to every point-fixing command by determining and storing corresponding coordinates of fixed points in a three-dimensional coordinate system. In a way, the user can tell the visualization arrangement where certain important points actually are, if the visualization arrangement is not capable of recognizing them in a purely automatic manner, using its scanning and/or mapping algorithms.
An example of a point of the three-dimensional environment that could be identified with a point-fixing command is the point 1404 where the two walls and the ceiling intersect in fig. 14. Other similar points are shown as 1405, 1406, 1407, and 1408. Also the point 1403 mentioned above could be identified with a point-fixing command.
As shown in fig. 15 the visualization arrangement is configured to construct another computer-generated image of the three-dimensional environment. This image is called here the second computer-generated image. It is a purely virtual construction in the sense that - at least initially - it does not even try to convey a true visual image of what the three-dimensional environment actually looks like. Rather, it conveys a conceptual image of what kind of planar surfaces there are that delimit the three-dimensional environment, a digital image of which originally appeared as the first computer-generated image. The planar surfaces in fig. 15 are the right wall 1501, the left wall 1502, the floor 1503, and the ceiling 1504. Here it is assumed that the visualization arrangement was capable of properly scanning and mapping the extremities of the window 1401 so that it also appears as a gap in the left wall 1502 of fig. 15. However, it is also possible that the left wall 1502 is just a continuous surface without paying any attention to the existence of the window 1401.
The second computer-generated image can be displayed to the user, so that the user is able to check that it has been successfully generated and models the actual surfaces of the three-dimensional environment with reasonable accuracy. As the visualization arrangement now has exact knowledge of the coordinates of each planar surface in the second computer-generated image, it can thereafter be used as the spatial frame of reference in which virtual elements can be placed and viewed.
As a first assumption it can be supposed that the three-dimensional environment to be modelled is delimited by planar surfaces. Therefore, in order to simplify the calculations somewhat, the visualization arrangement may be configured to perform the determining of coordinates (in response to receiving the point-fixing commands from the user) so that the determined coordinates are located on a limited set of planar surfaces in the three-dimensional coordinate system, such as a set of mutually perpendicular planar surfaces for example. If there is enough processing power in the visualization arrangement this restriction can be partly or completely lifted, so that even more complicated three-dimensional spaces can be modelled accurately.
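Under the simplifying assumption that the room is a box whose walls align with the coordinate axes, constraining the user-fixed points to a set of mutually perpendicular planes can be sketched as follows (illustration only; the fitting strategy is just one possible choice):

```python
import numpy as np

def constrain_to_box(fixed_points):
    """Snap user-fixed points onto the six faces of an axis-aligned box.

    The box extent is taken from the extreme coordinates of the points;
    each coordinate is then moved to the nearer of the two face values on
    its axis, so every stored point lies on the perpendicular plane set.
    """
    pts = np.asarray(fixed_points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    snapped = np.where(pts - lo < hi - pts, lo, hi)
    return snapped, (lo, hi)
```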
Above, with reference to fig. 14, it was assumed that the visualization arrangement is configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image; see the use of a cursor 1409 to indicate a point 1404 in fig. 14. However, if the first computer-generated image is just a two-dimensional distribution of colour and brightness information values, mathematically it may be complicated to find a set of three-dimensional coordinates for the corresponding points so that they would both fit their appearance in the first computer-generated image and define properly a limited set of intersecting planar surfaces in the second computer-generated image. Therefore, additionally or alternatively the visualization arrangement may be configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
One way of producing such indications of real-world points in the three-dimensional environment involves some aspects of inertial navigation. The visualization arrangement may comprise a spatial information subsystem that is configured to identify viewing directions of the user within the three-dimensional environment. The visualization arrangement may be additionally configured to receive and process the point-fixing commands as each containing indications of at least two identified viewing directions. The point identified by such a point-fixing command is a point at which the identified viewing directions intersect. The spatial information subsystem may comprise for example spatial and directional sensors that tell at which location (with reference to a three-dimensional coordinate system) the head of the user (or a smartphone of the user, or any other device used by the user) currently is and into which direction the user (or a camera contained in the device) is currently looking. From two sets of such information it is relatively easy to calculate the point of the coordinate system where the two viewing directions intersect. Thus the user may first go to a first location and look at a fixed point, and thereafter go to a second location and look at the same fixed point, and the visualization arrangement can calculate where in the current coordinate system that fixed point is.
In fig. 15 it is assumed that the visualization arrangement is configured to display to the user one or more virtual elements 1505, 1506, and 1507 located in a fixed spatial relationship with reference to the planar surfaces 1501, 1502, 1503, and 1504 that intersect at the points 1403 to 1408 and thus delimit the representative three-dimensional space in fig. 15. As pointed out above, this is now easy and free of the limitations of prior art, where objects could not be placed on poorly or incompletely detected surfaces, because the visualization arrangement has good knowledge of the coordinates of said planar surfaces. However, the user might be interested to see how the virtual elements 1505, 1506, and 1507 would look in the first computer-generated image; in other words, how the room would look if it additionally contained objects represented by the virtual elements 1505, 1506, and 1507. For the last-mentioned purpose the visualization arrangement may be configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background. This way the virtual elements 1505, 1506, and 1507 that actually have a fixed spatial relationship only with reference to said planar surfaces of the second computer-generated image appear to be within said displayed first computer-generated image.
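The calculation of the intersection point from two viewing directions, described above, can be sketched as follows. Since measured rays rarely intersect exactly, this illustrative code (using the standard closest-points-of-two-lines algebra; all names are hypothetical) returns the midpoint of the shortest segment between the two lines:

```python
import numpy as np

def fixed_point_from_views(o1, d1, o2, d2):
    """Point 'at which' two viewing rays intersect.

    o1, o2 are the two user positions and d1, d2 the corresponding viewing
    directions. Because real measurements rarely intersect exactly, the
    midpoint of the shortest segment between the two lines is returned.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    d, e = np.dot(d1, w0), np.dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:  # near-parallel viewing directions
        raise ValueError("viewing directions do not determine a point")
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return (o1 + t * d1 + o2 + s * d2) / 2.0
```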
The embodiments that relate to the digital modelling of even those three-dimensional environments that were poorly applicable to imaging with prior art AR systems may be described in a concise way as in the following consecutively numbered paragraphs.
NP41. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- use an image acquisition subsystem to obtain a digital image of said three-dimensional environment,
- display said digital image to a user as a first computer-generated image of said three-dimensional environment,
- respond to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system,
- construct a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and
- display said second computer-generated image to the user.
NP42. A visualization arrangement according to numbered paragraph NP41, configured to perform said determining of said coordinates so that the determined coordinates are located on a limited set of planar surfaces in said three-dimensional coordinate system.
NP43. A visualization arrangement according to numbered paragraph NP42, configured to perform said determining of said coordinates so that the determined coordinates are located on a set of mutually perpendicular planar surfaces in said three-dimensional coordinate system.
NP44. A visualization arrangement according to any of the numbered paragraphs NP41 to NP43, configured to receive and process each of said point-fixing commands as containing an indication of a corresponding point in the first computer-generated image.
NP45. A visualization arrangement according to any of the numbered paragraphs NP41 to NP44, configured to receive and process at least a subset of said point-fixing commands as each containing an indication of a real-world point in said three-dimensional environment.
NP46. A visualization arrangement according to numbered paragraph NP45, wherein:
- the visualization arrangement comprises a spatial information subsystem configured to identify viewing directions of said user within said three-dimensional environment,
- the visualization arrangement is configured to receive and process point-fixing commands as each containing indications of at least two identified viewing directions, so that the point identified by such a point-fixing command is a point at which the identified viewing directions intersect.
NP47. A visualization arrangement according to any of the numbered paragraphs NP41 to NP46, configured to display to the user one or more virtual elements located in a fixed spatial relationship with reference to said planar surfaces that intersect at one or more of said fixed points.
NP48. A visualization arrangement according to numbered paragraph NP47, configured to make said second computer-generated image at least partially transparent and to display to the user said first computer-generated image as a background, so that the virtual elements that have a fixed spatial relationship with reference to said planar surfaces appear to be within said displayed first computer-generated image.
NP49. A method for displaying a computer-generated image of a three-dimensional environment to a human user, the method comprising:
- using an image acquisition subsystem to obtain a digital image of said three-dimensional environment,
- displaying said digital image to a user as a first computer-generated image of said three-dimensional environment,
- responding to a plurality of point-fixing commands received from the user, which point-fixing commands each identify a point within said three-dimensional environment, by determining and storing a corresponding plurality of coordinates of fixed points in a three-dimensional coordinate system,
- constructing a second computer-generated image of said three-dimensional environment, which second computer-generated image comprises planar surfaces intersecting at one or more of said fixed points, and
- displaying said second computer-generated image to the user.
SAVING AND RESTORING PREVIOUSLY PLACED OBJECTS IN AR
When the user has worked with an AR application and observed some virtual elements as if they appeared in the real-world three-dimensional environment, it may sometimes be desirable that he or she could come back later, for example to compare a more recently created virtual indoor design to a previous one. However, this may be difficult with known AR arrangements. There is the inherent incompatibility that the real-world environment is real and permanent, while the virtual elements only exist for sensory examination by the user while they are displayed.
According to a fifteenth aspect there is provided a visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user. The visualization arrangement is configured to display to the user a computer-generated image acquired in real time of a real-life three-dimensional environment. The visualization arrangement is configured to display within said computer-generated image at least one virtual element as if located at a particular location within said real-life three-dimensional environment, and to store a digital representation of the displayed virtual element along with location information indicative of said particular location within said real-life three-dimensional environment. The visualization arrangement is configured to, at a later moment of time, display to the user a new computer-generated image acquired in real time of the same real-life three-dimensional environment, and retrieve from storage said digital representation of the previously displayed virtual element and, using said location information, display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
According to an embodiment the visualization arrangement is configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, and also configured to store said location information as spatial coordinates in the same coordinate system.
According to an embodiment the visualization arrangement is configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment.
According to an embodiment the visualization arrangement is configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
According to a sixteenth aspect there is provided a method for displaying a computer-generated image of a three-dimensional environment to a human user. The method comprises:
- displaying to the user a computer-generated image acquired in real time of a real-life three-dimensional environment,
- displaying within said computer-generated image at least one virtual element as if located at a particular location within said real-life three-dimensional environment,
- storing a digital representation of the displayed virtual element along with location information indicative of said particular location within said real-life three-dimensional environment,
- at a later moment of time displaying to the user a new computer-generated image acquired in real time of the same real-life three-dimensional environment, and
- retrieving from storage said digital representation of the previously displayed virtual element and, using said location information, displaying the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
In fig. 16 it is assumed that the user is looking at a real-life room through the visualization arrangement. The visualization arrangement is configured to display to the user the computer-generated image 1601 of fig. 16, which is an image acquired in real time of the real-life three-dimensional environment. The visualization arrangement is also configured to display, within the computer-generated image 1601, at least one virtual element as if located at a particular location within the real-life three-dimensional environment. As examples, the three virtual elements 1505, 1506, and 1507 previously discussed with reference to fig. 15 appear in the computer-generated image 1601. Note the scaling between the images: the computer-generated image in fig. 15 was completely virtual and only showed the virtual planar surfaces determined by the fixed points, so these could be displayed to the user even if the walls of the room did not continue that far in reality. Thus when the virtual elements 1505, 1506, and 1507 are placed at the same particular locations with reference to the real-world environment in the computer-generated image of fig. 16, only a small part of virtual element 1505 is visible. Also virtual element 1507 is partially cut out in fig. 16.
The visualization arrangement is configured to store a digital representation of each of the displayed virtual elements 1505, 1506, and 1507 along with location information indicative of their particular locations within the real-life three-dimensional environment. The digital representation here means all information that is needed to re-generate a completely similar virtual element later. Concerning visual appearance, size, and such features the digital representation may be explicit, and/or it may contain a reference to a database in which information of the features of virtual elements is stored.
If now the user left the room, he or she could obviously not see the inside of the room through the visualization arrangement. However, at a later moment of time, if the user (or another user who has access to the same visualization arrangement) comes back to the room, he or she can look at the same room again through the visualization arrangement. At that later time, the visualization arrangement would display to the user a new computer-generated image, acquired in real time, of the same real-life three-dimensional environment. Now the visualization arrangement may be configured to retrieve from storage the digital representation of any or all previously displayed virtual element(s). Using the stored location information, the visualization arrangement may display the same virtual element 1505, 1506, and/or 1507 as if again located at the same particular location within the real-life three-dimensional environment.
The ways in which the location information is stored and processed are not limited by this description. As an example, the visualization arrangement may be configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, like GPS, Glonass, Galileo, and/or BeiDou. The visualization arrangement may then also be configured to store said location information as spatial coordinates in the same coordinate system. By comparing the stored coordinates it is then relatively easy to place the virtual element at the same place again later.
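A minimal sketch of such coordinate-based storing and restoring could look as follows in Python; this is an illustration only, and the record fields as well as the scene.spawn call are hypothetical stand-ins for whatever element library and rendering interface the visualization arrangement uses:

```python
import json

def save_placed_elements(elements, path):
    """Store a minimal digital representation of each displayed virtual
    element together with its location in the shared coordinate system."""
    records = [{
        "model_id": e.model_id,        # reference into the element library
        "position": list(e.position),  # coordinates in the stored system
        "rotation": list(e.rotation),  # e.g. a quaternion
        "scale": e.scale,
    } for e in elements]
    with open(path, "w") as f:
        json.dump(records, f)

def restore_placed_elements(path, scene):
    """Re-create each previously displayed element at its stored location."""
    with open(path) as f:
        records = json.load(f)
    for r in records:
        scene.spawn(r["model_id"], r["position"], r["rotation"], r["scale"])
```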
As another example, the visualization arrangement may be configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment. Placing the same (or similar) virtual element at the same location again would then be based on graphical comparison between images: only if placed correctly will the virtual element look the same in the new computer-generated image as in the old one. The visualization arrangement may be configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment. An example of such a trackability function is the one included in the known Google ARCore software.
The embodiments that relate to the later retrieving and re-locating of previously used virtual elements may be described in a concise way as in the following consecutively numbered paragraphs.
NP51. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- display to the user a computer-generated image acquired in real time of a real-life three-dimensional environment,
- display within said computer-generated image at least one virtual element as if located at a particular location within said real-life three-dimensional environment,
- store a digital representation of the displayed virtual element along with location information indicative of said particular location within said real-life three-dimensional environment,
- at a later moment of time display to the user a new computer-generated image acquired in real time of the same real-life three-dimensional environment, and
- retrieve from storage said digital representation of the previously displayed virtual element and, using said location information, display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
NP52. A visualization arrangement according to numbered paragraph NP51, configured to store spatial coordinates of at least a part of said real-life three-dimensional environment in a coordinate system, and also configured to store said location information as spatial coordinates in the same coordinate system.
NP53. A visualization arrangement according to numbered paragraph NP51 or NP52, configured to store said location information as a stored digital image of the virtual element as if located in the real-life three-dimensional environment.
NP54. A visualization arrangement according to numbered paragraph NP53, configured to use a trackability function of an AR core applied by the visualization arrangement to display the virtual element as if again located at the same particular location within said real-life three-dimensional environment.
NP55. A method for displaying a computer-generated image of a three-dimensional environment to a human user, the method comprising:
- displaying to the user a computer-generated image acquired in real time of a real-life three-dimensional environment,
- displaying within said computer-generated image at least one virtual element as if located at a particular location within said real-life three-dimensional environment,
- storing a digital representation of the displayed virtual element along with location information indicative of said particular location within said real-life three-dimensional environment,
- at a later moment of time displaying to the user a new computer-generated image acquired in real time of the same real-life three-dimensional environment, and
- retrieving from storage said digital representation of the previously displayed virtual element and, using said location information, displaying the virtual element as if again located at the same particular location within said real-life three-dimensional environment.

Claims

1. Visualization arrangement for displaying a computer-generated image of a three-dimensional environment to a human user, the visualization arrangement being configured to:
- repeatedly obtain an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and
- respond to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction; characterized in that the visualization arrangement is configured to:
- respond to a menu-displaying command received through user controls of said visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user, and
- maintain said three-dimensional symbols that constitute said menu at the location at which said menu was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.
2. Visualization arrangement according to claim 1, configured to:
- display, in said menu, only a subset of a larger number of interrelated options available to said user through said menu, and
- respond to a menu-scrolling command received through said user controls by scrolling the subset of the interrelated options selected for display within said larger number of interrelated options and each time displaying only those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded.
3. Visualization arrangement according to claim 2, configured to:
- display those three-dimensional symbols in said menu that represent those of said larger number of interrelated options to which said scrolling had proceeded in the form of slices of a displayed three-dimensional part of a pie.
4. Visualization arrangement according to claim 3, configured to:
- respond to said menu-scrolling command by rotating said displayed slices in said computer-generated image around a center point of said displayed three-dimensional part of the pie, so that in each case the slice that was furthest in the direction of rotation before said rotating is faded out of view and a new slice, representing another one of said interrelated options, is brought into view on the other side of the displayed three-dimensional part of the pie.
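
A minimal sketch, again with invented names and not taken from the application itself, of the pie-slice scrolling of claims 2 to 4: only a window of the full option list is shown at any time, and each scroll command rotates the slices so that the furthest slice fades out while the next option comes into view on the other side.

SLICE_ANGLE = 30.0     # degrees occupied by one displayed slice
VISIBLE_SLICES = 5     # subset of the larger number of options shown at once

options = ["wall", "door", "window", "sofa", "table", "lamp", "rug"]
first = 0              # index of the first currently visible option

def scroll(direction: int) -> list:
    """Rotate the pie by one slice; direction is +1 or -1."""
    global first
    first = (first + direction) % len(options)
    visible = [options[(first + i) % len(options)]
               for i in range(VISIBLE_SLICES)]
    # each visible option is rendered as a slice at its own angle around
    # the pie's center point; the slice that scrolled out is faded away
    angles = [i * SLICE_ANGLE for i in range(VISIBLE_SLICES)]
    return list(zip(visible, angles))

print(scroll(+1))      # "wall" fades out, "lamp" rotates into view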
5. Visualization arrangement according to claim 1, configured to:
- display, in said menu, a number of three-dimensional symbols corresponding to options available to said user through said menu in a rotationally symmetric planar array, the planar form of which is oriented parallel to a plane displayed at or close to the location of said menu in said computer-generated image, and
- respond to a menu-scrolling command received through said user controls by rotating said rotationally symmetric planar array of three-dimensional symbols around its rotational axis of symmetry so that said rotating, when continued, brings each of said three-dimensional symbols in turn to the front in said computer-generated image.
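
As a hypothetical counterpart to claim 5, the following sketch arranges the menu symbols in a rotationally symmetric planar ring and spins the ring about its axis of symmetry so that each symbol comes to the front in turn; the radius, symbol names, and rotation step are all assumptions.

import math

RADIUS = 0.4                   # ring radius, in the plane of the menu
symbols = ["copy", "move", "delete", "paint"]
phase = 0.0                    # current rotation of the ring, in radians

def ring_positions(angle: float) -> list:
    """Positions of the symbols on the ring for a given rotation."""
    step = 2 * math.pi / len(symbols)
    return [(s,
             RADIUS * math.sin(angle + i * step),  # x within the menu plane
             RADIUS * math.cos(angle + i * step))  # depth: maximum = front
            for i, s in enumerate(symbols)]

def scroll_once() -> str:
    """Advance the ring by one symbol and report which is now in front."""
    global phase
    phase -= 2 * math.pi / len(symbols)
    return max(ring_positions(phase), key=lambda t: t[2])[0]

print(scroll_once())           # "move" is rotated to the front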
6. Visualization arrangement according to any of the preceding claims, configured to:
- display, at a first location within said computer-generated image, a computer-generated first virtual element as if said first virtual element was an object located at the corresponding location within said three-dimensional environment,
- respond to a first command received through user controls of said visualization arrangement by displaying within said computer-generated image a graphical indicator, said graphical indicator indicating a direction from said first virtual element and a distance from said first virtual element,
- respond to a direction-changing command received through said user controls by changing the direction from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed direction,
- respond to a distance-changing command received through said user controls by changing the distance from said first virtual element that said graphical indicator indicates and displaying the graphical indicator as indicating the changed distance, and
- respond to a second command received through said user controls by placing and displaying a second virtual element within said computer-generated image at the direction and distance from the first virtual element that were indicated by said graphical indicator.
7. Visualization arrangement according to claim 6, configured to:
- as a response to said second command, place and display said second virtual element as a copy of said first virtual element in said computer-generated image with the same orientation as the first virtual element.
8. Visualization arrangement according to claim 7, wherein depending on the content of said second command the visualization arrangement is configured to either leave said first virtual element as it was in the computer-generated image, thus causing the first virtual element to be duplicated by the second virtual element, or delete said first virtual element from the computer-generated image, thus causing the first virtual element to be replaced by the second virtual element.
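
The placement mechanism of claims 6 to 8 can be pictured with the following sketch (hypothetical names throughout, and a simplified two-dimensional ground plane instead of a full pose): the second element is created at the direction and distance indicated by the graphical indicator, as a copy with the same orientation, and the first element is either kept (duplicate) or deleted (replace) depending on the second command.

import math
from dataclasses import dataclass, replace as dc_replace

@dataclass
class Element:
    model_id: str
    position: tuple   # (x, y, z) world coordinates
    yaw: float        # orientation about the vertical axis, in radians

scene = [Element("chair", (0.0, 0.0, 0.0), 0.0)]

def place_second(first: Element, direction: float, distance: float,
                 mode: str = "duplicate") -> Element:
    """direction/distance are the values shown by the graphical indicator."""
    x = first.position[0] + distance * math.cos(direction)
    z = first.position[2] + distance * math.sin(direction)
    second = dc_replace(first, position=(x, first.position[1], z))
    if mode == "replace":
        scene.remove(first)   # first element deleted: it is replaced
    scene.append(second)      # same model and same orientation (yaw)
    return second

place_second(scene[0], direction=math.radians(90), distance=1.2)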
9. Visualization arrangement according to any of claims 6 to 8, configured to:
- as a response to the direction indicated by said graphical indicator being parallel with a first existing linear element in the computer-generated image, display a graphical highlighter of said first existing linear element.
10. Visualization arrangement according to any of claims 6 to 9, configured to:
- respond to an alignment-guide-selecting command received through said user controls - said alignment-guide-selecting command being one that identifies a second existing linear element in said computer-generated image - by changing the direction from said first virtual element that said graphical indicator indicates so that it coincides with the direction of said second existing linear element and displaying the graphical indicator as indicating the changed direction.
11. Visualization arrangement according to any of claims 6 to 10, configured to:
- as a response to the distance indicated by said graphical indicator being equal to a first dimension of a first existing element in the computer-generated image, display a graphical highlighter of said first existing element.
12. Visualization arrangement according to any of claims 6 to 11, configured to:
- respond to a distance-guide-selecting command received through said user controls - said distance-guide-selecting command being one that identifies a second existing element in said computer-generated image that has a second dimension - by changing the distance from said first virtual element that said graphical indicator indicates so that it coincides with said second dimension of said second existing element and displaying the graphical indicator as indicating the changed distance.
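
To make the guide behaviour of claims 9 to 12 concrete, here is a small sketch under the assumption that directions can be reduced to angles in the ground plane and dimensions to scalar lengths; the element names and tolerances are invented. Directions within a small tolerance of an existing linear element are highlighted and can be snapped to, and the same applies to distances that equal an existing element's dimension.

import math

ANGLE_TOL = math.radians(2.0)  # tolerance for treating directions as parallel
DIST_TOL = 0.01                # tolerance for treating lengths as equal, meters

linear_elements = {"wall_edge": math.radians(90), "table_edge": 0.0}
dimensions = {"sofa_width": 1.80, "door_height": 2.10}

def parallel_highlights(direction: float) -> list:
    """Linear elements parallel to the indicator's direction (claim 9)."""
    return [name for name, d in linear_elements.items()
            if abs((direction - d + math.pi / 2) % math.pi - math.pi / 2)
            < ANGLE_TOL]

def snap_direction(guide: str) -> float:
    """Alignment-guide-selecting command: adopt the guide's direction (claim 10)."""
    return linear_elements[guide]

def equal_distance_highlights(distance: float) -> list:
    """Existing elements whose dimension equals the distance (claim 11)."""
    return [name for name, d in dimensions.items()
            if abs(d - distance) < DIST_TOL]

def snap_distance(guide: str) -> float:
    """Distance-guide-selecting command: adopt the guide's dimension (claim 12)."""
    return dimensions[guide]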
13. Visualization arrangement according to any of claims 6 to 12, configured to:
- respond to a third command, if received through said user controls before said second command, by displaying within said computer-generated image a list of operations available for performing, one of said operations being an operation of placing the second virtual element, and
- perform said placing and displaying of the second virtual element only if said second command indicates a selection by the user of said operation of placing the second virtual element.
14. A method for displaying a computer-generated image of a three-dimensional environment to a human user, the method comprising:
- repeatedly obtaining an indication of a viewing direction into which said user is currently looking in said three-dimensional environment, and
- responding to said obtained indication of said viewing direction by centering a displayed portion of said computer-generated image on said viewing direction; characterized in that the method comprises:
- responding to a menu-displaying command received through user controls of a visualization arrangement by displaying a menu as a number of displayed three-dimensional symbols within said displayed portion of said computer-generated image, wherein said menu is a symbolic representation of a number of interrelated options available to said user, and
- maintaining said three-dimensional symbols that constitute said menu at the location at which said menu was displayed within said computer-generated image, as if said three-dimensional symbols were objects located at the corresponding location in the three-dimensional environment, while continuing to obtain new indications of said viewing direction and responding to such obtained indications by centering said displayed portion of said computer-generated image on the respective viewing direction.
15. A computer program product, comprising one or more sets of one or more machine-readable instructions that, when executed by one or more processors, cause the implementation of a method according to claim 14.
EP19809884.0A 2018-10-23 2019-10-17 Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments Pending EP3953793A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20187153 2018-10-23
PCT/FI2019/050743 WO2020084192A1 (en) 2018-10-23 2019-10-17 Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments

Publications (1)

Publication Number Publication Date
EP3953793A1 (en)

Family

ID=68699476

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19809884.0A Pending EP3953793A1 (en) 2018-10-23 2019-10-17 Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments

Country Status (2)

Country Link
EP (1) EP3953793A1 (en)
WO (1) WO2020084192A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240046341A1 (en) * 2021-02-06 2024-02-08 Sociograph Solutions Private Limited System and method to provide a virtual store-front

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8601389B2 (en) * 2009-04-30 2013-12-03 Apple Inc. Scrollable menus and toolbars
EP3109734A1 (en) * 2015-06-22 2016-12-28 Samsung Electronics Co., Ltd Three-dimensional user interface for head-mountable display
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario

Also Published As

Publication number Publication date
WO2020084192A1 (en) 2020-04-30

Similar Documents

Publication Publication Date Title
US11227446B2 (en) Systems, methods, and graphical user interfaces for modeling, measuring, and drawing using augmented reality
US20220084279A1 (en) Methods for manipulating objects in an environment
US10852913B2 (en) Remote hover touch system and method
US11557102B2 (en) Methods for manipulating objects in an environment
CN110603509B (en) Joint of direct and indirect interactions in a computer-mediated reality environment
CN105637559B (en) Use the structural modeling of depth transducer
CA2893586C (en) 3d virtual environment interaction system
KR101554082B1 (en) Natural gesture based user interface methods and systems
CN114080585A (en) Virtual user interface using peripheral devices in an artificial reality environment
JP2024509722A (en) User interaction in extended reality
CN114830066A (en) Device, method and graphical user interface for displaying applications in a three-dimensional environment
CN110476142A (en) Virtual objects user interface is shown
CN108431729A (en) To increase the three dimensional object tracking of display area
Kolsch et al. Multimodal interaction with a wearable augmented reality system
US10359906B2 (en) Haptic interface for population of a three-dimensional virtual environment
US11893206B2 (en) Transitions between states in a hybrid virtual reality desktop computing environment
KR20150133585A (en) System and method for navigating slices of a volume image
Debarba et al. Disambiguation canvas: A precise selection technique for virtual environments
US20220253808A1 (en) Virtual environment
WO2020084192A1 (en) Method, arrangement, and computer program product for three-dimensional visualization of augmented reality and virtual reality environments
CN109643182B (en) Information processing method and device, cloud processing equipment and computer program product
US20210407185A1 (en) Generating a Semantic Construction of a Physical Setting
Pan et al. Exploring the Use of Smartphones as Input Devices for the Mixed Reality Environment
Pietroszek 3D Pointing with Everyday Devices: Speed, Occlusion, Fatigue
Martinet et al. Design and Evaluation of 3D Positioning Techniques for Multi-touch Displays

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210825

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR