EP3629134A1 - Augmented reality interaction techniques - Google Patents
- Publication number
- EP3629134A1 (application EP19198602.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- industrial automation
- virtual
- user
- automation device
- visualization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
- G05B19/41885—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by modeling, simulation of the manufacturing system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/163—Wearable computers, e.g. on a belt
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/32—Operator till task planning
- G05B2219/32014—Augmented reality assists operator in maintenance, repair, programming, assembly, use of head mounted display with 2-D 3-D display and voice feedback, voice and gesture command
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Manufacturing & Machinery (AREA)
- Software Systems (AREA)
- Computer Graphics (AREA)
- Quality & Reliability (AREA)
- Automation & Control Theory (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
- The disclosure relates generally to the design of industrial systems. More particularly, embodiments of the present disclosure are related to systems and methods for detecting user input within an augmented reality environment and displaying or modifying visualizations associated with an industrial automation device or an industrial system based on the detected user input.
- Augmented reality (AR) devices provide layers of computer-generated content superimposed on a visualization of a real-world environment to a user via a display. That is, an AR environment may provide a user with a combination of real-world content and computer-generated content. Augmented reality devices may include, for example, a head mounted device, smart glasses, a virtual retinal display, a contact lens, a computer, or a hand-held device, such as a mobile phone or a tablet. As AR devices become more widely available, these devices may be used to assist operators in industrial automation environments to perform certain tasks. As such, it is recognized that improved systems and methods for performing certain operations in the AR environment may better enable the operators to perform their job functions more efficiently.
- This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
- A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
- In one embodiment, a system for interacting with virtual objects in an augmented reality environment may include a head mounted device. The head mounted device may receive a first set of image data associated with a surrounding of a user and generate a first visualization comprising a plurality of virtual compartments. Each virtual compartment may be associated with one type of virtual industrial automation device and each virtual compartment may include a plurality of virtual industrial automation devices. Each virtual industrial automation device may depict a virtual object within the first set of image data and the virtual object may correspond to a physical industrial automation device. The head mounted device may display the first visualization via an electronic display and detect a gesture in a second set of image data that may include the surrounding of the user and the first visualization. The gesture may be indicative of a selection of one of the plurality of virtual compartments. The head mounted device may generate a second visualization comprising a respective plurality of virtual industrial automation devices associated with the selection and display the second visualization via the electronic display.
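The compartment-selection flow described in this embodiment can be sketched as follows. The names here (`VirtualCompartment`, `handle_selection_gesture`, the sample device types) are illustrative assumptions, not terms from the claims; a real head mounted device would derive the selection index from detected gestures in image data rather than take it as an argument:

```python
from dataclasses import dataclass

@dataclass
class VirtualDevice:
    """A virtual object standing in for a physical industrial automation device."""
    name: str

@dataclass
class VirtualCompartment:
    """A selectable bin holding virtual devices of one type."""
    device_type: str
    devices: list

def render_compartments(compartments):
    """First visualization: one entry per virtual compartment."""
    return [c.device_type for c in compartments]

def handle_selection_gesture(compartments, selected_index):
    """Second visualization: the virtual devices inside the selected compartment."""
    chosen = compartments[selected_index]
    return [d.name for d in chosen.devices]

compartments = [
    VirtualCompartment("conveyor sections",
                       [VirtualDevice("straight"), VirtualDevice("U-shaped")]),
    VirtualCompartment("movers", [VirtualDevice("mover-A")]),
]
first_view = render_compartments(compartments)
second_view = handle_selection_gesture(compartments, 0)
```

The two-step structure mirrors the claim: the first visualization lists compartments, and the detected selection gesture drives generation of a second visualization showing that compartment's devices.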
- In another embodiment, a method may include receiving, via a processor, a first set of image data associated with a surrounding of a user and generating, via the processor, a first visualization comprising a virtual industrial automation device. The virtual industrial automation device may be configured to depict a virtual object within the first set of image data and the virtual object may correspond to a physical industrial automation device. The method may include displaying, via the processor, the first visualization via an electronic display and detecting, via the processor, a gesture in a second set of image data that may include the surrounding of the user and the first visualization. The gesture may be indicative of a request to move the virtual industrial automation device. The method may include tracking, via the processor, a movement of the user, generating, via the processor, a second visualization that may include an animation of the virtual industrial automation device moving based on the movement, and displaying, via the processor, the second visualization via the electronic display.
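A minimal sketch of the move-gesture flow from this embodiment, with the virtual device following tracked movement deltas frame by frame. The function name and the 2-D coordinates are simplifying assumptions; an actual system would track a full 6-degree-of-freedom pose from image data:

```python
def track_and_move(start_pos, tracked_deltas):
    """Build animation frames in which the virtual device follows the
    user's tracked movement, one frame per tracked delta."""
    frames = []
    x, y = start_pos
    for dx, dy in tracked_deltas:
        x, y = x + dx, y + dy
        frames.append((x, y))
    return frames

# Device starts at the origin; the user moves right twice, then up.
frames = track_and_move((0, 0), [(1, 0), (1, 0), (0, 2)])
```

Each frame here corresponds to one step of the second visualization's animation of the virtual industrial automation device moving based on the tracked movement.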
- In yet another embodiment, a computer-readable medium may include computer-executable instructions that, when executed, may cause a processor to receive a first set of image data associated with a surrounding of a user and generate a first visualization that may include a first virtual industrial automation device and a second virtual industrial automation device. The first and second virtual industrial automation devices may depict first and second respective virtual objects within the first set of image data, and the first and second respective virtual objects may correspond to a first and a second physical industrial automation device. The computer-readable medium may include computer-executable instructions that, when executed, may cause the processor to display the first visualization via an electronic display and detect a first gesture in a second set of image data that may include the surrounding of the user and the first visualization. The first gesture may be indicative of a movement of the first virtual industrial automation device toward the second virtual industrial automation device. 
The computer-readable medium may include computer-executable instructions that, when executed, may cause the processor to determine a compatibility between the first virtual industrial automation device and the second virtual industrial automation device; generate a second visualization that may include an animation of the first virtual industrial automation device coupling to the second virtual industrial automation device to create a joint virtual industrial automation device in response to determining that the two devices are compatible; generate a third visualization comprising an error notification in response to determining that the two devices are incompatible; and display the second visualization or the third visualization via the electronic display.
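The compatibility branch in this embodiment can be sketched as a simple dispatch. `COMPATIBLE_PAIRS` and the returned dictionaries are hypothetical stand-ins for the coupling animation and error notification visualizations; the patent does not specify how compatibility is actually determined:

```python
# Hypothetical compatibility catalog; a real system might consult
# device metadata instead of a hard-coded set.
COMPATIBLE_PAIRS = {("conveyor section", "mover")}

def snap_devices(first, second):
    """Return which visualization to display after a snap gesture."""
    if (first, second) in COMPATIBLE_PAIRS or (second, first) in COMPATIBLE_PAIRS:
        # Second visualization: animate coupling into a joint virtual device.
        return {"kind": "coupling_animation",
                "joint_device": f"{first}+{second}"}
    # Third visualization: error notification for incompatible devices.
    return {"kind": "error_notification",
            "message": f"{first} cannot couple to {second}"}
```

The point of the branch is that the gesture itself is identical in both cases; only the compatibility determination selects which of the two visualizations is generated and displayed.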
- These and other features, aspects, and advantages of the present disclosure may become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
-
FIG. 1 is a block diagram of an exemplary embodiment of an interactive augmented reality (AR) system that may be utilized to display and interact with a virtual representation of an industrial automation system in an AR environment, in accordance with an embodiment; -
FIG. 2 is a block diagram of an exemplary display device of the interactive AR system of FIG. 1, in accordance with an embodiment; -
FIG. 3 is a perspective view of an exemplary visualization that may be perceived by a user utilizing the display device of FIG. 2 before the performance of a first gaze gesture command, in accordance with an embodiment; -
FIG. 4 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing the first gaze gesture command and before performing a second gaze gesture command, in accordance with an embodiment; -
FIG. 5 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing the second gaze gesture command, in accordance with an embodiment; -
FIG. 6 is a flowchart of a method for displaying and modifying a visualization based on one or more gesture commands using the display device of FIG. 2, in accordance with an embodiment; -
FIG. 7 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing a gaze gesture command, in accordance with an embodiment; -
FIG. 8 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 before performing a grasp gesture command, in accordance with an embodiment; -
FIG. 9 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing a grasp gesture command, in accordance with an embodiment; -
FIG. 10 is a flowchart of a method for displaying and modifying a visualization based on a grasp gesture command using the display device of FIG. 2, in accordance with an embodiment; -
FIG. 11 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 before performing a push gesture command, in accordance with an embodiment; -
FIG. 12 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing a push gesture command, in accordance with an embodiment; -
FIG. 13 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 before performing a rotate gesture command, in accordance with an embodiment; -
FIG. 14 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing a rotate gesture command, in accordance with an embodiment; -
FIG. 15 is a flowchart of a method for displaying and modifying a visualization based on a push gesture command, a pull gesture command, or a rotate gesture command using the display device of FIG. 2, in accordance with an embodiment; -
FIG. 16 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 before performing a snap gesture command, in accordance with an embodiment; -
FIG. 17 is a perspective view of an exemplary visualization that may be perceived by the user utilizing the display device of FIG. 2 after performing a snap gesture command, in accordance with an embodiment; -
FIG. 18 is a flowchart of a method for displaying and modifying a visualization based on a snap gesture command or a separate gesture command using the display device of FIG. 2, in accordance with an embodiment; -
FIG. 19 is a perspective view of an exemplary visualization that may be perceived by a user utilizing the display device of FIG. 2 in a dynamic rotation mode, in accordance with an embodiment; and -
FIG. 20 is a perspective view of an exemplary visualization that may be perceived by a user utilizing the display device of FIG. 2 after performing a scale down command, in accordance with an embodiment.
- One or more specific embodiments of the present disclosure will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
- When introducing elements of various embodiments of the present disclosure, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements.
- The present disclosure is generally directed towards an interactive augmented reality (AR) system that may display one or more visualizations of a combination of real-world and computer-generated content in an AR environment to a user and detect various gestures performed by the user that correspond to respective interactions with the real-world and computer-generated content in the AR environment. After detecting a gesture performed by the user, the interactive AR system may modify a visualization associated with the AR environment based on the detected gesture. For example, the interactive AR system may alter the user's perception of the real world by displaying or modifying one or more virtual objects in the visualization associated with the AR environment presented to the user based on the detected gesture. In some embodiments, modifying a visualization or modifying a virtual object in a visualization may include generating an additional visualization and displaying the additional visualization.
- Although the AR system is described herein as providing computer-generated content visually, it should be noted that the AR system may provide computer-generated content via other types of sensory modalities. For example, the computer-generated content may be presented to a user via an auditory modality, a haptic modality, a somatosensory modality, an olfactory modality, or the like. Additionally, although the interactive AR system is described herein as providing an AR experience to the user, it should be noted that features of the interactive AR system may be utilized within a virtual reality context or a mixed reality context as well. In one embodiment, the interactive AR system may generate and display a visualization that contains real-world content (e.g., the user's surroundings or portions of the user) and computer-generated content (e.g., virtual objects). In another embodiment, the interactive AR system may have a transparent display and present a visualization of computer-generated content (e.g., virtual objects) on the transparent display such that the virtual objects appear within the user's real-world surroundings.
- In certain embodiments, the interactive AR system may also detect various voice commands that correspond to respective interactions with the computer-generated content in the AR environment. After detecting a voice command generated by the user, the interactive AR system may modify the AR environment based on the detected voice command. In some embodiments, a voice command may correspond to an interaction similar to an interaction associated with a gesture. In this way, the interactive AR system may provide the user with flexible control of the interactive AR system by facilitating the user's control of the interactive AR system in different ways.
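One way to realize this parity between modalities is to route detected gestures and detected voice commands to shared interaction handlers. The handler tables below are an illustrative sketch under that assumption, not the patent's implementation:

```python
def select_device(device_name):
    # Placeholder interaction: mark the named virtual device as selected.
    return f"selected {device_name}"

# Both input modalities dispatch to the same interaction handler, so a
# voice command can correspond to the same interaction as a gesture.
GESTURE_HANDLERS = {"gaze": select_device}
VOICE_HANDLERS = {"select": select_device}

def on_gesture(gesture, target):
    """Invoked when a gesture is detected in image data."""
    return GESTURE_HANDLERS[gesture](target)

def on_voice(command, target):
    """Invoked when a voice command is recognized."""
    return VOICE_HANDLERS[command](target)
```

Because both dispatch paths converge on one handler, adding a new interaction automatically makes it controllable by either modality, which is the flexibility the passage describes.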
- For ease of discussion, the description of the interactive AR system provided herein is made with reference to the design of an industrial system. However, it should be noted that the interactive AR system, as described herein, is not limited to such embodiments. The interactive AR system may be used in various other fields and applications. For example, the interactive AR system may be used to formulate possible archaeological site configurations, visualize and design construction models or sites of residential or commercial buildings, underground structures, or offshore wells, visualize and design commercial products or previews of commercial products, such as furniture, clothing, appliances, or vehicles, provide educational material or training, visualize local or remote geographic features, such as cities, forests, or oceans, facilitate social interaction, provide digital game play in a real world environment, provide medical information, such as X-ray, ultrasound, or magnetic resonance imaging (MRI) data, facilitate or visualize toy manufacturing or toy design, or the like.
- For an industrial system with many components, it may be beneficial to tailor the design of the industrial system to the real-world environment in which the industrial system may be located after assembly of the industrial system. For example, in an industrial system employing conveyors, such as a high-speed packaging line, the design of the industrial system may be constrained by the physical dimensions or the shape of the building in which the industrial system may be located. Additionally, each conveyor in the industrial system may have a plethora of parts, such as conveyor sections, movers, and the like, each having a different size and shape. Further, the design of the industrial system may be tailored to the needs of a user of the industrial system. For example, the specific configuration and shape of the conveyors of the industrial system may be based on other components within the industrial system that provide product to the conveyor to transport. As such, it may be beneficial to visualize and design an industrial system within the real-world environment in which the industrial system may be located before assembly of the industrial system, to optimize the design of the industrial system based on the physical constraints of the location and the needs of the user of the industrial system.
- With the foregoing in mind, the interactive AR system may facilitate the visualization and the design of an industrial system by a user within the physical space in which the industrial system may be located after assembly. For example, the interactive AR system may display a visualization associated with various virtual industrial automation devices, such as conveyor sections, movers, or the like, to the user in an AR environment, while the user is present in the real-world environment in which the actual industrial automation devices may be located after assembly. The interactive AR system may then detect a gesture performed by the user or a voice command issued by the user and modify the visualization based on the detected gesture or the detected voice command. As such, the user may visualize and model various configurations and designs of an industrial system and the components of the industrial system within the physical space in which the industrial system may be located after assembly.
- It should be noted that the gestures and voice commands provided by the user to control the positioning of the virtual objects in the AR environment may be more beneficial for the user if they mimic real world motions. That is, using gestures and movements to move a virtual object in the same manner a person may wish to move a real object may provide for a more desirable interface between the user and the AR environment. For example, the following gestures may be performed by the user to interact with one or more virtual industrial automation devices of an industrial system in an AR environment after the gestures are detected by the interactive AR system. It should be noted that the following gestures are exemplary and non-limiting and that other gestures may be performed that provide similar interactions with the virtual representations as described herein or similar gestures may be performed that provide different interactions with the virtual representations as described herein.
- In one embodiment, the interactive AR system may detect a gaze gesture (e.g., viewing an object for a certain amount of time) performed by the user to indicate a selection of a virtual industrial automation device or a request for the interactive AR system to display additional information associated with the virtual industrial automation device.
For example, after the interactive AR system has detected a gaze gesture performed by the user and directed at a virtual industrial automation device, the interactive AR system may modify a visualization associated with the AR environment to indicate that the virtual industrial automation device has been selected by a user. Such an indication may be represented by a coloring, a shading, a transition from dotted lines to solid lines, a highlighting, or the like, of the virtual industrial automation device in the visualization associated with the AR environment. After the virtual industrial automation device has been selected by the user in the visualization associated with the AR environment, the user may perform additional gestures as described herein to interact with the virtual industrial automation device further. - In another example, in a design environment, after the interactive AR system has detected a gaze gesture performed by the user and directed at a virtual compartment or category, the interactive AR system may modify a visualization associated with the AR environment to display the contents of the virtual compartment (e.g., one or more virtual industrial automation devices that are associated with the virtual compartment). For example, the virtual compartment may include various types of virtual conveyor sections that may be employed within the design of a conveyor system. After the interactive AR system has detected a gaze gesture performed by the user and directed at the virtual compartment, the interactive AR system may modify a visualization associated with the AR environment to display the various types of virtual conveyor sections (e.g., virtual conveyor sections having different shapes, such as a straight section, a U-shaped section, a C-shaped section, or the like).
- In another example, after the interactive AR system has detected a gaze gesture performed by the user and directed at a virtual industrial automation device, the interactive AR system may modify a visualization associated with the AR environment to display various types of data associated with the virtual industrial automation device, such as identification data, compatibility data with other virtual industrial automation devices, or the like.
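- The dwell-based gaze selection described above can be sketched as a per-frame timer. The threshold value, class name, and frame-update interface below are illustrative assumptions; the disclosure only specifies viewing an object for a certain amount of time.

```python
class GazeSelector:
    """Selects a virtual device once the user's gaze has rested on it long enough."""

    def __init__(self, dwell_threshold_s=2.0):  # 2 s is an assumed dwell time
        self.dwell_threshold_s = dwell_threshold_s
        self.current_target = None
        self.dwell_time_s = 0.0

    def update(self, gazed_device, dt_s):
        """Called once per frame with the device under the gaze ray (or None).

        Returns the device when the dwell threshold is reached, signaling the
        system to highlight the device or display its associated information.
        """
        if gazed_device != self.current_target:
            # gaze moved to a new target; restart the dwell timer
            self.current_target = gazed_device
            self.dwell_time_s = 0.0
            return None
        self.dwell_time_s += dt_s
        if gazed_device is not None and self.dwell_time_s >= self.dwell_threshold_s:
            return gazed_device
        return None
```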
- In another embodiment, after the user has selected a virtual industrial automation device, the interactive AR system may detect a grasping gesture performed by the user to map the virtual industrial automation device to the hand of the user that performed the grasping motion. In some embodiments, the interactive AR system may map a respective virtual industrial automation device to each hand of the user. That is, the interactive AR system may detect a first grasping gesture performed by a user's first hand and map a first virtual industrial automation device to the user's first hand, and the interactive AR system may detect a second grasping gesture performed by the user's second hand and map a second virtual industrial automation device to the user's second hand. After a virtual industrial automation device is mapped to a user's hand, the user may move the virtual industrial automation device within the visualization associated with the AR environment in real-time or approximately in real-time. That is, the interactive AR system may track the movement of the user's hand in the visualization in the AR environment and modify the visualization in response to the movement of the user's hand such that the virtual industrial automation device mapped to the user's hand appears in the same location as the user's hand in the visualization.
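- A minimal sketch of the hand-to-device mapping and per-frame tracking described above. The dict-based device representation and the tracker interface (hand ids mapped to positions) are assumptions for illustration.

```python
class HandMapper:
    """Maps grasped virtual devices to the user's hands and keeps each mapped
    device co-located with its hand as the tracker reports new positions."""

    def __init__(self):
        self.mapped = {}  # hand id ("left"/"right") -> device dict

    def grasp(self, hand, device):
        """Detected grasping gesture: map the device to the grasping hand."""
        self.mapped[hand] = device

    def release(self, hand):
        """Drop the mapping when the user lets go of the device."""
        self.mapped.pop(hand, None)

    def update(self, hand_positions):
        """Per-frame update: move each mapped device to its hand's position,
        so the device appears in the same location as the hand."""
        for hand, device in self.mapped.items():
            if hand in hand_positions:
                device["position"] = hand_positions[hand]
```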
- In yet another embodiment, after the interactive AR system has mapped a virtual industrial automation device to each of the user's hands, the interactive AR system may detect a snap gesture performed by the user that involves the user bringing both of the user's hands together. For example, the snap gesture may involve the user bringing both of the user's hands together while the user is grasping a respective virtual industrial automation device in each hand. After detecting the snap gesture performed by the user, the interactive AR system may modify a visualization associated with the AR environment to couple (e.g., snap) the two virtual industrial automation devices together at one or more predetermined connection points. For example, the interactive AR system may modify the visualization associated with the AR environment to provide a snapping motion between the two virtual industrial automation devices as the two virtual industrial automation devices are coupled together. In one embodiment, the interactive AR system may provide a snapping sound that may accompany the snapping motion displayed via the visualization associated with the AR environment. In another embodiment, the interactive AR system may determine a compatibility between the two virtual industrial automation devices before modifying the visualization associated with the AR environment to snap the devices together. For example, the interactive AR system may display an error message after determining that the two virtual industrial automation devices are not compatible with each other (e.g., the two virtual industrial automation devices may not couple together in the real-world).
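- The snap gesture's couple-with-compatibility-check behavior might look like the following. The distance threshold, device representation, and in-memory compatibility table are assumptions; in the disclosure, compatibility data is fetched from the computing system.

```python
SNAP_DISTANCE_M = 0.15  # assumed hand-separation threshold for a "hands together" snap

def try_snap(left_device, right_device, left_pos, right_pos, compatibility):
    """Attempt to couple the two grasped devices when the hands come together.

    `compatibility` maps a device type to the set of types it can couple with.
    Returns None while the hands are still apart, an error string when the
    devices are incompatible, and a joint device dict on a successful snap.
    """
    dist = sum((a - b) ** 2 for a, b in zip(left_pos, right_pos)) ** 0.5
    if dist > SNAP_DISTANCE_M:
        return None  # hands not close enough; no snap yet
    if right_device["type"] not in compatibility.get(left_device["type"], set()):
        return "error: devices are not compatible"  # message shown to the user
    # couple at a predetermined connection point (here: simply join the pair)
    return {"type": "joint", "sections": [left_device, right_device]}
```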
- After the interactive AR system has mapped a joint virtual industrial automation device (e.g., two or more virtual industrial automation devices that have been coupled together) to the hands of a user, the interactive AR system may detect a separate gesture performed by the user that involves the user pulling both of the user's hands apart. For example, the separate gesture may involve the user separating the user's hands while the user is grasping a different section of the joint virtual industrial automation device with each hand. After detecting the separate gesture performed by the user, the interactive AR system may decouple or separate the virtual industrial automation devices grasped by the user. That is, the interactive AR system may modify a visualization associated with the AR environment by displaying a motion that separates the virtual industrial automation devices from each other. In some embodiments, the interactive AR system may determine whether the user's hands are positioned about a line or a point of severance between respective virtual industrial automation devices that are coupled together in the joint virtual industrial automation device. For example, the interactive AR system may determine a position of each hand of the user along the joint virtual industrial automation device. The interactive AR system may then determine that a line or a point of severance associated with the joint virtual industrial automation device is located between the positions of the user's hands along the joint virtual industrial automation device. In some embodiments, after detecting that the user has performed a gaze gesture command at a joint virtual industrial automation device, the interactive AR system may determine one or more severance joints between the virtual industrial automation devices in the joint virtual industrial automation device and modify the visualization to display the one or more determined severance joints.
After determining that the line or the point of severance is not between the positions of each user's hands along the joint virtual industrial automation device, the interactive AR system may display an error message associated with the determination.
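- The severance check for the pull-apart gesture can be sketched as a one-dimensional test along the joint device's length. The coordinate convention and function name are illustrative assumptions.

```python
def find_severance_point(hand_a, hand_b, severance_points):
    """Return the point of severance lying strictly between the two grasp
    positions (1-D coordinates along the joint device), or None when no such
    point exists, in which case an error message would be displayed."""
    lo, hi = sorted((hand_a, hand_b))
    for point in severance_points:
        if lo < point < hi:
            return point
    return None
```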
- The interactive AR system may also detect a push gesture or a pull gesture performed by the user that involves the user placing one or both hands on a virtual surface of a virtual industrial automation device and the user moving in a certain direction within the visualization associated with the AR environment. For example, the push gesture may involve the user placing both hands on a virtual surface of a virtual industrial automation device and walking in a forward direction relative to the position of the user. Similarly, the pull gesture may involve the user placing both hands on a virtual surface of a virtual industrial automation device and walking in a backward direction relative to the position of the user. After the interactive AR system detects either the push gesture or the pull gesture performed by the user, the interactive AR system may move the virtual industrial automation device in a direction and at a speed that corresponds to the direction and the speed at which the user is moving. That is, the interactive AR system may modify a visualization associated with the AR environment by displaying a continuous movement of the virtual industrial automation device in the direction and the speed at which the user is walking in the AR environment. In this way, the interactive AR system may simulate a pushing action against the virtual surface of the virtual industrial automation device to move the virtual industrial automation device to another position and a pulling action from the virtual surface of the virtual industrial automation device to move the virtual industrial automation device to another position in the visualization associated with the AR environment.
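- A per-frame sketch of the push and pull behavior: while the user's hands rest on the virtual surface, the device tracks the user's walking velocity. The velocity-vector interface and function name are assumptions for illustration.

```python
def push_pull_step(device_pos, user_velocity, dt_s, hands_on_surface):
    """Advance one frame of a push or pull: while the user's hands are on the
    device's virtual surface, the device moves in the direction and at the
    speed the user is walking. Returns the device's new position."""
    if not hands_on_surface:
        return device_pos  # no contact; the device stays put
    return tuple(p + v * dt_s for p, v in zip(device_pos, user_velocity))
```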
- In another embodiment, the interactive AR system may detect a nudge gesture (e.g., movement of hands across some space within a certain amount of time) performed by the user that involves the user placing one hand on a virtual surface of a virtual industrial automation device and the user moving the user's arm or hand in a certain direction within the visualization associated with the AR environment. For example, the nudge gesture may involve the user placing a hand on a virtual surface of the virtual industrial automation device and a movement of the user's hand and/or arm in the forward direction while the user remains standing in place. After the interactive AR system detects the nudge gesture performed by the user, the interactive AR system may move the virtual industrial automation device in the forward direction and at the speed that the user's hand and/or arm are moving. That is, the interactive AR system may modify a visualization associated with the AR environment by displaying a movement of the virtual industrial automation device in the forward direction and at the speed that the user's hand and/or arm are moving. Although the description of the nudge gesture provided above is made in reference to moving the virtual industrial automation device in the forward direction, it should be noted that in other embodiments, the nudge gesture may move the virtual industrial automation device in a left direction, a right direction, a backward direction, a downward direction, an upward direction, or any other suitable direction based on the direction the user's arm and/or hand is moving.
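- Since the nudge is defined by hand movement "across some space within a certain amount of time," one way to classify it is from displacement over a short sample window. Both thresholds below are assumed values, not from the disclosure.

```python
NUDGE_MIN_SPEED = 0.4      # assumed minimum hand speed, m/s
NUDGE_MAX_DURATION = 0.5   # assumed maximum gesture duration, s

def detect_nudge(hand_path):
    """Classify a nudge from a list of (t, (x, y, z)) hand samples.

    Returns the displacement vector (giving the nudge direction) when the hand
    moved fast enough over a short window, else None.
    """
    (t0, p0), (t1, p1) = hand_path[0], hand_path[-1]
    duration = t1 - t0
    if duration <= 0 or duration > NUDGE_MAX_DURATION:
        return None
    disp = tuple(b - a for a, b in zip(p0, p1))
    speed = sum(d * d for d in disp) ** 0.5 / duration
    if speed < NUDGE_MIN_SPEED:
        return None
    return disp
```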
- The interactive AR system may also detect a rotate gesture performed by the user that involves the user placing a hand about an axis of rotation of the virtual industrial automation device and twisting the wrist of the user. After the interactive AR system detects the rotate gesture performed by the user, the interactive AR system may rotate the virtual industrial automation device about the axis of rotation of the virtual industrial automation device at a speed and an angle corresponding to the speed and the angle of rotation of the user's wrist. That is, the interactive AR system may modify a visualization associated with an AR environment to display a rotating motion of the virtual industrial automation device in the hand of the user.
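- The rotate gesture reduces to applying the wrist's per-frame change in angle to the device's orientation about its axis of rotation; the degree-based representation below is an assumption.

```python
def rotate_step(device_angle_deg, wrist_delta_deg):
    """Rotate the grasped device about its axis of rotation by the change in
    the user's wrist angle this frame, keeping the angle in [0, 360)."""
    return (device_angle_deg + wrist_delta_deg) % 360.0
```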
- As such, the interactive AR system may detect various gestures that a user may perform to assist in the design of an industrial system in an AR environment. The gestures may correspond to respective interactions with one or more virtual industrial automation devices that may be displayed in a visualization associated with the AR environment. In this way, the user may model various configurations of virtual industrial automation devices that a user may include in the design of the industrial system without having to physically move and interact with the actual counterpart devices in the real world. Additionally, by performing one or more of the various gestures described herein, a user of the interactive AR system may interact with the virtual industrial automation devices in a visualization associated with the AR environment in a natural and intuitive manner. That is, the interactive AR system may facilitate a user's interaction with the virtual industrial automation devices in the AR environment similar to how the user would interact with their counterparts in the real world. Additionally, the movement of the virtual industrial automation devices in the AR environment, the interactions of the virtual industrial automation devices with other virtual objects in the AR environment, the interactions of the virtual industrial automation devices with the physical surroundings of the user, or the like, may obey the physical laws of nature and simulate real-world, physical behaviors and actions. At the same time, the user's interactions with the virtual industrial automation devices may ignore one or more physical laws. For example, the user may move a virtual industrial automation device in the AR environment as though the virtual industrial automation device is weightless and frictionless.
In this way, the interactive AR system may facilitate a user's interaction with and movement of a virtual industrial automation device at will and without any encumbrances but may provide a physical simulation of how the physical counterpart device to the virtual device would behave in the real-world.
- Additionally, in some embodiments, the interactive AR system may detect voice commands issued by the user to provide similar interactions or additional interactions with a virtual industrial automation device in the AR environment or with the AR environment itself. In some embodiments, for one or more of the gesture-based commands described herein, a corresponding voice command may be issued by the user to perform a similar interaction with one or more virtual industrial automation devices. For example, the user may perform a gaze gesture at a virtual industrial automation device in the AR environment and may say the voice command "grasp." After the interactive AR system detects the gaze gesture and the voice command, the interactive AR system may map the virtual industrial automation device to a hand of the user. In another example, after the interactive AR system maps a joint virtual industrial automation device to a user's hands, the user may say the voice command "separate" to cause the joint virtual industrial automation device to decouple at a point or line of severance between respective virtual industrial automation devices.
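- One way to route recognized voice commands to the same handlers as their gesture equivalents is a small dispatch table. The phrases "grasp" and "separate" come from the text above; the dispatcher structure and handler signatures are assumptions.

```python
def make_voice_dispatcher(handlers):
    """Return a dispatcher that routes a recognized phrase to the handler
    implementing the equivalent gesture interaction; unknown phrases are
    ignored rather than treated as errors."""
    def dispatch(phrase, *args):
        handler = handlers.get(phrase.strip().lower())
        if handler is None:
            return None  # unrecognized command
        return handler(*args)
    return dispatch
```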
- In some embodiments, a user may wish to design an industrial system from a remote location away from the physical location that the industrial system may be located after assembly. For example, a user may design an industrial system from an office or in another country. In such embodiments, the interactive AR system may provide an operational mode (e.g., dynamic rotation mode) that facilitates the design of an industrial system in a virtual environment. That is, the interactive AR system may display a visualization associated with a virtual environment that corresponds to a scaled version of the physical location that the industrial system may be located after assembly. The interactive AR system may then facilitate the user's navigation of the visualization associated with the virtual environment by detecting one or more gesture and/or voice commands that correspond to respective navigational tools that the user may employ in the visualization associated with the virtual environment.
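- Remote design against a scaled model amounts to a coordinate mapping between the physical site and the virtual environment. The 1:20 scale and both function names below are illustrative assumptions.

```python
SCALE = 0.05  # assumed 1:20 scale for the remote, scaled virtual environment

def site_to_model(site_point_m, model_origin=(0.0, 0.0, 0.0), scale=SCALE):
    """Map a coordinate in the physical location (metres) into the scaled
    virtual model, so devices placed remotely correspond to real positions."""
    return tuple(o + p * scale for o, p in zip(model_origin, site_point_m))

def model_to_site(model_point, model_origin=(0.0, 0.0, 0.0), scale=SCALE):
    """Inverse mapping, used when exporting the finished design to the site."""
    return tuple((p - o) / scale for o, p in zip(model_origin, model_point))
```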
- Additionally, the interactive AR system may facilitate a user's interaction with various virtual industrial automation devices in the visualization associated with the virtual environment by detecting one or more gestures or voice commands as described herein. For example, the interactive AR system may detect a grasp gesture by a user and map a virtual industrial automation device to the user's hand. In another example, the user may issue a voice command to the interactive AR system to "turn right." After the interactive AR system detects the voice command, the interactive AR system may modify a visualization associated with the virtual environment to display a view similar to the view that the user would perceive in the real-world location after the user had turned right from the user's starting position. As such, the interactive AR system may provide the user with a variety of design tools that allow a user to flexibly and conveniently design an industrial system.
- In some embodiments, the interactive AR system may provide the user with an operational mode (e.g., hover mode) that provides information associated with one or more industrial automation devices in an existing industrial system. That is, the user may be physically located within an industrial system that has already been designed and assembled. The user may perform a gaze gesture or a voice command at an industrial automation device in the industrial system. After the interactive AR system detects the gaze gesture and/or the voice command by the user, the interactive AR system may display identification information, maintenance information, operational information, performance information, or the like, associated with the industrial automation device in a visualization associated with an AR environment. For example, the information associated with the industrial automation device may be superimposed upon or adjacent to the real-world representation of the industrial automation device in the visualization associated with the AR environment.
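- The hover-mode display can be sketched as assembling an overlay from whatever information categories are on record for the gazed-at device. The record shape and field names are assumptions for illustration.

```python
HOVER_FIELDS = ("identification", "maintenance", "operational", "performance")

def hover_overlay(device_record, fields=HOVER_FIELDS):
    """Collect the available information categories for a real industrial
    automation device, to be superimposed upon or adjacent to the device in
    the visualization. Missing categories are simply omitted."""
    return {field: device_record[field] for field in fields if field in device_record}
```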
- With this in mind, the presently disclosed embodiments include an interactive AR system that may be used to design an industrial system in an AR environment or provide information associated with industrial automation devices in an existing industrial system. In some embodiments, the interactive AR system may be equipped with one or more image devices that may detect various gestures performed by a user to interact with virtual representations of parts of an industrial system displayed within an AR environment. Additionally, the interactive AR system may be equipped with one or more audio devices that may detect various commands issued by a user to interact with virtual representations of parts of an industrial system within an AR environment or with the AR environment itself. Additional details regarding the interactive AR system and various systems and methods for displaying or modifying a visualization associated with the AR environment are described in more detail with reference to
FIGS. 1-21. - By way of introduction,
FIG. 1 is a block diagram of an interactive AR system 100 that may be utilized by a user 104 to display a visualization 114 that includes a virtual representation of an industrial automation device 102 (e.g., virtual industrial automation device) in an AR environment. In the illustrated embodiment, the augmented reality (AR) environment may refer to a visualization 114 of a combination of computer-generated and real-world content displayed to a user 104 via a head mounted device 106 of the interactive AR system 100. Although a head mounted device 106 is employed within the illustrated embodiment of the interactive AR system 100, it should be noted that, in other embodiments, other suitable types of displays may be employed by the interactive AR system 100. For example, the interactive AR system 100 may employ smart glasses, a virtual retinal display, one or more contact lenses, a computer, a mobile device, or any other suitable electronic display device for displaying visualizations to a user. In any case, the head mounted device 106 may display a visualization 114 that includes a virtual industrial automation device 102 to the user 104. The visualization 114 may superimpose computer-generated content (e.g., images or sounds) over real-world content (e.g., images or sounds) of the user's environment in real-time. Additional details with regard to the head mounted device 106 are discussed below with reference to FIG. 2. - In the illustrated embodiment, the interactive AR system 100 may display a
visualization 114 via the head mounted device 106 that includes a virtual representation of a motor drive 102. However, it should be noted that the illustrated embodiment is intended to be non-limiting and that the interactive AR system 100 may display a visualization 114 via the head mounted device 106 that may include other virtual industrial automation devices, or parts thereof, that may be employed within an industrial system. For example, the industrial automation devices may include controllers, input/output (I/O) modules, motor control centers, motors, valves, actuators, temperature elements, pressure sensors, human machine interfaces (HMIs), operator interfaces, contactors, starters, sensors, drives, relays, protection devices, switchgear, compressors, network switches (e.g., Ethernet switches, modular-managed, fixed-managed, service-router, industrial, unmanaged, etc.), data centers, conveyor sections, movers, and the like. - In certain embodiments, the head mounted
device 106 of the interactive AR system 100 may detect a gesture command performed by a user 104. For example, the interactive AR system 100 may detect a gaze gesture performed by the user 104 directed at a virtual industrial automation device 102 to request information or data associated with the industrial automation device 102. The head mounted device 106 may analyze characteristics of image data associated with the user's biomechanical movements to determine if the image data matches a characteristic of a gesture command stored, learned, or otherwise interpretable by the head mounted device 106 of the interactive AR system 100. Image data associated with the user's biomechanical movements may include the motion, or lack thereof, of the user's hands, wrists, arms, fingers, or any other suitable body part to distinguish one gesture command from another gesture command. In some embodiments, the head mounted device 106 may acquire the image data and send the image data, via the network 108, to a computing system 110 to analyze the characteristics of the image data to determine if the image data matches a characteristic of a gesture command stored, learned, or otherwise interpretable by the computing system 110. - In some embodiments, the head mounted
device 106 may be communicatively coupled to one or more motion sensors attached to a user's body. For example, one or more motion sensors may be disposed on the user's hands, wrists, arms, fingers, legs, feet, torso, or any other suitable body part and provide motion data (e.g., body motion capture data) to the head mounted device 106. In one embodiment, based on the received motion data associated with the user 104, the head mounted device 106 may analyze the motion data associated with a respective body part of the user 104 and determine a gesture command stored, learned, or otherwise interpretable by the head mounted device 106. In another embodiment, the head mounted device 106 may analyze the motion data associated with the respective body part of the user 104 and determine a virtual force (e.g., a virtual speed, virtual displacement, or virtual direction) associated with a gesture command performed by the user. For example, the head mounted device 106 may determine a speed and an angle associated with the movement of the user's hand or foot after the user 104 performs a push gesture command against a virtual industrial automation device. The head mounted device 106 may then modify a visualization 114 to display an animation of a movement of the virtual industrial automation device based on the determined speed and angle associated with the movement of the user's hand or foot. - In the illustrated embodiment, the
computing system 110 may be communicatively coupled to a database 112 that may store a list of gesture commands that are learned or otherwise interpretable by the head mounted device 106 and/or the computing system 110. The database 112 may also store a list of user profiles that include gesture commands that may correspond to specific users 104 that are learned or otherwise interpretable by the head mounted device 106 and/or the computing system 110. For example, the head mounted device 106 and/or the computing system 110 may retrieve a user profile that includes a list of gesture commands that corresponds to the specific user 104 utilizing the head mounted device 106. The head mounted device 106 and/or the computing system 110 may analyze characteristics of the image data to determine if the image data matches a characteristic of the received gesture commands of the user 104. In some embodiments, if a threshold of one or more characteristics for a gesture command or a verbal command matches a stored, learned, or otherwise interpretable gesture command, the head mounted device 106 and/or the computing system 110 may determine that a gesture command has been performed by the user 104 of the head mounted device 106 based on the image data. - It should be noted that any suitable network may be employed in the embodiments described herein. For instance, the
network 108 may include any wired or wireless network that may be implemented as a local area network (LAN), a wide area network (WAN), and the like. Indeed, other industrial communication network protocols, such as EtherNet/IP, ControlNet, DeviceNet, and the like, may also be used. In any case, the network 108 may permit the exchange of data in accordance with a protocol. - After detecting a gesture performed by the
user 104, the head mounted device 106 of the interactive AR system may request information or data associated with a virtual industrial automation device 102 from the computing system 110 communicatively coupled to the head mounted device 106 via the network 108. The computing system 110 may then send a request to a database 112 communicatively coupled to the computing system 110 for the information or the data associated with the industrial automation device 102. In some embodiments, the computing system 110 and the database 112 may be part of the same device. Additionally, it should be noted that the computing system 110 may be any suitable computing device that includes communication abilities, processing abilities, and the like. For example, the computing system 110 may be any general computing device that may communicate information or data to the head mounted device 106 via the network 108. - The type of information or data associated with the
industrial automation device 102 and requested by the head mounted device 106 may be based on the gesture performed by the user 104 and detected by the head mounted device 106. In one embodiment, the head mounted device 106 may detect a gesture command (e.g., a gaze gesture) performed by a user 104 to select a virtual industrial automation device 102 to further interact with (e.g., move, rotate, scale up, or scale down) in a visualization 114 associated with the AR environment. The head mounted device 106 may send a request to the computing system 110 for specification data associated with the virtual industrial automation device 102. For example, the specification data may include a virtual physics dataset associated with the industrial automation device 102. For example, the virtual physics dataset may include a virtual weight of the industrial automation device 102, virtual dimensions of the industrial automation device 102, and the like. - After receiving the virtual physics dataset associated with the
industrial automation device 102, the head mounted device 106 may simulate the real-world physical characteristics of the industrial automation device 102 in the visualization 114 associated with the AR environment via the virtual industrial automation device 102. The specification data may also include other visual data associated with the industrial automation device 102, such as possible color schemes, or the like. As such, the head mounted device 106 may receive specification data associated with the virtual industrial automation device 102 from the computing system 110. Based on the received specification data, the head mounted device 106 may generate and display a virtual industrial automation device 102 in the visualization 114 associated with the AR environment to the user 104. - In another embodiment, the
user 104 may utilize the head mounted device 106 to obtain operational data or maintenance data regarding the industrial system. After the head mounted device 106 of the interactive AR system 100 detects a gesture command (e.g., a gaze gesture command) performed by the user 104 to select an actual industrial automation device in the industrial system, the head mounted device 106 may request various types of data (e.g., identification data, operational data, or maintenance data) associated with the industrial automation device 102 from the computing system 110 to display to a user in a visualization 114 associated with the AR environment. For example, the identification data may include a product name, a product type, a vendor name, a cost, a description of the function of the industrial automation device, or the like. The operational data may include data gathered by one or more sensors in the industrial system that measure one or more operational parameters associated with the industrial automation device 102. The maintenance data may include data associated with maintenance records and/or data logs of the industrial automation device 102. As such, the head mounted device 106 may receive various types of data associated with a real-world industrial automation device in an industrial system from the computing system 110 and display the data to the user 104 in a visualization 114 associated with the AR environment. - In another embodiment, the
user 104 may perform a gesture command (e.g., a gaze gesture command) to select a virtual industrial automation device 102 when designing an industrial system to obtain identification information associated with the industrial automation device 102. For example, after the head mounted device 106 of the interactive AR system 100 detects a gaze gesture performed by the user 104 and directed at the virtual representation of the industrial automation device 102, the head mounted device 106 may request identification data associated with the industrial automation device 102 from the computing system 110 to display to the user 104 in the visualization
114 associated with the AR environment. For example, the identification data may include a product name, a product type, a vendor name, a cost, a description of the function of the industrial automation device, or the like. As such, the head mounted device 106 may receive identification data associated with a virtual industrial automation device 102 from the computing system 110 and display the identification data to the user 104 in the visualization 114 associated with the AR environment. - In another embodiment, the
user 104 may perform a snap gesture command to couple a first virtual industrial automation device and a second virtual industrial automation device together. After the head mounted device 106 of the interactive AR system 100 detects the snap gesture by the user 104, the head mounted device 106 may request compatibility data associated with the first virtual industrial automation device and the second virtual industrial automation device from the computing system 110. For example, the compatibility data may include a first list of devices that are compatible with the first industrial automation device and a second list of devices that are compatible with the second industrial automation device. As such, the head mounted device 106 may receive compatibility data associated with the industrial automation device 102 from the computing system 110. Based on the received compatibility data, the head mounted device 106 may then determine whether the first industrial automation device and the second industrial automation device are compatible and display a notification associated with the determination. - As described above, the head mounted
device 106 may request information or data associated with an industrial automation device 102 from the computing system 110 that is communicatively coupled to the database 112. The database 112 may be organized to include a list of various industrial automation devices that may be employed in the design of an industrial system. In some embodiments, the database 112 may index the data associated with the industrial automation device 102 based on an identifier associated with each industrial automation device 102. In such embodiments, the head mounted device 106 may identify an identifier of an industrial automation device 102 based on the gesture command or voice command performed by the user 104 and data associated with the industrial automation device 102. The head mounted device 106 may then send a request with the identifier associated with the industrial automation device 102 to the computing system 110. The computing system 110 may then extract data associated with the industrial automation device 102 based on the identifier and/or the type of request and send the extracted data to the head mounted device 106. After the head mounted device 106 receives the data associated with the industrial automation device 102, the head mounted device 106 may generate and display a visualization 114 that includes a visual representation of the data associated with the industrial automation device 102. - To perform some of the actions set forth above, the head mounted
device 106 may include certain components to facilitate these actions. FIG. 2 is a block diagram of exemplary components within the head mounted device 106. For example, the head mounted device 106 may include one or more cameras 202 and one or more microphones 204. It should be understood that any suitable image-receiving device may be used in place of, or in addition to, the cameras 202; for example, a singular camera 202 may be incorporated into the head mounted device 106. It also should be understood that any suitable sound-receiving device may be used in place of, or in addition to, the microphones 204; for example, a combined speaker and microphone device, or a singular microphone 204, may be incorporated into the head mounted device 106. - In some embodiments, the head mounted
device 106 may include one or more sensors for detecting the movements of the user 104, the biometrics of the user 104, the surroundings of the user 104, or the like. For example, the head mounted device 106 may include an infrared sensor, a thermal sensor, a range sensor (e.g., a range camera), a smell sensor (e.g., an electronic nose), or any other suitable sensors for detecting characteristics of the user 104 or the surroundings of the user 104. - The head mounted
device 106 may also include processing circuitry 206 including a processor 208, a memory 210, a communication component 212, input/output (I/O) ports 214, and the like. The communication component 212 may be a wireless or a wired communication component that may facilitate communication between the head mounted device 106 and the computing system 110, the database 112, and the like via the network 108. These wired or wireless communication protocols may include any suitable communication protocol, including Wi-Fi, mobile telecommunications technology (e.g., 2G, 3G, 4G, LTE), Bluetooth®, near-field communications technology, and the like. The communication component 212 may include a network interface to enable communication via various protocols, such as EtherNet/IP®, ControlNet®, DeviceNet®, or any other industrial communication network protocol. - The
processor 208 of the head mounted device 106 may be any suitable type of computer processor or microprocessor capable of executing computer-executable code, including but not limited to one or more field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), programmable logic devices (PLD), programmable logic arrays (PLA), and the like. The processor 208 may, in some embodiments, include multiple processors. The memory 210 may include any suitable articles of manufacture that serve as media to store processor-executable code, data, and the like. The memory 210 may store non-transitory processor-executable code used by the processor 208 to perform the presently disclosed techniques. - Generally, the head mounted
device 106 may receive image data or audio data related to a user 104 via one or more image sensors (e.g., cameras 202) or one or more audio sensors (e.g., microphones 204), respectively, communicatively coupled to one or more of the I/O ports 214. Upon receiving image data or audio data, the head mounted device 106, via the processor 208, may interpret the image data or the audio data to determine commands or actions for the head mounted device 106 to perform in response to the determined commands or actions. In some embodiments, the detected image data or audio data may be forwarded to the computing system 110 for interpretation. The computing system 110 may analyze characteristics of the image data or the audio data to determine whether the image data or the audio data matches the characteristics of a gesture command or verbal command, respectively, stored, learned, or otherwise interpretable by the computing system 110. - As mentioned above, the
database 112 may store a list of gesture commands or voice commands that are stored, learned, or otherwise interpretable by the computing system 110. For example, the list of gesture or voice commands may include a snap gesture command, a separate gesture command, a push gesture command, a pull gesture command, a rotate gesture command, a nudge gesture command, a lift gesture command, a let-go gesture command, a grasp gesture command, a gaze gesture command, a scale up gesture command, a scale down gesture command, or the like. In another embodiment, instead of forwarding the command to the computing system 110, the head mounted device 106 may analyze characteristics of the image data or the audio data to determine whether the image data or the audio data matches the characteristics of a gesture command or verbal command, respectively, stored, learned, or otherwise interpretable by the head mounted device 106. In any case, the head mounted device 106 or the computing system 110 may analyze characteristics of the user's movements in the image data, such as the motion of the user's hands, wrists, arms, fingers, or any other suitable body part, to distinguish one gesture command from another gesture command. - Additionally, the head mounted
device 106 or the computing system 110 may analyze characteristics of the audio data, such as frequency (e.g., pitch), amplitude (e.g., loudness), or any other suitable characteristic used to distinguish one verbal command from another verbal command. If a threshold of one or more characteristics for a gesture command or a verbal command matches a stored, learned, or otherwise interpretable command, the head mounted device 106 may determine a command to be performed by the head mounted device 106 based on the image data or the audio data. - As discussed above, the head mounted
device 106 may be communicatively coupled to the network 108, which may include an Internet connection or another suitable wireless or wired communicative coupling, to expand its interpretation and functional capabilities; in some embodiments, however, the head mounted device 106 may not rely on such a communicative coupling. In other words, the head mounted device 106 may have particular capabilities that function without an Internet, wireless, or wired connection. For example, the head mounted device 106 may perform local command interpretation without an Internet or wireless connection. - The head mounted
device 106 may also include a video output 216. The video output 216 may be any suitable image-transmitting component, such as a display. The head mounted device 106 may display a visualization 114 associated with the AR environment that combines computer-generated content, such as a virtual industrial automation device 102, with real-world content, such as image data associated with the user's physical surroundings. - With the foregoing in mind,
FIG. 3 illustrates a perspective view 300 of a user 104 utilizing a head mounted device 106 to perceive the visualization 114 associated with an AR environment. By way of example, in the illustrated embodiment, the user 104 may design a conveyor system for an industrial system. The head mounted device 106 of the interactive AR system 100 may generate and display to the user 104 a visualization 114 with virtual representations of various compartments 302, 304 (e.g., virtual compartments) that each correspond to a type or a category of industrial automation device that may be employed by a conveyor system. For example, the head mounted device 106 may acquire image data of the user and the user's physical surroundings (e.g., the real world) and generate and display a visualization 114 that superimposes the virtual compartments 302, 304 onto the user's physical surroundings. The virtual compartments 302, 304 may resemble real-world objects, such as boxes, storage bins, lockers, or the like, or may have a design that is not directly tied to a real-world object. - As described above, the
virtual compartments 302, 304 may each correspond to a type or a category of industrial automation device. For example, compartment 302 may correspond to different types of conveyor sections that may be employed within a conveyor system. In another example, compartment 304 may correspond to different types of movers that may be employed within a conveyor system. The user 104 may interact with the compartments 302, 304 to view the industrial automation devices associated with each of the compartments. - In some embodiments, a gaze gesture command may be detected by the head mounted
device 106 by tracking the movement of the user's eyes with respect to a virtual surface of a virtual object (e.g., a virtual compartment or a virtual industrial automation device). For example, the head mounted device 106 may continuously or intermittently acquire image data of the user's eyes and track a location on the display that the user's eyes are focusing on (e.g., a virtual object). In other embodiments, a visual indicator (e.g., a dot) displayed in the visualization 114 may correspond to a cursor that the user 104 may use to focus a gesture command within the visualization 114 associated with the AR environment. For example, the user may change the position of the dot by moving the user's head left, right, up, or down. Once the cursor is positioned on a certain object for greater than a threshold period of time (e.g., greater than 3 seconds, 4 seconds, 5 seconds, or the like), the head mounted device 106 may detect that the user has performed a gaze gesture command to select the object within the visualization 114 associated with the AR environment. - In any case, after the head mounted
device 106 has detected that the user 104 has performed a gaze gesture command directed at a virtual compartment, the head mounted device 106 may modify the visualization 114 to display one or more virtual industrial automation devices corresponding to the virtual compartment selected by the user 104 via the gaze gesture command. To help illustrate, FIG. 4 illustrates a perspective view 400 of the user 104 utilizing the head mounted device 106 to perceive a modified visualization 114 associated with the AR environment illustrated in FIG. 3. In the illustrated embodiment, the head mounted device 106 of the interactive AR system 100 may display one or more types of virtual conveyor sections. For example, virtual conveyor section 402 may have a curved shape, virtual conveyor section 404 may have a straight shape, and virtual conveyor section 406 may have a straight shape and a curved shape. - Although
FIGS. 3 and 4 illustrate the head mounted device 106 displaying two virtual categorical compartments and a few types of virtual conveyor sections, the head mounted device 106 may display any number of virtual compartments and/or virtual industrial automation devices in the visualization 114 associated with the AR environment. For example, the head mounted device 106 may display one, two, five, ten, twenty, fifty, one hundred, or any other suitable number of virtual compartments and/or virtual industrial automation devices in the visualization 114 associated with the AR environment. In some embodiments, the user 104 may look or tilt the user's head to the left, right, up, down, or the like, and the head mounted device 106 may display additional virtual compartments and/or virtual industrial automation devices in the visualization 114 associated with the AR environment accordingly. In one embodiment, the user 104 may resize the virtual compartments and/or virtual industrial automation devices in the visualization 114 such that the available virtual compartments and/or virtual industrial automation devices for display to the user 104 may accommodate the user's visual needs. For example, the user 104 may perform a gesture command (e.g., scale down) or a voice command (e.g., say "scale down") to decrease the size of the virtual compartments and/or virtual industrial automation devices in the visualization 114 such that the user 104 may be able to view and select from tens of options, hundreds of options, thousands of options, or the like, without the user 104 having to look in a different direction or turn the user's head in a different direction to view additional virtual compartments and/or virtual industrial automation devices in the visualization 114. - In another embodiment, the head mounted
device 106 may modify the visualization 114 associated with the AR environment and display additional and/or different virtual compartments and/or virtual industrial automation devices as the user 104 moves through the user's surroundings. For example, the head mounted device 106 may display a first subset of virtual industrial automation devices in the visualization 114 associated with the AR environment to the user 104 while the user is in a first position in the user's surroundings. After the head mounted device 106 determines that the user 104 has moved to a second position (e.g., greater than a certain threshold associated with displaying one or more additional virtual industrial automation devices), the head mounted device 106 may display a second subset of virtual industrial automation devices in the visualization 114 associated with the AR environment. In another embodiment, the first subset of the virtual industrial automation devices and the second subset of virtual industrial automation devices may be displayed to the user 104 while the user is in the first position, but the second subset of virtual industrial automation devices may be displayed in a proportionally smaller size with respect to the first subset of virtual industrial automation devices to simulate that the second subset is further away from the user 104 in the AR environment. As the user 104 moves toward the second subset of the virtual industrial automation devices in the AR environment, the head mounted device 106 may modify the visualization 114 to display the second subset of the virtual industrial automation devices increasing in size as the user walks toward the second subset and the first subset of the virtual industrial automation devices decreasing in size as the user 104 walks away from the first subset.
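The proportional resizing just described can be sketched as a simple inverse-distance rule. This is an illustrative, hypothetical sketch only; the function name, the one-dimensional positions, and the constants are assumptions and not part of the described embodiments:

```python
def apparent_scale(user_pos, device_pos, reference_distance=1.0, min_scale=0.05):
    """Scale a virtual device inversely with its simulated distance from the
    user 104, so that devices 'further away' in the AR environment render
    proportionally smaller and grow as the user walks toward them."""
    # Clamp the distance so a device at the user's own position does not
    # divide by zero; scale is capped at full size (1.0).
    distance = max(abs(device_pos - user_pos), reference_distance * min_scale)
    return min(reference_distance / distance, 1.0)
```

Under this rule, a second subset of devices placed at twice the reference distance would render at half size, and its scale would increase continuously as the user's position approaches the devices.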
That is, the head mounted device 106 may modify the visualization 114 to simulate the perspective of the user 104 walking away from or toward various real-world industrial automation devices in the user's surroundings. - In other embodiments, the number of
virtual compartments and/or virtual industrial automation devices that may be displayed to the user 104 in the visualization 114 may be limited by the area of the display of the visualization 114. For example, the number of virtual compartments and/or virtual industrial automation devices that may be displayed by the head mounted device 106 in the visualization 114 of the AR environment may be more than two, more than five, more than ten, more than twenty, or the like, based on the size of the virtual compartments and/or virtual industrial automation devices in the visualization 114 and the display area of the visualization 114. - In some embodiments, the head mounted
device 106 may display a subset of the virtual compartments and/or virtual industrial automation devices available for display in the visualization 114. For example, the head mounted device 106 may display a visualization 114 with four of twenty virtual compartments and/or virtual industrial automation devices, and the user 104 may instruct the head mounted device 106 to modify the visualization 114 to display the next four virtual compartments and/or virtual industrial automation devices. In some embodiments, the head mounted device 106 may detect a swiping gesture command (e.g., a hand swiping across the virtual compartments 302, 304) performed by the user 104 to display the next subset of virtual compartments 302, 304. In other embodiments, the head mounted device 106 may detect a "next" voice command issued by the user to display the next subset of virtual compartments 302, 304. In some embodiments, the number of virtual compartments 302, 304 available for display in the visualization 114 may not be limited by the display area of the visualization 114. That is, the head mounted device 106 may display additional virtual compartments beyond those currently shown. - Referring back to
FIG. 4, the head mounted device 106 may also display a virtual representation of the conveyor system 408 (e.g., a virtual conveyor system) that has been designed or partially designed by the user 104. For example, the virtual conveyor system 408 may include one or more conveyor shapes that have been placed by the user in a specific configuration in the visualization 114 associated with the AR environment. The user 104 may be able to determine a desired shape of conveyor section 402, 404, 406 to employ in the design of the conveyor system 408 based on the display of the virtual conveyor system 408. For example, the user 104 may be able to determine that the specific configuration of the virtual conveyor system 408 is missing a curved conveyor section. The user may then perform a gaze gesture command directed at one of the virtual industrial automation devices 404 corresponding to the curved conveyor section. After detecting the gaze gesture command performed by the user 104, the head mounted device 106 may generate and display a modified visualization 114 that displays the user's selection of the virtual industrial automation device 404. - In some embodiments, based on the configuration of the
virtual conveyor system 408, the head mounted device 106 may modify the visualization 114 to display a subset of available virtual industrial automation devices that may be utilized with the configuration of the virtual conveyor system 408. For example, as the user 104 places virtual industrial automation devices to form the virtual conveyor system 408, the head mounted device 106 may display a smaller subset of virtual industrial automation devices that correspond to virtual industrial automation devices that may couple to the user-placed virtual industrial automation devices in the virtual conveyor system 408. In this way, the head mounted device 106 may predict one or more virtual industrial automation devices that the user 104 may desire to select and place next based on one or more previous selections and placements of virtual industrial automation devices performed by the user 104. - Further, in one embodiment, the head mounted
device 106 may determine that the user may desire another type of industrial automation device after the selection and placement of a first type of virtual industrial automation device. The head mounted device 106 may display one or more virtual compartments that correspond to the other types of industrial automation devices that the user 104 may desire after selecting and placing the first type of virtual industrial automation device. For example, the head mounted device 106 may determine that the user 104 has finished designing the track of a virtual conveyor system. The head mounted device 106 may modify the visualization 114 to display a virtual compartment associated with various types of virtual movers that may be coupled or placed on the track of the virtual conveyor system. In some embodiments, the head mounted device 106 may modify the visualization 114 to display both virtual industrial automation devices and virtual compartments associated with predicted selections by the user 104 after the user 104 has selected and placed a virtual industrial automation device. - In some embodiments, as shown in
FIG. 4, virtual industrial automation devices that have not been selected by the user 104 in the visualization 114 associated with the AR environment may be illustrated in dotted lines. After the head mounted device 106 detects a gesture command performed by the user 104 to select a virtual industrial automation device, the head mounted device 106 may modify the visualization 114 to display the appearance of the selected virtual industrial automation device in solid lines. It should be noted that the described transition between dotted lines and solid lines for displaying unselected and selected virtual industrial automation devices, respectively, is exemplary and non-limiting. Additionally, other embodiments may include other transitions in appearance from unselected to selected virtual industrial automation devices, such as a highlighting, a color change, a shading, or any other suitable visual change in appearance. -
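The transition in appearance between unselected and selected virtual industrial automation devices may be sketched as a small lookup. This is an illustrative sketch only; the style names and the optional override parameter are assumptions rather than part of the described embodiments:

```python
# Dotted outlines for unselected devices and solid outlines for selected
# devices, per the description above; an alternative treatment such as
# "highlight" may stand in for the solid style when supplied.
STYLES = {False: "dotted", True: "solid"}

def render_style(selected, alternative=None):
    """Return the outline style for a virtual industrial automation device."""
    if selected and alternative:
        return alternative
    return STYLES[selected]
```

Any comparable mapping (highlighting, color change, shading) could be substituted without changing the surrounding logic. -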
FIG. 5 illustrates a perspective view 500 of the user 104 utilizing the head mounted device 106 to perceive a modified visualization 114 associated with the AR environment illustrated in FIGS. 3 and 4. After detecting a gaze gesture command performed by the user 104, the head mounted device 106 may determine that the user 104 selected the first virtual conveyor section 402, as shown in FIG. 4. The head mounted device 106 may then generate and display a modified visualization 114 corresponding to the user's selection of the first virtual conveyor section 402. For example, the head mounted device 106 may display the first virtual conveyor section 402 in solid lines in the modified visualization 114 to indicate that the first virtual conveyor section 402 has been selected by the user 104 and/or may now be interacted with by the user in the AR environment. - With the foregoing in mind,
FIG. 6 illustrates a flow chart of a method 600 for displaying and modifying a visualization 114 associated with an AR environment based on one or more gaze gesture commands performed by a user 104. Although the following description of the method 600 is described in a particular order, it should be noted that the method 600 is not limited to the depicted order; instead, the method 600 may be performed in any suitable order. Moreover, although the method 600 is described as being performed by the head mounted device 106, it should be noted that it may be performed by any suitable computing device communicatively coupled to the head mounted device 106. - Referring now to
FIG. 6, at block 602, the head mounted device 106 may receive image data of the physical space associated with the real-world environment of the user 104. In some embodiments, the head mounted device 106 may acquire image data via the one or more cameras 202. The image data may include data that indicates dimensions of the physical space, such as height, width, and length. The head mounted device 106 may then process the acquired image data and display the visualization 114 based on the image data with respect to the physical space associated with the real-world environment of the user 104. For example, the cameras 202 may acquire image data of real-world objects within the surrounding environment of the user 104. The real-world objects may include physical structures, the user's body, other real-world objects, or portions thereof. - At
block 604, the head mounted device 106 may generate and display a visualization 114 based on the acquired image data. For example, the visualization 114 may replicate the acquired image data on a display of the head mounted device 106. In certain embodiments, the head mounted device 106 may generate and display the visualization 114 to simulate the user's perception of the physical space associated with the real-world environment of the user 104. For example, the head mounted device 106 may generate and display the visualization 114 to have the same viewing angle, the same field of vision, the same depth of vision, or the like, that the user 104 may perceive of the real world surrounding the user. Alternatively, the visualization 114 may be presented via a transparent display that allows the user 104 to view the real-world surroundings. The visualization 114 may then be superimposed over the transparent display to produce virtual objects within the real-world surroundings. In some embodiments, the head mounted device 106 may provide a video see-through display or an optical see-through display to display visualizations to the user. - In some embodiments, the head mounted
device 106 may display a virtual compartment that corresponds to a type or a category of industrial automation device. For example, the virtual compartment 302 may correspond to one or more types of conveyor sections that may be employed within a conveyor system. As such, the head mounted device 106 may display a visualization 114 to a user 104 on a display that includes both real-world and computer-generated content in real-time or substantially real-time. In some embodiments, the user 104 may speak a voice command to indicate the type or category of the industrial automation system that is intended to be placed in the physical space. Otherwise, the user 104 may specify to the head mounted device 106 the type or category of the industrial automation system by scrolling through visualizations that provide categories or types of industrial automation systems that may be designed. - At
block 606, the head mounted device 106 may receive a selection of a virtual compartment from the user 104. For example, the head mounted device 106 may acquire image data of the user 104 or a portion thereof, such as the user's arms, hands, fingers, legs, or the like. The head mounted device 106 may then detect a gesture command performed by the user 104 based on the acquired image data. As described above, for example, a gaze gesture may be detected by the head mounted device 106 by tracking the movement of the user's eyes or tracking a cursor indicative of the user's focus in a visualization associated with the AR environment. - After the head mounted
device 106 receives an indication of the selection of the virtual compartment from the user 104, the head mounted device 106 may modify the visualization 114 based on the selection of the virtual compartment, at block 608. For example, the head mounted device 106 may modify the visualization 114 to display one or more virtual industrial automation devices associated with the selected virtual compartment. In some embodiments, the head mounted device 106 may send a request to the computing system 110 for a list of industrial automation devices stored in the database 112 associated with the selected virtual compartment. The head mounted device 106 may also send a request to the computing system 110 for specification data associated with each industrial automation device associated with the selected virtual compartment. For each industrial automation device associated with the selected virtual compartment, the head mounted device 106 may modify the visualization 114 to display a virtual industrial automation device based on the specification data in the AR environment. That is, the specification data may include image or dimensional data that may be used to generate a virtual object that represents the virtual industrial automation device. - At
block 610, the head mounted device 106 may receive another indication of a selection of a virtual industrial automation device from the user 104. Similar to the selection of the virtual compartment, the head mounted device 106 may acquire image data of the user 104 or a portion thereof, such as the user's arms, hands, fingers, legs, or the like. The head mounted device 106 may then detect a gaze gesture command based on the acquired image data. - After the head mounted
device 106 receives the selection of the virtual industrial automation device from the user 104, the head mounted device 106 may modify the visualization 114 based on the selection of the virtual industrial automation device (block 608). For example, the head mounted device 106 may modify the visualization 114 by displaying the selected virtual industrial automation device. In this way, the head mounted device 106 may detect a gaze gesture command performed by a user 104 to select and display a virtual industrial automation device in the visualization 114 of an AR environment. - Additionally, in some embodiments, the head mounted
device 106 may detect voice commands issued by the user to provide similar interactions or additional interactions with the virtual compartments and/or the virtual industrial automation devices. For example, the user 104 may look towards a virtual compartment or a virtual industrial automation device and speak a voice command, and the head mounted device 106 may perform actions as described herein with respect to the gaze gesture command. - In some embodiments, the
user 104 may perform a gaze gesture command to display information or data associated with an industrial automation device 102. For example, in a design context, the user 104 may wish to know the name, the type, the vendor, the cost, or the like, of an industrial automation device. In another example, the user 104 may wish to know the identification, maintenance, and operational information, or the like, associated with a particular industrial automation device in an existing industrial system. With the foregoing in mind, FIG. 7 illustrates a perspective view 700 of a user 104 utilizing the head mounted device 106 to perceive a visualization 114 that may display identification information 702 associated with an industrial automation device 102. For example, the head mounted device 106 may acquire image data of the surroundings of the user 104 via one or more cameras 202. The head mounted device 106 may then process the acquired image data of the user and the user's surroundings and detect a gaze gesture command performed by the user 104 based on the image data. After detecting the gaze gesture command performed by the user 104, the head mounted device 106 may determine a target of the gaze gesture based on the image data and/or the gaze gesture command. As illustrated in FIG. 7, the target of the gaze gesture may be a virtual motor drive 102. The head mounted device 106 may then receive an identifier associated with the virtual motor drive 102 after determining that the virtual motor drive 102 is the target of the gaze gesture. For example, the head mounted device 106 may retrieve an identifier stored in the memory 210 of the head mounted device 106 that corresponds to the virtual motor drive 102. The head mounted device 106 may then send a request with the identifier to the computing system 110 for identification information associated with the industrial automation device 102. Based on the identifier and the type of request sent by the head mounted device 106, the computing system 110 may send identification information associated with the identifier to the head mounted device 106. After receiving the identification information associated with the identifier, the head mounted device 106 may display a virtual representation of the identification information on or adjacent to the virtual industrial automation device 102 in the visualization 114 associated with the AR environment. - After the
user 104 has selected a virtual industrial automation device 102 within the visualization 114 associated with the AR environment, the user 104 may wish to reposition the virtual industrial automation device 102 within the visualization 114 to a desired position. To help illustrate, FIG. 8 is a perspective view 800 of a user 104 utilizing the head mounted device 106 to reposition a virtual industrial automation device 806 in the visualization 114 associated with an AR environment. The head mounted device 106 may detect a gesture command 802 performed by the user 104 to select the virtual industrial automation device 806 within the visualization 114 associated with the AR environment. In the illustrated embodiment, the gesture command 802 performed by the user 104 may involve the user 104 reaching out in a direction toward a desired virtual industrial automation device 806 with, in one embodiment, a flat or open palm over the selected virtual industrial automation device 806. In other embodiments, the gesture command 802 performed by the user to select the virtual industrial automation device 806 may be a gaze gesture command as described above. Based on the image data, the head mounted device 106 may detect the gesture command 802 performed by the user and the target of the gesture command 802. In the illustrated embodiment, the head mounted device 106 may determine a vector extending along the user's arm or the user's palm toward a virtual industrial automation device 806. The head mounted device 106 may then determine that the virtual industrial automation device 806 is the target of the user's gesture command 802 because the position of the virtual industrial automation device 806 intersects with the vector extending from the user's arm or the user's palm.
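The vector-intersection targeting just described can be illustrated with a short ray-intersection sketch. This is a hypothetical two-dimensional simplification; the tolerance value, device coordinates, and function name are assumptions and not part of the described embodiments:

```python
def gesture_target(origin, direction, devices, tolerance=0.2):
    """Return the id of the virtual device whose position lies closest to the
    ray extending from the user's arm or palm, within a perpendicular
    tolerance; None when no device intersects the ray."""
    norm = (direction[0] ** 2 + direction[1] ** 2) ** 0.5
    ux, uy = direction[0] / norm, direction[1] / norm
    best, best_dist = None, tolerance
    for device_id, (px, py) in devices.items():
        rx, ry = px - origin[0], py - origin[1]
        if rx * ux + ry * uy < 0:        # device lies behind the user
            continue
        perp = abs(rx * uy - ry * ux)    # perpendicular distance off the ray
        if perp <= best_dist:
            best, best_dist = device_id, perp
    return best
```

For example, with a virtual device positioned nearly along the direction of the user's outstretched arm, the perpendicular distance falls inside the tolerance and that device is returned as the target of the gesture command.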
In some embodiments, the head mounted device 106 may also track the eye movements of the user or track a cursor indicative of the focus of the user 104 in the visualization associated with the AR environment to determine the target of the user's gesture command 802. -
FIG. 9 is a perspective view of a user 104 utilizing the head mounted device 106 to reposition the virtual industrial automation device 806 to the hand of the user 104. The head mounted device 106 may detect a grasping gesture command 808 by the user 104 with the same hand used to select the virtual industrial automation device 806 as shown in FIG. 8. As described above, the head mounted device 106 may receive image data associated with the user and the user's surroundings and detect the grasping gesture command 808 performed by the user based on the image data. The head mounted device 106 may then identify one or more mapping points associated with the user's hand that performed the grasping gesture command 808. After the head mounted device 106 has detected the grasping gesture command 808, the head mounted device 106 may modify the visualization associated with the AR environment and map the selected virtual industrial automation device 806 to the user's hand that performed the grasping gesture
808 at the one or more identified mapping points. Thereafter, as the user 104 moves the user's hand in the visualization 114 associated with the AR environment, the head mounted device 106 may continuously modify the visualization associated with the AR environment to move (e.g., as an animation) the selected virtual industrial automation device 806 toward the one or more identified mapping points associated with the user's hand. That is, the user 104 may move the virtual industrial automation device 806 in the visualization associated with the AR environment in real-time or substantially real-time after performing a grasping gesture command 808 in the visualization. - In one example, moving the user's hand from the open palm position to the partially closed (e.g., u-shaped) position may be detected as a gesture that causes the head mounted
device 106 to move the selected virtual industrial automation device 806 to the hand of the user 104 performing the gesture. The mapped points of the hand may include one or more fingers, the palm, or other distinguishable features of the hand. When the virtual industrial automation device 806 is selected and the grasp gesture is initialized, the head mounted device 106 may cause the virtual industrial automation device 806 to move (e.g., as an animation) towards the mapped points and stay attached to the mapped points until another gesture or voice command is received. - After the
user 104 has grasped and/or repositioned a virtual industrial automation device 102 within the visualization 114 associated with the AR environment, the user 104 may wish to drop the virtual industrial automation device 102 within the visualization 114. The head mounted device 106 may detect a let go gesture command (e.g., a release gesture command) performed by the user 104 to release the virtual industrial automation device 806 from the user's hand within the visualization 114 associated with the AR environment. For example, the let go gesture command may involve the user 104 extending the user's fingers from a curled position around the virtual industrial automation device 806. Based on the image data associated with the user 104, the head mounted device 106 may detect the let go gesture command and a position associated with the virtual industrial automation device 806 in the visualization 114 associated with the AR environment. The head mounted device 106 may then un-map the virtual industrial automation device 806 from the user's hand and position the virtual industrial automation device 806 in the detected position where the user 104 uncurled the user's fingers. That is, the head mounted device 106 may modify the visualization 114 associated with the AR environment to display that the virtual industrial automation device 806 is not mapped to the user's hand. Thereafter, the user 104 may move the user's hand in the visualization 114 associated with the AR environment and the virtual industrial automation device 806 may not move with the user's hand within the visualization 114. - With the foregoing in mind,
FIG. 10 illustrates a flow chart of a method 1000 for displaying and modifying the visualization 114 based on the grasping gesture 808 performed by a user 104. Although the following description of the method 1000 is described in a particular order, it should be noted that the method 1000 is not limited to the depicted order, and instead, the method 1000 may be performed in any suitable order. Moreover, although the method 1000 is described as being performed by the head mounted device 106, it should be noted that it may be performed by any suitable computing device communicatively coupled to the head mounted device 106. - Referring now to
FIG. 10, at block 1002, the head mounted device 106 may receive image data of the physical space associated with the real-world environment of the user 104. In some embodiments, the head mounted device 106 may acquire the image data via one or more cameras 202. The head mounted device 106 may then process the acquired image data and display the visualization 114 based on the image data with respect to the physical space associated with the real-world environment of the user 104. For example, the cameras 202 may acquire image data of real-world objects in the real-world environment surrounding the user. The real-world objects may include physical structures, the user's body, other real-world objects, or portions thereof. At block 1004, the head mounted device 106 may generate and display the visualization 114 based on the acquired image data and computer-generated content. For example, the head mounted device 106 may display the visualization 114 on a display to the user 104 that includes both real-world and computer-generated content, such as the one or more virtual industrial automation devices. - At
block 1006, the head mounted device 106 may receive a selection of a virtual industrial automation device, for example, via the open palm selection gesture 802 command as described above. After the head mounted device 106 receives the selection of the virtual industrial automation device from the user 104, the head mounted device 106 may receive image data associated with the gestures or hands of the user 104 at block 1008. In some embodiments, the head mounted device 106 may acquire the image data associated with the user 104 via the one or more cameras 202. The head mounted device 106 may then analyze the acquired image data for characteristics associated with the grasping gesture command 808. If a threshold of one or more characteristics for the grasping gesture command match a stored, learned, or otherwise interpretable command, the head mounted device 106 may determine a corresponding command to be performed by the head mounted device 106 based on the image data associated with the user 104. For example, in response to the determined command, the head mounted device 106 may determine one or more mapping points between the user's hand that performed the grasping gesture 808 and the selected virtual industrial automation device 806. At block 1012, the head mounted device 106 may then modify the visualization associated with the AR environment based on the determined command by mapping the selected virtual industrial automation device 806 to the user's hand at the one or more mapping points in the visualization 114 associated with the AR environment. That is, the head mounted device 106 may modify the visualization in real-time or substantially real-time to position the selected virtual industrial automation device 806 at the one or more connection points associated with the user's hand. - Additionally, in some embodiments, the head mounted
device 106 may detect voice commands issued by the user to provide similar interactions or additional interactions with the virtual industrial automation devices. For example, the user 104 may say the voice command "grasp," "let go," "drop," or "release." After the head mounted device detects the voice command, the head mounted device 106 may perform actions as described herein with respect to the corresponding grasp gesture command or the corresponding let go gesture command (e.g., release gesture command). - After the
user 104 has placed a virtual industrial automation device at a position in the visualization 114 associated with the AR environment, the user 104 may wish to move the virtual industrial automation device 102 to different locations in the visualization 114 associated with the AR environment. FIG. 11 illustrates a perspective view 1100 of a user 104 utilizing the head mounted device 106 to perform a push gesture command 1102 or a pull gesture command to move a virtual industrial automation device 1104 to another position in the visualization associated with the AR environment. The head mounted device 106 may detect the push gesture command 1102 or the pull gesture command performed by the user 104 to move (e.g., as an animation) the virtual industrial automation device 1104 in the visualization associated with the AR environment. For example, the head mounted device 106 may receive image data associated with the user 104 and the user's surroundings. Based on the image data associated with the user and the virtual content displayed in the visualization 114, the head mounted device 106 may determine that the gesture command 1102 performed by the user 104 is a push gesture command or a pull gesture command because the user 104 has placed both hands on a virtual surface of a virtual industrial automation device 1104. The head mounted device 106 may then receive motion data associated with the user 104 to determine whether the gesture command 1102 is a push gesture command or a pull gesture command. - To help illustrate,
FIG. 12 is a perspective view 1200 of a user 104 utilizing the head mounted device 106 to perform the push gesture 1202 to move the virtual industrial automation device 1104 to another position in the forward direction with respect to the user 104 within the visualization associated with the AR environment. After the head mounted device 106 has determined that the user's stance is indicative of either the push gesture or the pull gesture, the head mounted device may acquire motion data associated with the user 104. In some embodiments, the motion data may be extracted from the image data received by the head mounted device 106. For example, based on the image data associated with the user 104, the head mounted device 106 may determine a direction and a speed at which the user 104 is moving (e.g., walking). Based on the direction of movement associated with the user 104, the head mounted device 106 may determine that the gesture command 1202 is a push gesture command because the user 104 is moving in a forward direction with respect to the position of the user 104 in the visualization 114, or that the gesture command is a pull gesture command if the user is moving in a backward direction with respect to the position of the user 104 in the visualization 114. As illustrated in FIG. 12, the head mounted device 106 may determine that the gesture command 1202 is a push gesture command because the user is moving in the forward direction with respect to the position of the user 104. The head mounted device 106 may then modify the visualization 114 associated with the AR environment to display a movement (e.g., as an animation) of the virtual industrial automation device 1104 at the same speed as the motion of the user 104 in the forward direction.
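The push/pull determination above — both hands on a virtual surface, with the sign of the user's forward motion deciding between the two commands, and the device then moved at the user's own speed — might be sketched as follows. The `MotionSample` structure and function names are hypothetical, introduced only for illustration:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    # Signed speed of the user along the user's forward axis, in m/s:
    # positive = walking forward, negative = stepping backward.
    forward_speed: float
    dt: float  # seconds elapsed since the previous sample

def classify_gesture(sample: MotionSample) -> str:
    """Both hands are already on a virtual surface; the sign of the user's
    forward motion decides between the push and pull interpretations."""
    return "push" if sample.forward_speed >= 0.0 else "pull"

def move_device(position: float, sample: MotionSample) -> float:
    """Advance the device along the user's forward axis at the user's own
    speed, so the visualization tracks the walking motion frame by frame."""
    return position + sample.forward_speed * sample.dt
```

Calling `move_device` once per frame with fresh motion samples yields the continuous animation described above; a finger-orientation check (extended versus curled fingers, as described further below) could replace the motion-sign test in `classify_gesture`.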
In some embodiments, the virtual industrial automation device 1104 may also be mapped to the hands of the user 104 in the same manner described above, and the movement of the virtual industrial automation device 1104 may be linked to the movement of the mapped hands. - In some embodiments, the head mounted
device 106 may display an animation of the movement of the virtual industrial automation device 1104 after the motion of the user 104 associated with the gesture command 1202 is complete. For example, the head mounted device 106 may modify the visualization 114 to display the animation that the virtual industrial automation device 1104 is moving after the user 104 has completed a pushing or pulling motion associated with the gesture command 1202. The head mounted device 106 may determine a virtual force associated with the gesture command 1202 performed by the user 104 such that the head mounted device 106 may apply the virtual force to the virtual industrial automation device 1104 in the AR environment to simulate a movement of the virtual industrial automation device 1104 in the physical world. - For example, the head mounted
device 106 may receive a virtual weight associated with the virtual industrial automation device 1104 from the database 112. In one embodiment, the virtual weight may be configurable by the user 104. In another embodiment, the virtual weight may be based on specification data associated with the physical counterpart device to the virtual industrial automation device 1104. - In any case, the head mounted
device 106 may determine an angle at which the user 104 is pushing or pulling the virtual industrial automation device 1104. For example, the head mounted device 106 may determine a directional vector extending from the user's arms or hands and compare the directional vector to a horizontal axis associated with the virtual industrial automation device 1104. The head mounted device 106 may then determine an angle associated with the pushing or the pulling gesture motion performed by the user based on the comparison between the directional vector and the horizontal axis. Additionally, the head mounted device 106 may determine a speed of the user's hands or arms in the motion associated with the gesture command 1202 based on motion data associated with the user 104. The head mounted device 106 may then determine a virtual force based on the determined angle and speed associated with the user's gesture motion and the virtual weight associated with the virtual industrial automation device 1104. The head mounted device 106 may then apply the virtual force to the virtual industrial automation device 1104 after the user 104 has completed the user's gesture motion associated with the gesture command (e.g., the push gesture command or the pull gesture command). That is, the head mounted device 106 may display an animation of the virtual industrial automation device 1104 moving in the visualization 114 that corresponds to the direction and the speed associated with the virtual force applied to the virtual industrial automation device 1104. - In one embodiment, the virtual
industrial automation device 1104 may have one or more friction parameters associated with the virtual industrial automation device 1104 in the AR environment. The head mounted device 106 may display an animation of the virtual industrial automation device 1104 moving slower and/or stopping over time based on the one or more friction parameters associated with the virtual industrial automation device 1104 in the AR environment. - In one embodiment, the head mounted
device 106 may differentiate between the user 104 performing a push gesture command and a pull gesture command by determining an orientation of the user's fingers. For example, the head mounted device 106 may detect the push gesture command by determining that the user 104 has placed both hands on a virtual surface of a virtual industrial automation device and that the user's fingers are extended (e.g., straight or upward). In another example, the head mounted device 106 may detect the pull gesture command by determining that the user 104 has placed both hands on a virtual surface of a virtual industrial automation device and that the user's fingers are curled around a virtual edge of the virtual industrial automation device. - Similar to the push gesture command and pull gesture command of
FIGS. 11 and 12, in some embodiments, the head mounted device 106 may also detect a nudge gesture command to move a virtual industrial automation device 1104 to another position in the visualization 114 associated with the AR environment. The user 104 may perform the nudge gesture command to cause the head mounted device 106 to modify the visualization 114 to display a finer movement of the virtual industrial automation device 1104 as compared to a movement of the virtual industrial automation device 1104 as a result of the push gesture command or the pull gesture command. That is, after the head mounted device 106 detects the nudge gesture command performed by the user 104, the head mounted device 106 may limit the movement displayed in the visualization 114 of the virtual industrial automation device 1104 to less than a threshold virtual distance, such as the length of the user's arm, the length of the user's hand, or the like. In this way, the head mounted device 106 may more accurately display a desired movement of the virtual industrial automation device 1104 over a shorter virtual distance as compared to a movement of the virtual industrial automation device 1104 as a result of the push gesture command or the pull gesture command. - The head mounted
device 106 may detect the nudge gesture command by determining that the user 104 has placed one or both hands on a virtual surface of the virtual industrial automation device 1104 based on image data of the user received by the head mounted device 106 and virtual content displayed in the visualization 114. The head mounted device 106 may then receive motion data associated with the user 104 to determine a direction and a speed at which an arm or a hand of the user 104 is moving in the visualization 114 associated with the AR environment. The head mounted device 106 may then modify the visualization 114 associated with the AR environment to display a movement (e.g., as an animation) of the virtual industrial automation device 1104 at the same speed and in the same direction as the motion of the user's arm or hand. - In some embodiments, the
user 104 may wish to lift the virtual industrial automation device upwards in the visualization 114 associated with the AR environment to reposition the virtual industrial automation device or view the underside of the virtual industrial automation device. The user 104 may perform a lift gesture command that involves the user 104 placing the user's hand on the underside surface (e.g., a surface facing the floor of the AR environment) of a virtual industrial automation device. In one embodiment, the lift gesture command may involve the user 104 bending the knees of the user 104 to simulate a lifting motion of an object upwards. In another embodiment, the lift gesture command may involve the user 104 placing the user's hands on a first surface of the virtual industrial automation device and curling the user's fingers around a virtual edge of the virtual industrial automation device such that the user's fingers are touching an underside surface of the virtual industrial automation device. In any case, the head mounted device 106 may detect the lift gesture command performed by the user 104 to move (e.g., as an animation) the virtual industrial automation device 1104 upward in the visualization 114 associated with the AR environment. For example, based on image data associated with the user 104 and the virtual content displayed in the visualization 114, the head mounted device 106 may determine that the gesture command performed by the user 104 is a lift gesture command because the user 104 has placed one or both hands on a virtual underside surface of a virtual industrial automation device. - The head mounted
device 106 may then receive motion data associated with the user 104. Based on the motion data, the head mounted device 106 may determine a direction and an angle of movement associated with the user's hands, arms, or the like, to move the virtual industrial automation device in the visualization 114 (e.g., via an animation). For example, the head mounted device 106 may determine a directional vector extending from the user's arms or hands and compare the directional vector to a vertical axis associated with the virtual industrial automation device 1104. The head mounted device 106 may then determine an angle associated with the lifting gesture motion performed by the user 104 based on the comparison between the directional vector and the vertical axis. Additionally, the head mounted device 106 may determine a speed of the user's hands or arms in the motion associated with the lift gesture command based on motion data associated with the user 104. The head mounted device 106 may then move the virtual industrial automation device 1104 via the animation in the visualization
114 after the user 104 has completed the user's gesture motion associated with the lift gesture command. That is, the head mounted device 106 may display an animation of the virtual industrial automation device 1104 moving in the visualization 114 that corresponds to the determined direction and the determined speed associated with the user's lift gesture motion. In some embodiments, the virtual industrial automation device may also be mapped to the hands of the user 104 in the same manner described above, and the movement of the virtual industrial automation device 1104 may be linked to the movement of the mapped hands. - In some embodiments, the head mounted
device 106 may be communicatively coupled to one or more haptic feedback devices (e.g., actuators) that provide vibrations to the user 104 based on one or more conditions determined by the head mounted device 106 associated with a movement of the user 104, a movement of the virtual industrial automation device, or the like, in the visualization 114 associated with the AR environment. The haptic feedback devices may be worn by, or otherwise attached to, the user 104 or portions of the user 104, such as the user's hands, fingers, feet, legs, or any other suitable body part. The head mounted device 106 may send a signal to the haptic feedback devices to provide vibrational feedback to the user 104 to indicate one or more conditions associated with a movement of the user 104, a movement of the virtual industrial automation device, or the like, in the visualization 114 associated with the AR environment. For example, the head mounted device 106 may send a signal to the haptic feedback devices to provide a vibration to the user 104 after detecting that the user 104 has pushed a virtual industrial automation device into a wall or other boundary. In another example, the head mounted device 106 may send a signal to the haptic feedback devices to provide a vibration to the user 104 to notify the user of a message or an alert. It should be understood that the examples provided above are intended to be non-limiting and that the head mounted device 106 may send a signal to the haptic feedback devices to provide vibrational feedback to notify the user of any conditions associated with virtual objects, the user 104, the AR environment, or the like. In some embodiments, the vibrational feedback provided to the user 104 may also be accompanied by voice alerts or notifications to the user 104. - In addition to facilitating the movement of a virtual industrial automation device by a
user 104 to different locations within the visualization 114 associated with the AR environment, the head mounted device 106 may facilitate a rotation of the virtual industrial automation device along one or more axes of rotation of the virtual industrial automation device in the visualization associated with the AR environment. FIG. 13 illustrates a perspective view 1300 of the user 104 utilizing the head mounted device 106 to perform a rotate gesture command 1302 to rotate a virtual industrial automation device 1304 along a rotational axis 1306 of the virtual industrial automation device 1304 within the visualization 114 associated with the AR environment. The head mounted device 106 may receive image data associated with the user 104. Based on the image data associated with the user 104 and virtual content displayed in the visualization 114 associated with the AR environment, the head mounted device 106 may detect the rotate gesture command 1302 performed by the user 104 by determining that the user 104 has placed one or both hands on a virtual edge of the virtual industrial automation device 1304. In one embodiment, the head mounted device 106 may detect that the user 104 has placed one or both hands on the virtual edge of the virtual industrial automation device 1304 by determining that the positions of the user's hands (e.g., fingers, palm, wrist, or a combination thereof) align or overlap with a boundary (e.g., virtual edge) where two virtual surfaces of the virtual industrial automation device 1304 intersect. In one embodiment, the head mounted device 106 may detect that the user 104 is performing a rotate gesture command (e.g., as compared to a push gesture command, a pull gesture command, or a nudge gesture command) if the user 104 is moving the user's arm or the user's hand at an angle with respect to the virtual industrial automation device 102. - Alternatively, the user may issue the voice command "rotate" before performing the gesture or while performing the gesture.
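The angle-based test above — treating hand motion at an angle with respect to the device as a rotate gesture rather than a push or pull — might be sketched as follows. The threshold value and the mapping from vertical hand motion on the edge to a rotation sense are illustrative assumptions, not specifics of the disclosure:

```python
import math

def classify_edge_gesture(dx: float, dy: float,
                          angle_threshold_deg: float = 20.0) -> str:
    """With a hand on a virtual edge of the device, decide between a
    translation (push/pull) gesture and a rotate gesture from the angle of
    the hand's motion vector relative to the device's horizontal axis."""
    if dx == 0.0 and dy == 0.0:
        return "none"  # no motion yet
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))
    if angle < angle_threshold_deg:
        return "push/pull"  # motion roughly along the horizontal axis
    # Vertical motion on the edge maps to a rotation sense: pressing the
    # edge downward turns the device counter-clockwise, lifting it clockwise
    # (one plausible convention, chosen here for illustration).
    return "rotate_ccw" if dy < 0.0 else "rotate_cw"
```

In practice the motion vector would be projected onto a plane perpendicular to the candidate axis of rotation before the angle is computed, and the rotation speed could be taken proportional to the hand's speed along the edge.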
The head mounted
device 106 may then detect the voice command and distinguish the associated rotate gesture command from the push gesture command or the pull gesture command. In one embodiment, after the user 104 issues the voice command "rotate," the head mounted device 106 may modify the visualization to display one or more permissible rotational axes (e.g., 1306) about which the user 104 may rotate the virtual industrial automation device 1304. - The head mounted
device 106 may then receive motion data associated with the user 104. In some embodiments, the motion data may be extracted from the image data associated with the user 104. The head mounted device 106 may then determine a direction of rotation and a speed of rotation based on the movement of the user's hands or arms on the virtual edge of the virtual industrial automation device 1304. For example, if the user's hand appears to be pushing downwards on the virtual edge of the virtual industrial automation device 1304, the head mounted device 106 may determine that the direction of rotation is counterclockwise with respect to an axis of rotation of the virtual industrial automation device 1304. In another example, if the user's hand appears to be pushing upwards on the virtual edge of the industrial automation device 1304, the head mounted device 106 may determine that the direction of rotation is clockwise with respect to an axis of rotation of the virtual industrial automation device 1304. - In some embodiments, the head mounted
device 106 may receive specification data associated with the virtual industrial automation device 1304 that includes one or more permissible axes of rotation 1306 associated with the virtual industrial automation device 1304. In some embodiments, the permissible axes of rotation 1306 may correspond to axes of rotation associated with the actual counterpart device in the real world. For example, a permissible axis of rotation 1306 may correspond to an axis of rotation 1306 that extends through the center of mass of the actual counterpart device. - Additionally, the head mounted
device 106 may prevent a user 104 from rotating the virtual industrial automation device 1304 to an orientation that is not possible in the real world or that would prevent the functioning of the actual counterpart device in the real world. For example, if the user 104 attempts to rotate a virtual conveyor section to an orientation in which a portion of the conveyor section is embedded in the ground or the conveyor side of the conveyor section is facing the ground, the head mounted device 106 may stop the rotation of the virtual industrial automation device 102 in a position before the virtual industrial automation device 102 is rotated to the impermissible position. In one embodiment, the head mounted device 106 may permit the user 104 to rotate the virtual industrial automation device 1304 to an impermissible orientation but modify the visualization 114 to display an alert or an error notification that the virtual industrial automation device is in an impermissible orientation. - After the head mounted
device 106 receives the specification data associated with the virtual industrial automation device 1304, the head mounted device 106 may determine the axis of rotation 1306 based on a position, a motion, or both, of the user's hands on the virtual edge of the virtual industrial automation device 1304 and the specification data. For example, the head mounted device 106 may determine the axis of rotation 1306 based on possible directions of rotation that may be applied to the virtual industrial automation device 1304 from the position, the motion, or both, of the user's hands on the virtual edge of the virtual industrial automation device 1304. - In some embodiments, the
user 104 may adjust the axis of rotation 1306. For example, after detecting the rotate gesture command performed by the user 104, the head mounted device 106 may modify the visualization 114 to display a default axis of rotation 1306. The user 104 may then select the default axis of rotation by performing an axis repositioning gesture command (e.g., placing one or both hands on a portion of the default axis of rotation 1306) or by performing another gesture command (e.g., the gaze gesture command). After the head mounted device 106 detects the axis repositioning gesture command or the gaze gesture command, the head mounted device 106 may map the axis to one or more connection points on the hand or hands of the user (e.g., the fingers or the palm of the user). Thereafter, the user 104 may move the user's hand, and the head mounted device 106 may modify the visualization 114 to move (e.g., as an animation) the axis to the position of the user's hand or hands. In one embodiment, for example, the head mounted device 106 may map the axis of rotation to the user's hand as if the user were grabbing a pole. The user 104 may then adjust the position or the orientation of the axis to a desired position or a desired orientation. - After the head mounted
device 106 determines the direction of rotation and the speed of rotation, the head mounted device 106 may then modify the visualization 114 to display a rotation of the virtual industrial automation device 1304 about the determined axis of rotation at the determined direction and speed of rotation, as illustrated by FIG. 14. - In some embodiments, the
user 104 may rotate and move a virtual industrial automation device 1304 simultaneously. For example, the head mounted device 106 may detect that the user has performed a push gesture command, a pull gesture command, a nudge gesture command, or the like, at an angle. As described above, the head mounted device 106 may display a rotation of the virtual industrial automation device at the determined angle about a rotational axis associated with the virtual industrial automation device 1304 and a movement of the virtual industrial automation device 1304 at a speed and in a certain direction associated with the gesture command. - In some embodiments, the size of the virtual
industrial automation device 1304 may be too large to conveniently manipulate (e.g., rotate, move, push, or pull) in the visualization 114 associated with the AR environment. In such embodiments, the user 104 may be able to perform a scale down command to reduce the size of the virtual industrial automation device 1304 in the visualization 114. For example, after the user 104 has selected a virtual industrial automation device (e.g., via a gaze gesture command), the user may issue a voice command, such as "scale down," "smaller," or the like, to reduce the size of the virtual industrial automation device 1304. Additionally, the user 104 may be able to perform a scale up command to increase the size of the virtual industrial automation device 1304 in the visualization. For example, after the user 104 has selected a virtual industrial automation device (e.g., via a gaze gesture command), the user may issue a voice command, such as "scale up," "larger," "grow," or the like, to increase the size of the virtual industrial automation device 1304. In addition, the user 104 may scale up the virtual industrial automation device using hand motions. For example, the user 104 may extend his hands at two edges or ends of the virtual industrial automation device and move them outward. Alternatively, the user 104 may scale down the virtual industrial automation device by extending his hands at two edges or ends of the virtual industrial automation device and moving them inward towards each other. - With the foregoing in mind,
FIG. 15 illustrates a flow chart of a method 1500 for displaying and modifying a visualization 114 based on one or more gesture commands performed by the user 104 to move a virtual industrial automation device in a visualization 114 associated with an AR environment. Although the following description of the method 1500 is described in a particular order, it should be noted that the method 1500 is not limited to the depicted order, and instead, the method 1500 may be performed in any suitable order. Moreover, although the method 1500 is described as being performed by the head mounted device 106, it should be noted that it may be performed by any suitable computing device communicatively coupled to the head mounted device 106. - Referring now to
FIG. 15, at block 1502, the head mounted device 106 may generate and display the visualization 114 based on virtual content and received image data associated with the user 104 and the real-world environment of the user 104. For example, the head mounted device 106 may display a visualization 114 that includes a virtual industrial automation device positioned in the real-world environment of the user 104 in real-time or substantially real-time. - At
block 1504, the head mounted device 106 may receive image data associated with a gesture command performed by a user 104. For example, the gesture command may include a push gesture command, a pull gesture command, a nudge gesture command, a rotate gesture command, or the like. The head mounted device 106 may then analyze the acquired image data for characteristics associated with the one or more gesture commands. If a threshold of one or more characteristics for a particular gesture command matches a stored, learned, or otherwise interpretable command, the head mounted device 106 may determine a corresponding command to be performed by the head mounted device 106 based on the image data associated with the user 104. For example, as described above, a push gesture command involves the user 104 placing both hands on a virtual surface of a virtual industrial automation device. Based on the image data associated with the gesture performed by the user, the head mounted device 106 may determine that the gesture command corresponds to the push gesture command if the head mounted device 106 determines that both of the user's hands are placed on a virtual surface of a virtual industrial automation device. - After determining the gesture command, at
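a high level, the preceding threshold-matching of gesture characteristics might resemble the following sketch. The template names, characteristic labels, and the 0.8 threshold are all assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: match observed image-data characteristics against
# stored gesture templates and return the best command above a threshold.
GESTURE_TEMPLATES = {
    "push": ["both_hands_on_surface", "fingers_extended_upward"],
    "pull": ["both_hands_on_surface", "fingers_curled_over_edge"],
}

def classify_gesture(observed, templates=GESTURE_TEMPLATES, threshold=0.8):
    """Return the gesture command whose required characteristics are
    sufficiently present in the observed data, or None if none match."""
    best_cmd, best_score = None, 0.0
    for command, required in templates.items():
        score = sum(1 for c in required if observed.get(c)) / len(required)
        if score >= threshold and score > best_score:
            best_cmd, best_score = command, score
    return best_cmd
```

For instance, observing both hands on a virtual surface with fingers extended upward would classify as the push gesture command under these assumed templates.
- After determining the gesture command, at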
block 1508, the head mounted device 106 may receive motion data associated with the user 104. In some embodiments, the motion data is extracted from received image data associated with the user 104. At block 1510, the head mounted device 106 may determine one or more motion characteristics associated with the gesture command performed by the user 104. For example, with regard to the push gesture command, the head mounted device 106 may determine a direction and a speed at which the user 104 is moving (e.g., walking) based on the motion data. It should be noted that the motion data described in the present disclosure may also be acquired using velocity sensors, position sensors, accelerometers, and other suitable speed detection sensors disposed on the head mounted device 106. - At
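a minimum, deriving the direction and speed from successive position samples might look like this illustrative sketch; the function name and the planar (x, y) representation are assumptions, not details from the patent.

```python
import math

def motion_characteristics(positions, dt):
    """Given successive planar positions of the user sampled dt seconds
    apart (from image data or on-device sensors), return (speed, heading),
    with heading measured in degrees from the +x axis."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    dx, dy = x1 - x0, y1 - y0
    return math.hypot(dx, dy) / dt, math.degrees(math.atan2(dy, dx))
```
- At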
block 1512, the head mounted device 106 may then modify the visualization 114 associated with the AR environment based on the determined gesture command and motion characteristics. For example, with regard to the push gesture command, the head mounted device 106 may modify the visualization associated with the AR environment by moving the virtual industrial automation device at the same speed and in the same direction as the movement of the user. - Additionally, in some embodiments, the head mounted
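device 106 could apply this modification with a per-frame update along the lines of the following sketch; the function name and tuple-based positions are hypothetical.

```python
def push_update(device_position, user_velocity, dt):
    """Translate the pushed virtual device by the user's own displacement
    over dt seconds, so the device moves at the same speed and in the same
    direction as the user, as described for the push gesture command."""
    return tuple(p + v * dt for p, v in zip(device_position, user_velocity))
```
- Additionally, in some embodiments, the head mounted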
device 106 may detect voice commands issued by the user to provide similar interactions or additional interactions with the virtual industrial automation devices in the AR environment or with the AR environment itself. For example, the user 104 may say the voice command "push," "pull," "rotate," "nudge," "lift," or the like. After the head mounted device detects the voice command, the head mounted device 106 may perform actions as described herein with respect to the corresponding gesture command (e.g., the push gesture command, the pull gesture command, the nudge gesture command, the rotate gesture command, the lift gesture command). - After the
user 104 has moved one or more virtual industrial automation devices to a desired position, the user 104 may wish to combine the one or more virtual industrial automation devices to form a design of the industrial system. To help illustrate, FIG. 16 is a perspective view 1600 of the user 104 utilizing the head mounted device 106 to couple a first virtual industrial automation device 1602 and a second virtual industrial automation device 1604 in the visualization 114 associated with an AR environment. In the illustrated embodiment, the user 104 may have performed grasping gesture commands using each hand to map the first virtual industrial automation device 1602 to the user's right hand and the second virtual industrial automation device 1604 to the user's left hand. - The head mounted
device 106 may detect a snap gesture command performed by the user 104 to couple the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604. For instance, in the illustrated embodiment, the snap gesture command 802 performed by the user 104 may involve the user 104 bringing both hands together while grasping a respective virtual industrial automation device in each hand. The head mounted device 106 may detect the snap gesture command after receiving image data associated with the hands of the user 104 and the user's surroundings. Based on the image data of the user 104 and the virtual data displayed in the visualization 114, the head mounted device 106 may determine that the user 104 is performing the snap gesture with the first virtual industrial automation device
1602 and the second virtual industrial automation device 1604. For instance, if the user's hands are moving towards each other and each hand includes a virtual industrial automation component or device that may interface with the other, the head mounted device 106 may detect the snap gesture from the image data that illustrates the movement of the hands with the connectable virtual components. After detecting the snap gesture performed by the user 104, the head mounted device 106 may modify the visualization 114 associated with the AR environment to couple (e.g., snap) the first virtual industrial automation device 1602 with the second virtual industrial automation device 1604 at one or more predetermined connection points, as shown in FIG. 17. In some embodiments, the head mounted device 106 may provide a snapping motion or a magnetic motion when coupling the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604. That is, the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 may be brought together at a certain speed until they are within a threshold distance of each other. At that time, the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 may accelerate or increase their speed toward each other to mimic a magnetic attraction or snap effect. In one embodiment, the head mounted device 106 may provide a snapping sound that may accompany the coupling of the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 together. The snapping sound may correspond to a pop sound, click sound, or other sound (e.g., ring, chime) that conveys to the user 104 that the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 have connected. - In some embodiments, the head mounted
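device 106 might realize the magnetic snap motion described above with per-frame logic along these lines; the boost factor, step size, and function name are assumptions for illustration only.

```python
def snap_step(separation, base_speed, snap_radius, boost=3.0, dt=0.1):
    """Advance the two virtual devices toward each other by one animation
    step. Once the separation falls within the snap radius, the closing
    speed is boosted to mimic the magnetic attraction or snap effect."""
    speed = base_speed * boost if separation <= snap_radius else base_speed
    return max(0.0, separation - speed * dt)
```

A snapping sound (pop, click, ring, or chime) could then be triggered on the step where the separation first reaches zero.
- In some embodiments, the head mounted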
device 106 may determine a compatibility between the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 before modifying the visualization associated with the AR environment to couple the devices together. For example, the head mounted device 106 may receive compatibility data associated with the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 from the computing system 110 or other suitable memory component after detecting the snap gesture command performed by the user 104. Based on the compatibility data associated with the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604, the head mounted device 106 may determine whether the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 are compatible or not compatible (e.g., whether the real-world counterparts would couple together or not). The compatibility data may be based on specification data related to each of the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604. The specification data may detail devices or components that connect to each other, types of interconnects, male counterpart components, female counterpart components, and the like. - After determining that the first virtual
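device's specification data is available, a compatibility determination of this kind might, purely as an illustration, look like the sketch below; the field names and example models are hypothetical, not from the patent.

```python
def are_compatible(spec_a, spec_b):
    """Hypothetical compatibility check from specification data: the devices
    couple if either lists the other as a connectable model and the two
    share at least one interconnect type."""
    listed = (spec_b["model"] in spec_a["connects_to"]
              or spec_a["model"] in spec_b["connects_to"])
    shared = set(spec_a["interconnects"]) & set(spec_b["interconnects"])
    return listed and bool(shared)
```
- After determining that the first virtual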
industrial automation device 1602 and the second virtual industrial automation device 1604 are not compatible, the head mounted device 106 may display an error message in the visualization 114 notifying the user 104 of the incompatibility. In some embodiments, the head mounted device may display a recommendation associated with the compatibility of the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 with other virtual industrial automation devices. - After the
user 104 has coupled one or more virtual industrial automation devices into a joint virtual industrial automation device, the user 104 may wish to separate the one or more virtual industrial automation devices from each other using a gesture detected by the head mounted device 106. To help illustrate, FIG. 17 is a perspective view 1700 of the user 104 utilizing the head mounted device 106 to separate a first virtual industrial automation device 1602 and a second virtual industrial automation device 1604 in a visualization 114 associated with an AR environment. In the illustrated embodiment, the user 104 may have performed the grasping gesture command using one or both hands to map the joint virtual industrial automation device to one hand or both hands, respectively. - The head mounted
device 106 may detect a separate gesture command performed by the user 104 to separate the first virtual industrial automation device 1602 from the second virtual industrial automation device 1604. In the illustrated embodiment, the separate gesture command 802 performed by the user 104 may involve the user 104 separating the user's hands while the user 104 is grasping a different section (e.g., the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604) of the joint virtual industrial automation device. The head mounted device 106 may detect the separate command after receiving image data associated with the user 104 and the user's surroundings. Based on the image data of the user 104 and the virtual data displayed in the visualization 114, the head mounted device 106 may determine that the user 104 is performing the separate gesture with the joint virtual industrial automation device. After detecting the separate gesture performed by the user 104, the head mounted device 106 may modify the visualization 114 associated with the AR environment to separate the first virtual industrial automation device 1602 from the second virtual industrial automation device 1604 at one or more predetermined disconnection points, as shown in FIG. 16. - In some embodiments, the head mounted
device 106 may determine whether the user's hands are positioned about a line or a point of severance between the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 in the joint virtual industrial automation device. For example, the head mounted device 106 may determine a position of each hand of the user along a joint virtual industrial automation device. The head mounted device 106 may then determine that a line or a point of severance associated with the joint virtual industrial automation device is located between the positions of the user's hands along the joint virtual industrial automation device. In some embodiments, the head mounted device 106 may detect a gaze gesture command performed by the user 104 and directed towards the joint virtual industrial automation device. After detecting the gaze gesture command performed by the user 104, the head mounted device 106 may determine one or more severance joints in the joint virtual industrial automation device and modify the visualization to display the one or more determined severance joints. For example, the head mounted device 106 may determine that a joint virtual industrial automation device has a first severance joint between a first and a second virtual industrial automation device in the joint virtual industrial automation device, and the head mounted device 106 may determine that the joint virtual industrial automation device has a second severance joint between the second virtual industrial automation device and a third virtual industrial automation device. The head mounted device 106 may then modify the visualization 114 associated with the AR environment to display the first and second severance joints associated with the joint virtual industrial automation device. - In one embodiment, the
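determination of whether a severance joint lies between the user's hand positions might be sketched as follows; the one-dimensional coordinates along the joint device and the function name are hypothetical illustrations.

```python
def selected_severance_joint(left_hand, right_hand, severance_joints):
    """Return the severance joint (a coordinate along the joint virtual
    device) that lies between the user's hand positions, or None if no
    joint does, in which case an error message or recommendation could
    be displayed instead of separating the devices."""
    lo, hi = sorted((left_hand, right_hand))
    for joint in severance_joints:
        if lo < joint < hi:
            return joint
    return None
```
- In one embodiment, the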
user 104 may place the user's hands at desired positions about a desired severance joint to select the severance joint. For example, the head mounted device 106 may determine that the user 104 has selected a desired severance joint by detecting the positions of the user's hands, arms, fingers, or any other suitable body part on either side of one of the displayed severance joints. In another embodiment, the user 104 may perform a gaze gesture command directed toward one of the severance joints to select the desired severance joint. For example, the head mounted device 106 may determine that the user 104 has selected a desired severance joint by detecting that the user 104 has performed the gaze gesture command, as described herein, and that the target of the gaze gesture command is one of the displayed severance joints. - If the head mounted
device 106 determines that the severance joint (e.g., the line or the point of severance) is not between the positions of the user's hands along the joint virtual industrial automation device, the head mounted device 106 may display an error message in the visualization 114 associated with the determination. In some embodiments, the head mounted device 106 may display a recommendation to the user 104 in the visualization. For example, the head mounted device 106 may highlight the line or point of severance between the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 in the joint virtual industrial automation device. - With the foregoing in mind,
FIG. 18 illustrates a flow chart of a method 1800 for displaying and modifying a visualization 114 based on a snap gesture command or a separate gesture command performed by a user 104 in a visualization 114 associated with an AR environment. Although the following description of the method 1800 is described in a particular order, it should be noted that the method 1800 is not limited to the depicted order, and instead, the method 1800 may be performed in any suitable order. Moreover, although the method 1800 is described as being performed by the head mounted device 106, it should be noted that it may be performed by any suitable computing device communicatively coupled to the head mounted device 106. - Referring now to
FIG. 18, at block 1802, the head mounted device 106 may generate and display the visualization 114 based on virtual content and received image data associated with the user 104 and the real-world environment of the user 104. For example, the head mounted device 106 may display a visualization 114 that includes a first virtual industrial automation device and a second virtual industrial automation device positioned in the real-world environment of the user. At block 1804, the head mounted device 106 may receive image data associated with a gesture command performed by the user 104. For example, the gesture command may include a snap gesture command, a separate gesture command, or the like. The head mounted device 106 may then analyze the acquired image data for characteristics associated with one or more gesture commands. If a threshold of one or more characteristics for a particular gesture command matches a stored, learned, or otherwise interpretable command, the head mounted device 106 may determine a corresponding command to be performed by the head mounted device 106 based on the image data associated with the user 104 at block 1806. For example, as described above, a snap gesture command involves the user 104 bringing both hands together while grasping a respective virtual industrial automation device in each hand, and the separate gesture command involves the user separating the user's hands while the user 104 is grasping a different section (e.g., the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604) of the joint virtual industrial automation device. - After determining the gesture command, at
block 1808, the head mounted device 106 may determine whether the command is valid based on the identity of the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604. As described above, with regard to the snap gesture command, the head mounted device 106 may determine a compatibility between the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604. If the head mounted device 106 determines that the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 are not compatible, the head mounted device 106 may display an error message and/or a recommendation at block 1810. - If the head mounted
device 106 determines that the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 are compatible, the head mounted device 106 may proceed to block 1812 and modify the visualization 114 associated with the AR environment to couple (e.g., snap) the first virtual industrial automation device 1602 with the second virtual industrial automation device 1604 at one or more predetermined connection points. - In some embodiments, the head mounted
device 106 may join the wire connections associated with the first industrial automation device 1602 and the second industrial automation device 1604 when coupling the first industrial automation device 1602 and the second industrial automation device 1604. Based on the connected wire connections, the head mounted device 106 may perform logic that associates the first industrial automation device 1602 with the second industrial automation device 1604. For example, after the head mounted device 106 detects a separate gesture command performed by the user 104 associated with a joint virtual industrial automation device, the head mounted device 106 may display one or more severance points or joints associated with the joint virtual industrial automation device in the visualization 114. The one or more severance points or joints may be associated with locations where the user 104 may decouple the wire connections between the first industrial automation device 1602 and the second industrial automation device 1604. As described above, after the head mounted device 106 detects that the user 104 has positioned the user's hands about a proper severance joint, the head mounted device 106 may separate the first industrial automation device 1602 from the second industrial automation device 1604 and the wire connections associated with each respective virtual device. - Similarly, referring back to block 1808 with regard to the separate feature, the head mounted
device 106 may determine whether the command is valid by determining whether the user's hands are positioned about a line or a point of severance between the first virtual industrial automation device 1602 and the second virtual industrial automation device 1604 in a joint virtual industrial automation device. If the head mounted device 106 determines that the user's hands are not positioned about a line or a point of severance, the head mounted device 106 may modify the visualization 114 to display an error message and/or recommendation at block 1810. If the head mounted device 106 determines that the user's hands are positioned about a line or point of severance, the head mounted device 106 may modify the visualization 114 to separate the first virtual industrial automation device 1602 from the second virtual industrial automation device
1604 at one or more predetermined disconnection points (block 1812). - Additionally, in some embodiments, the head mounted
device 106 may detect voice commands issued by the user to provide similar interactions or additional interactions with the virtual industrial automation devices in the AR environment or with the AR environment itself. For example, the user 104 may say the voice command "separate," "snap," or the like. After the head mounted device detects the voice command, the head mounted device 106 may perform actions as described herein with respect to the corresponding gesture command (e.g., the separate gesture command or the snap gesture command). - In some embodiments, the
user 104 may wish to design an industrial system from a remote location away from the physical location where the industrial system may be located after assembly. For example, a user may design an industrial system from an office or in another country. In such embodiments, the head mounted device 106 may provide a dynamic rotation mode that facilitates the design of an industrial system in a virtual environment. To help illustrate, FIG. 19 is an illustration of the user 104 utilizing the head mounted device 106 to navigate a virtual industrial system 1908 in a virtual environment. The user 104 may view the virtual industrial system 1908 in the virtual environment without physically moving. That is, the head mounted device 106 may detect various gesture commands (e.g., the gesture commands as described herein) or voice commands that may cause the head mounted device 106 to modify the visualization of the virtual environment. For example, the user 104 may issue navigational voice commands (e.g., "turn left," "turn right," "forward," or "backward"). The head mounted device 106 may detect the navigational voice commands and modify the visualization of the virtual industrial system to provide the user 104 with a corresponding view. The user 104 may interact with the virtual industrial system 1908, and portions thereof (e.g., various virtual industrial devices), using other voice commands and gesture commands as described herein. That is, the user 104 may extend his hands to the edges or sides of the virtual industrial system 1908 and move the hands in a manner (e.g., circular motion) that rotates the virtual industrial system 1908. By way of example, the user 104 with the view 1902 may rotate the virtual industrial system 1908 180 degrees to obtain the view 1906 without moving from his position in the corresponding physical space. - In some embodiments, the
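navigational voice commands described above might map onto simple view rotations, as in this hypothetical sketch; only yaw rotation is modeled (forward/backward translation is omitted), and the 90-degree step is an assumption.

```python
# Hypothetical mapping of navigational voice commands to view yaw changes.
NAV_COMMANDS = {"turn left": -90, "turn right": 90}

def navigate(view_yaw_degrees, command):
    """Rotate the remote view of the virtual industrial system in place,
    without the user moving in the corresponding physical space."""
    return (view_yaw_degrees + NAV_COMMANDS.get(command, 0)) % 360
```

Two successive "turn right" commands would thus yield the opposite, 180-degree view, analogous to rotating from the view 1902 to the view 1906.
- In some embodiments, the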
user 104 may wish to have a bird's eye perspective of the design of an industrial system. As such, the head mounted device 106 may provide the user with a scale down command to reduce the size of the virtual system in the visualization. To help illustrate, FIG. 20 is a perspective view 2000 of a user 104 utilizing a head mounted device 106 to view a visualization of a virtual industrial automation device or a virtual industrial system 2002. The user may issue a voice command, such as "scale down," "smaller," or the like, to reduce the size of the virtual industrial automation device or the virtual industrial system 2002. In some embodiments, the user 104 may be able to perform a scale up command to increase the size of the virtual industrial automation device or the virtual industrial system 2002 in the visualization. For example, after the user 104 has selected a virtual industrial automation device (e.g., via a gaze gesture command), the user may issue a voice command, such as "scale up," "larger," "grow," or the like, or the scale gesture commands described above to modify the size of the virtual industrial automation device 1304. - Although certain embodiments as described herein refer to displaying or modifying a visualization that includes the user's surroundings, virtual objects, virtual information, or the like, on a display of, for example, the head mounted
device 106, it should be understood that, in other embodiments, the display may be a transparent display allowing the user to see the user's surroundings through the display, and a visualization that includes the virtual objects, virtual information, or the like, may be superimposed on the transparent display to appear to the user as if the virtual objects are in the user's surroundings. - Technical effects of the present disclosure include techniques for facilitating the visualization and the design of an industrial system by a user in an AR environment. The interactive AR system may allow a user to visualize and model various configurations and designs of an industrial system and the components of the industrial system within the physical space the industrial system may be located in after assembly. Additionally, the interactive AR system provides the user with various gesture commands and voice commands to interact with virtual objects within the AR environment and to navigate the AR environment itself using natural hand motions and gestures. In this way, operating in the AR environment may be more easily performed by various operators.
- While only certain features of the disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.
-
- Embodiment 1: A system for interacting with virtual objects in an augmented reality environment, comprising:
a head mounted device, configured to:- receive a first set of image data associated with a surrounding of a user;
- generate a first visualization comprising a plurality of virtual compartments, wherein each virtual compartment is associated with one type of virtual industrial automation device, wherein each virtual compartment comprises a plurality of virtual industrial automation devices, wherein each virtual industrial automation device is configured to depict a virtual object within the first set of image data, wherein the virtual object corresponds to a physical industrial automation device;
- display the first visualization via an electronic display;
- detect a gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the gesture is indicative of a selection of one of the plurality of virtual compartments;
- generate a second visualization comprising a respective plurality of virtual industrial automation devices associated with the selection; and
- display the second visualization via the electronic display.
- Embodiment 2: The system of embodiment 1, wherein the gesture comprises the user placing at least one hand on a virtual surface of the one of the plurality of virtual compartments.
- Embodiment 3: The system of embodiment 1, wherein the gesture comprises eye movement of the user, wherein the eye movement is associated with a virtual surface of the one of the plurality of virtual compartments.
- Embodiment 4: The system of embodiment 1, wherein the electronic display comprises a transparent display.
- Embodiment 5: The system of embodiment 1, wherein the head mounted device is configured to:
- detect a second gesture performed by the user in a third set of image data comprising the second visualization, wherein the second gesture is indicative of a selection of one of the respective plurality of virtual industrial automation devices; and
- generate a third visualization comprising an animation of the one of the respective plurality of virtual industrial automation devices moving towards a hand of the user.
- Embodiment 6: The system of embodiment 5, wherein the second gesture comprises the user extending out at least a hand toward the one of the respective plurality of virtual industrial automation devices.
- Embodiment 7: The system of embodiment 5, wherein the animation comprises mapping the one of the respective plurality of virtual industrial automation devices to the hand.
- Embodiment 8: The system of embodiment 7, wherein the head mounted device is configured to:
- track a movement of the hand; and
- adjust the animation to follow the movement of the hand.
- Embodiment 9: The system of embodiment 8, wherein the animation is configured to move based on a speed of the movement.
- Embodiment 10: The system of embodiment 7, wherein the gesture is a first gesture, and wherein the system comprises detecting a second gesture in a third set of image data comprising the surrounding of the user and the second visualization, wherein the second gesture is indicative of a request to un-map the one of the respective plurality of virtual industrial automation devices from the hand.
- Embodiment 11: The system of embodiment 5, wherein the head mounted device is configured to:
- detect one of the respective plurality of virtual industrial automation devices moving towards an additional virtual industrial automation device; and
- accelerate the one of the respective plurality of virtual industrial automation devices towards the additional virtual industrial automation device in response to the one of the respective plurality of virtual industrial automation devices being compatible with the additional virtual industrial automation device.
- Embodiment 12: The system of embodiment 5, wherein the head mounted device is configured to:
- detect one of the respective plurality of virtual industrial automation devices moving towards another virtual industrial automation device; and
- display an error message in response to the one of the respective plurality of virtual industrial automation devices being incompatible with the other virtual industrial automation device.
- Embodiment 13: A method, comprising:
- receiving, via a processor, a first set of image data associated with a surrounding of a user;
- generating, via the processor, a first visualization comprising a virtual industrial automation device, wherein the virtual industrial automation device is configured to depict a virtual object within the first set of image data, wherein the virtual object corresponds to a physical industrial automation device;
- displaying, via the processor, the first visualization via an electronic display;
- detecting, via the processor, a gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the gesture is indicative of a request to move the virtual industrial automation device;
- tracking, via the processor, a movement of the user;
- generating, via the processor, a second visualization comprising an animation of the virtual industrial automation device moving based on the movement; and
- displaying, via the processor, the second visualization via the electronic display.
- Embodiment 14: The method of embodiment 13, wherein the gesture comprises the user placing at least one hand on a virtual surface of the virtual industrial automation device, and wherein the movement is performed in a forward direction with respect to the user.
- Embodiment 15: The method of embodiment 14, wherein the gesture comprises fingers of the at least one hand of the user extending upward.
- Embodiment 16: The method of embodiment 13, wherein the gesture comprises the user placing at least one hand on a virtual surface of the virtual industrial automation device, and wherein the movement of the user is performed in a backward direction with respect to the user.
- Embodiment 17: The method of embodiment 14, wherein the gesture comprises fingers of the at least one hand of the user curling over a virtual edge of the virtual industrial automation device.
- Embodiment 18: The method of embodiment 13, wherein the gesture comprises the user placing at least one hand on a virtual edge of the virtual industrial automation device, the movement of the user comprises a rotational speed and a rotational direction with respect to the at least one hand, and the animation comprises rotating the virtual industrial automation device at the rotational speed in the rotational direction with respect to the at least one hand.
- Embodiment 19: The method of embodiment 13, wherein the gesture comprises the user placing at least one hand on a virtual underside surface of the virtual industrial automation device, and wherein the movement of the user comprises a lifting motion associated with the virtual industrial automation device.
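The rotation mapping of embodiment 18 amounts to advancing the device's angle at the hand's rotational speed, in the hand's rotational direction. A hedged sketch, with a hypothetical `rotation_update` helper and a sign convention that is purely illustrative:

```python
def rotation_update(angle_deg, hand_speed_deg_s, direction, dt):
    """Advance the device's rotation so it follows the hand's rotational
    speed and direction (direction: +1 counter-clockwise, -1 clockwise)."""
    return (angle_deg + direction * hand_speed_deg_s * dt) % 360.0

# A hand turning clockwise at 90 deg/s, sampled over 0.5 s:
angle = rotation_update(0.0, 90.0, -1, 0.5)  # device now at 315 degrees
```

Calling this once per tracking sample keeps the virtual device locked to the gripping hand even when the hand's speed varies between samples.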
- Embodiment 20: A computer-readable medium comprising computer-executable instructions that, when executed, are configured to cause a processor to:
- receive a first set of image data associated with a surrounding of a user;
- generate a first visualization comprising a first virtual industrial automation device and a second virtual industrial automation device, wherein the first and second virtual industrial automation devices are configured to depict first and second respective virtual objects within the first set of image data, wherein the first and second respective virtual objects correspond to a first and a second physical industrial automation device;
- display the first visualization via an electronic display;
- detect a first gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the first gesture is indicative of a movement of the first virtual industrial automation device toward the second virtual industrial automation device;
- determine a compatibility between the first virtual industrial automation device and the second virtual industrial automation device;
- generate a second visualization comprising an animation of the first virtual industrial automation device coupling to the second virtual industrial automation device to create a joint virtual industrial automation device in response to determining that the first virtual industrial automation device and the second virtual industrial automation device are compatible;
- generate a third visualization comprising an error notification in response to determining that the first virtual industrial automation device and the second virtual industrial automation device are incompatible; and
- display the second visualization or the third visualization via the electronic display.
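The branch in embodiment 20 (coupling animation when compatible, error notification when not) can be sketched as a single dispatch function. The catalog, function name, and dictionary shape below are illustrative assumptions; a real system would consult actual device profiles:

```python
# Illustrative compatibility catalog; not a real product mapping.
COMPATIBLE_PAIRS = {("conveyor", "motor"), ("controller", "io_module")}

def visualization_for(first_type, second_type):
    """Pick the visualization to display when one virtual device is moved onto
    another: a coupling animation for a compatible pair, an error otherwise."""
    pair_ok = (first_type, second_type) in COMPATIBLE_PAIRS or \
              (second_type, first_type) in COMPATIBLE_PAIRS
    if pair_ok:
        return {"kind": "coupling_animation",
                "joint_device": f"{first_type}+{second_type}"}
    return {"kind": "error_notification",
            "message": f"{first_type} cannot be coupled to {second_type}"}
```

Note the order-independent membership test: compatibility is symmetric, so the check covers the pair in either direction.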
- Embodiment 21: The computer-readable medium of embodiment 20, comprising computer-executable instructions that, when executed, are configured to cause the processor to:
- display the second visualization;
- detect a second gesture in a third set of image data comprising the joint virtual industrial automation device and at least a portion of the user, wherein the second gesture is indicative of a request to separate the joint virtual industrial automation device;
- generate a fourth visualization comprising a second animation of the joint virtual industrial automation device separating into the first virtual industrial automation device and the second virtual industrial automation device; and
- display the fourth visualization via the electronic display.
- Embodiment 22: The computer-readable medium of embodiment 21, comprising computer-executable instructions that, when executed, are configured to cause the processor to:
- detect a third gesture in a fourth set of image data comprising the joint virtual industrial automation device, before detecting the second gesture, wherein the third gesture is indicative of a request to display one or more severance joints associated with the joint virtual industrial automation device;
- generate a fifth visualization after detecting the third gesture, wherein the fifth visualization comprises at least the one or more severance joints and the joint virtual industrial automation device; and
- detect a fourth gesture in a fifth set of image data comprising at least the one or more severance joints and the joint virtual industrial automation device, before detecting the second gesture and after detecting the third gesture, wherein the fourth gesture is indicative of a selection of one of the severance joints.
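Embodiments 21 and 22 describe enumerating the severance joints of a joint device and splitting it at the one the user selects. A minimal sketch, assuming a joint device is named by its parts joined with "+" (the naming scheme and both helpers are hypothetical):

```python
def severance_joints(joint_device):
    """Return the indices of the severance joints of a joint device
    (one joint between each pair of coupled parts)."""
    parts = joint_device.split("+")
    return list(range(len(parts) - 1))

def separate(joint_device, joint_index):
    """Split a joint device at the selected severance joint into two devices."""
    parts = joint_device.split("+")
    return "+".join(parts[:joint_index + 1]), "+".join(parts[joint_index + 1:])

# Selecting the first severance joint of a three-part assembly:
left, right = separate("conveyor+motor+drive", 0)  # ("conveyor", "motor+drive")
```

The fifth visualization of embodiment 22 would highlight each index returned by `severance_joints`; the fourth gesture then supplies the `joint_index` passed to `separate`.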
Claims (15)
- A system for interacting with virtual objects in an augmented reality environment, comprising:
a head mounted device, configured to:
- receive a first set of image data associated with a surrounding of a user;
- generate a first visualization comprising a plurality of virtual compartments, wherein each virtual compartment is associated with one type of virtual industrial automation device, wherein each virtual compartment comprises a plurality of virtual industrial automation devices, wherein each virtual industrial automation device is configured to depict a virtual object within the first set of image data, wherein the virtual object corresponds to a physical industrial automation device;
- display the first visualization via an electronic display;
- detect a gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the gesture is indicative of a selection of one of the plurality of virtual compartments;
- generate a second visualization comprising a respective plurality of virtual industrial automation devices associated with the selection; and
- display the second visualization via the electronic display.
- The system of claim 1, wherein the gesture comprises the user placing at least one hand on a virtual surface of the one of the plurality of virtual compartments.
- The system of claim 1 or 2, wherein the gesture comprises eye movement of the user, wherein the eye movement is associated with a virtual surface of the one of the plurality of virtual compartments.
- The system of one of claims 1 to 3, wherein the electronic display comprises a transparent display.
- The system of one of claims 1 to 4, wherein the head mounted device is configured to:
- detect a second gesture performed by the user in a third set of image data comprising the second visualization, wherein the second gesture is indicative of a selection of one of the respective plurality of virtual industrial automation devices; and
- generate a third visualization comprising an animation of the one of the respective plurality of virtual industrial automation devices moving towards a hand of the user; and/or
- wherein the second gesture comprises the user extending out at least a hand toward the one of the respective plurality of virtual industrial automation devices.
- The system of claim 5, wherein the animation comprises mapping the one of the respective plurality of virtual industrial automation devices to the hand; and/or
- wherein the head mounted device is configured to:
- track a movement of the hand; and
- adjust the animation to follow the movement of the hand.
- The system of claim 6, wherein the animation is configured to move based on a speed of the movement.
- The system of claim 6 or 7, wherein the gesture is a first gesture, and wherein the system comprises detecting a second gesture in a third set of image data comprising the surrounding of the user and the second visualization, wherein the second gesture is indicative of a request to un-map the one of the respective plurality of virtual industrial automation devices from the hand.
- The system of one of claims 5 to 8, wherein the head mounted device is configured to:
- detect one of the respective plurality of virtual industrial automation devices moving towards an additional virtual industrial automation device; and
- accelerate the one of the respective plurality of virtual industrial automation devices towards the additional virtual industrial automation device in response to the one of the respective plurality of virtual industrial automation devices being compatible with the additional virtual industrial automation device.
- The system of one of claims 5 to 9, wherein the head mounted device is configured to:
- detect one of the respective plurality of virtual industrial automation devices moving towards another virtual industrial automation device; and
- display an error message in response to the one of the respective plurality of virtual industrial automation devices being incompatible with the other virtual industrial automation device.
- A method, comprising:
- receiving, via a processor, a first set of image data associated with a surrounding of a user;
- generating, via the processor, a first visualization comprising a virtual industrial automation device, wherein the virtual industrial automation device is configured to depict a virtual object within the first set of image data, wherein the virtual object corresponds to a physical industrial automation device;
- displaying, via the processor, the first visualization via an electronic display;
- detecting, via the processor, a gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the gesture is indicative of a request to move the virtual industrial automation device;
- tracking, via the processor, a movement of the user;
- generating, via the processor, a second visualization comprising an animation of the virtual industrial automation device moving based on the movement; and
- displaying, via the processor, the second visualization via the electronic display.
- The method of claim 11, wherein the gesture comprises the user placing at least one hand on a virtual surface of the virtual industrial automation device, and wherein the movement is performed in a forward direction with respect to the user; and/or
- wherein the gesture comprises fingers of the at least one hand of the user extending upward; or
- wherein the gesture comprises the user placing at least one hand on a virtual surface of the virtual industrial automation device, and wherein the movement of the user is performed in a backward direction with respect to the user; or
- wherein the gesture comprises fingers of the at least one hand of the user curling over a virtual edge of the virtual industrial automation device.
- The method of claim 11 or 12, wherein the gesture comprises the user placing at least one hand on a virtual edge of the virtual industrial automation device, the movement of the user comprises a rotational speed and a rotational direction with respect to the at least one hand, and the animation comprises rotating the virtual industrial automation device at the rotational speed in the rotational direction with respect to the at least one hand; or
- wherein the gesture comprises the user placing at least one hand on a virtual underside surface of the virtual industrial automation device, and wherein the movement of the user comprises a lifting motion associated with the virtual industrial automation device.
- A computer-readable medium comprising computer-executable instructions that, when executed, are configured to cause a processor to:
- receive a first set of image data associated with a surrounding of a user;
- generate a first visualization comprising a first virtual industrial automation device and a second virtual industrial automation device, wherein the first and second virtual industrial automation devices are configured to depict first and second respective virtual objects within the first set of image data, wherein the first and second respective virtual objects correspond to a first and a second physical industrial automation device;
- display the first visualization via an electronic display;
- detect a first gesture in a second set of image data comprising the surrounding of the user and the first visualization, wherein the first gesture is indicative of a movement of the first virtual industrial automation device toward the second virtual industrial automation device;
- determine a compatibility between the first virtual industrial automation device and the second virtual industrial automation device;
- generate a second visualization comprising an animation of the first virtual industrial automation device coupling to the second virtual industrial automation device to create a joint virtual industrial automation device in response to determining that the first virtual industrial automation device and the second virtual industrial automation device are compatible;
- generate a third visualization comprising an error notification in response to determining that the first virtual industrial automation device and the second virtual industrial automation device are incompatible; and
- display the second visualization or the third visualization via the electronic display.
- The computer-readable medium of claim 14, comprising computer-executable instructions that, when executed, are configured to cause the processor to:
- display the second visualization;
- detect a second gesture in a third set of image data comprising the joint virtual industrial automation device and at least a portion of the user, wherein the second gesture is indicative of a request to separate the joint virtual industrial automation device;
- generate a fourth visualization comprising a second animation of the joint virtual industrial automation device separating into the first virtual industrial automation device and the second virtual industrial automation device; and
- display the fourth visualization via the electronic display; or
- comprising computer-executable instructions that, when executed, are configured to cause the processor to:
- detect a third gesture in a fourth set of image data comprising the joint virtual industrial automation device, before detecting the second gesture, wherein the third gesture is indicative of a request to display one or more severance joints associated with the joint virtual industrial automation device;
- generate a fifth visualization after detecting the third gesture, wherein the fifth visualization comprises at least the one or more severance joints and the joint virtual industrial automation device; and
- detect a fourth gesture in a fifth set of image data comprising at least the one or more severance joints and the joint virtual industrial automation device, before detecting the second gesture and after detecting the third gesture, wherein the fourth gesture is indicative of a selection of one of the severance joints.
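Claim 9 describes accelerating a held device toward a compatible neighbor, which reads like the familiar "magnetic snap" used in scene editors. A hedged sketch of one possible per-frame update; the function name, boost factor, and time step are all assumptions for illustration:

```python
def step_toward(pos, target, base_speed, compatible, dt=0.1, boost=3.0):
    """Advance pos one animation step toward target. A compatible pair is
    accelerated (snap effect); an incompatible one keeps the base speed."""
    speed = base_speed * (boost if compatible else 1.0)
    delta = [t - p for p, t in zip(pos, target)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist <= speed * dt:  # close enough to finish the move this step
        return tuple(target)
    return tuple(p + d / dist * speed * dt for p, d in zip(pos, delta))
```

Run once per frame, the compatible case closes the gap three times faster and lands exactly on the target, which is the visual cue that the two devices will couple.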
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/143,087 US10942577B2 (en) | 2018-09-26 | 2018-09-26 | Augmented reality interaction techniques |
Publications (4)
Publication Number | Publication Date |
---|---|
EP3629134A1 true EP3629134A1 (en) | 2020-04-01 |
EP3629134C0 EP3629134C0 (en) | 2023-06-21 |
EP3629134B1 EP3629134B1 (en) | 2023-06-21 |
EP3629134B8 EP3629134B8 (en) | 2023-07-26 |
Family
ID=67998296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19198602.5A Active EP3629134B8 (en) | 2018-09-26 | 2019-09-20 | Augmented reality interaction techniques |
Country Status (2)
Country | Link |
---|---|
US (3) | US10942577B2 (en) |
EP (1) | EP3629134B8 (en) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11266919B2 (en) * | 2012-06-29 | 2022-03-08 | Monkeymedia, Inc. | Head-mounted display for navigating virtual and augmented reality |
US10867061B2 (en) | 2018-09-28 | 2020-12-15 | Todd R. Collart | System for authorizing rendering of objects in three-dimensional spaces |
CA3031479A1 (en) * | 2019-01-25 | 2020-07-25 | Jonathan Gagne | Computer animation methods and systems |
KR102605342B1 (en) * | 2019-08-06 | 2023-11-22 | 엘지전자 주식회사 | Method and apparatus for providing information based on object recognition, and mapping apparatus therefor |
US11417228B2 (en) * | 2019-09-18 | 2022-08-16 | International Business Machines Corporation | Modification of extended reality environments based on learning characteristics |
US11170576B2 (en) | 2019-09-20 | 2021-11-09 | Facebook Technologies, Llc | Progressive display of virtual objects |
US11086406B1 (en) | 2019-09-20 | 2021-08-10 | Facebook Technologies, Llc | Three-state gesture virtual controls |
US11176745B2 (en) | 2019-09-20 | 2021-11-16 | Facebook Technologies, Llc | Projection casting in virtual environments |
US10991163B2 (en) | 2019-09-20 | 2021-04-27 | Facebook Technologies, Llc | Projection casting in virtual environments |
US11189099B2 (en) | 2019-09-20 | 2021-11-30 | Facebook Technologies, Llc | Global and local mode virtual object interactions |
US10802600B1 (en) * | 2019-09-20 | 2020-10-13 | Facebook Technologies, Llc | Virtual interactions at a distance |
US11086476B2 (en) * | 2019-10-23 | 2021-08-10 | Facebook Technologies, Llc | 3D interactions with web content |
US11175730B2 (en) | 2019-12-06 | 2021-11-16 | Facebook Technologies, Llc | Posture-based virtual space configurations |
US11475639B2 (en) | 2020-01-03 | 2022-10-18 | Meta Platforms Technologies, Llc | Self presence in artificial reality |
US11257280B1 (en) | 2020-05-28 | 2022-02-22 | Facebook Technologies, Llc | Element-based switching of ray casting rules |
US11256336B2 (en) | 2020-06-29 | 2022-02-22 | Facebook Technologies, Llc | Integration of artificial reality interaction modes |
CN112000224A (en) * | 2020-08-24 | 2020-11-27 | 北京华捷艾米科技有限公司 | Gesture interaction method and system |
US11176755B1 (en) | 2020-08-31 | 2021-11-16 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11227445B1 (en) | 2020-08-31 | 2022-01-18 | Facebook Technologies, Llc | Artificial reality augments and surfaces |
US11178376B1 (en) | 2020-09-04 | 2021-11-16 | Facebook Technologies, Llc | Metering for display modes in artificial reality |
US11113893B1 (en) | 2020-11-17 | 2021-09-07 | Facebook Technologies, Llc | Artificial reality environment with glints displayed by an extra reality device |
TWI768590B (en) * | 2020-12-10 | 2022-06-21 | 國立臺灣科技大學 | Method and head-mounted apparatus for reducing vr motion sickness |
US11409405B1 (en) | 2020-12-22 | 2022-08-09 | Facebook Technologies, Llc | Augment orchestration in an artificial reality environment |
US11461973B2 (en) | 2020-12-22 | 2022-10-04 | Meta Platforms Technologies, Llc | Virtual reality locomotion via hand gesture |
US11294475B1 (en) | 2021-02-08 | 2022-04-05 | Facebook Technologies, Llc | Artificial reality multi-modal input switching model |
US20220309753A1 (en) * | 2021-03-25 | 2022-09-29 | B/E Aerospace, Inc. | Virtual reality to assign operation sequencing on an assembly line |
US11762952B2 (en) | 2021-06-28 | 2023-09-19 | Meta Platforms Technologies, Llc | Artificial reality application lifecycle |
US11295503B1 (en) | 2021-06-28 | 2022-04-05 | Facebook Technologies, Llc | Interactive avatars in artificial reality |
US11748944B2 (en) | 2021-10-27 | 2023-09-05 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11798247B2 (en) | 2021-10-27 | 2023-10-24 | Meta Platforms Technologies, Llc | Virtual object structures and interrelationships |
US11662822B1 (en) * | 2021-12-20 | 2023-05-30 | Huawei Technologies Co., Ltd. | Systems and methods for generating pseudo haptic feedback |
US20240126373A1 (en) * | 2022-10-12 | 2024-04-18 | Attila ALVAREZ | Tractable body-based ar system input |
US11947862B1 (en) | 2022-12-30 | 2024-04-02 | Meta Platforms Technologies, Llc | Streaming native application content to artificial reality devices |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140160001A1 (en) * | 2012-12-06 | 2014-06-12 | Peter Tobias Kinnebrew | Mixed reality presentation |
US20150187357A1 (en) * | 2013-12-30 | 2015-07-02 | Samsung Electronics Co., Ltd. | Natural input based virtual ui system for mobile devices |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8576253B2 (en) * | 2010-04-27 | 2013-11-05 | Microsoft Corporation | Grasp simulation of a virtual object |
CN105378593B (en) * | 2012-07-13 | 2019-03-01 | 索尼深度传感解决方案股份有限公司 | The method and system of man-machine synchronous interaction is carried out based on gesture using unusual point of interest on hand |
JP6112815B2 (en) * | 2012-09-27 | 2017-04-12 | 京セラ株式会社 | Display device, control system, and control program |
US9552673B2 (en) * | 2012-10-17 | 2017-01-24 | Microsoft Technology Licensing, Llc | Grasping virtual objects in augmented reality |
US10203762B2 (en) * | 2014-03-11 | 2019-02-12 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
US10429923B1 (en) * | 2015-02-13 | 2019-10-01 | Ultrahaptics IP Two Limited | Interaction engine for creating a realistic experience in virtual reality/augmented reality environments |
KR101639066B1 (en) * | 2015-07-14 | 2016-07-13 | 한국과학기술연구원 | Method and system for controlling virtual model formed in virtual space |
US9947140B2 (en) * | 2015-09-15 | 2018-04-17 | Sartorius Stedim Biotech Gmbh | Connection method, visualization system and computer program product |
US10176641B2 (en) * | 2016-03-21 | 2019-01-08 | Microsoft Technology Licensing, Llc | Displaying three-dimensional virtual objects based on field of view |
US10735691B2 (en) * | 2016-11-08 | 2020-08-04 | Rockwell Automation Technologies, Inc. | Virtual reality and augmented reality for industrial automation |
EP3324270A1 (en) * | 2016-11-16 | 2018-05-23 | Thomson Licensing | Selection of an object in an augmented reality environment |
US20190073827A1 (en) * | 2017-09-06 | 2019-03-07 | Josen Premium LLC | Method and System for Converting 3-D Scan Displays with Optional Telemetrics, Temporal and Component Data into an Augmented or Virtual Reality BIM |
KR101961221B1 (en) * | 2017-09-18 | 2019-03-25 | 한국과학기술연구원 | Method and system for controlling virtual model formed in virtual space |
- 2018
  - 2018-09-26 US US16/143,087 patent/US10942577B2/en active Active
- 2019
  - 2019-09-20 EP EP19198602.5A patent/EP3629134B8/en active Active
- 2021
  - 2021-02-24 US US17/184,254 patent/US11507195B2/en active Active
- 2022
  - 2022-11-21 US US17/991,586 patent/US20230091359A1/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140160001A1 (en) * | 2012-12-06 | 2014-06-12 | Peter Tobias Kinnebrew | Mixed reality presentation |
US20150187357A1 (en) * | 2013-12-30 | 2015-07-02 | Samsung Electronics Co., Ltd. | Natural input based virtual ui system for mobile devices |
Also Published As
Publication number | Publication date |
---|---|
US20230091359A1 (en) | 2023-03-23 |
US20200097077A1 (en) | 2020-03-26 |
US11507195B2 (en) | 2022-11-22 |
EP3629134B8 (en) | 2023-07-26 |
EP3629134C0 (en) | 2023-06-21 |
US10942577B2 (en) | 2021-03-09 |
US20210181856A1 (en) | 2021-06-17 |
EP3629134B1 (en) | 2023-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11507195B2 (en) | Augmented reality interaction techniques | |
CN112771476B (en) | Method and system for providing tele-robotic control | |
JP7092445B2 (en) | Methods and systems that provide remote robotic control | |
CN112805673B (en) | Method and system for providing tele-robotic control | |
EP2942693B1 (en) | Systems and methods for viewport-based augmented reality haptic effects | |
JP2021524629A (en) | Transformer mode input fusion for wearable systems | |
CN110476142A (en) | Virtual objects user interface is shown | |
CN102253713B (en) | Towards 3 D stereoscopic image display system | |
Beattie et al. | Taking the LEAP with the Oculus HMD and CAD-Plucking at thin Air? | |
CN103793060A (en) | User interaction system and method | |
US20190240573A1 (en) | Method for controlling characters in virtual space | |
US10964104B2 (en) | Remote monitoring and assistance techniques with volumetric three-dimensional imaging | |
US11048375B2 (en) | Multimodal 3D object interaction system | |
Hernoux et al. | A seamless solution for 3D real-time interaction: design and evaluation | |
JP7167518B2 (en) | Controllers, head-mounted displays and robotic systems | |
CN113661521A (en) | Computer animation method and system | |
CN112424736A (en) | Machine interaction | |
US11618164B2 (en) | Robot and method of controlling same | |
Alcañiz et al. | Technological background of VR | |
KR20210007774A (en) | How to display on a robot simulator panel | |
De Felice et al. | Hapto-acoustic interaction metaphors in 3d virtual environments for non-visual settings | |
KR102612430B1 (en) | System for deep learning-based user hand gesture recognition using transfer learning and providing virtual reality contents | |
KR102167066B1 (en) | System for providing special effect based on motion recognition and method thereof | |
Omarali | Exploring Robot Teleoperation in Virtual Reality | |
Piumsomboon | Natural hand interaction for augmented reality. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17P | Request for examination filed |
Effective date: 20200917 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20201027 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20230110 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTC | Intention to grant announced (deleted) | ||
INTG | Intention to grant announced |
Effective date: 20230314 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PK Free format text: BERICHTIGUNG B8 Ref country code: CH Ref legal event code: EP |
|
RAP4 | Party data changed (patent owner data changed or rights of a patent transferred) |
Owner name: ROCKWELL AUTOMATION TECHNOLOGIES, INC. |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1581361 Country of ref document: AT Kind code of ref document: T Effective date: 20230715 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602019031272 Country of ref document: DE |
|
U01 | Request for unitary effect filed |
Effective date: 20230626 |
|
U07 | Unitary effect registered |
Designated state(s): AT BE BG DE DK EE FI FR IT LT LU LV MT NL PT SE SI Effective date: 20230630 |
|
U20 | Renewal fee paid [unitary effect] |
Year of fee payment: 5 Effective date: 20230822 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230921 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20230823 Year of fee payment: 5 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231021 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20231021 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20230621 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |