CN114625241A - Augmented reality augmented context awareness

Augmented reality augmented context awareness

Info

Publication number
CN114625241A
Authority
CN
China
Prior art keywords
user
area
program instructions
computer
program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111434876.3A
Other languages
Chinese (zh)
Inventor
R.P.纳加
S.K.拉克什
M.S.索迪
R.吉恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Publication of CN114625241A
Current legal status: Pending

Classifications

    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 16/438: Presentation of query results
    • G06F 16/487: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location
    • G06F 16/489: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using time information
    • G06T 19/006: Mixed reality

Abstract

A method for enhancing user context awareness of issues within an area. The method includes one or more computers receiving visual information corresponding to the area from a device associated with a user. The method also includes receiving data from a set of one or more sensors within the area, wherein the area includes a plurality of physical elements. The method also includes determining, based on analyzing the data received from the set of sensors, that a first problem exists within the area and identifying a first physical element in the area corresponding to the first problem. The method also includes generating Augmented Reality (AR) content related to the first problem existing within the area. The method also includes displaying, via the device associated with the user, the generated problem-related AR content within the visual information corresponding to the area.

Description

Augmented reality augmented context awareness
Technical Field
The present invention relates generally to the field of Augmented Reality (AR), and more particularly to generating AR content based on information obtained from Internet of Things (IoT) sensors.
Background
The internet of things (IoT) is defined as the ability of various physical devices and everyday objects to connect to each other over the internet. Embedded with electronics, internet connections, and other forms of hardware (such as sensors), IoT devices may communicate and interact with other devices over the internet, wireless networks, and other inter-device communication methods so that IoT devices may provide information and be remotely monitored/controlled. The IoT devices may include person-to-device communications. For example, a user utilizes an application on a mobile device to contact an IoT device to identify services and/or navigation within a building or venue. Further, some IoT devices in one area (e.g., edge devices) may obtain data from sensors and perform edge computing analysis and interface with other IoT devices in the area.
Augmented Reality (AR) is a view of a physical real-world environment with elements augmented (overlaid) by computer-generated sensory input, such as graphical information, tactile events, auditory, and/or other sensory effects. Typically, the enhancement occurs in near real-time and in a semantic context with various environmental elements. The AR overlay may integrate virtual information (e.g., shapes, colors, text, links to information, computer-generated graphics, etc.) within and/or with images or video streams associated with features within the physical world. Various electronic (e.g., computing) devices may include AR capabilities and/or receive AR content information, such as smart phones, smart glasses, heads-up displays, tablet computers, and so on.
Disclosure of Invention
According to aspects of the present invention, a method, computer program product, and/or system are provided for enhancing user context awareness of problems in an area. The method includes at least one computer processor receiving visual information corresponding to an area from a device associated with a user. The method also includes at least one computer processor receiving data from a set of one or more sensors within the area, wherein the area includes a plurality of physical elements. The method also includes at least one computer processor determining, based on analyzing the data received from the set of sensors, that a first problem exists within the area, and determining a first physical element in the area corresponding to the first problem. The method also includes generating, by at least one computer processor, Augmented Reality (AR) content related to the first problem existing within the area. The method also includes displaying, by the at least one computer processor, the generated AR content related to the problem within the visual information corresponding to the area via the user's device.
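For illustration only (this is not the claimed implementation), the following minimal Python sketch walks through the flow summarized above: sensor readings for the area are checked against assumed limits, AR content is generated for a detected problem, and the content is "displayed" within visual information from the user's device. All class, function, and threshold names (SensorReading, detect_problem, etc.) are hypothetical.

    # Minimal sketch of the claimed method flow (hypothetical names; not the
    # patented implementation). Sensor data for an area is analyzed to detect a
    # problem, AR content is generated for it, and the content is "displayed"
    # within the visual information received from the user's device.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        element_id: str      # physical element the sensor is attached to
        metric: str          # e.g., "temperature_C"
        value: float
        limit: float         # assumed out-of-specification threshold

    def detect_problem(readings):
        """Return (element_id, metric) of the first out-of-spec reading, if any."""
        for r in readings:
            if r.value > r.limit:
                return r.element_id, r.metric
        return None

    def generate_ar_content(problem):
        element_id, metric = problem
        return {"anchor": element_id, "label": f"Problem: {metric} out of range"}

    def display(visual_frame, ar_content):
        # Stand-in for compositing an AR overlay into the device's video feed.
        print(f"frame={visual_frame!r} overlay={ar_content}")

    if __name__ == "__main__":
        visual_frame = "camera_frame_0001"          # visual information from user device
        readings = [SensorReading("pump_07", "temperature_C", 92.0, 80.0)]
        problem = detect_problem(readings)
        if problem:
            display(visual_frame, generate_ar_content(problem))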
Drawings
FIG. 1 illustrates a networked site environment, according to an embodiment of the invention.
FIG. 2 depicts a flowchart of the steps of a context aware program according to an embodiment of the present invention.
FIG. 3 depicts a flowchart of the steps of a time visualization program according to an embodiment of the invention.
FIG. 4 is a block diagram of components of a computer according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention recognize that various problems that exist (i.e., occur) within an area may begin as minor problems, and that individuals may often ignore or delay solving them. Embodiments of the present invention recognize that ignored or unresolved problems may gradually worsen and generate other events and/or dangers that may ultimately result in a catastrophic situation or event if not corrected. Similarly, embodiments of the present invention recognize that if one individual applies only a temporary fix to a problem within an area, a different individual who enters the area on a later occasion may not know that the problem has not been completely corrected and may unknowingly expose themselves to the hazards associated with the problem or perform actions that exacerbate the problem.
Embodiments of the present invention recognize that in some cases, a problem within an area is self-evident in that the problem generates one or more sensory components (e.g., visual, vibratory, olfactory, and/or auditory elements). In other cases, individuals entering the area cannot know of the problem because the problem is hidden within an object, housing, or piece of equipment and is therefore unlikely to be detected via the various sensory components. Embodiments of the present invention recognize that problems are easier to identify by automatically obtaining sensor data and providing the sensor data, or an analysis of the sensor data, to an individual than by relying on the individual's observational skills and abilities.
Embodiments of the present invention increase the probability of a user (e.g., an individual) noticing a problem; inform the user of the potential hazards associated with the problem; record how the problem was corrected; or determine that the problem was fixed or ignored. Embodiments of the present invention utilize data obtained from sensors within an area and/or included within various elements or equipment within the area to automatically detect and identify problems within the area. Embodiments of the present invention utilize analytics and contextual information obtained from various sensors and/or IoT-enabled devices within and/or associated with elements of an area to determine whether a problem exists and/or predict that a problem may occur at some point in the future due to conditions within the area. Conditions may include environmental factors that increase wear, corrosion, or stress on equipment; construction progress; increased traffic, such as people, vehicles, and/or materials; and the like.
One aspect of the present invention leverages the Augmented Reality (AR) capabilities of a user's device, such as an AR headset, smart glasses, a mobile phone, and the like, to draw the attention or focus of the user (i.e., an individual) to a particular location within an area where a problem exists, where a risk exists, or where a problem is predicted to occur in the future. Embodiments of the present invention determine the type of problem and then determine an image that represents the problem and/or the hazards associated with the problem. Embodiments of the present invention utilize AR and computer-generated graphics to enhance and/or magnify the image associated with the problem and embed the enhanced image within the user's field of view. The image associated with the problem moves within the field of view of the user's device until the user/device faces the location of the problem. If embodiments of the present invention detect that the user's attention is not directed to the location of the problem, embodiments of the present invention further modify the AR content to make the problem and its location more apparent and/or initiate other actions via the user's device.
Another aspect of the present invention utilizes time information entered by a user, information related to a problem, and actions performed or selected by the user to effect mitigation or temporary correction of the problem as factors utilized by a suite of analysis programs. Embodiments of the present invention use the output and predictions of the analysis suite to instruct an automated computer graphics program to generate images, Virtual Reality (VR) renderings, and/or animation sequences related to the forecast, and/or a time-lapse sequence of projected events depicting future states of the problem and/or the area where the problem is located.
The description of various scenarios, instances, and examples related to the present invention has been presented for purposes of illustration but is not intended to be exhaustive or limited to the disclosed embodiments.
The present invention will now be described in detail with reference to the accompanying drawings. FIG. 1 is a functional block diagram illustrating an environment 100 according to an embodiment of the present invention. In one embodiment, environment 100 includes a system 110, sensors 125, and user devices 130, all interconnected by a network 140. In an embodiment, environment 100 includes one or more instances of area 120 monitored by respective instances of sensors 125.
The system 110 and the user device 130 may be a laptop computer, a tablet computer, a personal computer, a desktop computer, or any programmable computer system known in the art. In certain embodiments, system 110 and user device 130 represent computer systems that utilize clustered computers and components (e.g., database server computers, application server computers, etc.) that, when accessed over network 140, act as a single, seamless pool of resources, as is common in data centers and with cloud computing applications. In some embodiments, the user device 130 may be a Personal Digital Assistant (PDA), a smartphone, a wearable device (e.g., smart glasses, smart watches, electronic textiles, AR headsets, etc.). In general, the system 110 and the user device 130 represent any programmable electronic device or combination of programmable electronic devices capable of executing machine-readable program instructions and communicating with the sensors 125 via the network 140. According to embodiments of the invention, the system 110 and the user device 130 may include components as depicted and described in further detail with respect to fig. 4.
The system 110 includes historical information 112, an analysis suite 113, issue resolution information 114, a corpus of media content 115, a computer graphics suite 116, AR content 117, a time visualization program 300, and a number of other programs and data (not shown). Examples of other programs and data included in system 110 may include one or more databases; web browsers; cognitive programs such as Natural Language Processing (NLP) programs, image recognition programs, semantic query programs, video analysis programs, audio recognition programs, and the like; a location mapping/geofencing program; a proximity threshold; a haptic event generation program; a map of an instance of area 120; a list of repair supplies and tools; functions and/or operations performed within the instances of area 120; and so on.
The historical information 112 includes a plurality of information respectively associated with instances of the area 120, such as a log of sensor data and analysis related to elements within the area 120, a status log of problems associated with the area 120 (e.g., solved, unresolved, delayed, fixed or partially fixed, etc.), severity descriptions and/or ratings related to previous problems, hazards associated with problems, warning messages, and so forth. In one embodiment, the historical information 112 also includes a list of equipment and equipment locations, facility schematics (e.g., electrical, piping, ventilation, etc.), sensor locations, etc., associated with the instance of the area 120. In some embodiments, the historical information 112 also includes operating values and/or settings associated with the elements of the area 120, such as amperage, temperature, and noise levels associated with various operating conditions. In other embodiments, the historical information 112 may also include reference information related to a plurality of other issues, events, and corresponding hazards obtained from other network accessible resources, such as company maintenance and security databases or regulatory agencies.
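A rough sketch of what one record in such a status log could look like follows; the field names, severity scale, and example values are assumptions for illustration, not taken from the patent.

    # Illustrative (assumed) record layout for a problem status log such as the
    # one described for historical information 112; field names are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProblemRecord:
        area_id: str
        element_id: str
        description: str
        severity: int                 # e.g., 1 (minor) .. 5 (critical)
        status: str                   # "solved", "unresolved", "delayed", "partially fixed"
        hazards: list = field(default_factory=list)
        logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Example entry that a later visitor to the area could be shown.
    record = ProblemRecord(
        area_id="area_120",
        element_id="distribution_panel_3",
        description="Intermittent arcing behind panel cover",
        severity=4,
        status="partially fixed",
        hazards=["electric shock"],
    )
    print(record)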
The analysis suite 113 includes a plurality of analysis programs that utilize data from the sensors 125, historical information 112, and/or issue resolution information 114. In some scenarios, the analysis suite 113 determines that one or more problems exist within the area 120 and whether there may be associated hazards. In other scenarios, the analysis suite 113 determines a future state associated with a problem within the area 120 based on various factors. For example, the future state associated with the problem may include one or more hazards generated or released by the problem, triggering another problem to occur, elements of the area 120 that will be affected by the problem, changes in the size of the area within the area 120 affected by the problem or hazard, the cost of correcting the problem, the skill and/or Personal Protective Equipment (PPE) required to correct the problem, an adjusted schedule indicated to compensate for the problem, and so forth. The various factors may include a time frame, one or more actions of the user, a lack of user action, whether another problem exists within the area 120 at the same time, and whether one problem associated with the area 120 affects (e.g., interacts with, aggravates) another problem existing within the area 120.
In some embodiments, the analysis suite 113 also estimates a severity rating of the current problem, and may infer a change in the severity rating of the problem based on time and/or one or more actions of the user. Additionally, the analysis suite 113 may determine hazards and/or hazard interaction-related changes associated with issues within the area 120 based on time and/or one or more actions of the user. In various embodiments, the analysis suite 113 also utilizes information obtained from other programs included within the system 110 and/or the user device 130, such as image recognition programs, audio analysis programs, and the like. In some embodiments, the analysis suite 113 may utilize data within the historical information 112 and data from the sensors 125 to predict a probability of a future occurrence of a problem within the area 120.
The issue resolution information 114 includes information related to correcting the problems or potential problems identified within the instances of the area 120. The issue resolution information 114 may include decision trees; soft-copy manuals; historical problems and the corresponding skills, actions, supplies, PPE, and/or equipment used in previous instances to correct the problems; acceptable problem-resolution delay values; and so on. In various embodiments, the issue resolution information 114 includes root cause information and corresponding corrective actions for correcting, resolving, or repairing a problem. In some embodiments, the issue resolution information 114 may also represent resources related to a plurality of problems, events and corresponding corrective actions, hazardous interactions, and/or results obtained from other resources accessible via the network 140, such as company health and safety databases, virtual engineers, links to soft-copy manuals, and safety and hazard information available from regulatory agencies associated with health and safety.
The corpus 115 of media content represents a library, database, and/or collection of media files (e.g., content) related to problems and/or hazards, such as graphical representations, images, videos, animated images, and the like. The corpus 115 of media content may also include audio files of various problems or hazards, such as metal scratches, fire pops, arcing, water flow, structural material failures, and the like. In some embodiments, the corpus 115 of media content may also represent media files identified by aspects of the present invention for public use or licensing and obtained from other sources accessible via a network 140, such as the internet. In other embodiments, the corpus 115 of media content includes content produced by the computer graphics suite 116, such as generated animation sequences or extracted content.
The computer graphics suite 116 represents a suite of automated programs that edit, extract, and/or generate visual and/or audio from elements within the corpus 115 of media content and/or other network accessible sources to generate and/or modify AR and/or VR content. The AR and/or VR content may be stored within AR content 117 for future use. In an embodiment, the computer graphics suite 116 generates AR content based on information obtained from at least the context aware program 200. In some embodiments, the computer graphics suite 116 modifies AR and/or VR content based on instructions from the context aware program 200.
In other embodiments, the computer graphics suite 116 utilizes user-entered temporal information, information generated by the analysis suite 113, and information and/or instructions from the time visualization program 300 to create time-based images, animated sequences, and/or audio events related to the progression of the problem or depicting potential future states of the problem over time.
In one embodiment, the AR content 117 is a library of media files associated with one or more questions within the area 120 obtained from the corpus 115 of media content and the context aware program 200. In another embodiment, AR content 117 also includes media files associated with one or more questions within region 120 obtained by time visualization program 300 from corpus 115 of media content. In other embodiments, the AR content 117 also includes AR and/or VR content that is modified and/or generated by the computer graphics suite 116 in response to instructions from the time visualization program 300.
The time visualization program 300 is a program that generates AR and/or VR content (e.g., media files) related to one or more future states of a problem and/or of the area 120 based on information related to the problem, information associated with the user's action or lack of action to correct the problem, and one or more time indications input by the user. In one example, the time visualization program 300 enables a user to obtain a visual forecast (i.e., prediction) of the status of a problem at a fixed point in the future, or to view the changing status of a problem over time (e.g., fast forward, time increase, etc.), such as food spoilage in response to a refrigeration unit failure. In another example, if the user chooses to perform an incomplete fix, such as a partial or provisional fix, modifying parameters and/or settings, etc., the time visualization program 300 may interface with the computer graphics suite 116 to generate AR and/or VR content associated with a future state of the identified problem.
In various embodiments, the time visualization program 300 utilizes information obtained from the analysis suite 113 and input from the user to instruct the computer graphics suite 116 to generate temporally manipulated AR and/or VR content for display to the user. In other embodiments, time visualization program 300 may determine the interaction and generate AR/VR content in response to identifying that two or more questions exist within region 120 at the same time (e.g., during the same time interval). For example, if a problem is at least partially not repaired, time visualization program 300 determines additional problems and/or hazards caused by interactions between the problems, such as attempting to repair a live electrical problem when there is standing water nearby; thereby increasing the risk and/or severity (e.g., exacerbating) of shock hazards.
Area 120 may represent a physically bounded area, such as a room; a geo-fenced area within a larger area, such as an island of a room or a warehouse of a venue; and/or a dynamically defined area proximate to (e.g., surrounding) the user relative to the location of the user device 130. Area 120 may include a plurality of elements (e.g., physical features) (not shown), such as devices; processing tools; computers; utility infrastructure such as heating, cooling and ventilation systems, piping, power distribution systems, and communication networks; one or more security systems; physical infrastructure such as drip pans and sumps; transportation facilities; etc. Some elements of the area 120 include IoT-enabled devices (not shown). In some embodiments, the area 120 also includes in-transit elements, such as perishable goods or items being manufactured. In some cases, one or more problems (not shown) exist within the area 120 while a user is within the area 120. A problem may refer to an out-of-specification operating condition, or to effects and/or defects associated with one or more elements of area 120, such as damage, wear, structural fatigue, corrosion, embrittlement, deformation of a structure, biological breakdown, leakage, arcing, and the like. Problems may also create hazards, such as electric shock or slippery surfaces.
The sensors 125 represent a plurality of sensors and/or sensors operatively coupled to internet of things (IoT) enabled devices that determine information related to the area 120 and/or information included within various elements (previously discussed above) associated with the area 120. The sensors 125 may include thermal sensors, noise sensors, chemical sensors, artificial noses, various electrical sensors (e.g., voltage sensors, current sensors, thermistors, harmonic distortion sensors, etc.), humidity sensors, environmental sensors (e.g., temperature, humidity, airflow, etc.), and the like. In some embodiments, one or more of the sensors 125 may also send information other than sensor measurements, such as operating parameters; a beacon signal; identification information; contextual information associated with the elements of the area 120 that include the sensor, such as an equipment ID or a subassembly ID; and the like. In various embodiments, one or more of the sensors 125 may include components as depicted and described in further detail with respect to fig. 4, in accordance with embodiments of the present invention.
In an embodiment, some of the sensors 125 associated with the area 120 communicate with the system 110 and the user device 130 using the network 140. In other embodiments, one or more of the sensors 125 may analyze and selectively transmit data based on determining an anomaly or out-of-specification condition. In another embodiment, one or more other sensors of the sensors 125 associated with the area 120 and included within the IoT-enabled device (not shown) may wirelessly communicate with the user device 130 without utilizing the network 140. In some scenarios, the user device 130 may transmit the raw data and/or the analyzed data from one or more of the sensors 125 to the system 110 via the network 140.
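The sketch below illustrates, under assumed field names and thresholds, the kind of payload an IoT-enabled sensor might report (including the contextual information mentioned above, such as an equipment ID) and the selective-transmission behavior just described, where data is sent only when a reading is out of specification.

    # Sketch of the kind of payload an IoT-enabled sensor might report, and of the
    # "selective transmission" behavior described above (send only when a reading
    # is anomalous). Field names and thresholds are assumptions for illustration.

    import json

    OPERATING_LIMITS = {"temperature_C": 80.0, "current_A": 15.0}  # assumed specs

    def build_payload(equipment_id, subassembly_id, measurements):
        return {
            "equipment_id": equipment_id,        # contextual information
            "subassembly_id": subassembly_id,
            "measurements": measurements,        # raw sensor measurements
        }

    def should_transmit(payload):
        """Edge-side check: transmit only if any measurement is out of spec."""
        return any(
            value > OPERATING_LIMITS.get(metric, float("inf"))
            for metric, value in payload["measurements"].items()
        )

    payload = build_payload("X2B", "subassembly_325", {"temperature_C": 91.5, "current_A": 12.0})
    if should_transmit(payload):
        print(json.dumps(payload))   # stand-in for sending to system 110 or user device 130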
Additionally, a user (e.g., owner, administrator, etc.) who controls and/or is responsible for an instance of the area 120 has opted in and authorized the sensors 125 associated with the instance of the area 120 to collect data associated with the instance of the area 120. Further, according to various embodiments of the present invention, a user (e.g., owner, administrator, etc.) controlling and/or responsible for an instance of area 120 has opted in to allowing context aware program 200 and/or time visualization program 300 to process data received from sensors 125 and store the received data within historical information 112 and/or other locations.
In an embodiment, user device 130 includes a User Interface (UI) 132, an output device 134, an Augmented Reality (AR) program 135, a context aware program 200, and a plurality of other programs and data (not shown). Examples of other programs and data may include Global Positioning System (GPS) software, web browsers, camera/video applications, audio analysis programs, image recognition software, cognitive applications, maps of one or more instances of area 120, local copies of at least a portion of historical information 112, data obtained from sensors 125, and so forth. In other embodiments, user device 130 represents a remote monitoring system or robotic monitoring system included within area 120 that may traverse area 120 instead of the user entering area 120, such as in response to detecting a hazard that may affect the user.
In various embodiments, user device 130 also includes and/or is operatively coupled to a plurality of other hardware features (not shown) utilized in association with AR program 135 and/or context aware program 200, such as one or more cameras; a speaker; a headset; a haptic actuator; wireless communication technologies and protocols that interface with one or more of the sensors 125, such as LTE-M, narrowband IoT (NB-IoT), Near Field Communication (NFC), etc.; a compass and/or inertial monitoring system for sensing a position, orientation and/or one or more physical actions of a user; and/or different instances of output device 134, such as an AR headset; a pair of smart glasses; a head-up display.
Various embodiments of the invention may utilize various accessible data sources, such as historical information 112 and issue resolution information 114, which may include storage devices and content associated with users. In an example embodiment, instances of context aware program 200 and/or time visualization program 300 allow a user to opt in to or opt out of exposing types and categories of information. Instances of context aware program 200 and/or time visualization program 300 enable authorization and secure processing of user information, such as location information, as well as the types and categories of information that may have been obtained, maintained, and/or accessed. In another example, the user opts in to allowing the context aware program 200 to record decision or state information, but anonymizes the ID of the user who recorded the decision, updated the state, or performed one or more actions. The user may be provided with a notification of the collection of the types and categories of information and an opportunity to opt in to or opt out of the collection process. Permission may take several forms. Opt-in permission may require the user to take an affirmative action before data is collected. Alternatively, opt-out permission may require the user to take an affirmative action to prevent data collection before the data is collected.
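A minimal sketch of such opt-in/opt-out bookkeeping follows, assuming hypothetical consent flags and an anonymized user token; it is not the patent's mechanism, only an illustration of the behavior described above.

    # Simple sketch of opt-in/opt-out bookkeeping of the kind described above:
    # data is collected for a category only when the user has affirmatively opted
    # in, and identifying fields can be anonymized. Names are hypothetical.

    import hashlib

    consent = {
        "device_tracking": True,     # user opted in to tracking user device 130
        "identify_user": False,      # user opted out of being identified
    }

    def record_decision(user_id, decision):
        if not consent.get("device_tracking"):
            return None                              # nothing is collected at all
        entry = {"decision": decision}
        if consent.get("identify_user"):
            entry["user"] = user_id
        else:
            # store an anonymized token instead of the user's ID
            entry["user"] = hashlib.sha256(user_id.encode()).hexdigest()[:12]
        return entry

    print(record_decision("worker_42", "acknowledged problem, deferred repair"))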
In one embodiment, the UI 132 may be a Graphical User Interface (GUI) or a web page user interface (WUI). UI 132 may display text, documents, forms, web browser windows, user options, application interfaces, and instructions for operations, and include information that the program presents to the user, such as graphics, text, and sound. In various embodiments, the UI 132 displays one or more icons representing applications that a user may execute in association with the user device 130. In one example, UI 132 represents an application interface of context aware program 200 and/or time visualization program 300. Additionally, UI 132 may control the sequence of actions that the user uses to respond to, and/or confirm actions associated with context aware program 200 and/or time visualization program 300.
In some embodiments, a user of the user device 130 may interact with the UI 132 via a single device, such as a touch screen (e.g., a display) that performs both input to the GUI/WUI and as an output device (e.g., a display) that presents a plurality of icons associated with apps and/or images depicting one or more executing software applications. In various embodiments, the UI 132 accepts input from a plurality of input/output (I/O) devices (not shown) including, but not limited to, a keyboard, a tactile sensor interface (e.g., touchscreen, touchpad), a virtual interface device, and/or a natural user interface (e.g., voice control unit, motion capture device, eye tracking, computerized gloves, heads-up display, etc.). In addition to audio and visual interactions, UI 132 may receive input in response to a user of device 130 utilizing natural language (such as written words or spoken words) that device 130 signals as information and/or commands.
In one embodiment, output device 134 is included within user device 130 and displays AR/VR content and images/video obtained from a camera (not shown) of user device 130. In another embodiment, output device 134 represents a display technology, such as a heads-up display, smart glasses, virtual retinal display, or the like, operatively coupled to user device 130. In various embodiments, output device 134 is a touch screen device that can operate as both a display and an input device. In some embodiments, output device 134 also displays UI 132 and GUI elements related to other programs executing on user device 130. In various embodiments, different instances of output device 134 present different information and/or graphical elements to the user. In other embodiments, output device 134 represents one or more displays outside of area 120 and associated with a remote or robotic monitoring system.
AR program 135 is an augmented reality program that embeds AR elements and/or AR content overlays within a captured picture (i.e., still image) or video feed obtained by a camera associated with user device 130. In one embodiment, AR program 135 embeds and/or moves AR content and/or AR content overlays as indicated and/or generated by context aware program 200 and/or time visualization program 300. In another embodiment, the AR program 135 displays VR content generated by the computer graphics suite 116. In some embodiments, AR program 135 may add and/or modify AR and/or VR content received from system 110 based on instructions from context aware program 200, such as increasing the size of AR content elements, adding visual effects, extending the duration of sensor events, and so forth. In other embodiments, the AR program 135 displays multiple instances of the field of view.
Context aware program 200 is a program that utilizes data from the sensors 125 associated with area 120 to determine whether a problem and/or danger exists, or has the potential to occur, within area 120. In an embodiment, in response to determining that a problem and/or one or more hazards exist or may potentially occur within area 120, context aware program 200 utilizes AR program 135 to embed AR content related to the problem, situation, and/or hazard within an image or video feed corresponding to a portion of area 120. In various embodiments, context aware program 200 utilizes network 140 to access a plurality of resources, files, and programs of system 110.
In some embodiments, if the context aware program 200 determines that the user's attention is not drawn to the identified location associated with the occurring problem, the context aware program 200 further modifies (e.g., augments, zooms in, etc.) the AR content and/or the presentation of the AR content to draw the user's attention. In various embodiments, context aware program 200 may respond to a determination that two or more questions exist within area 120 and generate different AR/VR content based on the user input. In other embodiments, context aware program 200 interfaces with time visualization program 300 and obtains other AR content and/or VR content related to the problem that occurred based on various user inputs related to incomplete fixes to the problem (such as partial or temporary fixes, modifying/adjusting operational settings, etc.); and/or determine a future state of the problem within the area 120 based on the time information input by the user.
Network 140 may be, for example, a Local Area Network (LAN), a telecommunications network (e.g., part of a cellular network), a Wireless Local Area Network (WLAN) such as an intranet, a Wide Area Network (WAN) such as the internet, or any combination of the preceding, and may include wired, wireless, or fiber optic connections. In general, the network 140 may be any combination of connections and protocols that will support communication between the system 110, the sensors 125, the user devices 130, and/or the Internet in accordance with embodiments of the present invention. In various embodiments, network 140 operates locally via wired, wireless, or optical connections, and may be any combination of connections and protocols (e.g., Personal Area Network (PAN), Near Field Communication (NFC), laser, infrared, ultrasound, etc.).
FIG. 2 is a flow diagram depicting operational steps of a context aware program 200 for analyzing information received from one or more sensors associated with an area to identify a problem and then modifying AR content related to the problem to draw the user's attention or focus to the identified problem, in accordance with an embodiment of the present invention. In various embodiments, context aware program 200 interfaces with time visualization program 300 to generate AR/VR content based on one or more selections of a user in response to a problem and/or to modify the AR/VR content to depict potential changes to the problem as a function of one or more time indications. In an embodiment, the user may dynamically modify the generation of AR and/or VR content by time visualization program 300 while context aware program 200 presents the AR/VR content to the user.
In step 201, context aware program 200 determines the location of the user. Context aware program 200 may utilize user device 130 to continuously monitor the user's location and movement within area 120. Context aware program 200 utilizes user device 130 to determine that the user is proximate to area 120, to determine the user's location within area 120, or to determine that the user has left area 120. In some embodiments, context aware program 200 also utilizes user device 130 to determine the orientation of the user. In various embodiments, context aware program 200 prompts the user to opt in to sharing one or more types of data, such as the user's ID, an ID associated with user device 130, tracking data, and the like. For example, a user may opt in to context aware program 200 tracking user device 130, but opt out of identifying user device 130 or the user associated with user device 130.
In step 202, the context aware program 200 retrieves historical problem information. In one embodiment, in response to determining that the user entered the area 120 or is within a proximity threshold of the area 120, the context aware program 200 retrieves information from the historical information 112 to determine whether a known problem is active (e.g., ongoing) or was incompletely corrected. Context aware program 200 may also retrieve information from historical information 112 related to previous instances of corrected problems associated with area 120. In various embodiments, context aware program 200 retrieves further information associated with instances of area 120 from various sources, such as a list of devices and respective values associated with operations, common charts, layouts, sensor locations, and the like.
In step 204, context aware program 200 obtains data from the set of sensors. Context aware program 200 may receive data from sensors among the sensors 125 via network 140 and/or from sensors that communicate directly with user device 130 via wireless communication techniques. In some scenarios, context aware program 200 polls sensors 125 to obtain data related to elements of area 120. In other scenarios, context aware program 200 automatically receives data from sensors among the sensors 125 based on the location of user device 130, such as upon entering area 120.
In another embodiment, context aware program 200 obtains data from historical information 112 relating to groups of sensors 125 within area 120 associated with previously identified issues. In other embodiments, context aware program 200 determines other data associated with area 120 based on one or more features and/or procedures of user device 130.
In step 206, context aware program 200 analyzes the sensor data. Context aware program 200 analyzes the sensor data to determine whether one or more problems exist within area 120 or are likely to occur within area 120 in the future. If the analysis indicates that there are no problems within the area 120, the context aware program 200 terminates. In an embodiment, context aware program 200 compares data obtained from sensors 125 to sensor data and/or equipment operating specifications included within historical information 112 to determine whether the comparison indicates that a problem exists within area 120. Context aware program 200 may also include data obtained from one or more features of user device 130 within the various analyses. Additionally, context aware program 200 may identify one or more elements within area 120 that are associated with the problem.
In another embodiment, context aware program 200 receives results indicating whether there is a problem within area 120 from one or more IoT enabled devices (not shown) that include sensors and may perform field analysis. In some embodiments, the context aware program 200 utilizes the analysis suite 113 and/or cognitive programs to perform more complex analyses, such as determining a severity rating associated with the problem, determining future impacts or events that the problem may produce, and the like.
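As a hedged illustration of the comparison described in step 206, the sketch below checks readings against assumed baseline operating values and infers a simple severity rating from the relative deviation; the baseline values and rating bands are invented for the example.

    # Hedged sketch of the comparison in step 206: obtained readings are compared
    # against operating values stored with the historical information, and a
    # simple severity rating is inferred from how far a reading deviates. The
    # baseline values and rating bands are illustrative assumptions.

    HISTORICAL_BASELINE = {           # nominal operating values for area elements
        ("pump_07", "vibration_mm_s"): 2.0,
        ("pump_07", "temperature_C"): 60.0,
    }

    def rate_severity(deviation_ratio):
        if deviation_ratio < 0.10:
            return 0          # within normal variation: no problem
        if deviation_ratio < 0.25:
            return 1          # minor
        if deviation_ratio < 0.50:
            return 3          # moderate
        return 5              # severe

    def analyze(readings):
        problems = []
        for (element, metric), value in readings.items():
            baseline = HISTORICAL_BASELINE.get((element, metric))
            if baseline is None:
                continue
            severity = rate_severity(abs(value - baseline) / baseline)
            if severity:
                problems.append({"element": element, "metric": metric, "severity": severity})
        return problems

    print(analyze({("pump_07", "vibration_mm_s"): 3.1, ("pump_07", "temperature_C"): 61.0}))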
In step 208, context aware program 200 determines context information associated with the problem. Context aware program 200 determines context information based on information included within one or more resources, such as historical information 112 or other information stored within system 110, such as operations performed within a portion of area 120. Contextual information associated with the problem may include one or more hazards released or generated by the problem, such as smoke, sparks, or water; the location within area 120 where the problem is occurring; a description of the problem, such as "within the power distribution panel" or "embedded within subassembly 325 of equipment ID X2B"; and so on. In other embodiments, context aware program 200 also determines contextual information associated with area 120 based on features and/or programs of user device 130, such as identifying a sound and determining the direction of the sound.
In some embodiments, in response to determining that the problem releases or generates a hazard, context aware program 200 utilizes network 140 to access other resources (not shown) to determine whether the hazard is a threat to the user and/or other elements of area 120. In another embodiment, the context aware program 200 also accesses the issue resolution information 114 to identify one or more actions to correct or temporarily fix the issue.
In step 210, context aware program 200 generates AR content. In an embodiment, the context aware program 200 utilizes information associated with the question and/or the risk related to the question to select at least one media file from the corpus of media content 115 or the AR content 117 that represents the question and/or the risk related to the question. In one example, AR content associated with slow leaks may be represented by a pipe with a short line and two drops of liquid, while more severe leaks may be represented by a pipe with large cracks and liquid flow. In another example, AR content related to an electrical problem may be depicted as a pair of lightning bolts. If there is also an arc, the context aware program 200 can download audio content from the corpus 115 of media content or utilize the computer graphics suite 116 to apply a strobe effect to lightning rays within the media file.
In various embodiments, context aware program 200 instructs AR program 135 to modify AR content based on information related to the problem. In one example, context aware program 200 instructs AR program 135 to apply a particular visual effect around the AR content based on whether the problem is exposed or enclosed; to apply another visual effect if the problem is behind another element of the displayed portion of area 120; and to add a directional indication, such as an arrow, if the location of the problem is outside the portion of the area 120 displayed within the output device 134. In another example, context aware program 200 instructs AR program 135 to change the brightness of the AR content or to modify visual effects, such as a color around the AR content that is keyed to the severity rating of the problem. In other embodiments, if the context aware program 200 is unable to identify AR content within the corpus 115 of media content or other network accessible resources that is applicable to the problem, the context aware program 200 utilizes a cognitive program and the computer graphics suite 116 to extract and generate AR content from imagery related to one or more aspects of the problem.
Still referring to step 210, in some embodiments, context aware program 200 also generates an AR content overlay that includes contextual information associated with the problem, the hazards associated with the problem, and/or other relevant information. For example, context aware program 200 generates an AR content overlay, such as a hovering element, that includes an equipment ID, the severity rating of the problem, warning messages, status information, and the like. In other embodiments, if the context aware program 200 is unable to identify representations of the problem or related hazards, the context aware program 200 interfaces with the computer graphics suite 116 to utilize the corpus 115 of media content or media files stored within other network accessible resources to create one or more media files (e.g., AR content) that represent the respective problem and/or related hazards.
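The following sketch illustrates the selection idea in step 210 under assumed file names: a media file is keyed by problem type and severity, and a hovering overlay carries the contextual details. The corpus contents and field names are hypothetical.

    # Minimal sketch of the content-selection idea in step 210: pick a media file
    # keyed by problem type and severity from a (hypothetical) corpus, and attach
    # a hovering text overlay with contextual details. File names are assumptions.

    MEDIA_CORPUS = {
        ("leak", "minor"):  "pipe_small_drip.png",
        ("leak", "severe"): "pipe_crack_flow.png",
        ("electrical", "severe"): "lightning_bolts_strobe.gif",
    }

    def select_ar_content(problem_type, severity, equipment_id, warning):
        level = "severe" if severity >= 3 else "minor"
        media = MEDIA_CORPUS.get((problem_type, level), "generic_warning.png")
        overlay = {                      # AR content overlay (hovering element)
            "equipment_id": equipment_id,
            "severity": severity,
            "warning": warning,
        }
        return {"media": media, "overlay": overlay}

    print(select_ar_content("electrical", 4, "X2B", "Arcing detected inside panel"))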
In decision step 211, context aware program 200 determines whether multiple problems are identified. In one embodiment, context aware program 200 determines that a plurality of problems are identified based on the analysis performed in step 206. In another embodiment, context aware program 200 determines that multiple problems are identified based on the analysis performed in step 206 and the status log of problems in historical information 112.
In response to determining that multiple problems are identified (the "yes" branch, decision step 211), context aware program 200 determines the effects associated with the multiple problems (step 212).
In step 212, context aware program 200 determines the impact associated with the plurality of problems. Context aware program 200 may utilize one or more cognitive programs (not shown) to search and analyze information included within the various information sources to determine the impact and/or risk associated with each problem accordingly. In an embodiment, context aware program 200 determines the impact (e.g., effect) and/or danger associated with multiple problems based on information included within historical information 112, analysis of sensor data, issue resolution information 114, and/or other internal information sources. For example, context aware program 200 identifies that a stuck valve within one portion of area 120 causes an overheating problem within equipment in a different portion of area 120.
In another embodiment, the context aware program 200 also utilizes the analysis suite 113 to determine priorities for resolving (e.g., repairing) multiple problems within the area 120. In some embodiments, context aware program 200 searches network accessible resources, such as safety and hazard information available from one or more regulatory agencies, to determine whether the impacts and/or hazards of two or more problems interact and increase the severity of the problems and/or increase the risk to users within area 120. For example, a reduced-ventilation problem together with a problem generating a smoke hazard may contaminate the area 120 or risk the user breathing unsafe levels of smoke. Subsequently, in step 214, context aware program 200 presents the AR content related to the problems to the user.
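A sketch of the interaction check described above follows, assuming an illustrative interaction table: certain pairs of concurrent problems (e.g., standing water near a live electrical fault) raise the combined severity and add a hazard. The problem types, hazard names, and severity bumps are assumptions.

    # Sketch of the interaction check in step 212: certain pairs of concurrent
    # problems are assumed to aggravate one another (e.g., standing water near a
    # live electrical fault), so the combined severity and hazard list are raised.
    # The interaction table is an illustrative assumption.

    INTERACTIONS = {
        frozenset({"electrical_fault", "standing_water"}): ("electric shock", 2),
        frozenset({"reduced_ventilation", "smoke"}):       ("unsafe air quality", 2),
    }

    def combine(problems):
        """problems: list of dicts with 'type', 'severity', 'hazards'."""
        combined = [dict(p) for p in problems]
        types = {p["type"] for p in problems}
        for pair, (added_hazard, bump) in INTERACTIONS.items():
            if pair <= types:
                for p in combined:
                    if p["type"] in pair:
                        p["severity"] = min(5, p["severity"] + bump)
                        p["hazards"] = sorted(set(p["hazards"]) | {added_hazard})
        return combined

    print(combine([
        {"type": "electrical_fault", "severity": 3, "hazards": ["shock"]},
        {"type": "standing_water", "severity": 1, "hazards": ["slip"]},
    ]))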
Referring to decision step 211, in response to determining that multiple problems are not identified (the "no" branch, decision step 211), context aware program 200 presents AR content related to the problem to the user (step 214).
In step 214, context aware program 200 presents the AR content related to the problem to the user. Context aware program 200 utilizes AR program 135 to display AR and/or VR content via output device 134. In one embodiment, context aware program 200 selects AR content to present based on determining the location of the problem within area 120 and/or other factors (discussed previously with respect to step 210). In some embodiments, in response to determining that multiple problems are present within area 120, context aware program 200 presents AR content related to each problem. Additionally, context aware program 200 may also present additional AR content, such as a mask icon, electrically insulated boots, gloves, and the like, associated with interactions between the multiple problems and/or associated with hazards arising from interactions between the multiple problems. The additional AR content may also include a content overlay that includes contextual and/or descriptive information associated with the interactions between the plurality of problems. Further, context aware program 200 may instruct AR program 135 to adjust the presentation of the respective AR content based on the severity rating respectively associated with each problem.
In one scenario, if the context aware program 200 determines that the problem is located within the portion of the area 120 displayed within the output device 134, the context aware program 200 applies AR content related to the problem near the visual location of the problem displayed within the output device 134. In another scenario, if context aware program 200 determines that the problem is not visible (e.g., enclosed behind other elements of area 120, etc.), context aware program 200 presents modified AR content related to the problem at the approximate visual location of the problem within output device 134. In other scenarios, if context aware program 200 determines that the location of the problem is not within the displayed portion of area 120, context aware program 200 instructs AR program 135 to also include a directional indication related to the location of the problem in association with the AR content related to the problem.
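The placement decision in step 214 might look roughly like the sketch below, which assumes a simple horizontal field-of-view model: annotate in place when the problem is in view, style it differently when occluded, and add a directional indication when it is off screen. The style names and default field-of-view angle are hypothetical.

    # Sketch of the placement decision in step 214: if the problem location falls
    # inside the displayed portion of the area it is annotated in place, if it is
    # hidden behind another element the content is styled differently, and if it
    # is outside the field of view a directional indication is added.

    def plan_presentation(problem_bearing_deg, device_heading_deg, occluded, fov_deg=60.0):
        # smallest signed angle between where the device points and the problem
        offset = (problem_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
        if abs(offset) <= fov_deg / 2:
            style = "occluded_outline" if occluded else "anchored_highlight"
            return {"style": style, "direction_arrow": None}
        return {"style": "edge_marker", "direction_arrow": "right" if offset > 0 else "left"}

    print(plan_presentation(95.0, 10.0, occluded=False))   # problem is off to the right
    print(plan_presentation(15.0, 10.0, occluded=True))    # problem hidden behind a panel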
In step 216, context aware program 200 determines a user response related to the presentation of the AR content. In one embodiment, context aware program 200 determines that the user is responding to the presentation of AR content based on user device 130 moving toward the location of the problem. In various embodiments, the context aware program 200 determines a user response based on the user activating the UI 132 to review information related to an action to be performed to correct the problem or to identify a temporary fix to the problem determined in step 208.
In another embodiment, context aware program 200 determines that the user is not responding to the presentation of AR content based on determining that user device 130 is moving and/or oriented in a direction away from the location of the problem. In some embodiments, context aware program 200 determines that the user confirms the problem associated with the presented AR content based only on information input to UI 132. In other embodiments, the context aware program 200 determines that the user confirms the problem associated with the presented AR content based on the user executing the time visualization program 300 to determine one or more future states of the problem.
In decision step 217, context aware program 200 determines whether the user responds to the problem. In one embodiment, the context aware program 200 determines that the user responded to the problem by determining that the user at least accessed the issue resolution information 114 and that subsequent analysis of the data received from the sensors indicates an absence of the problem (e.g., the problem was corrected or temporarily fixed). In some embodiments, context aware program 200 determines that the user responded by confirming the existence of the problem via UI 132, but chose not to correct the problem. In another embodiment, context aware program 200 determines that the user is not responding to the problem based on the movement and/or orientation of user device 130 in a direction different from the location of the problem. In other embodiments, the context aware program 200 determines that the user is not responding to the problem based on the user executing the time visualization program 300 to determine one or more future states of the problem based on various inputs and/or selections.
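A minimal sketch of the response check in steps 216 and 217 follows, assuming the device reports a short track of recent positions: the user is treated as responding when the device gets meaningfully closer to the problem location. The distance threshold and coordinate model are assumptions.

    # Sketch of the check in steps 216/217: the user is treated as responding when
    # the device moves toward the problem location, and as not responding when it
    # keeps moving away. Distances are illustrative; a real device would use its
    # positioning and inertial features.

    import math

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def is_responding(track, problem_xy, approach_m=0.5):
        """track: recent device positions, oldest first."""
        if len(track) < 2:
            return False
        start, end = track[0], track[-1]
        # responding if the device got meaningfully closer to the problem location
        return distance(start, problem_xy) - distance(end, problem_xy) >= approach_m

    print(is_responding([(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)], problem_xy=(5.0, 2.0)))  # True
    print(is_responding([(0.0, 0.0), (-1.0, 0.0)], problem_xy=(5.0, 2.0)))             # False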
In response to determining that the user is not responding to the problem (the "no" branch, decision step 217), context aware program 200 updates the AR content presented to the user (step 218).
In step 218, the context aware program 200 updates the AR content presented to the user. In one embodiment, in response to determining that the user is not responding to the problem based on the user moving away from the location of the problem, the context aware program 200 updates and/or adds AR content to attract the user's attention and/or prompt the user to respond to the problem. In one example, context aware program 200 may instruct AR program 135 to modify one or more aspects of the AR content related to the problem, such as increasing the size of the AR content or modifying a directional indication associated with the AR content. Context aware program 200 may continue to instruct AR program 135 to modify the AR content based on subsequent responses or a continued lack of user response to the problem. In another example, context aware program 200 also instructs AR program 135 to move the modified AR content so that it stays in the field of view of output device 134 as the user moves. In response to presenting the updated AR and/or VR content to the user, context aware program 200 loops to step 216 to determine a user response related to another presentation of the AR and/or VR content.
In other embodiments, the context aware program 200 presents updated AR and/or VR content received from the time visualization program 300 that depicts one or more future states of the problem, as opposed to AR content related to the current problem; content associated with determining that the user delays responding to the problem in order to determine one or more future states of the problem; and/or content associated with the problem remaining uncorrected. In some embodiments, in response to receiving multiple items of AR and/or VR content from time visualization program 300, context aware program 200 utilizes UI 132 to notify the user about the available content and to allow the user to select the content to present.
Still referring to step 218, in other embodiments, context aware program 200 instructs AR program 135 to present multiple instances of the same portion of area 120 that include different AR/VR content based on indications from the user, such as different time snapshots of one problem or forecast views of different problems.
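By way of a hypothetical sketch of the escalation described for step 218 (Python; the AROverlay structure, its field names, and the scaling limits are assumptions and not part of the embodiments), the update could amount to bounded adjustments of a few overlay attributes:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AROverlay:
    label: str                                   # text/graphic tied to the problem
    scale: float = 1.0                           # rendered size multiplier
    pinned_to_view: bool = False                 # follow the field of view of output device 134
    direction_hint_deg: Optional[float] = None   # arrow pointing toward the problem location
    escalations: int = 0                         # how many times the overlay has been escalated


def escalate_overlay(overlay: AROverlay, bearing_to_problem_deg: float,
                     scale_step: float = 1.25, max_scale: float = 3.0) -> AROverlay:
    """Enlarge the overlay, refresh its directional cue, and pin it to the view."""
    overlay.scale = min(overlay.scale * scale_step, max_scale)
    overlay.direction_hint_deg = bearing_to_problem_deg
    overlay.pinned_to_view = True
    overlay.escalations += 1
    return overlay
```

Capping the scale, as in this sketch, is one way to keep repeated escalations from obscuring the rest of the visual information presented via output device 134.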
Referring to decision step 217, in response to determining that the user responded to the problem (the "yes" branch, decision step 217), context aware program 200 updates information associated with the problem (step 220).
In step 220, context aware program 200 updates information associated with the problem. Context aware program 200 updates historical information 112 and/or problem resolution information 114 based on information input by the user and/or subsequent data from sensors 125. The information input by the user may include information indicating: actions, tools, and/or supplies used to correct the problem; documentation of the problem (e.g., written notes, photographs, updated status information, etc.); a hazard found near the problem; and/or the impact of the problem on one or more elements of area 120. In an embodiment, if context aware program 200 determines that the user confirms the problem but does not correct the problem, context aware program 200 prompts the user via UI 132 to record the reason the problem was not resolved (e.g., fixed) and records the problem and/or hazard within historical information 112.
In one embodiment, context aware program 200 activates another aspect of UI 132 to receive user input associated with the user correcting a problem within area 120. In another embodiment, context aware program 200 activates another aspect of UI 132 to receive user input associated with the user performing an incomplete fix of a problem, such as applying a temporary fix, modifying parameters/settings, implementing a "work-around," or the like. In some embodiments, context aware program 200 also stores the inputs to time visualization program 300, the outputs generated by time visualization program 300, and/or user information associated with analysis suite 113, such as VR media files, descriptions of problem progression, and changes related to future states of the problem at specified points in time.
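One minimal way to persist the step 220 updates is sketched below. This is an illustrative assumption only: the JSON-lines format, the record_problem_update function, and the field names are hypothetical, and historical information 112 need not be a flat file.

```python
import json
import time
from typing import List, Optional


def record_problem_update(history_path: str, problem_id: str, status: str,
                          actions: Optional[List[str]] = None,
                          reason_unresolved: Optional[str] = None,
                          hazards: Optional[List[str]] = None) -> None:
    """Append one update record (as a JSON line) to a history log."""
    entry = {
        "problem_id": problem_id,
        "timestamp": time.time(),
        "status": status,                        # e.g. "corrected", "temporary_fix", "deferred"
        "actions": actions or [],                # tools, supplies, or actions used
        "reason_unresolved": reason_unresolved,  # captured via the UI when the fix is deferred
        "hazards": hazards or [],                # hazards observed near the problem
    }
    with open(history_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```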
Fig. 3 is a flow diagram depicting operational steps of time visualization program 300, a program for generating and/or modifying AR and/or VR content based on one or more selections by a user in response to determining that a problem that varies over time exists within area 120, in accordance with an embodiment of the present invention. In various embodiments, time visualization program 300 executes in response to one or more user actions initiated while context aware program 200 is executing.
In step 302, time visualization program 300 receives information related to the problem. In various embodiments, time visualization program 300 receives information related to problems within area 120 determined by context aware program 200, such as sensor data, results of one or more analyses, contextual information associated with a problem and/or one or more related hazards, interactions between two or more problems, and the like. In some embodiments, time visualization program 300 obtains additional information related to the problem from historical information 112. In an embodiment, time visualization program 300 accesses network-accessible resources to obtain other information related to similar instances of problems occurring in areas other than area 120, such as videos from safety laboratories that monitor or record the progression of problems under controlled conditions.
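For illustration, the bundle of information received in step 302 could be modeled as a simple structure; the ProblemInfo name and its fields below are assumptions chosen to mirror the items listed above, not a prescribed data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ProblemInfo:
    problem_id: str
    element_id: str                        # physical element within area 120 tied to the problem
    location: tuple                        # position of the problem in the area (assumed coordinates)
    severity: float                        # rating from the prior analysis
    sensor_data: Dict[str, float] = field(default_factory=dict)    # relevant sensor readings
    hazards: List[str] = field(default_factory=list)               # related hazards
    interacting_problems: List[str] = field(default_factory=list)  # IDs of interacting problems
    history_refs: List[str] = field(default_factory=list)          # entries from historical information 112
    external_refs: List[str] = field(default_factory=list)         # e.g., lab videos of similar problems
```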
In step 304, time visualization program 300 receives input from a user. In one embodiment, time visualization program 300 receives, via UI 132, time indications and factors from user input, such as one or more future time periods for determining the status of a problem, a severity rating threshold, whether or not an action is performed to correct, alleviate, or temporarily fix a problem, an order in which to resolve multiple problems, and so forth. In some embodiments, time visualization program 300 also receives input from the user indicating how to present the AR and/or VR content, such as in two-hour increments, at 10:1 time compression, or at time intervals where the predicted severity rating changes or a hazard occurs.
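The user selections received in step 304 could be captured as follows. This is a sketch only; the ForecastRequest name, field names, and default values are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ForecastRequest:
    horizons_hours: List[float]              # future time periods to evaluate, e.g. [2.0, 8.0, 24.0]
    severity_threshold: float = 0.5          # only surface states at or above this rating
    assumed_action: Optional[str] = None     # None, "temporary_fix", or "correct"
    resolution_order: List[str] = field(default_factory=list)  # order in which to resolve problems
    snapshot_hours: float = 2.0              # e.g. present snapshots in two-hour increments
    time_compression: float = 10.0           # e.g. 10:1 playback compression for VR sequences
    break_on_events: bool = False            # present at intervals where severity/hazards change
```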
In step 306, time visualization program 300 determines temporal modifications associated with the AR content related to the problem. In various embodiments, time visualization program 300 identifies and selects AR and/or VR content from among a plurality of media files included in corpus 115 of media content based on the information obtained in step 302. In one embodiment, time visualization program 300 determines one or more temporal modifications associated with the selected AR and/or VR content related to the problem based on the time indications and factors received at step 304. In another embodiment, time visualization program 300 determines one or more temporal modifications associated with the AR and/or VR content related to the problem based on user input received in response to the user viewing temporal modifications of the AR and/or VR content (in context aware program 200, step 218). In other embodiments, time visualization program 300 may further modify the AR and/or VR content based on determining that two or more problems interact and aggravate one or more aspects and/or hazards of the problems. Time visualization program 300 may store the modified and/or generated AR and VR content within AR content 117.
In some embodiments, time visualization program 300 utilizes computer graphics suite 116 to temporally modify the selected AR and/or VR content based on the input received from the user. In other embodiments, if time visualization program 300 is unable to identify AR and/or VR content related to the problem to modify over time within corpus 115 of media content or another network-accessible resource, time visualization program 300 utilizes a cognitive program and computer graphics suite 116 to extract and generate AR and/or VR content from accessible content that includes at least images related to aspects of the problem.
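For illustration only, the temporal modification of step 306 could be driven by a projected severity per requested snapshot time. The linear-growth model and the project_states name below are assumptions for the sketch; the embodiments may instead rely on historical information 112, the analyses, and the cognitive program.

```python
from typing import Iterator, Tuple


def project_states(current_severity: float, growth_per_hour: float,
                   snapshot_hours: float, horizon_hours: float,
                   severity_cap: float = 1.0) -> Iterator[Tuple[float, float]]:
    """Yield (time_offset_hours, projected_severity) for each snapshot up to the horizon."""
    t = snapshot_hours
    while t <= horizon_hours:
        yield t, min(current_severity + growth_per_hour * t, severity_cap)
        t += snapshot_hours


# Example: two-hour snapshots over a ten-hour horizon.
for t, severity in project_states(0.2, 0.05, 2.0, 10.0):
    print(f"+{t:.0f} h -> projected severity {severity:.2f}")
```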
In step 308, time visualization program 300 sends the temporally modified AR content to the user's device. In an embodiment, time visualization program 300 sends the temporally modified AR content, such as a snapshot of a future state of the problem, to user device 130 for presentation by AR program 135 or via context aware program 200 in step 218. In another embodiment, time visualization program 300 sends VR content to user device 130, such as an animation sequence associated with a forecast or progression of the state of the problem based on the time information and factors input by the user.
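A minimal sketch of packaging the content sent in step 308 follows; the build_content_payload function, the payload fields, and the idea that media_uri references an item within AR content 117 are assumptions for the example, and the actual transport between system 110 and user device 130 is not specified here.

```python
import json
from typing import Iterable, Tuple


def build_content_payload(problem_id: str, content_kind: str, media_uri: str,
                          snapshots: Iterable[Tuple[float, float]]) -> str:
    """Package temporally modified content for transmission to the user's device.

    content_kind is, e.g., "AR_snapshot" or "VR_sequence"; snapshots pairs a
    time offset (hours) with a projected severity for that point in time.
    """
    return json.dumps({
        "problem_id": problem_id,
        "content_kind": content_kind,
        "media_uri": media_uri,
        "timeline": [{"t_hours": t, "severity": s} for t, s in snapshots],
    })
```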
In decision step 309, time visualization program 300 determines whether additional user input has been received. In an embodiment, time visualization program 300 determines that additional user input is received from user device 130 via UI 132. In one example, time visualization program 300 receives information indicating that the user selected to view the VR progression of a different problem associated with area 120. In another example, time visualization program 300 receives information indicating that the user chooses to implement a temporary fix to the problem and requests to view a VR sequence of one-hour snapshots forecasting the impact on area 120 over the next ten days.
In response to receiving additional user input (the "yes" branch, decision step 309), time visualization program 300 loops to step 306 to determine another temporal modification associated with the AR and/or VR content related to the problem based on the additional input from the user. Referring again to decision step 309, in response to determining that no additional user input has been received (the "no" branch, decision step 309), time visualization program 300 determines whether the user has corrected the problem (decision step 311).
In decision step 311, time visualization program 300 determines whether the user corrected the problem. In one embodiment, time visualization program 300 determines that the user has not corrected the problem based on information determined by context aware program 200 in step 216 and/or decision step 217, such as movement of the user relative to the location of the problem; a lack of change within historical information 112; or identifying a log status related to the problem that indicates unresolved, delayed, etc. In another embodiment, if multiple problems exist within area 120, time visualization program 300 may determine that the user has not corrected another problem. In one example, time visualization program 300 determines that the user corrected a problem with a low severity rating but indicated a delayed fix for a problem with a higher severity rating. In another example, time visualization program 300 determines that the user corrected a problem with a high severity rating and that another problem has an acceptable resolution delay duration. In some embodiments, time visualization program 300 determines that the user corrected the problem based on updates to at least historical information 112.
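The prioritization implied by decision step 311 could be approximated as shown below; the status strings, the uncorrected_problems name, and the severity cutoff are assumptions introduced only to illustrate how acceptable deferrals might be separated from problems that remain uncorrected.

```python
from typing import Dict, List


def uncorrected_problems(statuses: Dict[str, str],
                         severities: Dict[str, float],
                         max_deferrable_severity: float = 0.3) -> List[str]:
    """Return IDs of problems still requiring attention.

    statuses maps problem_id -> "corrected" | "deferred" | "unresolved".
    A deferral is treated as acceptable only for problems at or below the
    severity cutoff; everything else remains uncorrected.
    """
    pending = []
    for problem_id, status in statuses.items():
        if status == "corrected":
            continue
        if status == "deferred" and severities.get(problem_id, 1.0) <= max_deferrable_severity:
            continue
        pending.append(problem_id)
    return pending
```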
In response to determining that the user has not corrected the problem (the "no" branch, decision step 311), time visualization program 300 determines AR content associated with the uncorrected problem (step 312).
In step 312, time visualization program 300 determines AR content associated with the uncorrected problem. In an embodiment, time visualization program 300 selects AR and/or VR content from among a plurality of media files included within corpus 115 of media content and AR content 117, and modifies the content based on the information obtained in steps 302 and 304. In another embodiment, time visualization program 300 determines one or more temporal modifications associated with the AR and/or VR content related to the uncorrected problem based on user input received in response to the user viewing temporal modifications of the AR and/or VR content (in context aware program 200, step 218). In some embodiments, time visualization program 300 may further modify the AR and/or VR content related to an uncorrected problem based on determining that one or more aspects and/or hazards of the uncorrected problem are exacerbated by another uncorrected problem.
In some embodiments, time visualization program 300 utilizes computer graphics suite 116 to apply temporal modifications to the selected AR or VR content based on the input received from the user. In other embodiments, if time visualization program 300 is unable to identify AR and/or VR content related to the uncorrected problem to modify within corpus 115 of media content or another network-accessible resource, time visualization program 300 utilizes a cognitive program and computer graphics suite 116 to extract and generate AR and/or VR content from accessible content sources that include at least images related to aspects of the problem.
In step 314, time visualization program 300 sends the temporally modified AR content to the user's device. In an embodiment, the time visualization program 300 sends the temporally modified AR and/or VR content to the user device 130 for presentation to the user via the AR program 135 of the user device 130. In some embodiments, time visualization program 300 sends temporally modified AR and/or VR content to user device 130 and interfaces with various aspects of context aware program 200, such as the loops associated with steps 218, 216 and decision step 217.
Referring to decision step 311, in response to determining that the user corrected the problem ("yes" branch, decision step 311), time visualization program 300 terminates.
Fig. 4 depicts a computer system 400, which is representative of system 110 and client device 130. Computer system 400 also represents one or more instances of sensors 125. Computer system 400 is an example of a system that includes software and data 412. Computer system 400 includes processor(s) 401, cache 403, memory 402, persistent storage 405, communication unit 407, input/output (I/O) interface(s) 406, and communication fabric 404. Communication fabric 404 provides communication between cache 403, memory 402, persistent storage 405, communication unit 407, and input/output (I/O) interface(s) 406. Communication fabric 404 may be implemented with any architecture designed to transfer data and/or control information between processors (such as microprocessors, communication and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, communication fabric 404 may be implemented with one or more buses or crossbar switches.
Memory 402 and persistent storage 405 are computer-readable storage media. In this embodiment, memory 402 comprises Random Access Memory (RAM). In general, memory 402 may include any suitable volatile or non-volatile computer-readable storage media. Cache 403 is a fast memory that enhances the performance of processor 401 by holding recently accessed data from memory 402 and data near recently accessed data.
Program instructions and data used to practice embodiments of the present invention may be stored in persistent storage 405 and memory 402 for execution by one or more respective processors 401 via cache 403. In an embodiment, persistent storage 405 includes a magnetic hard drive. In addition to or in lieu of a magnetic hard disk drive, persistent storage 405 may include a solid state hard drive, a semiconductor memory device, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash memory, or any other computer-readable storage medium capable of storing program instructions or digital information.
The media used by persistent storage 405 may also be removable. For example, a removable hard drive may be used for persistent storage 405. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of persistent storage 405. Software and data 412 are stored in persistent storage 405 for access and/or execution by one or more of the respective processors 401 via cache 403 and one or more memories of memory 402. With respect to system 110, software and data 412 includes historical information 112, analysis suite 113, problem resolution information 114, corpus of media content 115, computer graphics suite 116, AR content 117, time visualization program 300, and other programs and data (not shown). With respect to client device 130, software and data 412 includes AR program 135, context aware program 200, and other data and programs (not shown). With respect to the example of sensors 125, software and data 412 includes firmware and other data and programs (not shown).
In these examples, communication unit 407 provides communication with other data processing systems or devices, including system 110, sensors 125, and client device 130. In these examples, communication unit 407 includes one or more network interface cards and/or wireless communication adapters. Communication unit 407 may provide communication using one or both of physical and wireless communication links. Program instructions and data used to practice embodiments of the present invention may be downloaded to persistent storage 405 via communication unit 407.
The I/O interface 406 allows for input and output of data with other devices that may be connected to each computer system. For example, the I/O interface 406 may provide a connection to an external device 408, such as a keyboard, a keypad, a touch screen, and/or some other suitable input device. External device 408 may also include portable computer-readable storage media such as, for example, a thumb drive, a portable optical or magnetic disk, and a memory card. Software and data for practicing embodiments of the invention may be stored on such portable computer-readable storage media and loaded onto persistent storage 405 via I/O interface(s) 406. The I/O interface 406 is also connected to a display 409.
Display 409 provides a mechanism for displaying data to a user and may be, for example, a computer monitor. Display 409 may also be used as a touch screen, such as the display of a tablet computer or smartphone. Alternatively, the display 409 displays information to the user based on projection techniques, such as a virtual retinal display, a virtual display, or an image projector.
The programs described herein are identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions embodied therewith for causing a processor to perform various aspects of the present invention.
The computer readable storage medium may be a tangible device capable of retaining and storing instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device such as a punch card or a raised pattern in a groove with instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium as used herein should not be interpreted as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a corresponding computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, Field Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium having stored therein the instructions comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, with some or all of the blocks overlapping in time, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The description of various embodiments of the present invention has been presented for purposes of illustration but is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The terminology used herein is chosen to best explain the principles of the embodiments, the practical application, or improvements to the technology found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (21)

1. A method, comprising:
receiving, by one or more computer processors, visual information corresponding to an area from a device associated with a user;
receiving, by one or more computer processors, data from a set of one or more sensors within the area, wherein the area comprises a plurality of physical elements;
determining, by one or more computer processors, based on analyzing the data received from the set of sensors, that a first problem exists within the area and a first physical element in the area corresponding to the first problem;
generating, by one or more computer processors, Augmented Reality (AR) content related to the first problem existing within the area; and
displaying, by one or more computer processors, the generated AR content related to the first problem within the visual information corresponding to the area via the device associated with the user.
2. The method of claim 1, further comprising:
determining, by one or more computer processors, a location and an orientation within the area corresponding to the user based on the device associated with the user;
determining, by one or more computer processors, a field of view associated with a portion of the area based on the location and the orientation corresponding to the user; and
positioning, by one or more computer processors, the generated AR content within the determined field of view associated with the area based on the location corresponding to the first problem.
3. The method of claim 1, further comprising:
determining, by one or more computer processors, a location within the area corresponding to the first problem and the first physical element in the area corresponding to the first problem based on data received from the set of sensors.
4. The method of claim 1, wherein generating the AR content related to the first problem further comprises:
selecting, by one or more computer processors, a graphical element related to the first problem from a plurality of media files; and
modifying, by one or more computer processors, one or more aspects of the graphical element related to the first problem based on one or more items selected from the group consisting of increasing a size of the graphical element, adding a visual effect, and adding a directional indication.
5. The method of claim 1, wherein displaying the generated AR content related to the first problem further comprises:
determining, by one or more computer processors, a severity rating associated with the first problem; and
adjusting, by one or more computer processors, one or more aspects of the generated AR content based on the determined severity rating associated with the first problem.
6. The method of claim 1, wherein the first problem comprises one or more items selected from the group consisting of an out-of-specification operating condition, a defect within infrastructure of the area, a defect associated with one or more physical elements within the area, and a hazard generated by the first problem.
7. The method of claim 2, further comprising:
determining, by one or more computer processors, that the user moved away from the location of the first problem; and
modifying, by one or more computer processors, one or more aspects of the generated AR content related to the first problem displayed to the user in response to determining that the user moved away from the location of the first problem.
8. A computer program product, comprising:
one or more computer-readable storage media and program instructions stored on the one or more computer-readable storage media, the program instructions readable/executable by one or more computer processors and comprising:
program instructions for receiving visual information corresponding to an area from a device associated with a user;
program instructions for receiving data from a set of one or more sensors within the area, wherein the area comprises a plurality of physical elements;
program instructions for determining, based on analyzing the data received from the set of sensors, that a first problem exists within the area and a first physical element in the area corresponding to the first problem;
program instructions for generating Augmented Reality (AR) content related to the first problem existing within the area; and
program instructions for displaying, via the device associated with the user, the generated AR content related to the first problem within the visual information corresponding to the area.
9. The computer program product of claim 8, further comprising:
program instructions for determining a location and an orientation within the area corresponding to the user based on the device associated with the user;
program instructions for determining a field of view associated with a portion of the area based on the location and the orientation corresponding to the user; and
program instructions for positioning the generated AR content within the determined field of view associated with the area based on the location corresponding to the first problem.
10. The computer program product of claim 8, further comprising:
program instructions for determining a location within the area corresponding to the first problem and the first physical element in the area corresponding to the first problem based on data received from the set of sensors.
11. The computer program product of claim 8, wherein the program instructions for generating the AR content related to the first problem further comprise:
program instructions for selecting a graphical element related to the first problem from a plurality of media files; and
program instructions for modifying one or more aspects of the graphical element related to the first problem based on one or more items selected from the group consisting of increasing a size of the graphical element, adding a visual effect, and adding a directional indication.
12. The computer program product of claim 8, wherein the program instructions for displaying the generated AR content related to the first problem further comprise:
program instructions for determining a severity rating associated with the first problem; and
program instructions for adjusting one or more aspects of the generated AR content based on the determined severity rating associated with the first problem.
13. The computer program product of claim 8, wherein the first problem comprises one or more items selected from the group consisting of an out-of-specification operating condition, a defect within infrastructure of the area, a defect associated with one or more physical elements within the area, and a hazard generated by the first problem.
14. The computer program product of claim 9, further comprising:
program instructions for determining that the user moved away from the location of the first problem; and
program instructions for modifying one or more aspects of the generated AR content related to the first problem displayed to the user in response to determining that the user moved away from the location of the first problem.
15. A computer system, comprising:
one or more computer processors;
one or more computer-readable storage media; and
program instructions stored on the one or more computer-readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising:
program instructions for receiving visual information corresponding to an area from a device associated with a user;
program instructions for receiving data from a set of one or more sensors within the area, wherein the area comprises a plurality of physical elements;
program instructions for determining, based on analyzing the data received from the set of sensors, that a first problem exists within the area and a first physical element in the area corresponding to the first problem;
program instructions for generating Augmented Reality (AR) content related to the first problem existing within the area; and
program instructions for displaying, via the device associated with the user, the generated AR content related to the first problem within the visual information corresponding to the area.
16. The computer system of claim 15, further comprising:
program instructions for determining a location and an orientation within the area corresponding to the user based on the device associated with the user;
program instructions for determining a field of view associated with a portion of the area based on the location and the orientation corresponding to the user; and
program instructions for positioning the generated AR content within the determined field of view associated with the area based on the location corresponding to the first problem.
17. The computer system of claim 15, further comprising:
program instructions for determining a location within the area corresponding to the first problem and the first physical element in the area corresponding to the first problem based on data received from the set of sensors.
18. The computer system of claim 15, wherein the program instructions for generating the AR content related to the first problem further comprise:
program instructions for selecting a graphical element related to the first problem from a plurality of media files; and
program instructions for modifying one or more aspects of the graphical element related to the first problem based on one or more items selected from the group consisting of increasing a size of the graphical element, adding a visual effect, and adding a directional indication.
19. The computer system of claim 15, wherein the program instructions for displaying the generated AR content related to the first problem further comprise:
program instructions for determining a severity rating associated with the first problem; and
program instructions for adjusting one or more aspects of the generated AR content based on the determined severity rating associated with the first problem.
20. The computer system of claim 16, further comprising:
program instructions for determining that the user moved away from the location of the first problem; and
program instructions for modifying one or more aspects of the generated AR content related to the first problem displayed to the user in response to determining that the user moved away from the location of the first problem.
21. A system comprising means for performing the steps of the method according to any one of claims 1-7, respectively.
CN202111434876.3A 2020-12-10 2021-11-29 Augmented reality augmented context awareness Pending CN114625241A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/117,637 2020-12-10
US17/117,637 US20220188545A1 (en) 2020-12-10 2020-12-10 Augmented reality enhanced situational awareness

Publications (1)

Publication Number Publication Date
CN114625241A true CN114625241A (en) 2022-06-14

Family

ID=79163959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111434876.3A Pending CN114625241A (en) 2020-12-10 2021-11-29 Augmented reality augmented context awareness

Country Status (5)

Country Link
US (1) US20220188545A1 (en)
JP (1) JP2022092599A (en)
CN (1) CN114625241A (en)
DE (1) DE102021129177A1 (en)
GB (1) GB2604977A (en)

Also Published As

Publication number Publication date
JP2022092599A (en) 2022-06-22
US20220188545A1 (en) 2022-06-16
GB202116917D0 (en) 2022-01-05
GB2604977A (en) 2022-09-21
DE102021129177A1 (en) 2022-06-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination