US20220262358A1 - Providing enhanced functionality in an interactive electronic technical manual - Google Patents

Providing enhanced functionality in an interactive electronic technical manual

Info

Publication number
US20220262358A1
Authority
US
United States
Prior art keywords
user
content
module
verbal command
action
Prior art date
Legal status
Pending
Application number
US17/249,051
Inventor
Ran Meriaz
Yoram Meriaz
Alexander Tkachman
Current Assignee
MBTE HOLDINGS SWEDEN AB
Original Assignee
MBTE HOLDINGS SWEDEN AB
Priority date
Filing date
Publication date
Application filed by MBTE HOLDINGS SWEDEN AB filed Critical MBTE HOLDINGS SWEDEN AB
Priority to US17/249,051
Priority claimed from US17/249,039 (US11967317B2)
Assigned to MBTE Holdings Sweden AB. Assignors: MERIAZ, YORAM; TKACHMAN, ALEXANDER; MERIAZ, RAN
Publication of US20220262358A1

Classifications

    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G06F 40/109: Font handling; Temporal or kinetic typography
    • G06F 21/31: User authentication
    • G06F 21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0483: Interaction with page-structured environments, e.g. book metaphor
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 40/10: Text processing
    • G06F 40/106: Display of layout of documents; Previewing
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/001: Texturing; Colouring; Generation of texture or colour
    • G06T 11/203: Drawing of straight lines or curves
    • G06F 2221/2111: Location-sensitive, e.g. geographical location, GPS
    • G06F 3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06N 20/00: Machine learning
    • G10L 2015/223: Execution procedure of a spoken command

Definitions

  • Embodiments of the present disclosure generally relate to providing enhanced functionality in an interactive electronic technical manual (IETM).
  • the inventors have developed solutions that increase the efficiency, functionality, speed, capabilities, and user friendliness over conventional IETMs.
  • IETMs and other technical data generally hold large amounts of information that can include multiple volumes and hundreds or thousands of data modules when in electronic format.
  • when users of IETMs, or other technical data that are provided electronically, need to look for a specific subject, they need to go over a lengthy electronic table of contents, similar to a paper book, but using links, which can include nested subsystems (and sub-subsystems) within systems.
  • This requires the users to know not only the exact nomenclature of the item they seek (many times this is unknown), but how to navigate through the seemingly endless array of nested data.
  • IETMs provide some type of interactive functionality with respect to the technical data that allows users to interactively view the data
  • such functionality is typically limited in its capabilities and does not address many of the technical issues encountered when providing an electronic interface for a large amount of information, nor does it provide technical improvements that offer features beyond simply allowing the user to view such information.
  • the technical data may involve information that is highly confidential such as information on military equipment.
  • Many conventional IETMs fail to provide functionality to control secure access to the technical data, as well as control user functionality within the IETMs in viewing and using the technical data in a secure manner.
  • embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer.
  • a method for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer is provided.
  • the method comprises: providing a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receiving a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identifying, via one or more processors, a focus of a first portion of the content; and causing, via the one or more processors, a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receiving a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identifying, via the one or more processors, a focus of a second portion of the content; and causing, via the one or more processors, a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • an apparatus comprises at least one processor and at least one memory comprising computer program code.
  • the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to: provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identify a focus of a first portion of the content; and cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identify a focus of a second portion of the content; and cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • a non-transitory computer storage medium comprises instructions stored thereon.
  • the instructions being configured to cause one or more processors to at least perform operations configured to: provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identify a focus of a first portion of the content; and cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identify a focus of a second portion of the content; and cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • one or more features of the first verbal command may be processed using a verbal command machine learning model to generate the first action.
  • one or more features of the second verbal command may be processed using the verbal command machine learning model to generate the second action.
  • the verbal command machine learning model may be trained using first training data comprising a first plurality of samples of the user speaking the first verbal command for the first action and second training data comprising a second plurality of samples of the user speaking the second verbal command for the second action.
  • the user may identify the first action for the first verbal command and the second action for the second verbal command.
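  • For illustration only, the sketch below shows one way such a verbal command machine learning model could look: a minimal nearest-centroid classifier trained on feature vectors derived from samples of the user speaking each command, with the action labels taken to be the ones the user has identified. The feature values, labels, and function names (train, predict) are assumptions for this sketch, not the disclosure's actual model or training pipeline.

```python
# Hedged illustration of a "verbal command machine learning model": a trivial
# nearest-centroid classifier trained on feature vectors extracted from samples
# of the user speaking each command. The audio-to-feature step is assumed to
# happen upstream; the numbers, labels, and function names are hypothetical.
import numpy as np


def train(training_data: dict[str, list[list[float]]]) -> dict[str, np.ndarray]:
    """Map each user-identified action to the centroid of that command's samples."""
    return {
        action: np.asarray(samples).mean(axis=0)
        for action, samples in training_data.items()
    }


def predict(model: dict[str, np.ndarray], features: list[float]) -> str:
    """Return the action whose centroid is closest to the new command's features."""
    x = np.asarray(features)
    return min(model, key=lambda action: float(np.linalg.norm(model[action] - x)))


model = train({
    # first training data: samples of the user speaking the first verbal command
    "complete_step": [[0.90, 0.10, 0.20], [0.80, 0.20, 0.10], [0.85, 0.15, 0.15]],
    # second training data: samples of the user speaking the second verbal command
    "show_parts": [[0.10, 0.90, 0.70], [0.20, 0.80, 0.80], [0.15, 0.85, 0.75]],
})
print(predict(model, [0.88, 0.12, 0.18]))  # -> "complete_step"
```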
  • the focus of the first portion of the content may comprise a selection of the first portion of the content via a selection verbal command received as a result of the user speaking the selection verbal command that is detected by the audio input of the user computing entity.
  • the first action may comprise causing the first user interface control element to at least one of convey input, navigate to a particular section of the first portion of the content, or display other content associated with the first portion of the content.
  • the focus of the second portion of the content may result from the first action being performed with respect to the first user interface control element associated with the first portion of the content.
  • the content may comprise a plurality of sequential portions of the content
  • the second portion of the content may immediately follow the first portion of the content in the plurality of sequential portions of the content
  • the first action with respect to the first user interface control element associated with the first portion of the content may comprise setting the first user interface control element associated with the first portion of the content to indicate a completion of the first portion of the content.
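  • As a concrete, hedged reading of the flow summarized above (not the actual implementation), the sketch below identifies the focused portion of the content, performs the mapped action on that portion's user interface control element (here, setting it to indicate completion), and then advances focus to the immediately following portion so a second verbal command acts on it. The class and method names (IetmViewer, ContentPortion, UiControlElement, handle_verbal_command) and the hard-coded command-to-action mapping are hypothetical.

```python
# Hedged sketch only: a toy IETM viewer window handling successive verbal
# commands against focused portions of content. The disclosure describes the
# command-to-action mapping as learned or user-defined rather than hard-coded.
from dataclasses import dataclass


@dataclass
class UiControlElement:
    """User interface control element associated with a portion of the content."""
    name: str
    completed: bool = False


@dataclass
class ContentPortion:
    text: str
    control: UiControlElement


class IetmViewer:
    def __init__(self, portions: list[ContentPortion]):
        self.portions = portions
        self.focus = 0  # index of the currently focused portion of the content

    def handle_verbal_command(self, command: str) -> None:
        portion = self.portions[self.focus]                  # identify the focus
        action = {"done": "complete", "next step": "complete"}.get(command, "noop")
        if action == "complete":
            portion.control.completed = True                 # indicate completion of this portion
            if self.focus + 1 < len(self.portions):
                self.focus += 1                              # focus moves to the next sequential portion
        print(f"{portion.control.name}: {action}")


viewer = IetmViewer([
    ContentPortion("Step 1: remove the access panel", UiControlElement("step-1 checkbox")),
    ContentPortion("Step 2: disconnect the connector", UiControlElement("step-2 checkbox")),
])
viewer.handle_verbal_command("done")  # first verbal command acts on the first portion
viewer.handle_verbal_command("done")  # second verbal command acts on the second portion
```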
  • FIG. 1 is a diagram of a system architecture that can be used in conjunction with various embodiments of the present disclosure
  • FIG. 2 is a schematic of a management computing entity that may be used in conjunction with various embodiments of the present disclosure
  • FIG. 3 is a schematic of a user computing entity that may be used in conjunction with various embodiments of the present disclosure
  • FIG. 4 is a process flow for signing in a user to an IETM in accordance with various embodiments of the present disclosure
  • FIGS. 5A and 5B provide examples of a sign-in window that may be used in accordance with various embodiments of the present disclosure
  • FIGS. 5C and 5D provide examples of user reports that may be used in accordance with various embodiments of the present disclosure
  • FIG. 6 is a process flow for viewing and interacting with a table of contents provided by an IETM in accordance with various embodiments of the present disclosure
  • FIG. 7 provides an example of a window displaying a table of contents in accordance with various embodiments of the present disclosure
  • FIG. 8 is a process flow for filtering a table of contents in accordance with various embodiments of the present disclosure.
  • FIG. 9 provides an example of a window displaying a table of contents that has been filtered in accordance with various embodiments of the present disclosure
  • FIG. 10 is a process flow for tagging content with formatting found in a source of the content in accordance with various embodiments of the present disclosure
  • FIG. 11 is a process flow for formatting content based at least in part on a format structure found in a source of the content in accordance with various embodiments of the present disclosure
  • FIG. 12A provides an example of a table of contents formatted according to S1000D standards
  • FIG. 12B provides an example of a table of contents formatted according to a format structure found in one or more sources of the contents
  • FIG. 12C provides an example of content from a source formatted according to a format structure found in the source
  • FIG. 13 is a process flow for searching a table of contents in accordance with various embodiments of the present disclosure.
  • FIG. 14 is a process flow for providing one or more predictions based at least in part on search term(s) in accordance with various embodiments of the present disclosure
  • FIGS. 15A and 15B provide examples of a search window in accordance with various embodiments of the present disclosure
  • FIG. 16 is a process flow for generating a list of parts in accordance with various embodiments of the present disclosure.
  • FIG. 17 is a process flow for displaying a list of parts in accordance with various embodiments of the present disclosure.
  • FIG. 18A provides an example of a window displaying a list of parts in accordance with various embodiments of the present disclosure
  • FIG. 18B provides an example of a mechanism for identifying levels for relisting a list of parts in accordance with various embodiments of the present disclosure
  • FIG. 18C provides an example of a preview displaying information for a supplier in accordance with various embodiments of the present disclosure
  • FIG. 18D provides an example of a preview displaying a list of other items that use a part in accordance with various embodiments of the present disclosure
  • FIG. 19 is a process flow for allowing a user to order a part via an IETM in accordance with various embodiments of the present disclosure
  • FIG. 20 is a process flow for submitting an order for a part via an IETM in accordance with various embodiments of the present disclosure
  • FIG. 21A provides an example of a window in which an option to order a part is provided in accordance with various embodiments of the present disclosure
  • FIG. 21B provides an example of an electronic order form that can be used to order a part in accordance with various embodiments of the present disclosure
  • FIG. 21C provides an example of a graphical code that can be scanned to order a part in accordance with various embodiments of the present disclosure
  • FIG. 22 is a process flow for displaying content for a topic found in technical documentation for an item in accordance with various embodiments of the present disclosure
  • FIG. 23 is a process flow for causing parts found in textual information to be displayed as selectable in accordance with various embodiments of the present disclosure
  • FIG. 24 is a process flow for causing applicability found in textual information to be displayed as selectable in accordance with various embodiments of the present disclosure
  • FIG. 25 is a process flow for locking content in accordance with various embodiments of the present disclosure.
  • FIG. 26 is a process flow for setting a security classification for specific content in accordance with various embodiments of the present disclosure.
  • FIG. 27 provides an example of security classification formatting and functionality set for the display of content in accordance with various embodiments of the present disclosure
  • FIGS. 28A and 28B provide a process flow for invoking functionality provided for a topic in accordance with various embodiments of the present disclosure
  • FIG. 29 is a process flow for displaying related information for a part in accordance with various embodiments of the present disclosure.
  • FIG. 30 provides an example of related information displayed for a part in accordance with various embodiments of the present disclosure.
  • FIG. 31 is a process flow for displaying information on the meaning of an occurrence of applicability in accordance with various embodiments of the present disclosure
  • FIG. 32 provides an example of displaying information on the meaning of an occurrence of applicability in accordance with various embodiments of the present disclosure
  • FIG. 33 is a process flow for displaying a data source for a topic in accordance with various embodiments of the present disclosure
  • FIG. 34A provides an example of a section of a data source displayed in accordance with various embodiments of the present disclosure
  • FIG. 34B provides an example of an entire data source displayed in accordance with various embodiments of the present disclosure
  • FIG. 35 is a process flow for generating an annotation in accordance with various embodiments of the present disclosure.
  • FIG. 36A provides an example of a generated annotation in accordance with various embodiments of the present disclosure.
  • FIG. 36B provides an example of a change request form in accordance with various embodiments of the present disclosure.
  • FIG. 36C provides an example of a selection mechanism to generate an annotation in accordance with various embodiments of the present disclosure
  • FIG. 36D provides an example of a report of change requests submitted by a user in accordance with various embodiments of the present disclosure
  • FIG. 36E provides an example of a list of annotations generated by a user in accordance with various embodiments of the present disclosure
  • FIG. 37A is a process flow for configuring enhancing, relevant, and/or irrelevant formats in accordance with various embodiments of the present disclosure
  • FIG. 37B is a process flow for assessing the steps found in a sequence in accordance with various embodiments of the present disclosure.
  • FIGS. 38A-E provide examples of sequential information in which current steps, or steps that have been skipped, are displayed using various formats in accordance with various embodiments of the present disclosure
  • FIG. 39 is a process flow for unlocking content as a result of a user acknowledging an alert in accordance with various embodiments of the present disclosure
  • FIG. 40A provides an example of a portion of content that has been locked in accordance with various embodiments of the present disclosure
  • FIG. 40B provides an example of a portion of content that has been unlocked in accordance with various embodiments of the present disclosure
  • FIG. 41 is a process flow for facilitating a user transferring a job in accordance with various embodiments of the present disclosure
  • FIG. 42 is a process flow for facilitating a user resuming a suspended job in accordance with various embodiments of the present disclosure
  • FIG. 43A is an example of a mechanism to enable a user to transfer or resume a job in accordance with various embodiments of the present disclosure
  • FIG. 43B is an example of a job transfer window in accordance with various embodiments of the present disclosure.
  • FIG. 43C is an example of a procedure that has been suspended in accordance with various embodiments of the present disclosure.
  • FIG. 43D is an example of a procedure that has been resumed in accordance with various embodiments of the present disclosure.
  • FIG. 44 is a process flow for causing media content that is displayed to be updated based at least in part on a user scrolling through textual information in accordance with various embodiments of the present disclosure
  • FIG. 45 provides an example of media content being updated as a user scrolls through textual information in accordance with various embodiments of the present disclosure
  • FIG. 46A is a process flow for causing display of pins for a connector as highlighted in media content or referenced in textual information in accordance with various embodiments of the present disclosure
  • FIGS. 46B and 46C provide examples of pins highlighted in an illustration in accordance with various embodiments of the present disclosure
  • FIG. 47A is a process flow for causing display of a unit as highlighted in media content or referenced in textual information in accordance with various embodiments of the present disclosure
  • FIG. 47B provides an example of a unit highlighted in an illustration in accordance with various embodiments of the present disclosure.
  • FIG. 47C provides an example of units highlighted in textual information in accordance with various embodiments of the present disclosure.
  • FIG. 48 is a process flow for providing functionality when a user reaches the end of content for a topic in accordance with various embodiments of the present disclosure
  • FIG. 49A provides an example of an end of topic mechanism in accordance with various embodiments of the present disclosure
  • FIG. 49B provides an example of a table of contents displayed as a result of invoking end of module functionality in accordance with various embodiments of the present disclosure
  • FIG. 50A is a process flow for enabling a user to set up verbal commands in accordance with various embodiments of the present disclosure
  • FIG. 50B is a process flow for processing a verbal command in accordance with various embodiments of the present disclosure.
  • FIG. 51A is a process flow for providing functionality for wiring data in accordance with various embodiments of the present disclosure.
  • FIG. 51B provides an example of an electrical schematic displayed in accordance with various embodiments of the present disclosure.
  • FIG. 51C provides an example of a preview of a connector in accordance with various embodiments of the present disclosure.
  • FIG. 51D provides an example of a list of components displayed in an electrical schematic in accordance with various embodiments of the present disclosure.
  • FIG. 51E provides an example of a list of other electrical schematics that display a selected component in accordance with various embodiments of the present disclosure
  • FIG. 52 is a process flow for providing live wire functionality for a selected wire in accordance with various embodiments of the present disclosure
  • FIG. 53 is an example of a wire diagram in accordance with various embodiments of the present disclosure.
  • FIG. 54 is a process flow for providing crosshairs on a graph in accordance with various embodiments of the present disclosure.
  • FIG. 55 is an example of crosshairs placed on a graph in accordance with various embodiments of the present disclosure.
  • FIG. 56 is a process flow for providing functionality for media content involving 3D graphics in accordance with various embodiments of the present disclosure
  • FIGS. 57A-D provide examples of a table of parts and a 3D graphic displayed in accordance with various embodiments of the present disclosure
  • FIGS. 57E and 57F provide examples of a part removed from a 3D graphic in accordance with various embodiments of the present disclosure
  • FIGS. 57G and 57H provide examples of a part solely displayed in a 3D graphic in accordance with various embodiments of the present disclosure
  • FIG. 57I provides an example of axes on a 3D graphic displayed in accordance with various embodiments of the present disclosure
  • FIG. 58 is a process flow for providing components in media content as identified in a hierarchy in accordance with various embodiments of the present disclosure
  • FIG. 59A provides an example of a hierarchy of components displayed for components found in media content in accordance with various embodiments of the present disclosure
  • FIG. 59B provides an example of a report displayed of components illustrated in media content but not listed in accordance with various embodiments of the present disclosure
  • FIG. 60 is a process flow for allowing a user to initiate communication sessions within an IETM environment in accordance with various embodiments of the present disclosure
  • FIG. 61A is an example of a selection mechanism to enable a user to access communication session functionality in accordance with various embodiments of the present disclosure
  • FIG. 61B is an example of a display to enable a user to initiate a communication session within an IETM in accordance with various embodiments of the present disclosure
  • FIG. 61C is an example of a communication window that is displayed once a communication session is established in accordance with various embodiments of the present disclosure
  • FIG. 61D is an example of a communication window in which a user has shared his or her window to other users involved in a communication session in accordance with various embodiments of the present disclosure
  • FIG. 62 is a process flow for addressing warnings and/or cautions shown on a caution panel found on an item in accordance with various embodiments of the present disclosure
  • FIG. 63A provides an example of a virtual caution panel in accordance with various embodiments of the present disclosure
  • FIG. 63B provides an example of a corrective action provided for one or more warnings and/or cautions in accordance with various embodiments of the present disclosure
  • FIG. 64 is a process flow for generating a workflow for loading articles onto and/or into an object of an item in accordance with various embodiments of the present disclosure
  • FIG. 65A provides an example of a display of a digital model of an aircraft to be loaded with articles in accordance with various embodiments of the present disclosure
  • FIG. 65B provides an example of display of a digital workflow in the form of a table of contents in accordance with various embodiments of the present disclosure
  • FIG. 66 is a process flow for managing a workflow for loading articles onto and/or into an object for an item in accordance with various embodiments of the present disclosure
  • FIG. 67 is a process flow for securely integrating the use of a network connected with a remote device with an IETM environment in accordance with various embodiments of the present disclosure
  • FIG. 68 is a process flow for providing a virtual network within an IETM environment in accordance with various embodiments of the present disclosure.
  • FIG. 69 is a process flow for importing data for the technical documentation for an item into an IETM in accordance with various embodiments of the present disclosure.
  • although IETMs oftentimes provide interactive functionality to users who are viewing technical documentation via the IETMs, such functionality is normally limited to simply viewing the documentation in different formats.
  • a conventional IETM may provide a digital model of an apparatus, machine, vehicle, equipment, and/or the like (e.g., illustrations) that allows the user to select a component for the apparatus, machine, vehicle, equipment, and/or the like displayed in the model to view documentation on the component.
  • this capability is typically the extent of the interactive functionality provided in the IETM.
  • requiring users to use multiple systems to view technical documentation on an apparatus, machine, vehicle, equipment, and/or the like and perform various tasks with respect to the apparatus, machine, vehicle, equipment, and/or the like can present many technical challenges. For instance, requiring users who are viewing technical documentation through an IETM to use other systems to perform tasks outside of viewing the documentation necessitates separate security measures to be implemented within the multiple systems. Managing these separate security measures within each of the systems can lead to multiple challenges in providing secure environments, as well as to further inefficiencies for users, systems, storage, networking, and/or equipment.
  • this large volume of documentation may involve viewing and interacting with textual documentation and/or media content (e.g., illustrations) on several different topics.
  • a user may be performing maintenance on a component and may wish to view technical documentation via the IETM on the component, on a maintenance procedure the user is performing on the component, as well as on a part being used in performing the maintenance procedure.
  • the user may need to view the technical documentation for the different topics by interchangeably moving back-and-forth between the technical documentation for the different topics.
  • a technical challenge often encountered in conventional IETMs is facilitating the user's ability to move back-and-forth between technical documentation for different topics.
  • the technical documentation involves a large volume of information.
  • a user may be viewing documentation through an IETM on a maintenance procedure while out in the field performing the procedure.
  • the user may be required to scroll through the documentation on the maintenance procedure while performing the procedure.
  • the user may need to use both his or her hands in performing the maintenance procedure and, as a result, may not be able to interact with a device (e.g., laptop computer or mobile device) being used by the user to view the IETM as required by many conventional IETMs.
  • the user may be faced with some type of physical challenge that may make it inconvenient and/or impractical for the user to interact with and/or comprehend documentation through the IETM.
  • the user may be required to use a mobile device such as a smartphone or tablet to access the IETM and view technical documentation.
  • the content for the documentation may be shown in a font size that is difficult for the user to read.
  • simply increasing the font size for the documentation may be impractical in that the bigger font size may require the user to manipulate the documentation (e.g., navigate around the documentation on the screen of his or her device) very often to view certain portions of the documentation and/or to perform certain functionality.
  • conventional IETMs do not provide functionality to allow the user to selectively enhance content so that it may be easier for the user to comprehend.
  • the user may have a physical challenge that can make it difficult for the user to physically interact with his or her device being used to access the IETM in a manner required by many conventional IETMs.
  • various embodiments of the present disclosure address the above-mentioned technical problems and challenges encountered with many conventional IETMs. Specifically, various embodiments of the present disclosure provide functionality beyond simply presenting an interactive environment to view technical documentation on items found in conventional IETMs. In addition, various embodiments of the present disclosure provide such functionality within a secure environment that is more easily administered and maintained over conventional configurations involving a user having to use multiple systems to perform such functionality. In addition, various embodiments of the present disclosure provide functionality that allows a user to view, comprehend, convey, and interact with content within an IETM environment through enhanced capabilities not found in conventional IETMs.
  • various embodiments of the present disclosure facilitate the display of and interaction with technical documentation within an IETM environment by displaying, positioning, and/or organizing the technical documentation in a more optimal manner than conventional IETMs through the use of unique and novel configurations of display windows, view panes, and/or the like.
  • the disclosed solution provided herein is more effective, efficient, timely, accurate, faster, and provides more functionality than found in conventional IETMs.
  • incorporation of such functionality into an IETM enables users to use such functionality in a more secure environment.
  • the disclosed solution provided herein enables presentation of technical documentation in a more optimal manner than conventional IETMs to facilitate the use of such documentation. Incorporating such functionality and presentation of technical documentation provides the advantage of allowing users to carry out many tasks in a shorter timeframe than under conventional IETMs.
  • the disclosed solution can result in reduced network traffic, require fewer computational resources, allow for less memory usage, and/or the like.
  • various embodiments of the present disclosure make significant technical contributions to improving the efficiency, reliability, and functionality in providing technical documentation within an IETM environment.
  • Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture.
  • Such computer program products may include one or more software components including, for example, software objects, methods, data structures, and/or the like.
  • a software component may be coded in any of a variety of programming languages.
  • An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform.
  • a software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform.
  • Another example programming language may be a higher-level programming language that may be portable across multiple architectures.
  • a software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution.
  • programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language.
  • a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form.
  • a software component may be stored as a file or other data storage construct.
  • Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library.
  • Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution).
  • a computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably).
  • Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
  • a non-volatile computer-readable storage medium may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid state drive (SSD), solid state card (SSC), solid state module (SSM), or enterprise flash drive), magnetic tape, any other non-transitory magnetic medium, and/or the like.
  • a non-volatile computer-readable storage medium may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like.
  • Such a non-volatile computer-readable storage medium may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like.
  • a non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magnetoresistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like.
  • a volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like.
  • embodiments of the present disclosure may also be implemented as methods, apparatus, systems, computing devices, computing entities, and/or the like.
  • embodiments of the present disclosure may take the form of a data structure, apparatus, system, computing device, computing entity, and/or the like executing instructions stored on a computer-readable storage medium to perform certain steps or operations.
  • embodiments of the present disclosure may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment, and/or an embodiment that comprises a combination of computer program products and hardware performing certain steps or operations.
  • retrieval, loading, and/or execution may be performed in parallel, such that multiple instructions are retrieved, loaded, and/or executed together.
  • such embodiments can produce specifically configured machines performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations, or steps.
  • FIG. 1 provides an illustration of an exemplary system architecture that may be used in accordance with various embodiments of the present disclosure.
  • the architecture may include one or more management computing entities 100 , one or more networks 105 , and one or more user computing entities 110 .
  • Each of these components, entities, devices, systems, and similar words used herein interchangeably may be in direct or indirect communication with, for example, one another over the same or different wired or wireless networks.
  • while FIG. 1 illustrates the various system entities as separate, standalone entities, the various embodiments are not limited to this particular architecture.
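  • A minimal, hypothetical stand-in for this architecture is sketched below: a management computing entity holding IETM data modules and a user computing entity that requests one for display. The class names and the fetch_data_module call are illustrative assumptions; in practice the request would travel over the wired or wireless networks 105.

```python
# Hypothetical, minimal stand-in for the FIG. 1 architecture: a management
# computing entity serving technical documentation to a user computing entity.
# The class names and the request/return shape are assumptions for illustration.
from dataclasses import dataclass, field


@dataclass
class ManagementComputingEntity:
    """Holds the technical documentation (e.g., IETM data modules) centrally."""
    data_modules: dict[str, str] = field(default_factory=dict)

    def fetch_data_module(self, module_id: str) -> str:
        # In a deployment this request would arrive over a wired or wireless network.
        return self.data_modules[module_id]


@dataclass
class UserComputingEntity:
    """Runs the IETM viewer and requests content from the management entity."""
    server: ManagementComputingEntity

    def view(self, module_id: str) -> None:
        print(self.server.fetch_data_module(module_id))


server = ManagementComputingEntity({"engine-overview": "Engine overview content ..."})
UserComputingEntity(server).view("engine-overview")
```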
  • FIG. 2 provides a schematic of a management computing entity 100 according to one embodiment of the present disclosure.
  • the terms computing entity, computer, entity, device, system, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktop computers, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, gaming consoles (e.g., Xbox, Play Station, Wii), watches, glasses, iBeacons, proximity beacons, key fobs, radio frequency identification (RFID) tags, ear pieces, scanners, televisions, dongles, cameras, wristbands, wearable items/devices, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein.
  • Such functions, operations, and/or processes may include, for example, transmitting, receiving, operating on, processing, displaying, storing, determining, creating/generating, monitoring, evaluating, comparing, and/or similar terms used herein interchangeably. In one embodiment, these functions, operations, and/or processes can be performed on data, content, information, and/or similar terms used herein interchangeably.
  • the management computing entity 100 may also include one or more communications interfaces 220 for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like.
  • the management computing entity 100 may communicate with user computing entities 110 and/or a variety of other computing entities.
  • the management computing entity 100 may include or be in communication with one or more processing elements 205 (also referred to as processors, processing circuitry, and/or similar terms used herein interchangeably) that communicate with other elements within the management computing entity 100 via a bus, for example.
  • the processing element 205 may be embodied in a number of different ways.
  • the processing element 205 may be embodied as one or more complex programmable logic devices (CPLDs), microprocessors, multi-core processors, coprocessing entities, application-specific instruction-set processors (ASIPs), microcontrollers, and/or controllers.
  • the processing element 205 may be embodied as one or more other processing devices or circuitry.
  • circuitry may refer to an entirely hardware embodiment or a combination of hardware and computer program products.
  • the processing element 205 may be embodied as integrated circuits, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), hardware accelerators, other circuitry, and/or the like.
  • the processing element 205 may be configured for a particular use or configured to execute instructions stored in volatile or non-volatile media or otherwise accessible to the processing element 205 .
  • the processing element 205 may be capable of performing steps or operations according to embodiments of the present disclosure when configured accordingly.
  • the management computing entity 100 may further include or be in communication with non-volatile media (also referred to as non-volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably).
  • non-volatile storage or memory may include one or more non-volatile storage or memory media 210 , including but not limited to hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like.
  • the non-volatile storage or memory media may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like.
  • database, database instance, database management system, and/or similar terms used herein interchangeably may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like.
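  • As one hedged example of the relational case, the sketch below uses Python's built-in sqlite3 module to store IETM data modules together with a security classification and to retrieve only the modules a signed-in user is cleared to view; the table name, columns, and sample rows are assumptions for illustration.

```python
# Hypothetical relational layout for IETM content (table and columns are made up).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE data_module ("
    " id TEXT PRIMARY KEY,"
    " title TEXT,"
    " security_classification TEXT,"
    " content TEXT)"
)
conn.executemany(
    "INSERT INTO data_module VALUES (?, ?, ?, ?)",
    [
        ("DM-001", "Hydraulic pump removal", "UNCLASSIFIED", "..."),
        ("DM-002", "Radar unit maintenance", "CONFIDENTIAL", "..."),
    ],
)
# Retrieve only the modules the signed-in user is cleared to view.
for row in conn.execute(
    "SELECT id, title FROM data_module WHERE security_classification = ?",
    ("UNCLASSIFIED",),
):
    print(row)
```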
  • the management computing entity 100 may further include or be in communication with volatile media (also referred to as volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably).
  • volatile storage or memory may also include one or more volatile storage or memory media 215 , including but not limited to RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like.
  • the volatile storage or memory media may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element 205 .
  • the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the management computing entity 100 with the assistance of the processing element 205 and operating system.
  • the management computing entity 100 may also include one or more communications interfaces 220 for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like.
  • Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol.
  • the management computing entity 100 may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1X (1xRTT), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol.
  • the management computing entity 100 may include or be in communication with one or more input elements, such as a keyboard input, a mouse input, a touch screen/display input, motion input, movement input, audio input, pointing device input, joystick input, keypad input, and/or the like.
  • the management computing entity 100 may also include or be in communication with one or more output elements (not shown), such as audio output, video output, screen/display output, motion output, movement output, and/or the like.
  • one or more of the management computing entity's 100 components may be located remotely from other management computing entity 100 components, such as in a distributed system. Furthermore, one or more of the components may be combined and additional components performing functions described herein may be included in the management computing entity 100 .
  • the management computing entity 100 can be adapted to accommodate a variety of needs and circumstances. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
  • a user may be an individual, a family, a company, an organization, an entity, a department within an organization, a representative of an organization and/or person, and/or the like. To do so, a user may operate a user computing entity 110 that includes one or more components that are functionally similar to those of the management computing entity 100 .
  • FIG. 3 provides an illustrative schematic representative of a user computing entity 110 that can be used in conjunction with embodiments of the present disclosure.
  • the terms device, system, computing entity, entity, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktops, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, gaming consoles (e.g., Xbox, Play Station, Wii), watches, glasses, key fobs, radio frequency identification (RFID) tags, ear pieces, scanners, cameras, wristbands, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein.
  • User computing entities 110 can be operated by various parties. As shown in FIG. 3 , the user computing entity 110 can include an antenna 312 , a transmitter 304 (e.g., radio), a receiver 306 (e.g., radio), and a processing element 308 (e.g., CPLDs, microprocessors, multi-core processors, coprocessing entities, ASIPs, microcontrollers, and/or controllers) that provides signals to and receives signals from the transmitter 304 and receiver 306 , respectively.
  • the signals provided to and received from the transmitter 304 and the receiver 306 , respectively, may include signaling information in accordance with air interface standards of applicable wireless systems.
  • the user computing entity 110 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the user computing entity 110 may operate in accordance with any of a number of wireless communication standards and protocols, such as those described above with regard to the management computing entity 100 .
  • the user computing entity 110 may operate in accordance with multiple wireless communication standards and protocols, such as UMTS, CDMA2000, 1×RTT, WCDMA, TD-SCDMA, LTE, E-UTRAN, EVDO, HSPA, HSDPA, Wi-Fi, Wi-Fi Direct, WiMAX, UWB, IR, NFC, Bluetooth, USB, and/or the like.
  • the user computing entity 110 may operate in accordance with multiple wired communication standards and protocols, such as those described above with regard to the management computing entity 100 via a network interface 320 .
  • the user computing entity 110 can communicate with various other entities using concepts such as Unstructured Supplementary Service Data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer).
  • the user computing entity 110 can also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), and operating system.
  • the user computing entity 110 may include location determining aspects, devices, modules, functionalities, and/or similar words used herein interchangeably.
  • the user computing entity 110 may include outdoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, universal time (UTC), date, and/or various other information/data.
  • the location module can acquire data, sometimes known as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites.
  • the satellites may be a variety of different satellites, including Low Earth Orbit (LEO) satellite systems, Department of Defense (DOD) satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like.
  • the location information can be determined by triangulating the user computing entity's 110 position in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like.
  • the user computing entity 110 may include indoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data.
  • Some of the indoor systems may use various position or location technologies including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing devices (e.g., smartphones, laptops) and/or the like.
  • such technologies may include the iBeacons, Gimbal proximity beacons, Bluetooth Low Energy (BLE) transmitters, NFC transmitters, and/or the like.
  • the user computing entity 110 may also comprise an IETM viewer (that can include a display 316 coupled to a processing element 308 ) and/or a viewer (coupled to a processing element 308 ).
  • the IETM viewer may be a user application, browser, user interface, graphical user interface, and/or similar words used herein interchangeably executing on and/or accessible via the user computing entity 110 to interact with and/or cause display of information from the management computing entity 100 , as described herein.
  • the term “viewer” is used generically and is not limited to “viewing.” Rather, the viewer is a multi-purpose digital data viewer capable of receiving input and providing output.
  • the viewer can comprise any of a number of devices or interfaces allowing the user computing entity 110 to receive data, such as a keypad 318 (hard or soft), a touch display, voice/speech or motion interfaces, or other input device.
  • the keypad 318 can include (or cause display of) the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the user computing entity 110 and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys.
  • the viewer can be used, for example, to activate or deactivate certain functions, such as screen savers and/or sleep modes.
  • the user computing entity 110 can also include volatile storage or memory 322 and/or non-volatile storage or memory 324 , which can be embedded and/or may be removable.
  • the non-volatile memory may be ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like.
  • the volatile memory may be RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like.
  • the volatile and non-volatile storage or memory can store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like to implement the functions of the user computing entity 110 . As indicated, this may include a user application that is resident on the entity or accessible through a browser or other IETM viewer for communicating with the management computing entity 100 and/or various other computing entities.
  • the user computing entity 110 may include one or more components or functionality that are the same or similar to those of the management computing entity 100 , as described in greater detail above.
  • these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
  • the logical operations described herein may be implemented (1) as a sequence of computer implemented acts or one or more program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system.
  • the implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These states, operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. Greater or fewer operations may be performed than shown in the figures and described herein. These operations may also be performed in a different order than those described herein.
  • the management computing entity 100 and/or user computing entity 110 may be configured for storing technical documentation (e.g., data) in an IETM, providing access to the technical documentation to a user via the IETM, and/or providing functionality to the user accessing the technical documentation via the IETM.
  • the technical documentation is typically made up of volumes of text along with other media objects.
  • the technical documentation is arranged to provide the text and/or the media objects on an item.
  • the item may be a product, machinery, equipment, a system, and/or the like such as, for example, a bicycle or an aircraft.
  • the technical documentation may provide textual information along with non-textual information (e.g., one or more visual representations) of the item and/or components of the item.
  • Textual information generally includes alphanumeric information and may also include different element types such as graphical features, controls, and/or the like.
  • Non-textual information generally includes media content such as illustrations (e.g., 2D and 3D graphics), video, audio, and/or the like, although the non-textual information may also include alphanumeric information.
  • the technical documentation may be provided as digital media in any of a variety of formats, such as JPEG, JFIF, JPEG2000, EXIF, TIFF, RAW, DIV, GIF, BMP, PNG, PPM, MOV, AVI, MP4, MKV, and/or the like.
  • the technical documentation may be provided in any of a variety of formats, such as DOCX, HTML5, TXT, PDF, XML, SGML, JSON, and/or the like.
  • the technical documentation may provide textual and non-textual information of various components of the item. For example, various information may be provided with respect to assemblies, sub-assemblies, sub-sub-assemblies, systems, subsystems, sub-subsystems, individual parts, and/or the like associated with the item.
  • the technical documentation for the item may be stored and/or provided in accordance with S1000D standards and/or a variety of other standards.
  • the management computing entity 100 and/or user computing entity 110 provides functionality in the access and use of the technical documentation provided via the IETM in accordance with user instructions and/or input received from the user via an IETM viewer (e.g., a browser, a window, an application, a graphical user interface, and/or the like).
  • the IETM viewer is accessible from a user computing entity 110 that may or may not be in communication with the management computing entity 100 .
  • a user may sign into the management computing entity 100 from the user computing entity 110, or solely into the user computing entity 110, to access technical documentation via the IETM. The management computing entity 100 and/or user computing entity 110 may be configured to recognize any such sign-in request, verify the user has permission to access the technical documentation (e.g., by verifying the user's credentials), and present/provide the user with various displays of content for the technical documentation via the IETM viewer (e.g., displayed on display 316).
  • modules now discussed and configured for carrying out various functionality may be invoked, executed, and/or the like by the management computing entity 100 , the user computing entity 110 , and/or a combination thereof depending on the embodiment.
  • a user may be required to sign-in on a device (e.g., a user computing entity 110 ) to gain access to the technical documentation for an item through an IETM.
  • the user's device (e.g., user computing entity 110) and/or a management computing entity 100 may be configured for facilitating the user's access to the technical documentation.
  • the technical documentation may be stored locally on the user's computing entity 110 and therefore, the user's computing entity 110 is configured to facilitate the user's access to the documentation without cooperation of the management computing entity 100 .
  • the user's computing entity 110 and the management computing entity 100 may be in communication and work in concert to provide access to the technical documentation to the user.
  • FIG. 4 is a flow diagram showing a sign-in module for performing such functionality according to various embodiments of the disclosure.
  • the user may open the IETM residing on his or her user computing entity 110 to gain access to technical documentation for a particular item.
  • the user may open an IETM viewer (e.g., browser) to gain access to the technical documentation residing remotely on the management computing entity 100 .
  • the IETM may be provided as a software-as-a-service over some type of network.
  • the technical documentation may be stored locally on the user's computing entity 110 or remotely on the management computing entity 100 that the user computing entity 110 communicates with to access the documentation.
  • the process flow 400 begins in various embodiments with the sign-in module providing a sign-in page (e.g., webpage), screen, window, graphical user interface, and/or the like viewable by the user via an IETM viewer in Operation 410 .
  • the term “window” is used throughout the remainder of the application, although those of ordinary skill in the art understand this term may include other forms of displaying content.
  • the sign-in window may provide a number of fields such as a selectable dataset field, a selectable unit field, and a selectable object field.
  • the selectable dataset field provides one or more datasets in which each dataset represents a publication of the technical documentation available for a particular item.
  • technical documentation accessible through the IETM may be for an airline.
  • the airline may have a number of different aircraft types/models in its fleet such as different jet models, propeller models, rotor models, and/or the like. Therefore, the IETM may provide a dataset for each model and the selectable dataset field may be a mechanism such as a dropdown field listing all of the datasets for the different aircraft models that allows for the user to select a particular dataset.
  • the sign-in module determines whether input has been received indicating the user has selected a dataset for a particular item in Operation 415 . If so, then the sign-in module provides one or more applicable units for the dataset for display in Operation 420 .
  • An applicable unit may represent the user's relationship with respect to the technical documentation and the associated item.
  • the user may be an employee of an airline and the unit may represent the position, job, role, and/or the like that the user holds with the airline.
  • the user may be a salesperson, design engineer, mechanic, and/or the like for the airline.
  • the unit may represent a larger entity within the organization such as, for example, research and development department, marketing department, engineering design department, and/or the like.
  • the applicable units displayed may be dependent on the dataset selected by the user.
  • an applicable unit that may be provided is jet mechanic as a result of the user selecting the dataset for a particular jet model.
  • the units may be displayed in the selectable unit field.
  • the selectable unit field may be a dropdown field listing all of the applicable units for the user to select from.
  • the sign-in module determines whether input has been received indicating the user has selected a unit in Operation 425 . If so, then the sign-in module in particular embodiments provides one or more applicable objects in the selectable object field in Operation 430 .
  • an object represents a specific instance of the item associated with the technical documentation.
  • the user may be a mechanic for the airline and he or she may be signing into the IETM to gain access to technical documentation for a particular model of aircraft.
  • the particular model of aircraft may have multiple configurations in which a first configuration uses air brakes and thrust reversers and a second configuration uses disc brakes and thrust reversers. Therefore, the objects may represent the two different configurations of the model of aircraft.
  • the user may instead be a mechanic for the airline and he or she may be signing into the IETM to gain access to technical documentation for a particular aircraft. Therefore, in this instance, the one or more applicable objects may be the specific aircraft found in the airline's fleet for the model of aircraft. For example, the user may be planning to perform maintenance on one of the particular aircraft and selects the aircraft from the applicable objects listed in the selectable object field.
  • the selectable object field may be configured as a control such as a dropdown listing the applicable objects to allow the user to select a desired object.
  • the applicable objects may be dependent on the unit selected by the user. For example, the user may have selected mechanic for crew C as the unit and only the aircraft for the particular type of aircraft authorized to be worked on by crew C may be displayed on the sign-in window.
  • selection of a particular object may allow for the technical documentation for the item to be filtered down to a smaller dataset. For instance, returning to the example involving the different configurations for the model of aircraft, the technical documentation for this particular model of aircraft may be filtered to only provide documentation on the air brake configuration or the disc brake configuration based at least in part on the user's selection.
  • a selection of a particular object may allow for recordation of technical documentation accessed and/or processes, tasks, and/or the like performed for a particular object of an item. For instance, the performance of maintenance on a specific aircraft found in the airline's fleet may be recorded/tracked in the IETM.
  • IETM may be used to maintain a maintenance record for the specific aircraft.
  • a universal object may be provided along with the applicable objects that allows for the user to view all the technical documentation for a particular item.
  • a universal object may be provided to allow the user to view the technical documentation on both the air brake configuration and the disc brake configuration of the model of aircraft.
  • the sign-in module determines whether input has been received indicating the user has selected a specific object in Operation 435 . If not, then the sign-in module determines whether input has been received indicating the user has selected a universal object in Operation 440 . If the user has selected the universal object, then the sign-in module causes a sign-in mechanism to be made available on the sign-in window to the user in Operation 445 . Accordingly, the sign-in mechanism may be any one of different types of controls depending on the embodiment such as, for example, a button, a toggle, checkbox, and/or the like.
  • the sign-in module determines whether input has been received indicating a job has been identified in Operation 450 .
  • a job may represent an instance of a specific procedure, task, operation, and/or the like to be performed on the specific object. For instance, returning to the example involving the user selecting a specific aircraft for the airline, the job may represent a specific maintenance task the user is to perform on the specific aircraft such as repairing the air braking system. Accordingly, the sign-in window may provide a field for the user to enter an identifier for the job. In some embodiments, the sign-in module causes the job field to be accessible in response to the user selecting a specific object.
  • the identification of a job may allow the technical documentation to be filtered to enable the user to find the documentation needed for the job more easily.
  • the identification of a job may allow for the tracking on the jobs performed on the specific object. Further, the identification of a job may provide security in that access to only certain technical documentation may be provided based at least in part on the job. If a job has been identified by the user, then the sign-in module causes the sign-in mechanism to be made available in Operation 445 .
  • the sign-in module determines whether input has been received indicating the user has selected the sign-in mechanism in Operation 455 and if so, has provided the required information in Operation 460 .
  • the sign-in window may also display one or more fields for the user to enter a username and/or password. Therefore, in these instances, the sign-in module may determine whether the user has provided such information. If the user has not, then the sign-in module may provide an error message to display informing the user to provide the needed information in Operation 465 .
  • the sign-in module determines whether the user's credentials are valid in Operation 470 .
  • the IETM and/or a supporting system in communication with the IETM may store information on the user's credentials and the information entered by the user on the sign-in window may be compared with the stored credential information. If the user's credentials are invalid, then the sign-in module may provide an error message to display informing the user of such in Operation 465 . However, if the user's credentials are valid, then the sign-in module signs the user into the IETM in Operation 475 . At this point, the user may begin accessing and interacting with the technical documentation for the item via the IETM.
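  • By way of illustration only, the following is a minimal sketch (in Python, with hypothetical names such as SignInModule, SignInSession, and sign_in) of how the sign-in flow of FIG. 4 might validate the required selections and credentials before signing the user into the IETM; it is a simplified example under stated assumptions, not a definitive implementation (e.g., a real system would not compare plaintext passwords).

```python
# Minimal sketch of the sign-in flow described above (FIG. 4).
# All names (SignInSession, SignInError, SignInModule) are hypothetical.

from dataclasses import dataclass


@dataclass
class SignInSession:
    username: str
    dataset: str
    unit: str
    obj: str          # specific object, or "ALL" for the universal object (assumption)
    job: str | None   # optional job identifier


class SignInError(Exception):
    """Raised when required information is missing or credentials are invalid."""


class SignInModule:
    def __init__(self, stored_credentials: dict[str, str]):
        # Stored credential information the entered values are compared against.
        # (A real system would store hashed credentials, not plaintext.)
        self._credentials = stored_credentials

    def sign_in(self, username: str, password: str, dataset: str,
                unit: str, obj: str, job: str | None = None) -> SignInSession:
        # Operation 460: verify the user provided the required information.
        if not all([username, password, dataset, unit, obj]):
            raise SignInError("Please provide the required information.")
        # Mirror Operations 435-450: a specific object requires a job identifier.
        if obj != "ALL" and not job:
            raise SignInError("A job identifier is required for a specific object.")
        # Operation 470: compare the entered credentials with stored credentials.
        if self._credentials.get(username) != password:
            raise SignInError("Invalid credentials.")
        # Operation 475: sign the user into the IETM.
        return SignInSession(username, dataset, unit, obj, job)


# Example usage (hypothetical data):
module = SignInModule({"jsmith": "secret"})
session = module.sign_in("jsmith", "secret", dataset="Jet Model A",
                         unit="Jet mechanic", obj="Tail N12345", job="WO-1001")
```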
  • a username field 510 is provided as a text field that allows for the user to enter his or her username.
  • a selectable dataset field 515 is provided to allow the user to select the technical documentation (e.g., dataset) for a desired item.
  • the selectable dataset field 515 is provided as a dropdown menu control that lists the available technical documentation from which the user can select.
  • a selectable unit field 520 is provided that allows for the user to select a unit. Again, in this particular example, the selectable unit field 520 is provided as a dropdown menu control listing the applicable units for the dataset.
  • a selectable object field 525 is provided that allows for the user to select a specific object for the item.
  • the objects are specific aircraft identified by their tail numbers. Therefore, the user selects the tail number of the desired aircraft.
  • a universal object 530 is provided in the list of objects in this particular example that allows for the user to gain access to all of the technical documentation for the model of aircraft (item).
  • the universal object 530 is provided so that it may be used when the user is engaging in research and/or training on the model of aircraft and not necessarily performing a procedure, task, operation, and/or the like on a specific aircraft.
  • a job field 535 is provided to allow the user to enter a job (e.g., job identifier) with respect to the specific object.
  • a sign-in mechanism (e.g., a button) is provided that the user may select to sign into the IETM and view the technical documentation for the specific object.
  • the user may now be provided with access to the technical documentation and a number of different functionalities with respect to the technical documentation in various embodiments.
  • the sign-in functionality may allow for tracking and reporting of activities within the IETM. For instance, any activity engaged in by the user once he or she is signed into the IETM may be recorded and viewable via the IETM. For example, the content (e.g., the technical documentation) accessed and viewed by the user may be recorded so that the user's access and use of such content can be monitored. In addition, the user's completion of activities such as procedures, tasks, operations, and the like may be recorded and monitored.
  • FIG. 5C provides a history report 545 the user may view via the IETM on the user's history of accessing and viewing different content (e.g., data modules) in the technical documentation.
  • the history report 545 may be configured in some embodiments to allow the user to select particular content (e.g., a particular data module) from the report 545 to view the content in a separate view pane 550 .
  • the history report 545 may only be provided to the user or may be provided to other personnel such as the user's supervisor so that the supervisor can monitor the user's activities.
  • Other types of reports may be made available to the user such as a daily report 555 shown in FIG. 5D .
  • the daily report 555 may only be provided to the user or may be provided to other personnel such as the user's supervisor.
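  • The tracking and reporting described above can be pictured with a minimal sketch of recording signed-in activity and deriving history and daily reports from the recorded entries; the record fields and class names (ActivityRecord, ActivityLog) are hypothetical assumptions, not the actual implementation.

```python
# Minimal sketch of recording signed-in activity and building the history /
# daily reports described above. Field and class names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, date


@dataclass
class ActivityRecord:
    username: str
    data_module: str          # e.g., the DMC of the content viewed
    action: str               # e.g., "viewed", "completed task"
    timestamp: datetime = field(default_factory=datetime.now)


class ActivityLog:
    def __init__(self):
        self._records: list[ActivityRecord] = []

    def record(self, username: str, data_module: str, action: str) -> None:
        # Every activity engaged in by a signed-in user is recorded here.
        self._records.append(ActivityRecord(username, data_module, action))

    def history_report(self, username: str) -> list[ActivityRecord]:
        # Content accessed and viewed by the user (cf. FIG. 5C).
        return [r for r in self._records if r.username == username]

    def daily_report(self, username: str, day: date) -> list[ActivityRecord]:
        # Activities for a single day (cf. FIG. 5D).
        return [r for r in self._records
                if r.username == username and r.timestamp.date() == day]
```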
  • the availability of certain functionality within the IETM may be provided to the user and others based at least in part on their credentials used to sign-into the IETM.
  • the user may be provided with an initial window upon signing into the IETM to view the technical documentation for an item.
  • a table of contents may be displayed on the initial window for the technical documentation associated with the item and various functionality.
  • the initial window may include multiple view panes.
  • the window may include a first view pane and a second view pane that are displayed on non-overlapping portions of the window, although more than two view panes may be displayed and/or the panes may be displayed on overlapping portions of the window in some instances.
  • the table of contents may be displayed on a first view pane and may provide a list of topics configured to be selectable to view information on a selected topic.
  • each of the topics may be provided as a hyperlink and/or provided with one or more selection mechanisms such as buttons that a user may select to view additional information on the topic.
  • the additional information may then be provided for displaying on another view pane on the window (e.g., on the second view pane) and/or via a separate window.
  • the separate window displaying the additional information may be superimposed over a portion of the first window displaying the table of contents.
  • other windows provided for display in various embodiments may be configured in the same or similar fashion.
  • these windows may include any number of panes.
  • the panes may be provided side-by-side on non-overlapping portions of the window or may be provided as overlapping (e.g., superimposed over one another) on the window.
  • the panes may be displayed in various sizes and dimensions with respect to the window. Further, the panes may be displayed statically and/or dynamically such as pop-up panes.
  • any number of separate windows may be displayed at virtually the same time side-by-side or with one window superimposed over a portion of or an entire second window.
  • the window(s) may be displayed in various sizes and dimensions.
  • multiple windows may be displayed as superimposed over one another (or portion thereof) in a cascading fashion.
  • such windows may be displayed statically or dynamically such as pop-up windows.
  • a window may be provided in particular embodiments for display in any number of different formats such as, for example, a dialog box, tooltip, infotip, tear-off window, and/or the like.
  • FIG. 6 is a flow diagram showing a table of contents (TOC) module for performing such functionality according to various embodiments of the disclosure.
  • the process flow 600 begins in various embodiments with the TOC module providing a window for display comprising the table of contents in Operation 610 .
  • the table of contents may provide a list of topics on content found within the technical documentation for the item. Accordingly, each of the topics may be selectable (e.g., may be configured as a hyperlink or configured with some type of selection mechanism such as a button) to access content found in the technical documentation for the item.
  • topics may include procedures, tasks, operations, services, checklists, planning, and/or the like performed with respect to the item.
  • topics may include maintenance procedures and/or tasks performed on the item. Therefore, the maintenance procedure (e.g., an identifier of the maintenance procedure such as a title of the maintenance procedure) may be selected by the user directly from the table of contents to access content found in the technical documentation for the maintenance procedure.
  • topics may include different components that make up the item.
  • a component of an aircraft is the front landing wheel.
  • components may identify functional and/or physical structures of the item and may be broken down into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like.
  • the table of contents may be displayed in a hierarchical structure in which topics are grouped accordingly with some topics nested within other topics within the hierarchical structure based at least in part on relationships between the different topics. For example, a topic on the front landing wheel of an aircraft may be nested under a topic on the front landing gear assembly for the aircraft in the hierarchical structure of the table of contents.
  • the table of contents may provide various lists on other types of information in particular embodiments such as lists of effective data modules, illustrations, tables, parts, orders for parts, annotations, directions, publications, and/or the like.
  • the user may select a topic to preview in particular embodiments. For example, the user may use a mouse to click on, right click on, or hover over a topic in the table of contents or use a stylus or finger to select a topic in the table of contents to generate a preview for the topic. Therefore, the TOC module may determine whether input has been received indicating the user has selected a topic to preview in Operation 615 . If so, then the TOC module generates the topic preview in Operation 620 and provides the topic preview for display for the user to view in Operation 625 .
  • the topic preview may be provided as a separate window for display. Accordingly, the topic preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the selected topic.
  • the topic preview is configured to provide only a preview of some of the content found in the technical documentation on the topic.
  • the topic preview may be configured in particular embodiments to provide the first five to fifty lines of textual information that the user would be provided with if the user were to select the topic to view the entire content for the topic.
  • the preview may be superimposed over a portion of the window displaying the table of contents.
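  • As a simplified illustration, a topic preview limited to the first few lines of a data module's textual content might be generated as in the following sketch; the function name and line limit are assumptions.

```python
# Minimal sketch of topic preview generation (Operations 615-625).
# The data module is assumed to expose its textual content as plain text.

def generate_topic_preview(content_text: str, max_lines: int = 20) -> str:
    """Return only the first `max_lines` lines of the topic's textual content,
    e.g., the first five to fifty lines the user would otherwise see in full."""
    lines = content_text.splitlines()
    preview = "\n".join(lines[:max_lines])
    if len(lines) > max_lines:
        preview += "\n..."  # indicate that more content is available
    return preview
```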
  • each data module includes a header section configured to provide identification information and status information for the data module that includes metadata for managing the data module (e.g., source information, security classification, applicability, change history, reason for change, verification status, and/or the like).
  • the header section may include an information code that provides a description on the type of information found in the content of the data module.
  • functionality is provided to allow the user to filter the table of contents using the information codes for the different topics (e.g., data modules for the topics).
  • the TOC module determines whether input has been received indicating the user would like to filter the table of contents based at least in part on an information code (InfoCode) in Operation 630. If so, then the TOC module filters the table of contents and provides the filtered table for display in Operation 635.
  • functionality is provided to allow the user to view the table of contents in a source format as opposed to a format adhering to S1000D standards.
  • the source format may be preferable for a user because the source format may include labeling of the content that is better suited for searching than the formatting of the content under S1000D standards.
  • S1000D standards require the figures (e.g., illustrations) found in a data module to be numbered always beginning with one. Therefore, if content from a source is partitioned into multiple data modules, the original labeling of figures may be lost. As a result, the content may end up being displayed having multiple figures labeled the same (e.g., may end up having multiple figures labeled as one).
  • the TOC module determines whether input has been received indicating the user would like to view the table of contents showing the source content formatting in Operation 640 . If so, then the TOC module generates and provides the table of contents with the source content formatting for display in Operation 645 .
  • functionality (e.g., a search mechanism displayed on the window) may be provided to allow the user to search the table of contents.
  • the search functionality may allow the user to provide criteria (e.g., one or more search terms) that can then be used to identify topics based at least in part on the criteria.
  • a search window is provided on which the user can enter search terms and to display the search results. Therefore, in these embodiments, the TOC module determines whether input has been received indicating the search functionality has been selected by the user in Operation 650 . If so, then the TOC module enables such functionality in Operation 655 .
  • functionality is provided to allow for the user to copy the data module code (DMC) for a topic.
  • the data module code is part of the metadata (e.g., header section) of a data module that holds the content for a topic.
  • the DMC includes several characters identifying information about the data module such as the item to which the content applies, the functional or physical breakdown of the item associated with the content, the specific type of information found in the content, and/or the like. Therefore, in these particular embodiments, the TOC module determines whether input has been received indicating the user would like to copy the DMC for a particular topic (e.g., particular data module) in Operation 660 .
  • the user may select a topic in the table of contents using shift click to copy the DMC for the topic. If so, then the TOC module copies the DMC in Operation 665 .
  • the TOC module may copy the DMC from a URL displayed via the IETM viewer (e.g., for the corresponding data module).
  • the user may then send the URL in some type of communication (e.g., in an email) to another individual.
  • the user may wish to send a message to an individual who is managing the content of the data module asking the individual to make a change to the data module. Therefore, the user may wish to include the DMC for the data module to identify which data module the user is talking about.
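  • Purely as an illustration, and assuming the IETM viewer carries the DMC in a query parameter of the displayed URL (an assumption made only for this sketch, not a statement about any particular viewer), copying the DMC might look like the following.

```python
# Minimal sketch of copying a DMC from an IETM viewer URL (Operations 660-665).
# The URL layout and the "dmc" query parameter are assumptions for illustration.

from urllib.parse import urlparse, parse_qs


def copy_dmc_from_url(viewer_url: str) -> str:
    """Extract the data module code carried in the viewer URL so the user can
    paste it into an email or other communication."""
    query = parse_qs(urlparse(viewer_url).query)
    dmc_values = query.get("dmc", [])
    if not dmc_values:
        raise ValueError("No DMC found in the viewer URL.")
    return dmc_values[0]


# Example with a hypothetical URL and DMC:
dmc = copy_dmc_from_url("https://ietm.example.com/viewer?dmc=EXAMPLE-A-00-00-00-00A-040A-A")
```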
  • the TOC module is configured in various embodiments to determine whether input has been received indicating the user has selected a particular topic to view in Operation 670 .
  • the TOC module may be configured to determine the user using a first type of selection mechanism (e.g., hover over a topic in the table of contents) to generate and provide a topic preview of the content for the topic and determine the user using a second, different type of selection mechanism (e.g., a mouse click on the topic in the table of contents) to generate and provide the content found in the technical documentation for the topic.
  • the selection mechanism may involve the user using some type of control such as a mouse to click on, right click on, or hover over the topic in the table of contents or use a stylus or finger to select a topic in the table of contents. Therefore, if the TOC module determines the user has selected a topic to view in the IETM, then the TOC module provides the topic to display in Operation 675 . At that point, the TOC module determines whether to exit in Operation 680 . If not, then the TOC module returns to Operation 610 and provides the table of contents.
  • the content for the topic may be displayed on the same or a different window.
  • the content for the topic may be displayed in a separate view pane (e.g., second view pane) on the window.
  • the content may be displayed on a different window while the window displaying the table of contents may still be available for viewing.
  • the window displaying the table of contents may be available for immediate viewing in response to the user selecting a mechanism such as a button displayed on a toolbar and/or a view tab via the IETM viewer.
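  • The operations of the TOC module (FIG. 6) can be pictured as an input dispatch loop such as the following minimal sketch; the event names and handler names are hypothetical placeholders for the operations described above.

```python
# Minimal sketch of the TOC module's input dispatch (FIG. 6). Handler names
# are hypothetical placeholders for the operations described above.

def toc_module_loop(get_user_input, handlers):
    """Repeatedly display the table of contents and dispatch user input to the
    matching handler until an exit is requested (Operations 610-680)."""
    while True:
        handlers["display_toc"]()                 # Operation 610
        event, payload = get_user_input()
        if event == "preview_topic":              # Operations 615-625
            handlers["show_preview"](payload)
        elif event == "filter_by_infocode":       # Operations 630-635
            handlers["filter_toc"](payload)
        elif event == "show_source_format":       # Operations 640-645
            handlers["source_format_toc"]()
        elif event == "search":                   # Operations 650-655
            handlers["open_search"]()
        elif event == "copy_dmc":                 # Operations 660-665
            handlers["copy_dmc"](payload)
        elif event == "view_topic":               # Operations 670-675
            handlers["display_topic"](payload)
        elif event == "exit":                     # Operation 680
            break
```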
  • the table of contents includes a preface 700 of different lists along with a list of various topics.
  • the user has selected a particular topic 715 to generate a preview for the topic that is being displayed on a separate window 720 .
  • the window provides a selectable field 725 (e.g., a dropdown menu control) to allow the user to filter the table of contents based at least in part on information codes.
  • the preview window 720 in this example provides a selection mechanism (e.g., a button) 730 to add a bookmark for the preview. Bookmarking the preview may allow the user to recall the preview and/or content for the associated topic at a later time to view. Accordingly, such a bookmark may be recorded and saved in the IETM for the user.
  • FIG. 8 is a flow diagram showing a filtering module for performing such functionality according to various embodiments of the disclosure.
  • the filtering module may be invoked by another module to filter the table of contents such as, for example, the TOC module previously described.
  • the filtering module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the filtering module may be invoked in some embodiments as a result of the user identifying a particular information code to use in filtering the table of contents.
  • the technical documentation may include publication data (e.g., a publication module).
  • the publication data may provide a list of technical data (e.g., every data module) found in the publication of the technical documentation for the item in the order in which the publication delivers the data to the IETM. Therefore, the publication data may provide a navigation structure for the IETM in constructing the table of contents.
  • the process flow 800 may begin with the filtering module referencing the publication data in Operation 810 .
  • the filtering module selects specific data (e.g., a data module) found in the publication data in Operation 815.
  • the publication data may also include metadata (e.g., the DMC) for the technical data (e.g., for each of the data modules). Therefore, the filtering module reads the information code for the selected data in Operation 820 .
  • the filtering module determines whether the information code for the selected data matches the information code selected by the user to filter the table of contents in Operation 825 . If so, then the filtering module marks the technical data for displaying as a topic in the filtered table of contents in Operation 830 .
  • the filtering module determines whether the publication module contains additional technical data (e.g., another data module) in the list of technical data in Operation 835 . If so, then the filtering module returns to Operation 815 , selects the next technical data found in the list (e.g., the next data module), and repeats the operations just described for the newly selected technical data. Once all of the technical data have been processed in the list, the filtering module then generates and provides the results for display to the user in Operations 840 and 845 .
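  • A minimal sketch of this filtering loop, assuming each entry in the publication data carries a title and the information code from its DMC metadata (the record layout and example codes are assumptions), is provided below.

```python
# Minimal sketch of filtering the table of contents by information code
# (FIG. 8). The data-module record layout is an assumption for illustration.

from dataclasses import dataclass


@dataclass
class DataModuleEntry:
    title: str
    info_code: str   # information code taken from the module's DMC metadata


def filter_table_of_contents(publication_data: list[DataModuleEntry],
                             selected_info_code: str) -> list[DataModuleEntry]:
    """Walk the publication data in order (Operations 810-835) and keep only
    the data modules whose information code matches the user's selection."""
    filtered = []
    for entry in publication_data:                   # Operation 815: next data
        if entry.info_code == selected_info_code:    # Operation 825: compare
            filtered.append(entry)                   # Operation 830: mark for display
    return filtered


# Example usage with hypothetical entries and codes:
toc = filter_table_of_contents(
    [DataModuleEntry("Brake system - Troubleshooting", "420"),
     DataModuleEntry("Brake system - Description", "040")],
    selected_info_code="420")
```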
  • In FIG. 9, an example of the results of filtering the table of contents based at least in part on an information code is provided.
  • the table of contents has been filtered based at least in part on the information code for troubleshooting 900 .
  • the filter function provided in various embodiments allows for the user to filter down the topics found in the technical documentation in a faster, more efficient manner so that the user can more easily and quickly identify needed content in the technical documentation.
  • functionality may be provided in some embodiments to allow the user to view the table of contents in a source format as opposed to a format adhering to S1000D standards.
  • the source format may be preferable for a user because the source format may include labeling of the content that is better suited for searching than the formatting of the content under S1000D standards.
  • FIG. 10 is a flow diagram showing a source format tagging module for performing such functionality according to various embodiments of the disclosure.
  • the source format tagging module may be executed in particular embodiments by an entity such as the management computing entity 100 and/or a user computing entity 110 engaged in importing a publication of technical documentation for an item into the IETM.
  • the publication may include content from a source in a format such as portable document format (PDF), a standards generalized markup language (SGML) format, and/or the like.
  • the source may include formatting for the content such as identifiers (e.g., numbering and/or textual descriptions) for chapters, headings, sub-headings, sections, tables, figures, and/or the like.
  • the process flow 1000 begins with the source format tagging module reading the information from such a source in Operation 1010 .
  • the source format tagging module selects the format structure from the information in Operation 1015 and tags the appropriate portion of the content with the information in Operation 1020 .
  • the source format tagging module may record metadata along with the content from the source in the IETM that includes the source formatting and information to format the content appropriately.
  • the content may include a reference to a figure and the source format tagging module may record the format (e.g., the label) for the figure in metadata along with the content in the IETM.
  • the content found in the source may include a chapter title. Therefore, the source format tagging module may record the title of the chapter in the metadata along with the content in the IETM.
  • the format tagging module determines whether additional format structure is found in the content in Operation 1025 . If so, then the source format tagging module returns to Operation 1015 , selects the next format structure found in the content, and tags the content with the format structure accordingly. As a result, the content can be displayed in various embodiments in its original format structure from the source of the content.
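  • As one possible illustration, tagging imported content with its source format structure might be sketched as follows; the regular expressions and tag layout are assumptions chosen only to show the idea of recording headings and figure labels as metadata alongside the content.

```python
# Minimal sketch of the source format tagging flow (FIG. 10). The regular
# expressions and the tag dictionary layout are assumptions for illustration.

import re

# Patterns for format structure that might be found in a source document,
# e.g., chapter titles, section headings, and figure labels.
FORMAT_PATTERNS = {
    "chapter": re.compile(r"^Chapter\s+\d+.*$", re.MULTILINE),
    "section": re.compile(r"^\d+(\.\d+)+\s+.+$", re.MULTILINE),
    "figure":  re.compile(r"^Figure\s+\d+[-.]\d+.*$", re.MULTILINE),
}


def tag_source_format(source_text: str) -> list[dict]:
    """Read the source content (Operation 1010), select each piece of format
    structure (Operation 1015), and record it as a metadata tag together with
    its position in the content (Operation 1020)."""
    tags = []
    for kind, pattern in FORMAT_PATTERNS.items():
        for match in pattern.finditer(source_text):
            tags.append({"type": kind,
                         "label": match.group(0).strip(),
                         "offset": match.start()})
    # Tags are stored with the content so it can later be displayed in its
    # original source format structure.
    return sorted(tags, key=lambda tag: tag["offset"])
```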
  • FIG. 11 is a flow diagram showing a source formatting module for performing such functionality according to various embodiments of the disclosure.
  • the source formatting module may be invoked by another module to display the content with the format structure from the source such as, for example, the TOC module previously described.
  • the source formatting module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 1100 begins with the source formatting module reading a format tag for the content in Operation 1110 .
  • the content may be tagged in particular embodiments by including metadata (e.g., tags) along with the content identifying various parts of the format structure found in the source of the content.
  • the metadata may include one or more tags providing identifiers (e.g., numbering and/or textual descriptions) for chapters, headings, sub-headings, sections, tables, figures, and/or the like found in the source of the content.
  • the source formatting module then formats the content based at least in part on the format structure found in the tag in Operation 1115 .
  • the format structure may identify a subject matter heading for the content. Therefore, the source formatting module may format the content with the subject matter heading. Accordingly, in particular instances, the content may then be found in the table of contents as a topic having the subject matter heading as a title. While in other instances, the content itself may be displayed on a window with the subject matter heading.
  • the source formatting module determines whether another tag exists for the content in Operation 1120 . If so, then the source formatting module returns to Operation 1110 , reads the next tag for the content, and formats the content based at least in part on the format structure found in the tag.
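  • A complementary minimal sketch of re-applying recorded format tags when the content is displayed is shown below; it assumes each tag records the line it applies to and the label to display, which is an assumption made only for illustration.

```python
# Minimal sketch of the source formatting flow (FIG. 11): read each format
# tag recorded with the content and apply it when the content is rendered.

def apply_source_formatting(content_lines: list[str], tags: list[dict]) -> list[str]:
    """For each tag (Operations 1110-1120), prepend the recorded source label
    (e.g., a section heading or figure label) to the tagged line so the content
    is displayed with the format structure found in its source."""
    formatted = list(content_lines)
    for tag in tags:
        line_index = tag.get("line", 0)
        if 0 <= line_index < len(formatted):
            formatted[line_index] = f'{tag["label"]}  {formatted[line_index]}'
    return formatted


# Example usage with hypothetical content and tags:
rendered = apply_source_formatting(
    ["Normal procedures", "Set the parking brake."],
    [{"line": 0, "label": "Section 3.1"}])
```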
  • In FIG. 12A, an example is provided of a table of contents 1200 formatted according to S1000D standards.
  • all of the topics found under the heading flight manual are provided in a generic format with only a title for each topic.
  • In FIG. 12B, the table of contents 1210 is now formatted using the format structure found in the source for the flight manual.
  • each of the topics is now listed with a section heading as found in the source for the flight manual. Such section headings may allow for the user to more easily distinguish between the different content provided by the source.
  • FIG. 12C Another example is shown in FIG. 12C .
  • content from a source (in this instance a PDF file) is being displayed on a window with source formatting according to various embodiments.
  • the format structure of the content shown on the window matches the format structure of the content found in the source PDF file.
  • the title designator for the content 1215 has been included along with the title of the content 1220 shown on the window.
  • the heading 1225 and sub-headings 1235 , 1245 from the source PDF file are shown as a heading 1230 and sub-headings 1240 , 1250 in the content on the window.
  • the user may be able to better navigate and understand the content as a result of viewing the content in the format structure found in the source PDF file.
  • FIG. 13 is a flow diagram showing a search module for performing such functionality according to various embodiments of the disclosure.
  • the search module may be invoked by another module to search the table of contents such as, for example, the TOC module previously described.
  • for example, in response to the user selecting a mechanism (e.g., a button) displayed on the window for the table of contents to conduct a search, the TOC module may invoke the search module.
  • the search module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 1300 begins with the search module providing a search window for display to the user in Operation 1310 .
  • the search window may be configured in a similar fashion as the window displaying the table of contents.
  • the search window may include one or more view panes for displaying search results according to different criteria (e.g., different features of the elements found in the table of contents).
  • the search window provides a freeform field that allows the user to type in one or more search terms to use in searching the table of contents.
  • the search module may be configured to provide predictions of search terms to the user based at least in part on the characters typed into the freeform field.
  • the search module determines whether input has been received indicating the user has typed one or more characters into the freeform field in Operation 1315 . If so, then the search module provides one or more predictions of search terms (e.g., autocomplete) to the user in Operation 1320 . As discussed further herein, the predictions may be based at least in part on different grounds depending on the embodiment. For example, the search module may be configured to provide the first five predictions identified for the entered characters alphabetically, based at least in part on frequency of use, based at least in part on recent trends, and/or the like.
  • the search module determines whether input has been received indicating the user has initiated a search based at least in part on the entered search term(s) in Operation 1325 .
  • the search window may include a selection mechanism (e.g., a button) that the user can select to initiate the search. Therefore, the search module determines whether input has been received indicating the user has selected the selection mechanism. If the user has initiated the search, then the search module generates search results based at least in part on the entered search term(s) in Operation 1330 .
  • the user may indicate other criteria for conducting the search.
  • the search window may include a field that allows the user to identify applicability requirements for the search results.
  • Applicability generally pertains to the context for which the results (e.g., information found in topics) are valid.
  • the context can be associated with a physical configuration of the item, but can also include other aspects such as support equipment availability and/or environmental conditions.
  • the search window may include a field that allows the user to identify the type of content required for the search results.
  • the content generally pertains to the technical information provided by the search result. For example, different types of content may include procedural, process, wiring, maintenance, learning, parts, checklists, and/or the like.
  • the search window may include other mechanisms that allow the user to identify criteria for filtering the search results such as information code.
  • the search module is configured to search different features of the elements found in the table of contents to identify the search results.
  • the search window is configured to provide the search results with respect to table of contents, data module, and part name and/or number.
  • the search module searches the table of contents to identify those topics with the search term(s) in the title of the topic.
  • the search module searches the various data (e.g., data modules) that make up the technical documentation to identify data in which the search term(s) are found in the textual information for the data.
  • the search module searches the part names and/or numbers of the parts used in the item to identify those parts with the search term(s) in the part names and/or numbers.
  • the search module may format the search results with respect to table of contents, data modules, and parts (e.g., part names and/or numbers) in Operation 1335 .
  • the search module may then provide the search results for displaying in Operation 1340 .
  • the search window may be configured to show the search results with respect to the three different bases: table of contents; data modules; and parts.
  • the search window may provide a view pane with a tab for each basis that the user may select to view the search results for the basis.
  • the search module determines whether input has been received indicating the user wishes to exit the search window in Operation 1345 .
  • the user may select one of the search results (e.g., a topic) to view or the user may simply select a mechanism to exit the search window. If so, then the search module exits.
  • the search results are not necessarily lost (e.g., closed) as a result of the user exiting the search window. Instead, the results may be maintained while the user is still actively signed into the IETM.
  • Such functionality allows for the user to later return to his or her search results to further view and use accordingly. For example, the user may initially view a data module listed in the search results and then later decide to view the search results again because the data module did not have the information the user was looking for. Therefore, the search results may be maintained so that the user can later return to them if desired.
  • the IETM may be configured to save the search results even past the user's current sign-in to the IETM.
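  • A minimal sketch of generating search results grouped by the three bases described above (table of contents titles, data module text, and part names/numbers) is provided below; the input record layouts and names are assumptions.

```python
# Minimal sketch of generating search results on three bases (FIG. 13):
# table of contents titles, data module text, and part names/numbers.
# The input record layouts are assumptions for illustration.

def search_ietm(term: str,
                toc_titles: list[str],
                data_modules: dict[str, str],     # DMC -> textual content
                parts: dict[str, str]) -> dict:   # part number -> part name
    """Return results grouped by basis so the search window can show a
    separate tab/view pane for each basis (Operations 1330-1340)."""
    needle = term.lower()
    return {
        "table_of_contents": [t for t in toc_titles if needle in t.lower()],
        "data_modules": [dmc for dmc, text in data_modules.items()
                         if needle in text.lower()],
        "parts": [(num, name) for num, name in parts.items()
                  if needle in num.lower() or needle in name.lower()],
    }


# Example usage with hypothetical data:
results = search_ietm(
    "assembly",
    toc_titles=["Front landing gear assembly", "Brake system"],
    data_modules={"DMC-0001": "Remove the wheel assembly ..."},
    parts={"P-100": "Wheel assembly"})
```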
  • FIG. 14 is a flow diagram showing a predictions module for performing such functionality according to various embodiments of the disclosure.
  • the predictions module may be invoked by another module to provide predictions such as, for example, the search module previously described.
  • the predictions module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 1400 begins with the predictions module reading (e.g., receiving input of) the character(s) typed in by the user on the search window in Operation 1410.
  • a search index is maintained in the IETM that is constructed from the dataset for the technical documentation of the item.
  • the search index provides a mapping of characters (e.g., alphanumeric) to various terms found in the technical documentation for the item. Therefore, in these embodiments, the predictions module searches the index to identify predictions based at least in part on the entered character(s) in Operation 1415 .
  • the predictions module then identifies and orders the predictions based at least in part on certain grounds in Operations 1420 and 1425 .
  • the grounds for ordering the predictions may differ depending on the embodiments.
  • the predictions module may order the predictions based at least in part on alphabetical order, frequency of use, recent trends, and/or the like.
  • the predictions module provides the top predictions in Operation 1430 .
  • the predictions module may be configured to provide the top five, ten, and/or the like predictions that are selectable by the user to automatically complete the search terms in the freeform field provided on the search window.
  • the predictions module determines whether input has been received indicating the user has selected a prediction in Operation 1435 . If not, then the predictions module returns to Operation 1410 to read any further characters entered by the user in the freeform field and to make further predictions accordingly. Once the user selects one of the predictions or finishes typing in characters in the freeform field, then the predictions module exits.
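As a non-authoritative sketch of the prediction flow just described, the snippet below builds a simple frequency index from documentation text and returns the top predictions for the characters typed so far; build_search_index and predict are invented names, and ordering by frequency and then alphabetically is only one of the orderings mentioned above.

```python
from collections import Counter
import re

def build_search_index(documents):
    """Map each term in the technical documentation to how often it occurs."""
    terms = Counter()
    for text in documents:
        terms.update(re.findall(r"[A-Za-z0-9-]+", text.lower()))
    return terms

def predict(index, typed, top_n=5):
    """Return the top-N terms starting with the characters typed so far,
    ordered by frequency of use and then alphabetically."""
    prefix = typed.lower()
    candidates = [(term, count) for term, count in index.items()
                  if term.startswith(prefix)]
    candidates.sort(key=lambda tc: (-tc[1], tc[0]))
    return [term for term, _ in candidates[:top_n]]

index = build_search_index(["Install the front wheel assembly",
                            "Inspect the assembly for wear"])
print(predict(index, "ass"))   # e.g. ['assembly']
```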
  • FIG. 15A provides an example of a search window 1500 displaying search results according to various embodiments.
  • the search results are being displayed on a view pane 1510 with respect to data modules that have content containing the search term “assembly” 1515 .
  • view panes 1520 , 1525 are also provided for the table of contents and part numbers that are hidden on the window 1500 behind the data modules view pane 1510 .
  • In FIG. 15B , the search results are now shown as filtered based on information code 1530 .
  • the user has selected a mechanism 1535 provided on the search window 1500 indicating to filter the results based at least in part on information code.
  • a separate tab 1540 , 1545 , 1550 is provided for each of table of contents view pane 1520 , data modules view pane 1510 , and parts view pane 1525 , respectively, to provide the user with access to the search results for the three different bases.
  • a list of parts for an item may be provided in the IETM in various embodiments.
  • this list of parts may be generated based at least in part on information/data provided in a publication of the technical documentation of the item.
  • the list of parts may be generated based at least in part on the illustrated parts breakdown (IPB) found in the publication.
  • a list of parts used by the item may be generated without the need to gather such a list from the suppliers of the parts or any other third-party source outside the publication of the technical documentation for the item.
  • FIG. 16 is a flow diagram showing a generate list of parts module for performing such functionality according to various embodiments of the disclosure. Accordingly, the generate list of parts module may be executed in particular embodiments by an entity such as the management computing entity 100 and/or a user computing entity 110 engaged in importing a publication of the technical documentation for an item.
  • the process flow 1600 begins with the generate list of parts module reading the IPB provided with the publication in Operation 1610 .
  • the IPB identifies the parts found in the technical documentation for which one or more illustrations (e.g., graphics and/or other media objects) are included in the technical documentation.
  • a data module for a particular maintenance task may be found in the publication for the technical documentation that references a particular part used in a repair that is detailed in the maintenance task.
  • one or more illustrations of installing the part may be included along with the data module that can be displayed to a user as the user views the maintenance task via the IETM. Therefore, a reference to the one or more illustrations may be provided in the IPB.
  • the generate list of parts module identifies the parts (e.g., part names and/or numbers) found in the IPB in Operation 1615 and generates the list of parts based at least in part on the parts found in the IPB in Operation 1620 . Accordingly, as detailed further herein, the generated list of parts may then be viewed by a user via the IETM.
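A minimal sketch of deriving the list of parts from an IPB is shown below, assuming a simplified XML shape; the element and attribute names (entry, partNumber, partName, icn) are placeholders chosen for the example rather than the actual publication markup.

```python
import xml.etree.ElementTree as ET

IPB_XML = """
<ipb>
  <entry partNumber="PN-1001" partName="Wheel assembly" icn="ICN-0001"/>
  <entry partNumber="PN-2002" partName="Brake disc" icn="ICN-0002"/>
</ipb>
"""

def generate_list_of_parts(ipb_xml):
    """Read the IPB and return one record per part referenced in it."""
    root = ET.fromstring(ipb_xml)
    parts = []
    for entry in root.findall("entry"):
        parts.append({
            "number": entry.get("partNumber"),
            "name": entry.get("partName"),
            "illustrations": [entry.get("icn")],  # referenced media objects
        })
    return parts

for part in generate_list_of_parts(IPB_XML):
    print(part["number"], part["name"])
```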
  • a user may request to view the list of parts for an item via the IETM.
  • a selection mechanism may be provided such as a button provided on a toolbar to allow the user to request to view the list of parts for the item.
  • a window may be provided for displaying the list of parts. Accordingly, in particular embodiments, the window may be configured similar to the other windows mentioned herein.
  • the window may be configured to have a first view pane displaying the list of parts and a second view pane that is used to display various information on a part found in the list of parts.
  • the window may be configured to display the view panes on non-overlapping portions of the window.
  • each part displayed in the list of parts may be selectable (e.g., may be displayed as a hyperlink and/or displayed with one or more selection mechanisms such as buttons) to provide information on the part.
  • such information may be displayed on a view pane (e.g., the second view pane) and/or may be displayed on a separate window.
  • the window may provide the user with various functionality that may be used with respect to the list of parts.
  • FIG. 17 is a flow diagram showing a list of parts module for performing such functionality according to various embodiments of the disclosure. Accordingly, the list of parts module may be executed in particular embodiments as a result of a user who is viewing the list of parts via the IETM invoking various functionality.
  • the process flow 1700 begins with the list of parts module determining whether input has been received indicating a selection of a part by the user in Operation 1710 .
  • each part in the list of parts may be selectable.
  • each part in the list of parts may be displayed as a hyperlink and/or along with some type of selection mechanism (e.g., a button) to allow the user to select the part from the list.
  • the media content may be made up of one or more illustrations that may include 2D and/or 3D graphics, as well as other media objects such as images and/or videos that may be provided in the technical documentation for the item. Therefore, in particular embodiments, the list of parts module may be configured to retrieve the media content and provide the list of parts for display on a first view pane of the window and the media content for the selected part on a second view pane of the window. As noted, the window may be configured so that the first and second view panes are displayed on non-overlapping portions of the window. In addition, in particular embodiments, the part may be highlighted in the media content so that the user can easily identify it in the content.
  • the selected part may be displayed in the list of parts using a format to demonstrate the part has been selected such as, for example, the selected part may be highlighted, shown in a particular color, shown with a border, and/or the like.
  • functionality may be provided for the selected part such as, for example, a selection mechanism that provides functionality to allow the user to order the part from the IETM.
  • the list of parts module determines whether input has been received indicating the user has identified one or more level indicators for relisting the list of parts in Operation 1720 .
  • each of the parts may be associated with one or more components of the item for which the technical documentation is being viewed by the user via the IETM.
  • each of these components may be identified with a functional and/or physical structure of the item such as assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like. Therefore, the user may be interested in viewing the parts in the list of parts broken down into these levels of functional and/or physical structure. If that is the case, then the list of parts module relists the list of parts based at least in part on the levels identified (e.g., selected) by the user and provides the relisted list of parts for display on the window in Operation 1725 .
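The snippet below sketches, under assumed data shapes, how the flat list of parts might be relisted by the structural levels the user selects; relist_parts and the level keys used here are illustrative only.

```python
from itertools import groupby

def relist_parts(parts, levels):
    """Group the flat list of parts by the selected structural levels."""
    def key(part):
        return tuple(part.get(level, "") for level in levels)

    ordered = sorted(parts, key=key)
    return {k: list(group) for k, group in groupby(ordered, key=key)}

parts = [
    {"number": "PN-1001", "assembly": "Front wheel", "subassembly": "Hub"},
    {"number": "PN-2002", "assembly": "Front wheel", "subassembly": "Brake"},
    {"number": "PN-3003", "assembly": "Landing gear", "subassembly": "Strut"},
]

# The user selected "assembly" but not "subassembly" (as in FIG. 18B).
for group, members in relist_parts(parts, ["assembly"]).items():
    print(group, [p["number"] for p in members])
```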
  • each of the parts in the list of parts may display various information for the part that may be selectable to retrieve and view search results on additional information found in the technical documentation for the part.
  • each of the parts may display a part name and/or number for the part that is selectable (e.g., that is displayed as a hyperlink and/or along with a selection mechanism such as a button) so that, when it is selected by the user, a preview is generated and displayed providing results on textual information and/or media content (e.g., illustrations and/or other media objects) found in the technical documentation for the selected part.
  • the list of parts module determines whether input has been received indicating the user has selected a part name and/or number for a part to generate a preview in Operation 1730 . If so, then the list of parts module generates a preview of results based at least in part on information on the part found in the technical documentation for the item in Operation 1735 and provides the preview for display in Operation 1740 .
  • the part preview may be provided as a separate window.
  • the preview window may be superposed over a portion of the window displaying the list of parts.
  • the part preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the selected part.
  • the part preview is configured to provide only a preview of some of the content found in the technical documentation on the part.
  • various components of the results may be selectable to access further information.
  • each of the parts in the list of parts may be associated with one or more commercial and government entity (CAGE) codes and/or one or more source, maintenance, and recovery (SMR) codes.
  • these codes identify a supplier for the part, although other types of supplier identifiers may be used.
  • these codes may be displayed along with each part in the list of parts on the window.
  • each of these codes may be selectable on the window (e.g., displayed as a hyperlink and/or associated with a selection mechanism) to allow the user to view a preview displaying information on the particular supplier associated with the code.
  • the list of parts module may determine whether input has been received indicating the user has selected a CAGE or SMR code for a part. If so, then the list of parts module generates a preview for the supplier associated with the selected CAGE or SMR code and provides the preview for the user to view.
  • the supplier preview may be provided as a separate window.
  • the preview window may be superposed over a portion of the window displaying the list of parts.
  • the supplier preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the supplier.
  • the supplier preview is configured to provide only a preview of some of the content found in the technical documentation on the supplier.
  • various components displayed on the preview may be selectable to access further information.
  • related maintenance procedures and/or tasks that mention the part may be provided, as selectable items, for each part in the list of parts.
  • the user may use a mouse to click on, right click on, or hover over a maintenance procedure and/or task for a part or use a stylus or finger to select a maintenance procedure and/or task for a part to generate a preview. Therefore, the list of parts module may determine whether input has been received indicating the user has selected a maintenance procedure and/or task related to a part. If so, then the list of parts module generates a preview for the related maintenance procedure and/or tasks and provides the preview for the user to view.
  • the maintenance procedure and/or task preview may be provided as a separate window.
  • the preview window may be superposed over a portion of the window displaying the list of parts.
  • the maintenance procedure and/or task preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the maintenance procedure and/or task.
  • the preview is configured to provide only a preview of some of the content found in the technical documentation on the maintenance procedure and/or task.
  • various components displayed on the preview may be selectable to access further information.
  • functionality may be provided in some embodiments that allows the user to order a selected part from the IETM. As discussed further herein, this functionality provides an order form that can then be populated and submitted by the user to order the part. Therefore, in these particular embodiments, the list of parts module determines whether input has been received indicating the user would like to order a selected part in Operation 1745 . If so, then the list of parts module enables the order part functionality in Operation 1750 .
  • the list of parts module may provide functionality to allow the user to view other items, besides the item whose technical documentation the user is currently viewing, that also use a selected part in the list of parts.
  • a mechanism may be displayed along with the selected part that can be used to display a list of other items that also use the part.
  • a selectable plus sign may be provided that the user may use a mouse to click on, right click on, hover over, and/or the like to display the list of other items that also use the part.
  • the list of parts module determines whether input has been received indicating the user would like to view the list of other items that use a selected part in Operation 1755 . If so, then the list of parts module generates a preview displaying the list of other items that use the selected part in Operation 1760 and provides the preview for the user to view in Operation 1765 . At this point, the list of parts module determines whether to exit in Operation 1770 . If not, then the list of parts module returns to Operation 1710 to determine whether input has been received indicating a selection of a part by the user.
  • the preview may be provided as a separate window.
  • the preview window may be superposed over a portion of the window displaying the list of parts.
  • the preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the list of other items.
  • the preview is configured to provide only a preview of some of the content found in the technical documentation on the list of other items.
  • various components displayed on the preview may be selectable to access further information.
  • the user may be maintenance personnel who is tasked with performing certain maintenance on an object such as an aircraft. Therefore, the user may have signed into the IETM to view the technical information for the type of aircraft. Specifically, the user may have signed into the IETM to view documentation on the maintenance task he or she is to perform on the aircraft. The documentation on the maintenance task may identify a particular part needed in performing the task. However, the user may determine that the particular part is not currently in stock. Therefore, in this instance, the user may view the list of parts, select the particular part in the list, and generate and display the preview showing other types of aircraft that also use the particular part. As a result, the user may be able to obtain the part from inventory for another type of aircraft and/or may be able to use the part from another aircraft to perform the maintenance task instead of waiting for the part to be ordered and received.
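As a rough sketch of the other-items lookup just described, the snippet below assumes the IETM maintains an item-to-parts mapping; the mapping, item names, and function name are invented for illustration.

```python
ITEM_PARTS = {
    "Aircraft model A": {"PN-1001", "PN-2002"},
    "Aircraft model B": {"PN-1001", "PN-3003"},
    "Aircraft model C": {"PN-4004"},
}

def other_items_using_part(part_number, current_item):
    """Return the items, other than the one being viewed, that use the part."""
    return [item for item, parts in ITEM_PARTS.items()
            if part_number in parts and item != current_item]

print(other_items_using_part("PN-1001", "Aircraft model A"))
# ['Aircraft model B']
```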
  • FIG. 18A provides an example of a window 1800 displaying a list of parts according to various embodiments.
  • the window 1800 provides a first view pane 1810 displaying the list of parts for a particular item (e.g., platform 1810 ) in which a particular part 1815 found on the list has been selected.
  • the window 1800 in this example provides a second view pane 1820 displaying an illustration with the selected part 1825 highlighted in the illustration.
  • a mechanism is provided for displaying a window 1830 providing functionality to perform with respect to the selected part 1825 such as ordering the part 1825 .
  • In FIG. 18B , an example of a mechanism 1835 that can be used by a user in various embodiments in selecting identifiers for levels for relisting the list of parts is demonstrated.
  • the mechanism 1835 is provided as a dropdown menu control that allows the user to relist the list of parts according to parts associated with an end item, component, major assembly, assembly, and/or subassembly. For instance, in this example, the user has indicated to relist the list of parts according to assembly 1840 , but not according to subassembly 1845 .
  • FIG. 18C provides an example of a preview 1850 displaying the information for a supplier as a result of the user selecting a CAGE code associated with a part in the list of parts according to various embodiments.
  • FIG. 18D provides an example of a preview 1855 displaying a list of other items that use a selected part according to various embodiments.
  • FIG. 19 is a flow diagram showing an order part module for performing such functionality according to various embodiments of the disclosure.
  • the order part module may be invoked by another module to order a part from the IETM such as, for example, the list of parts module previously described.
  • a user may select a mechanism (e.g., button) provided for a selected part on a window displaying the list of parts and as a result, the list of parts module may invoke the order part module.
  • the order part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 1900 begins with the order part module reading the part number for the part in Operation 1910 .
  • the part number may be provided to the order part module from another module such as the list of parts module.
  • the order part module may read the part number (e.g., provided as input) from some type of window being displayed.
  • the part number serves as an identifier for the part. Therefore, depending on the embodiment, the part number may be in various forms such as, for example, an alphanumeric, and may include characters such as dashes, underscore, ampersand, commercial at sign, and/or the like. Those of ordinary skill in the art can envision other characters and/or symbols that may be used in a part number in light of this disclosure.
  • the order part module then identifies a system for the item for which the part will be used in Operation 1915 .
  • the item is generally the item related to the technical documentation currently being viewed by the user through the IETM.
  • the user may identify a specific item that is not necessarily the item associated with the technical documentation currently active for the IETM.
  • the item may involve a type of aircraft used by the military.
  • the military's backend system used in managing the individual aircraft for the type of aircraft may normally be used in ordering parts for the aircraft.
  • This backend system may have a specific electronic form that is used in ordering parts for the aircraft. Accordingly, forms for the different systems may be available in the IETM and the order part module selects the appropriate form based at least in part on the system associated with the item in Operation 1920 .
  • the order part module then queries a stock number for the part in Operation 1925 .
  • the stock number is often used in identifying the physical location where a particular part is stored in a warehouse and/or inventory. Similar to a part number, the stock number serves as an identifier and may be in various forms such as, for example, an alphanumeric, and may include characters such as dashes, underscore, ampersand, commercial at sign, and/or the like. Those of ordinary skill in the art can envision other characters and/or symbols that may be used in a stock number in light of this disclosure.
  • the order part module may be configured to identify a stock number for a particular supplier of the part based at least in part on the part number.
  • the supplier may be identified based at least in part on a CAGE and/or SMR code associated with the part found in the technical documentation for the item, although other identifiers may be used for the supplier. Accordingly, in particular embodiments, the order part module determines whether a stock number can be found for the part in Operation 1930 . If not, then the order part module may provide an error message to the user in Operation 1935 informing the user that a valid stock number cannot be located for the part.
  • the order part module queries data (e.g., information) for the part in Operation 1940 .
  • the IETM may be in communication with the supplier's system over some type of network so that the data on the part can be queried directly from the supplier.
  • the IETM may store the data internally and the order part module queries the data accordingly.
  • the module auto-populates one or more of the fields on the electronic order form based at least in part on the queried data in Operation 1945 .
  • the order part module provides the electronic order form for display for the user to view in Operation 1950 .
  • the form may be displayed on a separate window than the window displaying the list of parts.
  • the user may then provide any additional data (e.g., information) that may be needed on the electronic form such as, for example, a quantity of the part that is to be ordered.
  • the user may submit the electronic form.
  • the electronic order form may provide a selection mechanism (e.g., a button) that the user can select to submit the order for the part.
  • the form may be submitted directly to the supplier to fulfill the order for the part or the form may be placed in a queue and submitted indirectly depending on the embodiment.
  • Other options may be provided to the user in some embodiments as discussed further herein.
  • the order part module determines whether input has been received indicating to exit in Operation 1955 . If not, then the order part module continues to display the electronic order form. Otherwise, once the user has completed submitting the order for the part, or simply wishes to exit the form and has indicated as much, the order part module exits.
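The sketch below strings together the order-part operations described above (select the form for the item's system, query a stock number, and auto-populate the form); every lookup table and field name in it is a placeholder for the example, not the disclosed implementation.

```python
FORMS_BY_SYSTEM = {
    "military-backend": ["part_number", "stock_number", "description", "quantity"],
}
STOCK_NUMBERS = {"PN-1001": "NSN-1620-01-234-5678"}
PART_DATA = {"PN-1001": {"description": "Wheel assembly"}}

def prepare_order_form(part_number, system):
    """Select the form for the item's system, query the stock number and part
    data, and auto-populate the form for the user to complete."""
    stock_number = STOCK_NUMBERS.get(part_number)
    if stock_number is None:
        raise LookupError(f"No valid stock number found for {part_number}")

    fields = dict.fromkeys(FORMS_BY_SYSTEM[system])   # the blank electronic form
    fields.update(PART_DATA.get(part_number, {}))     # queried data for the part
    fields["part_number"] = part_number
    fields["stock_number"] = stock_number
    return fields

form = prepare_order_form("PN-1001", "military-backend")
form["quantity"] = 2        # additional data supplied by the user
print(form)
```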
  • FIG. 20 is a flow diagram showing a submit order for part module for performing such functionality according to various embodiments of the disclosure.
  • the submit order for part module may be invoked by another module to submit the order for the part from the IETM such as, for example, the order part module previously described.
  • a user may select a mechanism (e.g., button) provided on an electronic order form and as a result, the order part module may invoke the submit order for part module.
  • the submit order for part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the user may be provided various options for submitting the order for the part depending on the embodiment. Some of these options may be contingent on whether or not the user's computing entity 110 is currently in communication with another system. For example, the user may be working out in the field using the IETM to perform maintenance where connectivity (e.g., a wireless network) is not available. As a result, the user may need to order a replacement for a part that was used during the maintenance repair. However, the user cannot submit the order for the part directly to the supplier since the user's computing entity 110 is unable to communicate with the supplier's system. In other instances, the computing entity 110 may not be in communication with any other system for security reasons.
  • the process flow 2000 begins with the submit order for part module reading (e.g., receiving input) the user's selection for submitting the order for the part in Operation 2010 .
  • the options available to the user may be dictated based at least in part on whether or not the user's computing entity 110 is currently in communication with any other systems.
  • the different options may be made available to the user on the electronic order form as one or more selection mechanisms (e.g., one or more buttons). Further, the selection mechanisms may be made available on the electronic order form based at least in part on the options currently available to the user.
  • One such option that may be used in various embodiments is to submit the order for the part directly to the supplier.
  • this option may involve the user's computing entity 110 submitting the order for the part directly to the supplier's system or may involve sending the order for the part initially to some intermediary who then submits the order to the supplier. Therefore, the submit order for part module determines whether input has been received indicating the user has selected the submit order option in Operation 2015 . If the submit order for part module determines the user has selected this option, then the submit order for part module submits the order to a remote system in Operation 2020 . Accordingly, the remote system may be associated with the supplier of the part or to an intermediary.
  • the submit order for part module may be configured to submit the order to a procurement system for an airline in instances in which the user is a maintenance employee of the airline who is ordering a replacement part for an aircraft.
  • the procurement system may process the order for the part and then submit it to the supplier to fulfill.
  • the submit order for part module may submit the order to the remote system using different procedures depending on the embodiment.
  • the order may be submitted via electronic data interchange (EDI) between the user's computing entity 110 and the supplier's or intermediary's system.
  • the order may be submitted via a message such as an email, instant messaging, text messaging, and/or the like.
  • the submit order for part module may determine whether input has been received indicating the user has selected to add the order to a shopping cart option in Operation 2025 . If so, then the submit order for part module places the order in the shopping cart in Operation 2030 . Once the order has been placed in the shopping cart, the order may then be submitted at a later time when the user's computing entity 110 is in communication with another system. Accordingly, depending on the embodiment, the order for the part may be submitted to the supplier directly or initially to an intermediary using any number of different procedures at the later time.
  • the submit order for part module generates a graphical code with the order information and provides the code for display for the user to scan using his or her mobile device.
  • the graphical code may be provided in various forms such as a barcode, a quick response (QR) code, a one-dimensional code, a universal product code, a data matrix code, and/or the like.
  • the order can be submitted using the mobile device's cellular network as a channel of communication, although the mobile device may be connected to other types of networks such as WIFI.
  • the user may use a generic code reader application on his or her mobile device or an application specifically designed to submit the order.
  • Using a specific application designed to submit the order may also allow for the order to be submitted in a secure manner.
  • the user may be required to enter security information into the application to open the application to scan the graphical code.
  • the submit order for part module determines whether input has been received indicating the user has selected the graphical code option in Operation 2035 . If so, then the submit order for part module generates the graphical code in Operation 2040 and provides the code in Operation 2045 . For example, in particular embodiments, the graphical code may be displayed on a separate window. At this point, the submit order for part module in some embodiments records the submission of the order in Operation 2050 . Therefore, in these particular embodiments, the IETM can be used as a recordkeeper for ordered parts. It is noted that recordation of the submission of orders placed in the shopping cart may not be performed in some embodiments until the orders have actually been submitted.
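For the graphical-code option, a minimal sketch using the third-party qrcode package (installed with "pip install qrcode[pil]") is shown below; the JSON payload format for the order is an assumption made for the example.

```python
import json
import qrcode  # third-party package; encodes data as a QR code image

def order_to_qr(order, path="order_qr.png"):
    """Encode the order details in a QR code the user can scan with a mobile device."""
    payload = json.dumps(order)
    image = qrcode.make(payload)   # returns a PIL-backed image object
    image.save(path)
    return path

path = order_to_qr({"part_number": "PN-1001",
                    "stock_number": "NSN-1620-01-234-5678",
                    "quantity": 2})
print("Scan", path, "with the mobile ordering application")
```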
  • FIG. 21A provides an example of a part 2100 that has been selected in which the option to order the part (e.g., button) 2110 has been provided to the user via a window according to various embodiments.
  • FIG. 21B provides an example of an electronic order form 2115 that has been provided on a window as a result of the user exercising the option to order the part 2110 according to various embodiments.
  • the user has been provided the option to directly submit the order for the part (e.g., button) 2120 and the option to place the order in the shopping cart (e.g., button) 2125 .
  • FIG. 21C provides an example of a graphical code in the form of a QR code 2130 generated according to various embodiments that can be scanned by the user to submit an order for a part.
  • FIG. 22 is a flow diagram showing a display topic module for performing such functionality according to various embodiments of the disclosure.
  • the display topic module may be invoked by another module to provide a topic for display such as, for example, the TOC module previously described. For instance, a user may select a topic found in a table of contents displayed on a window and as a result, the TOC module may invoke the display topic module.
  • the display topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • topics found in the technical documentation for an item may include procedures, tasks, operations, services, checklists, planning, and/or the like performed with respect to the item.
  • topics may include maintenance procedures and/or tasks performed on the item.
  • topics may include different components that make up the item.
  • a user may be viewing the table of contents for the technical documentation of the item and may select a maintenance procedure listed in the table of contents directly from the table to view the content in the technical documentation on conducting the maintenance procedure.
  • the user may be viewing an illustration (e.g., a 2D graphic) of the front braking assembly of an aircraft and may select the front wheel directly from the illustration to view the content on the technical documentation for the front wheel.
  • the technical documentation may be formatted according to S1000D standards and therefore, the documentation for a particular topic may be found in a data module.
  • a data module primarily includes two parts, metadata and content.
  • the metadata is made up of an identification section and a status section. These two sections are used to control a module's retrieval.
  • the content is what a user views on the topic.
  • the content typically is made up of textual information, as well as references (e.g., links) to any media content (e.g., illustrations such as 2D and/or 3D graphics, images, audio, videos, and/or the like) and other data pertaining to the topic.
  • the content of the data module is usually specific to the type of the data module, which is written in accordance with that type's schema.
  • the types of content found in a data module may include, for example: procedural used for tasks and steps information; fault used for troubleshooting; illustrated parts data used for parts lists and other illustrated parts data; process used for sequencing other data modules and/or steps; learning used for training-related material; maintenance checklists used for preventive maintenance, services, and inspections; and/or the like.
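As an informal illustration of the metadata/content split described above, the dataclasses below model a data module in memory; the field names are simplifications chosen for the example and are not the S1000D schema itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataModuleMetadata:
    identification: str   # e.g., the data module code
    status: str           # e.g., issue, security, and applicability status
    info_code: str        # indicates the content type (procedural, fault, ...)

@dataclass
class DataModuleContent:
    text: str                                             # textual information
    media_refs: List[str] = field(default_factory=list)   # links/ICNs to media

@dataclass
class DataModule:
    metadata: DataModuleMetadata
    content: DataModuleContent

dm = DataModule(
    DataModuleMetadata("DMC-AJ-A-29-10-00", "released", "procedural"),
    DataModuleContent("Remove the front wheel assembly.", ["ICN-0001"]),
)
print(dm.metadata.info_code, "->", dm.content.text)
```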
  • the process flow 2200 begins with the display topic module retrieving the textual information for the topic in Operation 2210 .
  • the display topic module creates selectable parts found in the textual information in Operation 2215 .
  • the parts (e.g., the part names and/or numbers) found in the textual information are recognized and made selectable by displaying them as a hyperlink and/or with some other type of selection mechanism such as a button.
  • a user viewing the textual information is able to access specific information via the IETM on the part directly from the textual information, as well as perform other functionality with respect to the part such as order the part from the IETM.
  • the display topic module creates selectable applicability found in the textual information in Operation 2220 in some embodiments. Similar to parts, as a result, a user viewing the textual information is able to access specific information on applicability mentioned in the textual information directly from the textual information.
  • the display topic module may lock data found in the textual information in Operation 2225 . This particular operation may be performed in some embodiments when the topic selected by the user provides alerts in the content such as warnings, cautions, notes, and/or the like. As discussed further herein, the content found after an alert may be locked (e.g., not able to view and/or not able to scroll through) until the user viewing the content has acknowledged the alert. This functionality helps to ensure the user is giving the alerts found in the content proper attention.
  • the display topic module may create a security classification for the textual information in Operation 2230 .
  • the textual information may be configured so that only those users with a certain level of security are able to view the content found in the textual information. Therefore, in particular embodiments, the display topic module may set up a security classification for the content based at least in part on the credentials of the user who is requesting to view the content. For example, this operation may involve marking the content with a particular level of security (e.g., top secret) and making the content unviewable to the user.
  • the display topic module determines whether the data module references any non-textual content in Operation 2235 .
  • non-textual content may involve illustrations such as 2D and/or 3D graphics and/or other media objects such as images, videos, audios, and/or the like. If so, then the display topic module retrieves one of the non-textual content in Operation 2240 . Accordingly, the reference to the non-textual content found in the data module may provide a link (e.g., html) and/or other information such as an information control number (ICN) to retrieve the non-textual content. In particular embodiments, the display topic module may then create a security classification for the non-textual content, similar to the textual information, in Operation 2245 .
  • the display topic module determines whether the data for the topic (e.g., the data module for the topic) references other non-textual content (e.g., another illustration or media object) in Operation 2250 . If so, then the display topic module returns to Operation 2240 , retrieves the next non-textual content referenced in the data module, and creates a security classification for the retrieved non-textual content.
  • the display topic module provides the content for the topic for display via a window in Operation 2255 .
  • the content may be displayed using a number of different configurations depending on the embodiment.
  • the display topic module may be configured to display the content on multiple view panes so that multiple aspects of the content (e.g., textual information and illustrations) can be viewed by the user at the same time.
  • the window displaying the content may be configured so that the view panes are displayed on non-overlapping portions of the window.
  • the display topic module may invoke various modules to perform some of the operations just described. Accordingly, a discussion of these various modules is now provided.
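Before turning to those modules, the sketch below shows, with invented helper names and pass-through stand-ins, how the display-topic operations above could be orchestrated; it is a structural illustration only, not the disclosed module.

```python
def display_topic(data_module, user, retrieve_media, make_parts_selectable,
                  make_applicability_selectable, lock_alerts, classify):
    text = data_module["text"]                            # Operation 2210
    text = make_parts_selectable(text)                    # Operation 2215
    text = make_applicability_selectable(text, user)      # Operation 2220
    text = lock_alerts(text)                              # Operation 2225
    panes = {"textual": classify(text, user), "media": []}
    for ref in data_module["media_refs"]:                 # Operations 2235-2250
        panes["media"].append(classify(retrieve_media(ref), user))
    return panes                                          # Operation 2255

# Toy usage with pass-through helpers standing in for the modules below.
panes = display_topic(
    {"text": "Install part PN-1001.", "media_refs": ["ICN-0001"]},
    user={"clearance": "secret"},
    retrieve_media=lambda ref: f"<media {ref}>",
    make_parts_selectable=lambda t: t,
    make_applicability_selectable=lambda t, u: t,
    lock_alerts=lambda t: t,
    classify=lambda content, user: content,
)
print(panes)
```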
  • FIG. 23 is a flow diagram showing a selectable parts module for performing such functionality according to various embodiments of the disclosure.
  • the selectable parts module may be invoked by another module to cause the parts to be displayed as selectable such as, for example, the display topic module previously described.
  • the selectable parts module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 2300 begins with the selectable parts module selecting a part from the list of parts in Operation 2310 .
  • a list of parts may be generated in various embodiments from the illustrated parts breakdown found in a publication of the technical documentation for the item during a time when the publication is being imported into the IETM. Accordingly, this list of parts may identify the information associated with each part found in the list such as, for example, illustrations of components of the item displaying the part and processes, procedures, maintenance, and/or the like that make use of the part.
  • the selectable parts module searches the textual information for a topic (e.g., the data module for a topic) to identify occurrences of the part in the textual information in Operation 2315 .
  • the selectable parts module may be configured to perform some type of character recognition to identify occurrences of the part in the textual information.
  • the selectable parts module may configure the part so that multiple types of selection may be used by a user in some embodiments.
  • the selectable parts module may configure the part so that a user can hover his or her mouse over the part (e.g., the part name and/or number) to view a preview providing preview information on the part and click on the part to display content (e.g., textual information, as well as media content such as illustrations) for the part on a window.
  • various functionality may be provided as a result of a user selecting the part in the textual information such as, for example, functionality to enable the user to order the part from the IETM and/or functionality to allow the user to view other items that use the part.
  • the selectable parts module determines whether another part is found on the list of parts in Operation 2330 . If so, then the selectable parts module returns to Operation 2310 , selects the next part found on the list of parts, and repeats the operations just described for the newly selected part. Once the selectable parts module has processed all the parts found on the list of parts, the module exits.
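A minimal sketch of making part occurrences selectable is given below; the hyperlink markup, URL pattern, and part list are assumptions used only to illustrate the search-and-wrap step.

```python
import re

def make_parts_selectable(text, parts):
    """Wrap each occurrence of a part name or number in a selectable link."""
    for part in parts:
        for token in (part["number"], part["name"]):
            pattern = re.escape(token)
            text = re.sub(
                pattern,
                lambda m, p=part: f'<a href="/parts/{p["number"]}">{m.group(0)}</a>',
                text,
            )
    return text

parts = [{"number": "PN-1001", "name": "wheel assembly"}]
print(make_parts_selectable("Remove the wheel assembly (PN-1001).", parts))
```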
  • FIG. 24 is a flow diagram showing a selectable applicability module for performing such functionality according to various embodiments of the disclosure.
  • the selectable applicability module may be invoked by another module to cause applicability to be displayed as selectable such as, for example, the display topic module previously described.
  • the selectable applicability module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • applicability generally pertains to the context for which the information for a topic is valid.
  • the context can be associated with a physical configuration of an item, but can also include other aspects such as support equipment availability and/or environmental conditions.
  • a user may be viewing information on the front wheel assembly for an aircraft. Accordingly, that information may cover both an air brake configuration of the assembly and a disc brake configuration of the assembly.
  • the user may be specifically working on an aircraft at the time with disc brakes. Therefore, the information being viewed on the front wheel assembly pertaining to disc brakes is applicable while the information pertaining to air brakes is not.
  • the IETM may be configured in various embodiments to allow the user to sign into the IETM to view the technical documentation for an item with respect to a specific object (e.g., a specific aircraft in an airline's fleet or a specific aircraft configuration) or a universal object.
  • with respect to a universal object, for example, a user may be conducting training on performing maintenance on a specific model of aircraft and therefore signs into the IETM using a universal object so that he or she can view technical documentation on the model of aircraft using either an air brake configuration or a disc brake configuration.
  • the process flow 2400 begins with the selectable applicability module determining whether the user is signed into the IETM with respect to a specific object or a universal object for the item in Operation 2410 .
  • the reason for making such a determination in these embodiments is the selectable applicability module may be configured to only make those occurrences of applicability found in the textual information selectable that are actually applicable to the current instance of the user signed into the IETM.
  • for example, if the occurrences of applicability involving disc brakes are not applicable to the specific object for which the user is signed in, then the selectable applicability module does not make those occurrences selectable in the textual information.
  • the selectable applicability module generates only those occurrences of applicability related to the specific object found in the textual information as selectable in Operation 2415 .
  • the selectable applicability module generates all of the occurrences of applicability found in the textual information as selectable in Operation 2420 .
  • the selectable applicability module may be configured in particular embodiments to perform some type of character recognition to identify occurrences of applicability in the textual information.
  • the selectable applicability module may make an occurrence of applicability selectable in the textual information using a number of different mechanisms.
  • the selectable applicability module may configure an occurrence of applicability so that multiple types of selection may be used by a user in some embodiments.
  • the selectable applicability module may provide various functionality for an occurrence of applicability as a result of a user selecting the occurrence in the textual information.
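The snippet below sketches the specific-versus-universal behavior described above using an invented tagging scheme; the annotation format, sign-in fields, and markup are assumptions for the example.

```python
def make_applicability_selectable(text, annotations, sign_in):
    """annotations: list of (phrase, applicability) pairs found in the text;
    sign_in: {"object": "specific" or "universal", "configuration": ...}."""
    for phrase, applicability in annotations:
        applies = (sign_in["object"] == "universal"
                   or applicability == sign_in["configuration"])
        if applies:
            text = text.replace(
                phrase, f'<applic target="{applicability}">{phrase}</applic>')
    return text

text = "Bleed the disc brakes. For air brakes, see task 12."
annotations = [("disc brakes", "disc-brake"), ("air brakes", "air-brake")]

# Signed in against a specific aircraft with air brakes: only the air-brake
# applicability becomes selectable.
print(make_applicability_selectable(
    text, annotations, {"object": "specific", "configuration": "air-brake"}))
```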
  • FIG. 25 is a flow diagram showing a lock content module for performing such functionality according to various embodiments of the disclosure.
  • the lock content module may be invoked by another module to lock content for a topic such as, for example, the display topic module previously described.
  • the lock content module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the textual information for a topic may include element types providing various alerts.
  • the textual information may provide a warning alerting a user of possible hazards associated with a material, a process, a procedure, and/or the like.
  • the textual information may provide a caution alerting the user that damage to a material is possible if instructions in an operational and/or procedural task are not followed precisely.
  • alerts are tagged in the textual information of the data (e.g., data module) for the topic found in the technical documentation.
  • the process flow 2500 begins with the lock content module reading the textual information for the topic in Operation 2510 . Accordingly, the lock content module determines whether a tag for an alert has been encountered in the textual information in Operation 2515 . If so, then the lock content module records a marker for the tag in Operation 2520 . Here, the marker identifies where in the textual information the tag is found. As discussed herein, the marker enables the lock content module to lock the portion of the content found in the textual information associated with the alert. The lock content module then determines whether additional textual information remains after the occurrence of the alert in Operation 2525 . If so, then the lock content module returns to Operation 2510 and continues reading the textual information to identify further occurrences of tags for alerts in the information.
  • the lock content module selects a marker for a tag in Operation 2530 .
  • the lock content module then identifies the preceding marker for a tag in Operation 2535 .
  • the lock content module may be configured in particular embodiments to skip the first marker of a tag found in the textual information since this marker/tag would not have a preceding marker/tag found in the textual information.
  • the lock content module locks the portion of the content found between the tags for the two markers in the textual information in Operation 2540 .
  • the lock content module may be configured to lock the portion of the content using a number of different approaches and/or any combination thereof.
  • the lock content module may obscure a user's ability to view the portion of the content in some embodiments.
  • the lock content module may grey out the portion of the content so that it cannot be read.
  • the lock content module may disable any interactive functionality found within the portion of the content.
  • the portion of the content may contain an occurrence of a selectable part.
  • the lock content module may disable the selectable functionality of the selectable part.
  • the lock content module may lock the user's ability to scroll through the portion of the content displayed on the window.
  • the module determines whether a marker for another tag exists in Operation 2545 . If so, then the lock content module returns to Operation 2530 , selects the next marker, and performs the operations just discussed to lock the portion of the content in the textual information accordingly. Once the lock content module has processed all the markers, the module exits.
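As an illustrative sketch of the lock content flow, the code below records a marker at each alert tag and marks the content following each alert as locked until acknowledged; the tag syntax and the locked/visible representation are placeholders, not the disclosed format.

```python
import re

ALERT_TAG = re.compile(r"<(warning|caution|note)>")

def lock_after_alerts(text):
    """Split the text at each alert tag and mark the trailing portions locked."""
    markers = [m.start() for m in ALERT_TAG.finditer(text)]   # Operations 2510-2525
    if not markers:
        return [{"content": text, "locked": False}]

    segments = [{"content": text[: markers[0]], "locked": False}]
    boundaries = markers + [len(text)]
    for start, end in zip(boundaries, boundaries[1:]):         # Operations 2530-2545
        segments.append({"content": text[start:end], "locked": True})
    return segments

text = ("Step 1: open panel. <warning>High voltage.</warning> "
        "Step 2: remove cover. <caution>Fragile seal.</caution> Step 3: inspect.")
for seg in lock_after_alerts(text):
    state = "LOCKED " if seg["locked"] else "visible"
    print(state, seg["content"].strip())
```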
  • FIG. 26 is a flow diagram showing a security classification module for performing such functionality according to various embodiments of the disclosure.
  • the security classification module may be invoked by another module to set the security classification for content such as, for example, the display topic module previously described.
  • the security classification module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the metadata for a topic may include a security classification tag (e.g., a code) that identifies a level of security with respect to the content for the topic. This may also be true with respect to media content for the topic such as illustrations, videos, audio, and/or other data associated with the topic. Therefore, when displaying the various components of content for the topic on a window, the security classification tag found in the metadata for a particular component of content can be used to set formatting and properties for the content.
  • the process flow 2600 begins with the security classification module reading the security classification tag for a textual or a non-textual component of content for the topic in Operation 2610 .
  • the security classification module then reads the credentials for the user in Operation 2615 .
  • the formatting and/or properties associated with the content may be contingent based at least in part on the user's level of security. For example, if the user has a high level of security (e.g., a top secret clearance), then the user may be able to view content that may not normally be available for viewing by many other users.
  • the credentials used by the user in signing into the IETM may be used to identify the user's level of security.
  • the security classification module in some embodiments formats a border for the content based at least in part on the security classification of the content in Operation 2620 .
  • the security classification that may be set for the content may include unclassified, classified, secret, top secret, and/or the like.
  • the security classification module may format a border placed around the content as it is displayed on a window based at least in part on the security classification set for the content.
  • the content may be displayed on the window in a view pane. Therefore, in this example, the security classification module may format a border placed around the view pane by including a title in the border identifying the security classification for the content and displaying the border in a particular color. Such formatting may help the user to quickly identify the security classification associated with the different components of content being displayed for the topic on the window.
  • the security classification module in some embodiments sets the accessibility of the content based at least in part on the security classification of the content and the user's credentials in Operation 2625 .
  • the security classification module sets the accessibility of the content as it is displayed on the window based at least in part on the level of security identified in the security classification tag for the content and the level of security identified in the user's credentials used to sign into the IETM. For example, if the level of security identified in the security classification tag for the content is top secret and the level of security identified in the user's credential is unclassified, then the security classification module may set the content so that it is not accessible on the window. In this instance, the security classification module may make the content unviewable on the window to the user.
  • the security classification module may also disable functionality for the content such as, for example, disabling the user's ability to print the content, copy the content, email the content, and/or the like.
  • the security classification module may be configured to also set the accessibility for various interactive functionality found in the content.
  • the content may include a part (e.g., a part number and/or name) that is normally selectable to access information on the part.
  • the security classification module may have set the accessibility for the content to allow the user to view the content on the window.
  • the security classification module may have determined the level of security for the content is unclassified and the user's level of security is classified and as a result, set the accessibility for the content to allow the user to view the content.
  • the security classification module may also read a classification tag for the selectable part in Operation 2630 .
  • the security classification module may read the classification tag found in the metadata for data (e.g., the data module) found in the technical documentation for the part.
  • the classification tag may identify the level of security set for the part is top secret. Therefore, as a result, the security classification module may disable the user's ability to select the part in the content in Operation 2635 .
  • the security classification module may then determine whether any further interactive functionality is found in the content in Operation 2640 . If so, then the security classification module may perform the operations just described for the additional functionality.
  • the security classification module may be configured in particular embodiments to set the formatting and/or functionality of content of various topics with respect to other features and/or displays that are provided via the IETM. For instance, the security classification module may also be configured to set the accessibility of topics found in a table of contents for the technical documentation for an item based at least in part on the security classification set for the topics. Those of ordinary skill in the art can envision other applications of setting security classification formatting and/or functionality of content in light of this disclosure.
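The sketch below illustrates one way the security comparison could be applied to a component of content; the level ordering, border title, and disabled actions are assumptions chosen for the example rather than the disclosed behavior.

```python
LEVELS = {"unclassified": 0, "classified": 1, "secret": 2, "top secret": 3}

def classify_content(content, content_level, user_level):
    """Compare the content's classification against the user's clearance."""
    accessible = LEVELS[user_level] >= LEVELS[content_level]   # Operation 2625
    return {
        "border_title": content_level.upper(),                 # Operation 2620
        "viewable": accessible,
        "content": content if accessible else None,            # hidden if not cleared
        "actions_enabled": accessible,   # e.g., print/copy/email disabled otherwise
    }

# A secret procedure viewed by an unclassified user is not viewable.
print(classify_content("Step 1: arm the system.", "secret", "unclassified"))
# An unclassified illustration viewed by the same user remains viewable.
print(classify_content("<illustration ICN-0001>", "unclassified", "unclassified"))
```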
  • FIG. 27 provides an example of security classification formatting and functionality set for the display of a topic according to various embodiments.
  • a border 2700 has been placed around a view pane displaying content for the topic on a window.
  • the border 2700 includes a title indicating the content (e.g., textual information) for the topic is secret.
  • the steps found in the textual information 2710 have been made unviewable to the user.
  • an illustration is also displayed in a view pane on the window that is viewable to the user.
  • the border for the illustration 2715 indicates the illustration is unclassified and therefore the user is able to view it.
  • the example demonstrates how the formatting and functionality of various sections of content for a topic may be set differently based at least in part on the security classifications identified for the various sections of content.
  • FIGS. 28A and 28B are a flow diagram showing a topic module for performing such functionality according to various embodiments of the disclosure.
  • the topic module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed.
  • the topic module is executed once a user has selected a topic to view and the topic is displayed to the user on a window.
  • the window may be displayed using various configurations depending on the embodiment. For example, the window may display multiple panes that provide various content for the topic. Once displayed, the user may decide to invoke different interactive functionality provided for the topic.
  • the process flow 2800 begins with the topic module determining whether input has been received indicating the user has selected a selectable part displayed on the window to view related information on the part in Operation 2810 .
  • if the user has selected a part (e.g., uses a mouse to hover over the part, click on the part, alt-click on the part, and/or the like), then the topic module generates and provides a preview to display information on the part to the user in Operation 2815 .
  • the preview may be provided in a similar manner as the other previews described herein.
  • the preview may be provided on a separate window from the window displaying the topic.
  • different functionality may be provided on the preview in some embodiments.
  • the preview may provide functionality to allow the user to search for other occurrences of the part in the technical documentation for the item.
  • the topic module also determines whether input has been received indicating the user has selected a selectable applicability displayed on the window to view information on the applicability in Operation 2820 .
  • applicability generally pertains to the context for which the information provided for a topic is valid. Therefore, if the user selects an applicability found in the content displayed for the topic (e.g., hovers over the applicability, clicks on the applicability, alt-clicks on the applicability, and/or the like), the topic module generates and provides a preview for display providing information on the meaning of the applicability in Operation 2825 .
  • the preview may be provided in a similar manner as the other previews described herein. For example, the preview may be provided on a separate window from the window displaying the topic.
  • the topic module also determines whether input has been received indicating the user would like to view the source data for the topic in Operation 2830 .
  • the source data may represent the source of the content found in the technical documentation for the topic.
  • the source data may involve data from a file such as a PDF and/or an SGML file. Therefore, if the user has indicated he or she would like to view the source data for the topic, then the topic module provides the source data for display in Operation 2835.
  • the source data may be displayed on a separate window from the window displaying the topic.
  • this particular functionality may be configured to perform differently based at least in part on the user's selection of this functionality.
  • the user is provided with the corresponding section of the source data as that currently displayed on the window for the topic in response to the user exercising a first type of selection (e.g., single click), while the user is provided with the entire source data for the topic in response to the user exercising a second, different type of selection (e.g., alt-click).
  • the topic module also determines whether input has been received indicating the user has selected an option to generate an annotation for the topic in Operation 2840 .
  • annotations may be generated for different content for the topic. For instance, the user may generate an annotation with respect to certain text found in the textual information for a topic and/or the user may generate an annotation with respect to other content for the topic such as an illustration (e.g., 2D and/or 3D graphic). If the user has selected the option to generate an annotation for the topic, then the topic module does so in Operation 2845 .
  • in some instances, the annotation can be recorded and stored within the IETM and can only be viewed by the user, while in other instances, other users may be able to view and comment on the annotation.
  • the topic module may provide further functionality based at least in part on the content of the topic involving sequential information.
  • the topic may involve a process, procedure, task, checklist, and/or the like that involves various operations and/or steps to be performed.
  • the user may be viewing a maintenance task involving steps the user is to perform for the task. Therefore, in these particular embodiments, the topic module may determine whether the data for the topic (e.g., the data module for the topic) involves sequential information in Operation 2850 .
  • the topic module may make such a determination based at least in part on the type of content found in the data for the topic as indicated in the data's metadata (e.g., in the data module's information code). If the content does involve sequential information, then the topic module provides further functionality for the content in Operation 2855 .
  • the topic module determines whether input has been received indicating the user has performed an action with respect to a step (operation) in a sequence such as a checklist sequence in Operation 2860 .
  • such an action may involve the user selecting a step and/or acknowledging a step in the sequence.
  • the user may wish to have the content (e.g., textual information) provided in the steps found in sequential information (e.g., the steps found in a checklist) displayed using one or more enhanced formats to better enable the user's comprehension of the content.
  • the user may wish to have the content displayed in a higher magnification (e.g., textual content displayed in a larger font) so that the user is better able to see the content.
  • the user may wish to have content that is relevant to the user to be displayed using some type of format (relevant format) so that the content stands out to the user.
  • the user may be viewing sequential information that involves a maintenance procedure and/or task.
  • the maintenance procedure/task may include several steps.
  • the topic module assesses the step in Operation 2865 .
  • the action may entail the user selecting the step so that the step receives focus and/or acknowledging completion of the step.
  • the topic module determines whether input has been received indicating the user has acknowledged an alert in Operation 2870 .
  • content is locked based at least in part on alerts provided in the content. For example, the content may provide warnings and/or cautions for the user. Therefore, if the user has acknowledged an alert, the topic module unlocks the corresponding content for the alert in Operation 2875 .
  • the user may be provided functionality (e.g., an option) in particular embodiments to transfer a job (e.g., process, procedure, task, checklist, and/or the like) he or she is currently performing to another user.
  • the topic module may determine whether input has been received indicating the user has selected the option to transfer a job in Operation 2880 . If so, then the topic module may enable functionality to allow the user to transfer the job in Operation 2881 .
  • functionality may be implemented that updates media content provided on the window as the user scrolls through sequential information.
  • the user may be viewing the steps for a maintenance task displayed on a first view pane shown on the window.
  • illustrations for the maintenance task may be displayed on a second view pane shown on the window.
  • a step in the maintenance task may involve a particular component and an illustration of the component may be provided to aid the user in locating the component on the actual item. Therefore, as the user scrolls through the various steps of the maintenance task, the illustrations provided on the second view pane may change automatically in particular embodiments as the user moves from step-to-step and different illustrations are referenced in the steps.
  • the topic module determines whether input has been received indicating the user is scrolling through the sequential information in Operation 2882 . If the user is scrolling through the sequential information, then the topic module updates the media content displayed on the window accordingly in Operation 2883 .
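  • A minimal Python sketch of this scroll-driven behavior is shown below (the step-to-illustration mapping and pane object are assumptions made for illustration, not part of the disclosure); as the topmost visible step changes, the second view pane is updated only when the referenced illustration differs from the one already shown:

# Hypothetical mapping of step numbers to the illustrations they reference.
STEP_ILLUSTRATIONS = {
    1: "fig_fuel_pump.svg",
    2: "fig_fuel_pump.svg",
    3: "fig_filter_housing.svg",
    4: "fig_drain_valve.svg",
}

class MediaPane:
    """Stand-in for the second view pane that displays media content."""
    def __init__(self) -> None:
        self.current = None

    def show(self, illustration: str) -> None:
        if illustration != self.current:   # avoid redundant redraws
            self.current = illustration
            print(f"displaying {illustration}")

def on_scroll(visible_step: int, pane: MediaPane) -> None:
    """Called as the user scrolls the steps; keeps the media pane in sync."""
    illustration = STEP_ILLUSTRATIONS.get(visible_step)
    if illustration is not None:
        pane.show(illustration)

pane = MediaPane()
for step in (1, 2, 3, 3, 4):   # scrolling through the task swaps the image twice
    on_scroll(step, pane)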
  • functionality may be implemented for a component found in the sequential data that is an electrical connector such as a plug having a plurality of pins.
  • the user may be maintenance personnel who is out in the field, and the sequential information may entail a maintenance procedure and/or task being performed by the user that references an electrical plug.
  • the maintenance procedure/task may involve the user conducting troubleshooting on an electrical problem by testing various combinations of pins (e.g., pairs of pins) found in the plug.
  • these plugs can be quite small and/or have a large number of pins, and the user may have trouble identifying the specific pins on the physical plug that he or she is supposed to test.
  • the topic module determines whether input has been received indicating the user has selected a part that is an electrical connector in Operation 2884 . If so, then the topic module enables functionality for the selected connector in Operation 2885 .
  • the instructions for performing a maintenance task may reference a particular part that is to be replaced during the task.
  • some type of media may also be provided such as an illustration to assist the user in actually replacing the part.
  • the instructions may be displayed on a first view pane and the illustration may be displayed on a second view pane.
  • the part may be provided in the first and/or second view panes as selectable. As a result, the user's selection of the part in either the first or the second view pane may cause the part to be highlighted in the other view pane.
  • the part is automatically highlighted in the illustration to assist the user in locating the part in the illustration.
  • the part is automatically highlighted in the sequential information to assist the user in determining which instructions in the maintenance task involve the part.
  • the topic module determines whether input has been received indicating the user has selected a component in Operation 2886 . If the user has selected a component, then the topic module highlights the component on the window accordingly in Operation 2887 .
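  • The following Python sketch illustrates one way this cross-pane behavior could be modeled (the pane class and part identifiers are hypothetical); selecting a part in either pane highlights the same part in the other pane:

class ViewPane:
    """Stand-in for a view pane that can highlight selectable parts."""
    def __init__(self, name: str, part_ids: set) -> None:
        self.name = name
        self.part_ids = part_ids
        self.highlighted = set()

    def highlight(self, part_id: str) -> None:
        if part_id in self.part_ids:
            self.highlighted.add(part_id)

def on_part_selected(part_id: str, source: ViewPane, other: ViewPane) -> None:
    """Selecting a part in the source pane highlights it in the other pane."""
    other.highlight(part_id)

instructions = ViewPane("instructions", {"bolt-14", "gasket-3"})
illustration = ViewPane("illustration", {"bolt-14", "gasket-3"})
on_part_selected("bolt-14", instructions, illustration)   # user clicks the part in the text
print(illustration.highlighted)   # -> {'bolt-14'}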
  • the topic module may be configured to provide the user with certain functionality at the end of a topic (e.g., at the end of a data module).
  • for example, some type of selection mechanism (e.g., a button) may be provided that the user selects upon reaching the end of the topic.
  • the topic module determines whether input has been received indicating the end of the topic has been reached in Operation 2890 . If so, then the topic module enables the end of topic functionality in Operation 2891 .
  • the topic module may be configured to enable the user to perform certain actions via verbal commands. For example, the user may be able to navigate through content by reciting verbal commands that are detected via an audio input of a user computing entity 110 being used by the user to access the IETM. If such functionality is being provided, then the topic module determines whether a verbal command has been received in Operation 2892 . If so, then the topic module enables the verbal command functionality in Operation 2893 .
  • the topic module may determine whether the content for the topic currently being displayed involves wiring data in Operation 2894 . If so, then the topic module enables wiring functionality in Operation 2895 . Likewise, in particular embodiments, the topic module may determine whether the content for the topic involves media providing a chart in Operation 2896 . If so, then the topic module enables crosshairs functionality in Operation 2897 . Finally, in particular embodiments, the topic module may determine whether the content for the topic involves 3D graphics in Operation 2898 . If so, then the topic module enables 3D graphic functionality in Operation 2899 .
  • the topic module determines whether input has been received indicating the user wishes to exit viewing the content for the topic in Operation 2899A. For example, the user may have simply selected a mechanism (e.g., a button) to exit the topic. If that is the case, then the topic module exits. Otherwise, the topic module continues to monitor the user's interactions.
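  • At a high level, the monitoring behavior described above resembles an event loop that dispatches each user interaction to a corresponding handler until the user exits the topic. The short Python sketch below models that loop; the event names and handlers are placeholders chosen for illustration rather than anything defined by the disclosure:

def topic_module_loop(get_input, handlers) -> None:
    """Poll for user interactions and dispatch each one to its handler,
    exiting once the user indicates he or she is done viewing the topic."""
    while True:
        event = get_input()            # e.g., "select_part", "acknowledge_alert", "exit"
        if event == "exit":
            return
        handler = handlers.get(event)
        if handler is not None:
            handler()

# Example wiring with placeholder handlers and a scripted input source.
events = iter(["select_part", "acknowledge_alert", "exit"])
handlers = {
    "select_part": lambda: print("show part preview"),
    "acknowledge_alert": lambda: print("unlock content following the alert"),
}
topic_module_loop(lambda: next(events), handlers)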
  • the topic module in various embodiments may invoke various modules to perform some of the operations just described. Accordingly, a discussion of these various modules is now provided.
  • FIG. 29 is a flow diagram showing a display content for part module for performing such functionality according to various embodiments of the disclosure.
  • the display content for part module may be invoked by another module to display the content such as, for example, the topic module previously described.
  • the display content for part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 2900 begins with the display content for part module retrieving the content for the part selected by the user in Operation 2910 .
  • parts (e.g., part names and/or numbers) found within the textual information of the topic may be displayed as selectable in some embodiments.
  • the user may select one of the parts in the textual information (e.g., use a mouse to hover over the part, click on the part, alt-click on the part, and/or the like).
  • the display content for part module retrieves related information for the part to display.
  • the display content for part module may retrieve metadata from the data (e.g., the data module) found in the technical documentation for the part, as well as the topics found in the table of contents in which the part is mentioned.
  • the display content for part module provides the content for display in Operation 2915 .
  • the content may be displayed as a preview as previously discussed. Accordingly, the preview may be displayed on a separate window that is superimposed over a portion of the window displaying the topic.
  • the displayed content may provide information on the part such as, for example, the part name and number.
  • the content may provide various functionality the user may invoke with respect to the part. For example, a selection mechanism (e.g., a hyperlink and/or button) may be provided to allow the user to search the technical documentation for the item to identify other instances where the part is mentioned/used (e.g., maintenance tasks). A selection mechanism may also be provided that enables the user to order the part from the IETM.
  • the display content for part module determines whether input has been received indicating the user has selected the functionality to order the part in Operation 2920 . If so, then the display content for part module generates and provides the order form for ordering the part in Operation 2925 . For example, the display content for part module invokes the order part module previously discussed ( FIG. 19 ) in some embodiments.
  • the display content for part module determines whether input has been received indicating the user has selected the functionality to search the technical documentation to identify other instances of the part in Operation 2930 . If so, then the display content for part module queries the technical documentation for the item in Operation 2935 .
  • the display content for part module may query various items found in the technical documentation such as the table of contents, data modules, media objects, and/or the like to identify instances in which the part name and/or number is found. The display content for part module then provides the results of the search for display in Operation 2940 .
  • the display content for part module may be configured in some embodiments to specifically query and identify the maintenance procedures/tasks found in the technical documentation in which the part is used and/or involved. Therefore, in these embodiments, the display content for part module provides a list of the maintenance procedures/tasks for display for the user to view.
  • the display content for part module may be configured to display a set number of the procedures/tasks such as, for example, five of the procedures/tasks.
  • the display content for part module may use a number of criteria to identify which of the procedures/tasks to display such as, for example, alphabetical order, most frequently viewed, and/or the like.
  • a selection mechanism (e.g., a button) may also be provided to allow the user to view additional maintenance procedures/tasks beyond those initially displayed.
  • the display content for part module may provide the list so that each of the maintenance procedures/tasks displayed is selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) so that the user may view a particular maintenance procedure/task if desired. Therefore, in these particular embodiments, the display content for part module determines whether input has been received indicating the user has selected a particular maintenance procedure/task to view in Operation 2945 . If so, then the display content for part module retrieves the maintenance procedure/task and provides the procedure/task for display to the user in Operation 2950 .
  • the display content for part module determines whether input has been received indicating the user would like to exit the display of the content in Operation 2955 . If so, then the display content for part module causes the display of the content to be closed and exits. Otherwise, the display content for part module continues to monitor the user's interactions.
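  • As one hedged illustration of how the search described above might limit and order its results, the Python sketch below (the index structure, task fields, and ranking criterion are assumptions) returns up to a set number of maintenance procedures/tasks that mention a part:

from dataclasses import dataclass

@dataclass
class MaintenanceTask:
    title: str
    view_count: int   # one possible ranking criterion

def tasks_for_part(part_number: str, tasks_by_part: dict, limit: int = 5) -> list:
    """Return up to `limit` procedures/tasks that mention the part, most
    frequently viewed first (alphabetical order would be another option)."""
    matches = tasks_by_part.get(part_number, [])
    return sorted(matches, key=lambda t: t.view_count, reverse=True)[:limit]

# Example index, as might be built from the data modules and table of contents.
index = {
    "PN-1234": [
        MaintenanceTask("Replace fuel filter", 42),
        MaintenanceTask("Inspect fuel lines", 17),
        MaintenanceTask("Drain fuel tank", 8),
    ],
}
for task in tasks_for_part("PN-1234", index):
    print(task.title)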
  • FIG. 30 provides an example of a window 3000 providing content for a part 3010 selected by a user according to various embodiments.
  • the window 3000 displaying the content has been superimposed over a portion of the window for the topic in this example.
  • the display of the content provides the user with a selection mechanism (e.g., a button) 3015 to enable the user to order the part from the IETM.
  • the display of the content lists related maintenance procedures/tasks 3020 in which the part is used and/or mentioned.
  • the display of the content provides a selection mechanism (e.g., a button) 3025 to view additional maintenance procedures/tasks in which the part is used and/or mentioned.
  • FIG. 31 is a flow diagram showing a display content for applicability module for performing such functionality according to various embodiments of the disclosure.
  • the display content for applicability module may be invoked by another module to display the content such as, for example, the topic module previously described.
  • the display content for applicability module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 3100 begins with the display content for applicability module retrieving the content for the applicability selected by the user in Operation 3110 .
  • applicability found within the textual information of the topic may be displayed as selectable in some embodiments. Therefore, the user may select one of the occurrences of applicability in the textual information (e.g., use his or her mouse to hover over the occurrence, click on the occurrence, alt-click on the occurrence, and/or the like).
  • the display content for applicability module retrieves related information for the applicability to display. For example, the display content for applicability module may retrieve information on the meaning of the applicability as it pertains to the item.
  • the display content for applicability module provides the content for display for the user to view in Operation 3115 .
  • the content may be displayed as a preview as previously discussed. Accordingly, the preview of the content may be displayed on a separate window that is superimposed over a portion of the window for the topic.
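  • A minimal sketch of building such a preview is shown below in Python; the rule text and identifiers are invented for illustration, and a real IETM would draw the meaning of the applicability from the technical documentation itself:

# Hypothetical applicability rules keyed by the identifier selected in the text.
APPLICABILITY_RULES = {
    "APPL-ENG-01": "Valid for aircraft fitted with engine models A100 and A200",
}

def preview_applicability(applicability_id: str) -> dict:
    """Assemble the content shown in the preview window for an applicability."""
    return {
        "applicability_id": applicability_id,
        "rule": APPLICABILITY_RULES.get(applicability_id, "No rule recorded"),
    }

print(preview_applicability("APPL-ENG-01"))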
  • FIG. 32 provides an example of a window 3200 displaying content provided for an occurrence of applicability 3210 selected by a user according to various embodiments.
  • the window 3200 displaying the content has been superimposed over a portion of the window for the topic in this example.
  • the content provides the user with a rule 3215 listing the components (e.g., engines) to which the applicability applies.
  • FIG. 33 is a flow diagram showing a display source for topic module for performing such functionality according to various embodiments of the disclosure.
  • the display source for topic module may be invoked by another module to display the source such as, for example, the topic module previously described.
  • the display source for topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the user may indicate he or she would like to view the source data for a topic.
  • the source data may represent the source of the content found in the technical documentation for the topic.
  • the source data may involve data from a file such as a PDF and/or an SGML file. Therefore, if the user has indicated he or she would like to view the source data for the topic, then the process flow 3300 begins with the display source for topic module determining whether input has been received indicating the user would like to view a section from the source or the entire source in Operation 3310 .
  • the user may be provided with multiple ways of selecting the selection mechanism in particular embodiments to indicate what portion of the source he or she would like to view.
  • the user is provided with the corresponding section of the source data as that currently displayed on the window for the topic in response to the user exercising a first type of selection (e.g., single click), while the user is provided with the entire source data for the topic in response to the user exercising a second, different type of selection (e.g., alt-click).
  • if the display source for topic module determines the user has exercised the first type of selection, then the display source for topic module retrieves the corresponding section (e.g., pages) of the source in Operation 3315 and provides the section of the source for display in Operation 3320 .
  • the section of the source may be displayed on a window that is superimposed over a portion of the window displaying the topic in some embodiments, while in other embodiments, the section of the source may be displayed on a separate view pane on the window.
  • if the display source for topic module determines the user has exercised the second type of selection, then the display source for topic module retrieves the entire source in Operation 3325 and provides the entire source for display in Operation 3330 .
  • the entire source may be displayed on a window that is superimposed over a portion of the window displaying the topic in some embodiments, while in other embodiments, the entire source may be displayed on a separate view pane on the window.
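  • The following Python sketch models the two selection types described above (the selection names and page representation are assumptions): a first type of selection returns only the page corresponding to the content currently displayed, while a second type returns the entire source:

def source_to_display(selection_type: str, source_pages: list, current_page: int) -> list:
    """Return the page(s) of the source to display based on the selection type."""
    if selection_type == "single_click":
        return [source_pages[current_page - 1]]   # only the corresponding section
    if selection_type == "alt_click":
        return list(source_pages)                 # the entire source
    return []

pages = [f"page {n}" for n in range(1, 6)]
print(source_to_display("single_click", pages, current_page=5))    # -> ['page 5']
print(len(source_to_display("alt_click", pages, current_page=5)))  # -> 5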
  • FIG. 34A provides an example of displaying a section of a source for a topic according to various embodiments.
  • a selection mechanism 3400 is displayed on a window that is configured so that the user is provided with multiple ways of selecting the mechanism 3400 . Accordingly, if the user exercises a first type of selection (e.g., click) of the mechanism 3400 , then a separate window 3410 is displayed that provides a section from the source (in this example, a PDF), shown in this example as page five of the source 3415 . However, if the user exercises a second, different type of selection (e.g., alt-click) of the mechanism 3400 , then a separate window is displayed that provides the entire source as shown as all five pages 3420 in FIG. 34B .
  • FIG. 35 is a flow diagram showing a generate annotation module for performing such functionality according to various embodiments of the disclosure.
  • the generate annotation module may be invoked by another module to generate an annotation such as, for example, the topic module previously described.
  • the generate annotation module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 3500 begins with the generate annotation module receiving where (e.g., receiving input identifying where) in the content for the topic the annotation is to be placed in Operation 3510 .
  • an annotation may not necessarily be placed in the content of a topic but may be placed at other locations in the technical documentation of an item such as, for example, in the table of contents.
  • the generate annotation module then provides the annotation in Operation 3515 .
  • the generate annotation module may generate and provide the annotation for display on a separate window from the window displaying the topic.
  • the window may display initial information for the annotation such as, for example, the date and time the annotation was generated.
  • the user may be provided with different types of annotations that may be added to the content such as a personal note, a question, a warning and/or missing information, a problem, and/or the like. Therefore, the initial information may also indicate the type of annotation.
  • the generate annotation module may provide various functionality with respect to the annotation. Therefore, in particular embodiments, the generate annotation module determines whether input has been received indicating the user would like to add an attachment to the annotation in Operation 3520 . For example, the user may wish to attach a text document, image, and/or screenshot of the window (e.g., image of the window) and the user selects a selection mechanism (e.g., a button) provided on the window for the annotation. In response, the generate annotation module provides a capability for the user to identify the file to attach to the annotation. For example, the generate annotation module may cause display of a window that allows the user to navigate to a location where the file is located and attach the file to the annotation.
  • the generate annotation module is configured in various embodiments to enable the attachment of a file in a variety of formats such as JPEG, JFIF, JPEG2000, EXIF, TIFF, RAW, DIV, GIF, BMP, PNG, PPM, MOV, AVI, MP4, MKV, DOCX, HTML5, TXT, PDF, XML, SGML, JSON, and/or the like. Therefore, if the user has indicated he or she would like to attach a file to the annotation, then the generate annotation module attaches the file in Operation 3525 .
  • the generate annotation module may determine whether input has been received indicating the user would like to share the annotation with other users in Operation 3530 .
  • an annotation may normally only be viewable by the user who generated the annotation.
  • the user may want to share his or her annotation with other users and ask for comments.
  • the user may identify an error he or she believes is in the technical documentation for a topic. Therefore, the user may decide to place an annotation in the topic on the error and ask other users whether they agree that the error exists in the documentation. Accordingly, such functionality can allow for crowdsourcing to address issues in the technical documentation and/or to assist a user in using the documentation. Therefore, if the user has indicated he or she would like to share the annotation, then the generate annotation module sets the annotation to be shared in Operation 3535 .
  • the generate annotation module may determine whether input has been received indicating the user may want to submit a change request based at least in part on the annotation in Operation 3540 .
  • a formal procedure may be put in place to allow users of the IETM to submit change requests to have content changed in the technical documentation for an item.
  • a user may be viewing the textual information on a topic and may decide to generate an annotation for a section of the textual information the user does not believe is quite clear and should be further explained in the information. Therefore, the user may wish to submit a change request based at least in part on his or her annotation. If that is the case, then the generate annotation module may provide a change request form to display for the user in Operation 3545 .
  • the generate annotation module may auto-populate some of the fields provided on the form based at least in part on the information found in the annotation in Operation 3550 .
  • the generate annotation module may auto-populate the fields in which the user provides his or her name, a date, an identifier for the topic (e.g., a DMC), and/or any comments for the request that have been provided in the annotation. The user may then fill in any additional information needed on the form and select a mechanism provided on the form to submit the request for change.
  • the generate annotation module determines whether input has been received indicating the user has submitted the change request form in Operation 3555 . If the user has submitted the form, then the generate annotation module submits the change request form in Operation 3560 .
  • the change request form may be sent to personnel who are responsible for maintaining the technical information for the item. Accordingly, such personnel may include those individuals who are responsible for maintaining the IETM and/or the publication of the technical documentation currently uploaded to the IETM for the item and/or those individuals who are responsible for maintaining the source technical documentation used in producing the publication that has been uploaded into the IETM.
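  • A small Python sketch of auto-populating a change request from an annotation follows; the field names, the annotation structure, and the routing step are assumptions made for illustration rather than the form actually used by the IETM:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Annotation:
    author: str
    topic_dmc: str    # identifier for the topic (e.g., a DMC)
    comment: str
    created: datetime = field(default_factory=datetime.now)

def build_change_request(annotation: Annotation) -> dict:
    """Pre-fill a change request form from an annotation; the user completes
    the remaining fields before submitting it to the responsible personnel."""
    return {
        "requester": annotation.author,
        "date": annotation.created.date().isoformat(),
        "topic": annotation.topic_dmc,
        "comments": annotation.comment,
        "proposed_change": "",   # left for the user to fill in
    }

note = Annotation("j.smith", "DMC-EXAMPLE-040A-D", "Step 4 wording is unclear")
print(build_change_request(note))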
  • the generate annotation module may determine whether input has been received indicating the user would like to capture a screenshot (e.g., an image) of the window and the content currently being displayed on the window in Operation 3565 .
  • the user may wish to attach a screenshot of the window to the annotation to provide more explanation for the annotation. Therefore, if the user would like to capture a screenshot of the window, the generate annotation module generates the screenshot in Operation 3570 .
  • the generate annotation module determines whether input has been received indicating the user would like to exit the window displaying the annotation in Operation 3575 .
  • the annotation is automatically generated and recorded in the IETM at the time the user selects the option (e.g., the selection mechanism) on the window for the topic. Therefore, in these particular embodiments, any additional information provided by the user on the annotation is recorded for the annotation when the user exits the window displaying the annotation.
  • the user may be required to take some action such as select a mechanism (e.g., a button) provided on the window displaying the annotation and/or the topic to record the annotation.
  • different selection mechanisms (e.g., buttons) may be provided for recording the annotation depending on the embodiment.
  • the various functionality provided by the generate annotation module described above may also be made available to users once the annotations have been recorded in the IETM. For example, a user may be able to sign into the IETM and view an annotation he or she had previously added to the technical documentation of an item. At this time, in particular embodiments, the functionality such as attaching a file and/or submitting a change request may be made available to the user.
  • FIG. 36A provides an example of an annotation window 3600 displayed according to various embodiments.
  • the user has identified an area 3610 in an illustration displayed for a topic and added a note of “bad region.”
  • the annotation window 3600 provides a first selection mechanism 3615 to allow the user to attach a file 3620 to the annotation such as a screenshot of the window displaying the topic.
  • the annotation window 3600 provides a second selection mechanism 3625 that enables the user to take the screenshot of the window displaying the topic.
  • the annotation window 3600 in the example provides a third selection mechanism 3630 that allows the user to share the annotation with other users.
  • the annotation window 3600 provides a fourth selection mechanism 3635 that facilitates the user submitting a change request based at least in part on the annotation.
  • a change request form 3640 that may be provided in some embodiments is shown in FIG. 36B .
  • FIG. 36C provides an example of a selection mechanism 3645 that may be provided in particular embodiments to enable a user to generate an annotation.
  • the selection mechanism 3645 is a dropdown menu control provided in a toolbar displayed along the top of a window that provides the user with options for generating different types of annotations.
  • the IETM may provide the user with a report 3650 on the change requests that have been submitted by the user as shown in FIG. 36D .
  • the IETM may provide the user with a list of all the annotations 3655 that have been generated by the user as shown in FIG. 36E . In some embodiments, this list 3655 may also display annotations that have been shared by other users.
  • a user may wish to use particular formatting for various types of content. For instance, a user may wish to have certain content enhanced so that the user may be able to view the content better. For example, the user may be working in the field and using a user computing entity 110 that is small in size and therefore has a small display. As a result, content may normally be displayed in a size that is difficult for the user to see. Therefore, the user may wish to have the content that he or she is currently viewing conveyed using an enhancing format so that the content is easier for the user to comprehend.
  • the user may wish to use formatting to identify content that is relevant to the user.
  • the user may wish to have the steps of a procedure and/or task the user is supposed to perform displayed using relevant formatting so that the steps stand out to the user while he or she is viewing the content for the procedure/task via the IETM.
  • This can be beneficial to the user while he or she is working out in the field in that the relevant formatting of content can help draw the user's attention to content he or she may need to view while the user is also engaged in other activities.
  • the user may wish to use formatting to identify content that is irrelevant to the user. Therefore, in various embodiments, functionality may be provided through the IETM to allow the user to set up enhanced formats, relevant formats, and/or irrelevant formats for certain types of content.
  • FIG. 37A is a flow diagram showing a formatting module for performing such functionality according to various embodiments of the disclosure. Accordingly, the formatting module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed. For instance, in various embodiments, the formatting module may be executed in response to a user selecting an option to set up such formatting from a window provided through the IETM.
  • the process flow 3700 begins with the formatting module displaying the various types of content for which the user can set up formatting in Operation 3710 .
  • the various types of content may include procedural, process, wiring, maintenance, learning, parts, checklist, and/or the like.
  • the various types of content may include particular content found within a type of content such as, for example, the steps of a maintenance procedure and/or task, the items in a checklist, diagrams for wiring, illustrations for parts, and/or the like.
  • the various types of content may include the various forms of content such as textual information, media content, and/or the like.
  • the formatting module may be configured in particular embodiments to provide one or more windows, view panes, and/or the like within the IETM to allow the user to identify the particular type of content he or she would like to set up formatting for. For example, the user may identify that he or she would like to set up formatting for the steps of maintenance procedures/tasks.
  • the formatting module determines whether the user would like to set up one or more enhancing formats for the selected type of content in Operation 3715 .
  • the user may be provided an option to identify the type of format he or she would like to set up for the selected type of content. If the formatting module determines the user would like to set up enhancing format(s) for the selected type of content, then the formatting module displays the types of enhancing formats for the user to select from in Operation 3720 .
  • the enhancing formats may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like.
  • the enhancing formats may include magnifying the media content, enhancing a resolution of the media content, and/or the like.
  • the formatting module receives one or more indications of the user's selection(s) in Operation 3725 and records the selection(s) in Operation 3730 .
  • the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the enhancing format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time when the user logs into the IETM.
  • the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a larger font as the enhancing format.
  • therefore, an active step (e.g., current step) of a maintenance procedure/task is displayed to the user in the enlarged font while the user is viewing the maintenance procedure/task through the IETM.
  • the formatting module may also be configured to allow the user to select one or more properties for the enhancing format(s). For example, the user may be able to select a font size, a color for a font, a color for a border, a color for a background, and/or the like. Accordingly, these properties may also be recorded along with the user's selection of enhancing format(s).
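  • One way the recorded selections might be organized is sketched below in Python (the storage layout, user identifiers, and property names are hypothetical); selections are keyed by the user's credentials and the content type so they can be looked up when the user next logs into the IETM:

# Hypothetical in-memory store; a real IETM would persist these settings.
format_settings = {}

def record_format(user_id: str, content_type: str, kind: str,
                  formats: list, properties: dict = None) -> None:
    """Record enhancing/relevant/irrelevant format selections for a user."""
    per_user = format_settings.setdefault(user_id, {})
    per_type = per_user.setdefault(content_type, {})
    per_type[kind] = {"formats": formats, "properties": properties or {}}

def lookup_format(user_id: str, content_type: str, kind: str):
    """Retrieve the recorded selections when the user views this content type."""
    return format_settings.get(user_id, {}).get(content_type, {}).get(kind)

record_format("user-17", "maintenance_steps", "enhancing",
              ["enlarge_font", "add_border"], {"font_size": 20, "border_color": "red"})
print(lookup_format("user-17", "maintenance_steps", "enhancing"))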
  • the formatting module determines whether the user wants to set up one or more relevant formats for the selected type of content in Operation 3735 . If the formatting module determines the user would like to set up relevant format(s) for the selected type of content, then the formatting module displays the types of relevant formats for the user to select from in Operation 3740 .
  • the relevant formats may include, for example, enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like for textual information.
  • the relevant formats may include magnifying the media content, enhancing a resolution of the media content, and/or the like for media content.
  • the user may also define one or more properties for the relevant format(s).
  • the formatting module receives one or more indications of the user's selection(s) in Operation 3741 and records the selection(s) in Operation 3742 .
  • the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the relevant format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time the user logs into the IETM. Accordingly, in various embodiments, the relevant format(s) are used for the particular type of content only in instances in which the content is found to be relevant to the user.
  • the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a larger font as the relevant format.
  • therefore, an active step (e.g., current step) of a maintenance procedure/task is only displayed to the user in the enlarged font if the step is determined to be relevant to the user who is viewing the maintenance procedure/task through the IETM.
  • the formatting module determines whether the user wants to set up one or more irrelevant formats for the selected type of content in Operation 3736 . If the formatting module determines the user would like to set up irrelevant format(s) for the selected type of content, then the formatting module displays the types of irrelevant formats for the user to select from in Operation 3750 .
  • irrelevant formats may be used in deemphasizing content that is not relevant to the user.
  • the irrelevant formats may include, for example, reducing a font size of the text, changing a font color of the text, changing a font case of the text (e.g., to lowercase), adding a border around the text, adding a background to the text, and/or the like for textual information.
  • the irrelevant formats may include reducing the size of the media content, decreasing a resolution of the media content, and/or the like for media content.
  • the user may also define one or more properties for the irrelevant format(s).
  • the formatting module receives one or more indications of the user's selection(s) in Operation 3751 and records the selection(s) in Operation 3752 .
  • the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the irrelevant format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time the user logs into the IETM. Accordingly, in various embodiments, the irrelevant format(s) are used for the particular type of content only in instances in which the content is found to be irrelevant (e.g., not relevant) to the user.
  • the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a smaller/reduced font as the irrelevant format.
  • therefore, an active step (e.g., current step) of a maintenance procedure/task is displayed to the user in the smaller/reduced font if the step is determined to be irrelevant to the user who is viewing the maintenance procedure/task through the IETM.
  • the formatting module determines whether to exit in Operation 3755 . If not, then the formatting module returns to Operation 3710 and displays the content types again so that the user may set up another enhancing, relevant, and/or irrelevant format. However, if the formatting module determines to exit (e.g., the user selects an exit button), then the formatting module does so and the process flow 3700 ends.
  • the formatting module may be configured in particular embodiments to allow a user to set up various types of content so that the type of content is only conveyed to the user if the content is relevant to the user.
  • warnings and/or cautions may be provided for different steps performed in a sequence.
  • warnings and/or cautions may be provided as a popup window when an associated step for a sequence has focus (e.g., when the user selects the associated step).
  • the user may be interested in having such warnings and/or cautions provided only if the associated step is relevant to the user.
  • the formatting module may be configured to allow the user to indicate to only have warnings and/or cautions displayed to the user when the warnings and/or cautions (e.g., only when the associated steps) are relevant to the user.
  • Such functionality may allow the user to reduce the amount of content that is provided through the IETM so that the user is not inundated with unnecessary content.
  • FIG. 37B is a flow diagram showing a sequence module for performing such functionality according to various embodiments of the disclosure.
  • the sequence module may be invoked by another module to assess the steps performed in a sequence such as, for example, the topic module previously described.
  • the sequence module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • additional functionality may be provided in various embodiments for content involving sequential information.
  • One such functionality involves displaying steps of a sequence using one or more enhancing formats. Such formats may enable a user who is viewing the steps in the IETM to be able to better comprehend (e.g., read) the steps.
  • Another such functionality involves displaying steps of a sequence using one or more relevant formats. Such formats may enable the displaying of content to demonstrate the content is relevant to the user.
  • functionality may be provided for displaying steps of a sequence using one or more irrelevant formats to demonstrate the content is irrelevant to the user.
  • Another such functionality involves highlighting any steps skipped in a sequence such as a checklist upon the user acknowledging performing a step in the sequence.
  • the steps found in sequential information are designed to be performed in the sequential order in which they are listed. Therefore, in particular embodiments, any steps that are skipped over in the sequence and not acknowledged are highlighted to bring them to the user's attention.
  • the process flow 3760 begins with the sequence module determining whether the action taken by the user with respect to the step results in the step having focus, and if so, whether the step should be conveyed using one or more enhancing formats in Operation 3765 .
  • focus on a step identifies the step as the portion of content that is the center of interest and/or activity with respect to the content currently being provided through the IETM.
  • the user may have performed an action such as selecting a particular step of the sequence using an input mechanism associated with a user computing entity 110 such as a mouse input, tab key, touchscreen capability, and/or the like.
  • the user may have performed an action that places focus on the particular step in the sequence such as acknowledging completion of a previous step in the sequence, therefore identifying the particular step as the next step to perform for the sequence.
  • the sequence module may determine whether the step should be conveyed using one or more enhancing formats based at least in part on various criteria. For instance, in particular embodiments, the sequence module may make such a determination based at least in part on settings that have been identified by the user. For example, as previously discussed, the user may identify the one or more enhancing formats to use for the steps (for the particular type of content) and the enhancing format(s) may be recorded as personal settings for the user.
  • in other embodiments, the one or more enhancing formats may be configured within the IETM and/or sequence module. For example, the one or more enhancing formats may be identified within the IETM configuration for certain roles.
  • the user may log into the IETM and identify himself or herself as maintenance personnel.
  • the one or more enhancing formats may be identified to be used for users who are serving in the maintenance personnel role and viewing documentation through the IETM.
  • the one or more enhancing formats may be identified as a global setting to be used for every user who is viewing documentation through the IETM.
  • the one or more enhancing formats may be identified by the user upon logging into the IETM and may only be used as a one-time setting for the current use of the IETM.
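  • The discussion above suggests several possible sources for the enhancing format(s); the Python sketch below resolves them with a simple precedence order (the order itself, one-time over personal over role over global, is an assumption made for illustration):

def resolve_enhancing_formats(one_time, personal, role_defaults, global_defaults) -> list:
    """Return the first non-empty set of formats in precedence order."""
    for candidate in (one_time, personal, role_defaults, global_defaults):
        if candidate:
            return list(candidate)
    return []

formats = resolve_enhancing_formats(
    one_time=None,
    personal=None,
    role_defaults=["enlarge_font"],   # e.g., configured for maintenance personnel
    global_defaults=["add_border"],
)
print(formats)   # -> ['enlarge_font']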
  • if the sequence module determines the step should be conveyed using one or more enhancing formats, then the sequence module causes the step to be conveyed using the one or more enhancing formats in Operation 3770 .
  • enhancing formats that may be used for textual information may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like.
  • examples of enhancing formats that may be used for media content may include magnifying the media content, enhancing a resolution of the media content, and/or the like.
  • the step may be conveyed via the IETM in a manner that may enable the user to better comprehend the step.
  • the sequence module may be configured in particular embodiments to cause the removal of any enhancing formats used to display content that has lost focus. For instance, if a previous step that had focus prior to the current step had been displayed using one or more enhancing formats, then the sequence module may cause removal of these enhancing formats upon the previous step losing focus.
  • the sequence module determines whether the step having focus should be conveyed using one or more relevant formats in Operation 3775 .
  • the sequence module may be configured to make such a determination in a similar fashion as to determining whether the step should be conveyed using one or more enhancing formats. If the sequence module determines the step having focus should be conveyed using one or more relevant formats, then the sequence module determines whether the step having focus is relevant to the user in Operation 3780 .
  • the sequence module may be configured to make such a determination based at least in part on different criteria.
  • the sequence module may be configured to determine whether a portion of content is relevant to the user based at least in part on a role the user is serving in and/or based at least in part on the user himself or herself.
  • the sequence module may be configured to use credentials entered by the user to log into the IETM in identifying the user and/or identifying a role the user is currently serving in to make a determination as to whether the step currently having focus is relevant to the user.
  • the user may assign himself or herself a particular position and/or role to serve in while logged into the IETM and the sequence module may use this particular position and/or role in determining whether the step having focus is relevant to the user.
  • the user may be logged into the IETM to view a maintenance procedure and/or task for a particular component of an item.
  • the user may be tasked with performing maintenance detailed in the procedure on the component with two other users who are also logged in and using the IETM to view the maintenance procedure/task.
  • each of the three users is to perform specific steps within the maintenance procedure/task. Therefore, only certain steps of the maintenance procedure/task are relevant to the particular user.
  • each of the users may have identified (selected) a certain position and/or role he or she is to serve in while performing the maintenance procedure/task and this identified position and/or role may be associated with certain steps of the maintenance procedure/task.
  • the sequence module may be configured to identify whether the step of the maintenance procedure/task having focus is relevant to the user based at least in part on the position and/or role assigned to the user for the maintenance procedure/task.
  • if the sequence module determines the step having focus is relevant to the user, then the sequence module causes the step to be conveyed using the one or more relevant formats in Operation 3785 .
  • the one or more relevant formats may involve different types of formats.
  • relevant formats that may be used for textual information may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like.
  • relevant formats that may be used for media content may include magnifying the media content, enhancing a resolution of the media content, and/or the like.
  • the step may be conveyed via the IETM in a manner that may enable the user to recognize when a step in the sequence is relevant to the user.
  • the sequence module may be configured in particular embodiments to also cause the removal of any relevant formats used to display content that has lost focus.
  • if the sequence module determines the step having focus is not relevant (irrelevant) to the user, then the sequence module in particular embodiments causes the step to be conveyed using one or more irrelevant formats in Operation 3786 .
  • the one or more irrelevant formats may involve different types of formats.
  • irrelevant formats that may be used for textual information may include reducing a font size of the text, changing a font color of the text, changing a font case of the text (e.g., changing the font case to lowercase), adding a border around the text, adding a background to the text, suppressing an audio reading of the text, and/or the like.
  • irrelevant formats may include reducing the size of the media content, reducing a resolution of the media content, and/or the like.
  • the step may be conveyed via the IETM in a manner that may enable the user to recognize when a step in the sequence is irrelevant to the user.
  • the sequence module may be configured in particular embodiments to also cause the removal of any irrelevant formats used to display content that has lost focus.
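  • A compact Python sketch of this role-based choice between relevant and irrelevant formats follows; the step structure, role names, and format names are assumptions rather than anything prescribed by the disclosure:

from dataclasses import dataclass

@dataclass
class Step:
    number: int
    text: str
    assigned_role: str   # role expected to perform the step

def formats_for_focused_step(step: Step, user_role: str) -> tuple:
    """Relevant format(s) when the step's assigned role matches the user's
    role, irrelevant format(s) otherwise."""
    relevant = ("enlarge_font", "color_border")
    irrelevant = ("reduce_font",)
    return relevant if step.assigned_role == user_role else irrelevant

step = Step(3, "Torque the retaining bolts", assigned_role="mechanic")
print(formats_for_focused_step(step, user_role="mechanic"))    # relevant formats
print(formats_for_focused_step(step, user_role="inspector"))   # irrelevant formats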
  • the sequence module may be configured in particular embodiments to convey a portion of content (e.g., a step of a sequence) only if the portion of content is relevant to the user.
  • the sequence module may be configured to only convey portions of content that are relevant to the user with respect to certain types of content or with respect to all content.
  • the sequence module may be configured to convey all the steps found in the maintenance procedure/task, with those steps of the procedure/task that are relevant to the user being conveyed using one or more relevant formats, but only convey warnings and/or cautions provided along with the steps that are relevant to the user.
  • Such a configuration may allow selective content to be conveyed only when such content is relevant to the user so as to minimize the amount of content the user may be required to comprehend. For instance, the user may be interested in seeing all the steps of the maintenance procedure/task, with those steps of the procedure/task that are relevant to the user being conveyed using the one or more relevant formats, so that the user is able to keep track of where in the procedure/task the maintenance personnel are. However, the user may not be interested in seeing warnings and/or cautions associated with the steps of the procedure/task that are not relevant to the user.
  • the sequence module determines whether the step acknowledged by the user is the next step in the sequence to be performed by the user in Operation 3790 .
  • the user may be provided a field (e.g., a checkbox) for each step in the sequential information that the user is able to check as he or she completes the step in the sequence. Therefore, in these embodiments, the sequence module receives input on the fields and determines which of the fields have been checked by the user.
  • if the sequence module determines the step acknowledged by the user is not the next sequential step to be performed, then the sequence module causes the steps in the sequence that have been skipped by the user to be displayed as highlighted in the sequential information displayed on the window in Operation 3795 .
  • various formats may be used in displaying the skipped steps as highlighted.
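  • The skipped-step check can be sketched in a few lines of Python (the acknowledged-step bookkeeping is an assumption); every earlier step that has not been acknowledged when the user checks off a later step is returned for highlighting:

def skipped_steps(acknowledged: set, just_acknowledged: int) -> list:
    """Return the earlier step numbers the user has not acknowledged."""
    return [n for n in range(1, just_acknowledged) if n not in acknowledged]

# The user acknowledged steps 1 and 2, then jumped ahead to step 5,
# so steps 3 and 4 are highlighted as skipped.
print(skipped_steps({1, 2, 5}, just_acknowledged=5))   # -> [3, 4]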
  • an example of a window displaying sequence information in which a step 3800 is being displayed using one or more enhancing formats according to various embodiments is shown in FIG. 38A .
  • the one or more enhancing formats involve displaying the text of the step 3800 in a larger font than the other steps of the sequence (e.g., magnified) and with a border in a particular color.
  • an example of a window displaying sequence information in which a step 3810 is being displayed using one or more relevant formats according to various embodiments is shown in FIG. 38B .
  • the step 3810 is determined to be relevant to the user based at least in part on a role 3815 the user is serving in matching the role 3815 identified/assigned to the particular step 3810 .
  • the one or more relevant formats used for displaying the step 3810 involve displaying the text of the step 3810 in a larger font than the other steps of the sequence (e.g., magnified) and with a border in a particular color.
  • the window displaying a subsequent step 3820 is shown in FIG. 38C .
  • this particular step 3820 is not being displayed using the one or more relevant formats because the step 3820 is identified/assigned to a role 3825 that is different than the role 3815 the user is serving in.
  • a portion of content may only be conveyed to the user if the portion of content is determined to be relevant to the user.
  • a warning/caution 3830 is being displayed to the user as a result of the warning/caution 3830 being determined to be relevant to the user.
  • the warning/caution 3830 is identified/assigned to a role 3835 that is the same as the user's role.
  • an example of a window displaying sequence information in which skipped steps 3840 are highlighted according to various embodiments is shown in FIG. 38E .
  • the user has acknowledged a step 3845 that is not the next step to perform in the sequence based at least in part on the steps already acknowledged by the user. Therefore, as a result, the prior steps 3840 that have not been acknowledged by the user are highlighted to bring them to the user's attention.
  • portions of content involving other types of content such as, for example, content on wiring, learning, parts, and/or the like, may be conveyed using one or more enhancing formats, one or more relevant formats, and/or one or more irrelevant formats.
  • such functionality may be configured to convey a portion of content using one or more enhancing, relevant, and/or irrelevant formats upon the portion of content acquiring focus.
  • a user may be viewing textual information on a component that includes several parts that are described in the textual information.
  • media content that includes an illustration of the component may also be displayed along with the textual information in the IETM.
  • functionality may be performed that recognizes the focus on the particular part in the text, and displays the part in the illustration using one or more enhancing, relevant, and/or irrelevant formats in a similar fashion as described herein with respect to the sequence module.
  • additional modules and/or one or more of the modules described herein may be configured with similar functionality as the sequence module to facilitate conveying other types of content using enhancing, relevant, and/or irrelevant formats, as well as highlighting other types of content that may have been skipped and/or missed.
  • FIG. 39 is a flow diagram showing an unlock content module for performing such functionality according to various embodiments of the disclosure.
  • the unlock content module may be invoked by another module to unlock content such as, for example, the topic module previously described.
  • the unlock content module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • a portion of the content provided for a topic may be locked to require a user to acknowledge an alert associated with the portion of the content.
  • the content may provide a warning and/or caution for the user. Accordingly, the user may acknowledge the alert.
  • some type of mechanism such as a button may be provided that the user selects to acknowledge the alert and as a result, the unlock content module is invoked.
  • the process flow 3900 begins with the unlock content module identifying the alert that has been acknowledged in Operation 3910 .
  • the unlock content module may receive and/or read a tag associated with the alert that is provided in the textual information for a topic. Accordingly, the tag identifies the alert and its location with respect to the other content found in the textual information.
  • the unlock content module identifies the next alert in the content in Operation 3915 .
  • the unlock content module may identify the next tag found in the textual information for an alert.
  • the alert acknowledged by the user may be the last alert provided in the content. If that is the case, then the unlock content module may identify the end of the content.
  • the unlock content module unlocks the portion of the content between the two alerts in Operation 3920 .
  • the content may be locked using a number of different approaches and/or any combination thereof.
  • the user's ability to view the portion of the content may be obscured.
  • the portion of the content may be greyed out so that it cannot be read.
  • any interactive functionality found within the portion of the content may be disabled.
  • the portion of the content may contain an occurrence of a selectable part.
  • the selectable functionality of the part may be disabled.
  • the user's ability to scroll through the portion of the content may be disabled.
  • the unlock content module performs the necessary operations to unlock the content.
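  • By way of a hedged sketch (the Section structure and field names are assumptions, not the disclosed data model), the unlocking between two alerts described in Operations 3910-3920 might look like this:

```python
# Hypothetical sketch: unlock the content between an acknowledged alert and the
# next alert tag (or the end of the content if no further alert exists).

from dataclasses import dataclass

@dataclass
class Section:
    text: str
    is_alert: bool = False
    locked: bool = True          # e.g., greyed out with interaction disabled

def unlock_after_alert(sections, acknowledged_index):
    for i in range(acknowledged_index + 1, len(sections)):
        if sections[i].is_alert:
            break                # the next alert stays locked until acknowledged
        sections[i].locked = False
    return sections

if __name__ == "__main__":
    content = [Section("WARNING: wear gloves", is_alert=True),
               Section("Step 1"), Section("Step 2"),
               Section("CAUTION: high voltage", is_alert=True),
               Section("Step 3")]
    unlock_after_alert(content, 0)
    print([(s.text, s.locked) for s in content])
```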
  • FIG. 40A provides an example of a portion of content 4000 that has been locked according to various embodiments. Specifically, the portion of the content 4000 has been greyed out to obscure the user's ability to view the portion of the content 4000 .
  • An alert is displayed that provides an acknowledgment mechanism (e.g., a button) 4010 that can be selected by the user to acknowledge the alert and unlock the portion of the content 4000 .
  • the portion of the content 4015 is unlocked as shown in FIG. 40B.
  • the portion of the content 4015 is unlocked up to the next alert found in the content.
  • the user can select the acknowledgment mechanism 4020 for the next alert to unlock additional content.
  • FIG. 41 is a flow diagram showing a transfer job module for performing such functionality according to various embodiments of the disclosure.
  • the transfer job module may be invoked by another module to transfer a job such as, for example, the topic module previously described.
  • the transfer job module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • a user may wish to transfer a job (e.g., a particular instance of a process, procedure, task, checklist, and/or the like) he or she is currently performing to another user.
  • the user's work shift may be ending and therefore, he or she may wish to transfer the current job he or she is performing to another user who is working the following shift. Therefore, in these embodiments, the user may select an option (e.g., a button) to transfer a job and as a result, the transfer job module is invoked.
  • the process flow 4100 begins with the transfer job module causing display of an indication (e.g., a divider) at a point in the content being displayed on a window where the user is suspending performing the job in Operation 4110 .
  • the transfer job module causes the indicator to be displayed between the two steps of the procedure/task where the user is stopping.
  • the indication may be displayed in a number of different formats such as a line, arrow, bullet point, and/or the like.
  • the transfer job module then generates a job transfer window based at least in part on the job in Operation 4115 and provides the window for display in Operation 4120 .
  • the transfer job window may be superimposed over a portion of the window displaying the procedure/task.
  • the job transfer window may provide information such as the title of the procedure/task being performed for the job (e.g., the DMC for the related data module), the user's name, a date and time the job is suspended, a job control number, comments provided by the user, and/or the like.
  • the transfer job module then records the job transfer in the IETM in Operation 4125 . This operation in particular embodiments involves the transfer job module recording a marker identifying where the job was suspended. Accordingly, this marker can then be used at a later time in identifying where the job needs to be resumed.
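  • A minimal sketch of recording such a marker is shown below; the field names mirror the job transfer window described above but are assumptions, and the in-memory dictionary stands in for whatever persistence the IETM actually uses.

```python
# Hypothetical sketch of Operation 4125: record a job-transfer marker that can
# later be used to identify where the job needs to be resumed.

import json
from datetime import datetime, timezone

def record_job_transfer(store, job_control_number, dmc, user, stop_step, comments=""):
    marker = {
        "job_control_number": job_control_number,
        "dmc": dmc,                        # data module code of the procedure/task
        "suspended_by": user,
        "suspended_at": datetime.now(timezone.utc).isoformat(),
        "stop_step": stop_step,            # where the indication (divider) is shown
        "comments": comments,
    }
    store[job_control_number] = marker     # stand-in for persisting in the IETM
    return marker

if __name__ == "__main__":
    ietm_store = {}
    print(json.dumps(record_job_transfer(
        ietm_store, "JCN-0042", "DMC-EXAMPLE-A-00-00-00", "j.smith",
        stop_step=7, comments="handing off at shift change"), indent=2))
```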
  • the job transfer may now be posted in the IETM so that another user may resume the job.
  • the job transfer may be viewed by every user who signs into the IETM for the item and/or specific object for the item or the job transfer may only be viewed by those users who can resume the job. That is to say, in particular embodiments, the job transfers available to a user to view and/or resume may be dependent on the credentials used by the user in signing into the IETM.
  • FIG. 42 is a flow diagram showing a resume job module for performing such functionality according to various embodiments of the disclosure.
  • the resume job module may be invoked as a result of a user signing into the IETM and selecting an option to view the jobs that have been suspended.
  • the process flow 4200 begins with the resume job module receiving input indicating a selection from a user to view the jobs that have been suspended in Operation 4210 .
  • the user may be provided with a mechanism such as a button on a toolbar to view the jobs that have been suspended.
  • the resume job module may provide the suspended jobs to display on a window to the user in Operation 4215 .
  • the window may be configured to allow the user to select a particular job from the suspended jobs.
  • the resume job module determines whether input has been received indicating the user has selected a job displayed on the window to resume in Operation 4220 . If so, then the resume job module retrieves the stop position for the job in Operation 4225 . As previously noted, a marker may be recorded when the job was transferred that identifies the position where the job was suspended. Once the marker has been retrieved, the resume job module provides the procedure/tasks associated with the suspended job for display on a window to the user along with an indication (e.g., a divider) based at least in part on the marker in Operation 4230 . In addition, the resume job module provides a resume job window for display in Operation 4235 . Here, the resume job window may be superimposed over a portion of the window displaying the procedure/task and may provide a mechanism (e.g., a button) that the user can select to resume the job.
  • the resume job module determines whether input has been received indicating the user will resume the job in Operation 4240. If the user has decided to resume the job, then the resume job module causes the resume job window to close and causes the indication to be removed in Operation 4245. Accordingly, the job that has been resumed may be removed from the suspended jobs. Otherwise, the resume job module determines whether input has been received indicating the user would like to exit viewing the suspended jobs in Operation 4250. If the user does want to exit, then the resume job module causes the display of the suspended jobs to be closed and exits.
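  • The resume path can be sketched in the same hypothetical terms: look up the recorded marker, redisplay the procedure/task with the indication at the stop position, and remove the job from the suspended list once it is resumed.

```python
# Hypothetical sketch of Operations 4220-4245 using the marker recorded at
# job-transfer time; structure and field names are illustrative only.

def list_suspended_jobs(store):
    return list(store.values())

def resume_job(store, job_control_number):
    marker = store.pop(job_control_number)        # removed from the suspended jobs
    return {
        "open_dmc": marker["dmc"],                # procedure/task to display
        "show_indication_at_step": marker["stop_step"],
    }

if __name__ == "__main__":
    store = {"JCN-0042": {"dmc": "DMC-EXAMPLE-A-00-00-00", "stop_step": 7}}
    print(list_suspended_jobs(store))
    print(resume_job(store, "JCN-0042"))
    print(list_suspended_jobs(store))             # now empty
```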
  • FIG. 43A provides an example of a mechanism 4300 that is provided in particular embodiments to enable a user to transfer or resume a job.
  • the mechanism 4300 is a dropdown menu control provided in a toolbar displayed along the top of a window.
  • the dropdown menu provides the user with the option to create a job transfer 4310 and the option to open the jobs that have been transferred (suspended) 4315 .
  • FIG. 43B provides an example of a job transfer window 4320 according to various embodiments. As noted above, such a window 4320 may be provided in particular embodiments when a user selects an option to transfer a job the user is currently performing.
  • FIG. 43C provides an example of a procedure/task that has been suspended that a user has identified to resume.
  • an indication 4325 is shown in the display of the procedure/task at a position where the procedure/task was suspended.
  • a resume job window is provided along with a mechanism (e.g., a button) 4330 to allow the user to resume the job.
  • FIG. 43D displays the procedure/task for the job with the indication 4335 removed. At this point, the user can resume the job and finish the remaining steps for the procedure/task.
  • FIG. 44 is a flow diagram showing an update media module for performing such functionality according to various embodiments of the disclosure.
  • the update media module may be invoked by another module to update the media content displayed such as, for example, the topic module previously described.
  • the update media module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • a user may be viewing the steps for a maintenance task displayed on a first view pane on a window.
  • illustrations for the maintenance may be provided on a second view pane.
  • a step in the maintenance task may involve a particular component and an illustration of the component may be provided to aid the user in locating the component on the actual item.
  • the window may be configured to display the two panes on non-overlapping portions of the window.
  • the process flow 4400 begins with the update media module identifying the first occurrence of media content mentioned in the textual information displayed on the window in Operation 4410 .
  • the first occurrence is determined from the top of the window. Therefore, the update media module searches the textual information starting at the top of the window until the module finds a reference to media content in the text.
  • the first reference to media content may be a reference to a figure, a video, an image, a sound recording, and/or the like.
  • the update media module then retrieves the media content associated with the reference in Operation 4415 .
  • the reference to the media may include a hyperlink that the user may select to retrieve the media content if desired. Therefore, the update media module may obtain the storage location of the media content in the IETM from the hyperlink and retrieve the media content from the storage location.
  • the update media module may obtain the storage location from the data (e.g., data module) for the textual information being viewed.
  • the update media module may use other processes for retrieving the media content as those of ordinary skill in the art can envision in light of this disclosure.
  • the update media module updates the view pane used for displaying media by causing the retrieved media content to be displayed in the view pane in Operation 4420 .
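  • A minimal sketch of this scan-and-update behaviour follows; the "FIG. n, Sheet m" reference syntax and the media_store lookup are assumptions made for illustration.

```python
# Hypothetical sketch of Operations 4410-4420: find the first media reference in
# the visible textual information and return the media content for the view pane.

import re

MEDIA_REF = re.compile(r"FIG\.\s*\d+\s*,\s*Sheet\s*\d+", re.IGNORECASE)

def first_media_reference(visible_lines):
    for line in visible_lines:                    # from the top of the window downward
        match = MEDIA_REF.search(line)
        if match:
            return match.group(0)
    return None

def update_media_pane(visible_lines, media_store):
    ref = first_media_reference(visible_lines)
    return media_store.get(ref) if ref else None  # content to display in the media pane

if __name__ == "__main__":
    lines = ["Remove the access panel.", "See FIG. 2, Sheet 2 for the latch."]
    store = {"FIG. 2, Sheet 2": "<illustration of the latch>"}
    print(update_media_pane(lines, store))
```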
  • FIG. 45 provides an example of media content being updated as a user scrolls through the textual information for a topic according to various embodiments.
  • the first occurrence of media content mentioned in the textual information shown in the view pane displayed on the left side of a window is FIG. 2, Sheet 2 4500.
  • the corresponding illustration for FIG. 2, Sheet 2 4510 is shown in the view pane displayed on the right side of the window.
  • as the user scrolls through the textual information, the media content displayed in the view pane on the right is updated to reflect the media content that is now the first to be referenced in the textual information.
  • multiple view panes may be used to display the media content so that multiple occurrences of media content mentioned in the textual information may be shown on a window at the same time.
  • FIG. 46A is a flow diagram showing a connector module for performing such functionality according to various embodiments of the disclosure.
  • the connector module may be invoked by another module to provide the functionality such as, for example, the topic module previously described.
  • the connector module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the maintenance procedure/task may involve troubleshooting an electrical problem that is being experienced with respect to an item that the user is viewing documentation for via the IETM.
  • the maintenance procedure/task may entail the user testing various pins found in an electrical connector (e.g., a plug) to ensure the pins are working properly.
  • the user may have a piece of testing equipment configured to be connected to the pair of pins so that the pins can be tested.
  • physically identifying the pair of pins in the connector may be difficult due to the size of the connector and/or the number of pins found in the connector. Therefore, the user may become quite frustrated with attempting to physically identify the pair of pins so that he or she may connect the testing equipment to the correct pins as indicated in the maintenance procedure/task.
  • the connector may be referenced in the content (e.g., textual information) of the maintenance procedure/task by some type of identifier such as, for example, the name of the connector, the part number associated with the connector, and/or the like.
  • the connector may be configured as selectable from the content of the maintenance procedure/task.
  • the textual information for the maintenance procedure/task may be provided in a first view pane on a window for the IETM and an identifier may be provided in the textual information that is selectable as a hyperlink.
  • some type of selection mechanism such as a button may be provided for the connector. Therefore, the user may select the connector from the content and as a result, the connector module is invoked.
  • the process flow 4600 begins with the connector module retrieving media content for the connector and displaying the media content in Operations 4610 and 4615 .
  • the media content may include one or more illustrations of the connector such as one or more 2D or 3D graphics.
  • the media content may display the pin configuration (a plurality of pins) for the connector.
  • the maintenance procedure/task may be provided in a first view pane displayed on the window and the media content for the connector may be provided in a second view pane displayed on the window.
  • the window may be configured to display the first and second view panes on non-overlapping portions of the window.
  • the connector module in various embodiments generates and displays a preview for the connector in Operations 4620 and 4625 .
  • the connector preview may be provided as a separate window than the window displaying the maintenance procedure/task and media content.
  • the preview window may be superimposed over a portion of the window displaying the maintenance procedure/task and media content.
  • the connector preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the connector.
  • the preview is configured to provide a list of the pins that are found in the connector. For example, the preview may provide the list of pins as a dropdown menu.
  • each of the pins may be configured as selectable by the user.
  • a selection mechanism such as a checkbox may be provided that can be selected by the user to select the associated pin.
  • the pins may be configured as selectable in the media content.
  • the connector module determines whether input has been received indicating the user has selected a pin from the preview (and/or media content) using a first selection mechanism in Operation 4630 . For example, the connector module determines whether input has been received indicating the user has selected the checkbox for the pin. If the user has selected the pin using the first selection mechanism, then the connector module determines whether the pin is already highlighted in Operation 4635 . If that is the case, then the user may be attempting to unselect the pin in the preview and/or media content. Therefore, if the pin is already highlighted, the connector module removes the highlighting for the pin in Operation 4640 . This operation may involve the connector module removing highlighting of the pin in the media content and/or in the preview window.
  • the pin may be displayed on the media content in a particular color (e.g., blue) to highlight the pin from the other pins for the connector, which may be displayed in a different color (e.g., gray). Therefore, the connector module may remove the highlighting by causing the pin to return to being displayed in the same color (e.g., gray) as the other pins, as well as unchecking the checkbox associated with the pin in the preview.
  • the connector module may be configured to allow the user to select a single pair of pins at any given time.
  • the testing equipment may be designed for testing a pair of pins. Therefore, the connector module may be configured to format the display of the remaining pins that have not been selected using some type of deemphasized format in some embodiments. If this is the case, then the connector module may remove the deemphasized format of the remaining pins and display the pins as normal in Operation 4645 in response to the user deselecting one of the pins. This may allow the user to then select a different pin for the pair of pins that is to be tested.
  • the connector module causes the selected pin to be displayed as highlighted in the media content in a first format in Operation 4650 .
  • the connector module may highlight the pin in the media content by formatting the pin in bold, in a particular color, with a border, in a different font, any combination thereof, and/or the like. Such formatting may allow the pin to stand out from the other pins displayed in the media content for the connector. Accordingly, as a result of displaying the pin as highlighted in the media content in the first format, this may enable the user to identify the pin in the actual connector while in the field.
  • the connector module may also provide some type of highlighting format to the information on the pin provided in the preview.
  • the connector module determines whether the user has selected a pair of pins in Operation 4655 . If so, then the connector module displays the remaining pins for the connector that have not been selected in a deemphasized format in Operation 4660 .
  • the deemphasized format may entail displaying the remaining pins in the media content and/or preview in a particular color (e.g., dark grey), with a particular background, in a different font, and/or the like.
  • the connector module may be configured to display the remaining pins in a deemphasized format that demonstrates the pins are not currently selected by the user.
  • the deemphasized format may be configured to prevent the user from selecting another pin to highlight once the user has selected a pair of pins.
  • the connector module can be configured in other embodiments to prevent the user from selecting another pin to highlight based at least in part on a different number of pins besides two (a pair).
  • the testing equipment being used by the user may allow for the testing of three pins, or four pins, at any given time. Therefore, the connector module may be configured to prevent the user from selecting more than three pins or four pins to display as highlighted in the media content and/or preview.
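  • The pin-selection behaviour described in Operations 4630-4660 can be sketched as a small state machine; the class, format labels, and two-pin default are illustrative assumptions.

```python
# Hypothetical sketch: toggle a pin's persistent highlight and deemphasize the
# remaining pins once the configured number of pins (a pair by default) is selected.

class ConnectorPreview:
    def __init__(self, pins, max_selected=2):
        self.pins = pins
        self.max_selected = max_selected
        self.selected = set()

    def toggle_pin(self, pin):
        """First selection mechanism (e.g., checking the pin's checkbox)."""
        if pin in self.selected:
            self.selected.remove(pin)             # un-highlight the pin
        elif len(self.selected) < self.max_selected:
            self.selected.add(pin)                # highlight in the first format
        return self.render()

    def render(self):
        full = len(self.selected) >= self.max_selected
        return {
            p: ("highlighted" if p in self.selected
                else "deemphasized" if full else "normal")
            for p in self.pins
        }

if __name__ == "__main__":
    preview = ConnectorPreview(["A", "B", "C", "D"])
    preview.toggle_pin("A")
    print(preview.toggle_pin("C"))   # A and C highlighted; B and D deemphasized
```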
  • the connector module may determine whether input has been received indicating the user has instead selected the pin using a second, different selection mechanism (e.g., using his or her mouse to hover over the pin in the preview and/or on the media content) in Operation 4665. If the user has selected the pin using the second selection mechanism, then the connector module causes the selected pin to be displayed as highlighted in the media content in a second format in Operation 4670. In addition, in particular embodiments, the connector module may highlight the pin in the preview. For example, the second format may involve displaying the pin in a second color (e.g., green) in the media content that is different from the color (e.g., blue) that would be used had the user selected the pin using the first selection mechanism.
  • the connector module determines whether the user wishes to exit out of the preview of the connector in Operation 4675 . If so, then the process flow 4600 ends. If not, then the connector module continues to monitor the user's selection of pins.
  • the purpose of the second selection mechanism (e.g., hovering over the pin in the preview and/or the media content using a cursor) is to provide the user with a quick way of identifying the pin in the connector.
  • Such functionality may allow the user to move freely from pin to pin in the preview and/or media content and identify the pin pair he or she is specifically looking for by viewing what corresponding pin is highlighted in the preview and/or media content.
  • the purpose of the first selection mechanism (e.g., selecting the corresponding checkbox for the pin in the preview and/or clicking on the pin in the media content) is to provide the user with a way to select a pin that stays selected. This can allow the user to select a pair of pins while working in the field that are then displayed as highlighted and can be referenced by the user while locating the actual pins in the physical connector.
  • FIG. 46B provides an example of a window displaying a first view pane 4680 on the left side of the window providing the textual information for a maintenance procedure/task and a second view pane 4681 on the right side of the window providing media content (e.g., an illustration) of the connector and pins according to various embodiments.
  • the user has selected an identifier 4682 for the connector found in the textual information for the maintenance procedure/task.
  • a preview window 4683 is displayed for the connector in which a dropdown has been provided to allow the user to select a pair of pins 4684 , 4685 .
  • FIG. 46C provides an example in which the user has selected one of the pins 4686 using a second selection mechanism (e.g., hovering over the pin 4684 with his or her cursor in the preview window 4683 ).
  • the pair of pins 4686 , 4687 are highlighted in the media content using two different formats. Specifically, the pair of pins 4686 , 4687 are displayed with the first pin 4686 highlighted in a first color and the second pin 4687 highlighted in a second, different color.
  • FIG. 47A is a flow diagram showing a highlight unit module for performing such functionality according to various embodiments of the disclosure.
  • the highlight unit module may be invoked by another module to highlight a unit such as, for example, the topic module previously described.
  • the highlight unit module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • a unit may refer to a component of an item, equipment, a tool, and/or the like. Accordingly, a unit may be referenced in the textual information for a topic, as well as displayed in media content such as an illustration. For example, a user may be viewing the instructions for performing a maintenance task and the instructions may reference a particular part that is to be replaced during the task. Many times, some type of media may also be provided such as an illustration to assist the user in actually replacing the part. For instance, the instructions may be displayed on a first view pane of a window and the illustration may be displayed on a second view pane of a window.
  • the part may be provided in the first and/or second view panes as selectable, although the part may not necessarily be selectable in every embodiment. Therefore, in response to the user selecting one or more units in one of the view panes, the highlight unit module may be invoked.
  • the process flow 4700 begins with the highlight unit module determining whether input has been received indicating a selection of text referencing one or more units in Operation 4710.
  • the user may be viewing the steps for a maintenance procedure/task and may select a particular step for the procedure/task in the textual information displayed on a window.
  • the step may refer to one or more units (e.g., one or more components).
  • the highlight unit module may be configured to identify the reference(s) to the unit(s) based at least in part on the unit(s) (e.g., unit name and/or number) being selectable within the textual information.
  • the highlight unit module may be configured to identify the reference(s) to the unit(s) by searching the selected text and comparing terms within the text to a list of unit(s) (e.g., component names, part names and/or numbers, and/or the like).
  • the highlight unit module then causes the unit(s) to be displayed as highlighted in the media content being displayed on the window in Operation 4715 .
  • the highlight unit module may highlight the unit(s) using different formatting depending on the embodiment. For instance, the highlight unit module may highlight the unit(s) in the media content by displaying the unit(s) in bold, in a particular color, with a marker, with a border, in a different font, any combination thereof, and/or the like. As a result, the user is then able to identify the unit(s) referenced in the selected text in the media content more easily.
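  • A minimal sketch of the text-to-unit matching is given below; the unit catalogue and the matching-by-substring approach are assumptions for illustration only.

```python
# Hypothetical sketch of Operations 4710-4715: compare the selected text against
# a list of known unit names/numbers and report which units to highlight.

def units_in_text(selected_text, known_units):
    text = selected_text.lower()
    return [u for u in known_units if u.lower() in text]

if __name__ == "__main__":
    step = "Remove the retaining clip (item 12) and the gasket before lifting the pump."
    catalogue = ["retaining clip", "gasket", "pump", "impeller"]
    print(units_in_text(step, catalogue))   # units to highlight in the illustration
```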
  • the highlight unit module is configured in various embodiments to perform similar functionality with respect to the user selecting one or more units displayed in the media content. Therefore, if the highlight unit module determines it has not received a selection of text containing one or more units, then the module determines whether it has received a selection of one or more units in the media content currently being displayed on the window in Operation 4720.
  • the unit(s) displayed in the media content may be selectable and therefore, the user may have selected one or more of the units displayed in the media content. For example, the user may select a unit by clicking on the unit in the media content. In particular instances, the user may be able to select multiple units by holding down a key while clicking on the units such as, for example, the ctrl key or the alt key.
  • Those of ordinary skill in the art can contemplate other approaches that may be used to select the unit(s) in the media content in light of this disclosure.
  • the highlight unit module then causes the unit(s) to be displayed as highlighted in the textual information being displayed on the window in Operation 4725 .
  • the highlight unit module may highlight the unit(s) using different formatting depending on the embodiment.
  • FIG. 47B provides an example of a window displaying a first view pane on the left side of the window providing the textual information for a topic and a second view pane on the right side of the window providing an illustration of the topic.
  • the user has selected a particular step 4730 of a procedure/task referencing parts 4735, 4740, 4745 displayed in the illustration and as a result, the parts 4750, 4755, 4760 have been automatically highlighted in the illustration according to various embodiments.
  • FIG. 47C provides an example in which the user has selected a part 4765 in the illustration in the view pane displayed on the right side of the window and the references to the part 4770, 4775 are automatically highlighted in the textual information in the view pane displayed on the left side of the window according to various embodiments.
  • FIG. 48 is a flow diagram showing an end of topic module for performing such functionality according to various embodiments of the disclosure.
  • the end of topic module may be invoked by another module to invoke functionality such as, for example, the topic module previously described.
  • the end of topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the topic module may invoke the end of topic module in response to detecting the user has scrolled to the end of the textual information provided for a topic.
  • the content for a topic may be formatted in various embodiments according to S1000D standards. Therefore, the content for a topic may be stored in the IETM with respect to data modules and the end of the topic may refer to the end of content found in a particular data module for the topic (e.g., the end of the data module).
  • the functionality may only be provided at the end of the topic in particular embodiments to ensure the user has viewed and/or processed/used all of the content for a topic.
  • the user may be viewing a topic involving a task with many steps that are to be performed by the user. Therefore, end of topic functionality may only be provided upon detecting the user has reached the end of the content, that is, has reached the end of the steps for the task, to ensure the user has performed all of the steps.
  • other criteria may also be associated with providing end of topic functionality. For instance, returning to the example, the user may also need to acknowledge he or she has performed all of the steps in the task by checking off the steps before the end of topic functionality is provided.
  • the process flow 4800 begins with the end of topic module providing of an end of topic mechanism (e.g., a button) for the content displayed for the topic on a window in Operation 4810 .
  • the end of topic module in particular embodiments provides a previous topic mechanism (e.g., a button) and a next topic mechanism (e.g., a button) for the content displayed for the topic on the window in Operations 4815 and 4820 .
  • the end of topic module determines whether input has been received indicating the user has selected the previous topic mechanism in Operation 4825 . If so, then the end of topic module generates a preview for the previous topic found just before the current topic being viewed by the user in the table of contents for the technical documentation in Operation 4830 and provides the preview for display in Operation 4835 .
  • the previous topic preview may be provided as a separate window than the window displaying the topic.
  • the preview window may be superimposed over a portion of the window displaying the topic.
  • the previous topic preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the previous topic.
  • the preview is configured to provide only a preview of some of the content found in the technical documentation on the previous topic.
  • the preview may be configured in particular embodiments to provide the first five to fifty lines of textual information that the user would be provided with if the user were to select the previous topic to view the entire content for the topic.
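  • A sketch of building such a preview, assuming the topic's textual information is available as a list of lines, might be as simple as the following (the line counts follow the five-to-fifty range noted above):

```python
# Hypothetical sketch of Operations 4830-4835: build a preview containing only
# the first few lines of the neighbouring topic's textual information.

def build_topic_preview(topic_lines, max_lines=10):
    max_lines = max(5, min(max_lines, 50))        # stay within the described range
    return {
        "lines": topic_lines[:max_lines],
        "truncated": len(topic_lines) > max_lines,
    }

if __name__ == "__main__":
    previous_topic = [f"Line {i} of the previous topic." for i in range(1, 31)]
    print(build_topic_preview(previous_topic, max_lines=5))
```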
  • the end of topic module determines whether input has been received indicating the user has selected the next topic mechanism in Operation 4840 . If so, then the end of topic module generates a preview of the next topic found just after the current topic being viewed by the user in the table of contents for the technical documentation in Operation 4845 and provides the preview for display in Operation 4850 . Accordingly, the preview for the next topic may be configured in the same manner as the preview for the previous topic.
  • the end of topic module determines whether input has been received indicating the user has selected the end of topic mechanism in Operation 4855 . If so, then the end of topic module executes the functionality associated with the end of topic mechanism in Operation 4860 .
  • the functionality may perform different operations depending on the embodiment. For instance, in some embodiments, the functionality may open the table of contents for the technical documentation at the place in the table of contents where the current topic being viewed by the user is located and may highlight the current topic in the table of contents.
  • the table of contents may be provided in a separate window and/or a view pane displayed on the window displaying the topic.
  • Such functionality may allow the user to then view other topics in the vicinity of the current topic to help the user navigate to a new topic.
  • the functionality may take the user back to the top of the content for the topic (e.g., back to the top of the data module).
  • the functionality may allow the user to view other objects for the item.
  • the user may be performing maintenance on a particular aircraft of a type of aircraft found in an airline's fleet and may be viewing a maintenance task.
  • the user may be signed into the IETM using credentials identifying the particular aircraft so that the maintenance work (e.g., job) being performed on the aircraft is tracked and recorded.
  • the user may be assigned to perform the same maintenance on another aircraft of the same type found in the airline's fleet.
  • the end of the topic functionality may allow the user to view the other aircraft of the same type in the airline's fleet and then enable the user to move easily to the other aircraft in the IETM (e.g., sign into the other aircraft in the IETM) while maintaining the same maintenance task (e.g., the same topic).
  • the end of topic module is configured in some embodiments to cause the end of topic mechanism, the previous topic mechanism, and/or the next topic mechanism to be removed from display if the user scrolls to a position in the content for the topic that is no longer at the end of the content.
  • FIG. 49A provides an example of an end of topic mechanism (e.g., a button) 4900 provided at the end of the content for a topic according to various embodiments.
  • FIG. 49B provides an example in which the functionality performed as a result of the user selecting the end of topic mechanism 4900 is displaying a window with the table of contents at a position 4910 highlighting the current topic being viewed by the user.
  • the IETM may include functionality that allows for users to use verbal commands for interacting with content being viewed through the IETM.
  • a user may be maintenance personnel who is out in the field performing maintenance on a component for an item.
  • the user may be viewing documentation for the component via the IETM.
  • the documentation may involve content on a maintenance procedure and/or task the user is performing on the component, or the documentation may involve content on the component itself.
  • the maintenance the user is performing may be quite involved and require the user to use both of his or her hands in performing the maintenance. Therefore, it may be inconvenient for the user to have to interact with the IETM using his or her hands. As a result, the user may wish to use verbal commands to interact with the IETM.
  • functionality is provided in various embodiments to allow the user to setup verbal commands for interacting with content through the IETM.
  • functionality is provided that allows the user to identify an action to be performed based at least in part on a particular verbal command provided by the user.
  • the action may involve manipulating a user interface control element found on a window of the IETM such as, for example, checking a checkbox control element, selecting a button control element, selecting an item from a dropdown control element, and/or the like.
  • the action may involve manipulating content being displayed by the IETM such as, for example, scrolling through content, highlighting a portion of content, selecting a portion of content, having a portion of content read out audibly, and/or the like.
  • the functionality may be configured to allow the user to identify and associate various verbal commands with actions, user interface control elements, and/or the like.
  • the functionality may be configured to allow the user to associate such verbal commands and/or actions with particular types of content (e.g., portions of content).
  • FIG. 50A is a flow diagram showing a verbal command setup module for performing such functionality according to various embodiments of the disclosure.
  • the verbal command setup module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed.
  • the verbal command setup module may be executed in response to a user selecting an option to set up a verbal command from a window provided through the IETM.
  • the IETM may provide one or more windows that can be used by the user in setting up verbal commands for various actions. Accordingly, the user may be able to select a particular verbal command and action to be performed for the verbal command. Therefore, the process flow 5000 begins with the verbal command setup module receiving the verbal command in Operation 5010 and the action to be performed in Operation 5015 .
  • the verbal command may be to interact with a user interface element being displayed through the IETM.
  • the verbal command may be the term “check” and the action may be to check a checkbox control element found in a portion of content being displayed on the IETM and having focus.
  • the verbal command may be the term “click” and the action may be to click a button control element found in a portion of content being displayed on the IETM and having focus.
  • the verbal command may be the term “next” and the action may be to jump to a next portion of content (e.g., to a next step in a procedure, task, and/or checklist) being displayed on the IETM.
  • the verbal command may be the term “scroll down” and the action may be to scroll down through a portion of content being displayed on the IETM.
  • the user may be requested to provide one or more samples of the user providing the verbal command.
  • one or more audio samples of the user speaking the verbal command may be recorded. Therefore, as a result, the verbal command setup module receives the sample(s) in Operation 5020 .
  • the one or more samples provided by the user may be used in training a machine learning model.
  • the verbal command machine learning model may be a model configured to perform some type of automatic speech recognition on the verbal command to generate a representation of the verbal command, which can then be mapped to an action to perform for the verbal command.
  • the verbal command machine learning model may be configured to process a verbal command and generate the action to be performed based at least in part on the verbal command.
  • the verbal command machine learning model may generate a feature representation of the verbal command to map the feature representation directly to an applicable action. Therefore, the output of such a model is the action, itself, to be performed.
  • the verbal command machine learning model may be configured to process a verbal command and generate a representation of the verbal command, which can then be used in identifying an action to be performed.
  • the verbal command machine learning model may generate a textual representation of the verbal command.
  • the textual representation may then be used in identifying any keywords that appear in the verbal command, and these keywords may then be used in identifying an action to perform based at least in part on the verbal command.
  • a “keyword” may include a single word, combination of words such as a phrase, and/or the like.
  • the verbal command machine learning model may be any one of a number of different types of supervised and/or unsupervised machine learning models such as, for example, Hidden Markov models, conventional recurrent neural networks (RNNs), gated recurrent unit neural networks (GRUs), long short-term memory neural networks (LSTMs), and/or the like.
  • the verbal command machine learning model may be configured in some embodiments as an ensemble involving multiple machine learning models and/or algorithms.
  • the verbal command setup module may be configured in particular embodiments to preprocess the one or more samples and/or extract features from the one or more samples prior to using them to train and test the verbal command machine learning model.
  • the verbal command setup module may be configured to preprocess the sample(s) to remove background noise and/or silence, to normalize the volume of the sample(s) to a standard level, to apply pre-emphasis to boost high-frequency components of the audio signal(s) for the sample(s), and/or the like.
  • the verbal command setup module may be configured to extract one or more features from the sample(s) such as, for example, zero crossing rate, spectral rolloff, Mel-frequency cepstral coefficients (MFCC), chroma frequencies, and/or the like.
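  • As a hedged illustration only, and assuming an audio library such as librosa is available, the preprocessing and feature extraction might resemble the following; the trimming step and the specific feature set are examples rather than the disclosed implementation.

```python
# Hypothetical sketch: trim silence from a recorded sample and extract the
# features listed above (zero crossing rate, spectral rolloff, MFCCs, chroma).

import numpy as np
import librosa

def extract_features(audio_path, n_mfcc=13):
    signal, sr = librosa.load(audio_path, sr=None)       # load the recorded sample
    signal, _ = librosa.effects.trim(signal)             # strip leading/trailing silence
    features = [
        np.atleast_1d(librosa.feature.zero_crossing_rate(signal).mean()),
        np.atleast_1d(librosa.feature.spectral_rolloff(y=signal, sr=sr).mean()),
        librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).mean(axis=1),
        librosa.feature.chroma_stft(y=signal, sr=sr).mean(axis=1),
    ]
    return np.concatenate(features)                      # single feature vector
```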
  • the one or more samples provided by the user may be broken down into training sample(s) and testing sample(s). Therefore, the verbal command setup module trains the verbal command machine learning model using the training sample(s) (e.g., extracted features of the sample(s)) in Operation 5025 . Once trained, the verbal command setup module determines whether the model is trained to an acceptable level for generating the action identified by the user for the verbal command in Operation 5030 .
  • the verbal command setup module may be configured to determine whether the verbal command machine learning model can generate the appropriate action for the testing samples to a certain level of performance (e.g., satisfy a threshold level of performance). If the verbal command setup module determines the performance of the verbal command machine learning model is not acceptable, then the verbal command setup module returns to Operation 5020 and receives additional sample(s) from the user and further trains the model on the additional samples.
  • the verbal command setup module stores the model in Operation 5035 so that it may be used for processing verbal commands received by the user while using the IETM. Accordingly, the verbal command machine learning model may be trained for processing a variety of commands to perform a variety of actions.
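  • The train/evaluate loop of Operations 5020-5035 can be sketched with any off-the-shelf classifier as a stand-in for the verbal command machine learning model; the k-nearest-neighbours model, the callback name, and the acceptance threshold below are assumptions, not the disclosed design.

```python
# Hypothetical sketch: train on the provided samples, test against held-out
# samples, and request additional samples until performance is acceptable.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def train_until_acceptable(request_samples, threshold=0.9, max_rounds=5):
    """`request_samples()` is a hypothetical callback that records more samples
    and returns cumulative (train_X, train_y, test_X, test_y) arrays."""
    model = KNeighborsClassifier(n_neighbors=3)
    for _ in range(max_rounds):
        train_X, train_y, test_X, test_y = request_samples()
        model.fit(train_X, train_y)                        # Operation 5025
        score = accuracy_score(test_y, model.predict(test_X))
        if score >= threshold:                             # Operation 5030
            return model                                   # stored in Operation 5035
    raise RuntimeError("model did not reach the acceptance threshold")
```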
  • the verbal command machine learning model may be trained and used for a specific user or for multiple users. That is to say, in particular embodiments, a verbal command machine learning model may be developed and trained for each individual user, while in other embodiments, a verbal command machine learning model may be developed and trained for multiple users.
  • the verbal command machine learning model can then be used in generating actions to perform based at least in part on verbal commands received by the user while the user is viewing documentation through the IETM.
  • the verbal command machine learning model may be further trained over time as samples of verbal commands are provided by the user during actual use. Such further training may help in fine tuning the verbal command machine learning model.
  • FIG. 50B is a flow diagram showing a verbal command module for performing such functionality according to various embodiments of the disclosure.
  • the verbal command module may be invoked by another module to process a verbal command such as, for example, the topic module previously described.
  • the verbal command module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the process flow 5040 begins with the verbal command module receiving a verbal command in Operation 5045 .
  • the verbal command may be received through an audio input of a user computing entity 110 being used by the user to view documentation in the IETM.
  • the verbal command module identifies what portion of content that is being displayed by the IETM currently has focus in Operation 5050 . Accordingly, in various embodiments, focus on a portion of content identifies the portion of content as having a center of interest and/or activity with respect to the content currently being provided through the IETM.
  • the user may be viewing content involving a checklist and the user may have selected a particular step of the checklist. Therefore, in this example, the selected step is identified as the portion of the content having focus.
  • focus on a portion of content may be accomplished using various mechanisms depending on the embodiment.
  • the user may indicate completion of a particular portion of content (e.g., completion of a step in a checklist), and focus may automatically move to another portion of the content (e.g., focus may move automatically to the next step in the checklist).
  • the user may perform some type of action such as clicking on and/or hovering over a portion of content to convey focus on the portion of content.
  • Those of ordinary skill in the art can envision multiple types of mechanisms that can be used to establish focus on a portion of content in light of this disclosure.
  • the verbal command module generates an action based at least in part on the verbal command received from the user in Operation 5055 .
  • the verbal command module performs this operation by processing the verbal command (e.g., audio of the verbal command) using a verbal command machine learning model to generate the action.
  • the verbal command module may preprocess and/or extract one or more features from the verbal command (e.g., audio of the verbal command) before processing the verbal command (e.g., before processing the extracted feature(s) of the verbal command) using the verbal command machine learning model.
  • the verbal command machine learning model may be configured to process the verbal command and generate an action to perform based at least in part on the verbal command. Therefore, for these embodiments, the verbal command machine learning model can generate the action to be performed without further processing by the verbal command module.
  • the verbal command machine learning model may be configured to generate a representation of the verbal command (e.g., a textual representation) by performing natural language processing on the verbal command, and the representation may then be used in generating the action to be performed.
  • the verbal command machine learning model may be a deep learning model such as a CNN configured to perform automatic speech recognition on the verbal command to generate the representation.
  • the verbal command module may be configured to perform preprocessing and/or feature extraction on the verbal command prior to processing the verbal command using the verbal command machine learning model.
  • the verbal command module may then identify any keywords found in the representation of the verbal command that may be used to identify an action to perform based at least in part on the verbal command.
  • the verbal command module may be configured to then use some type of data structure, such as a table, file, array, and/or the like, to reference and map/match the identified keyword(s) found in the textual representation with an action.
  • a “keyword” may include a single word, combination of words such as a phrase, and/or the like.
  • the verbal command module determines whether the identified action to perform involves a user interface control element in Operation 5060 .
  • the identified action to perform may be to scroll down through the portion of content currently having focus to another portion of the content.
  • the identified action to perform may be to jump to a next step in a procedure, task, and/or checklist.
  • the identified action to perform does not necessarily involve a user interface control element. Therefore, in these examples, the verbal command module determines the identified action to perform does not involve a user interface control element and as a result, performs the identified action in Operation 5070 .
  • the identified action to perform may involve a user interface control element. Therefore, if this is the case, the verbal command module identifies an applicable user interface control element for the action in Operation 5065 . In particular embodiments, the verbal command module performs this operation by first identifying one or more applicable user interface control elements for the identified action to be performed, and then determining which of the applicable user interface control elements are found in the portion of content that currently has focus. For example, the verbal command received from the user may have been the term “check.”
  • the verbal command module may generate an action to perform that involves checking a user interface control element and determine that such an element associated with this action is a checkbox control element. Therefore, the verbal command module may determine whether a checkbox control element is present in the portion of content that currently has focus. If such an element is present, then the verbal command module performs the action by checking the checkbox control element in Operation 5070 .
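  • A minimal sketch of this keyword-to-action dispatch, including the check for an applicable user interface control element in the focused portion of content, is shown below; the keyword table, action names, and content structure are illustrative assumptions.

```python
# Hypothetical sketch of Operations 5050-5070: map recognized keywords to an
# action and, when the action involves a user interface control element, apply
# it to the portion of content that currently has focus.

KEYWORD_ACTIONS = {
    "check": ("toggle", "checkbox"),
    "click": ("activate", "button"),
    "next": ("jump_next", None),           # no control element involved
    "scroll down": ("scroll_down", None),
}

def dispatch(recognized_text, focused_portion):
    for keyword, (action, element_type) in KEYWORD_ACTIONS.items():
        if keyword in recognized_text.lower():
            if element_type is None:
                return {"action": action, "target": focused_portion["id"]}
            element = next((e for e in focused_portion["elements"]
                            if e["type"] == element_type), None)
            if element is not None:
                return {"action": action, "target": element["id"]}
    return None                             # no applicable action/element found

if __name__ == "__main__":
    focus = {"id": "step-3", "elements": [{"type": "checkbox", "id": "step-3-done"}]}
    print(dispatch("check", focus))         # toggles the checkbox in the focused step
    print(dispatch("scroll down", focus))   # scrolls within the focused portion
```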
  • the verbal command module in various embodiments allows the user to perform functionality within the IETM using various verbal commands. Specifically, in various embodiments, such functionality may involve performing some type of action such as, for example, checking a checkbox control element, selecting a button control element, highlighting a portion of content, skipping to another portion of content, scrolling through a portion of content, launching a preview window, and/or the like.
  • a user may be able to perform functionality that normally requires the user to physically interact (e.g., use an input device such as a mouse, pointer, touchscreen, and/or the like) with his or her user computing entity 110 to interact with documentation being viewed through the IETM by using verbal commands instead.
  • Such a capability may be very beneficial in instances where it is inconvenient for the user to physically interact with his or her user computing entity 110 . This may also be true of users of the IETM who may be physically challenged and therefore, may be unable to physically interact with his or her computing entity 110 .
  • FIG. 51A is a flow diagram showing a wiring module for performing such functionality according to various embodiments of the disclosure.
  • the wiring module may be invoked by another module to invoke functionality such as, for example, the topic module previously described.
  • the wiring module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • content involving wiring data may provide one or more illustrations of an electrical schematic of a wiring configuration used for the item.
  • the electrical schematic may include a layout of a plurality of wires and a plurality of other components that make up the configuration.
  • the other components may include articles such as harnesses, electrical equipment, connectors (e.g., plugs), track assemblies, and/or the like. Therefore, in particular embodiments, the topic module may determine whether the content for the topic currently being displayed involves wiring data and if so, the topic module invokes the wiring module.
  • the process flow 5100 begins with the wiring module determining whether input has been received indicating the user who is viewing wiring data has selected a particular wire in the electrical schematic being displayed on a window in Operation 5110 .
  • the wiring data may entail one or more illustrations of the electrical schematic.
  • the individual wiring and/or components shown in the illustration(s) may be configured as selectable to invoke different functionality depending on the type of selection mechanism used by the user.
  • the individual wiring may be configured so that if the user uses his or her mouse to hover over a particular wire shown in the schematic, then tracing of the wire in the schematic is displayed on the window.
  • the tracing may be shown by highlighting the wire in the schematic by, for example, bolding the wire, displaying the wire in a particular color, displaying the wire using a unique pattern, using a combination thereof, and/or the like.
  • the wiring module generates a preview for the wire and provides the preview for display in Operations 5111 and 5112 .
  • the wire preview may be provided as a separate window than the window displaying the wiring data.
  • the preview window may be superimposed over a portion of the window displaying the wiring data.
  • the wire preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the wire. In some embodiments, the preview is configured to provide only a preview of some of the content found in the technical documentation on the wire.
  • if the wiring module instead determines input has been received indicating the user has selected the particular wire using a third, different selection mechanism (e.g., alt-clicking on the wire) in Operation 5113, then the wiring module enables live wire for the particular wire in Operation 5114.
  • live wire provides a window displaying a diagram with all of the terminal ends for the selected wire. Accordingly, the window is configured in particular embodiments so that the user can select portions of the wire between terminal ends within the diagram to view information on the portion of wire and terminal ends.
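  • The mapping from selection mechanism to behaviour can be sketched as a simple dispatch; the hover and alt-click mechanisms follow the examples above, while the plain click used here for the preview is an assumption, and the return values are placeholders for the actual rendering actions.

```python
# Hypothetical sketch of Operations 5110-5114: different selection mechanisms on
# a wire invoke tracing, the wire preview, or live wire.

def on_wire_event(wire_id, mechanism):
    if mechanism == "hover":
        return {"trace_highlight": wire_id}      # bold/colour the wire in the schematic
    if mechanism == "click":
        return {"open_preview": wire_id}         # superimposed preview window
    if mechanism == "alt-click":
        return {"live_wire": wire_id}            # diagram of all terminal ends
    return {}

if __name__ == "__main__":
    for mech in ("hover", "click", "alt-click"):
        print(on_wire_event("W123-20", mech))
```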
  • the wiring module determines whether input has been received indicating the user has selected a component (other than a wire) displayed in the schematic in Operation 5120 . If so, then the wiring module generates a preview for the component and provides the preview for display in Operations 5121 and 5122 . Accordingly, the preview for the component may be configured in the same manner as the preview for the wire.
  • the component selected by the user may be a connector displayed in the electrical schematic of the wiring configuration used for the item.
  • the preview for the connector may display an illustration of the connector and a plurality of pins found on the connector.
  • each of the pins may be selectable by the user to generate a preview for the pin. Therefore, in this example, the wiring module may determine whether input has been received indicating the user has selected a particular pin displayed in the illustration for the connector in Operation 5123 . If the user has selected a particular pin, then the wiring module generates and provides a preview for the pin for display in Operations 5124 and 5125 .
  • the preview for the pin may be configured in the same manner as the preview for the wire and/or component.
  • the pin may be highlighted in the illustration of the connector to help the user to better identify where the pin is located within the connector. This may be quite useful to an individual who is working in the field on the particular connector.
  • the preview for the connector may be configured in a similar fashion as the preview described above with respect to the connector module, with the wiring module having similar functionality as the connector module. Accordingly, the preview may provide a list of the pins found on the connector and allow for the user to select one or more pins (e.g., a pair of pins) to display on media content (e.g., an illustration) of the connector to assist the user in locating the pins on the physical connector while working in the field.
  • the user may also be provided with a selection mechanism (e.g., a button) to generate a list of the components found in the electrical schematic of the wiring configuration displayed on the window.
  • Each of the components may be identified by a reference designator (e.g., ResDet). Therefore, in these particular embodiments, the wiring module determines whether input has been received indicating the user has selected this selection mechanism in Operation 5130 . If so, then the wiring module retrieves and provides the list of components for display in Operations 5131 and 5132 . For example, in particular embodiments, the wiring module may cause the list of components to be displayed in a first view pane on the window while continuing to display the illustration of the electrical schematic in a second view pane on the window.
  • the components provided in the list may be selectable (e.g., may be displayed as a hyperlink and/or displayed with a selection mechanism such as a button) to allow the user to view information for the component.
  • the information may be displayed on a separate window and may provide a list of other electrical schematics found in the wiring data for the technical documentation on the item in which the component is shown. Therefore, upon displaying the list of components, the wiring module may determine whether input has been received indicating the user has selected a particular component found in the list in Operation 5133 . If the user has selected a component found in the list, then the wiring module retrieves and provides the information providing the other electrical schematics in which the component is shown in Operations 5134 and 5135 . In particular instances, the electrical schematics displayed in the list may also be selectable to allow the user to retrieve and view the schematic.
  • the wiring module determines whether to exit in Operation 5140. If not, then the wiring module returns to Operation 5110 to determine whether input has been received indicating selection of another wire. If instead the wiring module determines to exit, then it does so and the process flow 5100 ends.
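  • By way of illustration only, the selection-dependent handling described above for the wiring module (hover to trace a wire, a second mechanism to preview it, a third mechanism to enable live wire, plus component and component-list selections) could be dispatched along the lines of the following minimal Python sketch. The event names, identifiers, and returned actions are assumptions made for illustration and are not the disclosed implementation.

```python
from dataclasses import dataclass

# Hypothetical record for a selection made on the displayed electrical schematic.
@dataclass
class SelectionEvent:
    kind: str         # "hover", "click", or "alt_click" (assumed mechanism names)
    target_type: str  # "wire", "component", or "component_list_button"
    target_id: str    # e.g., a wire number or reference designator

def handle_wiring_selection(event: SelectionEvent) -> str:
    """Map a selection mechanism to the behavior described for the wiring module."""
    if event.target_type == "wire":
        if event.kind == "hover":
            return f"trace wire {event.target_id} (highlight in the schematic)"
        if event.kind == "click":
            return f"open preview window for wire {event.target_id}"
        if event.kind == "alt_click":
            return f"enable live wire for {event.target_id} (terminal-end diagram)"
    if event.target_type == "component" and event.kind == "click":
        return f"open preview window for component {event.target_id} (pins selectable)"
    if event.target_type == "component_list_button":
        return "list components by reference designator in a first view pane"
    return "no action"

# Example usage (illustrative only):
print(handle_wiring_selection(SelectionEvent("hover", "wire", "W123")))
print(handle_wiring_selection(SelectionEvent("alt_click", "wire", "W123")))
```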
  • FIG. 51B provides an example of a window displaying an electrical schematic of a wiring configuration used for an item.
  • the user has selected a particular wire 5150 shown in the schematic to generate and display a preview window 5151 for the wire superimposed over the window displaying the electrical schematic according to various embodiments.
  • the tracing of the wire has been highlighted in the electrical schematic.
  • FIG. 51C provides an example of a preview window 5160 for a connector according to various embodiments as a result of the user selecting the connector 5161 in the electrical schematic.
  • the preview window 5160 is superimposed over the window displaying the electrical schematic and provides an illustration of the connector (e.g., plug) displaying a plurality of pins found in the connector. Accordingly, the user has selected a particular pin 5162 and as a result, a preview window 5163 for the pin has been generated and displayed. In addition, the pin 5162 has been highlighted in the illustration of the connector.
  • FIG. 51D provides an example of a list of components found in the electrical schematic that has been generated and provided in a first view pane 5170 displayed on a window according to various embodiments.
  • the electrical schematic continues to be provided in a second view pane 5171 displayed on the window.
  • FIG. 51E provides an example of a list of other electrical schematics 5180 in which a selected component is shown that has been generated and displayed according to various embodiments.
  • each of the schematics (and accompanying data modules) has been made selectable to allow the user to retrieve and view a schematic if desired.
  • FIG. 52 is a flow diagram showing a live wire module for performing such functionality according to various embodiments of the disclosure.
  • the live wire module may be invoked by another module to provide live wire such as, for example, the wiring module previously described.
  • the live wire module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the wiring module may invoke the live wire module.
  • the process flow 5200 begins with the live wire module generating a wire diagram displaying all of the terminal ends for the selected wire and providing the wire diagram for display in Operations 5210 and 5215 .
  • the live wire module may provide the diagram in a separate window or in a view pane displayed on an existing window.
  • each portion of the wire shown between two terminal ends is selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) in the wire diagram. Therefore, in these embodiments, the live wire module determines whether input has been received indicating the user has selected a portion of the wire in the diagram in Operation 5220 . If so, then the live wire module provides information on the portion of the wire and the two terminal ends for display in Operation 5225 .
  • the information on the portion of the wire may be provided on a view pane displayed on the window displaying the wire diagram (with the wire diagram displayed on a separate view pane) or on a separate window.
  • the information displayed on the portion of the wire may include such information as the material used for the wiring, properties for the portion of wire, the parts (e.g., part names and/or numbers) that are associated with the wire and/or terminal ends, location identifiers for the terminal ends, and/or the like. Accordingly, some of the information displayed for the portion of the wire may be selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) to allow further information to be displayed. For example, in some embodiments, the parts (e.g., the part names and/or numbers) are selectable, as well as the location identifiers for the terminal ends.
  • the live wire module determines whether input has been received indicating the user has selected one of the parts in Operation 5230. If so, then the live wire module generates and provides a preview for the part for display in Operations 5235 and 5240. Similar to other previews, the part preview may be provided in a separate window from the window displaying the wiring diagram. In some embodiments, the preview window may be superimposed over a portion of the window displaying the wiring diagram.
  • the live wire module may retrieve the information displayed for the preview from the parts data (e.g., parts data modules) found in the technical documentation on the item.
  • the preview may provide interactive functionality such as a selection mechanism to enable the user to order the part from the IETM (as previously discussed).
  • the live wire module determines whether input has been received indicating the user has selected one of the location identifiers for a terminal end displayed on the wire window in Operation 5245. If so, then the live wire module generates and provides a preview for the location for display in Operations 5250 and 5255. Similar to other previews, the location preview may be provided in a separate window from the window displaying the wiring diagram. In some embodiments, the preview window may be superimposed over a portion of the window displaying the wiring diagram. Accordingly, the preview may provide information on the location of the terminal end. The live wire module may retrieve such information from the wiring data (e.g., wire data modules) found in the technical documentation on the item.
  • the live wire module may determine whether input has been received indicating the user would like to exit from viewing the wire diagram in Operation 5260. If not, then the live wire module continues to monitor the user's interactions. Otherwise, the live wire module exits.
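  • As a hedged illustration of the live wire functionality described above, the following Python sketch models a wire as portions bounded by terminal ends and collects the information (material, associated parts, location identifiers) shown when a portion is selected; the field names and sample values are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the live-wire diagram: a wire broken into portions,
# each bounded by two terminal ends. Field names and values are assumptions.
@dataclass
class TerminalEnd:
    identifier: str       # e.g., "P101-3"
    location_zone: str    # location identifier (e.g., a zone)

@dataclass
class WirePortion:
    start: TerminalEnd
    end: TerminalEnd
    material: str
    associated_parts: list = field(default_factory=list)  # part numbers

def portion_info(portion: WirePortion) -> dict:
    """Collect the information shown when the user selects a portion of the wire."""
    return {
        "terminal_ends": (portion.start.identifier, portion.end.identifier),
        "locations": (portion.start.location_zone, portion.end.location_zone),
        "material": portion.material,
        # Parts and location identifiers would be rendered as selectable links that
        # open part previews or location previews, per the description above.
        "parts": portion.associated_parts,
    }

# Example usage (illustrative only):
p = WirePortion(TerminalEnd("P101-3", "Zone A"), TerminalEnd("J205-7", "Zone C"),
                material="tinned copper", associated_parts=["PN-4411", "PN-9902"])
print(portion_info(p))
```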
  • FIG. 53 provides an example of a wire diagram generated and displayed for a selected wire according to various embodiments.
  • the user who is viewing the diagram has selected a portion of the wire 5300 between two terminal ends 5310, 5315, which is highlighted, and as a result, information is displayed describing the portion of the wire 5300 and the two terminal ends 5310, 5315. The parts (e.g., part numbers) and location identifiers (e.g., zones) provided in this information may be selectable.
  • FIG. 54 is a flow diagram showing a crosshairs module for performing such functionality according to various embodiments of the disclosure.
  • the crosshairs module may be invoked as a result of a user who is viewing the graph invoking a mechanism (e.g., alt-click) to place crosshairs on the graph.
  • the process flow 5400 begins with the crosshairs module determining whether input has been received identifying a location to place the crosshairs on the graph in Operation 5410 . Accordingly, in various embodiments, the user moves a cursor over the graph displayed on the window to a position on the graph that he or she would like to place the crosshairs and then invokes the appropriate mechanism. Such action identifies the location where the crosshairs module is to place the crosshairs. If the user has appropriately identified a location, then the crosshairs module causes the crosshairs to be placed on the graph at the location in Operation 5415 .
  • FIG. 55 provides an example of crosshairs 5500 placed on a graph displayed on a window according to various embodiments. The user may use this functionality to help the user better identify the values associated with a particular location (e.g., the values associated with a particular location on a line) on the graph.
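  • One plausible way to realize the crosshairs placement described above is to convert the cursor's pixel position into graph coordinates so the values at that location can be reported, as in the minimal Python sketch below; the plot-box geometry and axis ranges are illustrative assumptions.

```python
# Sketch of placing crosshairs on a graph: convert the cursor position (in window
# pixels) to graph coordinates so the values at that location can be reported.
# The plot box and axis ranges below are illustrative assumptions.

def pixels_to_graph_coords(px, py, plot_box, x_range, y_range):
    """Map a pixel location inside the plot box to (x, y) data values."""
    left, top, width, height = plot_box          # plot area in window pixels
    x_min, x_max = x_range
    y_min, y_max = y_range
    fx = (px - left) / width                     # fraction across the x axis
    fy = (top + height - py) / height            # fraction up the y axis (pixels grow downward)
    return (x_min + fx * (x_max - x_min),
            y_min + fy * (y_max - y_min))

# Example: a click at pixel (400, 150) inside a 600x300 plot box anchored at (100, 50).
x, y = pixels_to_graph_coords(400, 150, (100, 50, 600, 300), (0.0, 10.0), (0.0, 1.0))
print(f"place crosshairs at data point ({x:.2f}, {y:.2f})")
```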
  • FIG. 56 is a flow diagram showing a 3D graphics module for performing such functionality according to various embodiments of the disclosure.
  • the graphics module may be invoked by another module to provide functionality for 3D graphics such as, for example, the topic module previously described.
  • the 3D graphics module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • the content displayed for a particular topic may include media content.
  • the media content may involve 3D graphics.
  • the topic may involve displaying the illustrated parts data for a component of an item.
  • a table of the parts used for the component may be provided in a first view pane displayed on a window and media content for the component may be provided in a second view pane displayed on the window.
  • the window may be configured to display the first and second view panes on non-overlapping portions of the window.
  • the parts listed in the table may be selectable in the first view pane and the media content displayed in the second view pane may be a 3D graphic of the component. Therefore, in particular embodiments, the topic module may determine the media content for the topic currently being displayed is a 3D graphic and as a result, the topic module invokes the 3D graphics module.
  • the process flow 5600 begins with the 3D graphics module determining whether input has been received indicating the user has selected a part in the 3D graphic using a first selection mechanism (e.g., using his or her mouse to hover over the part in the graphic) in Operation 5610 . If the user has selected the part using the first selection mechanism, then the 3D graphics module causes the selected part to be displayed as highlighted in both the graphic displayed in the second view pane and the table displayed in the first view pane in a first format in Operations 5611 and 5612 . Accordingly, the part may be highlighted in the 3D graphic and the table using different formatting depending on the embodiment.
  • highlighting the part may be accomplished by formatting the part in bold, in a particular color, with a border, in a different font, any combination thereof, and/or the like. Therefore, the first format may involve displaying the part in a first color (e.g., green) in the 3D graphic and displaying the part in a separate color (e.g., blue) in the table.
  • the 3D graphics module may determine whether input has been received indicating the user has instead selected the part in the 3D graphic using a second, different selection mechanism (e.g., clicking on the part in the graphic) in Operation 5620 . If the user has selected a part using the second selection mechanism, then the 3D graphics module causes the selected part to be displayed as highlighted in both the graphic displayed in the second view pane and the table displayed in the first view pane in a second format in Operations 5621 and 5622 .
  • the second format may involve displaying the part in a second color (e.g., blue) in the 3D graphic and displaying the part in the separate color (e.g., blue) along with a border in the table.
  • the first selection mechanism (e.g., hovering over the part in the 3D graphic using a cursor) provides the user with a quick way of identifying the part in the table of parts.
  • Such functionality may allow the user to move freely from part to part in the 3D graphic and identify the part he or she is specifically looking for by viewing which corresponding part is highlighted in the table. Therefore, as the user moves from part to part using the first selection mechanism, the corresponding part highlighted in the table also moves, while the previous part selected using the first selection mechanism is no longer highlighted in particular embodiments.
  • the second selection mechanism (e.g., clicking on the part in the 3D graphic) provides the user with a way to select a part in the table that stays selected.
  • the user may want to view more information on a part that is available through the table and/or order the part using a mechanism (e.g., a button) provided along with the part in the table. Therefore, in this example, the user uses the second selection mechanism (e.g., clicking on the part in the 3D graphic) to select the corresponding part in the table.
  • the part stays selected even after the user moves his or her cursor off the part in the 3D graphic.
  • the user can select multiple parts by using the second selection mechanism.
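  • The two selection mechanisms described above (a transient hover highlight in a first format and a persistent click selection in a second format, with multiple clicked parts allowed) could be tracked with a small state object such as the following Python sketch; the class, method, and part names are assumptions for illustration.

```python
# Sketch of the two selection mechanisms for the 3D graphic: hovering highlights a
# single part transiently (the highlight moves with the cursor), while clicking adds
# the part to a persistent selection. Names are illustrative assumptions.

class PartSelectionState:
    def __init__(self):
        self.hovered = None          # at most one transiently highlighted part
        self.selected = set()        # parts that stay selected after a click

    def on_hover(self, part_id):
        # First selection mechanism: the previously hovered part loses its highlight.
        self.hovered = part_id

    def on_click(self, part_id):
        # Second selection mechanism: the part stays selected; multiple parts allowed.
        self.selected.add(part_id)

    def highlight_format(self, part_id):
        """Return the highlight format to apply in both the graphic and the table."""
        if part_id in self.selected:
            return "second format (e.g., color plus border in the table)"
        if part_id == self.hovered:
            return "first format (e.g., color only)"
        return "no highlight"

state = PartSelectionState()
state.on_hover("bolt-12")
state.on_click("bracket-3")
state.on_hover("washer-7")                  # hover moves off bolt-12
print(state.highlight_format("bolt-12"))    # no highlight
print(state.highlight_format("washer-7"))   # first format
print(state.highlight_format("bracket-3"))  # second format (persistent)
```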
  • the 3D graphics module determines whether input has been received indicating the user has selected a part to delete (e.g., using a selection mechanism such as right clicking on the part and selecting delete) in Operation 5623 . If so, the 3D graphics module causes the part to be removed from being displayed in the 3D graphic in Operation 5624 . Accordingly, a deleted part can be added back to the 3D graphic in some embodiments.
  • the 3D graphics module determines whether input has been received indicating the user wants to un-delete a part that has been removed from display in the 3D graphic in Operation 5625 . If so, then the 3D graphics module causes the part to be displayed again in the 3D graphic in Operation 5626 .
  • the 3D graphics module may be configured in various embodiments to allow for similar functionality based at least in part on the user selecting a part in the table. Therefore, the 3D graphics module may determine whether input has been received indicating the user has selected a part in the table in Operation 5630. If so, then the 3D graphics module causes the part to be displayed as highlighted in the 3D graphic in Operation 5631. In addition, in particular embodiments, the 3D graphics module causes the part to be zoomed in on and rotated in the 3D graphic in Operation 5632. In these particular embodiments, the 3D graphics module may be configured to zoom in on the part in the 3D graphic with respect to the size of the part: the smaller the part, the more the part is zoomed in on. Likewise, the 3D graphics module may be configured to rotate the part to a better angle for viewing.
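  • The size-dependent zoom described above (the smaller the part, the more it is zoomed in on when selected in the table) might be computed along the lines of the following sketch; the formula and clamping bounds are purely illustrative assumptions.

```python
# Sketch of size-dependent zoom: smaller parts are zoomed in on more than larger
# ones when selected in the table. Formula and bounds are illustrative assumptions.

def zoom_factor_for_part(part_extent, view_extent, min_zoom=1.5, max_zoom=20.0):
    """Return a zoom factor that grows as the part gets smaller relative to the view."""
    if part_extent <= 0:
        return max_zoom
    factor = view_extent / part_extent           # small part -> large factor
    return max(min_zoom, min(max_zoom, factor))

# Example: a 2-unit bolt in a 100-unit assembly is zoomed far more than a 40-unit panel.
print(zoom_factor_for_part(2.0, 100.0))   # 20.0 (clamped to max_zoom)
print(zoom_factor_for_part(40.0, 100.0))  # 2.5
```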
  • multiple selection mechanisms can be used in a similar fashion to select a part in the table as selecting a part in the 3D graphic. That is to say, some embodiments may be configured to allow a user to use a first selection mechanism (e.g., hovering over a part in the table) to highlight the part in a first format and use a second, different mechanism (e.g., clicking on the part in the table) to highlight the part in a second format.
  • the 3D graphics module may determine whether input has been received indicating the user has selected a part to display by itself in the 3D graphic (e.g., using a selection mechanism such as alt-clicking on the part) in Operation 5640. If so, then the 3D graphics module causes all the other parts of the component to be removed from being displayed in the 3D graphic in Operation 5641.
  • the user may be provided functionality to display an axis or axes in the 3D graphic to assist the user in rotating the graphic to obtain a better view of a part. Therefore, in these particular embodiments, the 3D graphics module determines whether input has been received indicating the user has selected to display the axis or axes in the 3D graphic (e.g., has selected an add axis/axes mechanism) in Operation 5650 . If so, then the 3D graphics module causes display of the axis or axes in Operation 5651 .
  • FIG. 57A provides an example of a window displaying a table of parts for a component in a first view pane and a 3D graphic of the component in a second view pane.
  • a user has selected a particular part 5700 in the 3D graphic using a first selection mechanism (e.g., by using his or her mouse to hover over the part) and as a result, the part 5700 is highlighted in the 3D graphic and the corresponding part 5710 is highlighted in the table according to various embodiments.
  • both are highlighted using a first format involving showing the parts 5700 , 5710 in color.
  • FIG. 57B again provides the window displaying the table of parts for the component in the first view pane and the 3D graphic of the component in the second view pane.
  • the user has now selected the particular part 5700 in the 3D graphic using a second selection mechanism (e.g., by clicking on the part) and as a result, the part 5700 is highlighted in the 3D graphic and the corresponding part 5710 is highlighted in the table using a second format involving showing the parts 5700 , 5710 in color and placing a border around the part 5710 in the table according to various embodiments.
  • the first selection mechanism can allow the user to quickly identify where a part displayed in the 3D graphic is found in the table, while the second selection mechanism can allow the user to actually select a part in both the 3D graphic and the table so that he or she may view further information on the part and/or perform some type of functionality with respect to the part.
  • FIG. 57C again provides the window displaying the table of parts for the component in the first view pane and the 3D graphic of the component in the second view pane.
  • the user is interested in a part 5715 listed in the table that is also shown in the 3D graphic 5720 and selects the part 5715 (e.g., clicks on the part 5715 ) in the table.
  • the part 5715 is highlighted in the table and is highlighted in the 3D graphic 5720 according to various embodiments as shown in FIG. 57D .
  • the part 5720 shown in the 3D graphic is zoomed in on and rotated so that the user can get a better view of the part 5720 .
  • FIG. 57E provides an example of a 3D graphic where the user is interested in viewing a specific part 5725 that the user has selected but would like to do so without the other part 5730 hindering the view. Therefore, in this example, the user selects the other part 5730 and provides an indication to remove the part from view in the 3D graphic according to various embodiments. As a result, the other part 5730 is removed from the 3D graphic so that only the part 5725 the user is interested in viewing is provided in the 3D graphic as shown in FIG. 57F.
  • FIG. 57G provides an example of a 3D graphic where the user is again interested in viewing a specific part 5735 but would like to do so without the other parts shown in the graphic hindering the view.
  • the user selects the specific part 5735 and indicates to solely show the part 5735 in the 3D graphic according to various embodiments.
  • the specific part 5735 is shown in the 3D graphic by itself without the other parts of the component being displayed as shown in FIG. 57H .
  • FIG. 57I provides an example where the user has indicated to display axes 5740 in the 3D graphic according to various embodiments. As previously mentioned, the user may display the axes 5740 to assist him or her in rotating the graphic to obtain a better view of a part.
  • FIG. 58 is a flow diagram showing a hierarchy module for performing such functionality according to various embodiments of the disclosure.
  • the hierarchy module may be invoked as a result of a user indicating to view the hierarchy associated with the components shown in media content currently being displayed.
  • the hierarchy refers to the relationships between the components of an item with respect to functional and/or physical breakdown of the components (e.g., breakdown into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like).
  • the process flow 5800 begins with the hierarchy module providing the hierarchy for the components shown in the media content currently being displayed in Operation 5810 .
  • the hierarchy may be provided in a first view pane displayed on a window and the media content (e.g., illustration) may be provided on a second view pane displayed on the window.
  • the window may be configured to display the first and second view panes on non-overlapping portions of the window.
  • each of the components provided in the hierarchy may be associated with a selection mechanism (e.g., a checkbox control) to allow the user to identify which of the components to display in the media content and which of the components not to display.
  • the hierarchy module determines whether input has been received indicating a selection of a component to display in the media content in Operation 5815 . If so, then the hierarchy module causes display of the component in the media content in Operation 5820 . Likewise, the hierarchy module determines whether input has been received indicating a selection of a component not to display in the media content in Operation 5825 . If so, then the hierarchy module causes the component to be removed from being displayed in the media content in Operation 5830 .
  • a report may also be provided on those components illustrated (shown) in the media content but not listed (e.g., not found in the hierarchy).
  • the report may be provided on a window that is displayed as a result of the user indicating (e.g., by selecting some type of selection mechanism such as a button) that he or she would like to view the report. Therefore, the hierarchy module may determine whether input has been received indicating the user would like to view the report in Operation 5835. If so, then the hierarchy module provides the report for display in Operation 5840.
  • Such a report may be useful in identifying content in the technical documentation (e.g., illustrated parts data and/or breakdown) for the item that is deficient with respect to certain components.
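  • Generating the report described above amounts to a set difference between the components illustrated in the media content and those listed in the hierarchy, as in the brief sketch below; the component identifiers are illustrative assumptions.

```python
# Sketch of the report described above: components that appear in the media content
# (e.g., a 3D graphic) but are not listed in the hierarchy. Identifiers are assumptions.

def unlisted_components(shown_in_media, listed_in_hierarchy):
    """Return components illustrated in the media content but missing from the hierarchy."""
    return sorted(set(shown_in_media) - set(listed_in_hierarchy))

shown = ["pump-1", "valve-3", "bracket-9", "gasket-2"]
hierarchy = ["pump-1", "valve-3"]
print(unlisted_components(shown, hierarchy))   # ['bracket-9', 'gasket-2']
```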
  • the hierarchy module determines whether input has been received indicating the user would like to exit in Operation 5845 . If so, then the hierarchy module causes the window to close and exits. Otherwise, the hierarchy module continues to monitor the user's interactions.
  • FIG. 59A provides an example of a window in which a hierarchy of components 5900 is displayed in a first view pane for the components shown in media content, in this instance a 3D graphic 5910 , displayed in a second view pane.
  • each of the components listed in the hierarchy is provided with a checkbox control 5915 to allow the user to identify which of the components to display in the media content and which of the components not to display in the media content.
  • FIG. 59B provides an example of a report 5920 of components illustrated in the media content but not listed in the hierarchy.
  • a communication session may be a voice call, a video call, a chat session, a text session, and/or the like.
  • Such functionality allows users to converse and interact with each other while in a secure environment facilitated by the IETM in many instances.
  • a user may be performing a maintenance task and may have a question as to a particular step in the task.
  • the communication session functionality provided in various embodiments enables the user to conduct a communication session (e.g., a voice call) and converse with another user who is actively signed into the IETM to discuss the step of the maintenance task. Because both users are signed into the IETM and the IETM is facilitating the session, the conversation between the users is secure.
  • FIG. 60 is a flow diagram showing a communication session module for performing such functionality according to various embodiments of the disclosure.
  • the communications session module may be invoked as a result of a user who is signed into the IETM indicating he or she would like to initiate a communication session with another user who is actively signed into the IETM.
  • the process flow 6000 begins with the communication session module identifying the users who are actively signed into the IETM in Operation 6010 .
  • the users who are identified as active may be based at least in part on the credentials of the user who wants to initiate the communication session. For example, the user may be signed into a particular object (e.g., a particular aircraft) of an item (e.g., a type of aircraft) and therefore, the active users who are identified may be those users who are currently signed into the same object (e.g., the same aircraft). Further, in particular embodiments, other users (e.g., special users) may be identified as well such as the user's supervisor, quality assurance, engineering, and/or the like.
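  • The identification of available users described above (users actively signed into the same object, plus special users such as a supervisor or quality assurance) could be filtered roughly as in the following Python sketch; the data model and role names are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical record of an actively signed-in user. Field and role names are assumptions.
@dataclass
class ActiveUser:
    user_id: str
    signed_in_object: str      # e.g., tail number of a particular aircraft
    role: str = "standard"     # "standard" or a special role such as "supervisor"

def users_available_for_session(initiator_object, active_users, special_roles):
    """Return active users signed into the same object, plus users in special roles."""
    return [u for u in active_users
            if u.signed_in_object == initiator_object or u.role in special_roles]

active = [ActiveUser("u1", "T123"), ActiveUser("u2", "T456"),
          ActiveUser("u3", "T123"), ActiveUser("qa1", "T456", role="quality_assurance")]
print([u.user_id for u in users_available_for_session(
    "T123", active, {"supervisor", "quality_assurance"})])   # ['u1', 'u3', 'qa1']
```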
  • the communication session module provides the active users (e.g., identifiers for the active users) for display on a window in Operation 6015 .
  • the user may select one or more of the active users and/or special users on the window to initiate a communication session with.
  • the window may provide some type of selection mechanism for each user such as a button so that the user is selectable. Therefore, the communication session module determines whether input has been received indicating the user has selected a particular user in Operation 6020 .
  • the user may identify the type of session he or she would like to initiate to the user (e.g., voice call). Therefore, the communication session module may determine the type of communication session from the input as well. If the user has identified a particular user (and the type of session), then the communication session module initiates the communication session to the particular user in Operation 6025 .
  • the communication session is conducted over an IP-based network that the user's computing entity 110 is in communication with to ensure the session is conducted over a secure network.
  • the particular user may accept the communication session within the IETM.
  • the particular user may receive some type of notification in the IETM about the incoming communication session and may be provided with some type of selection mechanism to accept the session.
  • the communication session module determines whether input has been received indicating the communication session has been accepted in Operation 6030 . If the session has not been timely accepted, then the user who initiated the communication session may decide to drop the session. Therefore, if the session has not been accepted, then the communication session module determines whether input has been received indicating the user who initiated the session has decided to drop the session in Operation 6035 . If not, then the communication session module maintains the session and waits for an acceptance.
  • the communication session module determines whether input has been received indicating the user may want to initiate a session with an additional user in Operation 6040 . In other words, the communication session module determines whether the user may want to conduct a conference session involving multiple users. If so, then the communication session module returns to Operation 6015 and provides the available users so that the user can select another user to include in the session. Accordingly, the communication session module performs the same operations to initiate a communication session to the newly selected user and bridges the session onto the session with the first selected user when accepted.
  • the communication session module facilitates the communication session within the IETM environment and provides a session window for display in Operation 6045.
  • the session window may provide video if a communication session supporting such is being conducted between the users.
  • the session window may provide the user with functionality such as ability to share the user's screen with the other users, enable a webcam, mute and/or unmute a microphone, end the session, record, and/or the like. Therefore, the user may then converse and interact with the other users on the communication session via the session window.
  • the communication session module may determine whether input has been received indicating the user has selected any of the provided functionality. For instance, the communication session module may determine whether the user has decided to share his or her computing entity's screen display in Operation 6050 . If so, then the communication session module shares the user's screen with the other users in Operation 6055 . Accordingly, the communication session module may determine whether the user wants to use other functionality that is available and if so, invokes such functionality.
  • the communication session module determines whether input has been received indicating the user wants to end the communication session (e.g., hang up the call) in Operation 6060 . If so, then the communication session module ends the communication session in Operation 6065 . The communication session module then determines whether input has been received indicating the user wants to close the communication session functionality in Operation 6070 . If so, then the communication session module causes the session window to close and exits. It is noted that in some embodiments upon completion of the communication session, the communication session module may save a record of the session in a log within the IETM for reporting and/or tracking purposes.
  • FIG. 61A provides an example of a window that provides a selection mechanism (e.g., a button) 6100 to enable a user to access the communication session functionality according to various embodiments.
  • FIG. 61B provides an example of a window 6110 according to various embodiments that is opened as a result of the user selecting the mechanism 6100 .
  • the window 6110 provides a list of active users 6115 and a list of special users 6120 along with a selection mechanism to allow the user to initiate a communication session (e.g., “call”) with one of the active users 6115 and/or special users 6120 .
  • the selection mechanisms for the special users 6120 are unavailable, indicating that the user who is initiating the session does not have the credentials to initiate a session with any of the special users and/or that the special users are not actively signed into the IETM.
  • FIG. 61C provides an example of a session window 6125 that is displayed once a communication session is activated according to various embodiments.
  • the session window 6125 includes different functionality the user may invoke while engaged in the communication session.
  • the session window 6125 includes a selection mechanism (e.g., a button) 6130 that the user may select to share his or her screen with the other users on the session.
  • the session window 6125 provides a selection mechanism (e.g., a button) 6135 to allow the user to end the communication session.
  • FIG. 61D shows the session window 6125 once the user has shared his or her screen 6140 with the other users on the session.
  • FIG. 62 is a flow diagram showing a virtual caution panel module for performing such functionality according to various embodiments of the disclosure.
  • the virtual caution panel module may be invoked as a result of a user who is signed into the IETM opening the virtual caution panel displayed on a window.
  • Caution panels are often used to warn and/or caution personnel of a problem with the item.
  • personnel who are working on and/or using the item will reference some manual, often in paper form, that will provide instructions on how to handle the warning and/or caution.
  • time may be of the essence when addressing such warnings and/or cautions.
  • a caution panel is often provided in the cockpit of the aircraft to provide the pilot with warnings and/or cautions.
  • the pilot may have a limited amount of time to address the problem before it becomes too late to fix while in flight. This can lead to loss of the aircraft and/or life.
  • many problems can lead to multiple warnings and/or cautions being displayed. Therefore, the pilot may not only have to deal with resolving a warning and/or caution but a combination of warnings and/or cautions.
  • various embodiments provide a virtual caution panel that can be used by a user to assist the user in addressing warnings and/or cautions provided by such a caution panel found on an item. These embodiments can enable a user in addressing a warning and/or caution (or combination thereof) in a timely manner that is not typically possible using a conventional manual, even when the manual may be in a digital format.
  • the virtual caution panel mimics the actual caution panel found on the item with the same warnings and/or cautions.
  • the caution panel may include a plurality of indicators (e.g., warning lights) for the different warnings and/or cautions that light up. These indicators may provide different levels of warnings and/or cautions, such as different color lights, to represent degrees of urgency. Yellow may represent a caution with respect to the corresponding component, condition, process, and/or the like for an indicator and red may represent a warning that requires more urgency in addressing. Therefore, the user mimics the warnings and/or cautions shown on the actual panel by selecting the same warnings and/or cautions displayed on the virtual panel.
  • the process flow 6200 begins with the virtual caution panel module providing the virtual caution panel for display on a window in Operation 6210 .
  • the virtual caution panel module determines whether input has been received indicating the user has selected any of the warnings and/or cautions displayed on the virtual panel in Operation 6215 .
  • the virtual caution panel may be configured to allow the user to select different levels (e.g., set different colors) for the individual indicators displayed on the panel as well as select combinations of warnings and/or cautions.
  • the virtual caution panel module retrieves a corrective action (e.g., steps to perform to address the one or more cautions and/or warnings) in Operation 6220 . Therefore, in various embodiments, the corrective actions to address the different warnings and/or cautions may be stored within the IETM and retrieved by the virtual caution panel module based at least in part on the warnings and/or cautions (and/or combination thereof) identified by the user on the panel. Such retrieval may be much quicker than if the user were to search for the corrective action him or herself in a physical and/or digital manual. Therefore, embodiments of the virtual caution panel can be very beneficial in addressing warnings and/or cautions in a timely manner when required.
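  • The retrieval of a corrective action for a selected combination of warnings and/or cautions could, for example, be keyed on the exact set of indicators and severity levels chosen on the virtual panel, as in the sketch below; the indicator names, severities, and stored actions are illustrative assumptions and not actual procedures.

```python
# Sketch of retrieving a corrective action for a selected combination of warnings
# and/or cautions. Keying the lookup on a frozenset of (indicator, severity) pairs,
# and the example indicators and actions, are illustrative assumptions.

CORRECTIVE_ACTIONS = {
    frozenset({("HYD PRESS", "caution")}):
        "Reduce hydraulic load; monitor pressure; land as soon as practical.",
    frozenset({("HYD PRESS", "warning"), ("GEN FAIL", "caution")}):
        "Shed non-essential electrical load; follow hydraulic failure checklist.",
}

def corrective_action(selected_indicators):
    """Look up the corrective action for the exact combination selected on the panel."""
    return CORRECTIVE_ACTIONS.get(frozenset(selected_indicators),
                                  "No stored corrective action for this combination.")

print(corrective_action({("HYD PRESS", "warning"), ("GEN FAIL", "caution")}))
```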
  • the module provides the corrective action for display to the user in Operation 6225 .
  • the corrective action may be displayed on the same window as the virtual caution panel or displayed on a different window.
  • the virtual caution panel module determines whether input has been received indicating the user wishes to exit the virtual caution panel in Operation 6230 . If so, then the virtual caution panel module causes the virtual caution panel to close and exits. Otherwise, the virtual caution panel module continues to provide the virtual caution panel and corrective action if appropriate.
  • FIG. 63A provides an example of a virtual caution panel 6300 according to various embodiments.
  • an indicator 6310 has been selected on the virtual caution panel 6300 by the user to mimic a caution being displayed by the actual caution panel found on the item.
  • a corrective action 6315 to address the caution may then be provided as shown in FIG. 63B .
  • entities have various items (e.g., objects for items) such as vehicles that periodically need to be loaded with different articles.
  • Such vehicles may be used for air, land, and/or water and may include, for example, aircraft, boats, ships, armored fighting vehicles, reconnaissance vehicles, light utility vehicles, engineering vehicles, self-propelled weapons and defense systems, ambulances, and/or the like. Accordingly, when such vehicles are deployed for a mission, the vehicles are required to be carrying certain equipment expected to be used for the mission.
  • aircraft such as fighters and bombers and armored fighting vehicles such as tanks and troop carriers are often required to be carrying certain munitions expected to be used for combat.
  • the loading of these munitions is typically performed by military personnel who receive a list of munitions and then are required to physically load the munitions onto and/or into the vehicle.
  • Many vehicles have multiple positions on the vehicle for holding such munitions.
  • many aircraft have several positions (e.g., stations) on the body of the aircraft for holding munitions, whether they be types of weapons and/or ammunitions such as missiles, bombs, and/or the like. These positions are often configured so that only certain munitions can be placed at certain positions.
  • munitions may be required to be loaded/installed on the vehicle using a number of operations (e.g., steps) and in a certain sequence. Therefore, personnel who are responsible for loading the munitions are regularly required to initially put together a workflow that includes a number of different procedures in a sequential order that are to be performed to load the munitions onto the vehicle. The generation of this workflow can oftentimes be very time consuming in identifying which munitions are to be loaded at which positions, identifying the corresponding procedures for loading the munitions, and then generating the workflow of the procedures in the correct order needed to load the munitions.
  • various embodiments provide functionality (e.g., article loading wizard) that assists personnel in loading different articles onto and/or into an object of an item.
  • the example of loading munitions onto an aircraft is used in discussing this functionality.
  • the functionality can be used in loading different articles for a number of different types of items.
  • other articles may be loaded other than equipment such as cargo, personnel, perishable goods, livestock, medications, and/or the like.
  • other items besides vehicles may be loaded such as warehouses, trailers, medical facilities, and/or the like.
  • FIG. 64 is a flow diagram showing an article loading module for performing such functionality according to various embodiments of the disclosure.
  • a user may be signed into the IETM for a particular object for an item.
  • the user may be signed into the IETM for a particular aircraft (e.g., fighter T123) found in a military's fleet of aircraft (fleet of jet fighters).
  • the user may be tasked with loading munitions onto the aircraft and therefore has also signed into the IETM identifying a specific job to be performed.
  • the user may select a mechanism to invoke the article loading module.
  • the process flow 6400 begins with the article loading module reading the item the user is currently signed into the IETM to view in Operation 6410 .
  • the item is a type of jet fighter found in the military's fleet of aircraft.
  • the article loading module provides media content (e.g., a digital model) of the item for display on a window in Operation 6415 .
  • the article loading module determines whether the user has selected a position in Operation 6420 . If so, then the article loading module retrieves the articles that can be loaded at the position and provides the articles for display on the window in Operations 6425 and 6430 .
  • the articles may be displayed as a list in a dropdown menu control that is configured to allow the user to select one or more of the articles for loading at the particular position. Note that in particular embodiments, only those articles that can be loaded at the particular position are retrieved and displayed to the user. Such a configuration can ensure that an article is not loaded by personnel at an inappropriate position.
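  • Restricting the displayed articles to those that can be loaded at the selected position, as described above, can be illustrated with the following sketch; the station names, article names, and compatibility table are assumptions for illustration only.

```python
# Sketch of offering only the articles that can be loaded at a selected position.
# The station/article names and compatibility table are illustrative assumptions.

POSITION_COMPATIBILITY = {
    "station-1": {"AIM-9", "fuel-tank"},
    "station-2": {"AIM-9", "AIM-120", "GBU-12"},
}

def articles_for_position(position, catalog):
    """Return only the articles allowed at the selected position (others are hidden)."""
    allowed = POSITION_COMPATIBILITY.get(position, set())
    return [article for article in catalog if article in allowed]

catalog = ["AIM-9", "AIM-120", "GBU-12", "fuel-tank"]
print(articles_for_position("station-1", catalog))   # ['AIM-9', 'fuel-tank']
```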
  • the article loading module determines whether input has been received indicating the user has selected one or more articles for the position in Operation 6435 . If the user has selected one or more articles, then in particular embodiments, the article loading module provides media content (e.g., illustration(s) and/or image(s)) of the selected articles for display for the user to view in Operation 6440 . Such an operation may be carried out in these embodiments so that the user can see what he or she has selected to load at the position. This may help the user with physically selecting and loading the correct articles in the field.
  • the media content may be displayed on a separate window that is superimposed over a portion of the window displaying the media content (e.g., the digital model) of the item or the media content may be displayed on one or more view panes along with the media content of the item on a separate view pane.
  • the article loading module records the article(s) that are to be loaded at the position in Operation 6445 .
  • the article loading module determines whether input has been received indicating the user's desire to generate a workflow for loading the object for the item in Operation 6450 .
  • the user may select some type of mechanism (e.g., a button) displayed on the window after the user has identified the article(s) to be loaded at each of the positions for the item. If the user has indicated to generate the workflow, then the article loading module generates the workflow for loading the selected article(s) onto and/or into the object for the item in Operation 6455.
  • the workflow may include one or more procedures to be performed by personnel in loading the article(s) onto and/or into the object for the item.
  • the workflow may identify the sequential order in which the procedures are to be performed. For instance, returning to the example, the loading of munitions onto the aircraft may be required to be carried out in a particular order to ensure the safety of the military personnel who are physically loading the munitions onto the aircraft. For example, certain ammunition may need to be loaded and tested before loading another ammunition to ensure the ammunition is properly loaded and stabilized so that it will not cause other ammunition loaded onto the aircraft to go off.
  • the article loading module is configured to dynamically generate the workflow based at least in part on the articles selected by the user to be loaded at each position. In some instances, a significant number of combinations of articles can be potentially loaded at the different positions.
  • an advantage provided by the article loading module in some embodiments is the ability of the module to dynamically generate a workflow based at least in part on a significant number of potential combinations that places the loading of the articles in a correct sequence to ensure they are loaded safely.
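  • The dynamic generation of a correctly sequenced workflow described above could be modeled as an ordering problem in which each procedure declares the procedures that must precede it, as in the Python sketch below. The specific dependency rule used here (a test procedure follows its corresponding load procedure) is an assumption for illustration; in practice the sequencing constraints would come from the technical documentation.

```python
# Sketch of dynamically generating a loading workflow from the selected articles.
# Each procedure declares its prerequisites and a topological order is produced.
# The procedures and the dependency rule are illustrative assumptions.

from graphlib import TopologicalSorter

def generate_workflow(selected):
    """Build an ordered list of procedures for the selected (position, article) pairs."""
    graph = {}   # procedure -> set of procedures that must come first
    for position, article in selected:
        load = f"load {article} at {position}"
        test = f"test {article} at {position}"
        graph.setdefault(load, set())
        graph[test] = {load}                      # a test always follows its load step
    return list(TopologicalSorter(graph).static_order())

workflow = generate_workflow([("station-1", "AIM-9"), ("station-2", "GBU-12")])
for step in workflow:
    print(step)
```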
  • the module provides the workflow for display in Operation 6460 .
  • the article loading module may provide a digital workflow to be displayed in the form of a table of contents that lists the different procedures that make up the workflow in the order in which they are to be performed.
  • each of the different procedures may be selectable. Therefore, the user may then select the procedures, one-by-one, in the order in which they are found in the table of contents to view the operations that need to be performed for the procedures in loading the articles onto and/or into the object for the item.
  • various functionality may be implemented in embodiments to ensure the procedures are performed in the correct sequence as displayed in the digital workflow.
  • the article loading module may determine whether input has been received indicating the user would like to exit in Operation 6465 .
  • the user may be generating a workflow for loading the object of the item at a later time and therefore, the user may not be ready to start the actual loading of the object.
  • the article loading module may be configured to save the workflow so that it may be used at that later time.
  • FIG. 65A provides an example of a window displaying a digital model of an aircraft 6500 to be loaded with articles according to various embodiments.
  • the digital model of the aircraft 6500 displays the various positions (e.g., stations) at which articles can be loaded. Accordingly, the various stations are selectable (e.g., displayed as hyperlinks) so that the user may select each station, such as station 1 6510 , to be provided a list (e.g., a dropdown menu control) of the different articles that may be loaded at the station.
  • a digital workflow in the form of a table of contents 6515 may be generated with the different procedures to be performed in loading the aircraft in the order in which they are to be performed as shown in FIG. 65B .
  • a discussion is now provided with respect to using the digital workflow at a time when the article(s) are actually being loaded onto and/or into the object for the item.
  • FIG. 66 is a flow diagram showing a loading workflow module for performing such functionality according to various embodiments of the disclosure.
  • a digital workflow may be displayed on a window in the form of a table of contents listing the procedures to be performed in loading the articles onto and/or into the object for the item.
  • the procedures are provided in the table of contents in particular embodiments in the order in which they are to be performed in loading the object. Accordingly, each of the procedures found in the table of contents may be selectable so that the user selects the procedures one at a time in the sequence provided to view the operations to perform for the selected procedure to load the articles onto and/or into the object for the item.
  • the process flow 6600 begins with the loading workflow module determining whether input has been received indicating the user has selected a procedure in the table of contents in Operation 6610. If so, then the loading workflow module determines whether the selected procedure is the next procedure to be performed for the workflow in Operation 6615. Therefore, in particular embodiments, the loading workflow module is configured to determine whether the procedure(s) found in the workflow listed before the selected procedure have been performed. As further discussed below, the loading workflow module marks the procedures that have been completed in some embodiments. Therefore, the loading workflow module is able to determine whether each of the procedures found in the workflow before the currently selected procedure has been completed.
  • if the selected procedure is not the next procedure to be performed, the loading workflow module provides an error to the user in Operation 6620.
  • the loading workflow module may provide an error message for displaying on a window informing the user that the selected procedure is not the next procedure to be performed in the workflow.
  • the loading workflow module may be configured in some embodiments so that the operations for the selected procedure cannot be displayed.
  • the loading workflow module provides the procedure for display to the user in Operation 6625 .
  • the loading workflow module may retrieve the data for the procedure from the technical documentation for the item and provide the data for the procedure to display on a new window for the user.
  • the procedure may be displayed on a pane provided on the window with the workflow (with the workflow displayed on a second pane) or the procedure may be provided on a separate window from the window with the workflow.
  • the user is then able to read the instructions (e.g., different operations) found in the procedure and perform the instructions accordingly.
  • the different procedures found in the workflow may involve procedures that provide instructions for loading a particular munition at a particular station of the aircraft, as well as procedures for testing a munition once it has been loaded at a particular station. Therefore, the instructions for the different procedures may provide a sequence of operations (e.g., steps) to be performed by the military personnel who are loading munitions onto the jet fighter.
  • the loading workflow module may determine whether input has been received that the end of the procedure currently being displayed has been reached in Operation 6630 , indicating the user has completed performing the procedure.
  • the loading workflow module may be configured to determine that the end of the procedure has been reached by receiving input indicating the user has performed some action such as, for example, selecting a mechanism such as a button displayed on the window and/or scrolling to the bottom of the procedure displayed on the window.
  • the loading workflow module determines whether each of the operations found in the procedure has been acknowledged in Operation 6635 .
  • each operation (e.g., step) in the procedure may be provided along with a selection mechanism that the user selects to acknowledge the operation has been performed. Accordingly, the loading workflow module may determine whether input has been received indicating the selection mechanism for each operation has been selected by the user.
  • the loading workflow module may be configured to also determine whether the user has acknowledged each of the previous operations in the procedure whenever the user acknowledges a particular operation in the procedure to ensure the operations are performed in order.
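  • The acknowledgment checks described above (an operation may be acknowledged only after all earlier operations, and a procedure is complete only when every operation is acknowledged) reduce to the following simple checks, shown here as an illustrative sketch.

```python
# Sketch of the acknowledgment checks described above: an operation may be
# acknowledged only if all earlier operations are acknowledged, and the procedure
# is marked completed only when every operation is acknowledged. Names are assumptions.

def can_acknowledge(step_index, acknowledged):
    """Allow acknowledging a step only if all earlier steps are acknowledged."""
    return all(acknowledged[:step_index])

def procedure_complete(acknowledged):
    return all(acknowledged)

acks = [True, True, False, False]          # steps 1 and 2 acknowledged so far
print(can_acknowledge(2, acks))            # True  -> step 3 may be acknowledged next
print(can_acknowledge(3, acks))            # False -> step 4 is out of order
print(procedure_complete(acks))            # False -> cannot mark procedure completed
```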
  • the loading workflow module causes display of an error to the user in Operation 6640.
  • the loading workflow module may provide an error message to display informing the user that all of the operations in the procedure have not been acknowledged as being performed.
  • the loading workflow module marks the procedure as completed in Operation 6645 .
  • the loading workflow module returns to the window displaying the table of contents for the workflow if need be in Operation 6650 . Accordingly, as a result of the user completing the procedure, the loading workflow module may cause the procedure to be displayed as being completed in the digital workflow (e.g., the table of contents).
  • the procedure may now be displayed along with some type of indicator (e.g., in a particular font, in a particular color, with a symbol such as a plus sign, as no longer selectable, and/or the like) to demonstrate the procedure has been completed.
  • The user may then select the next procedure found in the workflow.
  • the user may decide to exit the window displaying the table of contents and select a mechanism (e.g., a button) displayed on the window to do so.
  • the loading workflow module may determine input has been received indicating the user would like to exit in Operation 6655 .
  • the loading workflow module determines whether the workflow for loading the articles onto and/or into the object for the item has been completed in Operation 6660 . That is to say, the loading workflow module determines whether each of the procedures found in the workflow has been completed.
  • the loading workflow module in particular embodiments provides an error (e.g., an error message for displaying on a window) to the user indicating the workflow has not been completed in Operation 6665 .
  • the loading workflow module may then determine whether input has been received indicating the user still wishes to exit the window displaying the digital workflow in Operation 6670 .
  • the personnel who are loading the munitions onto the jet fighter may be taking a lunch break. Therefore, the user may wish to exit the window for security reasons while away from the loading area and eating lunch. He or she then plans to resume with the workflow once he or she has returned from lunch.
  • the loading workflow module in particular embodiments records one or more images of the object in Operation 6675 to document the progress of loading the articles that has been completed to that point.
  • imaging devices may be installed at different locations in the loading area to allow images to be taken of the different loading stations.
  • the loading workflow module records the progress of the workflow in a log in Operation 6680 . Therefore, in the example, the user can retrieve the incomplete workflow upon returning from lunch and continue with the remainder of the workflow for loading the munitions onto the jet fighters.
  • the loading workflow module again records image(s) of the object to document the loading of the articles and records the completion of the workflow in the log.
  • Recordation of the images and progress of the workflows in various embodiments can allow for tracking of the workflows being performed, as well as allow for quality control measures to be put into place to evaluate different personnel on performing loading tasks. For example, recordation of the images of the jet fighter loaded with the required munitions may allow the pilot to view the images prior to takeoff to ensure the munitions have been properly loaded onto the aircraft. This can help not only to ensure the success of the mission but also to ensure the safety of the pilot and any other flight crew members on the aircraft.
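  • The following is a minimal, illustrative sketch (in Python) of the ordered-acknowledgment check, the error case, and the completion logging described above for the loading workflow module. The class and function names and the in-memory log are assumptions used only for illustration; they are not the implementation of the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Operation:
    description: str
    acknowledged: bool = False


@dataclass
class Procedure:
    title: str
    operations: list = field(default_factory=list)
    completed: bool = False


def acknowledge_operation(procedure: Procedure, index: int) -> None:
    """Acknowledge an operation only if every earlier operation has been acknowledged,
    mirroring the in-order check described for the loading workflow module."""
    if any(not op.acknowledged for op in procedure.operations[:index]):
        raise ValueError("Previous operations must be acknowledged first.")
    procedure.operations[index].acknowledged = True


def complete_procedure(procedure: Procedure, log: list) -> None:
    """Mark the procedure completed (Operation 6645) once every operation has been
    acknowledged (Operation 6635); otherwise signal the error case (Operation 6640)."""
    if not all(op.acknowledged for op in procedure.operations):
        raise ValueError("Not all operations in the procedure have been acknowledged.")
    procedure.completed = True
    # Record progress in a log, analogous to Operation 6680.
    log.append({"procedure": procedure.title,
                "completed_at": datetime.now(timezone.utc).isoformat()})
```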
  • In some instances, network connectivity (e.g., a wireless network) may be unavailable.
  • maintenance personnel may be working out in the field performing maintenance on an object (e.g., an aircraft) where network connectivity is unavailable.
  • the maintenance personnel may be making use of the IETM to view one or more maintenance procedures they are to perform on the object.
  • one of the maintenance personnel may want to perform some type of functionality provided by embodiments of the IETM that may require connectivity.
  • the maintenance personnel may want to order a part to replace a part taken from inventory used in performing the maintenance on the object.
  • various embodiments can facilitate the personnel's ordering of the part by generating a graphical code that can then be scanned by the personnel using a remote device such as his or her mobile device with some type of connectivity such as cellular.
  • a remote device is a device that is not in communication with the user's computing entity 110 being used to access the IETM.
  • the remote device may be the user's mobile device (e.g., smartphone), tablet, and/or the like with connectivity to a network such as a cellular network, wireless network, and/or the like.
  • the user who is signed into the IETM may have a software application (e.g., an app) installed on his or her remote device that is required to be used to enable the functionality to be performed in the IETM.
  • This software application may be limited in its distribution so that it is only installed on devices belonging to valid users.
  • FIG. 67 is a flow diagram showing a remote device integration module for performing such functionality according to various embodiments of the disclosure.
  • the user may be signed into the IETM and decides to perform some functionality within the IETM that requires connectivity such as, for example, submitting a form filled out while signed into the IETM to a backend system.
  • For instance, the user may select a selection mechanism (e.g., a button) displayed on the window, and the remote device integration module is invoked in various embodiments.
  • the process flow 6700 begins with the remote device integration module generating and providing a security graphical code for displaying in Operations 6710 and 6715 .
  • the security graphical code may be a barcode, a quick response code, a one-dimensional code, a universal product code, a data matrix code, and/or the like.
  • the remote device integration module may generate the security graphical code to contain the user's credentials used in signing into the IETM. Accordingly, the security graphical code may be displayed on a window so that the user can scan the code using some type of code reader installed on the user's mobile device.
  • the code reader may be any one of many commercially available graphical code readers and the reader may not necessarily include any type of security features.
  • the software application may be configured so that the application can be used initially to scan the security graphical code.
  • other functionality may not be available within the application.
  • Such a configuration can provide security features within the software application with respect to allowing the user to perform certain functionality using the software application while not allowing the user to perform other functionality.
  • the software application may be configured to require the user to provide credentials (e.g., a username and/or password) to open the application. Therefore, in particular embodiments, various functionality provided by the software application residing on the user's remote device may become available as a result of the user scanning the security graphical code displayed in the window.
  • the remote device integration module determines whether input has been received indicating to generate a graphical code for the form the user wishes to submit in Operation 6720 . For instance, the remote device integration module may determine that the security graphical code has been scanned by the user as a result of the user acknowledging he or she has scanned the code. For example, the window displaying the security graphical code may provide a selection mechanism such as a button that the user can select to close the window with the code. Accordingly, the remote device integration module may receive input indicating the window with the security graphical code has been closed and as a result, generate and provide the graphical code for the form for display in Operations 6725 and 6730 .
  • the remote device integration module may provide the graphical code for the form to display on a window so that the user can now use his or her mobile device to scan the code.
  • the graphical code may be a quick response code, a one-dimensional graphical code, a universal product code, a data matrix graphical code, and/or the like.
  • the graphical code may include information provided by the user on the form such as the information required to order the part.
  • the graphical code may include information such as the user's credentials, an identifier for the object and/or item, an identifier for a location for the user, and/or the like.
  • the graphical code may be configured so that it can only be read by the software application residing on the user's remote device.
  • the remote device integration module determines whether to exit in Operation 6735 .
  • the user may have scanned the graphical code for the form and then selected a mechanism such as a button provided on the window displaying the code to close the window.
  • the remote device integration module may receive input indicating the window has been closed. If that is the case, then the remote device integration module exits.
  • the remote device integration module may be invoked at different times other than when specific functionality is to be carried out that requires connectivity.
  • the user may invoke the remote device integration module upon signing into the IETM to establish that the software application residing on the user's remote device can then be used in facilitating any functionality requiring connectivity while the user is signed into the IETM. Therefore, in these particular embodiments, the user may not be required to scan a security graphical code each time he or she wishes to use functionality provided by the IETM that requires connectivity.
  • the process flow 6700 shown in FIG. 67 may only involve providing the security graphical code without necessarily providing a graphical code to facilitate other functionality.
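  • As a rough illustration of the graphical codes described above, the sketch below (Python) encodes a security payload carrying the user's credentials and a form payload carrying the order details as QR images. The payload fields, the file names, and the use of the third-party qrcode package are assumptions; the disclosure does not prescribe a particular encoding or library.

```python
import json

import qrcode  # third-party package (pip install qrcode[pil]); an assumption, not required by the disclosure


def make_security_code(username: str, session_token: str, path: str = "security_code.png") -> None:
    # Security graphical code carrying the user's sign-in credentials (Operations 6710 and 6715).
    payload = json.dumps({"type": "security", "user": username, "token": session_token})
    qrcode.make(payload).save(path)


def make_form_code(username: str, form_fields: dict, path: str = "form_code.png") -> None:
    # Graphical code carrying the form data, e.g., a part order (Operations 6725 and 6730).
    payload = json.dumps({"type": "form", "user": username, "fields": form_fields})
    qrcode.make(payload).save(path)


if __name__ == "__main__":
    make_security_code("tech01", "abc123")
    make_form_code("tech01", {"part_number": "PN-001", "quantity": 2})
```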
  • Virtual private networks (VPNs) allow users to operate over networks that are not necessarily secure (e.g., public networks) as though they are connected to a secure private network. Accordingly, applications running over a VPN can often benefit from the functionality, security, and management provided in a private network. Therefore, various embodiments provide a virtual network in which users can operate while signed into the IETM.
  • FIG. 68 is a flow diagram showing a virtual network module for performing such functionality according to various embodiments of the disclosure.
  • a user may have already signed into the IETM and decides to join a virtual network provided through the IETM or the user may join a virtual network at the time when he or she signs into the IETM.
  • the user may have a software application installed on a remote device such as his or her mobile device that provides a graphical code for the user to scan using his or her computing entity 110 (e.g., a webcam on his or her computing entity 110 ) being employed to view the IETM.
  • the graphical code may be provided in various forms such as a barcode, a quick response (QR) code, a one-dimensional code, a universal product code, a data matrix code, and/or the like.
  • a graphical code may be provided on an object that is scanned by the user using his or her computing entity 110 .
  • the user may be maintenance personnel who is working on a particular aircraft found in an airline's fleet and the graphical code may be physically displayed on a component of the aircraft such as its landing gear.
  • the user invokes the virtual network module to scan the graphical code and the process flow 6800 begins with the virtual network module scanning the graphical code in Operation 6810 .
  • the virtual network module determines whether the graphical code that has been scanned is valid in Operation 6815 .
  • the virtual network module is configured in various embodiments to interrogate the information found in the code to determine whether the code is associated with a valid user and/or object.
  • the graphical code that was scanned may have been provided by a software application installed on the user's mobile device.
  • the user may have signed into the application and generated the code using functionality provided by the application. Therefore, the information provided in the code may identify the user (e.g., provide credentials for the user) and the virtual network module may determine whether the credentials provided for the user in the graphical code are valid.
  • the graphical code that was scanned may have been provided on an object (e.g., aircraft) and the information provided in the code may identify the object. Therefore, the virtual network module may determine whether the object identified in the code is valid (e.g., is scheduled to have maintenance performed on the object).
  • if the virtual network module determines the graphical code is invalid, then the virtual network module causes display of an error message to the user in Operation 6820.
  • the virtual network module may provide an error message via a window informing the user that the graphical code is invalid.
  • the virtual network module determines whether input has been received indicating the user would like to exit or scan another graphical code in Operation 6825 .
  • the window displaying the error message may provide a first selection mechanism (e.g., a first button) to exit and a second selection mechanism (e.g., a second button) to scan another code. If the user indicates he or she would like to scan another code, then the virtual network module returns to Operation 6810 .
  • the virtual network module in particular embodiments may provide one or more objects identifying the various virtual networks available to the user in Operation 6830 .
  • This particular operation may be carried out when the graphical code scanned by the user provides the user's credentials.
  • the virtual network module may identify the objects the user is currently authorized to work on. For instance, the user may be maintenance personnel who is scheduled to perform maintenance on two particular aircraft found in an airline's fleet. Therefore, in this instance, the virtual network module may identify the two aircraft as available to the user.
  • a virtual network is configured for each of the objects so that the user's selection of a particular object identifies which virtual network supported by the IETM the user is to join while signed into the IETM.
  • the selection of an object may also identify an instance for the IETM. That is to say, the selection of the object (and corresponding virtual network) may identify what technical documentation to make available to the user while he or she is signed into the IETM, as well as identify any information found within the IETM for the particular object such as the maintenance jobs to be performed on the object.
  • the virtual network module determines whether input has been received indicating the user has selected a particular object in Operation 6835 . If so, then the virtual network module joins the virtual network for the object in Operation 6840 . Accordingly, if the graphical code scanned by the user includes information that identifies the object, then the virtual network module may automatically join the corresponding virtual network without the user having to select the object. This may also be true if only a single object is associated with the user.
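  • A hedged sketch of the validation and join logic of Operations 6810 through 6840 is shown below in Python. The lookup tables and payload keys are illustrative assumptions; a real implementation would validate credentials and object identifiers against the IETM's backend.

```python
# Illustrative lookup tables (assumptions): known users and the virtual network per object.
VALID_USERS = {"tech01", "tech02"}
OBJECT_NETWORKS = {"aircraft-N123": "vnet-aircraft-N123",
                   "aircraft-N456": "vnet-aircraft-N456"}


def validate_code(payload: dict) -> bool:
    """Operation 6815: the scanned code is valid if it names a known user or a known object."""
    if "user" in payload:
        return payload["user"] in VALID_USERS
    if "object" in payload:
        return payload["object"] in OBJECT_NETWORKS
    return False


def join_network(payload: dict, selected_object=None) -> str:
    """Operations 6830-6840: list/choose an object, then join its virtual network."""
    if not validate_code(payload):
        raise PermissionError("Invalid graphical code.")  # error case, Operation 6820
    obj = payload.get("object") or selected_object
    if obj is None:
        # A user-identifying code with several associated objects would prompt a selection here.
        raise ValueError("No object selected.")
    return OBJECT_NETWORKS[obj]


# Example: a code scanned off the aircraft itself joins its network automatically.
print(join_network({"object": "aircraft-N123"}))  # -> vnet-aircraft-N123
```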
  • the user may then be provided with specific functionality as a result of joining the virtual network.
  • the user may interact directly with other users who are signed into the IETM and are on the same virtual network.
  • specific functionality may be associated with the corresponding object.
  • a lockout program often involves “locking out” certain operations, processes, functions, and/or the like for an object that may be unsafe to perform while certain maintenance is being carried out on the object.
  • the power supply for a particular component may be shut off while maintenance is being performed on the component.
  • Accordingly, some type of warning (e.g., a lockout tag) may need to be communicated to the personnel working on the object, and the virtual network module may invoke lockout functionality for the object in Operation 6845 that broadcasts such warnings to all the users who are on the virtual network for the object.
  • lockout functionality may require the users on the virtual network for the object to acknowledge the warnings, as well as track which users have or have not acknowledged the warnings.
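  • The lockout broadcast and acknowledgment tracking just described can be sketched as follows (Python). The in-memory sets and member names are assumptions used only to show the bookkeeping: every user on the object's virtual network receives the warning, and the module tracks who has and has not acknowledged it.

```python
class LockoutBroadcast:
    """Tracks acknowledgment of a lockout warning by users on an object's virtual network."""

    def __init__(self, warning: str, members: set):
        self.warning = warning
        self.pending = set(members)      # users who have not yet acknowledged the warning
        self.acknowledged = set()        # users who have acknowledged the warning

    def acknowledge(self, user: str) -> None:
        if user in self.pending:
            self.pending.remove(user)
            self.acknowledged.add(user)

    def outstanding(self) -> set:
        # Users on the virtual network who still need to acknowledge the warning.
        return set(self.pending)


broadcast = LockoutBroadcast("Power supply locked out on hydraulic pump", {"tech01", "tech02"})
broadcast.acknowledge("tech01")
assert broadcast.outstanding() == {"tech02"}
```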
  • the user may be signed into the IETM and using the technical documentation to perform a specific role with respect to the object.
  • the user may be maintenance personnel, engineering personnel, operations personnel, and/or the like.
  • the user may have one or more tasks (e.g., jobs) that the user is expected to perform with respect to the object while signed into the IETM. Therefore, the virtual network module in particular embodiments may identify and/or assign and/or allocate one or more tasks (e.g., jobs) to the user to perform with respect to the object in Operation 6850 .
  • the virtual network may be provided over a variety of different types of networks such as IP-based and/or cellular depending on the embodiment.
  • the virtual network may be facilitated through the software application installed on the user's remote device.
  • the user may sign into the software application and/or the user may scan a graphical code displayed via the IETM or found on an object using the software application to display one or more available virtual networks for objects or to automatically connect to a virtual network for an object through the software application.
  • the software application can identify the user and provide what virtual networks are available to the user.
  • the user can select one of the available virtual networks and connect to the network on his or her mobile device.
  • The same functionality (e.g., object-specific functionality and/or user-specific functionality) may then be available through the software application, which may be configured to perform operations similar to those performed by the virtual network module described above in various embodiments.
  • the technical documentation associated with an item is typically stored and/or provided in accordance with S1000D standards.
  • data modules are normally provided that include header and/or preface data in accordance with S1000D standards.
  • S1000D standards require a document to be broken down into individual data modules that are typically identified via XML and/or SGML tags, labels, and/or metadata and that are organized into a hierarchical XML and/or SGML structure.
  • the XML and/or SGML files and/or data stored therein may be converted to JSON formatted data and/or files. Accordingly, in these embodiments, the content found in the JSON formatted data and/or files provides the technical documentation for the item.
  • instances may occur in which an entity may have documentation in formats that are not in accordance with S1000D standards.
  • many entities have technical manuals, instructions, orders, and/or the like for various items in PDF files and/or SGML files that do not adhere to S1000D standards. Therefore, these entities are oftentimes required to use systems, software, applications, and/or the like other than an IETM to view such documentation since most conventional IETMs require the technical documentation to adhere to S1000D standards.
  • This can lead to the entities having to maintain multiple components (e.g., systems, software, applications, and/or the like) to view all of the technical documentation associated with a particular item.
  • users who are viewing/using the documentation are then required to have the multiple components available to them at any given time so that they have access to any of the documentation as needed.
  • various embodiments are configured to allow the import of source data that does not adhere to S1000D standards into the IETM. Accordingly, such embodiments allow users to view technical documentation in the IETM from data sources other than those that adhere to S1000D standards. As a result, users can view and use the complete technical documentation for an item in many instances using a single instrument (the IETM). In addition, these embodiments eliminate the need, in many instances, to convert source data to comply with S1000D standards in order to import the data into the IETM.
  • FIG. 69 is a flow diagram showing an import module for performing such functionality according to various embodiments of the disclosure.
  • the data (e.g., a dataset) to be imported may be provided in different formats and may adhere to different standards.
  • the data may be provided in XML and/or SGML files in accordance with S1000D standards.
  • the data may also be provided in XML, SGML, PDF files and/or the like that are not in accordance with S1000D standards.
  • the data may include a combination of both types of files.
  • the process flow 6900 begins with the import module receiving the data to import in Operation 6910 .
  • the data may be received in any number of different formats.
  • the data may be a dataset for a publication of the technical documentation for an item according to S1000D standards.
  • the data may be one or more files having content (e.g., manual) that make up the technical documentation for the item in a file format such as PDF and/or SGML.
  • the import module determines whether the data is provided in accordance with S1000D standards in Operation 6915. For instance, in particular embodiments, the import module may make such a determination based at least in part on whether the data is provided as XML and/or SGML files that conform to data modules found in a dataset adhering to S1000D standards. If that is the case, then the import module selects one of the data modules in Operation 6920 and converts the data module to JSON format in Operation 6925. The import module may then store the converted data module for use with the IETM. At this point, the import module determines whether the data includes another data module in Operation 6930. If so, then the import module returns to Operation 6920, selects the next data module found in the data, and performs the operations just described for the newly selected data module.
  • the import module selects a file found in the data in Operation 6935 .
  • the file may be provided in any number of different formats such as PDF, SGML, DOC, RTF, TXT, WPS, and/or the like. Therefore, the import module converts the file to JSON format and stores the converted file in Operation 6940 .
  • the import module may be configured to convert the file to JSON format in multiple steps. For example, in particular embodiments, if the original file is in TXT format, then the import module may first convert the file to SGML format and then convert the file to JSON format.
  • the import module determines whether the data includes another file in Operation 6945. If so, then the import module returns to Operation 6935, selects the next file found in the data, and performs the operations just described for the newly selected file. Once the import module has processed all the files found in the data, the import module exits.
  • the data received to be imported into the IETM in some instances may include both content in accordance with S1000D standards (e.g., include data modules) and content not in accordance with S1000D standards (e.g., include files in PDF format). Therefore, in these particular instances, the process flow 6900 may involve looking at individual components of the data to determine how to process each of the individual components.
  • the data from the different sources can be used interchangeably and/or simultaneously in the IETM in various embodiments.
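  • The dispatch at the heart of process flow 6900 can be sketched as below (Python). The converters are placeholders: real S1000D, SGML, and PDF parsing is out of scope here, and the file-naming check for data modules is an assumption for illustration, not part of the disclosure.

```python
import json
from pathlib import Path


def is_s1000d_data_module(path: Path) -> bool:
    # Assumption for illustration: S1000D data modules arrive as XML/SGML files using the
    # DMC-* naming convention; a real check would inspect the module's header/metadata.
    return path.suffix.lower() in {".xml", ".sgml"} and path.stem.upper().startswith("DMC-")


def convert_to_json(path: Path) -> dict:
    if is_s1000d_data_module(path):
        # Operations 6920-6925: convert an S1000D data module to JSON.
        return {"source": path.name, "kind": "s1000d", "content": path.read_text(errors="ignore")}
    if path.suffix.lower() == ".txt":
        # Multi-step conversion mentioned above: TXT -> intermediate SGML -> JSON.
        sgml = "<doc>" + path.read_text(errors="ignore") + "</doc>"
        return {"source": path.name, "kind": "txt-via-sgml", "content": sgml}
    # Operations 6935-6940: other formats (e.g., PDF, DOC) would be text-extracted here.
    return {"source": path.name, "kind": path.suffix.lstrip(".").lower(), "content": ""}


def import_dataset(files: list, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for name in files:  # Operations 6930/6945: loop over the data modules and files in the data
        record = convert_to_json(Path(name))
        (out_dir / (Path(name).stem + ".json")).write_text(json.dumps(record))
```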
  • various embodiments are able to provide the same functionality, security, features, and performance for all of the technical documentation for an item in the IETM regardless of the source of the technical documentation. As a result, functionality that would not normally be available for some technical documentation can now be provided for the documentation in the IETM.
  • a technical manual may be sourced in one or more PDF files. Therefore, a user would typically make use of a PDF reader (e.g., application) to view the technical manual.
  • a conventional PDF reader does not furnish the functionality implemented in various embodiments described herein.
  • a conventional PDF reader does not furnish the preview capabilities described herein provided by various embodiments.
  • the preview capabilities may be implemented for the technical manual in various embodiments. That is to say, links may be provided in the content of the technical documentation originating from the PDF files that can be configured to generate and display previews. Such links cannot normally be placed in PDF files and provided in a PDF reader.
  • a conventional PDF reader typically does not allow a user to search across a set of PDF files with a single query. Therefore, if the technical documentation involves multiple files, then a user who is using a PDF reader is required to open the files one at a time to search for a particular term and/or topic. However, various embodiments allow the user to search the entire library (e.g., multiple PDF files) for the technical documentation in the IETM with a single search.
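  • Because every imported source ends up as JSON, a single query can scan the whole library, as in the hedged sketch below (Python). It assumes the JSON record shape used in the import sketch above; the field names are illustrative only.

```python
import json
from pathlib import Path


def search_library(library_dir: Path, term: str) -> list:
    """Return (source name, snippet) pairs for every imported document containing the term."""
    hits = []
    for doc_path in library_dir.glob("*.json"):
        record = json.loads(doc_path.read_text())
        content = record.get("content", "")
        idx = content.lower().find(term.lower())
        if idx != -1:
            hits.append((record.get("source", doc_path.name), content[idx:idx + 80]))
    return hits


# Example: one search across every imported data module and PDF-derived file.
# print(search_library(Path("ietm_library"), "hydraulic pump"))
```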
  • when importing a data source that is not required to adhere to S1000D standards, the data structure and/or formatting of the source may be maintained. This may be helpful to a user who needs to navigate the technical documentation since the structure and formatting mimic those found in the original data source.
  • Similarly, personnel who maintain the data source (e.g., who maintain the technical manual provided in the PDF file(s)) can continue to work with the structure and formatting of the original source.
  • content may be provided in response to a data request when a data request is received within the IETM. For example, a user may select a component or topic, request a preview, and/or the like while signed into the IETM.
  • the data request may identify particular content that was imported as a data module and/or data file that can be provided in JSON format. Accordingly, in some embodiments, providing the content in JSON format may allow the content to be transmitted and/or processed more quickly than if the content were provided in another file format such as XML, SGML, and/or PDF format.
  • components may identify functional and/or physical structures of an item and may be broken down into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like.
  • a 3D graphic may not only be provided at the part level, but may be provided at other levels found within the structure of the item and therefore, the functionality described herein with respect to 3D graphics may be applicable to these other levels and corresponding components.
  • the same can be said with respect to other functionality described herein involving parts such as generating a preview for a part. Therefore, it should be understood the functionality described herein involving parts is not to be limited to use with just parts and may be used with respect to other components of an item in various embodiments.

Abstract

Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, and computing entities for controlling content found in technical documentation via an IETM viewer. In accordance with one embodiment, a method is provided comprising: providing a window for display via the IETM viewer, the window displaying the content found in the documentation; receiving a first verbal command from a user; responsive to receiving the first verbal command: identifying a focus of a first portion of the content; and causing a first action to be performed with respect to a first user interface control element associated with the first portion; and thereafter: receiving a second verbal command from the user; and responsive to receiving the second verbal command: identifying a focus of a second portion of the content; and causing a second action to be performed with respect to a second user interface control element associated with the second portion.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 17/249,039, filed Feb. 18, 2021, the contents of which are incorporated by reference in their entirety into the present application.
  • TECHNOLOGICAL FIELD
  • Embodiments of the present disclosure generally relate to providing enhanced functionality in an interactive electronic technical manual (IETM). The inventors have developed solutions that increase the efficiency, functionality, speed, capabilities, and user friendliness over conventional IETMs.
  • BACKGROUND
  • IETMs and other technical data generally hold large amounts of information that can include multiple volumes and hundreds or thousands of data modules when in electronic format. When users of IETMs, or other technical data that are provided electronically, need to look for a specific subject, they need to go over a lengthy electronic table of contents, similar to a paper book, but using links, which can include nested subsystems (and sub-subsystems) within systems. This requires the users to know not only the exact nomenclature of the item they seek (many times this is unknown), but also how to navigate through the seemingly endless array of nested data. This results in a lot of time spent by users trying to look in many different places (and sometimes, out of exasperation, just looking from A to Z) to find the information, which results in inefficiency, loss of time, and waste of expensive resources.
  • Furthermore, although many conventional IETMs provide some type of interactive functionality with respect to the technical data that allows users to interactively view the data, such functionality is typically limited in its capabilities and does not address many of the technical issues encountered when providing an electronic interface for a large amount of information, nor does it provide technical improvements with features beyond simply allowing the user to view such information. For example, the technical data may involve information that is highly confidential such as information on military equipment. Many conventional IETMs fail to provide functionality to control secure access to the technical data, as well as to control user functionality within the IETMs in viewing and using the technical data in a secure manner.
  • Thus, a need exists in the industry to address technical problems related to efficiently providing technical data to users in a user-friendly manner. Further, a need exists in the industry to provide technical improvements to allow for enhanced functionality with respect to the technical data. It is with respect to these considerations and others that the disclosure herein is presented.
  • BRIEF SUMMARY
  • In general, embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer. In accordance with one aspect of the present disclosure, a method for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer is provided. In various embodiments, the method comprises: providing a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receiving a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identifying, via one or more processors, a focus of a first portion of the content; and causing, via the one or more processors, a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receiving a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identifying, via the one or more processors, a focus of a second portion of the content; and causing, via the one or more processors, a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • In accordance with another aspect of the present disclosure, an apparatus is provided. In various embodiments, the apparatus comprises at least one processor and at least one memory comprising computer program code. The at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to: provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identify a focus of a first portion of the content; and cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identify a focus of a second portion of the content; and cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • In accordance with yet another aspect of the present disclosure, a non-transitory computer storage medium is provided. In various embodiments, the non-transitory computer storage medium comprises instructions stored thereon. The instructions being configured to cause one or more processors to at least perform operations configured to: provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation; receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity; responsive to receiving the first verbal command: identify a focus of a first portion of the content; and cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content: receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and responsive to receiving the second verbal command: identify a focus of a second portion of the content; and cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
  • In particular embodiments, one or more features of the first verbal command may be processed using a verbal command machine learning model to generate the first action. Likewise, one or more features of the second verbal command may be processed using the verbal command machine learning model to generate the second action. In some embodiments, the verbal command machine learning model may be trained using first training data comprising a first plurality of samples of the user speaking the first verbal command for the first action and second training data comprising a second plurality of samples of the user speaking the second verbal command for the second action. In addition, in some embodiments, the user may identify the first action for the first verbal command and the second action for the second verbal command.
  • In particular embodiments, the focus of the first portion of the content may comprise a selection of the first portion of the content via a selection verbal command received as a result of the user speaking the selection verbal command that is detected by the audio input of the user computing entity. In addition, in particular embodiments, the first action may comprise causing the first user interface control element to at least one of convey input, navigate to a particular section of the first portion of the content, or display other content associated with the first portion of the content. Further, in particular embodiments, the focus of the second portion of the content may result from the first action being performed with respect to the first user interface control element associated with the first portion of the content. For example, the content may comprise a plurality of sequential portions of the content, the second portion of the content may immediately follow the first portion of the content in the plurality of sequential portions of the content, and the first action with respect to the first user interface control element associated with the first portion of the content may comprise setting the first user interface control element associated with the first portion of the content to indicate a completion of the first portion of the content.
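  • As a hedged sketch of the kind of verbal command machine learning model referred to above, the Python example below maps features of a spoken command to an IETM action. It assumes the command has already been transcribed to text and uses a simple scikit-learn pipeline trained on samples paired with user-identified actions; the disclosure does not prescribe a particular model, feature set, or library.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data (illustrative only): samples of the user speaking each command,
# paired with the action the user identified for that command.
samples = ["next step", "go to the next step", "mark this step complete",
           "complete the step", "open the parts list", "show me the parts list"]
actions = ["navigate_next", "navigate_next", "complete_step",
           "complete_step", "show_parts", "show_parts"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(samples, actions)

# At run time, a newly received verbal command is classified into an action, which is
# then applied to the user interface control element currently in focus.
print(model.predict(["go to the next step"])[0])  # expected: navigate_next
```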
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
  • Having thus described the disclosure in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein:
  • FIG. 1 is a diagram of a system architecture that can be used in conjunction with various embodiments of the present disclosure;
  • FIG. 2 is a schematic of a management computing entity that may be used in conjunction with various embodiments of the present disclosure;
  • FIG. 3 is a schematic of a user computing entity that may be used in conjunction with various embodiments of the present disclosure;
  • FIG. 4 is a process flow for signing in a user to an IETM in accordance with various embodiments of the present disclosure;
  • FIGS. 5A and 5B provide examples of a sign-in window that may be used in accordance with various embodiments of the present disclosure;
  • FIGS. 5C and 5D provide examples of user reports that may be used in accordance with various embodiments of the present disclosure;
  • FIG. 6 is a process flow for viewing and interacting with a table of contents provided by an IETM in accordance with various embodiments of the present disclosure;
  • FIG. 7 provides an example of a window displaying a table of contents in accordance with various embodiments of the present disclosure;
  • FIG. 8 is a process flow for filtering a table of contents in accordance with various embodiments of the present disclosure;
  • FIG. 9 provides an example of a window displaying a table of contents that has been filtered in accordance with various embodiments of the present disclosure;
  • FIG. 10 is a process flow for tagging content with formatting found in a source of the content in accordance with various embodiments of the present disclosure;
  • FIG. 11 is a process flow for formatting content based at least in part on a format structure found in a source of the content in accordance with various embodiments of the present disclosure;
  • FIG. 12A provides an example of a table of contents formatted according to S1000D standards;
  • FIG. 12B provides an example of a table of contents formatted according to a format structure found in one or more sources of the contents;
  • FIG. 12C provides an example of content from a source formatted according to a format structure found in the source;
  • FIG. 13 is a process flow for searching a table of contents in accordance with various embodiments of the present disclosure;
  • FIG. 14 is a process flow for providing one or more predictions based at least in part on search term(s) in accordance with various embodiments of the present disclosure;
  • FIGS. 15A and 15B provide examples of a search window in accordance with various embodiments of the present disclosure;
  • FIG. 16 is a process flow for generating a list of parts in accordance with various embodiments of the present disclosure;
  • FIG. 17 is a process flow for displaying a list of parts in accordance with various embodiments of the present disclosure;
  • FIG. 18A provides an example of a window displaying a list of parts in accordance with various embodiments of the present disclosure;
  • FIG. 18B provides an example of a mechanism for identifying levels for relisting a list of parts in accordance with various embodiments of the present disclosure;
  • FIG. 18C provides an example of a preview displaying information for a supplier in accordance with various embodiments of the present disclosure;
  • FIG. 18D provides an example of a preview displaying a list of other items that use a part in accordance with various embodiments of the present disclosure;
  • FIG. 19 is a process flow for allowing a user to order a part via an IETM in accordance with various embodiments of the present disclosure;
  • FIG. 20 is a process flow for submitting an order for a part via an IETM in accordance with various embodiments of the present disclosure;
  • FIG. 21A provides an example of a window in which an option to order a part is provided in accordance with various embodiments of the present disclosure;
  • FIG. 21B provides an example of an electronic order form that can be used to order a part in accordance with various embodiments of the present disclosure;
  • FIG. 21C provides an example of a graphical code that can be scanned to order a part in accordance with various embodiments of the present disclosure;
  • FIG. 22 is a process flow for displaying content for a topic found in technical documentation for an item in accordance with various embodiments of the present disclosure;
  • FIG. 23 is a process flow for causing parts found in textual information to be displayed as selectable in accordance with various embodiments of the present disclosure;
  • FIG. 24 is a process flow for causing applicability found in textual information to be displayed as selectable in accordance with various embodiments of the present disclosure;
  • FIG. 25 is a process flow for locking content in accordance with various embodiments of the present disclosure;
  • FIG. 26 is a process flow for setting a security classification for specific content in accordance with various embodiments of the present disclosure;
  • FIG. 27 provides an example of security classification formatting and functionality set for the display of content in accordance with various embodiments of the present disclosure;
  • FIGS. 28A and 28B provide a process flow for invoking functionality provided for a topic in accordance with various embodiments of the present disclosure;
  • FIG. 29 is a process flow for displaying related information for a part in accordance with various embodiments of the present disclosure;
  • FIG. 30 provides an example of related information displayed for a part in accordance with various embodiments of the present disclosure;
  • FIG. 31 is a process flow for displaying information on the meaning of an occurrence of applicability in accordance with various embodiments of the present disclosure;
  • FIG. 32 provides an example of displaying information on the meaning of an occurrence of applicability in accordance with various embodiments of the present disclosure;
  • FIG. 33 is a process flow for displaying a data source for a topic in accordance with various embodiments of the present disclosure;
  • FIG. 34A provides an example of a section of a data source displayed in accordance with various embodiments of the present disclosure;
  • FIG. 34B provides an example of an entire data source displayed in accordance with various embodiments of the present disclosure;
  • FIG. 35 is a process flow for generating an annotation in accordance with various embodiments of the present disclosure;
  • FIG. 36A provides an example of a generated annotation in accordance with various embodiments of the present disclosure;
  • FIG. 36B provides an example of a change request form in accordance with various embodiments of the present disclosure;
  • FIG. 36C provides an example of a selection mechanism to generate an annotation in accordance with various embodiments of the present disclosure;
  • FIG. 36D provides an example of a report of change requests submitted by a user in accordance with various embodiments of the present disclosure;
  • FIG. 36E provides an example of a list of annotations generated by a user in accordance with various embodiments of the present disclosure;
  • FIG. 37A is a process flow for configuring enhancing, relevant, and/or irrelevant formats in accordance with various embodiments of the present disclosure;
  • FIG. 37B is a process flow for assessing the steps found in a sequence in accordance with various embodiments of the present disclosure;
  • FIGS. 38A-E provide examples of sequential information in which current steps, or steps that have been skipped, are displayed using various formats in accordance with various embodiments of the present disclosure;
  • FIG. 39 is a process flow for unlocking content as a result of a user acknowledging an alert in accordance with various embodiments of the present disclosure;
  • FIG. 40A provides an example of a portion of content that has been locked in accordance with various embodiments of the present disclosure;
  • FIG. 40B provides an example of a portion of content that has been unlocked in accordance with various embodiments of the present disclosure;
  • FIG. 41 is a process flow for facilitating a user transferring a job in accordance with various embodiments of the present disclosure;
  • FIG. 42 is a process flow for facilitating a user resuming a suspended job in accordance with various embodiments of the present disclosure;
  • FIG. 43A is an example of a mechanism to enable a user to transfer or resume a job in accordance with various embodiments of the present disclosure;
  • FIG. 43B is an example of a job transfer window in accordance with various embodiments of the present disclosure;
  • FIG. 43C is an example of a procedure that has been suspended in accordance with various embodiments of the present disclosure;
  • FIG. 43D is an example of a procedure that has been resumed in accordance with various embodiments of the present disclosure;
  • FIG. 44 is a process flow for causing media content that is displayed to be updated based at least in part on a user scrolling through textual information in accordance with various embodiments of the present disclosure;
  • FIG. 45 provides an example of media content being updated as a user scrolls through textual information in accordance with various embodiments of the present disclosure;
  • FIG. 46A is a process flow for causing display of pins for a connector as highlighted in media content or referenced in textual information in accordance with various embodiments of the present disclosure;
  • FIGS. 46B and 46C provide examples of pins highlighted in an illustration in accordance with various embodiments of the present disclosure;
  • FIG. 47A is a process flow for causing display of a unit as highlighted in media content or referenced in textual information in accordance with various embodiments of the present disclosure;
  • FIG. 47B provides an example of a unit highlighted in an illustration in accordance with various embodiments of the present disclosure;
  • FIG. 47C provides an example of units highlighted in textual information in accordance with various embodiments of the present disclosure;
  • FIG. 48 is a process flow for providing functionality when a user reaches the end of content for a topic in accordance with various embodiments of the present disclosure;
  • FIG. 49A provides an example of an end of topic mechanism in accordance with various embodiments of the present disclosure;
  • FIG. 49B provides an example of a table of contents displayed as a result of invoking end of module functionality in accordance with various embodiments of the present disclosure;
  • FIG. 50A is a process flow for enabling a user to set up verbal commands in accordance with various embodiments of the present disclosure;
  • FIG. 50B is a process flow for processing a verbal command in accordance with various embodiments of the present disclosure;
  • FIG. 51A is a process flow for providing functionality for wiring data in accordance with various embodiments of the present disclosure;
  • FIG. 51B provides an example of an electrical schematic displayed in accordance with various embodiments of the present disclosure;
  • FIG. 51C provides an example of a preview of a connector in accordance with various embodiments of the present disclosure;
  • FIG. 51D provides an example of a list of components displayed in an electrical schematic in accordance with various embodiments of the present disclosure;
  • FIG. 51E provides an example of a list of other electrical schematics that display a selected component in accordance with various embodiments of the present disclosure;
  • FIG. 52 is a process flow for providing live wire functionality for a selected wire in accordance with various embodiments of the present disclosure;
  • FIG. 53 is an example of a wire diagram in accordance with various embodiments of the present disclosure;
  • FIG. 54 is a process flow for providing crosshairs on a graph in accordance with various embodiments of the present disclosure;
  • FIG. 55 is an example of crosshairs placed on a graph in accordance with various embodiments of the present disclosure;
  • FIG. 56 is a process flow for providing functionality for media content involving 3D graphics in accordance with various embodiments of the present disclosure;
  • FIGS. 57A-D provide examples of a table of parts and a 3D graphic displayed in accordance with various embodiments of the present disclosure;
  • FIGS. 57E and 57F provide examples of a part removed from a 3D graphic in accordance with various embodiments of the present disclosure;
  • FIGS. 57G and 57H provide examples of a part solely displayed in a 3D graphic in accordance with various embodiments of the present disclosure;
  • FIG. 57I provides an example of axes on a 3D graphic displayed in accordance with various embodiments of the present disclosure;
  • FIG. 58 is a process flow for providing components in media content as identified in a hierarchy in accordance with various embodiments of the present disclosure;
  • FIG. 59A provides an example of a hierarchy of components displayed for components found in media content in accordance with various embodiments of the present disclosure;
  • FIG. 59B provides an example of a report displayed of components illustrated in media content but not listed in accordance with various embodiments of the present disclosure;
  • FIG. 60 is a process flow for allowing a user to initiate communication sessions within an IETM environment in accordance with various embodiments of the present disclosure;
  • FIG. 61A is an example of a selection mechanism to enable a user to access communication session functionality in accordance with various embodiments of the present disclosure;
  • FIG. 61B is an example of a display to enable a user to initiate a communication session within an IETM in accordance with various embodiments of the present disclosure;
  • FIG. 61C is an example of a communication window that is displayed once a communication session is established in accordance with various embodiments of the present disclosure;
  • FIG. 61D is an example of a communication window in which a user has shared his or her window to other users involved in a communication session in accordance with various embodiments of the present disclosure;
  • FIG. 62 is a process flow for addressing warnings and/or cautions shown on a caution panel found on an item in accordance with various embodiments of the present disclosure;
  • FIG. 63A provides an example of a virtual caution panel in accordance with various embodiments of the present disclosure;
  • FIG. 63B provides an example of a corrective action provided for one or more warnings and/or cautions in accordance with various embodiments of the present disclosure;
  • FIG. 64 is a process flow for generating a workflow for loading articles onto and/or into an object of an item in accordance with various embodiments of the present disclosure;
  • FIG. 65A provides an example of a display of a digital model of an aircraft to be loaded with articles in accordance with various embodiments of the present disclosure;
  • FIG. 65B provides an example of display of a digital workflow in the form of a table of contents in accordance with various embodiments of the present disclosure;
  • FIG. 66 is a process flow for managing a workflow for loading articles onto and/or into an object for an item in accordance with various embodiments of the present disclosure;
  • FIG. 67 is a process flow for securely integrating the use of a network connected with a remote device with an IETM environment in accordance with various embodiments of the present disclosure;
  • FIG. 68 is a process flow for providing a virtual network within an IETM environment in accordance with various embodiments of the present disclosure; and
  • FIG. 69 is a process flow for importing data for the technical documentation for an item into an IETM in accordance with various embodiments of the present disclosure.
  • DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS
  • Various embodiments of the present disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of the disclosure are shown. Indeed, the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. The term “or” (also designated as “/”) is used herein in both the alternative and conjunctive sense, unless otherwise indicated. The terms “illustrative” and “exemplary” are used to be examples with no indication of quality level. Like numbers refer to like elements throughout.
  • Exemplary Technical Contributions
  • Various embodiments of the present disclosure address technical problems related to providing technical documentation within an IETM environment. Although conventional IETMs oftentimes provide interactive functionality to users who are viewing technical documentation via the IETMs, such functionality is normally limited to simply viewing the documentation in different formats. For example, a conventional IETM may provide a digital model of an apparatus, machine, vehicle, equipment, and/or the like (e.g., illustrations) that allows the user to select a component for the apparatus, machine, vehicle, equipment, and/or the like displayed in the model to view documentation on the component. However, this capability is typically the extent of the interactive functionality provided in the IETM. Therefore, if the user needs to perform additional tasks with respect to the component such as, for example, ordering the component, then the user is required to sign into a different system (e.g., procurement system) to perform such tasks. Such requirements not only lead to inefficiencies with respect to resources such as the user's time and effort jumping back and forth between different systems, but also lead to inefficiencies with respect to resources such as the systems, storage, networking, and/or equipment required to perform such tasks.
  • In addition, requiring users to use multiple systems to view technical documentation on an apparatus, machine, vehicle, equipment, and/or the like and perform various tasks with respect to the apparatus, machine, vehicle, equipment, and/or the like can present many technical challenges. For instance, requiring users who are viewing technical documentation through an IETM to use other systems to perform tasks outside of viewing the documentation necessitates separate security measures to be implemented within the multiple systems. Managing these separate security measures within each of the systems can lead to multiple challenges in providing secure environments, as well as to further inefficiencies for users, systems, storage, networking, and/or equipment.
  • Further, users oftentimes wish to view and interact with a large volume of technical documentation at any given time while viewing and interacting with such documentation via an IETM. For instance, this large volume of documentation may involve viewing and interacting with textual documentation and/or media content (e.g., illustrations) on several different topics. For example, a user may be performing maintenance on a component and may wish to view technical documentation via the IETM on the component, on a maintenance procedure the user is performing on the component, as well as on a part being used in performing the maintenance procedure. Here, the user may need to view the technical documentation for the different topics by interchangeably moving back-and-forth between the technical documentation for the different topics. However, a technical challenge often encountered in conventional IETMs is facilitating the user's ability to move back-and-forth between technical documentation for different topics, especially when the technical documentation involves a large volume of information.
  • Finally, some users may wish to view documentation through an IETM under circumstances that may result in challenges for the users in interacting with the IETM. For example, a user may be viewing documentation through an IETM on a maintenance procedure while out in the field performing the procedure. In this instance, the user may be required to scroll through the documentation on the maintenance procedure while performing the procedure. However, the user may need to use both his or her hands in performing the maintenance procedure and, as a result, may not be able to interact with a device (e.g., laptop computer or mobile device) being used by the user to view the IETM as required by many conventional IETMs. Specifically, many conventional IETMs require a user to perform some type of physical interaction with the device being used to view the IETM in order to work with the documentation, such as, for example, using a mouse, pointer, touchscreen, and/or the like. Therefore, many conventional IETMs are quite inconvenient and/or impractical to use in such situations.
  • Further, the user may be faced with some type of physical challenge that may make it inconvenient and/or impractical for the user to interact with and/or comprehend documentation through the IETM. For example, the user may be required to use a mobile device such as a smartphone or tablet to access the IETM and view technical documentation. In this example, the content for the documentation may be shown in a font size that is difficult for the user to read. However, simply increasing the font size for the documentation may be impractical in that the bigger font size may require the user to manipulate the documentation (e.g., navigate around the documentation on the screen of his or her device) very often to view certain portions of the documentation and/or to perform certain functionality. Accordingly, conventional IETMs do not provide functionality to allow the user to selectively enhance content so that it may be easier for the user to comprehend. Likewise, the user may have a physical challenge that can make it difficult for the user to physically interact with his or her device being used to access the IETM in a manner required by many conventional IETMs.
  • Thus, various embodiments of the present disclosure address the above-mentioned technical problems and challenges encountered with many conventional IETMs. Specifically, various embodiments of the present disclosure provide functionality beyond simply presenting an interactive environment to view technical documentation on items found in conventional IETMs. In addition, various embodiments of the present disclosure provide such functionality within a secure environment that is more easily administered and maintained than conventional configurations involving a user having to use multiple systems to perform such functionality. In addition, various embodiments of the present disclosure provide functionality that allows a user to view, comprehend, convey, and interact with content within an IETM environment through enhanced capabilities not found in conventional IETMs. Furthermore, various embodiments of the present disclosure facilitate the display of and interaction with technical documentation within an IETM environment by presenting such technical documentation through the displaying, positioning, and/or organizing of the technical documentation in a more optimal manner than conventional IETMs through the use of unique and novel configurations of display windows, view panes, and/or the like.
  • Therefore, the disclosed solution provided herein is more effective, efficient, timely, accurate, and faster, and provides more functionality than found in conventional IETMs. In addition, the incorporation of such functionality into an IETM enables users to use such functionality in a more secure environment. Further, the disclosed solution provided herein enables presentation of technical documentation in a more optimal manner than conventional IETMs to facilitate the use of such documentation. Incorporating such functionality and presentation of technical documentation provides the advantage of allowing users to carry out many tasks in a shorter timeframe than under conventional IETMs. Finally, the disclosed solution can result in reduced network traffic, require fewer computational resources, allow for less memory usage, and/or the like. Thus, various embodiments of the present disclosure make significant technical contributions to improving the efficiency, reliability, and functionality in providing technical documentation within an IETM environment.
  • Computer Program Products, Systems, Methods, and Computing Entities
  • Embodiments of the present disclosure may be implemented in various ways, including as computer program products that comprise articles of manufacture. Such computer program products may include one or more software components including, for example, software objects, methods, data structures, and/or the like. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component comprising assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component comprising higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution.
  • Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, and/or a report writing language. In one or more example embodiments, a software component comprising instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution).
  • A computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules, scripts, source code, program code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like (also referred to herein as executable instructions, instructions for execution, computer program products, program code, and/or similar terms used herein interchangeably). Such non-transitory computer-readable storage media include all computer-readable media (including volatile and non-volatile media).
  • In one embodiment, a non-volatile computer-readable storage medium may include a floppy disk, flexible disk, hard disk, solid-state storage (SSS) (e.g., a solid state drive (SSD), solid state card (SSC), solid state module (SSM), enterprise flash drive), magnetic tape, or any other non-transitory magnetic medium, and/or the like. A non-volatile computer-readable storage medium may also include a punch card, paper tape, optical mark sheet (or any other physical medium with patterns of holes or other optically recognizable indicia), compact disc read only memory (CD-ROM), compact disc-rewritable (CD-RW), digital versatile disc (DVD), Blu-ray disc (BD), any other non-transitory optical medium, and/or the like. Such a non-volatile computer-readable storage medium may also include read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory (e.g., Serial, NAND, NOR, and/or the like), multimedia memory cards (MMC), secure digital (SD) memory cards, SmartMedia cards, CompactFlash (CF) cards, Memory Sticks, and/or the like. Further, a non-volatile computer-readable storage medium may also include conductive-bridging random access memory (CBRAM), phase-change random access memory (PRAM), ferroelectric random-access memory (FeRAM), non-volatile random-access memory (NVRAM), magnetoresistive random-access memory (MRAM), resistive random-access memory (RRAM), Silicon-Oxide-Nitride-Oxide-Silicon memory (SONOS), floating junction gate random access memory (FJG RAM), Millipede memory, racetrack memory, and/or the like.
  • In one embodiment, a volatile computer-readable storage medium may include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), fast page mode dynamic random access memory (FPM DRAM), extended data-out dynamic random access memory (EDO DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), double data rate type two synchronous dynamic random access memory (DDR2 SDRAM), double data rate type three synchronous dynamic random access memory (DDR3 SDRAM), Rambus dynamic random access memory (RDRAM), Twin Transistor RAM (TTRAM), Thyristor RAM (T-RAM), Zero-capacitor (Z-RAM), Rambus in-line memory module (RIMM), dual in-line memory module (DIMM), single in-line memory module (SIMM), video random access memory (VRAM), cache memory (including various levels), flash memory, register memory, and/or the like. It will be appreciated that where embodiments are described to use a computer-readable storage medium, other types of computer-readable storage media may be substituted for or used in addition to the computer-readable storage media described above.
  • As should be appreciated, various embodiments of the present disclosure may also be implemented as methods, apparatus, systems, computing devices, computing entities, and/or the like. As such, embodiments of the present disclosure may take the form of a data structure, apparatus, system, computing device, computing entity, and/or the like executing instructions stored on a computer-readable storage medium to perform certain steps or operations. Thus, embodiments of the present disclosure may also take the form of an entirely hardware embodiment, an entirely computer program product embodiment, and/or an embodiment that comprises a combination of computer program products and hardware performing certain steps or operations.
  • Embodiments of the present disclosure are described below with reference to block diagrams and flowchart illustrations. Thus, it should be understood that each block of the block diagrams and flowchart illustrations may be implemented in the form of a computer program product, an entirely hardware embodiment, a combination of hardware and computer program products, and/or apparatus, systems, computing devices, computing entities, and/or the like carrying out instructions, operations, steps, and similar words used interchangeably (e.g., the executable instructions, instructions for execution, program code, and/or the like) on a computer-readable storage medium for execution. For example, retrieval, loading, and execution of code may be performed sequentially, such that one instruction is retrieved, loaded, and executed at a time. In some exemplary embodiments, retrieval, loading, and/or execution may be performed in parallel, such that multiple instructions are retrieved, loaded, and/or executed together. Thus, such embodiments can produce specifically configured machines performing the steps or operations specified in the block diagrams and flowchart illustrations. Accordingly, the block diagrams and flowchart illustrations support various combinations of embodiments for performing the specified instructions, operations, or steps.
  • Exemplary System Architecture
  • FIG. 1 provides an illustration of an exemplary system architecture that may be used in accordance with various embodiments of the present disclosure. As shown in FIG. 1, the architecture may include one or more management computing entities 100, one or more networks 105, and one or more user computing entities 110. Each of these components, entities, devices, systems, and similar words used herein interchangeably may be in direct or indirect communication with, for example, one another over the same or different wired or wireless networks. Additionally, while FIG. 1 illustrates the various system entities as separate, standalone entities, the various embodiments are not limited to this particular architecture.
  • Exemplary Management Computing Entity
  • FIG. 2 provides a schematic of a management computing entity 100 according to one embodiment of the present disclosure. In general, the terms computing entity, computer, entity, device, system, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktop computers, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, gaming consoles (e.g., Xbox, Play Station, Wii), watches, glasses, iBeacons, proximity beacons, key fobs, radio frequency identification (RFID) tags, ear pieces, scanners, televisions, dongles, cameras, wristbands, wearable items/devices, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein. Such functions, operations, and/or processes may include, for example, transmitting, receiving, operating on, processing, displaying, storing, determining, creating/generating, monitoring, evaluating, comparing, and/or similar terms used herein interchangeably. In one embodiment, these functions, operations, and/or processes can be performed on data, content, information, and/or similar terms used herein interchangeably.
  • As indicated, in one embodiment, the management computing entity 100 may also include one or more communications interfaces 220 for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. For instance, the management computing entity 100 may communicate with user computing entities 110 and/or a variety of other computing entities.
  • As shown in FIG. 2, in one embodiment, the management computing entity 100 may include or be in communication with one or more processing elements 205 (also referred to as processors, processing circuitry, and/or similar terms used herein interchangeably) that communicate with other elements within the management computing entity 100 via a bus, for example. As will be understood, the processing element 205 may be embodied in a number of different ways. For example, the processing element 205 may be embodied as one or more complex programmable logic devices (CPLDs), microprocessors, multi-core processors, coprocessing entities, application-specific instruction-set processors (ASIPs), microcontrollers, and/or controllers. Further, the processing element 205 may be embodied as one or more other processing devices or circuitry. The term circuitry may refer to an entirely hardware embodiment or a combination of hardware and computer program products. Thus, the processing element 205 may be embodied as integrated circuits, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), hardware accelerators, other circuitry, and/or the like. As will therefore be understood, the processing element 205 may be configured for a particular use or configured to execute instructions stored in volatile or non-volatile media or otherwise accessible to the processing element 205. As such, whether configured by hardware or computer program products, or by a combination thereof, the processing element 205 may be capable of performing steps or operations according to embodiments of the present disclosure when configured accordingly.
  • In one embodiment, the management computing entity 100 may further include or be in communication with non-volatile media (also referred to as non-volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). In one embodiment, the non-volatile storage or memory may include one or more non-volatile storage or memory media 210, including but not limited to hard disks, ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. As will be recognized, the non-volatile storage or memory media may store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like. The term database, database instance, database management system, and/or similar terms used herein interchangeably may refer to a collection of records or data that is stored in a computer-readable storage medium using one or more database models, such as a hierarchical database model, network model, relational model, entity-relationship model, object model, document model, semantic model, graph model, and/or the like.
  • In one embodiment, the management computing entity 100 may further include or be in communication with volatile media (also referred to as volatile storage, memory, memory storage, memory circuitry and/or similar terms used herein interchangeably). In one embodiment, the volatile storage or memory may also include one or more volatile storage or memory media 215, including but not limited to RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. As will be recognized, the volatile storage or memory media may be used to store at least portions of the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like being executed by, for example, the processing element 205. Thus, the databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like may be used to control certain aspects of the operation of the management computing entity 100 with the assistance of the processing element 205 and operating system.
  • As indicated, in one embodiment, the management computing entity 100 may also include one or more communications interfaces 220 for communicating with various computing entities, such as by communicating data, content, information, and/or similar terms used herein interchangeably that can be transmitted, received, operated on, processed, displayed, stored, and/or the like. Such communication may be executed using a wired data transmission protocol, such as fiber distributed data interface (FDDI), digital subscriber line (DSL), Ethernet, asynchronous transfer mode (ATM), frame relay, data over cable service interface specification (DOCSIS), or any other wired transmission protocol. Similarly, the management computing entity 100 may be configured to communicate via wireless external communication networks using any of a variety of protocols, such as general packet radio service (GPRS), Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), CDMA2000 1× (1×RTT), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), Evolution-Data Optimized (EVDO), High Speed Packet Access (HSPA), High-Speed Downlink Packet Access (HSDPA), IEEE 802.11 (Wi-Fi), Wi-Fi Direct, 802.16 (WiMAX), ultra-wideband (UWB), infrared (IR) protocols, near field communication (NFC) protocols, Wibree, Bluetooth protocols, wireless universal serial bus (USB) protocols, and/or any other wireless protocol.
  • Although not shown, the management computing entity 100 may include or be in communication with one or more input elements, such as a keyboard input, a mouse input, a touch screen/display input, motion input, movement input, audio input, pointing device input, joystick input, keypad input, and/or the like. The management computing entity 100 may also include or be in communication with one or more output elements (not shown), such as audio output, video output, screen/display output, motion output, movement output, and/or the like.
  • As will be appreciated, one or more of the management computing entity's 100 components may be located remotely from other management computing entity 100 components, such as in a distributed system. Furthermore, one or more of the components may be combined and additional components performing functions described herein may be included in the management computing entity 100. Thus, the management computing entity 100 can be adapted to accommodate a variety of needs and circumstances. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
  • Exemplary User Computing Entity
  • A user may be an individual, a family, a company, an organization, an entity, a department within an organization, a representative of an organization and/or person, and/or the like. To access the IETM, a user may operate a user computing entity 110 that includes one or more components that are functionally similar to those of the management computing entity 100. FIG. 3 provides an illustrative schematic representative of a user computing entity 110 that can be used in conjunction with embodiments of the present disclosure. In general, the terms device, system, computing entity, entity, and/or similar words used herein interchangeably may refer to, for example, one or more computers, computing entities, desktops, mobile phones, tablets, phablets, notebooks, laptops, distributed systems, gaming consoles (e.g., Xbox, Play Station, Wii), watches, glasses, key fobs, radio frequency identification (RFID) tags, ear pieces, scanners, cameras, wristbands, kiosks, input terminals, servers or server networks, blades, gateways, switches, processing devices, processing entities, set-top boxes, relays, routers, network access points, base stations, the like, and/or any combination of devices or entities adapted to perform the functions, operations, and/or processes described herein. User computing entities 110 can be operated by various parties. As shown in FIG. 3, the user computing entity 110 can include an antenna 312, a transmitter 304 (e.g., radio), a receiver 306 (e.g., radio), and a processing element 308 (e.g., CPLDs, microprocessors, multi-core processors, coprocessing entities, ASIPs, microcontrollers, and/or controllers) that provides signals to and receives signals from the transmitter 304 and receiver 306, respectively.
  • The signals provided to and received from the transmitter 304 and the receiver 306, respectively, may include signaling information in accordance with air interface standards of applicable wireless systems. In this regard, the user computing entity 110 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. More particularly, the user computing entity 110 may operate in accordance with any of a number of wireless communication standards and protocols, such as those described above with regard to the management computing entity 100. In a particular embodiment, the user computing entity 110 may operate in accordance with multiple wireless communication standards and protocols, such as UMTS, CDMA2000, 1×RTT, WCDMA, TD-SCDMA, LTE, E-UTRAN, EVDO, HSPA, HSDPA, Wi-Fi, Wi-Fi Direct, WiMAX, UWB, IR, NFC, Bluetooth, USB, and/or the like. Similarly, the user computing entity 110 may operate in accordance with multiple wired communication standards and protocols, such as those described above with regard to the management computing entity 100 via a network interface 320.
  • Via these communication standards and protocols, the user computing entity 110 can communicate with various other entities using concepts such as Unstructured Supplementary Service Data (USSD), Short Message Service (SMS), Multimedia Messaging Service (MMS), Dual-Tone Multi-Frequency Signaling (DTMF), and/or Subscriber Identity Module Dialer (SIM dialer). The user computing entity 110 can also download changes, add-ons, and updates, for instance, to its firmware, software (e.g., including executable instructions, applications, program modules), and operating system.
  • According to one embodiment, the user computing entity 110 may include location determining aspects, devices, modules, functionalities, and/or similar words used herein interchangeably. For example, the user computing entity 110 may include outdoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, universal time (UTC), date, and/or various other information/data. In one embodiment, the location module can acquire data, sometimes known as ephemeris data, by identifying the number of satellites in view and the relative positions of those satellites. The satellites may be a variety of different satellites, including Low Earth Orbit (LEO) satellite systems, Department of Defense (DOD) satellite systems, the European Union Galileo positioning systems, the Chinese Compass navigation systems, Indian Regional Navigational satellite systems, and/or the like. Alternatively, the location information can be determined by triangulating the user computing entity's 110 position in connection with a variety of other systems, including cellular towers, Wi-Fi access points, and/or the like. Similarly, the user computing entity 110 may include indoor positioning aspects, such as a location module adapted to acquire, for example, latitude, longitude, altitude, geocode, course, direction, heading, speed, time, date, and/or various other information/data. Some of the indoor systems may use various position or location technologies including RFID tags, indoor beacons or transmitters, Wi-Fi access points, cellular towers, nearby computing devices (e.g., smartphones, laptops) and/or the like. For instance, such technologies may include the iBeacons, Gimbal proximity beacons, Bluetooth Low Energy (BLE) transmitters, NFC transmitters, and/or the like. These indoor positioning aspects can be used in a variety of settings to determine the location of someone or something to within inches or centimeters.
  • The user computing entity 110 may also comprise an IETM viewer (that can include a display 316 coupled to a processing element 308) and/or a viewer (coupled to a processing element 308). For example, the IETM viewer may be a user application, browser, user interface, graphical user interface, and/or similar words used herein interchangeably executing on and/or accessible via the user computing entity 110 to interact with and/or cause display of information from the management computing entity 100, as described herein. The term “viewer” is used generically and is not limited to “viewing.” Rather, the viewer is a multi-purpose digital data viewer capable of receiving input and providing output. The viewer can comprise any of a number of devices or interfaces allowing the user computing entity 110 to receive data, such as a keypad 318 (hard or soft), a touch display, voice/speech or motion interfaces, or other input device. In embodiments including a keypad 318, the keypad 318 can include (or cause display of) the conventional numeric (0-9) and related keys (#, *), and other keys used for operating the user computing entity 110 and may include a full set of alphabetic keys or set of keys that may be activated to provide a full set of alphanumeric keys. In addition to providing input, the viewer can be used, for example, to activate or deactivate certain functions, such as screen savers and/or sleep modes.
  • The user computing entity 110 can also include volatile storage or memory 322 and/or non-volatile storage or memory 324, which can be embedded and/or may be removable. For example, the non-volatile memory may be ROM, PROM, EPROM, EEPROM, flash memory, MMCs, SD memory cards, Memory Sticks, CBRAM, PRAM, FeRAM, NVRAM, MRAM, RRAM, SONOS, FJG RAM, Millipede memory, racetrack memory, and/or the like. The volatile memory may be RAM, DRAM, SRAM, FPM DRAM, EDO DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, RDRAM, TTRAM, T-RAM, Z-RAM, RIMM, DIMM, SIMM, VRAM, cache memory, register memory, and/or the like. The volatile and non-volatile storage or memory can store databases, database instances, database management systems, data, applications, programs, program modules, scripts, source code, object code, byte code, compiled code, interpreted code, machine code, executable instructions, and/or the like to implement the functions of the user computing entity 110. As indicated, this may include a user application that is resident on the entity or accessible through a browser or other IETM viewer for communicating with the management computing entity 100 and/or various other computing entities.
  • In another embodiment, the user computing entity 110 may include one or more components or functionality that are the same or similar to those of the management computing entity 100, as described in greater detail above. As will be recognized, these architectures and descriptions are provided for exemplary purposes only and are not limiting to the various embodiments.
  • Exemplary System Operations
  • The logical operations described herein may be implemented (1) as a sequence of computer implemented acts or one or more program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These states, operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. Greater or fewer operations may be performed than shown in the figures and described herein. These operations may also be performed in a different order than those described herein.
  • As described above, the management computing entity 100 and/or user computing entity 110 may be configured for storing technical documentation (e.g., data) in an IETM, providing access to the technical documentation to a user via the IETM, and/or providing functionality to the user accessing the technical documentation via the IETM. In general, the technical documentation is typically made up of volumes of text along with other media objects. In many instances, the technical documentation is arranged to provide the text and/or the media objects on an item. For instance, the item may be a product, machinery, equipment, a system, and/or the like such as, for example, a bicycle or an aircraft.
  • Accordingly, the technical documentation may provide textual information along with non-textual information (e.g., one or more visual representations) of the item and/or components of the item. Textual information generally includes alphanumeric information and may also include different element types such as graphical features, controls, and/or the like. Non-textual information generally includes media content such as illustrations (e.g., 2D and 3D graphics), video, audio, and/or the like, although the non-textual information may also include alphanumeric information.
  • The technical documentation may be provided as digital media in any of a variety of formats, such as JPEG, JFIF, JPEG2000, EXIF, TIFF, RAW, DIV, GIF, BMP, PNG, PPM, MOV, AVI, MP4, MKV, and/or the like. In addition, the technical documentation may be provided in any of a variety of formats, such as DOCX, HTML5, TXT, PDF, XML, SGML, JSON, and/or the like. As noted, the technical documentation may provide textual and non-textual information of various components of the item. For example, various information may be provided with respect to assemblies, sub-assemblies, sub-sub-assemblies, systems, subsystems, sub-subsystems, individual parts, and/or the like associated with the item.
  • In various embodiments, the technical documentation for the item may be stored and/or provided in accordance with S1000D standards and/or a variety of other standards. According to various embodiments, the management computing entity 100 and/or user computing entity 110 provides functionality in the access and use of the technical documentation provided via the IETM in accordance with user instructions and/or input received from the user via an IETM viewer (e.g., a browser, a window, an application, a graphical user interface, and/or the like).
  • Accordingly, in particular embodiments, the IETM viewer is accessible from a user computing entity 110 that may or may not be in communication with the management computing entity 100. For example, a user may sign into the management computing entity 100 from the user computing entity 110, or solely into the user computing entity 110, to access technical documentation via the IETM. The management computing entity 100 and/or user computing entity 110 may be configured to recognize any such sign-in request, verify the user has permission to access the technical documentation (e.g., by verifying the user's credentials), and present/provide the user with various displays of content for the technical documentation via the IETM viewer (e.g., displayed on display 316).
  • Further detail is now provided with respect to various functionality provided by embodiments of the present disclosure. As one of ordinary skill in the art will understand in light of this disclosure, the modules discussed below and configured for carrying out various functionality may be invoked, executed, and/or the like by the management computing entity 100, the user computing entity 110, and/or a combination thereof depending on the embodiment.
  • Sign-In Module
  • A user may be required to sign in on a device (e.g., a user computing entity 110) to gain access to the technical documentation for an item through an IETM. Accordingly, depending on the circumstances, the user's device (e.g., user computing entity 110) and/or a management computing entity 100 may be configured for facilitating the user's access to the technical documentation. For example, the technical documentation may be stored locally on the user's computing entity 110 and therefore, the user's computing entity 110 is configured to facilitate the user's access to the documentation without cooperation of the management computing entity 100. In other instances, the user's computing entity 110 and the management computing entity 100 may be in communication and work in concert to provide the user with access to the technical documentation.
  • Turning now to FIG. 4, additional details are provided regarding a process flow for signing a user into the IETM according to various embodiments. FIG. 4 is a flow diagram showing a sign-in module for performing such functionality according to various embodiments of the disclosure. Here, the user may open the IETM residing on his or her user computing entity 110 to gain access to technical documentation for a particular item, while in other instances the user may open an IETM viewer (e.g., browser) to gain access to the technical documentation residing remotely on the management computing entity 100. For example, the IETM may be provided as software-as-a-service over some type of network. Similarly, depending on the embodiment, the technical documentation may be stored locally on the user's computing entity 110 or remotely on the management computing entity 100 that the user computing entity 110 communicates with to access the documentation.
  • Therefore, the process flow 400 begins in various embodiments with the sign-in module providing a sign-in page (e.g., webpage), screen, window, graphical user interface, and/or the like viewable by the user via an IETM viewer in Operation 410. For convenience, the term “window” is used throughout the remainder of the application, although those of ordinary skill in the art understand this term may include other forms of displaying content. The sign-in window may provide a number of fields such as a selectable dataset field, a selectable unit field, and a selectable object field. In particular embodiments, the selectable dataset field provides one or more datasets in which each dataset represents a publication of the technical documentation available for a particular item. For example, technical documentation accessible through the IETM may be for an airline. Here, the airline may have a number of different aircraft types/models in its fleet such as different jet models, propeller models, rotor models, and/or the like. Therefore, the IETM may provide a dataset for each model and the selectable dataset field may be a mechanism such as a dropdown field listing all of the datasets for the different aircraft models that allows for the user to select a particular dataset.
  • The sign-in module determines whether input has been received indicating the user has selected a dataset for a particular item in Operation 415. If so, then the sign-in module provides one or more applicable units for the dataset for display in Operation 420. An applicable unit may represent the user's relationship with respect to the technical documentation and the associated item. For instance, in particular embodiments, the user may be an employee of an airline and the unit may represent the position, job, role, and/or the like that the user holds with the airline. For example, the user may be a salesperson, design engineer, mechanic, and/or the like for the airline. In other embodiments, the unit may represent a larger entity within the organization such as, for example, the research and development department, marketing department, engineering design department, and/or the like. In addition, in particular embodiments, the applicable units displayed may be dependent on the dataset selected by the user. For example, an applicable unit that may be provided is jet mechanic as a result of the user selecting the dataset for a jet model. Accordingly, the units may be displayed in the selectable unit field. For example, the selectable unit field may be a dropdown field listing all of the applicable units for the user to select from.
  • Therefore, the sign-in module determines whether input has been received indicating the user has selected a unit in Operation 425. If so, then the sign-in module in particular embodiments provides one or more applicable objects in the selectable object field in Operation 430. Here, an object represents a specific instance of the item associated with the technical documentation. For example, the user may be a mechanic for the airline and he or she may be signing into the IETM to gain access to technical documentation for a particular model of aircraft. Here, the particular model of aircraft may have multiple configurations in which a first configuration uses air brakes and thrust reversers and a second configuration uses disc brakes and thrust reversers. Therefore, the objects may represent the two different configurations of the model of aircraft. In another example, the user may instead be a mechanic for the airline and he or she may be signing into the IETM to gain access to technical documentation for a particular aircraft. Therefore, in this instance, the one or more applicable objects may be the specific aircraft found in the airline's fleet for the model of aircraft. For example, the user may be planning to perform maintenance on one of the particular aircraft and selects the aircraft from the applicable objects listed in the selectable object field.
  • Again, the selectable object field may be configured as a control such as a dropdown listing the applicable objects to allow the user to select a desired object. In addition, the applicable objects may be dependent on the unit selected by the user. For example, the user may have selected mechanic for crew C as the unit and only the aircraft for the particular type of aircraft authorized to be worked on by crew C may be displayed on the sign-in window.
  • Accordingly, in particular embodiments, selection of a particular object may allow for the technical documentation for the item to be filtered down to a smaller dataset. For instance, returning to the example involving the different configurations for the model of aircraft, the technical documentation for this particular model of aircraft may be filtered to only provide documentation on the air brake configuration or the disc brake configuration based at least in part on the user's selection. In addition, in particular embodiments, a selection of a particular object may allow for recordation of technical documentation accessed and/or processes, tasks, and/or the like performed for a particular object of an item. For instance, the performance of maintenance on a specific aircraft found in the airline's fleet may be recorded/tracked in the IETM. Therefore, the IETM may be used to maintain a maintenance record for the specific aircraft. In some embodiments, a universal object may be provided along with the applicable objects that allows for the user to view all the technical documentation for a particular item. For example, a universal object may be provided to allow the user to view the technical documentation on both the air brake configuration and the disc brake configuration of the model of aircraft.
  • Therefore, in particular embodiments, the sign-in module determines whether input has been received indicating the user has selected a specific object in Operation 435. If not, then the sign-in module determines whether input has been received indicating the user has selected a universal object in Operation 440. If the user has selected the universal object, then the sign-in module causes a sign-in mechanism to be made available on the sign-in window to the user in Operation 445. Accordingly, the sign-in mechanism may be any one of different types of controls depending on the embodiment such as, for example, a button, a toggle, checkbox, and/or the like.
  • If instead the user has selected a specific object, then the sign-in module in particular embodiments determines whether input has been received indicating a job has been identified in Operation 450. A job may represent an instance of a specific procedure, task, operation, and/or the like to be performed on the specific object. For instance, returning to the example involving the user selecting a specific aircraft for the airline, the job may represent a specific maintenance task the user is to perform on the specific aircraft, such as repairing the air braking system. Accordingly, the sign-in window may provide a field for the user to enter an identifier for the job. In some embodiments, the sign-in module causes the job field to be accessible in response to the user selecting a specific object.
  • Again, the identification of a job may allow the technical documentation to be filtered to enable the user to find the documentation needed for the job more easily. In addition, the identification of a job may allow for the tracking of the jobs performed on the specific object. Further, the identification of a job may provide security in that access to only certain technical documentation may be provided based at least in part on the job. If a job has been identified by the user, then the sign-in module causes the sign-in mechanism to be made available in Operation 445.
  • At this point, the user may select the sign-in mechanism to gain access to the IETM and desired technical documentation. Therefore, the sign-in module determines whether input has been received indicating the user has selected the sign-in mechanism in Operation 455 and if so, has provided the required information in Operation 460. For example, in particular embodiments, the sign-in window may also display one or more fields for the user to enter a username and/or password. Therefore, in these instances, the sign-in module may determine whether the user has provided such information. If the user has not, then the sign-in module may provide an error message to display informing the user to provide the needed information in Operation 465.
  • If all the required information has been provided by the user, then the sign-in module determines whether the user's credentials are valid in Operation 470. Here, in particular embodiments, the IETM and/or a supporting system in communication with the IETM may store information on the user's credentials and the information entered by the user on the sign-in window may be compared with the stored credential information. If the user's credentials are invalid, then the sign-in module may provide an error message to display informing the user of such in Operation 465. However, if the user's credentials are valid, then the sign-in module signs the user into the IETM in Operation 475. At this point, the user may begin accessing and interacting with the technical documentation for the item via the IETM.
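  • By way of a non-limiting illustration only, the sign-in logic described above (selecting a dataset, unit, and specific or universal object, identifying a job for a specific object, and validating the user's credentials) might be sketched in software as follows. The class, field, and function names, the "UNIVERSAL" marker, and the plaintext credential comparison are assumptions made purely for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SignInRequest:
    """Hypothetical container for the fields gathered on the sign-in window."""
    username: str
    password: str
    dataset: str                   # publication of technical documentation for an item
    unit: str                      # user's position, job, role, and/or the like
    object_id: str                 # specific object (e.g., tail number) or "UNIVERSAL"
    job_id: Optional[str] = None   # required when a specific object is selected


def sign_in(request: SignInRequest, credential_store: dict) -> str:
    """Validate a sign-in request along the lines of Operations 435-475."""
    # A specific object requires a job identifier; the universal object does not
    # (cf. Operations 435-450).
    if request.object_id != "UNIVERSAL" and not request.job_id:
        return "ERROR: a job must be identified for the selected object"

    # All required information must be provided (cf. Operation 460).
    if not (request.username and request.password):
        return "ERROR: please provide the required sign-in information"

    # Compare the entered credentials with stored credential information (cf. Operation 470).
    if credential_store.get(request.username) != request.password:
        return "ERROR: invalid credentials"

    # Credentials are valid; the user is signed into the IETM (cf. Operation 475).
    return f"signed in: {request.username} / dataset={request.dataset} / object={request.object_id}"
```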
  • Turning now to FIG. 5A, an example of a sign-in window 500 is provided that may be used according to various embodiments. In this particular example, a username field 510 is provided as a text field that allows for the user to enter his or her username. In addition, a selectable dataset field 515 is provided to allow the user to select the technical documentation (e.g., dataset) for a desired item. Here, the selectable dataset field 515 is provided as a dropdown menu control that lists the available technical documentation from which the user can select. Likewise, a selectable unit field 520 is provided that allows for the user to select a unit. Again, in this particular example, the selectable unit field 520 is provided as a dropdown menu control listing the applicable units for the dataset. Further, a selectable object field 525 is provided that allows for the user to select a specific object for the item. In this particular example, the objects are specific aircraft identified by their tail numbers. Therefore, the user selects the tail number of the desired aircraft. In addition, a universal object 530 is provided in the list of objects in this particular example that allows for the user to gain access to all of the technical documentation for the model of aircraft (item). Here, the universal object 530 is provided so that it may be used when the user is engaging in research and/or training on the model of aircraft and not necessarily performing a procedure, task, operation, and/or the like on a specific aircraft.
  • Turning to FIG. 5B, a job field 535 is provided to allow the user to enter a job (e.g., job identifier) with respect to the specific object. In addition, a sign-in mechanism (e.g., a button) 540 is provided that the user may select to sign into the IETM and view the technical documentation for the specific object. As further discussed herein, the user may now be provided with access to the technical documentation and various functionality with respect to the technical documentation in various embodiments.
  • Accordingly, the sign-in functionality provided in various embodiments may allow for tracking and reporting of activities within the IETM. For instance, any activity engaged in by the user once he or she is signed into the IETM may be recorded and viewable via the IETM. For example, the content (e.g., the technical documentation) accessed and viewed by the user may be recorded so that the user's access and use of such content can be monitored. In addition, the user's completion of activities such as procedures, tasks, operations, and the like may be recorded and monitored.
  • For example, FIG. 5C provides a history report 545 the user may view via the IETM on the user's history of accessing and viewing different content (e.g., data modules) in the technical documentation. The history report 545 may be configured in some embodiments to allow the user to select particular content (e.g., a particular data module) from the report 545 to view the content in a separate view pane 550. Depending on the embodiment, the history report 545 may only be provided to the user or may be provided to other personnel such as the user's supervisor so that the supervisor can monitor the user's activities. Other types of reports may be made available to the user such as a daily report 555 shown in FIG. 5D. Again, depending on the embodiment, the daily report 555 may only be provided to the user or may be provided to other personnel such as the user's supervisor. Thus, the availability of certain functionality within the IETM may be provided to the user and others based at least in part on their credentials used to sign into the IETM.
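  • As a minimal sketch of the activity tracking and reporting described above, the following illustration records each data module a user views and returns those entries as a simple history report. The structure, field names, and the example data module code are hypothetical and used for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ActivityLog:
    """Hypothetical record of a signed-in user's activity within the IETM."""
    user: str
    entries: List[dict] = field(default_factory=list)

    def record_view(self, data_module_code: str) -> None:
        # Record each data module the user accesses, with a timestamp.
        self.entries.append({
            "dmc": data_module_code,
            "viewed_at": datetime.utcnow().isoformat(),
        })

    def history_report(self) -> List[dict]:
        # The same entries could back both a history report and a daily report,
        # with the daily report simply filtered by date.
        return list(self.entries)


# Illustrative usage with a made-up user and data module code.
log = ActivityLog(user="jet_mechanic_01")
log.record_view("DMC-EXAMPLE-A-32-10-00-00A-040A-A")
print(log.history_report())
```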
  • Table of Contents Module
  • In various embodiments, the user may be provided with an initial window upon signing into the IETM to view the technical documentation for an item. Accordingly, in particular embodiments, a table of contents for the technical documentation associated with the item may be displayed on the initial window, along with various functionality. In some embodiments, the initial window may include multiple view panes. For instance, in some embodiments, the window may include a first view pane and a second view pane that are displayed on non-overlapping portions of the window, although more than two view panes may be displayed and/or the panes may be displayed on overlapping portions of the window in some instances.
  • In some embodiments, the table of contents may be displayed on a first view pane and may provide a list of topics configured to be selectable to view information on a selected topic. For example, each of the topics may be provided as a hyperlink and/or provided with one or more selection mechanisms such as buttons that a user may select to view additional information on the topic. Depending on the embodiment, the additional information may then be provided for display on another view pane on the window (e.g., on the second view pane) and/or via a separate window. In some embodiments, the separate window displaying the additional information may be superimposed over a portion of the first window displaying the table of contents.
  • As described further herein, other windows provided for display in various embodiments may be configured in the same or similar fashion. Depending on the configuration, these windows may include any number of panes. For instance, the panes may be provided side-by-side on non-overlapping portions of the window or may be provided as overlapping (e.g., superimposed over one another) on the window. In addition, the panes may be displayed in various sizes and dimensions with respect to the window. Further, the panes may be displayed statically and/or dynamically, such as pop-up panes.
  • In addition, any number of separate windows may be displayed at virtually the same time side-by-side or with one window superimposed over a portion of or an entire second window. Here, the window(s) may be displayed in various sizes and dimensions. In addition, in some embodiments, multiple windows may be displayed as superimposed over one another (or portion thereof) in a cascading fashion. Further, such windows may be displayed statically or dynamically such as pop-up windows. Furthermore, a window may be provided in particular embodiments for display in any number of different formats such as, for example, a dialog box, tooltip, infotip, tear-off window, and/or the like.
  • Thus, turning now to FIG. 6, additional details are provided regarding a process flow for facilitating the user's viewing and interacting with the table of contents according to various embodiments. FIG. 6 is a flow diagram showing a table of contents (TOC) module for performing such functionality according to various embodiments of the disclosure.
  • The process flow 600 begins in various embodiments with the TOC module providing a window for display comprising the table of contents in Operation 610. As previously discussed, the table of contents may provide a list of topics on content found within the technical documentation for the item. Accordingly, each of the topics may be selectable (e.g., may be configured as a hyperlink or configured with some type of selection mechanism such as a button) to access content found in the technical documentation for the item.
  • For example, topics may include procedures, tasks, operations, services, checklists, planning, and/or the like performed with respect to the item. For instance, topics may include maintenance procedures and/or tasks performed on the item. Therefore, the maintenance procedure (e.g., an identifier of the maintenance procedure such as a title of the maintenance procedure) may be selected by the user directly from the table of contents to access content found in the technical documentation for the maintenance procedure.
  • In addition, topics may include different components that make up the item. For example, a component of an aircraft is the front landing wheel. Accordingly, components may identify functional and/or physical structures of the item and may be broken down into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like.
  • Further, the table of contents may be displayed in a hierarchical structure in which topics are grouped accordingly with some topics nested within other topics within the hierarchical structure based at least in part on relationships between the different topics. For example, a topic on the front landing wheel of an aircraft may be nested under a topic on the front landing gear assembly for the aircraft in the hierarchical structure of the table of contents. Lastly, the table of contents may provide various lists on other types of information in particular embodiments such as lists of effective data modules, illustrations, tables, parts, orders for parts, annotations, directions, publications, and/or the like.
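  • The hierarchical structure described above can be illustrated, under stated assumptions, as a simple tree of selectable topics. The node fields shown below (title, information code, data module code, children) are illustrative simplifications chosen for this sketch rather than a defined schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TocTopic:
    """A topic in the table of contents; children hold topics nested beneath it."""
    title: str
    info_code: Optional[str] = None   # S1000D-style information code, if applicable
    dmc: Optional[str] = None         # data module code backing the topic (placeholder here)
    children: List["TocTopic"] = field(default_factory=list)


# A topic on the front landing wheel nested under the front landing gear assembly,
# mirroring the example in the text; the DMC value is a placeholder.
toc = TocTopic(
    title="Front landing gear assembly",
    children=[TocTopic(title="Front landing wheel", info_code="040", dmc="DMC-EXAMPLE")],
)
```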
  • The user may select a topic to preview in particular embodiments. For example, the user may use a mouse to click on, right click on, or hover over a topic in the table of contents or use a stylus or finger to select a topic in the table of contents to generate a preview for the topic. Therefore, the TOC module may determine whether input has been received indicating the user has selected a topic to preview in Operation 615. If so, then the TOC module generates the topic preview in Operation 620 and provides the topic preview for display for the user to view in Operation 625.
  • For instance, in particular embodiments, the topic preview may be provided as a separate window for display. Accordingly, the topic preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the selected topic. In some embodiments, the topic preview is configured to provide only a preview of some of the content found in the technical documentation on the topic. For example, the topic preview may be configured in particular embodiments to provide the first five to fifty lines of textual information that the user would be provided with if the user were to select the topic to view the entire content for the topic. In addition, the preview may be superimposed over a portion of the window displaying the table of contents.
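  • A preview limited to a leading portion of a topic's textual content, as described above, might be generated along the lines of the following sketch; the default line limit and the function name are assumptions made for illustration.

```python
def generate_topic_preview(content: str, max_lines: int = 50) -> str:
    """Return only the first few lines of a topic's textual content as a preview."""
    lines = content.splitlines()
    preview = "\n".join(lines[:max_lines])
    if len(lines) > max_lines:
        # Indicate that additional content is available if the topic itself is selected.
        preview += "\n..."
    return preview
```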
  • In some embodiments, the user may be provided with functionality to filter the table of contents. For instance, the content for each of the topics may be associated with metadata. Indeed, the content may be organized based at least in part on S1000D standards. S1000D standards require the content to be configured into data modules representing small, reusable pieces of technical information/data. Accordingly, each data module includes a header section configured to provide identification information and status information for the data module that includes metadata for managing the data module (e.g., source information, security classification, applicability, change history, reason for change, verification status, and/or the like). Here, the header section may include an information code that provides a description of the type of information found in the content of the data module.
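  • For illustration only, the following is a minimal Python sketch of how a data module and its header metadata might be represented in memory; the class and field names are assumptions for this sketch and are not drawn from the S1000D schema or from any particular implementation of the IETM.

```python
from dataclasses import dataclass, field

@dataclass
class DataModuleHeader:
    """Illustrative header metadata for a data module (field names are hypothetical)."""
    dmc: str                       # data module code identifying the module
    info_code: str                 # information code describing the type of content
    security_classification: str   # e.g., "unclassified"
    applicability: str             # context in which the content is valid
    change_history: list = field(default_factory=list)  # reasons for change, verification status, etc.

@dataclass
class DataModule:
    """A small, reusable piece of technical information plus its header."""
    header: DataModuleHeader
    title: str
    content: str                   # textual and/or media content for the topic
```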
  • Therefore, in particular embodiments, functionality is provided to allow the user to filter the table of contents using the information codes for the different topics (e.g., data modules for the topics). Thus, in these particular embodiments, the TOC module determines whether input has been received indicating the user would like to filter the table of contents based at least in part on an information code (InfoCode) in Operation 630. If so, then the TOC module filters the table of contents and provides the filtered table for display in Operation 635.
  • In addition, in particular embodiments, functionality is provided to allow the user to view the table of contents in a source format as opposed to a format adhering to S1000D standards. In many instances, the source format may be preferable for a user because the source format may include labeling of the content that is better suited for searching than the formatting of the content under S1000D standards. For example, S1000D standards require the figures (e.g., illustrations) found in a data module to be numbered always beginning with one. Therefore, if content from a source is partitioned into multiple data modules, the original labeling of figures may be lost. As a result, the content may end up being displayed having multiple figures labeled the same (e.g., may end up having multiple figures labeled as one). The same can happen with respect to other labels found in the source content such as chapters, headings, sub-headings, sections, sub-sections, and/or the like. Therefore, the TOC module determines whether input has been received indicating the user would like to view the table of contents showing the source content formatting in Operation 640. If so, then the TOC module generates and provides the table of contents with the source content formatting for display in Operation 645.
  • Further, in particular embodiments, functionality (e.g., a search mechanism displayed on the window) is provided that allows the user to search the table of contents. As discussed further herein, the search functionality may allow the user to provide criteria (e.g., one or more search terms) that can then be used to identify topics based at least in part on the criteria. In some embodiments, a search window is provided on which the user can enter search terms and view the search results. Therefore, in these embodiments, the TOC module determines whether input has been received indicating the search functionality has been selected by the user in Operation 650. If so, then the TOC module enables such functionality in Operation 655.
  • Furthermore, in particular embodiments, functionality is provided to allow the user to copy the data module code (DMC) for a topic. The data module code is part of the metadata (e.g., header section) of a data module that holds the content for a topic. The DMC includes several characters identifying information about the data module such as the item to which the content applies, the functional or physical breakdown of the item associated with the content, the specific type of information found in the content, and/or the like. Therefore, in these particular embodiments, the TOC module determines whether input has been received indicating the user would like to copy the DMC for a particular topic (e.g., particular data module) in Operation 660. For example, the user may select a topic in the table of contents using shift click to copy the DMC for the topic. If so, then the TOC module copies the DMC in Operation 665. For instance, the TOC module may copy the DMC from a URL displayed via the IETM viewer (e.g., for the corresponding data module). In some embodiments, the user may then send the URL in some type of communication (e.g., in an email) to another individual. For example, the user may wish to send a message to an individual who is managing the content of the data module asking the individual to make a change to the data module. Therefore, the user may wish to include the DMC for the data module to identify which data module the user is referring to.
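  • As a non-authoritative sketch of the copy operation just described, the following Python function extracts a DMC from a viewer URL; the assumption that the DMC is carried in a "dmc" query parameter (and the example DMC value in the docstring) is purely illustrative and not specified by the disclosure.

```python
from urllib.parse import urlparse, parse_qs

def copy_dmc_from_url(viewer_url: str) -> str:
    """Extract the data module code (DMC) from an IETM viewer URL.

    Hypothetical URL layout: the DMC is carried in a 'dmc' query parameter,
    e.g. https://ietm.example/viewer?dmc=DMC-ITEM-A-32-10-01-00A-040A-D
    """
    query = parse_qs(urlparse(viewer_url).query)
    dmc_values = query.get("dmc", [])
    if not dmc_values:
        raise ValueError("No DMC found in the viewer URL")
    return dmc_values[0]  # Operation 665: the copied DMC, e.g. for pasting into an email
```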
  • Finally, the TOC module is configured in various embodiments to determine whether input has been received indicating the user has selected a particular topic to view in Operation 670. For instance, in particular embodiments, the TOC module may be configured to determine that the user is using a first type of selection mechanism (e.g., hovering over a topic in the table of contents) to generate and provide a topic preview of the content for the topic and to determine that the user is using a second, different type of selection mechanism (e.g., a mouse click on the topic in the table of contents) to generate and provide the content found in the technical documentation for the topic. Again, the selection mechanism may involve the user using some type of control such as a mouse to click on, right click on, or hover over the topic in the table of contents or using a stylus or finger to select a topic in the table of contents. Therefore, if the TOC module determines the user has selected a topic to view in the IETM, then the TOC module provides the topic for display in Operation 675. At that point, the TOC module determines whether to exit in Operation 680. If not, then the TOC module returns to Operation 610 and provides the table of contents.
  • Accordingly, depending on the embodiment, the content for the topic may be displayed on the same or a different window. For instance, in particular embodiments, the content for the topic may be displayed in a separate view pane (e.g., second view pane) on the window. In other embodiments, the content may be displayed on a different window while the window displaying the table of contents may still be available for viewing. For example, the window displaying the table of contents may be available for immediate viewing in response to the user selecting a mechanism such as a button displayed on a toolbar and/or a view tab via the IETM viewer.
  • Turning briefly to FIG. 7, an example of a table of contents displayed according to various embodiments is shown. Here, the table of contents includes a preface 700 of different lists along with a list of various topics. In this example, the user has selected a particular topic 715 to generate a preview for the topic that is being displayed on a separate window 720. In addition, the window provides a selectable field 725 (e.g., a dropdown menu control) to allow the user to filter the table of contents based at least in part on information codes. Further, the preview window 720 in this example provides a selection mechanism (e.g., a button) 730 to add a bookmark for the preview. Bookmarking the preview may allow the user to recall the preview and/or content for the associated topic at a later time to view. Accordingly, such a bookmark may be recorded and saved in the IETM for the user.
  • Further detail is now provided with respect to functionality available in various embodiments for the table of contents. Specifically, different modules are discussed that may be invoked in various embodiments by the TOC module to facilitate such functionality.
  • Filtering Module
  • Turning now to FIG. 8, additional details are provided regarding a process flow for filtering the table of contents based at least in part on an information code according to various embodiments. FIG. 8 is a flow diagram showing a filtering module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the filtering module may be invoked by another module to filter the table of contents such as, for example, the TOC module previously described. However, with that said, the filtering module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • Accordingly, the filtering module may be invoked in some embodiments as a result of the user identifying a particular information code to use in filtering the table of contents. Here, the technical documentation may include publication data (e.g., a publication module). In these particular embodiments, the publication data may provide a list of technical data (e.g., every data module) found in the publication of the technical documentation for the item in the order in which the publication delivers the data to the IETM. Therefore, the publication data may provide a navigation structure for the IETM in constructing the table of contents.
  • Therefore, the process flow 800 may begin with the filtering module referencing the publication data in Operation 810. The filtering module then selects specific data (e.g., a data module) found in the publication data in Operation 815. In addition to identifying the technical data found in the publication of the technical documentation, the publication data may also include metadata (e.g., the DMC) for the technical data (e.g., for each of the data modules). Therefore, the filtering module reads the information code for the selected data in Operation 820. The filtering module then determines whether the information code for the selected data matches the information code selected by the user to filter the table of contents in Operation 825. If so, then the filtering module marks the technical data for displaying as a topic in the filtered table of contents in Operation 830.
  • At this point, the filtering module determines whether the publication module contains additional technical data (e.g., another data module) in the list of technical data in Operation 835. If so, then the filtering module returns to Operation 815, selects the next technical data found in the list (e.g., the next data module), and repeats the operations just described for the newly selected technical data. Once all of the technical data have been processed in the list, the filtering module then generates and provides the results for display to the user in Operations 840 and 845.
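  • A minimal Python sketch of the filtering loop just described follows; it assumes the publication data is available as an ordered list of data module objects shaped like the earlier sketch, which is an assumption for illustration rather than the actual data model.

```python
def filter_table_of_contents(publication_modules, selected_info_code):
    """Mark the data modules whose information code matches the user's filter.

    publication_modules: ordered list of data modules delivered by the
    publication (referenced in Operation 810 and iterated in Operation 815).
    selected_info_code: the information code chosen by the user.
    """
    filtered_topics = []
    for module in publication_modules:                       # Operations 815/835: walk the list
        if module.header.info_code == selected_info_code:    # Operations 820/825: read and compare
            filtered_topics.append(module)                   # Operation 830: mark for display
    return filtered_topics                                   # Operations 840/845: results for display
```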
  • Turning now to FIG. 9, an example of the results of filtering the table of contents based at least in part on an information code is provided. In this example, the table of contents has been filtered based at least in part on the information code for troubleshooting 900. As the reader can see, only those topics 910 dealing with troubleshooting are shown under the topic heading fuel and topic sub-headings distribution and general. Thus, the filter function provided in various embodiments allows for the user to filter down the topics found in the technical documentation in a faster, more efficient manner so that the user can more easily and quickly identify needed content in the technical documentation.
  • Source Format Tagging Module
  • As previously described, functionality may be provided in some embodiments to allow the user to view the table of contents in a source format as opposed to a format adhering to S1000D standards. As noted, the source format may be preferable for a user because the source format may include labeling of the content that is better suited for searching than the formatting of the content under S1000D standards.
  • Turning now to FIG. 10, additional details are provided regarding a process flow for tagging content with the formatting found in the source of the content according to various embodiments. FIG. 10 is a flow diagram showing a source format tagging module for performing such functionality according to various embodiments of the disclosure. Accordingly, the source format tagging module may be executed in particular embodiments by an entity such as the management computing entity 100 and/or a user computing entity 110 engaged in importing a publication of technical documentation for an item into the IETM. In this instance, the publication may include content from a source in a format such as portable document format (PDF), a standard generalized markup language (SGML) format, and/or the like. The source may include formatting for the content such as identifiers (e.g., numbering and/or textual descriptions) for chapters, headings, sub-headings, sections, tables, figures, and/or the like.
  • Therefore, the process flow 1000 begins with the source format tagging module reading the information from such a source in Operation 1010. The source format tagging module then selects the format structure from the information in Operation 1015 and tags the appropriate portion of the content with the information in Operation 1020. For instance, in particular embodiments, the source format tagging module may record metadata along with the content from the source in the IETM that includes the source formatting and information to format the content appropriately. For example, the content may include a reference to a figure and the source format tagging module may record the format (e.g., the label) for the figure in metadata along with the content in the IETM. While in another example, the content found in the source may include a chapter title. Therefore, the source format tagging module may record the title of the chapter in the metadata along with the content in the IETM.
  • At this point, the source format tagging module determines whether additional format structure is found in the content in Operation 1025. If so, then the source format tagging module returns to Operation 1015, selects the next format structure found in the content, and tags the content with the format structure accordingly. As a result, the content can be displayed in various embodiments in its original format structure from the source of the content.
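  • The following Python sketch illustrates, under assumed input shapes, how source formatting might be recorded as metadata tags during import; the (kind, label, text) tuple form of the parsed source is a hypothetical intermediate representation, not a format defined by the disclosure.

```python
def tag_source_format(source_blocks):
    """Tag imported content with the format structure found in its source.

    source_blocks: assumed list of (kind, label, text) tuples parsed from a
    PDF/SGML source, e.g. ("chapter", "Chapter 3", "Fuel Distribution") or
    ("figure", "Figure 3-2", "Fuel pump assembly").
    """
    tagged_content = []
    for kind, label, text in source_blocks:                  # Operations 1015/1025: walk the structures
        tagged_content.append({
            "content": text,
            "format_tags": [{"kind": kind, "label": label}], # Operation 1020: record the source label
        })
    return tagged_content
```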
  • Source Formatting Module
  • Turning now to FIG. 11, additional details are provided regarding a process flow for formatting content based at least in part on a format structure found in the source of the content according to various embodiments. FIG. 11 is a flow diagram showing a source formatting module for performing such functionality according to various embodiments of the disclosure. In this instance, the user may wish to view the table of contents with the topics shown with the format structure found in the source of the topics. Therefore, in particular embodiments, the source formatting module may be invoked by another module to display the content with the format structure from the source such as, for example, the TOC module previously described. However, with that said, the source formatting module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 1100 begins with the source formatting module reading a format tag for the content in Operation 1110. As previously discussed, the content may be tagged in particular embodiments by including metadata (e.g., tags) along with the content identifying various parts of the format structure found in the source of the content. For example, the metadata may include one or more tags providing identifiers (e.g., numbering and/or textual descriptions) for chapters, headings, sub-headings, sections, tables, figures, and/or the like found in the source of the content.
  • The source formatting module then formats the content based at least in part on the format structure found in the tag in Operation 1115. For example, the format structure may identify a subject matter heading for the content. Therefore, the source formatting module may format the content with the subject matter heading. Accordingly, in particular instances, the content may then be found in the table of contents as a topic having the subject matter heading as a title. While in other instances, the content itself may be displayed on a window with the subject matter heading.
  • At this point, the source formatting module determines whether another tag exists for the content in Operation 1120. If so, then the source formatting module returns to Operation 1110, reads the next tag for the content, and formats the content based at least in part on the format structure found in the tag.
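  • A companion sketch for the source formatting module is shown below; it consumes the hypothetical tagged form produced by the previous sketch and simply prepends each recorded source label to the displayed title, which is one plausible rendering among many.

```python
def apply_source_format(tagged_content):
    """Render content using the format structure recorded in its tags.

    Mirrors Operations 1110-1120: read each format tag for the content and
    format the content with the source label (e.g., a section heading).
    """
    rendered = []
    for entry in tagged_content:
        title = entry["content"]
        for tag in entry.get("format_tags", []):   # Operation 1110: read the next tag
            title = f'{tag["label"]}  {title}'     # Operation 1115: apply the source label
        rendered.append(title)                     # e.g., "Figure 3-2  Fuel pump assembly"
    return rendered
```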
  • Turning to FIG. 12A, an example is provided of a table of contents 1200 formatted according to S1000D standards. As shown in the figure, all of the topics found under the heading flight manual are provided in a generic format with only a title for each topic. However, in FIG. 12B, the table of contents 1210 is now formatted using the format structure found in the source for the flight manual. As the reader can see, each of the topics is now listed with a section heading as found in the source for the flight manual. Such section headings may allow for the user to more easily distinguish between the different content provided by the source.
  • Another example is shown in FIG. 12C. In this example, content from a source, in this instance a PDF file, is being displayed on a window with source formatting according to various embodiments. Here, the format structure of the content shown on the window matches the format structure of the content found in the source PDF file. Specifically, the title designator for the content 1215 has been included along with the title of the content 1220 shown on the window. In addition, the heading 1225 and sub-headings 1235, 1245 from the source PDF file are shown as a heading 1230 and sub-headings 1240, 1250 in the content on the window. Here, in the example, the user may be able to better navigate and understand the content as a result of viewing the content in the format structure found in the source PDF file.
  • Search Module
  • As previously noted, the user may conduct a search of the elements (e.g., topics and/or lists) found in the table of contents based at least in part on criteria (e.g., one or more search terms). Turning now to FIG. 13, additional details are provided regarding a process flow for searching the table of contents according to various embodiments. FIG. 13 is a flow diagram showing a search module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the search module may be invoked by another module to search the table of contents such as, for example, the TOC module previously described. For instance, a user may select a mechanism (e.g., button) provided on a window displaying the table of contents and as a result, the TOC module may invoke the search module. However, with that said, the search module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 1300 begins with the search module providing a search window for display to the user in Operation 1310. Accordingly, the search window may be configured in a similar fashion as the window displaying the table of contents. For instance, in some embodiments, the search window may include one or more view panes for displaying search results according to different criteria (e.g., different features of the elements found in the table of contents). In particular embodiments, the search window provides a freeform field that allows the user to type in one or more search terms to use in searching the table of contents. In some embodiments, the search module may be configured to provide predictions of search terms to the user based at least in part on the characters typed into the freeform field.
  • Therefore, in these embodiments, the search module determines whether input has been received indicating the user has typed one or more characters into the freeform field in Operation 1315. If so, then the search module provides one or more predictions of search terms (e.g., autocomplete) to the user in Operation 1320. As discussed further herein, the predictions may be based at least in part on different grounds depending on the embodiment. For example, the search module may be configured to provide the first five predictions identified for the entered characters alphabetically, based at least in part on frequency of use, based at least in part on recent trends, and/or the like.
  • The search module then determines whether input has been received indicating the user has initiated a search based at least in part on the entered search term(s) in Operation 1325. For instance, the search window may include a selection mechanism (e.g., a button) that the user can select to initiate the search. Therefore, the search module determines whether input has been received indicating the user has selected the selection mechanism. If the user has initiated the search, then the search module generates search results based at least in part on the entered search term(s) in Operation 1330. In addition, in some embodiments, the user may indicate other criteria for conducting the search.
  • For example, the search window may include a field that allows the user to identify applicability requirements for the search results. Applicability generally pertains to the context for which the results (e.g., information found in topics) are valid. The context can be associated with a physical configuration of the item, but can also include other aspects such as support equipment availability and/or environmental conditions. In addition, the search window may include a field that allows the user to identify the type of content required for the search results. The content generally pertains to the technical information provided by the search result. For example, different types of content may include procedural, process, wiring, maintenance, learning, parts, checklists, and/or the like. Further, the search window may include other mechanisms that allow the user to identify criteria for filtering the search results such as information code.
  • Accordingly, in various embodiments, the search module is configured to search different features of the elements found in the table of contents to identify the search results. For instance, in particular embodiments, the search window is configured to provide the search results with respect to table of contents, data module, and part name and/or number. Here, the search module searches the table of contents to identify those topics with the search term(s) in the title of the topic. In addition, the search module searches the various data (e.g., data modules) that make up the technical documentation to identify data in which the search term(s) are found in the textual information for the data. Further, the search module searches the part names and/or numbers of the parts used in the item to identify those parts with the search term(s) in the part names and/or numbers.
  • Accordingly, in these particular embodiments, the search module may format the search results with respect to table of contents, data modules, and parts (e.g., part names and/or numbers) in Operation 1335. The search module may then provide the search results for displaying in Operation 1340. Here, the search window may be configured to show the search results with respect to the three different bases: table of contents; data modules; and parts. For example, the search window may provide a view pane with a tab for each basis that the user may select to view the search results for the basis.
  • At this point, the search module determines whether input has been received indicating the user wishes to exit the search window in Operation 1345. For example, the user may select one of the search results (e.g., a topic) to view or the user may simply select a mechanism to exit the search window. If so, then the search module exits.
  • It is noted that in some embodiments, the search results are not necessarily lost (e.g., closed) as a result of the user exiting the search window. Instead, the results may be maintained while the user is still actively signed into the IETM. Such functionality allows for the user to later return to his or her search results to further view and use accordingly. For example, the user may initially view a data module listed in the search results and then later decide to view the search results again because the data module did not have the information the user was looking for. Therefore, the search results may be maintained so that the user can later return to them if desired. In some instances, the IETM may be configured to save the search results even past the user's current sign-in to the IETM.
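  • As a rough illustration of the three-basis search described above, the following Python function performs a simple case-insensitive substring match over topic titles, data module text, and part names/numbers; the in-memory input shapes and the naive matching are assumptions, and a deployed IETM would likely rely on a richer search index.

```python
def search_publication(search_terms, toc_topics, data_modules, parts):
    """Return search results grouped by the three bases: table of contents,
    data modules, and parts (Operations 1330/1335).

    toc_topics: list of topic title strings.
    data_modules: objects exposing a .content text attribute (assumed shape).
    parts: list of (part_name, part_number) pairs.
    """
    terms = [t.lower() for t in search_terms]

    def hit(text):
        return any(t in text.lower() for t in terms)

    return {
        "table_of_contents": [topic for topic in toc_topics if hit(topic)],
        "data_modules": [dm for dm in data_modules if hit(dm.content)],
        "parts": [(name, num) for name, num in parts if hit(name) or hit(num)],
    }
```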
  • Predictions Module
  • Turning now to FIG. 14, additional details are provided regarding a process flow for providing predictions based at least in part on search term(s) entered by a user according to various embodiments. FIG. 14 is a flow diagram showing a predictions module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the predictions module may be invoked by another module to provide predictions such as, for example, the search module previously described. However, with that said, the predictions module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 1400 begins with the predictions module reading (e.g., receiving input of) the character(s) typed in by the user on the search window in Operation 1410. In various embodiments, a search index is maintained in the IETM that is constructed from the dataset for the technical documentation of the item. Here, the search index provides a mapping of characters (e.g., alphanumeric) to various terms found in the technical documentation for the item. Therefore, in these embodiments, the predictions module searches the index to identify predictions based at least in part on the entered character(s) in Operation 1415.
  • The predictions module then identifies and orders the predictions based at least in part on certain grounds in Operations 1420 and 1425. As previously discussed, the grounds for ordering the predictions may differ depending on the embodiment. For example, the predictions module may order the predictions based at least in part on alphabetical order, frequency of use, recent trends, and/or the like. The predictions module provides the top predictions in Operation 1430. For instance, the predictions module may be configured to provide the top five, ten, and/or the like predictions that are selectable by the user to automatically complete the search terms in the freeform field provided on the search window.
  • At this point, the predictions module determines whether input has been received indicating the user has selected a prediction in Operation 1435. If not, then the predictions module returns to Operation 1410 to read any further characters entered by the user in the freeform field and to make further predictions accordingly. Once the user selects one of the predictions or finishes typing in characters in the freeform field, then the predictions module exits.
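  • The prediction (autocomplete) step might look like the following Python sketch; the flat term collection and term-to-frequency mapping are stand-ins for whatever index the IETM actually maintains, and the ordering (frequency of use, then alphabetical) is just one of the grounds mentioned above.

```python
def predict_search_terms(prefix, indexed_terms, usage_counts, top_n=5):
    """Offer autocomplete predictions for the characters typed so far.

    indexed_terms: iterable of terms found in the technical documentation
    (Operation 1415 searches this index for the entered characters).
    usage_counts: assumed mapping of term -> frequency of use.
    """
    prefix = prefix.lower()
    candidates = [term for term in indexed_terms if term.lower().startswith(prefix)]
    candidates.sort(key=lambda term: (-usage_counts.get(term, 0), term.lower()))  # Operations 1420/1425
    return candidates[:top_n]                                                     # Operation 1430: top predictions
```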
  • FIG. 15A provides an example of a search window 1500 displaying search results according to various embodiments. In this example, the search results are being displayed on a view pane 1510 with respect to data modules that have content containing the search term "assembly" 1515. Note that view panes 1520, 1525 are also provided for the table of contents and part numbers that are hidden on the window 1500 behind the data modules view pane 1510. Turning now to FIG. 15B, the search results are now shown as filtered based at least in part on information code 1530. Here, the user has selected a mechanism 1535 provided on the search window 1500 indicating to filter the results based at least in part on information code. In addition, a separate tab 1540, 1545, 1550 is provided for each of table of contents view pane 1520, data modules view pane 1510, and parts view pane 1525, respectively, to provide the user with access to the search results for the three different bases.
  • Generate List of Parts Module
  • A list of parts for an item may be provided in the IETM in various embodiments. In these particular embodiments, this list of parts may be generated based at least in part on information/data provided in a publication of the technical documentation of the item. Specifically, the list of parts may be generated based at least in part on the illustrated parts breakdown (IPB) found in the publication. Thus, in various embodiments, a list of parts used by the item may be generated without the need to gather such a list from the suppliers of the parts or any other third-party source outside the publication of the technical documentation for the item.
  • Turning now to FIG. 16, additional details are provided regarding a process flow for generating a list of parts for the item according to various embodiments. FIG. 16 is a flow diagram showing a generate list of parts module for performing such functionality according to various embodiments of the disclosure. Accordingly, the generate list of parts module may be executed in particular embodiments by an entity such as the management computing entity 100 and/or a user computing entity 110 engaged in importing a publication of the technical documentation for an item.
  • The process flow 1600 begins with the generate list of parts module reading the IPB provided with the publication in Operation 1610. Here, the IPB identifies the parts found in the technical documentation for which one or more illustrations (e.g., graphics and/or other media objects) are included in the technical documentation. For example, a data module for a particular maintenance task may be found in the publication for the technical documentation that references a particular part used in a repair that is detailed in the maintenance task. Accordingly, one or more illustrations of installing the part may be included along with the data module that can be displayed to a user as the user views the maintenance task via the IETM. Therefore, a reference to the one or more illustrations may be provided in the IPB.
  • Thus, the generate list of parts module identifies the parts (e.g., part names and/or numbers) found in the IPB in Operation 1615 and generates the list of parts based at least in part on the parts found in the IPB in Operation 1620. Accordingly, as detailed further herein, the generated lists of parts may then be viewed by a user via the IETM.
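  • The following Python sketch shows one way the list of parts could be assembled from the IPB; the dictionary-shaped IPB entries are an assumed intermediate form of the imported publication data, not a structure defined by the disclosure.

```python
def generate_list_of_parts(ipb_entries):
    """Build the list of parts from the illustrated parts breakdown (IPB).

    ipb_entries: assumed iterable of records read from the publication
    (Operation 1610), each carrying a part name, part number, and references
    to the illustrations depicting the part. Duplicates are collapsed so each
    part appears once in the generated list (Operations 1615/1620).
    """
    parts = {}
    for entry in ipb_entries:
        part = parts.setdefault(entry["part_number"], {
            "part_name": entry["part_name"],
            "part_number": entry["part_number"],
            "illustrations": [],
        })
        part["illustrations"].extend(entry.get("illustrations", []))
    return list(parts.values())
```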
  • List of Parts Module
  • Accordingly, a user may request to view the list of parts for an item via the IETM. For example, a selection mechanism may be provided such as a button provided on a toolbar to allow the user to request to view the list of parts for the item. As a result, a window may be provided for displaying the list of parts. Accordingly, in particular embodiments, the window may be configured similar to the other windows mentioned herein.
  • For instance, in some embodiments, the window may be configured to have a first view pane displaying the list of parts and a second view pane that is used to display various information on a part found in the list of parts. The window may be configured to display the view panes on non-overlapping portions of the window. In addition, each part displayed in the list of parts may be selectable (e.g., may be displayed as a hyperlink and/or displayed with one or more selection mechanisms such as buttons) to provide information on the part. In some embodiments, such information may be displayed on a view pane (e.g., the second view pane) and/or may be displayed on a separate window. As now further detailed, the window may provide the user with various functionality that may be used with respect to the list of parts.
  • Turning now to FIG. 17, additional details are provided regarding a process flow for providing functionality for the list of parts according to various embodiments. FIG. 17 is a flow diagram showing a list of parts module for performing such functionality according to various embodiments of the disclosure. Accordingly, the list of parts module may be executed in particular embodiments as a result of a user who is viewing the list of parts via the IETM invoking various functionality.
  • The process flow 1700 begins with the list of parts module determining whether input has been received indicating a selection of a part by the user in Operation 1710. As noted, in particular embodiments, each part in the list of parts may be selectable. For example, each part in the list of parts may be displayed as a hyperlink and/or along with some type of selection mechanism (e.g., a button) to allow the user to select the part from the list. Accordingly, in response to determining the user has selected a part, the list of parts module provides media content for the part in Operation 1715.
  • As previously noted, the media content may be made up of one or more illustrations that may include 2D and/or 3D graphics, as well as other media objects such as images and/or videos that may be provided in the technical documentation for the item. Therefore, in particular embodiments, the list of parts module may be configured to retrieve the media content and provide the list of parts for display on a first view pane of the window and the media content for the selected part on a second view pane of the window. As noted, the window may be configured so that the first and second view panes are displayed on non-overlapping portions of the window. In addition, in particular embodiments, the part may be highlighted in the media content so that the user can easily identify it in the content.
  • Further, the selected part may be displayed in the list of parts using a format to demonstrate the part has been selected such as, for example, the selected part may be highlighted, shown in a particular color, shown with a border, and/or the like. Furthermore, functionality may be provided for the selected part such as, for example, a selection mechanism that provides functionality to allow the user to order the part from the IETM.
  • If the user has not selected a part in the list of parts, then the list of parts module determines whether input has been received indicating the user has identified one or more level indicators for relisting the list of parts in Operation 1720. As previously noted, each of the parts may be associated with one or more components of the item for which the technical documentation is being viewed by the user via the IETM. In various embodiments, each of these components may be identified with a functional and/or physical structure of the item such as assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like. Therefore, the user may be interested in viewing the parts in the list of parts broken down into these levels of functional and/or physical structure. If that is the case, then the list of parts module relists the list of parts based at least in part on the levels identified (e.g., selected) by the user and provides the relisted list of parts for display on the window in Operation 1725.
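  • For illustration, relisting the parts by the selected level indicators could be sketched as follows in Python; the per-part "breakdown" mapping is a hypothetical attribute standing in for however the functional and/or physical structure is actually recorded.

```python
def relist_parts_by_level(parts, selected_levels):
    """Group the list of parts by the levels the user has identified (Operation 1720).

    parts: list of part records, each assumed to carry a "breakdown" mapping
    such as {"assembly": "Front Landing Gear", "subassembly": "Wheel"}.
    selected_levels: level names chosen by the user, e.g. ["assembly"].
    """
    grouped = {}
    for part in parts:
        key = tuple(part["breakdown"].get(level, "unassigned")
                    for level in selected_levels)
        grouped.setdefault(key, []).append(part)   # Operation 1725: relisted for display
    return grouped
```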
  • Accordingly, each of the parts in the list of parts may display various information for the part that may be selectable to retrieve and view search results on additional information found in the technical documentation for the part. For instance, each of the parts may display a part name and/or number for the part that is selectable (e.g., that is displayed as a hyperlink and/or along with a selection mechanism such as a button); when selected by the user, a preview is generated and displayed providing results on textual information and/or media content (e.g., illustrations and/or other media objects) found in the technical documentation for the selected part.
  • For example, the user may use a mouse to click on, right click on, or hover over a part in the list of parts or use a stylus or finger to select a part in the list of parts to generate a preview for the part. Therefore, the list of parts module determines whether input has been received indicating the user has selected a part name and/or number for a part to generate a preview in Operation 1730. If so, then the list of parts module generates a preview of results based at least in part on information on the part found in the technical documentation for the item in Operation 1735 and provides the preview for display in Operation 1740.
  • In particular embodiments, the part preview may be provided as a separate window. For instance, in some embodiments, the preview window may be superposed over a portion of the window displaying the list of parts. Accordingly, the part preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the selected part. In some embodiments, the part preview is configured to provide only a preview of some of the content found in the technical documentation on the part. In addition, various components of the results may be selectable to access further information.
  • Although not specifically shown in FIG. 17, other information may be retrieved and displayed in a preview for the part in some embodiments. Specifically, each of the parts in the list of parts may be associated with one or more commercial and government entity (CAGE) codes and/or one or more source, maintenance, and recovery (SMR) codes. In general, these codes identify a supplier for the part, although other types of supplier identifiers may be used. In particular embodiments, these codes may be displayed along with each part in the list of parts on the window. In addition, each of these codes may be selectable on the window (e.g., displayed as a hyperlink and/or associated with a selection mechanism) to allow the user to view a preview displaying information on the particular supplier associated with the code. For example, the user may use a mouse to click on, right click on, or hover over a code for a part or use a stylus or finger to select a code for a part to generate a preview. Therefore, the list of parts module may determine whether input has been received indicating the user has selected a CAGE or SMR code for a part. If so, then the list of parts module generates a preview for the supplier associated with the selected CAGE or SMR code and provides the preview for the user to view.
  • Similar to the part preview, the supplier preview may be provided as a separate window. For instance, in some embodiments, the preview window may be superposed over a portion of the window displaying the list of parts. Accordingly, the supplier preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the supplier. In some embodiments, the supplier preview is configured to provide only a preview of some of the content found in the technical documentation on the supplier. In addition, various components displayed on the preview may be selectable to access further information.
  • Similarly, related maintenance procedures and/or tasks that mention the part may be provided for each part in the list of parts and may be selectable. For example, the user may use a mouse to click on, right click on, or hover over a maintenance procedure and/or task for a part or use a stylus or finger to select a maintenance procedure and/or task for a part to generate a preview. Therefore, the list of parts module may determine whether input has been received indicating the user has selected a maintenance procedure and/or task related to a part. If so, then the list of parts module generates a preview for the related maintenance procedure and/or task and provides the preview for the user to view.
  • Again, similar to the part and supplier previews, the maintenance procedure and/or task preview may be provided as a separate window. For instance, in some embodiments, the preview window may be superposed over a portion of the window displaying the list of parts. Accordingly, the maintenance procedure and/or task preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the maintenance procedure and/or task. In some embodiments, the preview is configured to provide only a preview of some of the content found in the technical documentation on the maintenance procedure and/or task. In addition, various components displayed on the preview may be selectable to access further information.
  • Further, as previously noted, functionality may be provided in some embodiments that allows the user to order a selected part from the IETM. As discussed further herein, this functionality provides an order form that can then be populated and submitted by the user to order the part. Therefore, in these particular embodiments, the list of parts module determines whether input has been received indicating the user would like to order a selected part in Operation 1745. If so, then the list of parts module enables the order part functionality in Operation 1750.
  • Finally, in particular embodiments, the list of parts module may provide functionality to allow the user to view other items, besides the item whose technical documentation the user is currently viewing, that also use a selected part in the list of parts. Here, a mechanism may be displayed along with the selected part that can be used to display a list of other items that also use the part. For example, a selectable plus sign may be provided that the user may use a mouse to click on, right click on, hover over, and/or the like to display the list of other items that also use the part.
  • Therefore, in these particular embodiments, the list of parts module determines whether input has been received indicating the user would like to view the list of other items that use a selected part in Operation 1755. If so, then the list of parts module generates a preview displaying the list of other items that use the selected part in Operation 1760 and provides the preview for the user to view in Operation 1765. At this point, the list of parts module determines whether to exit in Operation 1770. If not, then the list of parts module returns to Operation 1710 to determine whether input has been received indicating a selection of a part by the user.
  • Again, the preview may be provided as a separate window. For instance, in some embodiments, the preview window may be superposed over a portion of the window displaying the list of parts. Accordingly, the preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the list of other items. In some embodiments, the preview is configured to provide only a preview of some of the content found in the technical documentation on the list of other items. In addition, various components displayed on the preview may be selectable to access further information.
  • Accordingly, such a list of items may be quite helpful to the user under certain circumstances. For example, the user may be maintenance personnel who is tasked with performing certain maintenance on an object such as an aircraft. Therefore, the user may have signed into the IETM to view the technical information for the type of aircraft. Specifically, the user may have signed into the IETM to view documentation on the maintenance task he or she is to perform on the aircraft. The documentation on the maintenance task may identify a particular part needed in performing the task. However, the user may determine that the particular part is not currently in stock. Therefore, in this instance, the user may view the list of parts, select the particular part in the list, and generate and display the preview showing other types of aircraft that also use the particular part. As a result, the user may be able to obtain the part from inventory for another type of aircraft and/or may be able to use the part from another aircraft to perform the maintenance task instead of waiting for the part to be ordered and received.
  • FIG. 18A provides an example of a window 1800 displaying a list of parts according to various embodiments. In this example, the window 1800 provides a first view pane 1810 displaying the list of parts for a particular item (e.g., platform 1810) in which a particular part 1815 found on the list has been selected. As a result, the window 1800 in this example provides a second view pane 1820 displaying an illustration with the selected part 1825 highlighted in the illustration. Further, a mechanism is provided for displaying a window 1830 providing functionality to perform with respect to the selected part 1825 such as ordering the part 1825.
  • Turning now to FIG. 18B, an example of a mechanism 1835 that can be used by a user in various embodiments in selecting identifiers for levels for relisting the list of parts is demonstrated. Here, the mechanism 1835 is provided as a dropdown menu control that allows the user to relist the list of parts according to part associated with an end item, component, major assembly, assembly, and/or subassembly. For instance, in this example, the user has indicated to relist the list of parts according to assembly 1840, but not according to subassembly 1845.
  • Finally, FIG. 18C provides an example of a preview 1850 displaying the information for a supplier as a result of the user selecting a CAGE code associated with a part in the list of parts according to various embodiments. Likewise, FIG. 18D provides an example of a preview 1855 displaying a list of other items that use a selected part according to various embodiments.
  • Order Part Module
  • As previously noted, various embodiments provide functionality to allow a user to order a part from the IETM. Turning now to FIG. 19, additional details are provided regarding a process flow for ordering a part from the IETM according to various embodiments. FIG. 19 is a flow diagram showing an order part module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the order part module may be invoked by another module to order a part from the IETM such as, for example, the list of parts module previously described. For instance, a user may select a mechanism (e.g., button) provided for a selected part on a window displaying the list of parts and as a result, the list of parts module may invoke the order part module. However, with that said, the order part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 1900 begins with the order part module reading the part number for the part in Operation 1910. Here, for example, the part number may be provided to the order part module from another module such as the list of parts module. While in other instances, the order part module may read the part number (e.g., provided as input) from some type of window being displayed. The part number serves as an identifier for the part. Therefore, depending on the embodiment, the part number may be in various forms such as, for example, an alphanumeric, and may include characters such as dashes, underscore, ampersand, commercial at sign, and/or the like. Those of ordinary skill in the art can envision other characters and/or symbols that may be used in a part number in light of this disclosure.
  • The order part module then identifies a system for the item for which the part will be used in Operation 1915. Accordingly, the item is generally the item related to the technical documentation currently being viewed by the user through the IETM. However, in some embodiments, the user may identify a specific item that is not necessarily the item associated with the technical documentation currently active for the IETM.
  • Regardless, many of the items may be associated with a backend system that is used in managing the item. For example, the item may involve a type of aircraft used by the military. Here, the military's backend system used in managing the individual aircraft for the type of aircraft may normally be used in ordering parts for the aircraft. This backend system may have a specific electronic form that is used in ordering parts for the aircraft. Accordingly, forms for the different systems may be available in the IETM and the order part module selects the appropriate form based at least in part on the system associated with the item in Operation 1920.
  • The order part module then queries a stock number for the part in Operation 1925. The stock number is often used in identifying the physical location where a particular part is stored in a warehouse and/or inventory. Similar to a part number, the stock number serves as an identifier and may be in various forms such as, for example, an alphanumeric, and may include characters such as dashes, underscore, ampersand, commercial at sign, and/or the like. Those of ordinary skill in the art can envision other characters and/or symbols that may be used in a stock number in light of this disclosure. In particular embodiments, the order part module may be configured to identify a stock number for a particular supplier of the part based at least in part on the part number. For example, the supplier may be identified based at least in part on a CAGE and/or SMR code associated with the part found in the technical documentation for the item, although other identifiers may be used for the supplier. Accordingly, in particular embodiments, the order part module determines whether a stock number can be found for the part in Operation 1930. If not, then the order part module may provide an error message to the user in Operation 1935 informing the user that a valid stock number cannot be located for the part.
  • If a valid stock number is located for the part, then the order part module queries data (e.g., information) for the part in Operation 1940. In particular embodiments, the IETM may be in communication with the supplier's system over some type of network so that the data on the part can be queried directly from the supplier. In other embodiments, the IETM may store the data internally and the order part module queries the data accordingly.
  • Once the order part module has queried the data for the part, the module auto-populates one or more of the fields on the electronic order form based at least in part on the queried data in Operation 1945. At this point, the order part module provides the electronic order form for display for the user to view in Operation 1950. Here, in particular embodiments, the form may be displayed on a separate window than the window displaying the list of parts. The user may then provide any additional data (e.g., information) that may be needed on the electronic form such as, for example, a quantity of the part that is to be ordered. Once the user has completed filling out the electronic form, the user may submit the electronic form. For example, the electronic order form may provide a selection mechanism (e.g., a button) that the user can select to submit the order for the part. Accordingly, the form may be submitted directly to the supplier to fulfill the order for the part or the form may be placed in a queue and submitted indirectly depending on the embodiment. Other options may be provided to the user in some embodiments as discussed further herein.
  • Finally, the order part module determines whether input has been received indicating to exit in Operation 1955. If not, then the order part module continues to display the electronic order form. Otherwise, once the user has completed submitting the order for the part, or simply wishes to exit the form and has indicated such, the order part module exits.
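  • A condensed Python sketch of the order flow just described is given below; every lookup it takes as a parameter (the per-system blank forms, the stock number lookup, and the part data query) is an assumption standing in for the backend systems and data sources the disclosure leaves unspecified.

```python
def build_order_form(part_number, system, system_forms, stock_lookup, query_part_data):
    """Auto-populate an electronic order form for a part (Operations 1910-1950).

    system: identifier of the backend system for the item (Operation 1915).
    system_forms: assumed mapping of system -> blank form fields (Operation 1920).
    stock_lookup: assumed mapping of part number -> stock number (Operation 1925).
    query_part_data: assumed callable returning data for the part (Operation 1940).
    """
    form = dict(system_forms[system])                 # Operation 1920: select the system's form
    stock_number = stock_lookup.get(part_number)      # Operation 1925: query the stock number
    if stock_number is None:                          # Operations 1930/1935: no valid stock number
        raise LookupError(f"No valid stock number located for part {part_number}")
    form.update({"part_number": part_number, "stock_number": stock_number})
    form.update(query_part_data(part_number))         # Operation 1945: auto-populate remaining fields
    return form                                       # Operation 1950: ready to display to the user
```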
  • Submit Order for Part Module
  • Turning now to FIG. 20, additional details are provided regarding a process flow for submitting an order for a part from the IETM according to various embodiments. FIG. 20 is a flow diagram showing a submit order for part module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the submit order for part module may be invoked by another module to submit the order for the part from the IETM such as, for example, the order part module previously described. For instance, a user may select a mechanism (e.g., button) provided on an electronic order form and as a result, the order part module may invoke the submit order for part module. However, with that said, the submit order for part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, the user may be provided various options for submitting the order for the part depending on the embodiment. Some of these options may be contingent on whether or not the user's computing entity 110 is currently in communication with another system. For example, the user may be working out in the field using the IETM to perform maintenance where connectivity (e.g., a wireless network) is not available. As a result, the user may need to order a replacement for a part that was used during the maintenance repair. However, the user cannot submit the order for the part directly to the supplier since the user's computing entity 110 is unable to communicate with the supplier's system. While in other instances, the computing entity 110 may not be in communication with any other system for security reasons.
  • Therefore, the process flow 2000 begins with the submit order for part module reading (e.g., receiving input) the user's selection for submitting the order for the part in Operation 2010. As noted, the options available to the user may be dictated based at least in part on whether or not the user's computing entity 110 is currently in communication with any other systems. Here, the different options may be made available to the user on the electronic order form as one or more selection mechanisms (e.g., one or more buttons). Further, the selection mechanisms may be made available on the electronic order form based at least in part on the options currently available to the user.
  • One such option that may be used in various embodiments is to submit the order for the part directly to the supplier. Depending on the embodiment, this option may involve the user's computing entity 110 submitting the order for the part directly to the supplier's system or may involve sending the order for the part initially to some intermediary who then submits the order to the supplier. Therefore, the submit order for part module determines whether input has been received indicating the user has selected the submit order option in Operation 2015. If the submit order for part module determines the user has selected this option, then the submit order for part module submits the order to a remote system in Operation 2020. Accordingly, the remote system may be associated with the supplier of the part or to an intermediary. For example, the submit order for part module may be configured to submit the order to a procurement system for an airline in instances in which the user is a maintenance employee of the airline who is ordering a replacement part for an aircraft. In turn, the procurement system may process the order for the part and then submit it to the supplier to fulfill.
  • In addition, the submit order for part module may submit the order to the remote system using different procedures depending on the embodiment. For example, in one embodiment, the order may be submitted via electronic data interchange (EDI) between the user's computing entity 110 and the supplier's or intermediary's system. In another embodiment, the order may be submitted via a message such as an email, instant messaging, text messaging, and/or the like. Those of ordinary skill in the art can envision other procedures that may be used in submitting the order to the remote system in light of this disclosure.
  • Another option that may be used in various embodiments is to place the order in a queue (e.g., a shopping cart) and submit the order at a later time. This option may be used when the user's computing entity 110 is not currently in communication with another system. Therefore, the submit order for part module may determine whether input has been received indicating the user has selected to add the order to a shopping cart option in Operation 2025. If so, then the submit order for part module places the order in the shopping cart in Operation 2030. Once the order has been placed in the shopping cart, the order may then be submitted at a later time when the user's computing entity 110 is in communication with another system. Accordingly, depending on the embodiment, the order for the part may be submitted to the supplier directly or initially to an intermediary using any number of different procedures at the later time.
• Finally, another option that may be used in various embodiments is to send the order through another channel of communication. In these particular embodiments, the submit order for part module generates a graphical code with the order information and provides the code for display for the user to scan using his or her mobile device. Here, the graphical code may be provided in various forms such as a barcode, a quick response (QR) code, a one-dimensional code, a universal product code, a data matrix code, and/or the like. As a result, the order can be submitted using the mobile device's cellular network as a channel of communication, although the mobile device may be connected to other types of networks such as WIFI. Depending on the embodiment, the user may use a generic code reader application on his or her mobile device or an application specifically designed to submit the order. Using a specific application designed to submit the order may also allow for the order to be submitted in a secure manner. For example, the user may be required to enter security information into the application to open the application to scan the graphical code.
• Therefore, the submit order for part module determines whether input has been received indicating the user has selected the graphical code option in Operation 2035. If so, then the submit order for part module generates the graphical code in Operation 2040 and provides the code in Operation 2045. For example, in particular embodiments, the graphical code may be displayed on a separate window. At this point, the submit order for part module in some embodiments records the submission of the order in Operation 2050. Therefore, in these particular embodiments, the IETM can be used as a recordkeeper for ordered parts. It is noted that recordation of the submission of orders placed in the shopping cart may not be performed in some embodiments until the orders have actually been submitted.
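• As a rough illustration of Operations 2040 and 2045, the sketch below encodes order information into a QR code image using the open-source qrcode package; the package choice, the JSON payload format, and the file name are assumptions for the example rather than features of the disclosure. A companion mobile application scanning the code could decode the payload and submit the order over its own network connection.

    # Illustrative only: generate a scannable graphical code carrying the order information.
    import json
    import qrcode  # assumed third-party dependency (pip install qrcode[pil])

    order = {"part_number": "12-3456", "quantity": 1, "order_id": "2040-0001"}  # example data
    payload = json.dumps(order)        # serialize the order information
    image = qrcode.make(payload)       # build the QR code image
    image.save("order_qr.png")         # the image could then be shown on a separate window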
• FIG. 21A provides an example of a part 2100 that has been selected in which the option to order the part (e.g., button) 2110 has been provided to the user via a window according to various embodiments. FIG. 21B provides an example of an electronic order form 2115 that has been provided on a window as a result of the user exercising the option to order the part 2110 according to various embodiments. Here, the user has been provided the option to directly submit the order for the part (e.g., button) 2120 and the option to place the order in the shopping cart (e.g., button) 2125. Finally, FIG. 21C provides an example of a graphical code in the form of a QR code 2130 generated according to various embodiments that can be scanned by the user to submit an order for a part.
  • Display Topic Module
• Turning now to FIG. 22, additional details are provided regarding a process flow for displaying content for a topic found in the technical documentation for an item via an IETM according to various embodiments. FIG. 22 is a flow diagram showing a display topic module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the display topic module may be invoked by another module to provide a topic for display such as, for example, the TOC module previously described. For instance, a user may select a topic found in a table of contents displayed on a window and as a result, the TOC module may invoke the display topic module. However, with that said, the display topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, topics found in the technical documentation for an item may include procedures, tasks, operations, services, checklists, planning, and/or the like performed with respect to the item. For instance, topics may include maintenance procedures and/or tasks performed on the item. In addition, topics may include different components that make up the item.
• For example, a user may be viewing the table of contents for the technical documentation of the item and may select a maintenance procedure listed in the table of contents directly from the table to view the content in the technical documentation on conducting the maintenance procedure. Likewise, the user may be viewing an illustration (e.g., a 2D graphic) of the front braking assembly of an aircraft and may select the front wheel directly from the illustration to view the content in the technical documentation for the front wheel.
  • In particular embodiments, the technical documentation may be formatted according to S1000D standards and therefore, the documentation for a particular topic may be found in a data module. A data module primarily includes two parts, metadata and content. The metadata is made up of an identification section and a status section. These two sections are used to control a module's retrieval. The content is what a user views on the topic. The content typically is made up of textual information, as well as references (e.g., links) to any media content (e.g., illustrations such as 2D and/or 3D graphics, images, audio, videos, and/or the like) and other data pertaining to the topic. The content of the data module is usually specific to the type of the data module, which is written in accordance with that type's schema. The types of content found in a data module may include, for example: procedural used for tasks and steps information; fault used for troubleshooting; illustrated parts data used for parts lists and other illustrated parts data; process used for sequencing other data modules and/or steps; learning used for training-related material; maintenance checklists used for preventive maintenance, services, and inspections; and/or the like.
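• Although a data module formatted to the S1000D standards is an XML document, the two-part structure described above (metadata comprising identification and status sections, plus content with text and media references) can be sketched with a simple data structure. The Python classes below are purely illustrative; the field names are placeholders and do not correspond to actual S1000D element names.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DataModuleMetadata:
        # identification section: controls how the module is addressed and retrieved
        data_module_code: str                      # e.g., an S1000D-style code string
        title: str
        # status section: applicability, security, and lifecycle information
        security_classification: str = "unclassified"
        applicability: str = ""

    @dataclass
    class DataModuleContent:
        text: str                                  # textual information shown to the user
        media_refs: List[str] = field(default_factory=list)   # links/ICNs to 2D/3D graphics, video, audio

    @dataclass
    class DataModule:
        metadata: DataModuleMetadata
        content: DataModuleContent
        content_type: str = "procedural"           # e.g., procedural, fault, parts, process, learning, checklist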
• Accordingly, the process flow 2200 begins with the display topic module retrieving the textual information for the topic in Operation 2210. In some embodiments, the display topic module creates selectable parts found in the textual information in Operation 2215. As discussed further herein, the parts (e.g., the part names and/or numbers) mentioned in the textual information are recognized and made selectable by displaying them as a hyperlink and/or with some other type of selection mechanism such as a button. As a result, in these particular embodiments, a user viewing the textual information is able to access specific information via the IETM on the part directly from the textual information, as well as perform other functionality with respect to the part such as ordering the part from the IETM.
  • In addition to creating selectable parts, the display topic module creates selectable applicability found in the textual information in Operation 2220 in some embodiments. Similar to parts, as a result, a user viewing the textual information is able to access specific information on applicability mentioned in the textual information directly from the textual information.
  • Further, the display topic module may lock data found in the textual information in Operation 2225. This particular operation may be performed in some embodiments when the topic selected by the user provides alerts in the content such as warnings, cautions, notes, and/or the like. As discussed further herein, the content found after an alert may be locked (e.g., not able to view and/or not able to scroll through) until the user viewing the content has acknowledged the alert. This functionality helps to ensure the user is giving the alerts found in the content proper attention.
• Furthermore, the display topic module may create a security classification for the textual information in Operation 2230. Accordingly, the textual information may be configured so that only users with a certain level of security are able to view the content found in the textual information. Therefore, in particular embodiments, the display topic module may set up a security classification for the content based at least in part on the credentials of the user who is requesting to view the content. For example, this operation may involve marking the content with a particular level of security (e.g., top secret) and making the content unviewable to the user.
• At this point, the display topic module determines whether the data module references any non-textual content in Operation 2235. Here, non-textual content may involve illustrations such as 2D and/or 3D graphics and/or other media objects such as images, videos, audio, and/or the like. If so, then the display topic module retrieves one item of the non-textual content in Operation 2240. Accordingly, the reference to the non-textual content found in the data module may provide a link (e.g., html) and/or other information such as an information control number (ICN) to retrieve the non-textual content. In particular embodiments, the display topic module may then create a security classification for the non-textual content, similar to the textual information, in Operation 2245.
  • The display topic module then determines whether the data for the topic (e.g., the data module for the topic) references other non-textual content (e.g., another illustration or media object) in Operation 2250. If so, then the display topic module returns to Operation 2240, retrieves the next non-textual content referenced in the data module, and creates a security classification for the retrieved non-textual content.
  • Once the display topic module has retrieved all of the non-textual content for the topic, the display topic module provides the content for the topic for display via a window in Operation 2255. As discussed further herein, the content may be displayed using a number of different configurations depending on the embodiment. For example, the display topic module may be configured to display the content on multiple view panes so that multiple aspects of the content (e.g., textual information and illustrations) can be viewed by the user at the same time. Accordingly, in particular embodiments, the window displaying the content may be configured so that the view panes are displayed on non-overlapping portions of the window.
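• A compressed, purely illustrative sketch of process flow 2200 is shown below. The dictionary shapes, the hyperlink-style markup, the clearance levels, and the alert-tag names are assumptions made for the example; the sketch also elides several operations (such as creating selectable applicability) that are described elsewhere herein.

    import re

    def display_topic(data_module: dict, user: dict) -> dict:
        """Condensed sketch of process flow 2200; all names and shapes are illustrative."""
        text = data_module["content"]["text"]                     # Operation 2210: retrieve textual information

        # Operation 2215: make recognized parts selectable (hyperlink markup as a stand-in)
        for part in data_module.get("parts", []):
            text = re.sub(re.escape(part), f'<a href="ietm://part/{part}">{part}</a>', text)

        # Operation 2225: flag content containing alerts so it can be locked until acknowledged
        locked = bool(re.search(r"<(warning|caution)>", text))

        # Operation 2230: make the text viewable only if the user's clearance is sufficient
        levels = ["unclassified", "classified", "secret", "top secret"]
        viewable = levels.index(user["clearance"]) >= levels.index(
            data_module["metadata"].get("security", "unclassified"))

        # Operations 2235-2250: gather references (e.g., ICNs) to any non-textual content
        media = list(data_module["content"].get("media_refs", []))

        # Operation 2255: hand the prepared components to the window/view-pane renderer (not shown)
        return {"text": text, "locked": locked, "viewable": viewable, "media": media}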
  • In various embodiments, the display topic module may invoke various modules to perform some of the operations just described. Accordingly, a discussion of these various modules is now provided.
  • Selectable Parts Module
  • Turning now to FIG. 23, additional details are provided regarding a process flow for causing parts found in textual information to be displayed as selectable according to various embodiments. FIG. 23 is a flow diagram showing a selectable parts module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the selectable parts module may be invoked by another module to cause the parts to be displayed as selectable such as, for example, the display topic module previously described. However, with that said, the selectable parts module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 2300 begins with the selectable parts module selecting a part from the list of parts in Operation 2310. As previously discussed, a list of parts may be generated in various embodiments from the illustrated parts breakdown found in a publication of the technical documentation for the item during a time when the publication is being imported into the IETM. Accordingly, this list of parts may identify the information associated with each part found in the list such as, for example, illustrations of components of the item displaying the part and processes, procedures, maintenance, and/or the like that make use of the part.
  • The selectable parts module then searches the textual information for a topic (e.g., the data module for a topic) to identify occurrences of the part in the textual information in Operation 2315. Here, for instance, the part may be identified in the textual information by a name and/or part number. Therefore, in particular embodiments, the selectable parts module may be configured to perform some type of character recognition to identify occurrences of the part in the textual information.
• Accordingly, the selectable parts module determines whether any occurrences of the part have been found in the textual information in Operation 2320. If so, then the selectable parts module configures each of the occurrences in the textual information as selectable in Operation 2325. Depending on the embodiment, the selectable parts module may make the part selectable in the textual information using a number of different mechanisms. For instance, the selectable parts module may display the part (e.g., the part name and/or number) in the textual information as a hyperlink. In other instances, the selectable parts module may display the part along with a selection mechanism in the textual information such as a button.
  • Further, the selectable parts module may configure the part so that multiple types of selection may be used by a user in some embodiments. For example, the selectable parts module may configure the part so that a user can hover his or her mouse over the part (e.g., the part name and/or number) to view a preview providing preview information on the part and click on the part to display content (e.g., textual information, as well as media content such as illustrations) for the part on a window. Furthermore, various functionality may be provided as a result of a user selecting the part in the textual information such as, for example, functionality to enable the user to order the part from the IETM and/or functionality to allow the user to view other items that use the part. Those of ordinary skill in the art can envision other mechanisms, configurations, and functionality that may be implemented for the parts in light of this disclosure.
  • At this point, the selectable parts module determines whether another part is found on the list of parts in Operation 2330. If so, then the selectable parts module returns to Operation 2310, selects the next part found on the list of parts, and repeats the operations just described for the newly selected part. Once the selectable parts module has processed all the parts found on the list of parts, the module exits.
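• The following Python sketch illustrates one plausible implementation of Operations 2310 through 2330, using a regular expression to find occurrences of each part name and number and wrapping them in hyperlink-style markup; the list shape, the ietm:// link scheme, and the markup itself are assumptions for the example only.

    import re
    from typing import Dict, List

    def make_parts_selectable(text: str, parts: List[Dict[str, str]]) -> str:
        """Mark each listed part's occurrences in the textual information as selectable."""
        for part in parts:                                        # Operations 2310/2330: iterate the parts list
            for token in (part["number"], part["name"]):          # a part may appear by number and/or name
                pattern = re.compile(rf"\b{re.escape(token)}\b")  # Operation 2315: search for occurrences
                if pattern.search(text):                          # Operation 2320: any occurrences found?
                    text = pattern.sub(                           # Operation 2325: configure them as selectable
                        f'<a href="ietm://part/{part["number"]}">{token}</a>', text)
        return text

    sample = "Remove the retaining bolt (P/N 12-3456) from the front wheel."
    print(make_parts_selectable(sample, [{"number": "12-3456", "name": "retaining bolt"}]))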
  • Selectable Applicability Module
  • Turning now to FIG. 24, additional details are provided regarding a process flow for causing applicability found in textual information to be displayed as selectable according to various embodiments. FIG. 24 is a flow diagram showing a selectable applicability module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the selectable applicability module may be invoked by another module to cause applicability to be displayed as selectable such as, for example, the display topic module previously described. However, with that said, the selectable applicability module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
• As previously noted, applicability generally pertains to the context for which the information for a topic is valid. The context can be associated with a physical configuration of an item, but can also include other aspects such as support equipment availability and/or environmental conditions. For example, a user may be viewing information on the front wheel assembly for an aircraft. Accordingly, the documentation may provide information for both an air brake configuration of the assembly and a disc brake configuration of the assembly. However, the user may be specifically working on an aircraft at the time with disc brakes. Therefore, the information being viewed on the front wheel assembly pertaining to disc brakes is applicable while the information pertaining to air brakes is not.
  • Also previously noted, the IETM may be configured in various embodiments to allow the user to sign into the IETM to view the technical documentation for an item with respect to a specific object (e.g., a specific aircraft in an airline's fleet or a specific aircraft configuration) or a universal object. For example, a user may be conducting training on performing maintenance on a specific model of aircraft and therefore signs into the IETM using a universal object so that he or she can view technical documentation on the model of aircraft using either an air brake configuration or a disc brake configuration.
• Therefore, in particular embodiments, the process flow 2400 begins with the selectable applicability module determining whether the user is signed into the IETM with respect to a specific object or a universal object for the item in Operation 2410. The reason for making such a determination in these embodiments is that the selectable applicability module may be configured to make selectable only those occurrences of applicability found in the textual information that are actually applicable to the current instance of the user signed into the IETM. Therefore, returning to the example, if the user is signed into the IETM to view technical documentation on a specific model of aircraft and the user has signed in identifying a specific object with an air brake configuration, then the selectable applicability module does not make any of the occurrences of applicability involving disc brakes selectable in the textual information.
  • Thus, if the user is signed into the IETM with respect to a specific object of the item, then the selectable applicability module generates only those occurrences of applicability related to the specific object found in the textual information as selectable in Operation 2415. However, if the user is signed into the IETM with respect to a universal object of the item, then the selectable applicability module generates all of the occurrences of applicability found in the textual information as selectable in Operation 2420.
  • Similar to the selectable parts module, the selectable applicability module may be configured in particular embodiments to perform some type of character recognition to identify occurrences of applicability in the textual information. In addition, the selectable applicability module may make an occurrence of applicability selectable in the textual information using a number of different mechanisms. Further, the selectable applicability module may configure an occurrence of applicability so that multiple types of selection may be used by a user in some embodiments. Furthermore, the selectable applicability module may provide various functionality for an occurrence of applicability as a result of a user selecting the occurrence in the textual information. Those of ordinary skill in the art can envision various mechanisms, configurations, and functionality that may be implemented for applicability in light of this disclosure.
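• A minimal sketch of Operations 2410 through 2420 follows. The occurrence records, the configuration field, and the markup are hypothetical; the point is simply that a specific object filters which occurrences of applicability become selectable, while a universal object leaves them all selectable.

    from typing import Dict, List, Optional

    def make_applicability_selectable(text: str,
                                      occurrences: List[Dict[str, str]],
                                      signed_in_object: Optional[Dict[str, str]]) -> str:
        """signed_in_object is None for a universal object, or e.g. {"config": "disc"} for a specific object."""
        for occ in occurrences:   # each occurrence: {"text": "disc brake", "config": "disc"} (illustrative shape)
            if signed_in_object is not None and occ["config"] != signed_in_object["config"]:
                continue          # Operation 2415: specific object - skip applicability that does not apply
            # Operation 2420 (or an applicable occurrence under Operation 2415): make it selectable
            text = text.replace(
                occ["text"],
                f'<a href="ietm://applicability/{occ["config"]}">{occ["text"]}</a>')
        return text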
  • Lock Content Module
  • Turning now to FIG. 25, additional details are provided regarding a process flow for locking content for a topic according to various embodiments. FIG. 25 is a flow diagram showing a lock content module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the lock content module may be invoked by another module to lock content for a topic such as, for example, the display topic module previously described. However, with that said, the lock content module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously discussed, the textual information for a topic may include element types providing various alerts. For example, the textual information may provide a warning alerting a user of possible hazards associated with a material, a process, a procedure, and/or the like. In addition, the textual information may provide a caution alerting the user that damage to a material is possible if instructions in an operational and/or procedural task are not followed precisely. In particular embodiments, such alerts are tagged in the textual information of the data (e.g., data module) for the topic found in the technical documentation.
  • Thus, the process flow 2500 begins with the lock content module reading the textual information for the topic in Operation 2510. Accordingly, the lock content module determines whether a tag for an alert has been encountered in the textual information in Operation 2515. If so, then the lock content module records a marker for the tag in Operation 2520. Here, the marker identifies where in the textual information the tag is found. As discussed herein, the marker enables the lock content module to lock the portion of the content found in the textual information associated with the alert. The lock content module then determines whether additional textual information remains after the occurrence of the alert in Operation 2525. If so, then the lock content module returns to Operation 2510 and continues reading the textual information to identify further occurrences of tags for alerts in the information.
  • Once the lock content module has read the entire textual information for the topic and has recorded markers for all of the tags for alerts, the lock content module selects a marker for a tag in Operation 2530. The lock content module then identifies the preceding marker for a tag in Operation 2535. It is noted that the lock content module may be configured in particular embodiments to skip the first marker of a tag found in the textual information since this marker/tag would not have a preceding marker/tag found in the textual information. At this point, the lock content module locks the portion of the content found between the tags for the two markers in the textual information in Operation 2540.
  • Depending on the embodiment, the lock content module may be configured to lock the portion of the content using a number of different approaches and/or any combination thereof. For instance, the lock content module may obscure a user's ability to view the portion of the content in some embodiments. For example, the lock content module may grey out the portion of the content so that it cannot be read. In some embodiments, the lock content module may disable any interactive functionality found within the portion of the content. For example, the portion of the content may contain an occurrence of a selectable part. Here, the lock content module may disable the selectable functionality of the selectable part. In some embodiments, the lock content module may lock the user's ability to scroll through the portion of the content displayed on the window. Those of ordinary skill in the art can envision other approaches that may be used in locking the portion of the content in light of this disclosure.
• Once the lock content module has locked the portion of the content, the module determines whether a marker for another tag exists in Operation 2545. If so, then the lock content module returns to Operation 2530, selects the next marker, and performs the operations just discussed to lock the portion of the content in the textual information accordingly. Once the lock content module has processed all the markers, the module exits.
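• The sketch below illustrates process flow 2500 under the assumption that alerts are tagged with literal markers such as <warning> or <caution> in the textual information; the tag names and the character-offset markers are stand-ins for whatever tagging the data module actually uses. The returned spans are what a viewer would grey out or make non-scrollable until the preceding alert is acknowledged.

    import re
    from typing import List, Tuple

    ALERT_TAG = re.compile(r"<(warning|caution|note)>", re.IGNORECASE)  # assumed tag names

    def find_alert_markers(text: str) -> List[int]:
        """Operations 2510-2525: record a marker (character offset) for every alert tag."""
        return [match.start() for match in ALERT_TAG.finditer(text)]

    def lock_alert_spans(text: str) -> List[Tuple[int, int]]:
        """Operations 2530-2545: lock the content between each marker and its preceding marker.

        The first marker is skipped because it has no preceding marker.
        """
        markers = find_alert_markers(text)
        return [(previous, current) for previous, current in zip(markers, markers[1:])]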
  • Security Classification Module
  • Turning now to FIG. 26, additional details are provided regarding a process flow for setting a security classification for specific content of a topic according to various embodiments. FIG. 26 is a flow diagram showing a security classification module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the security classification module may be invoked by another module to set the security classification for content such as, for example, the display topic module previously described. However, with that said, the security classification module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
• In various embodiments, the metadata for a topic (e.g., the data module for the topic) may include a security classification tag (e.g., a code) that identifies a level of security with respect to the content for the topic. This may also be true with respect to media content for the topic such as illustrations, videos, audio, and/or other data associated with the topic. Therefore, when displaying the various components of content for the topic on a window, the security classification tag found in the metadata for a particular component of content can be used to set formatting and properties for the content.
• Thus, the process flow 2600 begins with the security classification module reading the security classification tag for a textual or a non-textual component of content for the topic in Operation 2610. The security classification module then reads the credentials for the user in Operation 2615. Accordingly, in particular embodiments, the formatting and/or properties associated with the content may be contingent at least in part on the user's level of security. For example, if the user has a high level of security (e.g., a top secret clearance), then the user may be able to view content that may not normally be available for viewing by many other users. Here, the credentials used by the user in signing into the IETM may be used to identify the user's level of security.
  • Next, the security classification module in some embodiments formats a border for the content based at least in part on the security classification of the content in Operation 2620. For instance, the security classification that may be set for the content may include unclassified, classified, secret, top secret, and/or the like. Here, the security classification module may format a border placed around the content as it is displayed on a window based at least in part on the security classification set for the content. For example, the content may be displayed on the window in a view pane. Therefore, in this example, the security classification module may format a border placed around the view pane by including a title in the border identifying the security classification for the content and displaying the border in a particular color. Such formatting may help the user to quickly identify the security classification associated with the different components of content being displayed for the topic on the window.
  • In addition, the security classification module in some embodiments sets the accessibility of the content based at least in part on the security classification of the content and the user's credentials in Operation 2625. Specifically, the security classification module sets the accessibility of the content as it is displayed on the window based at least in part on the level of security identified in the security classification tag for the content and the level of security identified in the user's credentials used to sign into the IETM. For example, if the level of security identified in the security classification tag for the content is top secret and the level of security identified in the user's credential is unclassified, then the security classification module may set the content so that it is not accessible on the window. In this instance, the security classification module may make the content unviewable on the window to the user. The security classification module may also disable functionality for the content such as, for example, disabling the user's ability to print the content, copy the content, email the content, and/or the like.
  • In particular embodiments, the security classification module may be configured to also set the accessibility for various interactive functionality found in the content. For example, the content may include a part (e.g., a part number and/or name) that is normally selectable to access information on the part. In this example, the security classification module may have set the accessibility for the content to allow the user to view the content on the window. Specifically, the security classification module may have determined the level of security for the content is unclassified and the user's level of security is classified and as a result, set the accessibility for the content to allow the user to view the content.
• In some embodiments, the security classification module may also read a classification tag for the selectable part in Operation 2630. Here, the security classification module may read the classification tag found in the metadata for data (e.g., the data module) found in the technical documentation for the part. In this example, the classification tag may identify that the level of security set for the part is top secret. Therefore, as a result, the security classification module may disable the user's ability to select the part in the content in Operation 2635. The security classification module may then determine whether any further interactive functionality is found in the content in Operation 2640. If so, then the security classification module may perform the operations just described for the additional functionality.
  • It is noted that the security classification module may be configured in particular embodiments to set the formatting and/or functionality of content of various topics with respect to other features and/or displays that are provided via the IETM. For instance, the security classification module may also be configured to set the accessibility of topics found in a table of contents for the technical documentation for an item based at least in part on the security classification set for the topics. Those of ordinary skill in the art can envision other applications of setting security classification formatting and/or functionality of content in light of this disclosure.
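• The following sketch illustrates, under assumed classification levels and colors, how the security classification module might combine the content's classification tag, the user's credentials, and the classification of a referenced part to derive border formatting and accessibility (Operations 2610 through 2635). None of the level names, colors, or field names are mandated by the disclosure.

    from dataclasses import dataclass

    LEVELS = {"unclassified": 0, "classified": 1, "secret": 2, "top secret": 3}   # assumed ordering
    COLORS = {"unclassified": "green", "classified": "blue", "secret": "orange", "top secret": "red"}

    @dataclass
    class ContentView:
        border_title: str     # shown in the view-pane border (Operation 2620)
        border_color: str
        viewable: bool        # Operation 2625
        selectable: bool      # Operations 2630-2635 (interactive functionality such as a selectable part)

    def classify(content_level: str, user_level: str, part_level: str = "unclassified") -> ContentView:
        viewable = LEVELS[user_level] >= LEVELS[content_level]
        selectable = viewable and LEVELS[user_level] >= LEVELS[part_level]
        return ContentView(content_level.upper(), COLORS[content_level], viewable, selectable)

    # e.g., a classified user viewing unclassified text that references a top-secret part:
    view = classify("unclassified", "classified", part_level="top secret")
    # view.viewable is True, but view.selectable is False, so the part link is disabled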
• FIG. 27 provides an example of security classification formatting and functionality set for the display of a topic according to various embodiments. In this example, a border 2700 has been placed around a view pane displaying content for the topic on a window. Here, the border 2700 includes a title indicating the content (e.g., textual information) for the topic is secret. In addition, the steps found in the textual information 2710 have been removed from the user's view. However, an illustration is also displayed in a view pane on the window that is viewable to the user. The border for the illustration 2715 indicates the illustration is unclassified and therefore the user is able to view it. Thus, the example demonstrates how the formatting and functionality of various sections of content for a topic may be set differently based at least in part on the security classifications identified for the various sections of content.
  • Topic Module
• Turning now to FIGS. 28A and 28B, additional details are provided regarding a process flow for invoking functionality for a topic displayed on a window according to various embodiments. FIGS. 28A and 28B are a flow diagram showing a topic module for performing such functionality according to various embodiments of the disclosure. Accordingly, the topic module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed. In various embodiments, the topic module is executed once a user has selected a topic to view and the topic is displayed to the user on a window. As previously noted, the window may be displayed using various configurations depending on the embodiment. For example, the window may display multiple panes that provide various content for the topic. Once displayed, the user may decide to invoke different interactive functionality provided for the topic.
• Therefore, turning first to FIG. 28A, the process flow 2800 begins with the topic module determining whether input has been received indicating the user has selected a selectable part displayed on the window to view related information on the part in Operation 2810. For example, as previously discussed, parts (e.g., part names and/or numbers) found in the textual information on the topic may be displayed as selectable on the window in various embodiments. If the user has selected a part (e.g., uses a mouse to hover over the part, clicks on the part, alt-clicks on the part, and/or the like), then the topic module generates and provides a preview to display information on the part to the user in Operation 2815. Here, the preview may be provided in a similar manner as the other previews described herein. For example, the preview may be provided on a separate window from the window displaying the topic. As discussed further herein, different functionality may be provided on the preview in some embodiments. For example, the preview may provide functionality to allow the user to search for other occurrences of the part in the technical documentation for the item.
• In various embodiments, the topic module also determines whether input has been received indicating the user has selected a selectable applicability displayed on the window to view information on the applicability in Operation 2820. As previously noted, applicability generally pertains to the context for which the information provided for a topic is valid. Therefore, if the user selects an applicability found in the content displayed for the topic (e.g., hovers over the applicability, clicks on the applicability, alt-clicks on the applicability, and/or the like), the topic module generates and provides a preview for display providing information on the meaning of the applicability in Operation 2825. Again, the preview may be provided in a similar manner as the other previews described herein. For example, the preview may be provided on a separate window from the window displaying the topic.
• In various embodiments, the topic module also determines whether input has been received indicating the user would like to view the source data for the topic in Operation 2830. Here, the source data may represent the source of the content found in the technical documentation for the topic. For example, the source data may involve data from a file such as a PDF and/or an SGML file. Therefore, if the user has indicated he or she would like to view the source data for the topic, then the topic module provides the source data for display in Operation 2835. Here, in particular embodiments, the source data may be displayed on a separate window from the window displaying the topic.
• As discussed in further detail herein, this particular functionality may be configured to perform differently based at least in part on the user's selection of this functionality. Specifically, in particular embodiments, the user is provided with the section of the source data corresponding to what is currently displayed on the window for the topic in response to the user exercising a first type of selection (e.g., single click). In contrast, the user is provided with the entire source data for the topic in response to the user exercising a second, different type of selection (e.g., alt-click).
• In various embodiments, the topic module also determines whether input has been received indicating the user has selected an option to generate an annotation for the topic in Operation 2840. In various embodiments, annotations may be generated for different content for the topic. For instance, the user may generate an annotation with respect to certain text found in the textual information for a topic and/or the user may generate an annotation with respect to other content for the topic such as an illustration (e.g., 2D and/or 3D graphic). If the user has selected the option to generate an annotation for the topic, then the topic module does so in Operation 2845. In particular embodiments, the annotation can be recorded and stored within the IETM and can only be viewed by the user. In other instances, others may be able to view and comment on the annotation.
  • In particular embodiments, the topic module may provide further functionality based at least in part on the content of the topic involving sequential information. For instance, the topic may involve a process, procedure, task, checklist, and/or the like that involves various operations and/or steps to be performed. For example, the user may be viewing a maintenance task involving steps the user is to perform for the task. Therefore, in these particular embodiments, the topic module may determine whether the data for the topic (e.g., the data module for the topic) involves sequential information in Operation 2850. In some embodiments, the topic module may make such a determination based at least in part on the type of content found in the data for the topic as indicated in the data's metadata (e.g., in the data module's information code). If the content does involve sequential information, then the topic module provides further functionality for the content in Operation 2855.
  • The additional functionality is now discussed with respect to FIG. 28B. Therefore, turning now to FIG. 28B, the topic module determines whether input has been received indicating the user has performed an action with respect to a step (operation) in a sequence such as a checklist sequence in Operation 2860. For example, such an action may involve the user selecting a step and/or acknowledging a step in the sequence. Typically, the steps found in sequential information (e.g., the steps found in a checklist) are designed to be performed in the sequential order as listed. Therefore, in particular embodiments, any steps that are skipped over in the sequence and not acknowledged are highlighted to bring them to the user's attention.
• In addition, in particular embodiments, the user may wish to have the content (e.g., textual information) provided in a step be displayed using one or more enhanced formats to better enable the user's comprehension of the content. For example, the user may wish to have the content displayed in a higher magnification (e.g., textual content displayed in a larger font) so that the user is better able to see the content. Further, in particular embodiments, the user may wish to have content that is relevant to the user be displayed using some type of format (relevant format) so that the content stands out to the user. For example, the user may be viewing sequential information that involves a maintenance procedure and/or task. Here, the maintenance procedure/task may include several steps. However, the user may not be tasked with performing every step found in the procedure/task. Therefore, the user may wish to have those steps found in the procedure/task the user is to perform displayed using a relevant format to bring those steps to the user's attention. Likewise, the user may wish to have those steps found in the procedure/task the user is not to perform displayed using an irrelevant format to convey to the user that the step is irrelevant to the user. Thus, in various embodiments, as a result of the user performing an action for a step, the topic module assesses the step in Operation 2865. For example, the action may entail the user selecting the step so that the step receives focus and/or acknowledging completion of the step.
  • In addition, in various embodiments, the topic module determines whether input has been received indicating the user has acknowledged an alert in Operation 2870. As previously discussed, in certain embodiments, content is locked based at least in part on alerts provided in the content. For example, the content may provide warnings and/or cautions for the user. Therefore, if the user has acknowledged an alert, the topic module unlocks the corresponding content for the alert in Operation 2875.
  • As discussed further herein, the user may be provided functionality (e.g. an option) in particular embodiments to transfer a job (e.g., process, procedure, task, checklist, and/or the like) he or she is currently performing to another user. For example, the user's work shift may be ending and therefore, he or she may wish to transfer the current job he or she is performing to another user who is working the following shift. Therefore, in these embodiments, the topic module may determine whether input has been received indicating the user has selected the option to transfer a job in Operation 2880. If so, then the topic module may enable functionality to allow the user to transfer the job in Operation 2881.
• Further, in various embodiments, functionality may be implemented that updates media content provided on the window as the user scrolls through sequential information. For example, the user may be viewing the steps for a maintenance task displayed on a first view pane shown on the window. At the same time, illustrations for the maintenance task may be displayed on a second view pane shown on the window. For instance, a step in the maintenance task may involve a particular component and an illustration of the component may be provided to aid the user in locating the component on the actual item. Therefore, as the user scrolls through the various steps of the maintenance task, the illustrations provided on the second view pane may change automatically in particular embodiments as the user moves from step-to-step and different illustrations are referenced in the steps.
  • Accordingly, if this is the case, then the topic module determines whether input has been received indicating the user is scrolling through the sequential information in Operation 2882. If the user is scrolling through the sequential information, then the topic module updates the media content displayed on the window accordingly in Operation 2883.
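• One simple way to realize Operations 2882 and 2883 is to map each step to the illustration it references and, as the visible step changes, show the most recent referenced illustration in the second view pane. The sketch below is illustrative only; the step dictionaries and ICN strings are invented for the example.

    from typing import Dict, List, Optional

    def illustration_for_step(steps: List[Dict[str, Optional[str]]], visible_index: int) -> Optional[str]:
        """Return the illustration (e.g., an ICN) to show for the step currently in view.

        Steps without their own reference fall back to the most recent preceding
        illustration so the second view pane always shows something relevant.
        """
        current = None
        for step in steps[: visible_index + 1]:
            if step.get("icn"):
                current = step["icn"]
        return current

    steps = [
        {"text": "Remove access panel", "icn": "ICN-PANEL-01"},
        {"text": "Disconnect harness", "icn": None},
        {"text": "Remove wheel assembly", "icn": "ICN-WHEEL-02"},
    ]
    print(illustration_for_step(steps, 1))   # -> ICN-PANEL-01 (carried over from the first step)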
• Furthermore, in various embodiments, functionality may be implemented for a component found in the sequential data that is an electrical connector such as a plug having a plurality of pins. For example, the user may be maintenance personnel who is out in the field, and the sequential information may entail a maintenance procedure and/or task being performed by the user that references an electrical plug. Here, the maintenance procedure/task may involve the user troubleshooting an electrical problem by testing various combinations of pins (e.g., pairs of pins) found in the plug. However, oftentimes, these plugs can be quite small and/or have a large number of pins, and the user may have trouble identifying the specific pins on the physical plug that he or she is supposed to test. Therefore, functionality may be implemented that assists the user in identifying a combination of pins in the plug. Accordingly, if this is the case, then the topic module determines whether input has been received indicating the user has selected a part that is an electrical connector in Operation 2884. If so, then the topic module enables functionality for the selected connector in Operation 2885.
• Finally, other components (in addition to electrical connectors) are often mentioned in the sequential information. For example, the instructions for performing a maintenance task may reference a particular part that is to be replaced during the task. Many times, some type of media may also be provided such as an illustration to assist the user in actually replacing the part. For instance, the instructions may be displayed on a first view pane and the illustration may be displayed on a second view pane. Here, in particular embodiments, the part may be provided in the first and/or second view panes as selectable. As a result, the user's selection of the part in either the first or the second view pane may cause the part to be highlighted in the other view pane. For example, if the user selects the part in the sequential information, then the part is automatically highlighted in the illustration to assist the user in locating the part in the illustration. Likewise, if the user selects the part in the illustration, then the part is automatically highlighted in the sequential information to assist the user in determining which instructions in the maintenance task involve the part.
  • Therefore, if such functionality is provided, then the topic module determines whether input has been received indicating the user has selected a component in Operation 2886. If the user has selected a component, then the topic module highlights the component on the window accordingly in Operation 2887.
  • Returning to FIG. 28A, in particular embodiments, the topic module may be configured to provide the user with certain functionality at the end of a topic (e.g., at the end of a data module). In some embodiments, some type of selection mechanism (e.g., button) may be provided for the topic on the window to invoke the functionality when the end of the content for the topic has been detected. For example, when the topic module detects the user has scrolled to the end of the textual information provided for a topic. If such functionality is being provided, then the topic module determines whether input has been received indicating the end of the topic has been reached in Operation 2890. If so, then the topic module enables the end of topic functionality in Operation 2891.
  • In addition, in particular embodiments, the topic module may be configured to enable the user to perform certain actions via verbal commands. For example, the user may be able to navigate through content by reciting verbal commands that are detected via an audio input of a user computing entity 110 being used by the user to access the IETM. If such functionality is being provided, then the topic module determines whether a verbal command has been received in Operation 2892. If so, then the topic module enables the verbal command functionality in Operation 2893.
  • As previously noted, various types of content may be provided in different topics. For example, different content types may involve procedural, fault, parts, process, learning, maintenance, wiring, crew/operator, and/or the like. Thus, in addition to sequential information, various embodiments may provide certain functionality based at least in part on the topic involving a particular type of content. Therefore, in particular embodiments, the topic module may determine whether the content for the topic currently being displayed involves wiring data in Operation 2894. If so, then the topic module enables wiring functionality in Operation 2895. Likewise, in particular embodiments, the topic module may determine whether the content for the topic involves media providing a chart in Operation 2896. If so, then the topic module enables crosshairs functionality in Operation 2897. Finally, in particular embodiments, the topic module may determine whether the content for the topic involves 3D graphics in Operation 2898. If so, then the topic module enables 3D graphic functionality in Operation 2899.
• At this point, the topic module determines whether input has been received indicating the user wishes to exit viewing the content for the topic in Operation 2899A. For example, the user may have simply selected a mechanism (e.g., a button) to exit the topic. If that is the case, then the topic module exits. Otherwise, the topic module continues to monitor the user's interactions.
  • Similar to the display topic module, the topic module in various embodiments may invoke various modules to perform some of the operations just described. Accordingly, a discussion of these various modules is now provided.
  • Display Content for Part Module
  • Turning now to FIG. 29, additional details are provided regarding a process flow for displaying content for a part according to various embodiments. FIG. 29 is a flow diagram showing a display content for part module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the display content for part module may be invoked by another module to display the content such as, for example, the topic module previously described. However, with that said, the display content for part module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
• The process flow 2900 begins with the display content for part module retrieving the content for the part selected by the user in Operation 2910. As previously discussed, parts (e.g., part names and/or numbers) found within the textual information of the topic (or other areas of the technical documentation) may be displayed as selectable in some embodiments. Therefore, the user may select one of the parts in the textual information (e.g., use a mouse to hover over the part, click on the part, alt-click on the part, and/or the like). As a result, the display content for part module retrieves related information for the part to display. For example, the display content for part module may retrieve metadata from the data (e.g., the data module) found in the technical documentation for the part, as well as the topics found in the table of contents in which the part is mentioned.
  • Once retrieved, the display content for part module provides the content for display in Operation 2915. For example, the content may be displayed as a preview as previously discussed. Accordingly, the preview may be displayed on a separate window that is superimposed over a portion of the window displaying the topic. Here, the displayed content may provide information on the part such as, for example, the part name and number. In addition, the content may provide various functionality the user may invoke with respect to the part. For example, a selection mechanism (e.g., a hyperlink and/or button) may be provided to allow the user to search the technical documentation for the item to identify other instances where the part is mentioned/used (e.g., maintenance tasks). A selection mechanism may also be provided that enables the user to order the part from the IETM.
  • Therefore, in particular embodiments, the display content for part module determines whether input has been received indicating the user has selected the functionality to order the part in Operation 2920. If so, then the display content for part module generates and provides the order form for ordering the part in Operation 2925. For example, the display content for part module invokes the order part module previously discussed (FIG. 19) in some embodiments.
  • In addition, in particular embodiments, the display content for part module determines whether input has been received indicating the user has selected the functionality to search the technical documentation to identify other instances of the part in Operation 2930. If so, then the display content for part module queries the technical documentation for the item in Operation 2935. Here, the display content for part module may query various items found in the technical documentation such as the table of contents, data modules, media objects, and/or the like to identify instances in which the part name and/or number is found. The display content for part module then provides the results of the search for display in Operation 2940.
• For example, the display content for part module may be configured in some embodiments to specifically query and identify the maintenance procedures/tasks found in the technical documentation in which the part is used and/or involved. Therefore, in these embodiments, the display content for part module provides a list of the maintenance procedures/tasks for display for the user to view. The display content for part module may be configured to display a set number of the procedures/tasks such as, for example, five of the procedures/tasks. The display content for part module may use a number of criteria to identify which of the procedures/tasks to display such as, for example, alphabetical order, most frequently viewed, and/or the like. In addition, a selection mechanism (e.g., a button) may be provided to allow the user to view additional maintenance procedures/tasks for the part.
  • Further, the display content for part module may provide the list so that each of the maintenance procedures/tasks displayed is selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) so that the user may view a particular maintenance procedure/task if desired. Therefore, in these particular embodiments, the display content for part module determines whether input has been received indicating the user has selected a particular maintenance procedure/tasks to view in Operation 2945. If so, then the display content for part module retrieves the maintenance procedure/tasks and provides the procedure/task for display to the user in Operation 2950.
• At this point, the display content for part module determines whether input has been received indicating the user would like to exit the display of the content in Operation 2955. If so, then the display content for part module causes the display of the content to be closed and exits. Otherwise, the display content for part module continues to monitor the user's interactions.
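• The sketch below gives a flavor of Operations 2930 through 2945: querying the technical documentation for maintenance procedures/tasks that mention the selected part and returning a capped, ordered list for display. The in-memory document list, field names, and alphabetical ordering are assumptions for the example; a real implementation would query the IETM's document store.

    from typing import Dict, List

    def related_procedures(part: Dict[str, str],
                           data_modules: List[Dict[str, str]],
                           limit: int = 5) -> List[str]:
        """Operations 2935-2940: find procedural topics whose text mentions the part name or number."""
        hits = [
            dm["title"]
            for dm in data_modules
            if dm.get("type") == "procedural"
            and (part["number"] in dm["text"] or part["name"] in dm["text"])
        ]
        return sorted(hits)[:limit]   # e.g., alphabetical order, capped at five for the initial display

    docs = [
        {"title": "Replace front wheel", "type": "procedural", "text": "... retaining bolt 12-3456 ..."},
        {"title": "Inspect brake disc", "type": "procedural", "text": "... no mention of the part ..."},
    ]
    print(related_procedures({"number": "12-3456", "name": "retaining bolt"}, docs))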
  • FIG. 30 provides an example of a window 3000 providing content for a part 3010 selected by a user according to various embodiments. As one can see, the window 3000 displaying the content has been superimposed over a portion of the window for the topic in this example. In this example, the display of the content provides the user with a selection mechanism (e.g., a button) 3015 to enable the user to order the part from the IETM. In addition, the display of the content lists related maintenance procedures/tasks 3020 in which the part is used and/or mentioned. Further, the display of the content provides a selection mechanism (e.g., a button) 3025 to view additional maintenance procedures/tasks in which the part is used and/or mentioned.
  • Display Content for Applicability Module
  • Turning now to FIG. 31, additional details are provided regarding a process flow for displaying content for applicability according to various embodiments. FIG. 31 is a flow diagram showing a display content for applicability module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the display content for applicability module may be invoked by another module to display the content such as, for example, the topic module previously described. However, with that said, the display content for applicability module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
• The process flow 3100 begins with the display content for applicability module retrieving the content for the applicability selected by the user in Operation 3110. As previously discussed, applicability found within the textual information of the topic (or other areas of the technical documentation) may be displayed as selectable in some embodiments. Therefore, the user may select one of the occurrences of applicability in the textual information (e.g., use his or her mouse to hover over the occurrence, click on the occurrence, alt-click on the occurrence, and/or the like). As a result, the display content for applicability module retrieves related information for the applicability to display. For example, the display content for applicability module may retrieve information on the meaning of the applicability as it pertains to the item.
  • Once retrieved, the display content for applicability module provides the content for display for the user to view in Operation 3115. For example, the content may be displayed as a preview as previously discussed. Accordingly, the preview of the content may be displayed on a separate window that is superimposed over a portion of the window for the topic.
  • FIG. 32 provides an example of a window 3200 displaying content provided for an occurrence of applicability 3210 selected by a user according to various embodiments. As one can see, the window 3200 displaying the content has been superimposed over a portion of the window for the topic in this example. Here, the content provides the user with a rule 3215 identifying the components (e.g., engines) to which the applicability applies.
  • Display Source for Topic Module
  • Turning now to FIG. 33, additional details are provided regarding a process flow for displaying the source for a topic according to various embodiments. FIG. 33 is a flow diagram showing a display source for topic module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the display source for topic module may be invoked by another module to display the source such as, for example, the topic module previously described. However, with that said, the display source for topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously mentioned, the user may indicate he or she would like to view the source data for a topic. The source data may represent the source of the content found in the technical documentation for the topic. For example, the source data may involve data from a file such as a PDF and/or an SGML file. Therefore, if the user has indicated he or she would like to view the source data for the topic, then the process flow 3300 begins with the display source for topic module determining whether input has been received indicating the user would like to view a section from the source or the entire source in Operation 3310.
  • As also previously discussed, the user may be provided with multiple actions for selecting the selection mechanism in particular embodiments to indicate what from the source he or she would like to view. Specifically, in particular embodiments, the user is provided with the section of the source data corresponding to what is currently displayed on the window for the topic in response to the user exercising a first type of selection (e.g., single click), while the user is provided with the entire source data for the topic in response to the user exercising a second, different type of selection (e.g., alt-click).
  • Therefore, if the display source for topic module determines the user has exercised the first type of selection, then the display source for topic module retrieves the corresponding section (e.g., pages) of the source in Operation 3315 and provides the section of the source for display in Operation 3320. For example, the section of the source may be displayed on a window that is superimposed over a portion of the window displaying the topic in some embodiments, while in other embodiments, the section of the source may be displayed on a separate view pane on the window.
  • However, if the display source for topic module determines the user has exercised the second type of selection, then the display source for topic module retrieves the entire source in Operation 3325 and provides the entire source for display in Operation 3330. Again, the entire source may be displayed on a window that is superimposed over a portion of the window displaying the topic in some embodiments, while in other embodiments, the entire source may be displayed on a separate view pane on the window.
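  • The following Python sketch illustrates, in a non-limiting way, how the dispatch of Operations 3310-3330 might distinguish the two selection types. The page range, the page content, and the "click"/"alt-click" labels are assumptions for this example only.

      # Sketch of the display-source-for-topic dispatch (Operations 3310-3330):
      # a first selection type shows only the currently displayed section of the
      # source, a second type shows the whole source.

      SOURCE_PAGES = [f"Page {n} of source PDF" for n in range(1, 6)]

      def display_source(selection_type, current_section=(4, 5)):
          if selection_type == "click":
              # Operations 3315-3320: retrieve and show the corresponding section.
              start, end = current_section
              return SOURCE_PAGES[start - 1:end]
          elif selection_type == "alt-click":
              # Operations 3325-3330: retrieve and show the entire source.
              return SOURCE_PAGES
          raise ValueError(f"Unsupported selection type: {selection_type}")

      print(display_source("click"))      # section only
      print(display_source("alt-click"))  # entire source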
  • FIG. 34A provides an example of displaying a section of a source for a topic according to various embodiments. Here, a selection mechanism 3400 is displayed on a window that is configured so that the user is provided with multiple actions to select the mechanism 3400. Accordingly, if the user exercises a first type of selection (e.g., click) of the mechanism 3400, then a separate window 3410 is displayed that provides a section from the source (in this example, a PDF), shown in this example as page five of the source 3415. However, if the user exercises a second, different type of selection (e.g., alt-click) of the mechanism 3400, then a separate window is displayed that provides the entire source, shown as all five pages 3420 in FIG. 34B.
  • Generate Annotation Module
  • Turning now to FIG. 35, additional details are provided regarding a process flow for generating an annotation according to various embodiments. FIG. 35 is a flow diagram showing a generate annotation module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the generate annotation module may be invoked by another module to generate an annotation such as, for example, the topic module previously described. However, with that said, the generate annotation module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, a user may add an annotation to various content displayed for a topic such as in the textual information and/or media content such as an illustration. Therefore, the process flow 3500 begins with the generate annotation module receiving where (e.g., receiving input identifying where) in the content for the topic the annotation is to be placed in Operation 3510. Note that in particular embodiments, an annotation may not necessarily be placed in the content of a topic but may be placed at other locations in the technical documentation of an item such as, for example, in the table of contents.
  • The generate annotation module then provides the annotation in Operation 3515. Specifically, in particular embodiments, the generate annotation module may generate and provide the annotation to display on a separate window than the window displaying the topic. Accordingly, the window may display initial information for the annotation such as, for example, the date and time the annotation was generated. In addition, the user may be provided with different types of annotations that may be added to the content such as a personal note, a question, a warning and/or missing information, a problem, and/or the like. Therefore, the initial information may also indicate the type of annotation.
  • Depending on the embodiment, the generate annotation module may provide various functionality with respect to the annotation. Therefore, in particular embodiments, the generate annotation module determines whether input has been received indicating the user would like to add an attachment to the annotation in Operation 3520. For example, the user may wish to attach a text document, image, and/or screenshot of the window (e.g., image of the window) and so selects a selection mechanism (e.g., a button) provided on the window for the annotation. In response, the generate annotation module provides a capability for the user to identify the file to attach to the annotation. For example, the generate annotation module may cause display of a window that allows the user to navigate to a location where the file is located and attach the file to the annotation. Accordingly, the generate annotation module is configured in various embodiments to enable the attachment of a file in a variety of formats such as JPEG, JFIF, JPEG2000, EXIF, TIFF, RAW, DIV, GIF, BMP, PNG, PPM, MOV, AVI, MP4, MKV, DOCX, HTML5, TXT, PDF, XML, SGML, JSON, and/or the like. Therefore, if the user has indicated he or she would like to attach a file to the annotation, then the generate annotation module attaches the file in Operation 3525.
  • In addition, the generate annotation module may determine whether input has been received indicating the user would like to share the annotation with other users in Operation 3530. In some embodiments, an annotation may normally be viewable only by the user who generated the annotation. However, there may be instances in which the user may want to share his or her annotation with other users and ask for comments. For example, the user may identify an error he or she believes is in the technical documentation for a topic. Therefore, the user may decide to place an annotation in the topic on the error and ask other users whether they agree that the error exists in the documentation. Accordingly, such functionality can allow for crowdsourcing to address issues in the technical documentation and/or to assist a user in using the documentation. Therefore, if the user has indicated he or she would like to share the annotation, then the generate annotation module sets the annotation to share in Operation 3535.
  • Further, the generate annotation module may determine whether input has been received indicating the user may want to submit a change request based at least in part on the annotation in Operation 3540. In particular embodiments, a formal procedure may be put in place to allow users of the IETM to submit change requests to have content changed in the technical documentation for an item. For example, a user may be viewing the textual information on a topic and may decide to generate an annotation for a section of the textual information the user does not believe is quite clear and should be further explained in the information. Therefore, the user may wish to submit a change request based at least in part on his or her annotation. If that is the case, then the generate annotation module may provide a change request form to display for the user in Operation 3545.
  • In some embodiments, the generate annotation module may auto-populate some of the fields provided on the form based at least in part on the information found in the annotation in Operation 3550. For example, the generate annotation module may auto-populate the fields in which the user provides his or her name, a date, an identifier for the topic (e.g., a DMC), and/or any comments for the request that have been provided in the annotation. The user may then fill in any additional information needed on the form and select a mechanism provided on the form to submit the request for change.
  • Therefore, the generate annotation module determines whether input has been received indicating the user has submitted the change request form in Operation 3555. If the user has submitted the form, then the generate annotation module submits the change request form in Operation 3560. As a result, the change request form may be sent to personnel who are responsible for maintaining the technical information for the item. Accordingly, such personnel may include those individuals who are responsible for maintaining the IETM and/or the publication of the technical documentation currently uploaded to the IETM for the item and/or those individuals who are responsible for maintaining the source technical documentation used in producing the publication that has been uploaded into the IETM.
  • Finally, the generate annotation module may determine whether input has been received indicating the user would like to capture a screenshot (e.g., an image) of the window and the content currently being displayed on the window in Operation 3565. In many instances, the user may wish to attach a screenshot of the window to the annotation to provide more explanation for the annotation. Therefore, if the user would like to capture a screenshot of the window, the generate annotation module generates the screenshot in Operation 3570.
  • At this point, the generate annotation module determines whether input has been received indicating the user would like to exit the window displaying the annotation in Operation 3575. It is noted that in particular embodiments, the annotation is automatically generated and recorded in the IETM at the time the user selects the option (e.g., the selection mechanism) on the window for the topic. Therefore, in these particular embodiments, any additional information provided by the user on the annotation is recorded for the annotation when the user exits the window displaying the annotation. However, in other embodiments, the user may be required to take some action such as selecting a mechanism (e.g., a button) provided on the window displaying the annotation and/or the topic to record the annotation. Furthermore, different selection mechanisms (e.g., buttons) may be provided on the window displaying the annotation and/or on the topic to invoke the functionality described above.
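  • For purposes of illustration only, the following Python sketch models the annotation operations described above (Operations 3510-3575): attaching a file, sharing, and opening a change request auto-populated from the annotation. The Annotation data structure, field names, and helper functions are assumptions for this example and are not the actual IETM implementation.

      # Simplified sketch of the generate-annotation flow (Operations 3510-3575).

      from dataclasses import dataclass, field
      from datetime import datetime
      from typing import Optional

      @dataclass
      class Annotation:
          location: str                     # where in the topic the note is placed
          note_type: str                    # personal note, question, warning, etc.
          text: str = ""
          created: str = field(default_factory=lambda: datetime.now().isoformat())
          attachments: list = field(default_factory=list)
          shared: bool = False
          change_request: Optional[dict] = None

      def attach_file(annotation, filename):
          # Operation 3525: attach a file (image, screenshot, document, ...).
          annotation.attachments.append(filename)

      def share(annotation):
          # Operation 3535: make the annotation visible to other users.
          annotation.shared = True

      def start_change_request(annotation, user_name, topic_id):
          # Operations 3545-3550: open a change request form with fields
          # auto-populated from the annotation.
          annotation.change_request = {
              "user": user_name,
              "date": annotation.created,
              "topic": topic_id,            # e.g., a DMC identifying the data module
              "comments": annotation.text,
          }

      note = Annotation(location="Illustration, region 3610", note_type="problem",
                        text="bad region")
      attach_file(note, "window_screenshot.png")
      share(note)
      start_change_request(note, "J. Smith", "DMC-EXAMPLE-A-00-00-00")
      print(note)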
  • Finally, the various functionality provided by the generate annotation module described above may also be made available to users once the annotations have been recorded in the IETM. For example, a user may be able to sign into the IETM and view an annotation he or she had previously added to the technical documentation of an item. At this time, in particular embodiments, the functionality such as attaching a file and/or submitting a change request may be made available to the user.
  • FIG. 36A provides an example of an annotation window 3600 displayed according to various embodiments. In this example, the user has identified an area 3610 in an illustration displayed for a topic and added a note of “bad region.” The annotation window 3600 provides a first selection mechanism 3615 to allow the user to attach a file 3620 to the annotation such as a screenshot of the window displaying the topic. Accordingly, the annotation window 3600 provides a second selection mechanism 3625 that enables the user to take the screenshot of the window displaying the topic. In addition, the annotation window 3600 in the example provides a third selection mechanism 3630 that allows the user to share the annotation with other users. Finally, the annotation window 3600 provides a fourth selection mechanism 3635 that facilitates the user submitting a change request based at least in part on the annotation. Accordingly, a change request form 3640 that may be provided in some embodiments is shown in FIG. 36B.
  • FIG. 36C provides an example of a selection mechanism 3645 that may be provided in particular embodiments to enable a user to generate an annotation. Here, the selection mechanism 3645 is a dropdown menu control provided in a toolbar displayed along the top of a window that provides the user with options for generating different types of annotations. In particular embodiments, the IETM may provide the user with a report 3650 on the change requests that have been submitted by the user as shown in FIG. 36D. Finally, in particular embodiments, the IETM may provide the user with a list of all the annotations 3655 that have been generated by the user as shown in FIG. 36E. In some embodiments, this list 3655 may also display annotations that have been shared by other users.
  • Formatting Module
  • As previously mentioned, a user may wish to use particular formatting for various types of content. For instance, a user may wish to have certain content enhanced so that the user may be able to view the content better. For example, the user may be working in the field and using a user computing entity 110 that is small in size, and therefore has a small display. As a result, content may be normally displayed in a size that is difficult for the user to see. Therefore, the user may wish to have content that he or she is currently viewing to be conveyed using an enhancing format so that the content is easier for the user to comprehend.
  • In addition, the user may wish to use formatting to identify content that is relevant to the user. For example, the user may wish to have the steps of a procedure and/or task the user is supposed to perform displayed using relevant formatting so that the steps stand out to the user while he or she is viewing the content for the procedure/task via the IETM. This can be beneficial to the user while he or she is working out in the field in that the relevant formatting of content can help draw the user's attention to content he or she may need to view while the user is also engaged in other activities. Likewise, the user may wish to use formatting to identify content that is irrelevant to the user. Therefore, in various embodiments, functionality may be provided through the IETM to allow the user to set up enhanced formats, relevant formats, and/or irrelevant formats for certain types of content.
  • Turning now to FIG. 37A, additional details are provided regarding a process flow for setting up one or more enhancing formats, one or more relevant formats, and/or one or more irrelevant formats according to various embodiments. FIG. 37A is a flow diagram showing a formatting module for performing such functionality according to various embodiments of the disclosure. Accordingly, the formatting module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed. For instance, in various embodiments, the formatting module may be executed in response to a user selecting an option to set up such formatting from a window provided through the IETM.
  • The process flow 3700 begins with the formatting module displaying the various types of content for which the user can set up formatting in Operation 3710. For example, the various types of content may include procedural, process, wiring, maintenance, learning, parts, checklist, and/or the like. In addition, the various types of content may include particular content found within a type of content such as, for example, the steps of a maintenance procedure and/or task, the items in a checklist, diagrams for wiring, illustrations for parts, and/or the like. Further, the various types of content may include the various forms of content such as textual information, media content, and/or the like. Therefore, the formatting module may be configured in particular embodiments to provide one or more windows, view panes, and/or the like within the IETM to allow the user to identify the particular type of content he or she would like to set up formatting for. For example, the user may identify that he or she would like to set up formatting for the steps of maintenance procedures/tasks.
  • Therefore, the formatting module determines whether the user would like to set up one or more enhancing formats for the selected type of content in Operation 3715. Here, the user may be provided an option to identify the type of format he or she would like to set up for the selected type of content. If the formatting module determines the user would like to set up enhancing format(s) for the selected type of content, then the formatting module displays the types of enhancing formats for the user to select from in Operation 3720. For example, for textual information, the enhancing formats may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like. Similarly, for media content, the enhancing formats may include magnifying the media content, enhancing a resolution of the media content, and/or the like.
  • Once the user has selected one or more of the enhancing formats, the formatting module receives one or more indications of the user's selection(s) in Operation 3725 and records the selection(s) in Operation 3730. For example, the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the enhancing format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time when the user logs into the IETM. For example, the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a larger font as the enhancing format. As a result, an active step (e.g., current step) of a maintenance procedure/task is displayed to the user in the enlarged font while the user is viewing the maintenance procedure/task through the IETM.
  • In addition to identifying the one or more enhancing formats, the formatting module may also be configured to allow the user to select one or more properties for the enhancing format(s). For example, the user may be able to select a font size, a color for a font, a color for a border, a color for a background, and/or the like. Accordingly, these properties may also be recorded along with the user's selection of enhancing format(s).
  • Returning to Operation 3715, if the formatting module determines the user does not want to set up one or more enhancing formats for the selected type of content, then the formatting module determines whether the user wants to set up one or more relevant formats for the selected type of content in Operation 3735. If the formatting module determines the user would like to set up relevant format(s) for the selected type of content, then the formatting module displays the types of relevant formats for the user to select from in Operation 3740. Similar to the enhancing formats, the relevant formats may include, for example, enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like for textual information. Similarly, for example, the relevant formats may include magnifying the media content, enhancing a resolution of the media content, and/or the like for media content. In addition, the user may also define one or more properties for the relevant format(s).
  • Once the user has selected one or more of the relevant formats, the formatting module receives one or more indications of the user's selection(s) in Operation 3741 and records the selection(s) in Operation 3742. For example, similar to enhancing formats, the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the relevant format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time the user logs into the IETM. Accordingly, in various embodiments, the relevant format(s) are used for the particular type of content only in instances in which the content is found to be relevant to the user. Therefore, for example, the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a larger font as the relevant format. As a result, in this example, an active step (e.g., current step) of a maintenance procedure/task is only displayed to the user in the enlarged font if the step is determined to be relevant to the user who is viewing the maintenance procedure/task through the IETM.
  • If the formatting module determines the user does not want to set up one or more relevant formats for the selected type of content, then the formatting module determines whether the user wants to set up one or more irrelevant formats for the selected type of content in Operation 3736. If the formatting module determines the user would like to set up irrelevant format(s) for the selected type of content, then the formatting module displays the types of irrelevant formats for the user to select from in Operation 3750. Here, irrelevant formats may be used in deemphasizing content that is not relevant to the user. Therefore, the irrelevant formats may include, for example, reducing a font size of the text, changing a font color of the text, changing a font case of the text (e.g., to lowercase), adding a border around the text, adding a background to the text, and/or the like for textual information. Similarly, for example, the irrelevant formats may include reducing the size of the media content, decreasing a resolution of the media content, and/or the like for media content. In addition, the user may also define one or more properties for the irrelevant format(s).
  • Once the user has selected one or more of the irrelevant formats, the formatting module receives one or more indications of the user's selection(s) in Operation 3751 and records the selection(s) in Operation 3752. For example, similar to enhancing and relevant formats, the formatting module may record the user's selection(s) along with credentials for the user. Therefore, as a result, the irrelevant format(s) selected by the user for the particular type of content can be identified and used based at least in part on the user's credentials provided at a time the user logs into the IETM. Accordingly, in various embodiments, the irrelevant format(s) are used for the particular type of content only in instances in which the content is found to be irrelevant (e.g., not relevant) to the user. Therefore, for example, the user may have selected to have steps of maintenance procedures/tasks displayed through the IETM in a smaller/reduced font as the irrelevant format. As a result, in this example, an active step (e.g., current step) of a maintenance procedure/task is displayed to the user in the smaller/reduced font if the step is determined to be irrelevant to the user who is viewing the maintenance procedure/task through the IETM.
  • At this point, the formatting module determines whether to exit in Operation 3755. If not, then the formatting module returns to Operation 3710 and displays the content types again so that the user may set up another enhancing, relevant, and/or irrelevant format. However, if the formatting module determines to exit (e.g., the user selects an exit button), then the formatting module does so and the process flow 3700 ends.
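  • As a non-limiting illustration of Operations 3710-3752, the following Python sketch records a user's enhancing, relevant, and irrelevant formats per type of content so that they can later be looked up from the user's credentials. The storage layout, keys, and format property names are assumptions for this example only.

      # Minimal sketch of recording per-user format selections
      # (Operations 3710-3752).

      FORMAT_SETTINGS = {}   # keyed by user credential, then content type

      def record_format(user, content_type, category, formats):
          """Record the selected formats (and their properties) for a user.

          category is one of 'enhancing', 'relevant', or 'irrelevant'.
          """
          user_settings = FORMAT_SETTINGS.setdefault(user, {})
          content_settings = user_settings.setdefault(content_type, {})
          content_settings[category] = formats

      # Example: larger font for active maintenance steps, reduced font for
      # steps assigned to other roles.
      record_format("jsmith", "maintenance_step", "enhancing",
                    [{"type": "font_size", "value": 18}])
      record_format("jsmith", "maintenance_step", "irrelevant",
                    [{"type": "font_size", "value": 9}])
      print(FORMAT_SETTINGS["jsmith"])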
  • Although not shown in FIG. 37A, the formatting module may be configured in particular embodiments to allow a user to set up various types of content so that the type of content is only conveyed to the user if the content is relevant to the user. For example, warnings and/or cautions may be provided for different steps performed in a sequence. For instance, such warnings and/or cautions may be provided as a popup window when an associated step for a sequence has focus (e.g., when the user selects the associated step). Here, the user may be interested in having such warnings and/or cautions provided only if the associated step is relevant to the user. Therefore, in particular embodiments, the formatting module may be configured to allow the user to indicate to only have warnings and/or cautions displayed to the user when the warnings and/or cautions (e.g., only when the associated steps) are relevant to the user. Such functionality may allow the user to reduce the amount of content that is provided through the IETM so that the user is not inundated with unnecessary content.
  • Sequence Module
  • Turning now to FIG. 37B, additional details are provided regarding a process flow for assessing the steps (operations) found in a sequence according to various embodiments. FIG. 37B is a flow diagram showing a sequence module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the sequence module may be invoked by another module to assess the steps performed in a sequence such as, for example, the topic module previously described. However, with that said, the sequence module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, additional functionality may be provided in various embodiments for content involving sequential information. One such functionality involves displaying steps of a sequence using one or more enhancing formats. Such formats may enable a user who is viewing the steps in the IETM to be able to better comprehend (e.g., read) the steps. Another such functionality involves displaying steps of a sequence using one or more relevant formats. Such formats may enable the displaying of content to demonstrate the content is relevant to the user. Similarly, functionality may be provided for displaying steps of a sequence using one or more irrelevant formats to demonstrate the content is irrelevant to the user. Another such functionality involves highlighting any steps skipped in a sequence such as a checklist upon the user acknowledging performing a step in the sequence. Typically, the steps found in sequential information (e.g., the steps found in a checklist) are designed to be performed in the sequential order in which they are listed. Therefore, in particular embodiments, any steps that are skipped over in the sequence and not acknowledged are highlighted to bring them to the user's attention.
  • Therefore, the process flow 3760 begins with the sequence module determining whether the action taken by the user with respect to the step results in the step having focus, and if so, whether the step should be conveyed using one or more enhancing formats in Operation 3765. Accordingly, in various embodiments, focus on a step identifies the step as a portion of content having a center of interest and/or activity with respect to the content currently being provided through the IETM. For instance, the user may have performed an action such as selected a particular step of the sequence using an input mechanism associated with a user computing entity 110 such as a mouse input, tab key, touchscreen capability, and/or the like. While in another instance, the user may have performed an action that places focus on the particular step in the sequence such as acknowledging completion of a previous step in the sequence, therefore identifying the particular step as the next step to perform for the sequence.
  • Accordingly, depending on the embodiment, the sequence module may determine whether the step should be conveyed using one or more enhancing formats based at least in part on various criteria. For instance, in particular embodiments, the sequence module may make such a determination based at least in part on settings that have been identified by the user. For example, as previously discussed, the user may identify the one or more enhancing formats to use for the steps (for the particular type of content) and the enhancing format(s) may be recorded as personal settings for the user. Here, the IETM (and/or sequence module) may identify these settings based at least in part on credentials entered by the user at the time he or she logs into the IETM. In other embodiments, the one or more enhancing formats may be identified within the IETM configuration for certain roles. For example, the user may log into the IETM and identify himself or herself as maintenance personnel. Here, the one or more enhancing formats may be identified to be used for users who are serving in the maintenance personnel role and viewing documentation through the IETM. Yet, in other embodiments, the one or more enhancing formats may be identified as a global setting to be used for every user who is viewing documentation through the IETM. Still yet, in other embodiments, the one or more enhancing formats may be identified by the user upon logging into the IETM and may only be used as a one-time setting for the current use of the IETM.
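  • By way of non-limiting illustration, the following Python sketch shows one way the determination in Operation 3765 might consult the setting sources described above. The precedence order (one-time session setting, then personal settings, then role settings, then a global default) and the setting names are assumptions made for this example; the embodiments above present these sources as alternatives rather than a fixed precedence.

      # Sketch of resolving which enhancing formats apply to a focused step
      # (Operation 3765), under an assumed precedence of setting sources.

      def resolve_enhancing_formats(session_settings, user_settings,
                                    role_settings, global_settings):
          for source in (session_settings, user_settings,
                         role_settings, global_settings):
              if source:
                  return source
          return []   # no enhancing format configured; display the step as-is

      formats = resolve_enhancing_formats(
          session_settings=None,
          user_settings=[{"type": "font_size", "value": 18}],
          role_settings=[{"type": "border", "color": "red"}],
          global_settings=[],
      )
      print(formats)   # the user's personal settings win in this sketch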
  • Therefore, if the sequence module determines the step should be conveyed using one or more enhancing formats, then the sequence module causes the step to be conveyed using the one or more enhancing formats in Operation 3770. As previously discussed, examples of enhancing formats that may be used for textual information may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like. While examples of enhancing formats that may be used for media content may include magnifying the media content, enhancing a resolution of the media content, and/or the like. Thus, as a result, the step may be conveyed via the IETM in a manner that may enable the user to better comprehend the step.
  • Although not specifically shown in FIG. 37B, the sequence module may be configured in particular embodiments to cause the removal of any enhancing formats used to display content that has lost focus. For instance, if a previous step that had focus prior to the current step had been displayed using one or more enhancing formats, then the sequence module may cause removal of these enhancing formats upon the previous step losing focus.
  • Continuing, in various embodiments, the sequence module determines whether the step having focus should be conveyed using one or more relevant formats in Operation 3775. Here, the sequence module may be configured to make such a determination in a similar fashion as to determining whether the step should be conveyed using one or more enhancing formats. If the sequence module determines the step having focus should be conveyed using one or more relevant formats, then the sequence module determines whether the step having focus is relevant to the user in Operation 3780. Depending on the embodiment, the sequence module may be configured to make such a determination based at least in part on different criteria. For instance, in particular embodiments, the sequence module may be configured to determine whether a portion of content is relevant to the user based at least in part on a role the user is serving in and/or based at least in part on the user himself or herself. For example, the sequence module may be configured to use credentials entered by the user to log into the IETM in identifying the user and/or identifying a role the user is currently serving in to make a determination as to whether the step currently having focus is relevant to the user.
  • Yet, in another embodiment, the user (or some other personnel such as a supervisor) may assign himself or herself a particular position and/or role to serve in while logged into the IETM and the sequence module may use this particular position and/or role in determining whether the step having focus is relevant to the user. For example, the user may be logged into the IETM to view a maintenance procedure and/or task for a particular component of an item. Here, the user may be tasked with performing maintenance detailed in the procedure on the component with two other users who are also logged in and using the IETM to view the maintenance procedure/task. In this instance, each of the three users are to perform specific steps within the maintenance procedure/task. Therefore, only certain steps of the maintenance procedure/task are relevant to the particular user. Accordingly, upon logging into the IETM, each of the users may have identified (selected) a certain position and/or role he or she is to serve in while performing the maintenance procedure/task and this identified position and/or role may be associated with certain steps of the maintenance procedure/task. Thus, in this example, the sequence module may be configured to identify whether the step of the maintenance procedure/task having focus is relevant to the user based at least in part on the position and/or role assigned to the user for the maintenance procedure/task.
  • If the sequence module determines the step having focus is relevant to the user, then the sequence module causes the step to be conveyed using the one or more relevant formats in Operation 3785. Again, depending on the embodiment, the one or more relevant formats may involve different types of formats. For example, relevant formats that may be used for textual information may include enlarging a font size of the text, changing a font color of the text, changing a font case of the text, adding a border around the text, adding a background to the text, causing an audio reading of the text, and/or the like. While examples of relevant formats that may be used for media content may include magnifying the media content, enhancing a resolution of the media content, and/or the like. Thus, as a result, the step may be conveyed via the IETM in a manner that may enable the user to recognize when a step in the sequence is relevant to the user. Accordingly, similar to enhancing formats, the sequence module may be configured in particular embodiments to also cause the removal of any relevant formats used to display content that has lost focus.
  • If instead the sequence module determines the step having focus is not relevant (irrelevant) to the user, then the sequence module in particular embodiments causes the step to be conveyed using one or more irrelevant formats in Operation 3786. Depending on the embodiment, the one or more irrelevant formats may involve different types of formats. For example, irrelevant formats that may be used for textual information may include reducing a font size of the text, changing a font color of the text, changing a font case of the text (e.g., changing the font case to lowercase), adding a border around the text, adding a background to the text, suppressing an audio reading of the text, and/or the like. While examples of irrelevant formats that may be used for media content may include reducing the size of the media content, reducing a resolution of the media content, and/or the like. Thus, as a result, the step may be conveyed via the IETM in a manner that may enable the user to recognize when a step in the sequence is irrelevant to the user. Accordingly, similar to enhancing and/or relevant formats, the sequence module may be configured in particular embodiments to also cause the removal of any irrelevant formats used to display content that has lost focus.
  • Although not shown in FIG. 37B, the sequence module may be configured in particular embodiments to convey a portion of content (e.g., a step of a sequence) only if the portion of content is relevant to the user. For instance, depending on the embodiment, the sequence module may be configured to only convey portions of content that are relevant to the user with respect to certain types of content or with respect to all content. For example, in particular embodiments, the sequence module may be configured to convey all the steps found in the maintenance procedure/task, with those steps of the procedure/task that are relevant to the user being conveyed using one or more relevant formats, but only convey warnings and/or cautions provided along with the steps that are relevant to the user. Such a configuration may allow selective content to be conveyed only when such content is relevant to the user so as to minimize the amount of content the user may be required to comprehend. For instance, the user may be interested in seeing all the steps of the maintenance procedure/task, with those steps of the procedure/task that are relevant to the user being conveyed using the one or more relevant formats, so that the user is able to keep track of where in the procedure/task the maintenance personnel are. However, the user may not be interested in seeing warnings and/or cautions associated with the steps of the procedure/task that are not relevant to the user.
  • Finally, as previously noted, any steps that have been skipped over in the sequence and not acknowledged by the user (or someone else) may be highlighted in various embodiments to bring them to the user's attention. Therefore, in these embodiments, as a result of the action performed by the user being the acknowledgement of a step, the sequence module determines whether the step acknowledged by the user is the next step in the sequence to be performed by the user in Operation 3790. For example, in particular embodiments, the user may be provided a field (e.g., a checkbox) for each step in the sequential information that the user is able to check as he or she completes the step in the sequence. Therefore, in these embodiments, the sequence module receives input on the fields and determines which of the fields have been checked by the user. Accordingly, if the sequence module determines the step acknowledged by the user is not the next sequential step to be performed, then the sequence module causes the steps in the sequence that have been skipped by the user to be displayed as highlighted in the sequential information displayed on the window in Operation 3795. Again, depending on the embodiment, various formats may be used in displaying the skipped steps as highlighted.
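  • As a non-limiting illustration of the sequence module's two checks described above (Operations 3765-3795), the following Python sketch chooses a format for the focused step based on whether its assigned role matches the user's role, and identifies any skipped, unacknowledged steps to highlight. The step data shape, role names, and format labels are assumptions for this example only.

      # Sketch of the sequence module checks (Operations 3765-3795).

      steps = [
          {"text": "Open access panel", "role": "mechanic", "acknowledged": True},
          {"text": "Disconnect power",  "role": "electrician", "acknowledged": False},
          {"text": "Remove fuel pump",  "role": "mechanic", "acknowledged": False},
      ]

      def format_for_step(step, user_role):
          # Operations 3775-3786: relevant vs. irrelevant format for the focused step.
          return ("relevant (enlarged font)" if step["role"] == user_role
                  else "irrelevant (reduced font)")

      def skipped_steps(steps, acknowledged_index):
          # Operations 3790-3795: earlier, unacknowledged steps are highlighted.
          return [s["text"] for s in steps[:acknowledged_index]
                  if not s["acknowledged"]]

      print(format_for_step(steps[2], user_role="mechanic"))
      # Acknowledging the third step while the second is still open
      # highlights the second step as skipped.
      print(skipped_steps(steps, acknowledged_index=2))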
  • An example of a window displaying sequence information in which a step 3800 is being displayed using one or more enhancing formats according to various embodiments is shown in FIG. 38A. In this example, the one or more enhancing formats involve displaying the text of the step 3800 in a larger font than the other steps of the sequence (e.g., magnified) and with a border in a particular color.
  • Similarly, an example of a window displaying sequence information in which a step 3810 is being displayed using one or more relevant formats according to various embodiments is shown in FIG. 38B. In this example, the step 3810 is determined to be relevant to the user based at least in part on a role 3815 the user is serving in matching the role 3815 identified/assigned to the particular step 3810. Accordingly, the one or more relevant formats used for displaying the step 3810 involve displaying the text of the step 3810 in a larger font than the other steps of the sequence (e.g., magnified) and with a border in a particular color. The window displaying a subsequent step 3820 is shown in FIG. 38C. However, this particular step 3820 is not being displayed using the one or more relevant formats because the step 3820 is identified/assigned to a role 3825 that is different than the role 3815 the user is serving in.
  • As previously noted, in some embodiments, a portion of content may only be conveyed to the user if the portion of content is determined to be relevant to the user. Such an example is provided in FIG. 38D in which a warning/caution 3830 is being displayed to the user as a result of the warning/caution 3830 being determined to be relevant to the user. Specifically, in this example, the warning/caution 3830 is identified/assigned to a role 3835 that is the same as the user's role.
  • Finally, an example of a window displaying sequence information in which skipped steps 3840 are highlighted according to various embodiments is shown in FIG. 38E. In this example, the user has acknowledged a step 3845 that is not the next step to perform in the sequence based at least in part on the steps already acknowledged by the user. Therefore, as a result, the prior steps 3840 that have not been acknowledged by the user are highlighted to bring them to the user's attention.
  • As previously noted, the functionality performed by various embodiments of the sequence module with respect to steps found in a sequence may also be performed for other types of content. For instance, in particular embodiments, portions of content involving other types of content such as, for example, content on wiring, learning, parts, and/or the like, may be conveyed using one or more enhancing formats, one or more relevant formats, and/or one or more irrelevant formats. Accordingly, such functionality may be configured to convey a portion of content using one or more enhancing, relevant, and/or irrelevant formats upon the portion of content acquiring focus. For example, a user may be viewing textual information on a component that includes several parts that are described in the textual information. Here, in this example, media content that includes an illustration of the component may also be displayed along with the textual information in the IETM. Accordingly, as the user selects text in the textual information discussing a particular part of the component, functionality may be performed that recognizes the focus on the particular part in the text, and displays the part in the illustration using one or more enhancing, relevant, and/or irrelevant formats in a similar fashion as described herein with respect to the sequence module. Thus, as one of ordinary skill in the art will recognize, additional modules and/or one or more of the modules described herein may be configured with similar functionality as the sequence module to facilitate conveying other types of content using enhancing, relevant, and/or irrelevant formats, as well as highlighting other types of content that may have been skipped and/or missed.
  • Unlock Content Module
  • Turning now to FIG. 39, additional details are provided regarding a process flow for unlocking content as a result of a user acknowledging an alert according to various embodiments. FIG. 39 is a flow diagram showing an unlock content module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the unlock content module may be invoked by another module to unlock content such as, for example, the topic module previously described. However, with that said, the unlock content module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • In various embodiments, a portion of the content provided for a topic may be locked to require a user to acknowledge an alert associated with the portion of the content. For example, the content may provide a warning and/or caution for the user. Accordingly, the user may acknowledge the alert. For example, some type of mechanism such as a button may be provided that the user selects to acknowledge the alert and as a result, the unlock content module is invoked.
  • Therefore, the process flow 3900 begins with the unlock content module identifying the alert that has been acknowledged in Operation 3910. Here, in particular embodiments, the unlock content module may receive and/or read a tag associated with the alert that is provided in the textual information for a topic. Accordingly, the tag identifies the alert and its location with respect to the other content found in the textual information.
  • Next, the unlock content module identifies the next alert in the content in Operation 3915. Again, in particular embodiments, the unlock content module may identify the next tag found in the textual information for an alert. In some instances, the alert acknowledged by the user may be the last alert provided in the content. If that is the case, then the unlock content module may identify the end of the content.
  • Finally, the unlock content module unlocks the portion of the content between the two alerts in Operation 3920. As previously discussed, the content may be locked using a number of different approaches and/or any combination thereof. For instance, the user's ability to view the portion of the content may be obscured. For example, the portion of the content may be greyed out so that it cannot be read. In addition, any interactive functionality found within the portion of the content may be disabled. For example, the portion of the content may contain an occurrence of a selectable part. Here, the selectable functionality of the part may be disabled. In some instances, the user's ability to scroll through the portion of the content may be disabled. However the portion of the content has been locked, the unlock content module performs the necessary operations to unlock the content.
  • It is noted that in various embodiments not all of the content that has been locked is unlocked as a result of the user acknowledging the alert. Generally speaking, only the portion of the content that is available between the acknowledged alert and the next alert found in the content is unlocked. Such a configuration can be used to ensure that the user views and acknowledges each and every alert provided in the content as the user moves through the content. However, with that said, other configurations may be used in unlocking the content based at least in part on the user acknowledging alerts. For example, some embodiments may require the user to acknowledge multiple alerts before unlocking content. Those of ordinary skill in the art can envision other configurations that may be used in other embodiments in light of this disclosure.
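  • The following Python sketch illustrates, in a non-limiting way, the unlock-content flow (Operations 3910-3920): when an alert is acknowledged, only the blocks between that alert and the next alert (or the end of the content) are unlocked. The block/tag representation of the content is an assumption for this example.

      # Sketch of the unlock-content flow (Operations 3910-3920).

      content_blocks = [
          {"tag": "alert-1", "locked": True},
          {"tag": "text-a",  "locked": True},
          {"tag": "text-b",  "locked": True},
          {"tag": "alert-2", "locked": True},
          {"tag": "text-c",  "locked": True},
      ]

      def unlock_after_acknowledgement(blocks, acknowledged_tag):
          tags = [b["tag"] for b in blocks]
          start = tags.index(acknowledged_tag)            # Operation 3910
          # Operation 3915: find the next alert, or fall through to the end.
          end = next((i for i in range(start + 1, len(blocks))
                      if blocks[i]["tag"].startswith("alert")), len(blocks))
          for block in blocks[start:end]:                 # Operation 3920
              block["locked"] = False
          return blocks

      print(unlock_after_acknowledgement(content_blocks, "alert-1"))
      # text-a and text-b are now unlocked; alert-2 and text-c stay locked.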
  • FIG. 40A provides an example of a portion of content 4000 that has been locked according to various embodiments. Specifically, the portion of the content 4000 has been greyed out to obscure the user's ability to view the portion of the content 4000. An alert is displayed that provides an acknowledgment mechanism (e.g., a button) 4010 that can be selected by the user to acknowledge the alert and unlock the portion of the content 4000. As a result of the user acknowledging the alert, that is, as a result of the user selecting the acknowledgment mechanism 4010, the portion of the content 4015 is unlocked as shown in FIG. 40B. Here, the portion of the content 4015 is unlocked up to the next alert found in the content. At this point, the user can select the acknowledgment mechanism 4020 for the next alert to unlock additional content.
  • Transfer Job Module
  • Turning now to FIG. 41, additional details are provided regarding a process flow for facilitating a user transferring a job according to various embodiments. FIG. 41 is a flow diagram showing a transfer job module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the transfer job module may be invoked by another module to transfer a job such as, for example, the topic module previously described. However, with that said, the transfer job module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously discussed, a user may wish to transfer a job (e.g., a particular instance of a process, procedure, task, checklist, and/or the like) he or she is currently performing to another user. For example, the user's work shift may be ending and therefore, he or she may wish to transfer the current job he or she is performing to another user who is working the following shift. Therefore, in these embodiments, the user may select an option (e.g., a button) to transfer a job and as a result, the transfer job module is invoked.
  • The process flow 4100 begins with the transfer job module causing display of an indication (e.g., a divider) at a point in the content being displayed on a window where the user is suspending performing the job in Operation 4110. For instance, if the user is performing a job involving a maintenance procedure/task that includes several steps, then the transfer job module causes the indicator to be displayed between the two steps of the procedure/task where the user is stopping. Accordingly, depending on the embodiment, the indication may be displayed in a number of different formats such as a line, arrow, bullet point, and/or the like.
  • The transfer job module then generates a job transfer window based at least in part on the job in Operation 4115 and provides the window for display in Operation 4120. Here, the transfer job window may be superimposed over a portion of the window displaying the procedure/task. The job transfer window may provide information such as the title of the procedure/task being performed for the job (e.g., the DMC for the related data module), the user's name, a date and time the job is suspended, a job control number, comments provided by the user, and/or the like. The transfer job module then records the job transfer in the IETM in Operation 4125. This operation in particular embodiments involves the transfer job module recording a marker identifying where the job was suspended. Accordingly, this marker can then be used at a later time in identifying where the job needs to be resumed.
  • As a result, the job transfer may now be posted in the IETM so that another user may resume the job. Depending on the embodiment, the job transfer may be viewed by every user who signs into the IETM for the item and/or specific object for the item or the job transfer may only be viewed by those users who can resume the job. That is to say, in particular embodiments, the job transfers available to a user to view and/or resume may be dependent on the credentials used by the user in signing into the IETM.
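  • For purposes of illustration only, the following Python sketch records a suspended job along with a marker identifying where the job was suspended (Operations 4110-4125). The record fields mirror the information listed above; the field names and example values are assumptions for this example.

      # Sketch of the transfer-job flow (Operations 4110-4125).

      from datetime import datetime

      SUSPENDED_JOBS = []

      def transfer_job(job_control_number, dmc, user, last_completed_step, comments=""):
          record = {
              "job_control_number": job_control_number,
              "dmc": dmc,                            # identifies the procedure/task
              "suspended_by": user,
              "suspended_at": datetime.now().isoformat(),
              "marker": last_completed_step,         # where the divider is displayed
              "comments": comments,
          }
          SUSPENDED_JOBS.append(record)              # Operation 4125: record transfer
          return record

      transfer_job("JCN-0042", "DMC-EXAMPLE-A-00-00-00", "J. Smith",
                   last_completed_step=4,
                   comments="Shift change; torque values verified.")
      print(SUSPENDED_JOBS)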
  • Resume Job Module
  • Turning now to FIG. 42, additional details are provided regarding a process flow for resuming a suspended job according to various embodiments. FIG. 42 is a flow diagram showing a resume job module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the resume job module may be invoked as a result of a user signing into the IETM and selecting an option to view the jobs that have been suspended.
  • The process flow 4200 begins with the resume job module receiving input indicating a selection from a user to view the jobs that have been suspended in Operation 4210. For instance, in particular embodiments, the user may be provided with a mechanism such as a button on a toolbar to view the jobs that have been suspended. In response to the user selecting the mechanism, the resume job module may provide the suspended jobs to display on a window to the user in Operation 4215. Here, the window may be configured to allow the user to select a particular job from the suspended jobs.
  • Therefore, the resume job module determines whether input has been received indicating the user has selected a job displayed on the window to resume in Operation 4220. If so, then the resume job module retrieves the stop position for the job in Operation 4225. As previously noted, a marker may be recorded when the job was transferred that identifies the position where the job was suspended. Once the marker has been retrieved, the resume job module provides the procedure/task associated with the suspended job for display on a window to the user along with an indication (e.g., a divider) based at least in part on the marker in Operation 4230. In addition, the resume job module provides a resume job window for display in Operation 4235. Here, the resume job window may be superimposed over a portion of the window displaying the procedure/task and may provide a mechanism (e.g., a button) that the user can select to resume the job.
  • Thus, the resume job module determines whether input has been received indicating the user will resume the job in Operation 4240. If the user has decided to resume the job, then the resume job module causes the resume job window to close and causes the indication to be removed in Operation 4245. Accordingly, the job that has been resumed may be removed from the suspended jobs. Otherwise, the resume job module determines whether input has been received indicating the user would like to exit viewing the suspended jobs in Operation 4250. If the user does want to exit, then the resume job module causes the display of the suspended jobs to be closed and exits.
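  • As a non-limiting illustration of Operations 4210-4245, the following Python sketch lists the suspended jobs, retrieves the stored marker for a selected job to locate where the divider would be drawn, and removes the job from the suspended list upon resuming. The record layout mirrors the transfer sketch above; all names and values are assumptions.

      # Sketch of the resume-job flow (Operations 4210-4245).

      SUSPENDED_JOBS = [
          {"job_control_number": "JCN-0042", "dmc": "DMC-EXAMPLE-A-00-00-00",
           "marker": 4, "suspended_by": "J. Smith"},
      ]

      def list_suspended_jobs(jobs):
          # Operation 4215: show the suspended jobs the user may resume.
          return [job["job_control_number"] for job in jobs]

      def resume_job(jobs, job_control_number):
          # Operations 4225-4245: retrieve the stop position, then resume.
          job = next(j for j in jobs if j["job_control_number"] == job_control_number)
          divider_position = job["marker"]     # where the indication is drawn
          jobs.remove(job)                     # resumed jobs are no longer suspended
          return f"Resuming {job['dmc']} at step {divider_position + 1}"

      print(list_suspended_jobs(SUSPENDED_JOBS))
      print(resume_job(SUSPENDED_JOBS, "JCN-0042"))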
  • FIG. 43A provides an example of a mechanism 4300 that is provided in particular embodiments to enable a user to transfer or resume a job. In this example, the mechanism 4300 is a dropdown menu control provided in a toolbar displayed along the top of a window. Here, the dropdown menu provides the user with the option to create a job transfer 4310 and the option to open the jobs that have been transferred (suspended) 4315. FIG. 43B provides an example of a job transfer window 4320 according to various embodiments. As noted above, such a window 4320 may be provided in particular embodiments when a user selects an option to transfer a job the user is currently performing. FIG. 43C provides an example of a procedure/task that has been suspended that a user has identified to resume. Accordingly, an indication 4325 is shown in the display of the procedure/task at a position where the procedure/task was suspended. In addition, a resume job window is provided along with a mechanism (e.g., a button) 4330 to allow the user to resume the job. Finally, FIG. 43D displays the procedure/task for the job with the indication 4335 removed. At this point, the user can resume the job and finish the remaining steps for the procedure/task.
  • Update Media Module
  • Turning now to FIG. 44, additional details are provided regarding a process flow for updating the media content displayed based at least in part on a user scrolling through textual information according to various embodiments. FIG. 44 is a flow diagram showing an update media module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the update media module may be invoked by another module to update the media content displayed such as, for example, the topic module previously described. However, with that said, the update media module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • For example, a user may be viewing the steps for a maintenance task displayed on a first view pane on a window. At the same time, illustrations for the maintenance task may be provided on a second view pane. For instance, a step in the maintenance task may involve a particular component and an illustration of the component may be provided to aid the user in locating the component on the actual item. Accordingly, in particular embodiments, the window may be configured to display the two panes on non-overlapping portions of the window.
  • Therefore, as the user scrolls through the various steps of the maintenance task, the process flow 4400 begins with the update media module identifying the first occurrence of media content mentioned in the textual information displayed on the window in Operation 4410. In various embodiments, the first occurrence is determined from the top of the window. Therefore, the update media module searches the textual information starting at the top of the window until the module finds a reference to media content in the text. For example, the first reference to media content may be a reference to a figure, a video, an image, a sound recording, and/or the like.
  • The update media module then retrieves the media content associated with the reference in Operation 4415. In particular embodiments, the reference to the media may include a hyperlink that the user may select to retrieve the media content if desired. Therefore, the update media module may obtain the storage location of the media content in the IETM from the hyperlink and retrieve the media content from the storage location. In other embodiments, the update media module may obtain the storage location from the data (e.g., data module) for the textual information being viewed. In still other embodiments, the update media module may use other processes for retrieving the media content as those of ordinary skill in the art can envision in light of this disclosure. Once retrieved, the update media module updates the view pane used for displaying media by causing the retrieved media content to be displayed in the view pane in Operation 4420.
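  • The following is a minimal, hypothetical sketch (in Python) of the scroll-driven media update described above; the MEDIA_INDEX mapping and the reference pattern are illustrative assumptions standing in for the hyperlink or data module lookup.

import re
from typing import List, Optional

# Hypothetical index mapping a media reference label in the text to a storage
# location in the IETM (in practice obtained from a hyperlink or the data module).
MEDIA_INDEX = {
    "Figure 2, Sheet 2": "media/figure-2-sheet-2.svg",
    "Figure 3, Sheet 1": "media/figure-3-sheet-1.svg",
}
REFERENCE_PATTERN = re.compile(r"Figure \d+, Sheet \d+")

def first_visible_media(visible_lines: List[str]) -> Optional[str]:
    # Search the textual information from the top of the window down and return the
    # storage location for the first media reference found, or None if none is visible.
    for line in visible_lines:
        match = REFERENCE_PATTERN.search(line)
        if match and match.group(0) in MEDIA_INDEX:
            return MEDIA_INDEX[match.group(0)]
    return None

# As the user scrolls, the caller passes in the lines currently visible and updates
# the media view pane whenever the returned storage location changes.
visible = ["Step 4. Remove the access cover (Figure 2, Sheet 2).", "Step 5. ..."]
print(first_visible_media(visible))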
  • FIG. 45 provides an example of media content being updated as a user scrolls through the textual information for a topic according to various embodiments. As shown in this example, the first occurrence of media content mentioned in the textual information shown in the view pane displayed on the left side of a window is FIG. 2, Sheet 2 4500. As a result, the corresponding illustration for FIG. 2, Sheet 2 4510 is shown in the view pane displayed on the right side of the window. Once the user has scrolled down the textual information so that the reference to FIG. 2, Sheet 2 can no longer be seen in the view pane, then the media content displayed in the view pane on the right is updated to reflect the media content that is now the first to be referenced in the textual information. It is noted that in particular embodiments multiple view panes may be used to display the media content so that multiple occurrences of media content mentioned in the textual information may be shown on a window at the same time.
  • Connector Module
  • Turning now to FIG. 46A, additional details are provided regarding a process flow for providing functionality for an electrical connector (e.g., a plug) based at least in part on a user selecting the electrical connector according to various embodiments. FIG. 46A is a flow diagram showing a connector module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the connector module may be invoked by another module to provide the functionality such as, for example, the topic module previously described. However, with that said, the connector module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, the user may be viewing some type of sequential information via the IETM such as, for example, a maintenance procedure and/or task that the user is performing while out in the field. In this example, the maintenance procedure/task may involve troubleshooting an electrical problem that is being experienced with respect to an item that the user is viewing documentation for via the IETM. Accordingly, the maintenance procedure/task may entail the user testing various pins found in an electrical connector (e.g., a plug) to ensure the pins are working properly. Here, the user may have a piece of testing equipment configured to be connected to a pair of pins so that the pins can be tested. However, physically identifying the pair of pins in the connector may be difficult due to the size of the connector and/or the number of pins found in the connector. Therefore, the user may become quite frustrated with attempting to physically identify the pair of pins so that he or she may connect the testing equipment to the correct pins as indicated in the maintenance procedure/task.
  • Accordingly, the connector may be referenced in the content (e.g., textual information) of the maintenance procedure/task by some type of identifier such as, for example, the name of the connector, the part number associated with the connector, and/or the like. Further, the connector may be configured as selectable from the content of the maintenance procedure/task. For example, the textual information for the maintenance procedure/task may be provided in a first view pane on a window for the IETM and an identifier may be provided in the textual information that is selectable as a hyperlink. In another example, some type of selection mechanism such as a button may be provided for the connector. Therefore, the user may select the connector from the content and as a result, the connector module is invoked.
  • Thus, in various embodiments, the process flow 4600 begins with the connector module retrieving media content for the connector and displaying the media content in Operations 4610 and 4615. For example, the media content may include one or more illustrations of the connector such as one or more 2D or 3D graphics. In addition, the media content may display the pin configuration (a plurality of pins) for the connector. Here, the maintenance procedure/task may be provided in a first view pane displayed on the window and the media content for the connector may be provided in a second view pane displayed on the window. For instance, in particular embodiments, the window may be configured to display the first and second view panes on non-overlapping portions of the window.
  • In addition to displaying the media content, the connector module in various embodiments generates and displays a preview for the connector in Operations 4620 and 4625. For instance, in particular embodiments, the connector preview may be provided as a window separate from the window displaying the maintenance procedure/task and media content. In some embodiments, the preview window may be superimposed over a portion of the window displaying the maintenance procedure/task and media content. Accordingly, the connector preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the connector. In some embodiments, the preview is configured to provide a list of the pins that are found in the connector. For example, the preview may provide the list of pins as a dropdown menu. Here, each of the pins may be configured as selectable by the user. For example, a selection mechanism such as a checkbox may be provided that can be selected by the user to select the associated pin. In addition, in some embodiments, the pins may be configured as selectable in the media content.
  • Therefore, the connector module determines whether input has been received indicating the user has selected a pin from the preview (and/or media content) using a first selection mechanism in Operation 4630. For example, the connector module determines whether input has been received indicating the user has selected the checkbox for the pin. If the user has selected the pin using the first selection mechanism, then the connector module determines whether the pin is already highlighted in Operation 4635. If that is the case, then the user may be attempting to unselect the pin in the preview and/or media content. Therefore, if the pin is already highlighted, the connector module removes the highlighting for the pin in Operation 4640. This operation may involve the connector module removing highlighting of the pin in the media content and/or in the preview window. For example, the pin may be displayed on the media content in a particular color (e.g., blue) to highlight the pin from the other pins for the connector, which may be displayed in a different color (e.g., gray). Therefore, the connector module may remove the highlighting by causing the pin to return to being displayed in the same color (e.g., gray) as the other pins, as well as unchecking the checkbox associated with the pin in the preview.
  • Accordingly, in particular embodiments, the connector module may be configured to allow the user to select a single pair of pins at any given time. As previously mentioned, the testing equipment may be designed for testing a pair of pins. Therefore, the connector module may be configured to format the display of the remaining pins that have not been selected using some type of deemphasized format in some embodiments. If this is the case, then the connector module may remove the deemphasized format of the remaining pins and display the pins as normal in Operation 4645 in response to the user deselecting one of the pins. This may allow the user to then select a different pin for the pair of pins that is to be tested.
  • Returning to Operation 4635, if the selected pin is not currently highlighted, then the connector module causes the selected pin to be displayed as highlighted in the media content in a first format in Operation 4650. For example, the connector module may highlight the pin in the media content by formatting the pin in bold, in a particular color, with a border, in a different font, any combination thereof, and/or the like. Such formatting may allow the pin to stand out from the other pins displayed in the media content for the connector. Accordingly, displaying the pin as highlighted in the media content in the first format may enable the user to identify the pin in the actual connector while in the field. Note that, in particular embodiments, the connector module may also apply some type of highlighting format to the information on the pin provided in the preview.
  • At this point, the connector module in various embodiments determines whether the user has selected a pair of pins in Operation 4655. If so, then the connector module displays the remaining pins for the connector that have not been selected in a deemphasized format in Operation 4660. Depending on the embodiment, the deemphasized format may entail displaying the remaining pins in the media content and/or preview in a particular color (e.g., dark grey), with a particular background, in a different font, and/or the like. Generally speaking, the connector module may be configured to display the remaining pins in a deemphasized format that demonstrates the pins are not currently selected by the user.
  • In addition, in some embodiments, the deemphasized format may be configured to prevent the user from selecting another pin to highlight once the user has selected a pair of pins. However, with that said, those of ordinary skill in the art will understand that the connector module can be configured in other embodiments to prevent the user from selecting another pin to highlight based at least in part on a different number of pins besides two (a pair). For example, the testing equipment being used by the user may allow for the testing of three pins, or four pins, at any given time. Therefore, the connector module may be configured to prevent the user from selecting more than three pins or four pins to display as highlighted in the media content and/or preview.
  • Finally, returning to Operation 4630, if the connector module determines input has not been received indicating the user has selected the pin using the first selection mechanism, then the connector module may determine whether input has been received indicating the user has instead selected the pin using a second, different selection mechanism (e.g., using his or her mouse to hover over the pin in the preview and/or on the media content) in Operation 4665. If the user has selected the pin using the second selection mechanism, then the connector module causes the selected pin to be displayed as highlighted in the media content in a second format in Operation 4670. In addition, in particular embodiments, the connector module may highlight the pin in the preview. For example, the second format may involve displaying the pin in a second color (e.g., green) in the media content that is a different color (e.g., blue) than had the user selected the pin using the first selection mechanism.
  • The connector module then determines whether the user wishes to exit out of the preview of the connector in Operation 4675. If so, then the process flow 4600 ends. If not, then the connector module continues to monitor the user's selection of pins.
  • Accordingly, in various embodiments, the second selection mechanism (e.g., hovering over the pin in the preview and/or the media content using a cursor) is intended to provide the user with a quick way of identifying the pin in the connector. Such functionality may allow the user to move freely from pin to pin in the preview and/or media content and identify the pin pair he or she is specifically looking for by viewing which corresponding pin is highlighted in the preview and/or media content.
  • In addition, in various embodiments, the first selection mechanism (e.g., selecting the corresponding checkbox for the pin in the preview and/or clicking on the pin in the media content) is intended to provide the user with a way to select a pin that remains selected. This can allow the user, while working in the field, to select a pair of pins that are then displayed as highlighted and can be referenced by the user while locating the actual pins in the physical connector.
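  • By way of illustration, the following minimal sketch (in Python) approximates the pin selection logic described above; the ConnectorPreview class, pin identifiers, and format labels are hypothetical placeholders for the first/second selection mechanisms and the deemphasized format.

from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class ConnectorPreview:
    pins: Set[str]
    max_selected: int = 2                      # e.g., a pair of pins for the testing equipment
    selected: Set[str] = field(default_factory=set)

    def toggle_pin(self, pin: str) -> Dict[str, str]:
        # First selection mechanism (e.g., checkbox or click): select or deselect the pin
        # and return a display format for every pin shown in the media content.
        if pin in self.selected:
            self.selected.remove(pin)          # already highlighted -> remove highlighting
        elif len(self.selected) < self.max_selected:
            self.selected.add(pin)             # highlight in the first format
        limit_reached = len(self.selected) >= self.max_selected
        formats = {}
        for p in sorted(self.pins):
            if p in self.selected:
                formats[p] = "first-format (e.g., blue)"
            elif limit_reached:
                formats[p] = "deemphasized (e.g., dark grey)"
            else:
                formats[p] = "normal (e.g., grey)"
        return formats

    def hover_pin(self, pin: str) -> str:
        # Second selection mechanism (e.g., hovering): transient highlight in a second format.
        return "second-format (e.g., green)" if pin in self.pins else "normal (e.g., grey)"

preview = ConnectorPreview(pins={"A1", "A2", "B1", "B2"})
preview.toggle_pin("A1")
print(preview.toggle_pin("B2"))    # the pair is now selected; remaining pins deemphasized
print(preview.hover_pin("A2"))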
  • FIG. 46B provides an example of a window displaying a first view pane 4680 on the left side of the window providing the textual information for a maintenance procedure/task and a second view pane 4681 on the right side of the window providing media content (e.g., an illustration) of the connector and pins according to various embodiments. In this example, the user has selected an identifier 4682 for the connector found in the textual information for the maintenance procedure/task. As a result, a preview window 4683 is displayed for the connector in which a dropdown has been provided to allow the user to select a pair of pins 4684, 4685. As a result of the user selecting the pair of pins 4684, 4685, the pins 4686, 4687 are highlighted in the media content displayed in the second view pane 4681. FIG. 46C provides an example in which the user has selected one of the pins 4686 using a second selection mechanism (e.g., hovering over the pin 4684 with his or her cursor in the preview window 4683). As a result, the pair of pins 4686, 4687 are highlighted in the media content using two different formats. Specifically, the pair of pins 4686, 4687 are displayed with the first pin 4686 highlighted in a first color and the second pin 4687 highlighted in a second, different color.
  • Highlight Unit Module
  • Turning now to FIG. 47A, additional details are provided regarding a process flow for highlighting a unit displayed in media content such as an illustration (e.g., 2D or 3D graphic) or mentioned in text based at least in part on a user selecting the unit according to various embodiments. FIG. 47A is a flow diagram showing a highlight unit module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the highlight unit module may be invoked by another module to highlight a unit such as, for example, the topic module previously described. However, with that said, the highlight unit module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The term “unit” may refer to a component of an item, equipment, a tool, and/or the like. Accordingly, a unit may be referenced in the textual information for a topic, as well as displayed in media content such as an illustration. For example, a user may be viewing the instructions for performing a maintenance task and the instructions may reference a particular part that is to be replaced during the task. Many times, some type of media may also be provided such as an illustration to assist the user in actually replacing the part. For instance, the instructions may be displayed on a first view pane of a window and the illustration may be displayed on a second view pane of a window. Here, in particular embodiments, the part may be provided in the first and/or second view panes as selectable, although the part may not necessarily be selectable in every embodiment. Therefore, in response to the user selecting one or more units in one of the view panes, the highlight unit module may be invoked.
  • The process flow 4700 begins with the highlight unit module determining whether input has been received indicating a selection of text referencing one or more units in Operation 4710. For example, the user may be viewing the steps for a maintenance procedure/task and may select a particular step for the procedure/task in the textual information displayed on a window. Accordingly, the step may refer to one or more units (e.g., one or more components). The highlight unit module may be configured to identify the reference(s) to the unit(s) based at least in part on the unit(s) (e.g., unit name and/or number) being selectable within the textual information. In other embodiments, the highlight unit module may be configured to identify the reference(s) to the unit(s) by searching the selected text and comparing terms within the text to a list of unit(s) (e.g., component names, part names and/or numbers, and/or the like).
  • The highlight unit module then causes the unit(s) to be displayed as highlighted in the media content being displayed on the window in Operation 4715. Accordingly, the highlight unit module may highlight the unit(s) using different formatting depending on the embodiment. For instance, the highlight unit module may highlight the unit(s) in the media content by displaying the unit(s) in bold, in a particular color, with a marker, with a border, in a different font, any combination thereof, and/or the like. As a result, the user is then able to identify the unit(s) referenced in the selected text in the media content more easily.
  • The highlight unit module is configured in various embodiments to perform similar functionality with respect to the user selecting one or more units displayed in the media content. Therefore, if the highlight unit module determines it has not received a selection of text containing one or more units, then the module determines whether it has received a selection of one or more units in the media content currently being displayed on the window in Operation 4720. The unit(s) displayed in the media content may be selectable and therefore, the user may have selected one or more of the units displayed in the media content. For example, the user may select a unit by clicking on the unit in the media content. In particular instances, the user may be able to select multiple units by holding down a key while clicking on the units such as, for example, the ctrl key or the alt key. Those of ordinary skill in the art can contemplate other approaches that may be used to select the unit(s) in the media content in light of this disclosure.
  • Similar to the user selecting text referencing one or more units, the highlight unit module then causes the unit(s) to be displayed as highlighted in the textual information being displayed on the window in Operation 4725. Again, the highlight unit module may highlight the unit(s) using different formatting depending on the embodiment.
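  • The following minimal sketch (in Python) illustrates one possible cross-referencing approach for the bidirectional highlighting described above; the TEXT_TO_MEDIA mapping, unit names, and callout identifiers are hypothetical.

from typing import Dict, List, Set

# Hypothetical cross-reference between unit identifiers referenced in the textual
# information and the graphic elements (callouts) depicting them in the media content.
TEXT_TO_MEDIA: Dict[str, List[str]] = {
    "bolt (12)": ["callout-12"],
    "bracket (7)": ["callout-7a", "callout-7b"],
}
MEDIA_TO_TEXT: Dict[str, str] = {
    callout: unit for unit, callouts in TEXT_TO_MEDIA.items() for callout in callouts
}

def highlight_from_text(selected_text: str) -> Set[str]:
    # Identify the unit(s) referenced in the selected text and return the callouts
    # to highlight in the media content.
    return {c for unit, callouts in TEXT_TO_MEDIA.items() if unit in selected_text for c in callouts}

def highlight_from_media(selected_callouts: Set[str]) -> Set[str]:
    # Return the textual unit references to highlight for the selected graphic elements.
    return {MEDIA_TO_TEXT[c] for c in selected_callouts if c in MEDIA_TO_TEXT}

print(highlight_from_text("Remove the bolt (12) securing the bracket (7)."))
print(highlight_from_media({"callout-7a"}))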
  • FIG. 47B provides an example of a window displaying a first view pane on the left side of the window providing the textual information for a topic and a second view pane on the right side of the window providing an illustration of the topic. In this example, the user has selected a particular step 4730 of a procedure/task referencing parts 4735, 4740, 4745 displayed in the illustration and as a result, the parts 4750, 4755, 4760 have been automatically highlighted in the illustration according to various embodiments. FIG. 47C provides an example in which the user has selected a part 4765 in the illustration in the view pane displayed on the right side of the window and the references to the part 4770, 4775 are automatically highlighted in the textual information in the view pane displayed on the left side of the window according to various embodiments.
  • End of Topic Module
  • Turning now to FIG. 48, additional details are provided regarding a process flow for providing functionality when a user has reached the end of a topic according to various embodiments. FIG. 48 is a flow diagram showing an end of topic module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the end of topic module may be invoked by another module to invoke functionality such as, for example, the topic module previously described. However, with that said, the end of topic module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously mentioned, various embodiments provide the user with certain functionality when the end of the content for a topic has been detected. For example, the topic module may invoke the end of topic module in response to detecting the user has scrolled to the end of the textual information provided for a topic. As previously noted, the content for a topic may be formatted in various embodiments according to S1000D standards. Therefore, the content for a topic may be stored in the IETM with respect to data modules and the end of the topic may refer to the end of content found in a particular data module for the topic (e.g., the end of the data module).
  • Further, the functionality may only be provided at the end of the topic in particular embodiments to ensure the user has viewed and/or processed/used all of the content for a topic. For example, the user may be viewing a topic involving a task with many steps that are to be performed by the user. Therefore, end of topic functionality may only be provided upon detecting the user has reached the end of the content, that is, reached the end of the steps for the task, to ensure the user has performed all of the steps. In some embodiments, other criteria may also be associated with providing end of topic functionality. For instance, returning to the example, the user may also need to acknowledge he or she has performed all of the steps in the task by checking off the steps before the end of topic functionality is provided.
  • Accordingly, the process flow 4800 begins with the end of topic module providing an end of topic mechanism (e.g., a button) for the content displayed for the topic on a window in Operation 4810. In addition, the end of topic module in particular embodiments provides a previous topic mechanism (e.g., a button) and a next topic mechanism (e.g., a button) for the content displayed for the topic on the window in Operations 4815 and 4820.
  • At this point, the end of topic module determines whether input has been received indicating the user has selected the previous topic mechanism in Operation 4825. If so, then the end of topic module generates a preview for the previous topic found just before the current topic being viewed by the user in the table of contents for the technical documentation in Operation 4830 and provides the preview for display in Operation 4835.
  • For instance, in particular embodiments, the previous topic preview may be provided as a window separate from the window displaying the topic. In some embodiments, the preview window may be superimposed over a portion of the window displaying the topic. Accordingly, the previous topic preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the previous topic. In some embodiments, the preview is configured to provide only a preview of some of the content found in the technical documentation on the previous topic. For example, the preview may be configured in particular embodiments to provide the first five to fifty lines of textual information that the user would be provided with if the user were to select the previous topic to view the entire content for the topic.
  • If the user has not selected the previous topic mechanism, then the end of topic module determines whether input has been received indicating the user has selected the next topic mechanism in Operation 4840. If so, then the end of topic module generates a preview of the next topic found just after the current topic being viewed by the user in the table of contents for the technical documentation in Operation 4845 and provides the preview for display in Operation 4850. Accordingly, the preview for the next topic may be configured in the same manner as the preview for the previous topic.
  • However, if the user has not selected the next topic mechanism, then the end of topic module determines whether input has been received indicating the user has selected the end of topic mechanism in Operation 4855. If so, then the end of topic module executes the functionality associated with the end of topic mechanism in Operation 4860. The functionality may perform different operations depending on the embodiment. For instance, in some embodiments, the functionality may open the table of contents for the technical documentation at the place in the table of contents where the current topic being viewed by the user is located and may highlight the current topic in the table of contents. Here, the table of contents may be provided in a separate window and/or a view pane displayed on the window displaying the topic. Such functionality may allow the user to then view other topics in the vicinity of the current topic to help the user navigate to a new topic. In other embodiments, the functionality may take the user back to the top of the content for the topic (e.g., back to the top of the data module).
  • In other embodiments, the functionality may allow the user to view other objects for the item. For example, the user may be performing maintenance on a particular aircraft of a type of aircraft found in an airline's fleet and may be viewing a maintenance task. Accordingly, the user may be signed into the IETM using credentials identifying the particular aircraft so that the maintenance work (e.g., job) being performed on the aircraft is tracked and recorded. However, the user may be assigned to perform the same maintenance on another aircraft of the same type found in the airline's fleet. Therefore, the end of the topic functionality may allow the user to view the other aircraft of the same type in the airline's fleet and then enable the user to move easily to the other aircraft in the IETM (e.g., sign into the other aircraft in the IETM) while maintaining the same maintenance task (e.g., the same topic). Those of ordinary skill in the art can envision other functionality that may be invoked in other embodiments in light of this disclosure. It is noted that although not shown in the process flow 4800 provided in FIG. 48, the end of topic module is configured in some embodiments to cause the end of topic mechanism, the previous topic mechanism, and/or the next topic mechanism to be removed from display if the user scrolls to a position in the content for the topic that is no longer at the end of the content.
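  • By way of illustration, the following minimal sketch (in Python) shows one possible gating condition for displaying the navigation mechanisms described above; the control names and the step-acknowledgement criterion are illustrative assumptions rather than requirements of the disclosure.

from typing import List

def end_of_topic_controls(end_of_content_reached: bool, steps_acknowledged: List[bool]) -> List[str]:
    # The end of topic, previous topic, and next topic mechanisms are provided only once
    # the user has scrolled to the end of the content and (in this sketch) acknowledged
    # every step of the task; otherwise no controls are shown.
    if end_of_content_reached and all(steps_acknowledged):
        return ["previous_topic_button", "next_topic_button", "end_of_topic_button"]
    return []

print(end_of_topic_controls(True, [True, True, True]))   # controls displayed
print(end_of_topic_controls(True, [True, False, True]))  # withheld until all steps are checked off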
  • FIG. 49A provides an example of an end of topic mechanism (e.g., a button) 4900 provided at the end of the content for a topic according to various embodiments. FIG. 49B provides an example in which the functionality performed as a result of the user selecting the end of topic mechanism 4900 is displaying a window with the table of contents at a position 4910 in the table of contents highlighting the current topic being viewed by the user.
  • Verbal Command Setup Module
  • In various embodiments, the IETM may include functionality that allows users to use verbal commands for interacting with content being viewed through the IETM. For example, a user may be a member of maintenance personnel who is out in the field performing maintenance on a component for an item. The user may be viewing documentation for the component via the IETM. Here, the documentation may involve content on a maintenance procedure and/or task the user is performing on the component, or the documentation may involve content on the component itself. The maintenance the user is performing may be quite involved and require the user to use both of his or her hands in performing the maintenance. Therefore, it may be inconvenient for the user to have to interact with the IETM using his or her hands. As a result, the user may wish to use verbal commands to interact with the IETM.
  • Accordingly, functionality is provided in various embodiments to allow the user to set up verbal commands for interacting with content through the IETM. Specifically, in particular embodiments, functionality is provided that allows the user to identify an action to be performed based at least in part on a particular verbal command provided by the user. For instance, the action may involve manipulating a user interface control element found on a window of the IETM such as, for example, checking a checkbox control element, selecting a button control element, selecting an item from a dropdown control element, and/or the like. The action may involve manipulating content being displayed by the IETM such as, for example, scrolling through content, highlighting a portion of content, selecting a portion of content, having a portion of content read out audibly, and/or the like. As further discussed herein, the functionality may be configured to allow the user to identify and associate various verbal commands with actions, user interface control elements, and/or the like. In addition, the functionality may be configured to allow the user to associate such verbal commands and/or actions with particular types of content (e.g., portions of content).
  • Turning now to FIG. 50A, additional details are provided regarding a process flow for setting up a verbal command for a user according to various embodiments. FIG. 50A is a flow diagram showing a verbal command setup module for performing such functionality according to various embodiments of the disclosure. Accordingly, the verbal command setup module may be executed by an entity such as the management computing entity 100 and/or the user computing entity 110 previously discussed. For instance, in various embodiments, the verbal command setup module may be executed in response to a user selecting an option to set up a verbal command from a window provided through the IETM.
  • In particular embodiments, the IETM may provide one or more windows that can be used by the user in setting up verbal commands for various actions. Accordingly, the user may be able to select a particular verbal command and action to be performed for the verbal command. Therefore, the process flow 5000 begins with the verbal command setup module receiving the verbal command in Operation 5010 and the action to be performed in Operation 5015. For instance, the verbal command may be to interact with a user interface element being displayed through the IETM. For example, the verbal command may be the term “check” and the action may be to check a checkbox control element found in a portion of content being displayed on the IETM and having focus. In another example, the verbal command may be the term “click” and the action may be to click a button control element found in a portion of content being displayed on the IETM and having focus. Yet, in another example, the verbal command may be the term “next” and the action may be to jump to a next portion of content (e.g., to a next step in a procedure, task, and/or checklist) being displayed on the IETM. Still, in another example, the verbal command may be the term “scroll down” and the action may be to scroll down through a portion of content being displayed on the IETM. Those of ordinary skill in the art can envision various combinations of verbal commands, actions, and/or types of content that may be set up by the user in light of this disclosure.
  • In addition, in various embodiments, the user may be requested to provide one or more samples of the user providing the verbal command. For example, one or more audio samples of the user speaking the verbal command may be recorded. Therefore, as a result, the verbal command setup module receives the sample(s) in Operation 5020.
  • At this point, in various embodiments, the one or more samples provided by the user may be used in training a machine learning model. For instance, in particular embodiments, the verbal command machine learning model may be a model configured to perform some type of automatic speech recognition on the verbal command to generate a representation of the verbal command that can then be mapped to an action to perform for the verbal command. For example, in some embodiments, the verbal command machine learning model may be configured to process a verbal command and generate the action to be performed based at least in part on the verbal command. In these embodiments, the verbal command machine learning model may generate a feature representation of the verbal command to map the feature representation directly to an applicable action. Therefore, the output of such a model is the action itself to be performed.
  • In other embodiments, the verbal command machine learning model may be configured to process a verbal command and generate a representation of the verbal command that can then be used in identifying an action to be performed. For example, the verbal command machine learning model may generate a textual representation of the verbal command. Accordingly, the textual representation may then be used in identifying any keywords that appear in the verbal command, and these keywords may then be used in identifying an action to perform based at least in part on the verbal command. Note that a “keyword” may include a single word, a combination of words such as a phrase, and/or the like.
  • Accordingly, depending on the embodiment, the verbal command machine learning model may be any one of a number of different types of supervised and/or unsupervised machine learning models such as, for example, Hidden Markov models, conventional recurrent neural networks (RNNs), gated recurrent unit neural networks (GRUs), long short-term memory neural networks (LSTMs), and/or the like. In addition, the verbal command machine learning model may be configured in some embodiments as an ensemble involving multiple machine learning models and/or algorithms.
  • Further, the verbal command setup module may be configured in particular embodiments to preprocess the one or more samples and/or extract features from the one or more samples prior to using them to train and test the verbal command machine learning model. For example, in some embodiments, the verbal command setup module may be configured to preprocess the sample(s) to remove background noise and/or silence, to normalize the volume of the sample(s) to a standard level, to apply pre-emphasis to boost high-frequency components of the audio signal(s) for the sample(s), and/or the like. In addition, in some embodiments, the verbal command setup module may be configured to extract one or more features from the sample(s) such as, for example, zero crossing rate, spectral rolloff, Mel-frequency cepstral coefficients (MFCC), chroma frequencies, and/or the like.
  • Accordingly, the one or more samples provided by the user may be broken down into training sample(s) and testing sample(s). Therefore, the verbal command setup module trains the verbal command machine learning model using the training sample(s) (e.g., extracted features of the sample(s)) in Operation 5025. Once trained, the verbal command setup module determines whether the model is trained to an acceptable level for generating the action identified by the user for the verbal command in Operation 5030. Here, in particular embodiments, the verbal command setup module may be configured to determine whether the verbal command machine learning model can generate the appropriate action for the testing samples to a certain level of performance (e.g., satisfy a threshold level of performance). If the verbal command setup module determines the performance of the verbal command machine learning model is not acceptable, then the verbal command setup module returns to Operation 5020 and receives additional sample(s) from the user and further trains the model on the additional samples.
  • Once the verbal command machine learning model is trained to an acceptable level, the verbal command setup module stores the model in Operation 5035 so that it may be used for processing verbal commands received from the user while using the IETM. Accordingly, the verbal command machine learning model may be trained for processing a variety of commands to perform a variety of actions. In addition, depending on the embodiment, the verbal command machine learning model may be trained and used for a specific user or for multiple users. That is to say, in particular embodiments, a verbal command machine learning model may be developed and trained for each individual user, while in other embodiments, a verbal command machine learning model may be developed and trained for multiple users. As detailed further herein, once trained, the verbal command machine learning model can then be used in generating actions to perform based at least in part on verbal commands received from the user while the user is viewing documentation through the IETM. Finally, the verbal command machine learning model may be further trained over time as samples of verbal commands are provided by the user during actual use. Such further training may help in fine-tuning the verbal command machine learning model.
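  • The following is a minimal, hypothetical training sketch (in Python) that loosely follows the setup flow described above. It assumes the librosa and scikit-learn libraries are available, uses randomly generated waveforms as stand-ins for the user's recorded samples, and substitutes a support vector classifier for the Hidden Markov or recurrent neural network models mentioned above; it is illustrative only.

import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

SR = 16000
COMMANDS = {"check": "check_focused_checkbox", "next": "jump_to_next_step"}  # hypothetical mapping

def extract_features(waveform: np.ndarray, sr: int = SR) -> np.ndarray:
    # Pre-emphasize the signal and summarize it as mean MFCCs (one fixed-length feature vector).
    emphasized = librosa.effects.preemphasis(waveform)
    mfcc = librosa.feature.mfcc(y=emphasized, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Stand-in "recordings": several short clips per command keyword (random noise here).
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.standard_normal(SR)) for _ in range(20)])
y = np.array([i % len(COMMANDS) for i in range(20)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(probability=True).fit(X_train, y_train)

# Accept the model only if it reaches a threshold level of performance;
# otherwise request additional samples from the user and retrain.
ACCEPTABLE = 0.5
if model.score(X_test, y_test) >= ACCEPTABLE:
    print("model stored for processing verbal commands at runtime")
else:
    print("request additional samples and continue training")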
  • Verbal Command Module
  • Turning now to FIG. 50B, additional details are provided regarding a process flow for processing a verbal command received from a user according to various embodiments. FIG. 50B is a flow diagram showing a verbal command module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the verbal command module may be invoked by another module to process a verbal command such as, for example, the topic module previously described. However, with that said, the verbal command module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • The process flow 5040 begins with the verbal command module receiving a verbal command in Operation 5045. For example, the verbal command may be received through an audio input of a user computing entity 110 being used by the user to view documentation in the IETM. Once received, the verbal command module identifies what portion of content that is being displayed by the IETM currently has focus in Operation 5050. Accordingly, in various embodiments, focus on a portion of content identifies the portion of content as having a center of interest and/or activity with respect to the content currently being provided through the IETM. For example, the user may be viewing content involving a checklist and the user may have selected a particular step of the checklist. Therefore, in this example, the selected step is identified as the portion of the content having focus.
  • Accordingly, focus on a portion of content may be accomplished using various mechanisms depending on the embodiment. For instance, the user may indicate completion of a particular portion of content (e.g., completion of a step in a checklist), and focus may automatically move to another portion of the content (e.g., focus may move automatically to the next step in the checklist). In other instances, the user may perform some type of action such as clicking on and/or hovering over a portion of content to convey focus on the portion of content. Those of ordinary skill in the art can envision multiple types of mechanisms that can be used to establish focus on a portion of content in light of this disclosure.
  • Once the portion of content having focus has been identified, the verbal command module generates an action based at least in part on the verbal command received from the user in Operation 5055. In particular embodiments, the verbal command module performs this operation by processing the verbal command (e.g., audio of the verbal command) using a verbal command machine learning model to generate the action. Accordingly, in some embodiments, the verbal command module may preprocess and/or extract one or more features from the verbal command (e.g., audio of the verbal command) before processing the verbal command (e.g., before processing the extracted feature(s) of the verbal command) using the verbal command machine learning model. In some embodiments, the verbal command machine learning model may be configured to process the verbal command and generate an action to perform based at least in part on the verbal command. Therefore, for these embodiments, the verbal command machine learning model can generate the action to be performed without further processing by the verbal command module.
  • In other embodiments, the verbal command machine learning model may be configured to generate a representation of the verbal command (e.g., a textual representation) by performing natural language processing on the verbal command, and the representation may then be used in generating the action to be performed. For example, the verbal command machine learning model may be a deep learning model such as a CNN configured to perform automatic speech recognition on the verbal command to generate the representation. Again, depending on the embodiment, the verbal command module may be configured to perform preprocessing and/or feature extraction on the verbal command prior to processing the verbal command using the verbal command machine learning model.
  • In these embodiments, the verbal command module may then identify any keywords found in the representation of the verbal command that may be used to identify an action to perform based at least in part on the verbal command. For instance, in some embodiments, the verbal command module may be configured to then use some type of data structure, such as a table, file, array, and/or the like, to reference and map/match the identified keyword(s) found in the textual representation with an action. As previously noted, a “keyword” may include a single word, combination of words such as a phrase, and/or the like.
  • At this point, the verbal command module determines whether the identified action to perform involves a user interface control element in Operation 5060. For example, the identified action to perform may be to scroll down through the portion of content currently having focus to another portion of the content. In another example, the identified action to perform may be to jump to a next step in a procedure, task, and/or checklist. In these examples, the identified action to perform does not necessarily involve a user interface control element. Therefore, in these examples, the verbal command module determines the identified action to perform does not involve a user interface control element and as a result, performs the identified action in Operation 5070.
  • On the other hand, the identified action to perform may involve a user interface control element. Therefore, if this is the case, the verbal command module identifies an applicable user interface control element for the action in Operation 5065. In particular embodiments, the verbal command module performs this operation by first identifying one or more applicable user interface control elements for the identified action to be performed, and then determining which of the applicable user interface control elements are found in the portion of content that currently has focus. For example, the verbal command received from the user may have been the term “check.” Here, the verbal command module may generate an action to perform that involves checking a user interface control element and determine that such an element associated with this action is a checkbox control element. Therefore, the verbal command module may determine whether a checkbox control element is present in the portion of content that currently has focus. If such an element is present, then the verbal command module performs the action by checking the checkbox control element in Operation 5070.
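  • By way of illustration, the following minimal sketch (in Python) shows one way a recognized transcript could be mapped to an action and checked against the portion of content having focus, corresponding to the keyword-lookup embodiment described above; the keyword table, action names, and control element labels are hypothetical.

from typing import Dict, Optional

# Hypothetical mapping from recognized keywords to actions, and from actions to the
# user interface control element type they require (None if no element is needed).
KEYWORD_TO_ACTION: Dict[str, str] = {
    "check": "check_checkbox",
    "click": "press_button",
    "next": "jump_to_next_portion",
    "scroll down": "scroll_down",
}
ACTION_REQUIRES: Dict[str, Optional[str]] = {
    "check_checkbox": "checkbox",
    "press_button": "button",
    "jump_to_next_portion": None,
    "scroll_down": None,
}

def handle_verbal_command(transcript: str, focused_elements: set) -> str:
    # Map the transcript to an action and perform it only if any required control
    # element is present in the portion of content currently having focus.
    keyword = next((k for k in KEYWORD_TO_ACTION if k in transcript.lower()), None)
    if keyword is None:
        return "no matching command"
    action = KEYWORD_TO_ACTION[keyword]
    required = ACTION_REQUIRES[action]
    if required is not None and required not in focused_elements:
        return "ignored: no {} in the focused portion of content".format(required)
    return "perform {}".format(action)

print(handle_verbal_command("check", {"checkbox", "button"}))
print(handle_verbal_command("click", {"checkbox"}))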
  • Thus, the verbal command module in various embodiments allows the user to perform functionality within the IETM using various verbal commands. Specifically, in various embodiments, such functionality may involve performing some type of action such as, for example, checking a checkbox control element, selecting a button control element, highlighting a portion of content, skipping to another portion of content, scrolling through a portion of content, launching a preview window, and/or the like. As a result, a user may be able to perform functionality that normally requires the user to physically interact (e.g., use an input device such as a mouse, pointer, touchscreen, and/or the like) with his or her user computing entity 110 to interact with documentation being viewed through the IETM by using verbal commands instead. Accordingly, such a capability may be very beneficial in instances where it is inconvenient for the user to physically interact with his or her user computing entity 110. This may also be true of a user of the IETM who is physically challenged and therefore may be unable to physically interact with his or her user computing entity 110.
  • Wiring Module
  • Turning now to FIG. 51A, additional details are provided regarding a process flow for providing functionality for wiring data according to various embodiments. FIG. 51A is a flow diagram showing a wiring module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the wiring module may be invoked by another module to invoke functionality such as, for example, the topic module previously described. However, with that said, the wiring module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, various types of content may be provided in different topics. One such type of content is wiring data. For instance, content involving wiring data may provide one or more illustrations of an electrical schematic of a wiring configuration used for the item. The electrical schematic may include a layout of a plurality of wires and a plurality of other components that make up the configuration. The other components may include articles such as harnesses, electrical equipment, connectors (e.g., plugs), track assemblies, and/or the like. Therefore, in particular embodiments, the topic module may determine whether the content for the topic currently being displayed involves wiring data and if so, the topic module invokes the wiring module.
  • Therefore, the process flow 5100 begins with the wiring module determining whether input has been received indicating the user who is viewing wiring data has selected a particular wire in the electrical schematic being displayed on a window in Operation 5110. As noted, the wiring data may entail one or more illustrations of the electrical schematic. Here, the individual wiring and/or components shown in the illustration(s) may be configured as selectable to invoke different functionality depending on the type of selection mechanism used by the user.
  • For instance, in some embodiments, the individual wiring may be configured so that if the user uses his or her mouse to hover over a particular wire shown in the schematic, then tracing of the wire in the schematic is displayed on the window. Here, the tracing may be shown by highlighting the wire in the schematic by, for example, bolding the wire, displaying the wire in a particular color, displaying the wire using a unique pattern, using a combination thereof, and/or the like.
  • However, if the user selects the particular wire using a second, different selection mechanism (e.g., clicking on the wire), then the wiring module generates a preview for the wire and provides the preview for display in Operations 5111 and 5112. Similar to other previews, the wire preview may be provided as a window separate from the window displaying the wiring data. In some embodiments, the preview window may be superimposed over a portion of the window displaying the wiring data. Accordingly, the wire preview may provide the user with information/data, tables, instructions, illustrations, other media content, links to additional and/or related information, and/or the like associated with the wire. In some embodiments, the preview is configured to provide only a preview of some of the content found in the technical documentation on the wire.
  • However, if the wiring module instead determines input has been received indicating the user has selected the particular wire using a third, different selection mechanism (e.g., alt-clicking on the wire) in Operation 5113, then the wiring module enables live wire for the particular wire in Operation 5114. As discussed further herein, live wire provides a window displaying a diagram with all of the terminal ends for the selected wire. Accordingly, the window is configured in particular embodiments so that the user can select portions of the wire between terminal ends within the diagram to view information on the portion of wire and terminal ends.
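  • The following minimal sketch (in Python) illustrates dispatching on the three selection mechanisms described above; the mechanism labels and return strings are hypothetical placeholders for the actual tracing, preview, and live wire functionality.

def handle_wire_selection(mechanism: str, wire_id: str) -> str:
    # Dispatch on the selection mechanism used for a wire in the electrical schematic:
    # hovering traces the wire, clicking opens a preview, and alt-clicking enables live wire.
    if mechanism == "hover":
        return "highlight tracing of {} in the schematic".format(wire_id)
    if mechanism == "click":
        return "open preview window for {}".format(wire_id)
    if mechanism == "alt-click":
        return "enable live wire diagram for {}".format(wire_id)
    return "no action"

for mech in ("hover", "click", "alt-click"):
    print(handle_wire_selection(mech, "W123"))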
  • If the user has not selected a wire in the electrical schematic, then the wiring module determines whether input has been received indicating the user has selected a component (other than a wire) displayed in the schematic in Operation 5120. If so, then the wiring module generates a preview for the component and provides the preview for display in Operations 5121 and 5122. Accordingly, the preview for the component may be configured in the same manner as the preview for the wire.
  • For example, the component selected by the user may be a connector displayed in the electrical schematic of the wiring configuration used for the item. Accordingly, in particular embodiments, the preview for the connector may display an illustration of the connector and a plurality of pins found on the connector. Here, each of the pins may be selectable by the user to generate a preview for the pin. Therefore, in this example, the wiring module may determine whether input has been received indicating the user has selected a particular pin displayed in the illustration for the connector in Operation 5123. If the user has selected a particular pin, then the wiring module generates and provides a preview for the pin for display in Operations 5124 and 5125. Again, the preview for the pin may be configured in the same manner as the preview for the wire and/or component. In addition, the pin may be highlighted in the illustration of the connector to help the user to better identify where the pin is located within the connector. This may be quite useful to an individual who is working in the field on the particular connector.
  • With that said, the preview for the connector may be configured in a similar fashion as the preview described above with respect to the connector module, with the wiring module having similar functionality as the connector module. Accordingly, the preview may provide a list of the pins found on the connector and allow for the user to select one or more pins (e.g., a pair of pins) to display on media content (e.g., an illustration) of the connector to assist the user in locating the pins on the physical connector while working in the field.
  • In some embodiments, the user may also be provided with a selection mechanism (e.g., a button) to generate a list of the components found in the electrical schematic of the wiring configuration displayed on the window. Each of the components may be identified by a reference designator (e.g., RefDes). Therefore, in these particular embodiments, the wiring module determines whether input has been received indicating the user has selected this selection mechanism in Operation 5130. If so, then the wiring module retrieves and provides the list of components for display in Operations 5131 and 5132. For example, in particular embodiments, the wiring module may cause the list of components to be displayed in a first view pane on the window while continuing to display the illustration of the electrical schematic in a second view pane on the window.
  • In addition, the components provided in the list may be selectable (e.g., may be displayed as a hyperlink and/or displayed with a selection mechanism such as a button) to allow the user to view information for the component. For example, in particular embodiments, the information may be displayed on a separate window and may provide a list of other electrical schematics found in the wiring data for the technical documentation on the item in which the component is shown. Therefore, upon displaying the list of components, the wiring module may determine whether input has been received indicating the user has selected a particular component found in the list in Operation 5133. If the user has selected a component found in the list, then the wiring module retrieves and provides the information providing the other electrical schematics in which the component is shown in Operations 5134 and 5135. In particular instances, the electrical schematics displayed in the list may also be selectable to allow the user to retrieve and view the schematic.
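  • By way of illustration, a minimal sketch (in Python) of the reference designator lookup described above is provided below; the SCHEMATICS_BY_REFDES index and the data module identifiers are hypothetical.

from typing import Dict, List

# Hypothetical index of which electrical schematics (data modules) depict each
# component, keyed by reference designator.
SCHEMATICS_BY_REFDES: Dict[str, List[str]] = {
    "P101": ["DM-WIRING-001", "DM-WIRING-007"],
    "J205": ["DM-WIRING-003"],
}

def list_components() -> List[str]:
    # Components shown in the current schematic, identified by reference designator.
    return sorted(SCHEMATICS_BY_REFDES)

def schematics_for(refdes: str) -> List[str]:
    # Other schematics in the wiring data in which the selected component appears.
    return SCHEMATICS_BY_REFDES.get(refdes, [])

print(list_components())
print(schematics_for("P101"))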
  • Accordingly, the wiring module determines whether to exit in Operation 5140. If not, then the wiring module returns to Operation 5110 to determine whether input has been received indicating selection of another wire. If instead the wiring module determines to exit, then it does so and the process flow 5100 ends.
  • FIG. 51B provides an example of a window displaying an electrical schematic of a wiring configuration used for an item. Here, the user has selected a particular wire 5150 shown in the schematic to generate and display a preview window 5151 for the wire superimposed over the window displaying the electrical schematic according to various embodiments. In addition, the tracing of the wire has been highlighted in the electrical schematic.
  • FIG. 51C provides an example of a preview window 5160 for a connector according to various embodiments as a result of the user selecting the connector 5161 in the electrical schematic. In this example, the preview window 5160 is superimposed over the window displaying the electrical schematic and provides an illustration of the connector (e.g., plug) displaying a plurality of pins found in the connector. Accordingly, the user has selected a particular pin 5162 and as a result, a preview window 5163 for the pin has been generated and displayed. In addition, the pin 5162 has been highlighted in the illustration of the connector.
  • FIG. 51D provides an example of a list of components found in the electrical schematic that has been generated and provided in a first view pane 5170 displayed on a window according to various embodiments. In this particular example, the electrical schematic continues to be provided in a second view pane 5171 displayed on the window. Finally, FIG. 51E provides an example of a list of other electrical schematics 5180 in which a selected component is shown that has been generated and displayed according to various embodiments. In this example, each of the schematics (and accompanying data modules) has been made selectable to allow the user to retrieve and view a schematic if desired.
  • Live Wire Module
  • Turning now to FIG. 52, additional details are provided regarding a process flow for providing live wire for a selected wire according to various embodiments. FIG. 52 is a flow diagram showing a live wire module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the live wire module may be invoked by another module, such as, for example, the wiring module previously described, to provide the live wire functionality. However, with that said, the live wire module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously discussed, a user may select a particular wire in an electrical schematic being displayed on a window using a particular selection mechanism (e.g., alt-clicking on the wire) and as a result, the wiring module may invoke the live wire module. Accordingly, the process flow 5200 begins with the live wire module generating a wire diagram displaying all of the terminal ends for the selected wire and providing the wire diagram for display in Operations 5210 and 5215. For instance, the live wire module may provide the diagram in a separate window or in a view pane displayed on an existing window.
  • Accordingly, in particular embodiments, each portion of the wire shown between two terminal ends is selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) in the wire diagram. Therefore, in these embodiments, the live wire module determines whether input has been received indicating the user has selected a portion of the wire in the diagram in Operation 5220. If so, then the live wire module provides information on the portion of the wire and the two terminal ends for display in Operation 5225. Here, depending on the embodiment, the information on the portion of the wire may be provided on a view pane displayed on the window displaying the wire diagram (with the wire diagram displayed on a separate view pane) or on a separate window.
  • Here, the information displayed on the portion of the wire may include such information as the material used for the wiring, properties for the portion of wire, the parts (e.g., part names and/or numbers) that are associated with the wire and/or terminal ends, location identifiers for the terminal ends, and/or the like. Accordingly, some of the information displayed for the portion of the wire may be selectable (e.g., displayed as a hyperlink and/or displayed with a selection mechanism such as a button) to allow further information to be displayed. For example, in some embodiments, the parts (e.g., the part names and/or numbers) are selectable, as well as the location identifiers for the terminal ends.
  • Therefore, in these particular embodiments, the live wire module determines whether input has been received indicating the user has selected one of the parts in Operation 5230. If so, then the live wire module generates and provides a preview for the part for display in Operations 5235 and 5240. Similar to other previews, the part preview may be provided on a separate window from the window displaying the wiring diagram. In some embodiments, the preview window may be superimposed over a portion of the window displaying the wiring diagram. Here, the live wire module may retrieve the information displayed for the preview from the parts data (e.g., parts data modules) found in the technical documentation on the item. In addition, the preview may provide interactive functionality such as a selection mechanism to enable the user to order the part from the IETM (as previously discussed).
  • Likewise, the live wire module determines whether input has been received indicating the user has selected one of the location identifiers for a terminal end displayed on the wire window in Operation 5245. If so, then the live wire module generates and provides a preview for the location for display in Operations 5250 and 5255. Similar to other previews, the location preview may be provided on a separate window from the window displaying the wiring diagram. In some embodiments, the preview window may be superimposed over a portion of the window displaying the wiring diagram. Accordingly, the preview may provide information on the location of the terminal end. The live wire module may retrieve such information from the wiring data (e.g., wire data modules) found in the technical documentation on the item.
  • At this point, the live wire module may determine whether input has been received indicating the user would like to exit from viewing the wire diagram in Operation 5260. If not, then the live wire module continues to monitor the user's interactions. Otherwise, the live wire module exits.
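  • The following TypeScript sketch outlines, under assumed data shapes, how the information displayed in Operation 5225 for a selected portion of a wire might be assembled from the wire diagram; none of the names below come from the disclosure.

```typescript
// Hypothetical shapes for the wire diagram generated in Operations 5210/5215.
interface TerminalEnd {
  id: string;
  locationId: string;   // e.g., a zone identifier for the terminal end
  partNumber: string;   // part associated with the terminal end
}

interface WireSegment {
  from: TerminalEnd;
  to: TerminalEnd;
  material: string;
  properties: Record<string, string>;
}

interface WireDiagram {
  wireId: string;
  segments: WireSegment[]; // each portion of the wire between two terminal ends
}

// Operation 5225: assemble the information shown when the user selects a
// portion of the wire in the diagram.
function segmentInfo(diagram: WireDiagram, segmentIndex: number) {
  const seg = diagram.segments[segmentIndex];
  if (!seg) throw new Error(`No segment ${segmentIndex} in wire ${diagram.wireId}`);
  return {
    wireId: diagram.wireId,
    material: seg.material,
    properties: seg.properties,
    parts: [seg.from.partNumber, seg.to.partNumber],       // selectable (Operation 5230)
    locations: [seg.from.locationId, seg.to.locationId],   // selectable (Operation 5245)
  };
}
```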
  • FIG. 53 provides an example of a wire diagram generated and displayed for a selected wire according to various embodiments. In this example, the user who is viewing the diagram has selected a portion of the wire 5300 between two terminal ends 5310, 5315, which is highlighted, and as a result, information is displayed for the portion of the wire 5300 and the two terminal ends 5310, 5315. Here, the parts (e.g., part numbers) and location identifiers (e.g., zones) are displayed as selectable (e.g., hyperlinks) to enable the user to select a part or a location identifier for a terminal end to generate previews providing information on the part or the location for the terminal end.
  • Crosshairs Module
  • In particular instances, a user may be viewing an illustration for a topic displayed on a window that provides a graph. Turning now to FIG. 54, additional details are provided regarding a process flow for placing crosshairs on the graph according to various embodiments. FIG. 54 is a flow diagram showing a crosshairs module for performing such functionality according to various embodiments of the disclosure. In this particular instance, the crosshairs module may be invoked as a result of a user who is viewing the graph invoking a mechanism (e.g., alt-click) to place crosshairs on the graph.
  • Therefore, the process flow 5400 begins with the crosshairs module determining whether input has been received identifying a location to place the crosshairs on the graph in Operation 5410. Accordingly, in various embodiments, the user moves a cursor over the graph displayed on the window to the position on the graph at which he or she would like to place the crosshairs and then invokes the appropriate mechanism. Such action identifies the location where the crosshairs module is to place the crosshairs. If the user has appropriately identified a location, then the crosshairs module causes the crosshairs to be placed on the graph at the location in Operation 5415. FIG. 55 provides an example of crosshairs 5500 placed on a graph displayed on a window according to various embodiments. The user may use this functionality to better identify the values associated with a particular location (e.g., the values associated with a particular location on a line) on the graph.
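  • A minimal sketch of the coordinate conversion behind such crosshair placement is shown below in TypeScript, assuming linear axes and a hypothetical viewport description; the disclosure does not specify how graph coordinates are computed.

```typescript
// Hypothetical description of the plotted area within the window.
interface GraphViewport {
  left: number; top: number; width: number; height: number; // pixel bounds
  xMin: number; xMax: number; yMin: number; yMax: number;   // axis value ranges
}

// Operation 5415: convert the cursor position at which the user invoked the
// crosshairs mechanism (e.g., alt-click) into graph values so the crosshairs
// can be drawn and the associated values identified.
function crosshairValues(view: GraphViewport, pixelX: number, pixelY: number) {
  const xFrac = (pixelX - view.left) / view.width;
  const yFrac = 1 - (pixelY - view.top) / view.height; // pixel y grows downward
  return {
    x: view.xMin + xFrac * (view.xMax - view.xMin),
    y: view.yMin + yFrac * (view.yMax - view.yMin),
  };
}
```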
  • 3D Graphics Module
  • Turning now to FIG. 56, additional details are provided regarding a process flow for providing functionality for media content involving 3D graphics according to various embodiments. FIG. 56 is a flow diagram showing a 3D graphics module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the 3D graphics module may be invoked by another module to provide functionality for 3D graphics such as, for example, the topic module previously described. However, with that said, the 3D graphics module may not necessarily be invoked by another module and may execute as a stand-alone module in other embodiments.
  • As previously noted, the content displayed for a particular topic may include media content. In some instances, the media content may involve 3D graphics. Here, for example, the topic may involve displaying the illustrated parts data for a component of an item. Accordingly, a table of the parts used for the component may be provided in a first view pane displayed on a window and media content for the component may be provided in a second view pane displayed on the window. Accordingly, in particular embodiments, the window may be configured to display the first and second view panes on non-overlapping portions of the window. The parts listed in the table may be selectable in the first view pane and the media content displayed in the second view pane may be a 3D graphic of the component. Therefore, in particular embodiments, the topic module may determine the media content for the topic currently being displayed is a 3D graphic and as a result, the topic module invokes the 3D graphics module.
  • Thus, the process flow 5600 begins with the 3D graphics module determining whether input has been received indicating the user has selected a part in the 3D graphic using a first selection mechanism (e.g., using his or her mouse to hover over the part in the graphic) in Operation 5610. If the user has selected the part using the first selection mechanism, then the 3D graphics module causes the selected part to be displayed as highlighted in both the graphic displayed in the second view pane and the table displayed in the first view pane in a first format in Operations 5611 and 5612. Accordingly, the part may be highlighted in the 3D graphic and the table using different formatting depending on the embodiment. For example, highlighting the part may be accomplished by formatting the part in bold, in a particular color, with a border, in a different font, any combination thereof, and/or the like. Therefore, the first format may involve displaying the part in a first color (e.g., green) in the 3D graphic and displaying the part in a separate color (e.g., blue) in the table.
  • If the user has not selected the part using the first selection mechanism, then the 3D graphics module may determine whether input has been received indicating the user has instead selected the part in the 3D graphic using a second, different selection mechanism (e.g., clicking on the part in the graphic) in Operation 5620. If the user has selected a part using the second selection mechanism, then the 3D graphics module causes the selected part to be displayed as highlighted in both the graphic displayed in the second view pane and the table displayed in the first view pane in a second format in Operations 5621 and 5622. For example, the second format may involve displaying the part in a second color (e.g., blue) in the 3D graphic and displaying the part in the separate color (e.g., blue) along with a border in the table.
  • In various embodiments, the first selection mechanism (e.g., hovering over the part in the 3D graphic using a cursor) is intended to provide the user with a quick way of identifying the part in the table of parts. Such functionality may allow the user to move freely from part to part in the 3D graphic and identify the part he or she is specifically looking for by viewing which corresponding part is highlighted in the table. Therefore, as the user moves from part to part using the first selection mechanism, the corresponding part highlighted in the table also moves, while the previous part selected using the first selection mechanism is no longer highlighted in particular embodiments.
  • The second selection mechanism (e.g., clicking on the part in the 3D graphic) is intended to provide the user with a way to select a part in the table that stays selected. For example, the user may want to view more information on a part that is available through the table and/or order the part using a mechanism (e.g., a button) provided along with the part in the table. Therefore, in this example, the user uses the second selection mechanism (e.g., clicking on the part in the 3D graphic) to select the corresponding part in the table. Here, the part stays selected even after the user moves his or her cursor off the part in the 3D graphic. In some embodiments, the user can select multiple parts by using the second selection mechanism.
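  • One way to model the two selection mechanisms described above, with a transient hover highlight and a persistent click selection, is sketched below in TypeScript; the class and method names are assumptions made for illustration only.

```typescript
type HighlightFormat = 'first' | 'second'; // first = hover format, second = click format

class PartHighlighter {
  private hovered: string | null = null;
  private selected = new Set<string>();

  // First selection mechanism (Operations 5611/5612): transient highlight
  // that follows the cursor from part to part.
  hover(partId: string | null): void {
    this.hovered = partId;
  }

  // Second selection mechanism (Operations 5621/5622): persistent selection;
  // in some embodiments multiple parts may be selected at once.
  toggleSelect(partId: string): void {
    if (this.selected.has(partId)) {
      this.selected.delete(partId);
    } else {
      this.selected.add(partId);
    }
  }

  // Format to apply to a part in both the 3D graphic and the parts table.
  formatFor(partId: string): HighlightFormat | null {
    if (this.selected.has(partId)) return 'second';
    if (this.hovered === partId) return 'first';
    return null;
  }
}
```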
  • In some instances, the user may wish to remove a part from being viewed in the 3D graphic so that he or she can view the remaining parts of the component better in the graphic. Therefore, in particular embodiments, the 3D graphics module determines whether input has been received indicating the user has selected a part to delete (e.g., using a selection mechanism such as right clicking on the part and selecting delete) in Operation 5623. If so, the 3D graphics module causes the part to be removed from being displayed in the 3D graphic in Operation 5624. Accordingly, a deleted part can be added back to the 3D graphic in some embodiments. Therefore, the 3D graphics module determines whether input has been received indicating the user wants to un-delete a part that has been removed from display in the 3D graphic in Operation 5625. If so, then the 3D graphics module causes the part to be displayed again in the 3D graphic in Operation 5626.
  • The 3D graphics module may be configured in various embodiments to allow for similar functionality based at least in part on the user selecting a part in the table. Therefore, the 3D graphics module may determine whether input has been received indicating the user has selected a part in the table in Operation 5630. If so, then the 3D graphics module causes the part to be displayed as highlighted in the 3D graphic in Operation 5631. In addition, in particular embodiments, the 3D graphics module causes the part to be zoomed in on and rotated in the 3D graphic in Operation 5632. In these particular embodiments, the 3D graphics module may be configured to cause the part to be zoomed in on in the 3D graphic with respect to the size of the part. The smaller the part, the more the part is zoomed in on in the 3D graphic. Likewise, the 3D graphics module may be configured to cause the part to be rotated to a better angle for viewing.
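  • The size-dependent zoom described above could be implemented with a simple heuristic such as the TypeScript sketch below, in which the zoom factor grows as the selected part's bounding size shrinks relative to the component; the specific formula and clamping values are assumptions, not part of the disclosure.

```typescript
// Hypothetical heuristic for Operation 5632: the smaller the selected part is
// relative to the whole component, the more it is zoomed in on, clamped to a
// sensible range so tiny parts do not fill the entire view pane.
function zoomFactorForPart(
  partBoundingRadius: number,
  componentBoundingRadius: number,
  maxZoom = 10
): number {
  const relativeSize = partBoundingRadius / componentBoundingRadius;
  // e.g., a part one-twentieth the size of the component is zoomed about 5x
  return Math.min(maxZoom, Math.max(1, 0.25 / relativeSize));
}
```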
  • Although not shown in FIG. 56, in some embodiments multiple selection mechanisms can be used to select a part in the table in a similar fashion to selecting a part in the 3D graphic. That is to say, some embodiments may be configured to allow a user to use a first selection mechanism (e.g., hovering over a part in the table) to highlight the part in a first format and use a second, different mechanism (e.g., clicking on the part in the table) to highlight the part in a second format.
  • In addition to removing parts from being displayed in the 3D graphic, parts may also be solely displayed in the 3D graphic in some embodiments. Therefore, the 3D graphics module may determine whether input has been received indicating the user has selected a part to display by itself in the 3D graphic (e.g., using a selection mechanism such as alt-clicking on the part) in Operation 5640. If so, then the 3D graphics module causes all the other parts of the component to be removed from being displayed in the 3D graphic in Operation 5641.
  • Finally, in particular embodiments, the user may be provided functionality to display an axis or axes in the 3D graphic to assist the user in rotating the graphic to obtain a better view of a part. Therefore, in these particular embodiments, the 3D graphics module determines whether input has been received indicating the user has selected to display the axis or axes in the 3D graphic (e.g., has selected an add axis/axes mechanism) in Operation 5650. If so, then the 3D graphics module causes display of the axis or axes in Operation 5651.
  • FIG. 57A provides an example of a window displaying a table of parts for a component in a first view pane and a 3D graphic of the component in a second view pane. In this example, a user has selected a particular part 5700 in the 3D graphic using a first selection mechanism (e.g., by using his or her mouse to hover over the part) and as a result, the part 5700 is highlighted in the 3D graphic and the corresponding part 5710 is highlighted in the table according to various embodiments. Here, both are highlighted using a first format involving showing the parts 5700, 5710 in color.
  • FIG. 57B again provides the window displaying the table of parts for the component in the first view pane and the 3D graphic of the component in the second view pane. However, the user has now selected the particular part 5700 in the 3D graphic using a second selection mechanism (e.g., by clicking on the part) and as a result, the part 5700 is highlighted in the 3D graphic and the corresponding part 5710 is highlighted in the table using a second format involving showing the parts 5700, 5710 in color and placing a border around the part 5710 in the table according to various embodiments. As previously explained, the first selection mechanism can allow the user to quickly identify where a part displayed in the 3D graphic is found in the table, while the second selection mechanism can allow the user to actually select a part in both the 3D graphic and the table so that he or she may view further information on the part and/or perform some type of functionality with respect to the part.
  • FIG. 57C again provides the window displaying the table of parts for the component in the first view pane and the 3D graphic of the component in the second view pane. In this example, the user is interested in a part 5715 listed in the table that is also shown in the 3D graphic 5720 and selects the part 5715 (e.g., clicks on the part 5715) in the table. As a result, the part 5715 is highlighted in the table and is highlighted in the 3D graphic 5720 according to various embodiments as shown in FIG. 57D. In addition, the part 5720 shown in the 3D graphic is zoomed in on and rotated so that the user can get a better view of the part 5720.
  • FIG. 57E provides an example of a 3D graphic where the user is interested in viewing a specific part 5725 that the user has selected, but would like to do so without the other part 5730 hindering the view. Therefore, in this example, the user selects the other part 5730 and provides an indication to remove the part from view in the 3D graphic according to various embodiments. As a result, the other part 5730 is removed from the 3D graphic so that only the part 5725 that the user is interested in viewing is provided in the 3D graphic as shown in FIG. 57F.
  • FIG. 57G provides an example of a 3D graphic where the user is again interested in viewing a specific part 5735 but would like to do so without the other parts shown in the graphic hindering the view. In this example, the user selects the specific part 5735 and indicates to solely show the part 5735 in the 3D graphic according to various embodiments. As a result, the specific part 5735 is shown in the 3D graphic by itself without the other parts of the component being displayed as shown in FIG. 57H.
  • Finally, FIG. 57I provides an example where the user has indicated to display axes 5740 in the 3D graphic according to various embodiments. As previously mentioned, the user may display the axes 5740 to assist him or her in rotating the graphic to obtain a better view of a part.
  • Hierarchy Module
  • Turning now to FIG. 58, additional details are provided regarding a process flow for displaying components in media content as identified in a hierarchy according to various embodiments. FIG. 58 is a flow diagram showing a hierarchy module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the hierarchy module may be invoked as a result of a user indicating to view the hierarchy associated with the components shown in media content currently being displayed. Here, the hierarchy refers to the relationships between the components of an item with respect to functional and/or physical breakdown of the components (e.g., breakdown into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like).
  • Therefore, the process flow 5800 begins with the hierarchy module providing the hierarchy for the components shown in the media content currently being displayed in Operation 5810. Here, in particular embodiments, the hierarchy may be provided in a first view pane displayed on a window and the media content (e.g., illustration) may be provided on a second view pane displayed on the window. Accordingly, in particular embodiments, the window may be configured to display the first and second view panes on non-overlapping portions of the window. In addition, each of the components provided in the hierarchy may be associated with a selection mechanism (e.g., a checkbox control) to allow the user to identify which of the components to display in the media content and which of the components not to display.
  • Thus, the hierarchy module determines whether input has been received indicating a selection of a component to display in the media content in Operation 5815. If so, then the hierarchy module causes display of the component in the media content in Operation 5820. Likewise, the hierarchy module determines whether input has been received indicating a selection of a component not to display in the media content in Operation 5825. If so, then the hierarchy module causes the component to be removed from being displayed in the media content in Operation 5830.
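  • A minimal TypeScript sketch of such checkbox-driven visibility is given below, assuming a hypothetical tree representation of the hierarchy; the disclosure does not dictate how the hierarchy is stored.

```typescript
// Hypothetical hierarchy node: each component carries a checked flag that
// controls whether it is displayed in the media content (Operations 5815-5830).
interface HierarchyNode {
  componentId: string;
  label: string;
  checked: boolean;
  children: HierarchyNode[];
}

// Toggle visibility of one component anywhere in the hierarchy.
function setVisibility(node: HierarchyNode, componentId: string, visible: boolean): void {
  if (node.componentId === componentId) node.checked = visible;
  node.children.forEach((child) => setVisibility(child, componentId, visible));
}

// Collect the components that should currently be rendered in the media content.
function visibleComponents(node: HierarchyNode, acc: string[] = []): string[] {
  if (node.checked) acc.push(node.componentId);
  node.children.forEach((child) => visibleComponents(child, acc));
  return acc;
}
```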
  • In particular embodiments, a report may also be provided on those components illustrated (shown) in the media content but not listed (e.g., not found in the hierarchy). In these particular embodiments, some type of selection mechanism (e.g., a button) may be provided that the user can select to view the report. For example, the report may be provided on a window that is displayed as a result of the user indicating he or she would like to view the report. Therefore, the hierarchy module may determine whether input has been received indicating the user would like to view the report in Operation 5835. If so, then the hierarchy module provides the report for display in Operation 5840. Such a report may be useful in identifying content in the technical documentation (e.g., illustrated parts data and/or breakdown) for the item that is deficient with respect to certain components.
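  • The report of Operation 5840 amounts to a set difference between the components illustrated in the media content and those listed in the hierarchy, as in the TypeScript sketch below (identifiers assumed for illustration).

```typescript
// Components shown in the media content but not found in the hierarchy.
function unlistedComponents(shownInMedia: string[], listedInHierarchy: string[]): string[] {
  const listed = new Set(listedInHierarchy);
  return shownInMedia.filter((id) => !listed.has(id));
}
```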
  • The hierarchy module then determines whether input has been received indicating the user would like to exit in Operation 5845. If so, then the hierarchy module causes the window to close and exits. Otherwise, the hierarchy module continues to monitor the user's interactions.
  • FIG. 59A provides an example of a window in which a hierarchy of components 5900 is displayed in a first view pane for the components shown in media content, in this instance a 3D graphic 5910, displayed in a second view pane. In this example, each of the components listed in the hierarchy is provided with a checkbox control 5915 to allow the user to identify which of the components to display in the media content and which of the components not to display in the media content. FIG. 59B provides an example of a report 5920 of components illustrated in the media content but not listed in the hierarchy.
  • Communication Session Module
  • Various embodiments of the IETM provide functionality to allow users to conduct communication sessions between one another within the IETM environment. For instance, a communication session may be a voice call, a video call, a chat session, a text session, and/or the like. Such functionality allows users to converse and interact with each other while in a secure environment facilitated by the IETM in many instances. For example, a user may be performing a maintenance task and may have a question as to a particular step in the task. Here, the communication session functionality provided in various embodiments enables the user to conduct a communication session (e.g., a voice call) and converse with another user who is actively signed into the IETM to discuss the step of the maintenance task. Because both users are signed into the IETM and the IETM is facilitating the session, the conversation between the users is secure.
  • Turning now to FIG. 60, additional details are provided regarding a process flow for providing communication session functionality in an IETM according to various embodiments. FIG. 60 is a flow diagram showing a communication session module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the communications session module may be invoked as a result of a user who is signed into the IETM indicating he or she would like to initiate a communication session with another user who is actively signed into the IETM.
  • The process flow 6000 begins with the communication session module identifying the users who are actively signed into the IETM in Operation 6010. In some embodiments, the users who are identified as active may be based at least in part on the credentials of the user who wants to initiate the communication session. For example, the user may be signed into a particular object (e.g., a particular aircraft) of an item (e.g., a type of aircraft) and therefore, the active users who are identified may be those users who are currently signed into the same object (e.g., the same aircraft). Further, in particular embodiments, other users (e.g., special users) may be identified as well such as the user's supervisor, quality assurance, engineering, and/or the like. Once identified, the communication session module provides the active users (e.g., identifiers for the active users) for display on a window in Operation 6015.
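  • The filtering of callable users in Operation 6010 could take a form similar to the TypeScript sketch below, in which active sessions are matched against the initiating user's object and a list of special roles; the session record shape and role names are assumptions made only for illustration.

```typescript
// Hypothetical record of a user actively signed into the IETM.
interface ActiveSession {
  userId: string;
  objectId: string;   // e.g., the specific aircraft the user is signed into
  roles: string[];
}

// Operation 6010: identify the users the initiator may call, based at least
// in part on the object the initiator is signed into, plus any special users.
function callableUsers(
  sessions: ActiveSession[],
  initiator: ActiveSession,
  specialRoles: string[] = ['supervisor', 'quality-assurance', 'engineering']
): ActiveSession[] {
  return sessions.filter(
    (s) =>
      s.userId !== initiator.userId &&
      (s.objectId === initiator.objectId ||
        s.roles.some((role) => specialRoles.includes(role)))
  );
}
```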
  • At this point, the user may select one or more of the active users and/or special users on the window with whom to initiate a communication session. Here, the window may provide some type of selection mechanism for each user, such as a button, so that the user is selectable. Therefore, the communication session module determines whether input has been received indicating the user has selected a particular user in Operation 6020. In addition, the user may identify the type of session he or she would like to initiate with the particular user (e.g., a voice call). Therefore, the communication session module may determine the type of communication session from the input as well. If the user has identified a particular user (and the type of session), then the communication session module initiates the communication session with the particular user in Operation 6025.
  • In particular embodiments, the communication session is conducted over an IP-based network that the user's computing entity 110 is in communication with to ensure the session is conducted over a secure network. Accordingly, the particular user may accept the communication session within the IETM. Here, the particular user may receive some type of notification in the IETM about the incoming communication session and may be provided with some type of selection mechanism to accept the session.
  • Therefore, the communication session module determines whether input has been received indicating the communication session has been accepted in Operation 6030. If the session has not been timely accepted, then the user who initiated the communication session may decide to drop the session. Therefore, if the session has not been accepted, then the communication session module determines whether input has been received indicating the user who initiated the session has decided to drop the session in Operation 6035. If not, then the communication session module maintains the session and waits for an acceptance.
  • Once the communication session has been accepted, the communication session module determines whether input has been received indicating the user may want to initiate a session with an additional user in Operation 6040. In other words, the communication session module determines whether the user may want to conduct a conference session involving multiple users. If so, then the communication session module returns to Operation 6015 and provides the available users so that the user can select another user to include in the session. Accordingly, the communication session module performs the same operations to initiate a communication session to the newly selected user and bridges the session onto the session with the first selected user when accepted.
  • Once all of the users who have agreed to be a part of the session have accepted, the communication session module facilitates the communication session within the IETM environment and provides a session window for display in Operation 6045. Depending on the embodiment, the session window may provide video if a communication session supporting such is being conducted between the users. In addition, the session window may provide the user with functionality such as the ability to share the user's screen with the other users, enable a webcam, mute and/or unmute a microphone, end the session, record the session, and/or the like. Therefore, the user may then converse and interact with the other users on the communication session via the session window.
  • While the user is conversing and interacting with the other users, the communication session module may determine whether input has been received indicating the user has selected any of the provided functionality. For instance, the communication session module may determine whether the user has decided to share his or her computing entity's screen display in Operation 6050. If so, then the communication session module shares the user's screen with the other users in Operation 6055. Accordingly, the communication session module may determine whether the user wants to use other functionality that is available and if so, invokes such functionality.
  • Finally, the communication session module determines whether input has been received indicating the user wants to end the communication session (e.g., hang up the call) in Operation 6060. If so, then the communication session module ends the communication session in Operation 6065. The communication session module then determines whether input has been received indicating the user wants to close the communication session functionality in Operation 6070. If so, then the communication session module causes the session window to close and exits. It is noted that in some embodiments upon completion of the communication session, the communication session module may save a record of the session in a log within the IETM for reporting and/or tracking purposes.
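  • The session lifecycle described above (initiate, accept or drop, bridge additional users, end, and log) can be summarized by the TypeScript sketch below; the states, method names, and log format are assumptions, and no particular signaling protocol is implied.

```typescript
type SessionState = 'ringing' | 'active' | 'dropped' | 'ended';

class CommunicationSession {
  state: SessionState = 'ringing';
  readonly participants = new Set<string>();

  constructor(readonly initiator: string, callee: string, private log: string[]) {
    this.participants.add(initiator).add(callee);
    this.log.push(`session initiated by ${initiator} to ${callee}`);
  }

  accept(userId: string): void {   // Operation 6030
    if (this.state !== 'ringing') return;
    this.state = 'active';
    this.log.push(`session accepted by ${userId}`);
  }

  drop(): void {                   // Operation 6035: initiator abandons an unanswered session
    if (this.state === 'ringing') this.state = 'dropped';
  }

  bridge(userId: string): void {   // Operation 6040: conference in an additional user
    if (this.state === 'active') this.participants.add(userId);
  }

  end(): void {                    // Operations 6060/6065, with a record kept for the log
    this.state = 'ended';
    this.log.push(`session ended; participants: ${[...this.participants].join(', ')}`);
  }
}
```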
  • FIG. 61A provides an example of a window that provides a selection mechanism (e.g., a button) 6100 to enable a user to access the communication session functionality according to various embodiments. FIG. 61B provides an example of a window 6110 according to various embodiments that is opened as a result of the user selecting the mechanism 6100. In this example, the window 6110 provides a list of active users 6115 and a list of special users 6120 along with a selection mechanism to allow the user to initiate a communication session (e.g., "call") with one of the active users 6115 and/or special users 6120. In this instance, the selection mechanisms for the special users 6120 are unavailable, indicating either that the user who is initiating the session does not have the credentials to initiate a session with any of the special users and/or that none of the special users is actively signed into the IETM.
  • FIG. 61C provides an example of a session window 6125 that is displayed once a communication session is activated according to various embodiments. As shown in FIG. 61C, the session window 6125 includes different functionality the user may invoke while engaged in the communication session. For example, the session window 6125 includes a selection mechanism (e.g., a button) 6130 that the user may select to share his or her screen with the other users on the session. In addition, the session window 6125 provides a selection mechanism (e.g., a button) 6135 to allow the user to end the communication session. Finally, FIG. 61D shows the session window 6125 once the user has shared his or her screen 6140 with the other users on the session.
  • Virtual Caution Panel Module
  • Various embodiments of the IETM provide a virtual caution panel that mimics a caution panel found on an item (e.g., a piece of equipment) such as, for example, an aircraft. Therefore, turning now to FIG. 62, additional details are provided regarding a process flow for addressing warnings and/or cautions provided by a caution panel found on an item according to various embodiments. FIG. 62 is a flow diagram showing a virtual caution panel module for performing such functionality according to various embodiments of the disclosure. In particular embodiments, the virtual caution panel module may be invoked as a result of a user who is signed into the IETM opening the virtual caution panel displayed on a window.
  • Caution panels are often used to warn and/or caution personnel of a problem with the item. Typically, personnel who are working on and/or using the item will reference some manual, often in paper form, that will provide instructions on how to handle the warning and/or caution. However, time may be of the essence when addressing such warnings and/or cautions. For instance, returning to the example of an aircraft, a caution panel is often provided in the cockpit of the aircraft to provide the pilot with warnings and/or cautions. When the panel provides a warning and/or caution, oftentimes the pilot may have a limited amount of time to address the problem before it becomes too late to fix while in flight. This can lead to loss of the aircraft and/or life. Furthermore, many problems can lead to multiple warnings and/or cautions being displayed. Therefore, the pilot may not only have to deal with resolving a warning and/or caution but also a combination of warnings and/or cautions.
  • Accordingly, various embodiments provide a virtual caution panel that can be used to assist the user in addressing warnings and/or cautions provided by such a caution panel found on an item. These embodiments can enable a user to address a warning and/or caution (or combination thereof) in a timely manner that is not typically possible using a conventional manual, even when the manual may be in a digital format. In particular embodiments, the virtual caution panel mimics the actual caution panel found on the item with the same warnings and/or cautions.
  • For example, the caution panel may include a plurality of indicators (e.g., warning lights) for the different warnings and/or cautions that light up. These indicators may provide different levels of warnings and/or cautions, such as different color lights, to represent degrees of urgency. Yellow may represent a caution with respect to the corresponding component, condition, process, and/or the like for an indicator and red may represent a warning that requires more urgency in addressing. Therefore, the user mimics the warnings and/or cautions shown on the actual panel by selecting the same warnings and/or cautions displayed on the virtual panel.
  • The process flow 6200 begins with the virtual caution panel module providing the virtual caution panel for display on a window in Operation 6210. The virtual caution panel module then determines whether input has been received indicating the user has selected any of the warnings and/or cautions displayed on the virtual panel in Operation 6215. Accordingly, in particular embodiments, the virtual caution panel may be configured to allow the user to select different levels (e.g., set different colors) for the individual indicators displayed on the panel as well as select combinations of warnings and/or cautions.
  • If the user has selected one or more warnings and/or cautions on the virtual caution panel, then the virtual caution panel module retrieves a corrective action (e.g., steps to perform to address the one or more cautions and/or warnings) in Operation 6220. Therefore, in various embodiments, the corrective actions to address the different warnings and/or cautions may be stored within the IETM and retrieved by the virtual caution panel module based at least in part on the warnings and/or cautions (and/or combination thereof) identified by the user on the panel. Such retrieval may be much quicker than if the user were to search for the corrective action him or herself in a physical and/or digital manual. Therefore, embodiments of the virtual caution panel can be very beneficial in addressing warnings and/or cautions in a timely manner when required.
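  • One possible way to key stored corrective actions to combinations of warnings and/or cautions is sketched below in TypeScript; the keying scheme, level names, and fallback behavior are illustrative assumptions rather than requirements of the disclosure.

```typescript
type IndicatorLevel = 'caution' | 'warning';

interface IndicatorSelection {
  indicatorId: string;       // e.g., an identifier for a panel light
  level: IndicatorLevel;
}

// Build a stable key for a combination of selected indicators and levels.
function combinationKey(selections: IndicatorSelection[]): string {
  return selections.map((s) => `${s.indicatorId}:${s.level}`).sort().join('|');
}

// Operation 6220: retrieve the stored corrective action for the selected
// combination, falling back to per-indicator actions (warnings first).
function lookupCorrectiveAction(
  actions: Map<string, string>,          // combination key -> corrective action content
  selections: IndicatorSelection[]
): string | undefined {
  const exact = actions.get(combinationKey(selections));
  if (exact) return exact;
  const byUrgency = [...selections].sort(
    (a, b) => Number(b.level === 'warning') - Number(a.level === 'warning')
  );
  for (const selection of byUrgency) {
    const single = actions.get(combinationKey([selection]));
    if (single) return single;
  }
  return undefined;
}
```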
  • Once the virtual caution panel module has retrieved the corrective action, the module provides the corrective action for display to the user in Operation 6225. Here, depending on the embodiment, the corrective action may be displayed on the same window as the virtual caution panel or displayed on a different window. The virtual caution panel module then determines whether input has been received indicating the user wishes to exit the virtual caution panel in Operation 6230. If so, then the virtual caution panel module causes the virtual caution panel to close and exits. Otherwise, the virtual caution panel module continues to provide the virtual caution panel and corrective action if appropriate.
  • FIG. 63A provides an example of a virtual caution panel 6300 according to various embodiments. In this example, an indicator 6310 has been selected on the virtual caution panel 6300 by the user to mimic a caution being displayed by the actual caution panel found on the item. A corrective action 6315 to address the caution may then be provided as shown in FIG. 63B.
  • Article Loading Module
  • Oftentimes entities have various items (e.g., objects for items) such as vehicles that periodically need to be loaded with different articles. For instance, many military entities have both combat and non-combat vehicles that need to be routinely loaded with different equipment. Such vehicles may be used for air, land, and/or water and may include, for example, aircraft, boats, ships, armored fighting vehicles, reconnaissance vehicles, light utility vehicles, engineering vehicles, self-propelled weapons and defense systems, ambulances, and/or the like. Accordingly, when such vehicles are deployed for a mission, the vehicles are required to be carrying certain equipment expected to be used for the mission.
  • For example, aircraft such as fighters and bombers and armored fighting vehicles such as tanks and troop carriers are often required to be carrying certain munitions expected to be used for combat. The loading of these munitions is typically performed by military personnel who receive a list of munitions and then are required to physically load the munitions onto and/or into the vehicle. Many vehicles have multiple positions on the vehicle for holding such munitions. For instance, many aircraft have several positions (e.g., stations) on the body of the aircraft for holding munitions, whether they be types of weapons and/or ammunitions such as missiles, bombs, and/or the like. These positions are often configured so that only certain munitions can be placed at certain positions.
  • In addition, munitions may be required to be loaded/installed on the vehicle using a number of operations (e.g., steps) and in a certain sequence. Therefore, personnel who are responsible for loading the munitions are regularly required to initially put together a workflow that includes a number of different procedures in a sequential order that are to be performed to load the munitions onto the vehicle. The generation of this workflow can oftentimes be very time consuming, as it involves identifying which munitions are to be loaded at which positions, identifying the corresponding procedures for loading the munitions, and then generating the workflow of the procedures in the correct order needed to load the munitions.
  • Therefore, various embodiments provide functionality (e.g., an article loading wizard) that assists personnel in loading different articles onto and/or into an object of an item. The example of loading munitions onto an aircraft is used in discussing this functionality. However, those of ordinary skill in the art can appreciate that the functionality can be used in loading different articles for a number of different types of items. For example, articles other than equipment may be loaded, such as cargo, personnel, perishable goods, livestock, medications, and/or the like. In addition, items other than vehicles may be loaded, such as warehouses, trailers, medical facilities, and/or the like.
  • Turning now to FIG. 64, additional details are provided regarding a process flow for generating a workflow for loading articles onto and/or into an object for an item according to various embodiments. FIG. 64 is a flow diagram showing an article loading module for performing such functionality according to various embodiments of the disclosure. Accordingly, a user may be signed into the IETM for a particular object for an item. For example, the user may be signed into the IETM for a particular aircraft (e.g., fighter T123) found in a military's fleet of aircraft (fleet of jet fighters). In addition, the user may be tasked with loading munitions onto the aircraft and therefore has also signed into the IETM identifying a specific job to be performed. Once signed in, the user may select a mechanism to invoke the article loading module.
  • Therefore, the process flow 6400 begins with the article loading module reading the item the user is currently signed into the IETM to view in Operation 6410. In this instance, the item is a type of jet fighter found in the military's fleet of aircraft. Thus, the article loading module provides media content (e.g., a digital model) of the item for display on a window in Operation 6415. In particular embodiments, the media content (e.g., the digital model) displays the different loading positions (e.g., stations) found on the item as selectable (e.g., associated with some type of selection mechanism). Therefore, the user selects a particular loading position by using some type of control, such as a mouse, to click on, right-click on, or hover over the position, or by using a stylus or finger to select a position for the item.
  • In turn, the article loading module determines whether the user has selected a position in Operation 6420. If so, then the article loading module retrieves the articles that can be loaded at the position and provides the articles for display on the window in Operations 6425 and 6430. For example, the articles may be displayed as a list in a dropdown menu control that is configured to allow the user to select one or more of the articles for loading at the particular position. Note that in particular embodiments, only those articles that can be loaded at the particular position are retrieved and displayed to the user. Such a configuration can ensure that an article is not loaded by personnel at an inappropriate position.
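  • The position-based filtering described in Operations 6425 and 6430 could be implemented along the lines of the TypeScript sketch below, with configuration data (assumed here) listing which articles are permitted at each loading position.

```typescript
// Hypothetical configuration: which articles may be loaded at each position.
interface LoadingPosition {
  positionId: string;           // e.g., "station-1"
  allowedArticleIds: string[];
}

interface Article {
  articleId: string;
  name: string;
}

// Operations 6425/6430: offer the user only the articles that can be loaded
// at the selected position, which helps avoid loading at an inappropriate position.
function articlesForPosition(
  positions: LoadingPosition[],
  articles: Article[],
  positionId: string
): Article[] {
  const position = positions.find((p) => p.positionId === positionId);
  if (!position) return [];
  const allowed = new Set(position.allowedArticleIds);
  return articles.filter((a) => allowed.has(a.articleId));
}
```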
  • Therefore, the article loading module determines whether input has been received indicating the user has selected one or more articles for the position in Operation 6435. If the user has selected one or more articles, then in particular embodiments, the article loading module provides media content (e.g., illustration(s) and/or image(s)) of the selected articles for display for the user to view in Operation 6440. Such an operation may be carried out in these embodiments so that the user can see what he or she has selected to load at the position. This may help the user with physically selecting and loading the correct articles in the field. Accordingly, the media content may be displayed on a separate window that is superimposed over a portion of the window displaying the media content (e.g., the digital model) of the item or the media content may be displayed on one or more view panes along with the media content of the item on a separate view pane. In addition, the article loading module records the article(s) that are to be loaded at the position in Operation 6445.
  • Returning to Operation 6420, if the user has not selected a particular position for the item, then the article loading module determines whether input has been received indicating the user's desire to generate a workflow for loading the object for the item in Operation 6450. The user may select some type of mechanism (e.g., a button) displayed on the window after the user has identified the article(s) to be loaded at each of the positions for the item. If the user has indicated to generate the workflow, then the article loading module generates the workflow for loading the selected article(s) onto and/or into the object for the item in Operation 6455.
  • As previously noted, the workflow may include one or more procedures to be performed by personnel in loading the article(s) onto and/or into the object for the item. Here, the workflow may identify the sequential order in which the procedures are to be performed. For instance, returning to the example, the loading of munitions onto the aircraft may be required to be carried out in a particular order to ensure the safety of the military personnel who are physically loading the munitions onto the aircraft. For example, certain ammunition may need to be loaded and tested before loading another ammunition to ensure the ammunition is properly loaded and stabilized so that it will not cause other ammunition loaded onto the aircraft to go off.
  • Therefore, in particular embodiments, the article loading module is configured to dynamically generate the workflow based at least in part on the articles selected by the user to be loaded at each position. In some instances, a significant number of combinations of articles can be potentially loaded at the different positions. Thus, an advantage provided by the article loading module in some embodiments is the ability of the module to dynamically generate a workflow based at least in part on a significant number of potential combinations that places the loading of the articles in a correct sequence to ensure they are loaded safely.
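  • The dynamic generation of the workflow could, for example, collect the procedures implied by each position/article selection and order them by a sequencing rule, as in the TypeScript sketch below; the sequencePriority field stands in for whatever safety-driven ordering logic a given deployment requires and is purely an assumption.

```typescript
// Hypothetical procedure metadata used only for this illustration.
interface Procedure {
  procedureId: string;
  title: string;
  sequencePriority: number;    // lower values are performed earlier
}

interface PositionSelection {
  positionId: string;
  articleIds: string[];
}

// Operation 6455: build the ordered workflow from the recorded selections.
function generateWorkflow(
  selections: PositionSelection[],
  proceduresFor: (positionId: string, articleId: string) => Procedure[]
): Procedure[] {
  const collected = selections.flatMap((selection) =>
    selection.articleIds.flatMap((articleId) => proceduresFor(selection.positionId, articleId))
  );
  // De-duplicate procedures shared across selections, then order them.
  const unique = new Map<string, Procedure>();
  for (const procedure of collected) unique.set(procedure.procedureId, procedure);
  return [...unique.values()].sort((a, b) => a.sequencePriority - b.sequencePriority);
}
```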
  • Once the article loading module has generated the workflow, the module provides the workflow for display in Operation 6460. For instance, in particular embodiments, the article loading module may provide a digital workflow to be displayed in the form of a table of contents that lists the different procedures that make up the workflow in the order in which they are to be performed. Here, each of the different procedures may be selectable. Therefore, the user may then select the procedures, one-by-one, in the order in which they are found in the table of contents to view the operations that need to be performed for the procedures in loading the articles onto and/or into the object for the item. As discussed in further detail herein, various functionality may be implemented in embodiments to ensure the procedures are performed in the correct sequence as displayed in the digital workflow.
  • At this point, the article loading module may determine whether input has been received indicating the user would like to exit in Operation 6465. For example, the user may be generating a workflow for loading the object of the item at a later time and therefore, the user may not be ready to start the actual loading of the object. Here, the article loading module may be configured to save the workflow so that it may be used at the later time.
  • FIG. 65A provides an example of a window displaying a digital model of an aircraft 6500 to be loaded with articles according to various embodiments. In this example, the digital model of the aircraft 6500 displays the various positions (e.g., stations) at which articles can be loaded. Accordingly, the various stations are selectable (e.g., displayed as hyperlinks) so that the user may select each station, such as station 1 6510, to be provided a list (e.g., a dropdown menu control) of the different articles that may be loaded at the station. Once the user has selected the various articles to be loaded at the different stations, a digital workflow in the form of a table of contents 6515 may be generated with the different procedures to be performed in loading the aircraft in the order in which they are to be performed as shown in FIG. 65B. A discussion is now provided with respect to using the digital workflow at a time when the article(s) are actually being loaded onto and/or into the object for the item.
  • Loading Workflow Module
  • Turning now to FIG. 66, additional details are provided regarding a process flow for managing a workflow for loading articles onto and/or into an object for an item according to various embodiments. FIG. 66 is a flow diagram showing a loading workflow module for performing such functionality according to various embodiments of the disclosure. Here, a digital workflow may be displayed on a window in the form of a table of contents listing the procedures to be performed in loading the articles onto and/or into the object for the item. As noted, the procedures are provided in the table of contents in particular embodiments in the order in which they are to be performed in loading the object. Accordingly, each of the procedures found in the table of contents may be selectable so that the user selects the procedures one at a time in the sequence provided to view the operations to perform for the selected procedure to load the articles onto and/or into the object for the item.
  • Therefore, the process flow 6600 begins with the loading workflow module determining whether input has been received indicating the user has selected a procedure in the table of contents in Operation 6610. If so, then the loading workflow module determines whether the selected procedure is the next procedure to be performed for the workflow in Operation 6615. Therefore, in particular embodiments, the loading workflow module is configured to determine whether the procedure(s) found in the workflow listed before the selected procedure have been performed. As further discussed below, the loading workflow module marks the procedures that have been completed in some embodiments. Therefore, the loading workflow module is able to determine whether each of the procedures found in the workflow before the currently selected procedure has been completed.
  • If not every procedure in the workflow before the currently selected procedure has been completed, then the loading workflow module provides an error to the user in Operation 6620. For example, the loading workflow module may provide an error message for displaying on a window informing the user that the selected procedure is not the next procedure to be performed in the workflow. In addition, the loading workflow module may be configured in some embodiments so that the operations for the selected procedure cannot be displayed.
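  • The sequence check of Operations 6615 and 6620 reduces to verifying that every procedure listed before the selected one has been completed, as in the TypeScript sketch below (the workflow entry shape is assumed for illustration).

```typescript
// Hypothetical per-procedure state kept for the digital workflow.
interface WorkflowEntry {
  procedureId: string;
  completed: boolean;
}

// Operation 6615: a procedure may be opened only if all earlier procedures
// in the workflow have been marked completed; otherwise an error is shown.
function canOpenProcedure(workflow: WorkflowEntry[], procedureId: string): boolean {
  const index = workflow.findIndex((entry) => entry.procedureId === procedureId);
  if (index < 0) return false;
  return workflow.slice(0, index).every((entry) => entry.completed);
}
```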
  • However, if the selected procedure is the next procedure in the sequence, then the loading workflow module provides the procedure for display to the user in Operation 6625. For instance, in particular embodiments, the loading workflow module may retrieve the data for the procedure from the technical documentation for the item and provide the data for the procedure to display on a new window for the user. Depending on the embodiment, the procedure may be displayed on a pane provided on the window with the workflow (with the workflow displayed on a second pane) or the procedure may be provided on a separate window from the window with the workflow. As a result, the user is then able to read the instructions (e.g., different operations) found in the procedure and perform the instructions accordingly.
  • For instance, in the example involving the loading of munitions onto the jet fighter, the different procedures found in the workflow may involve procedures that provide instructions for loading a particular munition at a particular station of the aircraft, as well as procedures for testing a munition once it has been loaded at a particular station. Therefore, the instructions for the different procedures may provide a sequence of operations (e.g., steps) to be performed by the military personnel who are loading munitions onto the jet fighter.
  • Accordingly, the loading workflow module may determine whether input has been received that the end of the procedure currently being displayed has been reached in Operation 6630, indicating the user has completed performing the procedure. Here, the loading workflow module may be configured to determine the end of the procedure has been reached by receiving input indicating the user has performed some action such as, for example, selecting a mechanism such as a button displayed on the window and/or scrolling to the bottom of the procedure displayed on the window.
  • If the end of the procedure has been reached, then the loading workflow module in various embodiments determines whether each of the operations found in the procedure has been acknowledged in Operation 6635. For instance, in some embodiments, each operation (e.g., step) found in the procedure may be associated with a selection mechanism such as a checkbox control that the user selects to acknowledge that he or she has completed the particular operation in the procedure. Therefore, the loading workflow module may determine whether input has been received that the selection mechanism for each operation has been selected by the user. In addition, in some embodiments, the loading workflow module may be configured to also determine whether the user has acknowledged each of the previous operations in the procedure whenever the user acknowledges a particular operation in the procedure to ensure the operations are performed in order.
  • If the user has not acknowledged all of the operations in the procedure, then the loading workflow module causes display of an error to the user in Operation 6640. Again, the loading workflow module may provide an error message for display informing the user that all of the operations in the procedure have not been acknowledged as being performed. However, if all of the operations have been acknowledged, then the loading workflow module marks the procedure as completed in Operation 6645. At this point, the loading workflow module returns to the window displaying the table of contents for the workflow if need be in Operation 6650. Accordingly, as a result of the user completing the procedure, the loading workflow module may cause the procedure to be displayed as being completed in the digital workflow (e.g., the table of contents). For example, in particular embodiments, the procedure may now be displayed along with some type of indicator (e.g., in a particular font, in a particular color, with a symbol such as a plus sign, as no longer selectable, and/or the like) to demonstrate the procedure has been completed. The user may then select the next procedure found in the workflow.
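  • The acknowledgment checks of Operations 6635 through 6645 can likewise be expressed as simple predicates over per-operation state, as in the TypeScript sketch below; the field names are assumptions made for illustration.

```typescript
// Hypothetical per-operation state for the procedure being displayed.
interface OperationState {
  operationId: string;
  acknowledged: boolean;
}

// In some embodiments an operation may only be acknowledged once all earlier
// operations in the procedure have been acknowledged.
function canAcknowledge(operations: OperationState[], index: number): boolean {
  return operations.slice(0, index).every((op) => op.acknowledged);
}

// Operation 6635: the procedure may be marked completed (Operation 6645) only
// when every operation has been acknowledged.
function allAcknowledged(operations: OperationState[]): boolean {
  return operations.every((op) => op.acknowledged);
}
```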
  • Once the user has performed all of the procedures for the workflow, the user may decide to exit the window displaying the table of contents and select a mechanism (e.g., a button) displayed on the window to do so. As a result, the loading workflow module may determine input has been received indicating the user would like to exit in Operation 6655. The loading workflow module then determines whether the workflow for loading the articles onto and/or into the object for the item has been completed in Operation 6660. That is to say, the loading workflow module determines whether each of the procedures found in the workflow has been completed.
  • Accordingly, if the workflow has not been completed, then the loading workflow module in particular embodiments provides an error (e.g., an error message for displaying on a window) to the user indicating the workflow has not been completed in Operation 6665. The loading workflow module may then determine whether input has been received indicating the user still wishes to exit the window displaying the digital workflow in Operation 6670. For example, the personnel who are loading the munitions onto the jet fighter may be taking a lunch break. Therefore, the user may wish to exit the window for security reasons while away from the loading area and eating lunch. He or she then plans to resume with the workflow once he or she has returned from lunch.
  • If this is the case, then the loading workflow module in particular embodiments records one or more images of the object in Operation 6675 to document the progress of loading the articles that has been completed to that point. For example, imaging devices may be installed at different locations in the loading area to allow images to be taken of the different loading stations. In addition, the loading workflow module records the progress of the workflow in a log in Operation 6680. Therefore, in the example, the user can retrieve the incomplete workflow upon returning from lunch and continue with the remainder of the workflow for loading the munitions onto the jet fighters. Once the user has completed the workflow, the loading workflow module again records image(s) of the object to document the loading of the articles and records the completion of the workflow in the log.
  • Recordation of the images and progress of the workflows in various embodiments can allow for tracking of the workflows being performed, as well as allow for quality control measures to be put into place to evaluate different personnel on performing loading tasks. For example, recordation of the images of the jet fighter loaded with the required munitions may allow for the pilot to view the images prior to takeoff to ensure the munitions have been properly loaded onto the aircraft. This can help to not only ensure success of the mission but can also ensure the safety of the pilot and any other flight crew member on the aircraft.
  • Remote Device Integration Module
  • As previously discussed, users are oftentimes working in environments where network connectivity (e.g., wireless network) for their computing entity 110 is unavailable. For instance, maintenance personnel may be working out in the field performing maintenance on an object (e.g., an aircraft) where network connectivity is unavailable. In these instances, the maintenance personnel may be making use of the IETM to view one or more maintenance procedures they are to perform on the object. However, one of the maintenance personnel may want to perform some type of functionality provided by embodiments of the IETM that may require connectivity. For example, the maintenance personnel may want to order a part to replace a part taken from inventory used in performing the maintenance on the object. As previously noted, various embodiments can facilitate the personnel's ordering of the part by generating a graphical code that can then be scanned by the personnel using a remote device, such as his or her mobile device, with some type of connectivity such as cellular.
  • However, security is also often a concern with allowing such functionality since the functionality is being carried out over a network that is not within the IETM environment. Therefore, various embodiments allow for such functionality to be carried out over a network connected to a remote device while still maintaining a secure environment. Here, a remote device is a device that is not in communication with the user's computing entity 110 being used to access the IETM. For example, the remote device may be the user's mobile device (e.g., smartphone), tablet, and/or the like with connectivity to a network such as a cellular network, wireless network, and/or the like. Specifically, in particular embodiments, the user (e.g., maintenance personnel) who is signed into the IETM may have a software application (e.g., an app) installed on his or her remote device that is required to be used to enable the functionality to be performed in the IETM. This software application may be limited in its distribution so that it is only installed on devices belonging to valid users.
  • Turning now to FIG. 67, additional details are provided regarding a process flow for securely integrating the use of a network connected to a remote device with the IETM according to various embodiments. FIG. 67 is a flow diagram showing a remote device integration module for performing such functionality according to various embodiments of the disclosure. Here, the user may be signed into the IETM and decide to perform some functionality within the IETM that requires connectivity such as, for example, submitting a form filled out while signed into the IETM to a backend system. Accordingly, a selection mechanism (e.g., a button) may be provided on the form that the user selects to submit the form and, as a result, the remote device integration module is invoked in various embodiments.
  • Therefore, the process flow 6700 begins with the remote device integration module generating and providing a security graphical code for displaying in Operations 6710 and 6715. For instance, depending on the embodiment, the security graphical code may be a barcode, a quick response code, a one-dimensional code, a universal product code, a data matrix code, and/or the like. In addition, in particular embodiments, the remote device integration module may generate the security graphical code to contain the user's credentials used in signing into the IETM. Accordingly, the security graphical code may be displayed on a window so that the user can scan the code using some type of code reader installed on the user's mobile device.
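  • For illustration only, the following sketch generates such a security graphical code as a QR code using the widely available third-party qrcode package; the payload fields are assumptions and are not prescribed by the disclosure:

    import json
    import qrcode  # third-party package (pip install "qrcode[pil]")

    def make_security_code(username: str, session_token: str,
                           out_path: str = "security_code.png") -> str:
        # Encode the signed-in user's credentials so the remote device can verify them.
        payload = json.dumps({"type": "ietm-security", "user": username, "token": session_token})
        qrcode.make(payload).save(out_path)   # image is then displayed in a window for scanning
        return out_path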
  • For example, in some instances, the code reader may be any one of many commercially available graphical code readers that do not necessarily include any type of security features. In other instances, the software application may be configured so that it can initially be used only to scan the security graphical code, with other functionality remaining unavailable within the application. Such a configuration can provide security features within the software application with respect to allowing the user to perform certain functionality using the software application while not allowing the user to perform other functionality. In addition, the software application may be configured to require the user to provide credentials (e.g., a username and/or password) to open the application. Therefore, in particular embodiments, various functionality provided by the software application residing on the user's remote device may become available as a result of the user scanning the security graphical code displayed in the window.
  • The remote device integration module then determines whether input has been received indicating to generate a graphical code for the form the user wishes to submit in Operation 6720. For instance, the remote device integration module may determine that the security graphical code has been scanned by the user as a result of the user acknowledging he or she has scanned the code. For example, the window displaying the security graphical code may provide a selection mechanism such as a button that the user can select to close the window with the code. Accordingly, the remote device integration module may receive input indicating the window with the security graphical code has been closed and as a result, generate and provide the graphical code for the form for display in Operations 6725 and 6730.
  • Again, the remote device integration module may provide the graphical code for the form to display on a window so that the user can now use his or her mobile device to scan the code. Again, depending on the embodiment, the graphical code may be a quick response code, a one-dimensional graphical code, a universal product code, a data matrix graphical code, and/or the like. The graphical code may include information provided by the user on the form such as the information required to order the part. In addition, the graphical code may include information such as the user's credentials, an identifier for the object and/or item, an identifier for a location for the user, and/or the like. Further, the graphical code may be configured so that it can only be read by the software application residing on the user's remote device.
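  • Similarly, a hypothetical sketch of packaging the submitted form into a graphical code might look as follows; the field names (form fields, object identifier, location) are illustrative assumptions rather than details taken from the disclosure:

    import json
    import qrcode

    def make_form_code(form_fields: dict, username: str, object_id: str,
                       location: str, out_path: str = "form_code.png") -> str:
        payload = json.dumps({
            "type": "ietm-form",
            "user": username,        # the user's credentials/identifier
            "object": object_id,     # identifier for the object and/or item
            "location": location,    # identifier for a location for the user
            "form": form_fields,     # e.g., the information required to order the part
        })
        qrcode.make(payload).save(out_path)
        return out_path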
  • At this point, the remote device integration module determines whether to exit in Operation 6735. For example, the user may have scanned the graphical code for the form and then selected a mechanism such as a button provided on the window displaying the code to close the window. As a result, the remote device integration module may receive input indicating the window has been closed. If that is the case, then the remote device integration module exits.
  • It is noted that in some embodiments the remote device integration module may be invoked at different times other than when specific functionality is to be carried out that requires connectivity. For instance, in particular embodiments, the user may invoke the remote device integration module upon signing into the IETM to establish that the software application residing on the user's remote device can then be used in facilitating any functionality requiring connectivity while the user is signed into the IETM. Therefore, in these particular embodiments, the user may not be required to scan a security graphical code each time he or she wishes to use functionality provided by the IETM that requires connectivity. Thus, the process flow 6700 shown in FIG. 67 may only involve providing the security graphical code without necessarily providing a graphical code to facilitate other functionality.
  • Virtual Network Module
  • Virtual private networks (VPNs) are often used to allow users to send and share data over networks that are not necessarily secure (e.g., public networks) as though they are connected to a secure private network. Accordingly, applications running over a VPN can often benefit from the functionality, security, and management provided in a private network. Therefore, various embodiments provide a virtual network in which users can operate within while signed into the IETM.
  • Turning now to FIG. 68, additional details are provided regarding a process flow for providing a virtual network within the IETM environment according to various embodiments. FIG. 68 is a flow diagram showing a virtual network module for performing such functionality according to various embodiments of the disclosure. Depending on the circumstances, a user may have already signed into the IETM and decides to join a virtual network provided through the IETM or the user may join a virtual network at the time when he or she signs into the IETM.
  • In particular embodiments, the user may have a software application installed on a remote device, such as his or her mobile device, that provides a graphical code for the user to scan using his or her computing entity 110 (e.g., a webcam on his or her computing entity 110) being employed to view the IETM. Here, the graphical code may be provided in various forms such as a barcode, a quick response (QR) code, a one-dimensional code, a universal product code, a data matrix code, and/or the like. While in other embodiments, a graphical code may be provided on an object that is scanned by the user using his or her computing entity 110. For example, the user may be maintenance personnel who is working on a particular aircraft found in an airline's fleet, and the graphical code may be physically displayed on a component of the aircraft such as its landing gear.
  • Therefore, the user invokes the virtual network module to scan the graphical code and the process flow 6800 begins with the virtual network module scanning the graphical code in Operation 6810. The virtual network module then determines whether the graphical code that has been scanned is valid in Operation 6815. Accordingly, the virtual network module is configured in various embodiments to interrogate the information found in the code to determine whether the code is associated with a valid user and/or object.
  • For example, the graphical code that was scanned may have been provided by a software application installed on the user's mobile device. Here, the user may have signed into the application and generated the code using functionality provided by the application. Therefore, the information provided in the code may identify the user (e.g., provide credentials for the user) and the virtual network module may determine whether the credentials provided for the user in the graphical code are valid. While in another example, the graphical code that was scanned may have been provided on an object (e.g., aircraft) and the information provided in the code may identify the object. Therefore, the virtual network module may determine whether the object identified in the code is valid (e.g., is scheduled to have maintenance performed on the object).
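  • A minimal sketch of this validity check follows, assuming the decoded code carries either user credentials or an object identifier; the lookup structures are stand-ins for whatever backend data the IETM actually maintains:

    import json

    def is_code_valid(decoded_payload: str, valid_users: dict, scheduled_objects: set) -> bool:
        try:
            data = json.loads(decoded_payload)
        except json.JSONDecodeError:
            return False
        if "user" in data:
            # Code generated by the software application on the user's mobile device.
            return valid_users.get(data["user"]) == data.get("token")
        if "object" in data:
            # Code physically displayed on the object (e.g., the aircraft's landing gear).
            return data["object"] in scheduled_objects
        return False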
  • If the virtual network module determines the graphical code is invalid, then the virtual network module causes an error message to be displayed to the user in Operation 6820. For instance, in particular embodiments, the virtual network module may provide an error message via a window informing the user that the graphical code is invalid. The virtual network module then determines whether input has been received indicating the user would like to exit or scan another graphical code in Operation 6825. For example, the window displaying the error message may provide a first selection mechanism (e.g., a first button) to exit and a second selection mechanism (e.g., a second button) to scan another code. If the user indicates he or she would like to scan another code, then the virtual network module returns to Operation 6810.
  • However, if the graphical code is valid, then the virtual network module in particular embodiments may provide one or more objects identifying the various virtual networks available to the user in Operation 6830. This particular operation may be carried out when the graphical code scanned by the user provides the user's credentials. Here, for example, the virtual network module may identify the objects the user is currently authorized to work on. For instance, the user may be maintenance personnel who is scheduled to perform maintenance on two particular aircraft found in an airline's fleet. Therefore, in this instance, the virtual network module may identify the two aircraft as available to the user.
  • Accordingly, in various embodiments, a virtual network is configured for each of the objects so that the user's selection of a particular object identifies which virtual network supported by the IETM the user is to join while signed into the IETM. In addition, the selection of an object may also identify an instance for the IETM. That is to say, the selection of the object (and corresponding virtual network) may identify what technical documentation to make available to the user while he or she is signed into the IETM, as well as identify any information found within the IETM for the particular object such as the maintenance jobs to be performed on the object.
  • Therefore, the virtual network module determines whether input has been received indicating the user has selected a particular object in Operation 6835. If so, then the virtual network module joins the virtual network for the object in Operation 6840. Accordingly, if the graphical code scanned by the user includes information that identifies the object, then the virtual network module may automatically join the corresponding virtual network without the user having to select the object. This may also be true if only a single object is associated with the user.
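  • One way to picture the selection logic just described is the following hypothetical helper, which returns the object whose virtual network should be joined (automatically when the scanned code identifies the object or when only a single object is associated with the user); the signature and names are assumptions for illustration:

    from typing import List, Optional

    def resolve_virtual_network(authorized_objects: List[str],
                                scanned_object: Optional[str],
                                selected_object: Optional[str]) -> Optional[str]:
        if scanned_object and scanned_object in authorized_objects:
            return scanned_object              # auto-join (cf. Operation 6840)
        if len(authorized_objects) == 1:
            return authorized_objects[0]       # only one object, so no selection is needed
        if selected_object in authorized_objects:
            return selected_object             # user picked from the displayed objects
        return None                            # wait for a valid selection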
  • The user may then be provided with specific functionality as a result of joining the virtual network. In addition, the user may interact directly with other users who are signed into the IETM and are on the same virtual network. In some instances, specific functionality may be associated with the corresponding object.
  • For example, many entities establish a lockout program for maintenance. A lockout program often involves “locking out” certain operations, processes, functions, and/or the like for an object that may be unsafe to perform while certain maintenance is being carried out on the object. For instance, the power supply for a particular component may be shut off while maintenance is being performed on the component. Here, some type of warning (e.g., a lockout tag) may be placed on the component and/or the power supply indicating that it is unsafe to turn back on the power so that personnel who are not performing the maintenance on the component do not inadvertently restore power to the component while the maintenance is being performed.
  • Therefore, in various embodiments, the virtual network module may invoke lockout functionality for the object in Operation 6845 that broadcasts warnings to all the users who are on the virtual network for the object. In some instances, such functionality may require the users on the virtual network for the object to acknowledge the warnings, as well as track which users have or have not acknowledged the warnings. Those of ordinary skill in the art can envision other object-specific functionality that may be invoked in light of this disclosure.
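  • A rough sketch of such lockout functionality is shown below, with the network plumbing abstracted into a send callable; all names here are assumptions made purely for illustration:

    from typing import Callable, Iterable, Set

    class LockoutWarning:
        def __init__(self, message: str, users_on_network: Iterable[str]) -> None:
            self.message = message
            self.pending: Set[str] = set(users_on_network)  # users yet to acknowledge
            self.acknowledged: Set[str] = set()

        def broadcast(self, send: Callable[[str, str], None]) -> None:
            # Push the warning to every user on the object's virtual network.
            for user in self.pending:
                send(user, self.message)

        def acknowledge(self, user: str) -> None:
            if user in self.pending:
                self.pending.discard(user)
                self.acknowledged.add(user)

        def outstanding(self) -> Set[str]:
            # Users who have not yet acknowledged the warning can be tracked here.
            return set(self.pending)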
  • In addition, some of the specific functionality may be associated with the user. For example, the user may be signed into the IETM and using the technical documentation to perform a specific role with respect to the object. For instance, the user may be maintenance personnel, engineering personnel, operations personnel, and/or the like. In many instances, the user may have one or more tasks (e.g., jobs) that the user is expected to perform with respect to the object while signed into the IETM. Therefore, the virtual network module in particular embodiments may identify, assign, and/or allocate one or more tasks (e.g., jobs) to the user to perform with respect to the object in Operation 6850. Those of ordinary skill in the art can envision other user-specific functionality that may be invoked in light of this disclosure.
  • It is noted that the virtual network may be provided over a variety of different types of networks such as IP-based and/or cellular depending on the embodiment. In addition, in particular embodiments, the virtual network may be facilitated through the software application installed on the user's remote device. In these particular embodiments, the user may sign into the software application and/or the user may scan a graphical code displayed via the IETM or found on an object using the software application to display one or more available virtual networks for objects or to automatically connect to a virtual network for an object through the software application. Accordingly, the software application can identify the user and provide what virtual networks are available to the user. In turn, the user can select one of the available virtual networks and connect to the network on his or her mobile device. As a result, the same functionality (e.g., object-specific functionality and/or user-specific functionality) described above may be provided through the software application installed on the user's remote device. That is to say, the software application may be configured to perform similar operations to those performed by the virtual network module described above in various embodiments.
  • Import Module
  • The technical documentation associated with an item (e.g., the dataset that includes the textual information, corresponding media content, and other data that make up the technical documentation for the item) is typically stored and/or provided in accordance with S1000D standards. For example, data modules are normally provided that include header and/or preface data in accordance with S1000D standards. S1000D standards require a document to be broken down into individual data modules that are typically identified via XML and/or SGML tags, labels, and/or metadata and that are organized into a hierarchical XML and/or SGML structure. In various embodiments, the XML and/or SGML files and/or data stored therein may be converted to JSON formatted data and/or files. Accordingly, in these embodiments, the content found in the JSON formatted data and/or files provides the technical documentation for the item.
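  • For illustration, a simplified conversion of an XML data module into JSON might proceed as sketched below; the element-to-dictionary mapping is deliberately naive (it ignores namespaces and mixed content) and is not the actual conversion used in any embodiment:

    import json
    import xml.etree.ElementTree as ET

    def element_to_dict(elem: ET.Element) -> dict:
        node = dict(elem.attrib)
        children = list(elem)
        if children:
            for child in children:
                node.setdefault(child.tag, []).append(element_to_dict(child))
        elif elem.text and elem.text.strip():
            node["text"] = elem.text.strip()
        return node

    def data_module_to_json(xml_path: str, json_path: str) -> None:
        # Parse one data module and store it as JSON for use with the IETM.
        root = ET.parse(xml_path).getroot()
        with open(json_path, "w", encoding="utf-8") as out:
            json.dump({root.tag: element_to_dict(root)}, out, indent=2)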
  • However, instances may occur in which an entity may have documentation in formats that are not in accordance with S1000D standards. For example, many entities have technical manuals, instructions, orders, and/or the like for various items in PDF files and/or SGML files that do not adhere to S1000D standards. Therefore, these entities are oftentimes required to use systems, software, applications, and/or the like other than an IETM to view such documentation since most conventional IETMs require the technical documentation to adhere to S1000D standards. This can lead to the entities having to maintain multiple components (e.g., systems, software, applications, and/or the like) to view all of the technical documentation associated with a particular item. In addition, users who are viewing/using the documentation are then required to have the multiple components available to them at any given time so that they have access to any of the documentation as needed.
  • Therefore, various embodiments are configured to allow the import of source data that does not adhere to S1000D standards into the IETM. Accordingly, such embodiments allow users to view technical documentation in the IETM from data sources other than those that adhere to S1000D standards. As a result, users can view and use the complete technical documentation for an item in many instances using a single instrument (the IETM). In addition, these embodiments eliminate the need, in many instances, to convert source data to S1000D standards before importing it into the IETM.
  • Turning now to FIG. 69, additional details are provided regarding a process flow for importing data for the technical documentation for an item into the IETM according to various embodiments. FIG. 69 is a flow diagram showing an import module for performing such functionality according to various embodiments of the disclosure. Depending on the circumstances, the data (e.g., dataset) may be provided in different formats and adhere to different standards. For instance, the data may be provided in XML and/or SGML files in accordance with S1000D standards. However, the data may also be provided in XML, SGML, PDF files and/or the like that are not in accordance with S1000D standards. In some instances, the data may include a combination of both types of files.
  • Therefore, the process flow 6900 begins with the import module receiving the data to import in Operation 6910. Here, the data may be received in any number of different formats. For example, the data may be a dataset for a publication of the technical documentation for an item according to S1000D standards. While in another instance, the data may be one or more files having content (e.g., manual) that make up the technical documentation for the item in a file format such as PDF and/or SGML.
  • The import module then determines whether the data is provided in accordance with S1000D standards in Operation 6915. For instance, in particular embodiments, the import module may make such a determination based at least in part on whether the data is provided as XML and/or SGML files that conform to data modules found in a dataset adhering to S1000D standards. If that is the case, then the import module selects one of the data modules in Operation 6920 and converts the data module to JSON format in Operation 6925. The import module may then store the converted data module for use with the IETM. At this point, the import module determines whether the data includes another data module in Operation 6930. If so, then the import module returns to Operation 6920, selects the next data module found in the data, and performs the operations just described for the newly selected data module.
  • However, if the data is not provided in accordance with S1000D standards, then the import module selects a file found in the data in Operation 6935. As previously mentioned, the file may be provided in any number of different formats such as PDF, SGML, DOC, RTF, TXT, WPS, and/or the like. Therefore, the import module converts the file to JSON format and stores the converted file in Operation 6940. In some embodiments, the import module may be configured to convert the file to JSON format in multiple steps. For example, in particular embodiments, if the original file is in TXT format, then the import module may first convert the file to SGML format and then convert the file to JSON format. At this point, the import module determines whether the data includes another file in Operation 6945. If so, then the import module returns to Operation 6935, selects the next file found in the data, and performs the operations just described for the newly selected file. Once the import module has processed all of the files found in the data, the import module exits.
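  • The branching just described can be summarized with a hypothetical dispatch routine; the individual converters are passed in as callables because the disclosure does not specify how each format is converted, and the TXT-to-SGML-to-JSON path mirrors the multi-step example above:

    from pathlib import Path
    from typing import Callable, Dict

    def import_source(path: str, is_s1000d: bool,
                      converters: Dict[str, Callable[[str], str]]) -> str:
        # Returns the path of the stored JSON produced for one data module or file.
        if is_s1000d:
            return converters["s1000d"](path)              # cf. Operations 6920 and 6925
        suffix = Path(path).suffix.lower().lstrip(".")
        if suffix == "txt":
            # Multi-step conversion: TXT -> SGML -> JSON.
            return converters["sgml"](converters["txt_to_sgml"](path))
        if suffix in {"pdf", "sgml", "doc", "rtf", "wps"}:
            return converters[suffix](path)                # cf. Operation 6940
        raise ValueError(f"Unsupported source format: .{suffix}")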
  • It should be noted that the data received to be imported into the IETM in some instances may include both content in accordance with S1000D standards (e.g., include data modules) and content not in accordance with S1000D standards (e.g., include files in PDF format). Therefore, in these particular instances, the process flow 6900 may involve looking at individual components of the data to determine how to process each of the individual components.
  • Accordingly, as a result of importing data from different sources both adhering and not adhering to S1000D standards and converting such data to a common format (e.g., JSON), the data from the different sources (e.g., technical documentation from the different sources) can be used interchangeably and/or simultaneously in the IETM in various embodiments. In addition, various embodiments are able to provide the same functionality, security, features, and performance for all of the technical documentation for an item in the IETM regardless of the source of the technical documentation. Therefore, as a result, functionality that would not normally be available for some technical documentation can now be provided for the documentation in the IETM.
  • For instance, a technical manual may be sourced in one or more PDF files. Therefore, a user would typically make use of a PDF reader (e.g., application) to view the technical manual. A conventional PDF reader does not furnish the functionality implemented in various embodiments described herein. For example, a conventional PDF reader does not furnish the preview capabilities described herein provided by various embodiments. However, as a result of importing the PDF files for the technical manual as described herein, the preview capabilities may be implemented for the technical manual in various embodiments. That is to say, links may be provided in the content of the technical documentation originating from the PDF files that can be configured to generate and display previews. Such links cannot normally be placed in PDF files and provided in a PDF reader.
  • In addition, a conventional PDF reader does not have the capability to allow a user to search across a set of PDF files at once. Therefore, if the technical documentation involves multiple files, then a user who is using a PDF reader is required to open the files one at a time to search for a particular term and/or topic. However, various embodiments allow the user to search the entire library (e.g., multiple PDF files) for the technical documentation in the IETM with a single search.
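  • A single search across the imported library could, for example, be approximated as below, assuming each source document has already been stored as a JSON file as described above; the directory layout and result format are illustrative assumptions:

    import json
    from pathlib import Path

    def search_library(library_dir: str, term: str) -> list:
        # Return (file name, key trail) pairs for every string value containing the term.
        hits = []
        term_lower = term.lower()

        def walk(node, trail, source):
            if isinstance(node, dict):
                for key, value in node.items():
                    walk(value, trail + [key], source)
            elif isinstance(node, list):
                for i, value in enumerate(node):
                    walk(value, trail + [str(i)], source)
            elif isinstance(node, str) and term_lower in node.lower():
                hits.append((source, "/".join(trail)))

        for path in Path(library_dir).glob("*.json"):
            walk(json.loads(path.read_text(encoding="utf-8")), [], path.name)
        return hits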
  • Further, the data structure and/or formatting (e.g., number of chapters, paragraphs, figures, tables, and/or the like) may be maintained when importing a data source that is not required to adhere to S1000D standards. This may be helpful to a user who needs to navigate the technical documentation, since the structure and formatting mimic the structure and formatting found in the original data source. Finally, for embodiments that allow source data that does not adhere to S1000D standards to be imported and used in the IETM, personnel who maintain the data source (e.g., maintain the technical manual provided in the PDF file(s)) are not required to convert the data source to another file format (e.g., XML and/or SGML) and/or to S1000D standards, or to learn how to do so.
  • Therefore, in various embodiments, a data request may be received within the IETM. For example, a user may select a component or topic, request a preview, and/or the like while signed into the IETM. The data request may identify particular content that was imported as a data module and/or data file and that can be provided in JSON format. Accordingly, in some embodiments, providing the content in JSON format may allow the content to be transmitted and/or processed more quickly than if the content were provided in another file format such as XML, SGML, and/or PDF format.
  • CONCLUSION
  • Many modifications and other embodiments of the disclosure set forth herein will come to mind to one skilled in the art to which these modifications and other embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the disclosure is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
  • In addition, the functionality described herein involving parts may be applicable to other components in various embodiments. For instance, the functionality involving 3D graphics is described herein with respect to viewing different parts used for a component of an item in a 3D graphic. Those of ordinary skill in the art will recognize that such functionality may be applicable in various embodiments with respect to viewing other components in addition to parts. As previously noted, components may identify functional and/or physical structures of an item and may be broken down into assembly, sub-assembly, sub-sub-assembly, system, sub-system, sub-sub-system, subject, unit, part, and/or the like. Therefore, a 3D graphic may not only be provided at the part level, but may be provided at other levels found within the structure of the item and therefore, the functionality described herein with respect to 3D graphics may be applicable to these other levels and corresponding components. The same can be said with respect to other functionality described herein involving parts such as generating a preview for a part. Therefore, it should be understood the functionality described herein involving parts is not to be limited to use with just parts and may be used with respect to other components of an item in various embodiments.

Claims (24)

1. A method for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer, the method comprising:
providing a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation;
receiving a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity;
responsive to receiving the first verbal command:
identifying, via one or more processors, a focus of a first portion of the content; and
causing, via the one or more processors, a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and
after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content:
receiving a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and
responsive to receiving the second verbal command:
identifying, via the one or more processors, a focus of a second portion of the content; and
causing, via the one or more processors, a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
2. The method of claim 1 further comprising:
processing one or more features of the first verbal command using a verbal command machine learning model to generate the first action; and
processing one or more features of the second verbal command using the verbal command machine learning model to generate the second action.
3. The method of claim 2, wherein the verbal command machine learning model is trained using first training data comprising a first plurality of samples of the user speaking the first verbal command for the first action and second training data comprising a second plurality of samples of the user speaking the second verbal command for the second action.
4. The method of claim 3, wherein the user identifies the first action for the first verbal command and the second action for the second verbal command.
5. The method of claim 1, wherein the focus of the first portion of the content comprises a selection of the first portion of the content via a selection verbal command received as a result of the user speaking the selection verbal command that is detected by the audio input of the user computing entity.
6. The method of claim 1, wherein the first action comprises causing the first user interface control element to at least one of convey input, navigate to a particular section of the first portion of the content, or display other content associated with the first portion of the content.
7. The method of claim 1, wherein the focus of the second portion of the content results from the first action being performed with respect to the first user interface control element associated with the first portion of the content.
8. The method of claim 7, wherein the content comprises a plurality of sequential portions of the content, the second portion of the content immediately follows the first portion of the content in the plurality of sequential portions of the content, and the first action with respect to the first user interface control element associated with the first portion of the content comprises setting the first user interface control element associated with the first portion of the content to indicate a completion of the first portion of the content.
9. An apparatus for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer, the apparatus comprising at least one processor and at least one memory including a computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:
provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation;
receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity;
responsive to receiving the first verbal command:
identify a focus of a first portion of the content; and
cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and
after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content:
receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and
responsive to receiving the second verbal command:
identify a focus of a second portion of the content; and
cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
10. The apparatus of claim 9, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:
process one or more features of the first verbal command using a verbal command machine learning model to generate the first action; and
process one or more features of the second verbal command using the verbal command machine learning model to generate the second action.
11. The apparatus of claim 10, wherein the verbal command machine learning model is trained using first training data comprising a first plurality of samples of the user speaking the first verbal command for the first action and second training data comprising a second plurality of samples of the user speaking the second verbal command for the second action.
12. The apparatus of claim 11, wherein the user identifies the first action for the first verbal command and the second action for the second verbal command.
13. The apparatus of claim 9, wherein the focus of the first portion of the content comprises a selection of the first portion of the content via a selection verbal command received as a result of the user speaking the selection verbal command that is detected by the audio input of the user computing entity.
14. The apparatus of claim 9, wherein the first action comprises causing the first user interface control element to at least one of convey input, navigate to a particular section of the first portion of the content, or display other content associated with the first portion of the content.
15. The apparatus of claim 9, wherein the focus of the second portion of the content results from the first action being performed with respect to the first user interface control element associated with the first portion of the content.
16. The apparatus of claim 15, wherein the content comprises a plurality of sequential portions of the content, the second portion of the content immediately follows the first portion of the content in the plurality of sequential portions of the content, and the first action with respect to the first user interface control element associated with the first portion of the content comprises setting the first user interface control element associated with the first portion of the content to indicate a completion of the first portion of the content.
17. A non-transitory computer storage medium comprising instructions for controlling content found in technical documentation for an item via an interactive electronic technical manual system (IETM) configured to provide electronic and credentialed access to the technical documentation for the item via an IETM viewer, the instructions being configured to cause one or more processors to at least perform operations configured to:
provide a window for display via the IETM viewer executing on a user computing entity being used by a user signed into the IETM, the window displaying the content found in the technical documentation;
receive a first verbal command, wherein the first verbal command is received as a result of the user speaking the first verbal command that is detected by an audio input of the user computing entity;
responsive to receiving the first verbal command:
identify a focus of a first portion of the content; and
cause a first action to be performed based at least in part on the first verbal command with respect to a first user interface control element associated with the first portion of the content; and
after causing the first action to be performed with respect to the first user interface control element associated with the first portion of the content:
receive a second verbal command, wherein the second verbal command is received as a result of the user speaking the second verbal command that is detected by the audio input of the user computing entity; and
responsive to receiving the second verbal command:
identify a focus of a second portion of the content; and
cause a second action to be performed based at least in part on the second verbal command with respect to a second user interface control element associated with the second portion of the content.
18. The non-transitory computer storage medium of claim 17, wherein the instructions are configured to cause the one or more processors to at least perform operations configured to:
process one or more features of the first verbal command using a verbal command machine learning model to generate the first action; and
process one or more features of the second verbal command using the verbal command machine learning model to generate the second action.
19. The non-transitory computer storage medium of claim 18, wherein the verbal command machine learning model is trained using first training data comprising a first plurality of samples of the user speaking the first verbal command for the first action and second training data comprising a second plurality of samples of the user speaking the second verbal command for the second action.
20. The non-transitory computer storage medium of claim 19, wherein the user identifies the first action for the first verbal command and the second action for the second verbal command.
21. The non-transitory computer storage medium of claim 17, wherein the focus of the first portion of the content comprises a selection of the first portion of the content via a selection verbal command received as a result of the user speaking the selection verbal command that is detected by the audio input of the user computing entity.
22. The non-transitory computer storage medium of claim 17, wherein the first action comprises causing the first user interface control element to at least one of convey input, navigate to a particular section of the first portion of the content, or display other content associated with the first portion of the content.
23. The non-transitory computer storage medium of claim 17, wherein the focus of the second portion of the content results from the first action being performed with respect to the first user interface control element associated with the first portion of the content.
24. The non-transitory computer storage medium of claim 23, wherein the content comprises a plurality of sequential portions of the content, the second portion of the content immediately follows the first portion of the content in the plurality of sequential portions of the content, and the first action with respect to the first user interface control element associated with the first portion of the content comprises setting the first user interface control element associated with the first portion of the content to indicate a completion of the first portion of the content.
US17/249,051 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual Pending US20220262358A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/249,051 US20220262358A1 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/249,051 US20220262358A1 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual
US17/249,039 US11967317B2 (en) 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/249,039 Continuation US11967317B2 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual

Publications (1)

Publication Number Publication Date
US20220262358A1 true US20220262358A1 (en) 2022-08-18

Family

ID=82800374

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/249,052 Active US11929068B2 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual
US17/249,050 Pending US20220261530A1 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual
US17/249,051 Pending US20220262358A1 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US17/249,052 Active US11929068B2 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual
US17/249,050 Pending US20220261530A1 (en) 2021-02-18 2021-02-18 Providing enhanced functionality in an interactive electronic technical manual

Country Status (1)

Country Link
US (3) US11929068B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11700288B2 (en) 2020-09-21 2023-07-11 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US20230325580A1 (en) * 2022-04-10 2023-10-12 Atlassian Pty Ltd. Multi-mode display for documents in a web browser client application
US11929068B2 (en) 2021-02-18 2024-03-12 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11947906B2 (en) 2021-05-19 2024-04-02 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11967317B2 (en) 2021-02-18 2024-04-23 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010056384A1 (en) * 2000-06-14 2001-12-27 Yukiharu Matsumura Electronic manual delivery system and method
US20040138899A1 (en) * 2003-01-13 2004-07-15 Lawrence Birnbaum Interactive task-sensitive assistant
US7162426B1 (en) * 2000-10-02 2007-01-09 Xybernaut Corporation Computer motherboard architecture with integrated DSP for continuous and command and control speech processing
US20100315329A1 (en) * 2009-06-12 2010-12-16 Southwest Research Institute Wearable workspace
US20110047503A1 (en) * 2009-08-18 2011-02-24 International Business Machines Corporation File content navigation using binary search
US20170031652A1 (en) * 2015-07-29 2017-02-02 Samsung Electronics Co., Ltd. Voice-based screen navigation apparatus and method
US20180181264A1 (en) * 2016-12-23 2018-06-28 Realwear, Incorporated Context based content navigation for wearable display
US20180260926A1 (en) * 2017-03-07 2018-09-13 Global Tel*Link Corp. Centralized offender management system for multiple jurisdictions
US20200098192A1 (en) * 2017-06-05 2020-03-26 2689090 Canada Inc. System and method for displaying an asset of an interactive electronic technical publication synchronously in a plurality of extended reality display devices
US20200294497A1 (en) * 2018-05-07 2020-09-17 Google Llc Multi-modal interaction between users, automated assistants, and other computing services

Family Cites Families (186)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4985855A (en) 1987-08-24 1991-01-15 International Business Machines Corp. Method for producing installation instructions for three dimensional assemblies
JP2636270B2 (en) 1987-10-29 1997-07-30 ブラザー工業株式会社 Document creation device
US4899292A (en) 1988-03-02 1990-02-06 Image Storage/Retrieval Systems, Inc. System for storing and retrieving text and associated graphics
CA2025120A1 (en) 1989-09-28 1991-03-29 John W. White Portable and dynamic distributed application architecture
US5454074A (en) 1991-09-18 1995-09-26 The Boeing Company Electronic checklist system
US5428733A (en) 1991-12-16 1995-06-27 Apple Computer, Inc. Method of calculating dimensions and positioning of rectangular balloons
US6496872B1 (en) 1994-05-16 2002-12-17 Apple Computer, Inc. Computer system for automatically instantiating tasks designated by a user
US5742768A (en) 1996-07-16 1998-04-21 Silicon Graphics, Inc. System and method for providing and displaying a web page having an embedded menu
US6262720B1 (en) 1998-07-24 2001-07-17 The Boeing Company Electronic checklist system with checklist inhibiting
US6557015B1 (en) 1998-09-18 2003-04-29 International Business Machines Corporation Determining whether a second hypertext document is included in a list of active document trails
US6313854B1 (en) 1998-10-16 2001-11-06 International Business Machines Corporation Display mechanism for HTML frames
US7107539B2 (en) 1998-12-18 2006-09-12 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US6961897B1 (en) 1999-06-14 2005-11-01 Lockheed Martin Corporation System and method for interactive electronic media extraction for web page generation
US7058817B1 (en) 1999-07-02 2006-06-06 The Chase Manhattan Bank System and method for single sign on process for websites with multiple applications and services
US6606731B1 (en) 1999-08-05 2003-08-12 The Boeing Company Intelligent wiring diagram system
US6513036B2 (en) 1999-08-13 2003-01-28 Mindpass A/S Method and apparatus for searching and presenting search result from one or more information sources based on context representations selected from the group of other users
US20020002563A1 (en) 1999-08-23 2002-01-03 Mary M. Bendik Document management systems and methods
US6430556B1 (en) * 1999-11-01 2002-08-06 Sun Microsystems, Inc. System and method for providing a query object development environment
US20010030611A1 (en) 2000-03-02 2001-10-18 O'rourke James D. Checklist device
BR0107243A (en) 2000-09-22 2002-07-23 Honda Motor Co Ltd Parts recovery system, server, user terminal, recording media and maintenance materials recovery system
US6542796B1 (en) 2000-11-18 2003-04-01 Honeywell International Inc. Methods and apparatus for integrating, organizing, and accessing flight planning and other data on multifunction cockpit displays
US6879997B1 (en) 2000-11-27 2005-04-12 Nokia Corporation Synchronously shared online documents
WO2002063535A2 (en) 2001-02-07 2002-08-15 Exalt Solutions, Inc. Intelligent multimedia e-catalog
WO2002101508A2 (en) 2001-06-11 2002-12-19 Mariner Supply, Inc. (Dba Go2Marine.Com) Interactive exploded view diagram ordering tool
US20030025682A1 (en) 2001-07-06 2003-02-06 Dame Stephen G. Aural/visual checklist system for avionics
US7210094B2 (en) 2001-07-11 2007-04-24 International Business Machines Corporation Method and system for dynamic web page breadcrumbing using javascript
US7302675B2 (en) 2001-08-14 2007-11-27 National Instruments Corporation System and method for analyzing a graphical program using debugging graphical programs
US20030187751A1 (en) 2001-10-31 2003-10-02 Mike Watson Interactive electronic reference systems and methods
US6768998B2 (en) 2001-12-19 2004-07-27 General Electric Company Systems and methods for network-based technical library
US6884946B2 (en) 2002-03-05 2005-04-26 Premark Feg L.L.C. Scale for weighing and determining a price of an item
US7356774B2 (en) 2002-08-13 2008-04-08 National Instruments Corporation Grouping components of a measurement system
US20040073794A1 (en) 2002-10-15 2004-04-15 Kevin Nip Method and system for the dynamic and automated storage and retrieval of authentication information via a communications network
US7194693B2 (en) * 2002-10-29 2007-03-20 International Business Machines Corporation Apparatus and method for automatically highlighting text in an electronic document
US20040226048A1 (en) 2003-02-05 2004-11-11 Israel Alpert System and method for assembling and distributing multi-media output
US7216266B2 (en) 2003-03-12 2007-05-08 Thomson Licensing Change request form annotation
US20040204998A1 (en) 2003-04-08 2004-10-14 Rachana Shah System and method for placing orders
US20030191681A1 (en) 2003-05-06 2003-10-09 Gallion Kirk P. Method for managing a business process related to a document publishing project
WO2004104863A1 (en) 2003-05-20 2004-12-02 Victor Company Of Japan, Limited Electronic service manual display control device
US20040260594A1 (en) 2003-06-18 2004-12-23 Maddox Edward P. Maintenance and inspection system and method
US20050027578A1 (en) 2003-07-31 2005-02-03 International Business Machines Corporation Dynamic status checklist procedure
US7827591B2 (en) 2003-10-08 2010-11-02 Fmr Llc Management of hierarchical reference data
US7103434B2 (en) 2003-10-14 2006-09-05 Chernyak Alex H PLM-supportive CAD-CAM tool for interoperative electrical and mechanical design for hardware electrical systems
US7386436B2 (en) 2003-12-22 2008-06-10 Inmedius, Inc. Viewing system that supports multiple electronic document types
US7647628B2 (en) 2004-03-09 2010-01-12 International Business Machines Corporation Authentication to a second application using credentials authenticated to a first application
US7376912B2 (en) 2004-03-25 2008-05-20 Morgan Stanley Interactive user interface for displaying supply chain information
WO2006017160A2 (en) 2004-07-09 2006-02-16 Ade Corporation System and method for searching for patterns of semiconductor wafer features in semiconductor wafer data
US7818683B2 (en) 2004-12-06 2010-10-19 Oracle International Corporation Methods and systems for representing breadcrumb paths, breadcrumb inline menus and hierarchical structure in a web environment
US20080052281A1 (en) 2006-08-23 2008-02-28 Lockheed Martin Corporation Database insertion and retrieval system and method
US7444216B2 (en) 2005-01-14 2008-10-28 Mobile Productivity, Inc. User interface for display of task specific information
US9009074B2 (en) 2005-01-25 2015-04-14 Siemens Aktiengesellschaft Systems and methods for generating electronic spare parts catalogs for complex systems and machines
US7613638B2 (en) * 2005-01-25 2009-11-03 Siemens Corporate Research, Inc. Automated systems and methods to support electronic business transactions for spare parts
US8019752B2 (en) 2005-11-10 2011-09-13 Endeca Technologies, Inc. System and method for information retrieval from object collections with complex interrelationships
US7756883B2 (en) 2005-12-12 2010-07-13 Industrial Technology Research Institute Control method for modifying engineering information from a remote work site and a system of the same
EP1963998A1 (en) 2005-12-22 2008-09-03 International Business Machines Corporation Method and system for automatically generating multilingual electronic content from unstructured data
EP1804183B1 (en) 2005-12-30 2017-06-21 Dassault Systèmes Process for selecting objects in a PLM database and apparatus implementing this process
US20110313899A1 (en) * 2006-01-05 2011-12-22 Drey Leonard L Method of Governing Content Presentation
WO2007091324A1 (en) 2006-02-09 2007-08-16 Fujitsu Limited Work instruction sheet preparing device, method and program
US7299101B2 (en) 2006-03-06 2007-11-20 The Protomold Company, Inc. Manipulatable model for communicating manufacturing issues of a custom part
US20070283317A1 (en) 2006-03-17 2007-12-06 Organizational Strategies, Inc. Inter domain services manager
US8381164B2 (en) 2006-03-28 2013-02-19 The Boeing Company Method and system of intelligent interactive graphics electrical plug map to analyze text and distances between electrical contacts and physical layout file builder
US8214789B2 (en) 2006-10-02 2012-07-03 The Boeing Company Method and system for keyboard managing and navigating among drawing objects
US10635260B2 (en) * 2007-01-22 2020-04-28 Cerner Innovation, Inc. System and user interface for clinical reporting and ordering provision of an item
US8479091B2 (en) 2007-04-30 2013-07-02 Xerox Corporation Automated assembly of a complex document based on production constraints
WO2008140721A2 (en) 2007-05-09 2008-11-20 Lexisnexis Group Systems and methods for analyzing documents
US8739073B2 (en) 2007-05-15 2014-05-27 Microsoft Corporation User interface for document table of contents
US8112715B2 (en) 2007-07-31 2012-02-07 International Business Machines Corporation Content management system that renders a document to a user based on a usage profile that indicates previous activity in accessing the document
US8086970B2 (en) 2007-08-02 2011-12-27 International Business Machines Corporation Address range viewer
US8190590B2 (en) 2007-08-15 2012-05-29 Martin Edward Lawlor System and method for the creation and access of dynamic course content
US8010910B2 (en) 2007-09-04 2011-08-30 Microsoft Corporation Breadcrumb list supplementing for hierarchical data sets
US7983809B2 (en) 2007-12-21 2011-07-19 Sikorsky Aircraft Corporation Aircraft integrated support system (ISS)
JP4597203B2 (en) 2008-03-13 2010-12-15 本田技研工業株式会社 Electronic manual display device
US8826375B2 (en) 2008-04-14 2014-09-02 Lookwithus.Com Inc. Rich media collaboration system
US20090319955A1 (en) 2008-06-20 2009-12-24 Microsoft Corporation Techniques for a navigation based design tool
US20100077320A1 (en) 2008-09-19 2010-03-25 United States Government As Represented By The Secretary Of The Navy SGML/XML to HTML conversion system and method for frame-based viewer
US20100088746A1 (en) * 2008-10-08 2010-04-08 Sony Corporation Secure ebook techniques
US8621390B1 (en) 2008-10-21 2013-12-31 Amazon Technologies, Inc. Table of contents menu over electronic book content on an electronic paper display
US20100191659A1 (en) 2009-01-23 2010-07-29 Terry Hebron System, method and computer program product for calculating an appraisal price for a heavy truck
US20100312595A1 (en) 2009-03-11 2010-12-09 Lynn Darrell D Group Based Management and Delivery System for Education Projects
US9270420B2 (en) 2009-04-24 2016-02-23 Samsung Electronics Co., Ltd. Data communication using 2D bar codes
FR2947650B1 (en) 2009-07-03 2020-10-16 Thales Sa METHOD AND SYSTEM FOR GENERATING ELECTRONIC DOCUMENTATION FOR MAINTENANCE
CN201540571U (en) 2009-07-29 2010-08-04 中国人民解放军海军航空工程学院 Interactive electronic technical manual exploring system
US8161417B1 (en) 2009-11-04 2012-04-17 Sprint Communications Company L.P. Enhancing usability of a moving touch screen
US8527893B2 (en) 2010-02-26 2013-09-03 Microsoft Corporation Taxonomy editor
US8863032B2 (en) 2010-03-01 2014-10-14 Autodesk, Inc. Presenting object properties
JP5598024B2 (en) 2010-03-04 2014-10-01 株式会社リコー Component management system, component management apparatus, component management program, and component management method
US8775552B1 (en) 2010-04-23 2014-07-08 The Boeing Company Methods and systems for distribution of technical manuals
US9361130B2 (en) 2010-05-03 2016-06-07 Apple Inc. Systems, methods, and computer program products providing an integrated user interface for reading content
US8402359B1 (en) 2010-06-30 2013-03-19 International Business Machines Corporation Method and apparatus for managing recent activity navigation in web applications
US20120005624A1 (en) 2010-07-02 2012-01-05 Vesely Michael A User Interface Elements for Use within a Three Dimensional Scene
US20140201029A9 (en) 2010-09-03 2014-07-17 Joseph Anthony Plattsmier 3D Click to Buy
US20120084108A1 (en) 2010-09-30 2012-04-05 Adc Telecommunications, Inc. Systems and methods for a work flow management application suite for mobile communications devices
US20130212507A1 (en) 2010-10-11 2013-08-15 Teachscape, Inc. Methods and systems for aligning items of evidence to an evaluation framework
US8914851B2 (en) 2010-12-06 2014-12-16 Golba Llc Method and system for improved security
US9285950B2 (en) 2011-03-30 2016-03-15 Google Inc. Hover-over gesturing on mobile devices
WO2012142740A1 (en) 2011-04-18 2012-10-26 Egonexus Limited Digital token generator, server for recording digital tokens and method for issuing digital token
US20120304105A1 (en) 2011-05-26 2012-11-29 The Boeing Company Wiring Diagram Visualization System
US9230034B2 (en) 2011-07-15 2016-01-05 International Business Machines Corporation Related page identification based on page hierarchy and subject hierarchy
US10108706B2 (en) 2011-09-23 2018-10-23 Amazon Technologies, Inc. Visual representation of supplemental information for a digital work
US9197718B2 (en) 2011-09-23 2015-11-24 Box, Inc. Central management and control of user-contributed content in a web-based collaboration environment and management console thereof
US20130179309A1 (en) 2012-01-10 2013-07-11 Thermo Fisher Scientific Inc. Methods and Systems For Restocking Inventory
US20130325567A1 (en) 2012-02-24 2013-12-05 Augme Technologies, Inc. System and method for creating a virtual coupon
US9100822B2 (en) 2012-02-24 2015-08-04 Wyse Technology L.L.C. System and method for information sharing using visual tags
US10275727B2 (en) 2012-04-18 2019-04-30 International Business Machines Corporation Dynamic location-aware coordination method and system
US9887992B1 (en) 2012-07-11 2018-02-06 Microstrategy Incorporated Sight codes for website authentication
WO2014009461A1 (en) 2012-07-11 2014-01-16 Tyco Electronics Raychem Bvba Integrated three dimensional product access and display system
KR101407069B1 (en) 2012-10-09 2014-06-12 한국전자통신연구원 Method for authoring xml document and apparatus for performing the same
JP6098120B2 (en) 2012-11-01 2017-03-22 富士通株式会社 Assembly sequence generation program, assembly sequence generation apparatus, and manufacturing method
US10394936B2 (en) 2012-11-06 2019-08-27 International Business Machines Corporation Viewing hierarchical document summaries using tag clouds
US8898771B1 (en) 2012-11-13 2014-11-25 Christine Hana Kim Apparatus and method for preventing a dangerous user behavior with a mobile communication device using an integrated pedometer
FR2999700B1 (en) 2012-12-14 2015-07-10 Thales Sa METHOD AND DEVICE FOR PROVIDING MAN-MACHINE INTERFACE DATA RELATING TO A FLIGHT PLAN
US9411899B2 (en) 2012-12-21 2016-08-09 Paypal, Inc. Contextual breadcrumbs during navigation
US20140223348A1 (en) 2013-01-10 2014-08-07 Tyco Safety Products Canada, Ltd. Security system and method with information display in flip window
US8976202B2 (en) 2013-01-28 2015-03-10 Dave CAISSY Method for controlling the display of a portable computing device
US9678484B2 (en) 2013-03-15 2017-06-13 Fisher-Rosemount Systems, Inc. Method and apparatus for seamless state transfer between user interface devices in a mobile control room
US9588675B2 (en) * 2013-03-15 2017-03-07 Google Inc. Document scale and position optimization
US20140298184A1 (en) 2013-03-26 2014-10-02 Larry Bailin Computer-implemented system and method for integrating and managing product specific data
US20140310613A1 (en) 2013-04-15 2014-10-16 Microsoft Corporation Collaborative authoring with clipping functionality
US9098593B2 (en) 2013-04-23 2015-08-04 The Boeing Company Barcode access to electronic resources for lifecycle tracking of complex system parts
US20140324779A1 (en) 2013-04-29 2014-10-30 Freerun Technologies, Inc. Travel application for mobile devices
US20140359457A1 (en) 2013-05-30 2014-12-04 NextPlane, Inc. User portal to a hub-based system federating disparate unified communications systems
US9225704B1 (en) 2013-06-13 2015-12-29 Amazon Technologies, Inc. Unified management of third-party accounts
US20150007096A1 (en) 2013-06-28 2015-01-01 Silicon Graphics International Corp. Rotation of graphical scenes
US20150032548A1 (en) 2013-07-26 2015-01-29 Ari Shapiro System and method for enhancing oem parts shopping
US9971752B2 (en) 2013-08-19 2018-05-15 Google Llc Systems and methods for resolving privileged edits within suggested edits
US9787617B2 (en) 2013-09-05 2017-10-10 Quzzup Srl Method and system for establishing a communication between mobile computing devices
US10739951B2 (en) 2013-09-06 2020-08-11 Knowledge Initiatives LLC Interactive user interfaces for electronic textbook implementations
US9953311B2 (en) 2013-09-25 2018-04-24 Visa International Service Association Systems and methods for incorporating QR codes
US20150106723A1 (en) 2013-10-10 2015-04-16 Jones International, Ltd. Tools for locating, curating, editing, and using content of an online library
US20150172418A1 (en) 2013-12-13 2015-06-18 Assemble Systems Llc Real-Time Model-Based Collaboration and Presence
US9280646B1 (en) 2013-12-17 2016-03-08 Vce Company, Llc Methods, systems, and computer readable mediums for role-based access control involving one or more converged infrastructure systems
US20160210268A1 (en) 2014-01-15 2016-07-21 Mark Allen Sales Methods and systems for managing visual text objects using an outline-like layout
US20150234786A1 (en) 2014-02-14 2015-08-20 Kobo Inc. E-reader device to display content from different resources on a partitioned display area
US20150242495A1 (en) 2014-02-24 2015-08-27 Hipmunk, Inc. Search machine for presenting active search results
US10050787B1 (en) * 2014-03-25 2018-08-14 Amazon Technologies, Inc. Authentication objects with attestation
US10268682B2 (en) 2014-04-02 2019-04-23 International Business Machines Corporation Adjusting text in message in light of recipients interests and/or personality traits to sustain recipient's interest in message
WO2015154093A2 (en) 2014-04-05 2015-10-08 Wearable Intelligence Systems and methods for digital workflow and communication
US10394938B2 (en) 2014-04-30 2019-08-27 MBTE Holdings Sweden AB Visual searching and navigation
US20160085430A1 (en) 2014-09-24 2016-03-24 Microsoft Corporation Adapting user interface to interaction criteria and component properties
US10216844B2 (en) 2014-09-26 2019-02-26 Excalibur Ip, Llc Graphical interface presentation of search results
US9984133B2 (en) 2014-10-16 2018-05-29 Palantir Technologies Inc. Schematic and database linking system
US10713699B1 (en) 2014-11-14 2020-07-14 Andersen Corporation Generation of guide materials
US9679411B2 (en) 2014-12-19 2017-06-13 International Business Machines Corporation Hardware management and reconstruction using visual graphics
US9632664B2 (en) 2015-03-08 2017-04-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10452755B2 (en) 2015-03-10 2019-10-22 Microsoft Technology Licensing, Llc Hierarchical navigation control
WO2016160680A1 (en) 2015-03-27 2016-10-06 The Parari Group, Llc Apparatus, systems, and methods for providing three-dimensional instruction manuals in a simplified manner
US10209867B1 (en) 2015-05-15 2019-02-19 Jordan M. Becker Electronic documentation integration and viewing system
US10474973B2 (en) 2015-05-19 2019-11-12 Bell Helicopter Textron Inc. Aircraft fleet maintenance system
US20160357376A1 (en) 2015-06-05 2016-12-08 Apple Inc. Ownership-agnostic user interface for media content
US10481765B2 (en) 2015-10-09 2019-11-19 Walmart Apollo, Llc Graphical user interface and method and apparatus of navigating using same
US10303892B1 (en) 2015-10-12 2019-05-28 Nextlabs, Inc. Viewing protected documents in a web browser
WO2017064493A1 (en) 2015-10-13 2017-04-20 Certa 360 Limited System for electronically managing and assessing competency of skilled workers
CA3013322A1 (en) 2016-02-02 2017-08-10 ActiveWrite, Inc. Document collaboration and consolidation tools and methods of use
US10190792B2 (en) 2016-04-27 2019-01-29 Crestron Electronics, Inc. Three-dimensional building management system visualization
CN106095258B (en) 2016-06-07 2020-05-08 Tcl移动通信科技(宁波)有限公司 Electronic book page changing method and system based on mobile terminal
US10852912B2 (en) 2016-06-12 2020-12-01 Apple Inc. Image creation app in messaging app
US20180032618A1 (en) 2016-07-29 2018-02-01 ALQIMI Analytics & Intelligence, LLC System and methods for retrieving raw data from unpredictable data sources
US10496734B2 (en) 2016-09-26 2019-12-03 Microsoft Technology Licensing, Llc Intelligent navigation via a transient user interface control
US10691164B2 (en) 2016-10-10 2020-06-23 MBTE Holdings Sweden AB Device for coordinated use of multiple mobile computing devices
US10515479B2 (en) 2016-11-01 2019-12-24 Purdue Research Foundation Collaborative 3D modeling system
US10594786B1 (en) 2017-01-10 2020-03-17 Lucasfilm Entertainment Company Ltd. Multi-device interaction with an immersive environment
US20180225408A1 (en) 2017-02-03 2018-08-09 Ignitor Labs, LLC System and method for interactive modeling and analysis of 3-d digital assemblies
US10430997B2 (en) 2017-02-23 2019-10-01 OPTO Interactive, LLC Method of managing proxy objects
US10803044B1 (en) 2017-03-07 2020-10-13 The United States Of America, As Represented By The Secretary Of The Navy Technical data flexibility index
US10484358B2 (en) 2017-05-05 2019-11-19 Servicenow, Inc. Single sign-on user interface improvements
US11151488B2 (en) 2017-07-31 2021-10-19 United Parcel Service Of America, Inc. Intelligent user interface and application for operations management
EP3642835A4 (en) 2017-08-03 2021-01-06 Telepathy Labs, Inc. Omnichannel, intelligent, proactive virtual agent
US11210434B2 (en) * 2017-10-17 2021-12-28 Textron Innovations Inc. Fault isolation
US10713423B2 (en) 2017-10-25 2020-07-14 International Business Machines Corporation Content adjustment and display augmentation for communication
CN108268635B (en) 2018-01-17 2022-06-24 百度在线网络技术(北京)有限公司 Method and apparatus for acquiring data
US10623500B2 (en) 2018-01-24 2020-04-14 Vmware, Inc. Remote desktop sharing and collaboration via image scanning
US11175934B2 (en) 2018-05-24 2021-11-16 Nextaxiom Technology, Inc. Method of defining and performing dynamic user-computer interaction, computer guided navigation, and application integration for any procedure, instructions, instructional manual, or fillable form
US10852781B2 (en) 2018-06-13 2020-12-01 MBTE Holdings Sweden AB Synchronized display for a wearable mobile terminal
US10657318B2 (en) 2018-08-01 2020-05-19 Microsoft Technology Licensing, Llc Comment notifications for electronic content
US11886686B2 (en) * 2018-08-28 2024-01-30 Intelligent Medical Objects, Inc. User interface, system, and method for optimizing a patient problem list
US10689128B2 (en) 2018-10-23 2020-06-23 Honeywell International Inc. Methods and systems for a graphical user interface of an electronic aviation checklist
JP7177658B2 (en) 2018-10-25 2022-11-24 カワサキモータース株式会社 METHOD, SYSTEM AND PROGRAM FOR DISPLAYING ELECTRONIC SERVICE MANUAL
US11682390B2 (en) 2019-02-06 2023-06-20 Microstrategy Incorporated Interactive interface for analytics
US11323432B2 (en) 2019-07-08 2022-05-03 Bank Of America Corporation Automatic login tool for simulated single sign-on
US20210126983A1 (en) 2019-10-24 2021-04-29 Microsoft Technology Licensing, Llc Status indicators for communicating user activity across digital contexts
US11032090B2 (en) 2019-10-28 2021-06-08 MITEL NETWORKS (Int'L) Limited Method, system, and device for changing the collaboration state for omni-workspaces
US11093046B2 (en) 2019-12-16 2021-08-17 Microsoft Technology Licensing, Llc Sub-display designation for remote content source device
CN111581948B (en) 2020-04-03 2024-02-09 北京百度网讯科技有限公司 Document analysis method, device, equipment and storage medium
US20220092555A1 (en) 2020-09-21 2022-03-24 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US20220091707A1 (en) 2020-09-21 2022-03-24 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11929068B2 (en) 2021-02-18 2024-03-12 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11875136B2 (en) 2021-04-01 2024-01-16 Microsoft Technology Licensing, Llc Edit automation using a temporal edit pattern

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010056384A1 (en) * 2000-06-14 2001-12-27 Yukiharu Matsumura Electronic manual delivery system and method
US7162426B1 (en) * 2000-10-02 2007-01-09 Xybernaut Corporation Computer motherboard architecture with integrated DSP for continuous and command and control speech processing
US20040138899A1 (en) * 2003-01-13 2004-07-15 Lawrence Birnbaum Interactive task-sensitive assistant
US7890336B2 (en) * 2003-01-13 2011-02-15 Northwestern University Interactive task-sensitive assistant
US20100315329A1 (en) * 2009-06-12 2010-12-16 Southwest Research Institute Wearable workspace
US20110047503A1 (en) * 2009-08-18 2011-02-24 International Business Machines Corporation File content navigation using binary search
US20170031652A1 (en) * 2015-07-29 2017-02-02 Samsung Electronics Co., Ltd. Voice-based screen navigation apparatus and method
US20180181264A1 (en) * 2016-12-23 2018-06-28 Realwear, Incorporated Context based content navigation for wearable display
US20180260926A1 (en) * 2017-03-07 2018-09-13 Global Tel*Link Corp. Centralized offender management system for multiple jurisdictions
US20200098192A1 (en) * 2017-06-05 2020-03-26 2689090 Canada Inc. System and method for displaying an asset of an interactive electronic technical publication synchronously in a plurality of extended reality display devices
US20200294497A1 (en) * 2018-05-07 2020-09-17 Google Llc Multi-modal interaction between users, automated assistants, and other computing services

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DT Series Technical Manual (by Deutsch Industrial, retrieved at https://www.farnell.com/datasheets/628276.pdf, archived February 22, 2020, pages 1-13), hereinafter DT Series Technical Manual. (Year: 2020) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11700288B2 (en) 2020-09-21 2023-07-11 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11743302B2 (en) 2020-09-21 2023-08-29 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11792237B2 (en) 2020-09-21 2023-10-17 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11848761B2 (en) 2020-09-21 2023-12-19 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11895163B2 (en) 2020-09-21 2024-02-06 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11909779B2 (en) 2020-09-21 2024-02-20 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11929068B2 (en) 2021-02-18 2024-03-12 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11967317B2 (en) 2021-02-18 2024-04-23 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US11947906B2 (en) 2021-05-19 2024-04-02 MBTE Holdings Sweden AB Providing enhanced functionality in an interactive electronic technical manual
US20230325580A1 (en) * 2022-04-10 2023-10-12 Atlassian Pty Ltd. Multi-mode display for documents in a web browser client application

Also Published As

Publication number Publication date
US11929068B2 (en) 2024-03-12
US20220261530A1 (en) 2022-08-18
US20220261125A1 (en) 2022-08-18
US20220261124A1 (en) 2022-08-18

Similar Documents

Publication Publication Date Title
US11449204B2 (en) Providing enhanced functionality in an interactive electronic technical manual
US20220092555A1 (en) Providing enhanced functionality in an interactive electronic technical manual
US11929068B2 (en) Providing enhanced functionality in an interactive electronic technical manual
US20230114707A1 (en) Providing enhanced functionality in an interactive electronic technical manual
US11967317B2 (en) Providing enhanced functionality in an interactive electronic technical manual
US11947906B2 (en) Providing enhanced functionality in an interactive electronic technical manual
US20220374113A1 (en) Providing enhanced functionality in an interactive electronic technical manual
US20230110336A1 (en) Providing enhanced functionality in an interactive electronic technical manual

Legal Events

Date Code Title Description
AS Assignment

Owner name: MBTE HOLDINGS SWEDEN AB, GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MERIAZ, RAN;MERIAZ, YORAM;TKACHMAN, ALEXANDER;SIGNING DATES FROM 20210218 TO 20210530;REEL/FRAME:056400/0144

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED