NZ792078A - Interactive application adapted for use by multiple users via a distributed computer-based system - Google Patents
- Publication number
- NZ792078A
- Authority
- NZ
- New Zealand
Abstract
A processing device causes a selection of a multimedia module to be detected. The processing device renders, during a first mode of operation, a dynamic navigation flow control in association with first multimedia content of the multimedia module, wherein the rendered dynamic navigation flow control indicates a first navigation position. The processing device stores a first result to memory obtained by execution of an interactive event during the first mode of operation. Based at least on the first result, a second mode of operation is entered. During the second mode of operation, the dynamic navigation flow control is re-rendered in association with second multimedia content of the multimedia module, wherein the re-rendered dynamic navigation flow control indicates a second navigation position.
Description
INTERACTIVE APPLICATION ADAPTED FOR USE BY MULTIPLE USERS VIA
A DISTRIBUTED COMPUTER-BASED SYSTEM
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
Any and all applications for which a foreign or domestic priority claim is
identified in the Application Data Sheet as filed with the present application are hereby
incorporated by reference under 37 CFR 1.57.
BACKGROUND
Field
This document relates to systems and techniques for rendering content and
to content navigation tools.
Description of the Related Art
Conventional interactive systems exist. However, such conventional
systems do not adequately provide dynamic interactivity with users. Further, accessing
content in such conventional systems is often a cumbersome, confusing process.
SUMMARY
The following presents a simplified summary of one or more aspects in
order to provide a basic understanding of such aspects. This summary is not an extensive
overview of all contemplated aspects, and is intended to neither identify key or critical
elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to
present some concepts of one or more aspects in a simplified form as a prelude to the more
detailed description that is presented later.
An aspect of the present disclosure relates to a system, comprising: a
processing device; a computer readable medium that stores programmatic instructions that,
when executed by the processing device, are configured to cause the system to perform
operations comprising: detect a selection of a multimedia module; render, during a first mode
of operation, a dynamic navigation flow control in association with first multimedia content
of the multimedia module, wherein the rendered dynamic navigation flow control indicates a
first current navigation position; store a first result to memory obtained by execution of an
interactive event during the first mode of operation; based at least on the first result, enter a
second mode of operation; and re-render, during the second mode of operation, the dynamic
navigation flow control in association with second multimedia content of the multimedia
module, wherein the re-rendered dynamic navigation flow control indicates a second
navigation position corresponding to a second current navigation position.
An aspect of the present disclosure relates to a non-transitory computer
readable medium that stores programmatic instructions that, when executed by a processing
device, are configured to cause the processing device to perform operations comprising:
detect a selection of a multimedia module via a user input device; enable, during a first mode
of operation, a dynamic navigation flow control to be rendered in association with first
multimedia content of the multimedia module, wherein the rendered dynamic navigation
flow control indicates a first current navigation position; enable a first result to be stored to
memory obtained by execution of an interactive event during the first mode of operation;
enable a second mode of operation to be entered based at least in part on the first result; and
enable the dynamic navigation flow control to be re-rendered, during the second mode of
operation, in association with second multimedia content of the multimedia module, wherein
the re-rendered dynamic navigation flow control indicates a second navigation position
corresponding to a second current navigation position.
An aspect of the present disclosure relates to a computer implemented
method, the method comprising: detecting, using a computerized system, a selection of a
multimedia module, wherein the selection is made via a user input device; using the
computerized system, enabling, during a first mode of operation, a dynamic navigation flow
control to be rendered in association with first multimedia content of the multimedia module,
wherein the rendered dynamic navigation flow control indicates a first current navigation
position; using the computerized system, enabling a first result to be stored to memory
obtained by execution of an interactive event during the first mode of operation; using the
computerized system, enabling a second mode of operation to be entered based at least in
part on the first result; and using the computerized system, enabling the dynamic navigation
flow control to be re-rendered, during the second mode of operation, in association with
second multimedia content of the multimedia module, wherein the re-rendered dynamic
navigation flow control indicates a second navigation position corresponding to a second
current navigation position.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments will now be described with reference to the drawings
summarized below. Throughout the drawings, reference numbers may be re-used to indicate
correspondence between referenced elements. The drawings are provided to illustrate
example embodiments described herein and are not intended to limit the scope of the
disclosure.
Figure 1 illustrates an example architecture.
Figure 2A illustrates an example hardware architecture.
Figure 2B illustrates an example software architecture.
Figures 2C-2D illustrate an example process.
Figures 3A-3M illustrate example user interfaces.
Figures 4A-4H illustrate additional example user interfaces.
DETAILED DESCRIPTION
The task of navigating within an interactive content module (e.g., an
interactive training module), having a large number of related video, audio, and text content
components and subcomponents, to locate content of interest can be burdensome and time
consuming for users. This may especially be the case if the users do not know the structure
of the content module or the names of the content module components and subcomponents.
Further, in many cases, the content components and subcomponents are not accurately or
intuitively categorized, requiring the user to perform additional navigation or keyword
searching. Thus, conventionally the user frequently has to perform numerous navigational
steps to arrive at the content (e.g., learning or training content) of interest.
Because certain example improved user interfaces and their structures
disclosed herein are optimized for navigation of and interaction with related items of video,
audio, and text content for computing devices, it is easier for a user to accurately
provide instructions and interact with content as compared to conventional user interfaces,
thereby reducing user input errors and making content access quicker. In addition, a
dynamic navigation flow user interface is optionally rendered that visually depicts an
interactive content flow, the current position in the navigation flow, and alternative content
navigation paths. Such a dynamic navigation flow user interface further facilitates accurate
and quick navigation through a complex interactive content flow. By contrast, many
conventional user interfaces tend to provide non-intuitive and cumbersome interfaces,
making it difficult for the user to locate and select a correct item of content. Further,
conventional user interfaces do not provide a dynamic navigation flow user interface, leading
to confusion and erroneous content selection, wasting system resources and network
bandwidth. Still further, certain conventional systems, such as conventional training
systems, require that content be linearly accessed and viewed, or make it difficult to access
content in accordance with a user’s learning style.
Thus, as would be appreciated by one of skill in the art, the use of the
disclosed navigation techniques and user interfaces represents a significant technological
advance over prior conventional implementations. For example, the use of the dynamic
navigation flow user interface enables a user to locate and access content, such as training
module components and subcomponents, with fewer clicks, scrolls, and/or page (or other
display combination) navigations than would otherwise be required to locate
appropriate content.
Further, certain disclosed user interfaces enable users to visually locate
module content components and subcomponents more quickly than with current conventional
user interfaces. For example, in the embodiments described herein, when a user is presented
with a dynamic navigation flow user interface including identifiers for module components
and subcomponents, each identifier includes, or is in the form of, a link to the corresponding
component or subcomponent content, allowing the user to navigate directly to the
corresponding content. Each dynamic navigation flow user interface entry thus serves as a
programmatically selected navigational shortcut to the corresponding content, allowing the
user to bypass searching for the content or having to navigate through multiple pages (or
other display combinations) of user interfaces to reach the content. This can be particularly
beneficial for computing devices with small screens, where fewer items can be displayed to
the user at a time and thus navigation of larger volumes of items is more difficult.
Another benefit of the disclosed dynamic navigation flow user interface is
that it is dynamically updated to provide progress information as the user navigates from one
item of content to another, so the user is aware of where in a module of related content the
user is. The user would otherwise typically have to toggle between different pages/display
combinations to determine how far into a module (e.g., an interactive learning module) the
user is.
Still further, certain example user interfaces disclosed herein are optimized
for relatively small, touch-screen devices, thereby facilitating access and navigation of
content on smart phones, tablets, and the like. Yet further, certain embodiments render a
scene using animated content from multiple perspectives, providing a more immersive
viewing and interaction experience.
In addition, certain user interfaces described herein may include a
complex assortment of swappable content. For example, as will be described, the user may
be enabled to swap information presented in a user interface panel from text data, to graphic
data, to video data. Advantageously, so that such swapping occurs virtually instantaneously,
the entire display combination (e.g., page) may be loaded together to a user system
(including multiple items of swappable data for a given panel). Optionally, the user
interfaces may be implemented using HTML (e.g., HTML5), loaded onto a user browser, and
may then be rendered using the browser.
Optionally, there may be one or more predefined flows. Optionally, the
user is enabled to divert from the predefined flow to create a custom flow. For example,
where the navigation flow user interface is for a training flow, a predefined flow may include
some or all of the following states: challenge, observe, focus, practice, review. By way of
further example, a predefined flow may be shorter or longer. For example, where the
navigation flow user interface is for a training flow, a predefined flow may include the
following states: challenge, observe, and practice.
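The patent does not provide code for these flows; the following is a minimal, hypothetical sketch of one way such predefined state sequences, and a user's diversion into a custom flow, might be represented. All names are assumptions for illustration only.

```python
# Hypothetical sketch: predefined training flows as ordered state lists,
# with an option for the user to divert into a custom flow.
PREDEFINED_FLOWS = {
    "full": ["challenge", "observe", "focus", "practice", "review"],
    "short": ["challenge", "observe", "practice"],
}

def build_flow(base="full", custom_states=None):
    """Return a copy of a predefined flow, or the user's custom flow if given."""
    if custom_states:
        return list(custom_states)  # user diverted from the predefined flow
    return list(PREDEFINED_FLOWS[base])

build_flow("short")  # -> ["challenge", "observe", "practice"]
```

A real implementation would likely attach content identifiers and completion state to each flow entry; this sketch only models the ordered sequence itself.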
As such, the embodiments described herein represent significant
improvements in computer-related technology.
Figure 1 illustrates an example architecture. A system 106 (which may be
a cloud-based system comprising one or more servers that are co-located and/or that are
geographically dispersed) may host one or more applications that when executed cause a
variety of the processes described herein to execute. For example, with reference to Figure
2B, the system 106 may include an operating system 208B, an animation engine 206B to
generate the animation disclosed herein, a navigation engine 204B to generate the navigation
interfaces and to respond to navigational inputs, and/or a performance measurement engine
202 to measure performance (e.g., generate, based on user inputs, one or more performance
scores, as described in greater detail herein).
Optionally, the cloud system 106 may include one or more Apache
Hadoop clusters, optionally including a Hadoop distributed file system (HDFS) and a
Hadoop MapReduce parallel processing framework. The system 106 may be configured to
process and store large amounts of data that could not be effectively processed and stored by
conventional systems. The system 106 may be configured to process and store large amounts
of structured data, unstructured data, and/or semi-structured data. The data may comprise
user or training-related data (including sound and/or image (e.g., still or video) recordings,
animation files, performance data, calendaring information, facilitator data, etc.). The
clusters may comprise master nodes (e.g., a name node, a job tracker, etc.), and slave nodes
(e.g., data nodes, task trackers, etc.). A given data node serves data over a network using the
distributed file system (e.g., HDFS) protocol. The file system may utilize a TCP/IP layer for
communication. The distributed file system may store large files across multiple data node
machines and may store copies of data on multiple hosts to ensure reliability and data
availability.
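The MapReduce pattern referenced above can be sketched in plain Python, without an actual Hadoop cluster, to show the map and reduce phases the framework parallelizes. This is an illustrative sketch only; the record layout (a `"user"` field in training records) is a hypothetical example, not something specified by the patent.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map step: emit (key, 1) pairs; here the key is a user id
    # in a hypothetical training record.
    yield (record["user"], 1)

def reduce_phase(pairs):
    # Reduce step: sum the values emitted for each key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

records = [{"user": "trainee-1"}, {"user": "trainee-2"}, {"user": "trainee-1"}]
counts = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
# counts == {"trainee-1": 2, "trainee-2": 1}
```

On a Hadoop cluster, the map calls would run on task tracker nodes near the data and the framework would shuffle the emitted pairs to the reducers; the logic per record is the same.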
With respect to the optional Hadoop implementation, other systems may
submit tasks to the job tracker, which in turn distributes the tasks to available task tracker
nodes. Optionally, the job tracker may attempt to distribute a given task to a node in
geographic proximity to the needed data. While the foregoing example refers to Hadoop
clusters and related components, other distributed platforms may optionally be used in
addition or instead to process and store data, such as large amounts of data including
structured, unstructured, and/or semi-structured data (e.g., distributed platforms utilizing
Bashreduce, Qizmt, Spark, Disco Project, etc.).
The system 106 may communicate over one or more wired and/or wireless
local and/or wide area networks (e.g., the Internet) 108 with one or more user terminals, such
as one or more optional facilitator terminals 102 and one or more subject (e.g., trainee)
terminals 104-1 … 104-N. A given terminal may optionally be a wireless mobile device
(e.g., a smart phone, tablet, wearable, virtual reality headset, augmented reality
headset, or the like), although a terminal may be connected to a network via a wired
interface. Optionally, some or all of the processes described herein may be performed using
a dedicated application (an app) downloaded to and hosted by a user terminal 104 or a
facilitator terminal 102.
Figure 2A illustrates an example user terminal 200 in the form of a tablet,
phone, laptop, or appliance. In the example illustrated in Figure 2A, the user terminal 200
includes various user input/output devices, such as a touchscreen/display 202, a microphone
204, a camera 206, physical controls 208 (e.g., a power on/off control, a volume control, a
home control, etc.), a speaker 210, and/or other user output devices. The user terminal
200 may optionally include a haptic engine 211 that provides kinesthetic communication to
the user (e.g., via vibrations or taps, which may be used to confirm a user input or to provide
a notification), an accelerometer 212 that measures acceleration in 2-3 directions, and/or a
gyroscope 214 (e.g., a 3-axis gyroscope) that measures orientation in three axes. The user
terminal 200 may be equipped with an external or integral physical keyboard, trackpad,
joystick, electronic pen, and/or other input device.
The user terminal 200 may include one or more wireless and/or wired
interfaces. For example, the user terminal 200 may include a WiFi interface 216, a Bluetooth
interface 218, a cellular interface 220, an NFC (near field communication) interface 222,
and/or one or more physical connectors 224 (e.g., a USB connector, a LIGHTNING connector,
and/or other connector). The user terminal 200 further comprises a processor device (e.g., a
microprocessor) 230, volatile memory (e.g., RAM solid state memory) and non-volatile
memory (e.g., FLASH memory), and a power management device 234.
An application (e.g., a training application) may be utilized to transmit
audible voice input received from a user (e.g., a trainee) via the microphone 204 and
digitized using an analog-to-digital converter, and video content captured via the camera 206
over a network to the system 106. The audio and video content may be stored locally on the
user terminal and/or on the system 106 for later access and playback.
User inputs (e.g., commands and/or data) may also be received by the user
terminal 104 via a keyboard, a stylus, via voice entry (provided via the microphone 204)
which may be converted to text via a voice-to-text module, or via pupil movement captured
by the camera 206. The keyboard and/or stylus may be included with the user terminal 200.
The user terminals may include a variety of sensors (e.g., sound, image,
orientation, pressure, touch, mouse, light, acceleration, pupil trackers, and/or other sensors)
configured to detect user input and interaction with the user terminals. The user terminals
may optionally include touch screens configured to display user interfaces and data and
receive user input via touch. The user terminals may include physical keyboards. The user
terminals may utilize one or more microphones 204 to receive voice data and/or commands,
and one or more speakers 210 to play audible content. The user terminals may utilize the
camera 206 to capture, record, and/or stream video (and/or still image) data (which may be
stored or streamed in association with captured audio data) to other systems, such as the
system 106. For example, the camera 206 may be a front facing camera of a phone, a
PC/laptop webcam, or other image capture device. A given user terminal may include or be
configured with media players that enable the user terminal to play video and/or audio
content, and display still images.
The user terminals may be associated with various user-types, such as
facilitators and trainees (sometimes referred to herein as “subjects”).
Data transmitted or received by the user terminal 200 or system 106 may
be secured by establishing a virtual private network (VPN) which establishes an encrypted
transmission path between the user terminal and system 106. Optionally, Secure Sockets
Layer (SSL), a secure transfer tunnel, may be used to encrypt data in transit between the user
terminal (e.g., a training app and/or browser) and the system 106. Optionally, some or all of
the information may be stored on the user terminal and/or the system 106 using file
encryption. Optionally, the encryption key may be stored physically separate from the data
being encrypted (e.g., on different physical servers).
The system 106 may be configured to generate and supply user interfaces
at different resolutions (e.g., HD (1920 by 1080 pixels), 4K (3840 by 2160 pixels), 8K (7680
by 4320 pixels), or even higher or lower resolutions).
As discussed above, Figure 2B illustrates an example software architecture
including an operating system (e.g., MICROSOFT WINDOWS, APPLE OSX, APPLE IOS,
GOOGLE ANDROID, GOOGLE CHROME, UNIX, LINUX, UBUNTU, etc.). An
animation engine 202B generates the animations illustrated in the user interfaces discussed
herein and illustrated in the figures. The animations used in the user interfaces disclosed
herein may synchronize animated characters’ movements and facial expressions, lip motions,
hand and other limb gestures, other body parts, and/or clothing with the characters’ speech
and sounds.
Optionally, animation content may be generated using motion capture of
an actor’s lips, mouth, eyes, arms, hands, legs, torso, and other body parts, performed while
recording the actor’s voice. A skeletal model of the actor’s lips, mouth, eyes, arms, hands,
legs, torso, and other body parts may be generated to capture the exact animation. Thus,
animation of characters may be provided that conveys and simulates human emotions, such
as happiness, fear, inquisitiveness, sadness, anger, aggression, and the like.
Optionally, in addition to or instead of utilizing motion capture, a user
may define key frames via an animation user interface, and specify the location of various
body components, such as lips, mouth, eyes, arms, hands, legs, torso, other body parts,
clothes, and the like. The differences in body component positions may be calculated, and
intermediate positions may be interpolated and corresponding frames may be generated.
Optionally, tweening or morphing may be utilized. For example, the following tweening
equation may be utilized for tweening via linear interpolation:
P = A(1 − t) + Bt, 0 ≤ t ≤ 1
Where A is the initial location of the point, B is the final position of the
point, and t is the time from 0 to 1.
The animation may then be rendered.
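The tweening equation above can be expressed directly in code. The following is an illustrative sketch (not from the patent) that applies P = A(1 − t) + Bt per coordinate to interpolate a keyframe point between two positions and to generate intermediate frames; the function names are hypothetical.

```python
def tween(a, b, t):
    """Linearly interpolate each coordinate of point a toward point b, 0 <= t <= 1."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must be between 0 and 1")
    # P = A(1 - t) + B*t, applied per coordinate
    return tuple(ai * (1.0 - t) + bi * t for ai, bi in zip(a, b))

def intermediate_frames(a, b, n):
    """Generate n evenly spaced in-between positions (interpolated frames)."""
    return [tween(a, b, (i + 1) / (n + 1)) for i in range(n)]

tween((0.0, 0.0), (10.0, 20.0), 0.5)  # -> (5.0, 10.0)
```

The same per-coordinate interpolation applies to any keyframed body-component position; morphing would additionally blend shapes rather than single points.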
Certain example user interfaces related to interactive content will now be
described. In this example, the interactive content is intended to train a subject in how to
respond in certain situations as a police officer. In particular, a police officer is depicted in
an interaction with a civilian. It is understood that the interactive content may be directed to
different training scenarios. As will be described, an example content module may include
briefing, practice, debriefing, evaluation, and summary modes. Techniques for reinforcing
skills are described.
A content module may be selected from a library of content modules via a
content library user interface. The content library user interface may list content modules a
subject/user is authorized to access, where a given library entry may include the name of the
module, a brief description of the module that indicates the types of interactions the user will
be expected to make (e.g., “While on routine patrol, you receive a radio call for a suspicious
male adult following female shoppers and making them uncomfortable. The business owner
has requested that the subject leave several times, but he has refused to do so. The owner has
requested you remove the subject. You must explain the situation to the subject and escort
him from the premises.”). Optionally, a snapshot/screenshot image of the scene may be
provided for display via the content library user interface to provide a visual indication as to
the module content or subject matter. Optionally, in addition or instead, video clips (e.g.,
trailer videos) may be provided for playback that provide an overview of the module content
or subject matter.
Figure 3A illustrates an example content user interface. The example user
interface comprises a content name field (“radio call-suspicious male at business”) and a
module identifier (“Module 1”) that may be indicative of where in a sequence of content
items the content about to be presented is positioned. In this example, the module may
comprise text, video, and audio content utilized to train personnel in how to interact with a
suspicious male at a business, where a business operator accessed a communication system to
report such a suspicious male.
In response to a user selecting the navigation icon at the bottom, middle of
the example user interface, a direct navigation interface (e.g., a navigation sidebar or other
navigation control) may be generated and rendered, such as that illustrated in Figure 3B. The
direct navigation interface provides various entries including components and
subcomponents of the interactive module. A user can directly navigate to a desired
component or subcomponent by activating (e.g., clicking on, touching, hovering over using a
pointing device, staring at, etc.) a given entry. Optionally, the direct navigation interface
renders entries hierarchically, wherein module components may be initially listed, and in
response to a user activating a given component, the listing expands to show the given
component’s subcomponents, indented relative to the parent component. In response to the
user activating the given component again, the list will be collapsed so that the
subcomponents are not displayed. Such a hierarchical interface provides a space efficient
and easy to navigate tool.
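The expand/collapse behavior described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the module structure and function names are hypothetical, and the indentation stands in for the visual indenting of subcomponents.

```python
def toggle(expanded, component):
    """Expand the component if collapsed, collapse it if already expanded."""
    expanded = set(expanded)
    if component in expanded:
        expanded.discard(component)
    else:
        expanded.add(component)
    return expanded

def visible_entries(tree, expanded, depth=0):
    """Flatten the module tree into the displayed list, indenting subcomponents
    of expanded components relative to their parent."""
    entries = []
    for name, children in tree:
        entries.append("  " * depth + name)
        if name in expanded:
            entries.extend(visible_entries(children, expanded, depth + 1))
    return entries

module = [("Briefing", []), ("Practice", [("Script", []), ("Key Elements", [])])]
visible_entries(module, {"Practice"})
# -> ["Briefing", "Practice", "  Script", "  Key Elements"]
```

Activating "Practice" again via `toggle` would collapse it, and `visible_entries` would then show only the top-level components.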
In response to the user activating a “?” question mark help icon, a help
information user interface is displayed, such as that example user interface illustrated in
Figure 3C. The user interface comprises a dynamic navigation flow user interface
corresponding to the selected module. In this example, the dynamic navigation flow user
interface may be rendered in an expanded mode, depicting all or substantially all the
components and subcomponents of the module. The example rendered dynamic navigation
flow user interface includes a start component, a briefing component, a practice (“skills
building”) component, an instructions component, a debriefing component, an evaluation
component, a summary component, and an end component. Other example components may be
included. For example, an “Insights” component may be included that provides background
considerations and concepts relevant to a scene. For example, optionally the insights
component may not include the spoken dialogue, but may include concerns, theories, advice,
or strategies for the user to consider.
The navigation user interface may be dynamically rendered to indicate
(e.g., via highlighting, color, animation, or otherwise) which content component is currently
being rendered to the user to advantageously indicate where in the module training process
the user is. In this example, the instructions component is emphasized (e.g., highlighted), to
indicate the user is accessing the instruction content. The component names may be
differentiated by rendering the component name using a different form of emphasis (e.g., by
rendering the component name in a specific type of geometric shape, such as a six-sided
polygon, using a different color, using a different font, underlining, and/or otherwise).
Certain components may also be rendered with descriptive text (e.g., “build skills”).
Optionally, components displayed in the user interface may also serve as navigation shortcuts
to other areas of the module.
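The dynamic emphasis of the currently rendered component can be sketched minimally as below. This is an illustrative sketch only; bracket markers stand in for the highlighting, color, or shape emphasis the interface would actually apply, and the component names are taken from the example above.

```python
def render_flow(components, current):
    """Return the flow entries with the currently rendered component marked.
    Brackets stand in for visual emphasis (highlight, color, shape, etc.)."""
    return [f"[{name}]" if name == current else name for name in components]

render_flow(["Start", "Briefing", "Instructions", "Practice"], "Instructions")
# -> ["Start", "Briefing", "[Instructions]", "Practice"]
```

As the user navigates from one component to the next, re-rendering the flow with the new `current` value keeps the progress indication up to date.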
In this example, the practice (“skills building”) component of the dynamic
navigation flow user interface is expanded to show subcomponents, including script, key
elements, role model, and challenge components. Other components or sets of components
may be included. As will be described in greater detail herein, the script subcomponent renders
the script for one or more videos in a set of videos. For example, where there is a video set
including multiple videos of the same scene including multiple speakers in a verbal
exchange, where a given video is from the perspective of the current speaker, the script
optionally includes the speech for each speaker in the set, including an identifier (e.g., title,
name, etc.) associated with the speaker. The challenge subcomponent comprises a video
displaying a character making a statement or question to which another character is to
respond (the character that the user is being trained to emulate, such as a police officer in this
example). The key elements subcomponent displays key or main ideas that are to be used in
responding to the challenge (e.g., a statement or question made by a character in a video).
The role model user interface displays a video of the character that the user is being trained
to emulate stating a response to the challenge. The practice component may be repeated
multiple times, as indicated by the navigation user interface, with an arrow bridging brackets
visually defining the practice component. By way of further example, some or all of the
following may be provided: a challenge video, a role model video, a script, key elements,
insights, a recording panel, and/or a help panel.
Optionally, in response to the user hovering over or touching a video
playback area (e.g., with a cursor, pointing device, etc.), additional controls may be provided,
such as a scrubber control, a volume control, and an expand control. The scrubber control
may enable the user to select a point in the video to view or to begin playing from. The
expand control may enable the user to cause the video to be displayed in full screen mode.
In this example, the evaluation component of the dynamic navigation flow
user interface is expanded to indicate it comprises challenge and scoring subcomponents.
The scoring subcomponent is further expanded to indicate it includes further subcomponents
(sub-subcomponents), including script, key elements, role model, and challenge
subcomponents. The evaluation component may be repeated multiple times, as indicated by
the navigation user interface, with an arrow bridging brackets visually defining the
evaluation component. In this example, other than the scoring itself, the evaluation
component optionally includes the same subcomponents as the practice component.
Optionally, the text in the navigation user interface may comprise links,
wherein in response to the user activating a given link (e.g., comprising text that describes or
identifies a corresponding content component or subcomponent), the user is automatically
navigated to a user interface comprising the corresponding component or subcomponent,
thereby providing quick, efficient, and accurate navigation.
An instruction video may be presented that optionally includes an
animated person providing verbal instructions, where the corresponding voice track is
synchronized with the animated person’s lip motions, facial expressions, hand or other limb
motions, etc., as described elsewhere herein.
In response to the user activating a forward control, the user may be
navigated to an initial content component (the briefing component in this example), such as
that illustrated in Figure 3D. In this example, the dynamic navigation flow user interface is
updated to be displayed in a simple mode (without all the components expanded) and with
the currently rendered component (“briefing”) emphasized (e.g., highlighted). Optionally, an
expand control is provided which when activated by the user causes the dynamic navigation
flow user interface to be re-rendered in expanded mode.
Referring again to Figure 3D, a text area may be rendered that displays
text (e.g., in bullet form). The rendered text in this example provides an overview of a
scenario corresponding to multi-perspective animated content that will be displayed (e.g.,
“Routine patrol; Suspicious male adult following female shoppers; Owner has requested the
subject leave several times; Owner has requested Officer remove the subject”).
In this example, a corresponding animation (e.g., a briefing video) is
rendered (including corresponding audio) that explains visually and/or audibly a scenario that
the user will be trained to respond to. The rendered animation may include a character
corresponding to a real world type of person that would give such a briefing in real world
circumstances (e.g., to further add authenticity to the training process). In the illustrated
example, the animated character is a police captain.
As discussed above, the audio track may be synchronized with facial
expressions and with the movements of various body parts (e.g., lips, mouth, eyes, arms,
hands, legs, torso, other body parts, and/or clothing, etc.). When the briefing animation is
complete, the user may be automatically navigated to the next content component, reducing
the need for user navigation inputs. Optionally, in addition or instead, a control is provided
via which the user may manually navigate to the next item of content (a practice component
as illustrated in Figure 3E in this example) via a next/forward control to provide the user
with more control of the process and content experience. A reverse/previous control is
provided to enable the user to navigate to a previous interface. Advantageously, the reverse
control of the user interface is on the left hand side so that it is easily accessible and
reachable by a left user thumb when displayed on a handheld device with a relatively small
touch screen, such as a phone or tablet computer. Similarly, the forward control is on the
right hand side so that it is easily accessible and reachable by a right user thumb when
displayed on a handheld device with a touch screen. Thus, the user interface is optionally
optimized for use on handheld devices, with relatively small (e.g., 2 to 10 inches in the
diagonal, 5 to 30 square inches, etc.) touch displays.
Referring to Figure 3E, the rendered dynamic navigation flow user
interface now emphasizes (e.g., via highlighting) the practice component and has removed
the emphasis from the briefing component. The dynamic navigation flow user interface is
optionally rendered with the entry for the practice component (“build skill”) expanded to
indicate its subcomponents (e.g., script, key elements, practice, role model, challenge), while
other components are not expanded, making efficient use of display real estate and better
ensuring that the navigation user interface is not confusing to the user. Optionally the repeat
indicator indicates that there are 3 challenges, and the user is currently about to view the first
challenge (“1 of 3”), thereby advantageously enabling the user to know where the user is
with respect to the training process embodied in the module. In response to detecting the
user viewing the second challenge, the repeat indicator would be dynamically re-rendered
and updated to state “2 of 3”, and so on. The animation/video player may play a
conversation from multiple perspectives (e.g., that of a police officer, the suspicious male the
officer is interacting with, etc.).
The user may activate a challenge control (e.g., a speech bubble icon), and
the animation of the person with whom the officer is interacting may be rendered and
may “speak” his script, which may include “challenges” which the officer needs to respond
to, as illustrated in Figure 3F. The user may audibly speak responses to the challenges as
practice and to reinforce the training. As noted above, the user’s responses may be recorded
via the camera and microphone (e.g., in response to activation of a record control optionally
included in the user interface of Figure 3F) so that the user or facilitator may later review the user’s
performance. Activation of a role model control (e.g., a police officer icon) may cause the
animation of the officer to be rendered and to speak role model answers that the user may
emulate.
In response to the user activating a key elements control (e.g., in the shape
of a key icon), key elements spoken by the police officer are rendered textually (e.g., in bullet
format) in real time, as illustrated in Figure 3G. The user may activate the play control, and
in response, the animation video of the police officer will repeat the script, including the key
elements. Advantageously, the animation is displayed over the forward control as the user’s
eyes will tend to be focused on the animation, and hence will be able to quickly locate the
forward control (which may be used more often than the reverse control). The dynamic
navigation flow user interface may be re-rendered and updated with the practice, key
elements, and role model entries emphasized.
The user may activate a script control (e.g., an icon of multiple lines
corresponding to text), and the script of the officer and the person with whom the officer is
interacting may be displayed as illustrated in Figure 3H. Certain portions of the script text
corresponding to key elements may be emphasized (e.g., via color, font, highlighting,
animation, bolding, underlining, or otherwise).
Some or all of the foregoing key element, script, challenges and role
model controls may be displayed on each user interface of the practice component, providing
quick and efficient access to the corresponding content, and further enabling the user to
access and consume content in a manner that is most effective for the user.
Once the user has completed the practice portion, the debriefing user
interface may be rendered, as illustrated in Figure 3I. For example, the user may access the
debriefing user interface by completing all the practice challenges or by navigating to the
debriefing user interface (e.g., via the dynamic navigation flow user interface, or by
activating the forward or reverse controls one or more times) from the practice component or
other component. The dynamic navigation flow user interface is now rendered with the
debriefing component entry highlighted. The debriefing user interface optionally provides
both an animation of a character verbally reviewing what the user has learned and text
providing in an ordered, numbered format a textual review of what the user has learned. The
combination of an animation, audible speech, and text further reinforces the training content.
In addition or instead, the debriefing user interface optionally provides a video of a character
and/or text describing possible next steps in the encounter, required follow-up actions, and/or
other instructions to complete the scenario.
When the debriefing animation is complete, the user may be automatically
navigated to the next content component, reducing the need for user navigation inputs.
Optionally, in addition or instead, a control is provided via which the user may manually
navigate to the next item of content (an instruction subcomponent of an evaluation
component as illustrated in Figure 3J in this example) via a next/forward control to provide the
user with more control of the process and content experience. Optionally, the user may
navigate to the instruction subcomponent using the dynamic navigation flow user interface,
by activating a corresponding text/link.
Referring to Figure 3J, in this example the evaluation component of the
dynamic navigation flow user interface is expanded to indicate it comprises instruction,
challenge, and scoring subcomponents (with the scoring subcomponent expanded further to
indicate it includes script, key elements, role model, and challenge subcomponents). While
the instruction subcomponent is being rendered, the instruction component entry of the
dynamic navigation flow user interface is emphasized. The scoring subcomponent is further
expanded to indicate it includes further subcomponents (sub-subcomponents), including
script, key elements, role model, and challenge components. The evaluation component may
be repeated multiple times, as indicated by the navigation user interface, with an arrow
bridging brackets visually defining the evaluation component.
An evaluation instruction video may be presented that optionally includes
an animated person providing verbal instructions, where the corresponding voice track is
synchronized with the animated person’s lip motions, facial expressions, hand or other limb
motions, etc., as described elsewhere herein. Corresponding instruction text may also be
rendered alongside the video. For example, the instructions may state: “Now that you have
completed the VPEC Practice Session, you will be required to demonstrate your skills in a
Scored Evaluation. Respond to each Challenge as you did while practicing, accurately
conveying the Key Elements and maintaining an appropriate level of Command Presence. A
Scoring Summary will be provided at the end of the Evaluation. Click the glowing Right
Arrow to proceed to the Scored Evaluation.”
When the instruction animation is complete, the user may be automatically
navigated to the next content component, reducing the need for user navigation inputs.
Optionally, in addition or instead, a control is provided via which the user may manually
navigate to the next item of content (the challenge subcomponent as illustrated in Figure 3K
in this example) via next/forward control to provide the user with more control of the process
and content experience. Optionally, the user may navigate to the challenge subcomponent
using the dynamic navigation flow user interface, by activating a corresponding text/link.
Optionally, the instruction component entry in the dynamic navigation flow user interface is
no longer emphasized or displayed, and the challenge component entry is emphasized. The
animation of the person with whom the officer is interacting may be rendered and may
“speak” his script, which may include “challenges” which the officer needs to respond to.
The user may audibly speak responses to the challenges and the user may be scored on the
user response as discussed elsewhere herein. Optionally, the user’s response may be
recorded by the terminal camera and microphone for later access by the user or by a
facilitator.
Once the user has responded to the challenge, the scoring user interface
illustrated in Figure 3L may be rendered. Optionally, the challenge component entry in the
dynamic navigation flow user interface is no longer emphasized, and the scoring component
entry is emphasized. The scoring interface may be rendered on the user terminal display
and/or on a separate facilitator display. The scoring may be manually entered by the user
and/or the facilitator. For example, the user may be scored (e.g., using a grade, number, or a
correct/incorrect indication) on the accuracy of the user’s response. By way of illustration,
model answer language text may be rendered in conjunction with a scoring input interface to
facilitate scoring. For example, a checkbox may be rendered configured to display a check
indicating a correct answer in response to a user clicking on or touching the checkbox. The
user may be scored on each key element of the model answer. Thus, if the user included two
of the listed model answer elements in the response, but not a third element, the user may
receive a score of 2 out of 3, and the user interface may be automatically updated in real time
to indicate how many elements the user correctly stated (e.g., “0 out of 3”, “1 out of
3”, “2 out of 3”, etc.).
Optionally, the user may also be scored (e.g., using a grade or number
score) on the delivery style (e.g., command presence) of the user’s response. For example,
the user may receive a score of zero if the user was too aggressive in the response, a score of
2 if the user had proper command presence (e.g., confident and firm but not overbearing or
overly aggressive), and a score of zero if the user lacked command presence (e.g., seemed
insecure or uncertain).
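The scoring scheme described above (a per-key-element accuracy tally plus a 0/2 delivery-style score) can be sketched as follows. This is an illustrative sketch only; the function names and the "proper"/"aggressive"/"lacking" labels are assumptions, not part of the specification:

```python
def score_accuracy(model_elements, elements_stated):
    """Count how many key elements of the model answer the user stated."""
    hit = sum(1 for e in model_elements if e in elements_stated)
    # A (hit, total) pair of (2, 3) would render as "2 out of 3".
    return hit, len(model_elements)

def score_delivery(style):
    """Map the facilitator's delivery-style judgment to a numeric score."""
    # Proper command presence earns 2; too aggressive or lacking presence earns 0.
    return {"proper": 2, "aggressive": 0, "lacking": 0}[style]
```

A facilitator interface could call these as checkboxes and style judgments are entered, updating the displayed tally in real time.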
Advantageously, the optional scoring user interface optionally includes
controls to access the key elements user interface, the script user interface, the challenge user
interface, and the role model response for the current challenge, enabling the user to further
review the foregoing even during an evaluation and scoring phase.
Optionally, a control (e.g., a single button) is provided that when activated
causes a practice section for the current challenge to be rendered as a pop-up interface (e.g.,
overlaying the scoring user interface). The user can then access all of the corresponding
content from the pop-up interface, and utilize the various content and controls as desired.
When the user closes the pop-up interface (e.g., by selecting a close control), the scoring
user interface is displayed at the same status as when the pop-up interface was originally
rendered.
Once the scoring is complete for all the challenges, the scoring summary
user interface illustrated in Figure 3M may be presented. For example, the challenges may
have included an initial interaction challenge, a rebuttals challenge, and a refusal challenge.
The user may have been scored for accuracy and delivery style (e.g., command presence) on
the response to each of those challenges using an interface similar to that of Figure 3L.
Based on the scores, a percent score may be calculated and rendered in association with text
describing the corresponding challenge. For example, if the user received a zero with respect
to accuracy for a challenge, the user may receive a score of zero percent. If the user
responded correctly to two out of three challenges, the user may receive a score of 66.7%.
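The percentage calculation described above can be illustrated as follows (a minimal sketch; the function name is an assumption):

```python
def percent_score(correct, total):
    """Render a challenge score as a percentage string, e.g. 2 of 3 -> '66.7%'."""
    return f"{100 * correct / total:.1f}%"
```

For instance, responding correctly to two of three challenges yields `percent_score(2, 3)`, i.e. 66.7%, matching the summary described above.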
Figures 2C and 2D illustrate an example content navigation and content
rendering process. At block 202C, a module selection may be received (e.g., from a terminal
being operated by a user, such as a trainee and/or a facilitator/trainer). For example, the
module may be a training module comprising video, audio, textual, and/or graphic content.
At block 204C, an initial user interface may be rendered. The interface may provide content
comprising the module name and one or more controls, such as a navigation request control
(see, e.g., Figure 3A). At block 206C, an activation of the navigation request control is
received. At block 208C, a text-based hierarchical navigation interface is rendered (see, e.g.,
Figure 3B).
In response to the user activating a control (e.g., a help control), at block
210C an instruction/help user interface comprising a dynamic navigation flow user interface
and multimedia (e.g., an instruction audio/video presentation and corresponding text) is
rendered (see, e.g., Figure 3C). As similarly discussed above, the dynamic navigation flow
user interface may optionally be rendered in an expanded form with all or certain
components expanded to show subcomponents and/or sub-subcomponents. An entry
corresponding to the instruction user interface may be highlighted in the dynamic navigation
flow user interface.
In response to the user activating a control (e.g., a next/forward control) or
activation of the briefing component entry in the dynamic navigation flow user interface, at
block 212C a briefing user interface comprising a re-rendered and updated dynamic
navigation flow user interface and multimedia (e.g., a briefing audio/video presentation and
corresponding text) is rendered (see, e.g., Figure 3D). As similarly discussed above, the
dynamic navigation flow user interface may optionally be re-rendered in a simple form with
components un-expanded to conceal subcomponents and/or sub-subcomponents. An entry
corresponding to the briefing user interface may be highlighted in the dynamic navigation
flow user interface.
In response to the user activating a control (e.g., a next/forward control) or
activation of the challenge component entry in the dynamic navigation flow user interface, at
block 214C a challenge user interface comprising a re-rendered and updated dynamic
navigation flow user interface and multimedia (e.g., a challenge audio/video presentation and
corresponding text) is rendered (see, e.g., Figure 3E). As similarly discussed above, the
dynamic navigation flow user interface may optionally be re-rendered and updated with the
practice/build skills component expanded to show sub-components, with other components
un-expanded to conceal subcomponents and/or sub-subcomponents. An entry corresponding
to the challenge user interface may be highlighted in the dynamic navigation flow user
interface. In addition, where there are multiple challenges in the practice component, the
system may determine what challenge the user is viewing and how many challenges are
included in the practice component, and the dynamic navigation flow user interface may be
rendered to indicate which challenge the user is viewing and how many total practice
challenges there are (e.g., “1 of 3”). When the user accesses the next challenge (e.g., by
activating a forward control or via the dynamic navigation flow user interface), the
dynamic navigation flow user interface may be re-rendered to identify the current challenge
(e.g., “2 of 3”).
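The repeat-indicator behavior described above can be sketched as follows; the class and method names are illustrative assumptions rather than part of the specification:

```python
def repeat_indicator(current, total):
    """Label rendered in the dynamic navigation flow, e.g. '1 of 3'."""
    return f"{current} of {total}"

class ChallengeTracker:
    """Tracks which practice challenge the user is currently viewing."""
    def __init__(self, total):
        self.total = total
        self.current = 1  # the user starts on the first challenge

    def advance(self):
        """Move to the next challenge (if any) and return the updated label."""
        if self.current < self.total:
            self.current += 1
        return repeat_indicator(self.current, self.total)
```

When the user activates the forward control, the interface would call `advance()` and re-render the indicator, e.g. from "1 of 3" to "2 of 3".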
At block 216C, in response to detecting activation of the script control, a
script user interface may be rendered (see, e.g., Figure 3H) including text of the script spoken
by one or more characters in the challenge video (e.g., the police officer), and the text may be
rendered with key element words emphasized (e.g., using color, font, highlighting, bolding,
underlining, animation, etc.) relative to surrounding text. The dynamic navigation flow user
interface may be re-rendered and updated with the practice and script entries emphasized.
At block 218C, in response to detecting activation of the challenge control,
the previously viewed challenge may be played again via a challenge user interface. The
dynamic navigation flow user interface may accordingly be re-rendered and updated.
At block 220C, in response to detecting activation of the key elements
control, a key elements user interface may be rendered (see, e.g., Figure 3G). Key elements
spoken by a character (e.g., the police officer) may be rendered textually (e.g., in bullet
format), and a video user interface may be displayed including an initial or title frame. The
user may activate a play control, and in response, the animation video of the character will
repeat a role model script, including the key elements. The dynamic navigation flow user
interface may be re-rendered and updated with the practice, key elements, and role model
entries emphasized.
Thus, during the practice mode, the user may access a video of a role
model response, an interface providing key elements of the role model response in bullet
format, and a script of the role model response with words corresponding to the key elements
visually emphasized, thereby further reinforcing challenge response concepts and language.
Optionally, the interface includes an insights interface, a recording panel interface, and a help
interface.
At block 222C, a debriefing user interface is rendered (see, e.g., Figure
3I). The user may access the debriefing user interface by completing all the practice
challenges or by navigating to the debriefing user interface (e.g., via the dynamic navigation
flow user interface, or by activating the forward or reverse controls one or more times) from
the practice component or other component. The dynamic navigation flow user interface is
re-rendered and updated with the debriefing component entry highlighted. The debriefing
user interface optionally provides both an animation of a character verbally reviewing what
the user has learned and text providing in an ordered, numbered format a textual review of
what the user has learned. In addition or instead, the debriefing user interface optionally
provides a video of a character and/or text describing possible next steps in the encounter,
required follow-up actions, and/or other instructions to complete the scenario.
Referring to Figure 2D, an example evaluation is illustrated. At block
202D, an instruction user interface comprising a re-rendered and updated dynamic navigation
flow user interface and multimedia (e.g., an instruction video presentation and
corresponding text) is rendered (see, e.g., Figure 3J). As similarly discussed above, the
dynamic navigation flow user interface may optionally be rendered with the evaluation
component entry expanded to show subcomponents and/or sub-subcomponents. An entry
corresponding to the evaluation component instruction user interface may be highlighted in
the dynamic navigation flow user interface. The evaluation component may be repeated
multiple times with multiple challenges, with an arrow bridging brackets visually defining
the evaluation component.
At block 204D, in response to the user activating a control (e.g., a
next/forward control) or activation of the challenge component entry in the dynamic
navigation flow user interface, a challenge user interface comprising a re-rendered and
updated dynamic navigation flow user interface and a video of a character audibly stating a
challenge is provided (see, e.g., Figure 3K). The challenge may be the same as a challenge
presented during the practice mode. The user responds to the challenge. The response
should include key elements from the role model response during the practice mode.
Once the user has responded to the challenge, and in response to the user
activating a control (e.g., a next/forward control) or activation of the challenge component
entry in the dynamic navigation flow user interface, at block 206D, a scoring user interface is
rendered comprising a re-rendered and updated dynamic navigation flow user interface and
scoring fields and controls comprising key elements (see, e.g., Figure 3L). The challenge
component entry in the re-rendered, updated, dynamic navigation flow user interface is no
longer emphasized, and the scoring component entry is emphasized. As discussed above,
during the practice mode, the user may access a video of a role model response, an interface
providing key elements of the role model response in bullet format, and a script of the role
model response with words corresponding to the key elements visually emphasized. The
user is to respond to the challenge using the corresponding key elements learned during the
practice phase.
The scoring interface may be rendered on the user terminal display and/or
on a separate facilitator display. Optionally, the scoring may be manually entered by the user
and/or the facilitator. For example, the user may be scored on the accuracy of the user’s
response with respect to the inclusion of key elements. By way of illustration, model answer
language text may be rendered in conjunction with a scoring input interface to facilitate
scoring. Optionally, the user may be scored on each element of the model answer. Thus, if
the user included two of the listed model answer elements in the response, but not a third
element, the user may receive a score of 2 out of 3, and the user interface may be
automatically updated in real time to indicate how many elements the user correctly
stated. As discussed above, the user may also be scored (e.g., using a grade or number score)
on the delivery style (e.g., command presence) of the user’s response. For example, the user
may receive a score of zero if the user was too aggressive in the response, a score of 2 if the
user had proper command presence (e.g., confident and firm but not overbearing or overly
aggressive), and a score of zero if the user lacked command presence (e.g., seemed insecure
or uncertain).
Blocks 204D and 206D may optionally be repeated until the process
detects that all the evaluation challenges have been shown and scored.
At block 208D, once the scoring is complete for all the challenges in the
evaluation component, a scoring summary may be automatically generated and rendered via
a scoring summary user interface (see, e.g., Figure 3M). For example, the challenges may
have included an initial interaction challenge, a rebuttals challenge, and a refusal challenge.
Based on the scores, a percent score may be calculated and rendered in association with text
describing the corresponding challenge.
Certain additional example user interfaces will now be described with
reference to Figures 4A-4G.
In the illustrated examples, the content displayed in a given panel/area
may change depending on the state the user is in in a learning process flow. For example,
with reference to Table 1 below, certain user interfaces may contain a left panel and a right
panel, where the content displayed in a given panel may automatically change based on the
current state in the flow. Advantageously, all the content for each panel may optionally be
downloaded to and resident in memory on the user device, so that as the user navigates
through the flow, the change in content in a given panel may take place instantaneously.
STATE | LEFT PANEL | RIGHT PANEL
CHALLENGE | CHALLENGE VIDEO | INSIGHTS (provides additional “thoughts” on the scenario to the learner)
OBSERVE | ROLE MODEL VIDEO | KEY ELEMENTS
PRACTICE | CHALLENGE VIDEO | RECORDING PANEL (enables user to self-record and playback user responses to challenge)
TABLE 1
It is understood that Table 1 corresponds to one example configuration.
Table 2 below corresponds to another example configuration. However, these examples are
non-limiting and other configurations may be used.
STATE | LEFT PANEL | RIGHT PANEL
CHALLENGE | CHALLENGE VIDEO | BLANK
OBSERVE | ROLE MODEL VIDEO | FULL SCRIPT
PRACTICE | CHALLENGE VIDEO | RECORDING PANEL (enables user to self-record and playback user responses to challenge)
REVIEW | CHALLENGE VIDEO | KEY ELEMENTS
TABLE 2
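The state-to-panel configuration of Table 2 can be sketched as a simple lookup. Because all panel content may already be resident in device memory, a state change reduces to a dictionary lookup and the panel swap can be effectively instantaneous. The names below are illustrative assumptions:

```python
# Panel contents keyed by learning-flow state, mirroring Table 2.
PANELS = {
    "CHALLENGE": ("CHALLENGE VIDEO", "BLANK"),
    "OBSERVE":   ("ROLE MODEL VIDEO", "FULL SCRIPT"),
    "PRACTICE":  ("CHALLENGE VIDEO", "RECORDING PANEL"),
    "REVIEW":    ("CHALLENGE VIDEO", "KEY ELEMENTS"),
}

def panels_for(state):
    """Return the (left_panel, right_panel) contents for the current flow state."""
    return PANELS[state]
```

A different configuration (such as Table 1) would simply substitute a different mapping, consistent with the note above that these examples are non-limiting.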
Referring to Figure 4A, an example user interface (a training user interface
for learning skills in this example) is illustrated. The example user interface provides a wide
range of features, capabilities, and flexibility. In this example, the user interface may present
some or all of the following: video content comprising challenges, video content of role
model responses, instructional text, recording and playback controls that enable the user to record
and view the user’s own responses to challenges, a pre-programmed learning path/process, a
dynamic navigation flow user interface that corresponds to the learning path/process, and
controls that enable the user to customize the presentation in a way that is more satisfactory
for the user.
In contrast to rigid conventional eLearning systems, which rely upon static
pages and learning/training flows, the systems and methods described herein enable a user to
explore the content in various ways to allow the user to access content and perform training
in a way that best suits the user. Further, the systems and methods described herein facilitate
repeated user practice on the relevant subject matter, in addition to reading text and watching
videos. For example, the example training user interface illustrated in Figure 4A enables the
user to practice realistic communication in a safe and non-embarrassing way, allows the user
to record practice sessions, and enables the user to self-evaluate her own performance. The
training user interface, as well as other user interfaces described herein, enable the user to
improve the user’s performance.
The example training user interface includes the following areas: a topic
area 426A, a dynamic navigation flow user interface 408A, a video area 409A for displaying
prerecorded video via a video player, a text area 405A (see, also Figure 4E), and a learning
step interface 420A. Each of these user interface areas and associated controls will now be
described.
The topic area 426A is displayed in the upper left corner of the user
interface. The topic area 426A includes the name of the current module and the specific
learning object within the current module. Optionally, the topic area 426A does not include
navigation controls. An expanded view of the topic area 426A is illustrated in Figure 4B.
The dynamic navigation flow user interface 408A (see, also, Figure 4C)
corresponds to the selected module, where the dynamic navigation flow user interface 408A
may indicate a user’s progress through the module flow. Certain flow states may also be
rendered with descriptive text (e.g., “build skills”, “scoring”, etc.).
In this example, the dynamic navigation flow user interface 408A includes
a “start” state, a “briefing” state, a “training” (build skills) state, an “assessment” (scoring)
state, a “summary” state, and an “end” state. A repeat indicator displayed in association with
the training state (1 of 6) indicates that there are 6 training presentations/sessions. Another
repeat indicator is displayed in association with the assessment state.
The navigation flow user interface 408A may be dynamically rendered to
indicate (e.g., via highlighting, color, animation, or otherwise) which flow state is currently
being displayed to the user to advantageously indicate where the user is in the module
training process. In this example, the training state is emphasized (e.g., highlighted), to
indicate the user is accessing the training content. The states may be indicated by rendering
the state name using a different form of emphasis (e.g., by rendering the state name in a
specific type of geometric shape, such as a six-sided polygon, using a different color, using a
different font, underlining, and/or otherwise).
The dynamic navigation flow user interface 408A optionally enables the
user to navigate directly to a specific module section/display combination (e.g., page) by
clicking on or touching the corresponding section/display combination in the dynamic
navigation flow user interface 408A. Optionally, for multi-step items (e.g., a practice
section), a menu (e.g., a dropdown menu) is rendered with entries for each of the module
challenges. The menu enables a user to jump to a specific desired challenge by selecting the
desired challenge. Thus, for example, if the user is currently on the first challenge (e.g., “1
of 6” challenges), the user may select and jump to any of the other five challenges via the
menu.
Discrete module navigation controls are provided that enable a user to
manually navigate through module documents/display combinations (e.g., pages). In this
example, a next page control 410A is provided which when activated causes the application to
navigate to the next module display combination (e.g., the next step in the flow). A previous
display combination control 411A is provided which when activated causes the application to
navigate to the previous module display combination (e.g., the previous step in the flow).
For example, activation of the next display combination control 410A may navigate the user
to the next learning object training section/display combination in the module, or to an
assessment section/display combination if the user is currently on the last learning object.
The previous display combination control 411A may navigate the user back to the previous
learning object training section/display combination, or back to a briefing section/display
combination if the user is currently on the first learning object.
An optional menu control 403A is provided that, when activated, causes a
menu interface to be displayed. The menu interface optionally includes an indented outline
of the module contents. The menu entries may be linked to other content, such as a
corresponding module section/display combination or learning object. Activating an entry in
the outline (e.g., by touching or clicking on the entry) navigates the user to the corresponding
section/display combination in the module, or to a specific learning object within the module.
The video area 409A (illustrated in greater detail in Figure 4D) may play,
via a video player, prerecorded video content (which may include an audio track) such as a
challenge video (e.g., a real or animated person speaking one or more questions or
statements), or a role model video (e.g., a real or animated person speaking a best practice
response to a challenge). Optionally, when the displayed video changes from a challenge
video to a role model video (or vice versa), the new video is automatically played.
Optionally, a play control is provided, which the user can manually activate to cause the
current video to be played or paused.
Activation of a challenge control 428A causes the challenge video to be
loaded into and played by the video player in the video area 409A. Activation of a role
model control 430A causes the role model video to be loaded into and played by the video
player in the video area 409A.
When a video is loaded into the video player in the video area 409A, the
corresponding control (e.g., the challenge control 428A or the role model control 430A) may
be dynamically emphasized (e.g., via animation, a color outline, or otherwise) to indicate
which type of video (e.g., a challenge video or a role model video) is being displayed, thereby
reducing user confusion.
The text area 405A may display text data (e.g., in the right side of the
training user interface illustrated in Figure 4A). An example text area interface is displayed
in Figure 4E. The text may be displayed in association with icons or other graphics. For
example, the text area may be used to display control icons (e.g., recording controls) and
corresponding text explaining the functionality and/or use of the control. By way of further
example, the text area may display a full script of a role model response or only key elements
(e.g., key concepts in numbered or bulleted form) of the role model response. The text area
panel may be sized to be larger than the other interfaces of the training user interface to better
focus the user on the text area content and to make the text easier to read.
With respect to Figures 4A and 4E, controls may be provided via which
the user can control what text and related information is displayed in the text area. For
example, activation of a script control 412A may cause a text transcript of the role model
response to be displayed. Optionally, activation of the script control 412A may cause a text
transcript of the role model response to be displayed with the key elements that are spoken in
the role model response highlighted (e.g., using color, outlines, etc.), indicating how the key
elements are incorporated into the role model response.
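The described highlighting of key elements within the transcript might be sketched as follows (hypothetical names; markers stand in for color or outline emphasis):

```python
# Hypothetical sketch: wrap each key-element phrase found in the role model
# transcript with markers, standing in for the visual highlighting (color,
# outlines, etc.) described above.
def highlight_key_elements(transcript, key_elements, start="[", end="]"):
    for phrase in key_elements:
        transcript = transcript.replace(phrase, start + phrase + end)
    return transcript
```

For example, `highlight_key_elements("please step out of the vehicle now", ["step out of the vehicle"])` would mark the key phrase in place while leaving the surrounding contextual language untouched.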
Optionally, activation of the key elements control 414A causes the text
corresponding to the key elements spoken within the role model response to be displayed
without surrounding text (e.g., in numbered or bullet format). Activation of the
record/playback control 416A may cause the more detailed
record/playback/pause/cam/clear/microphone controls 402A to be displayed that enable the
user to create and playback an audio and/or video recording of the user. This functionality
enables the user to review the user’s own tone, expressions, body language, or the
completeness and accuracy of the user’s response, thereby enabling the user to improve his or
her performance. Thus, the user may elect, at the user’s discretion, to record and review the
user’s own performance.
Activation of the help control 418A causes a listing of some or all controls
to be displayed in association with a brief description of their functionality (see, e.g., the corresponding figure).
Optionally, the user may be enabled to clear the text area 405A (e.g., by
re-clicking on the control for the current content).
Thus, the foregoing controls enable the user to select, among various
options, the content displayed in the video area and the content displayed in the text area.
The functions of the record/playback/pause/cam/clear/microphone
controls/indicators 402A will now be described. The controls/indicators are positioned
closely together. The indicator text is optionally sized larger than the control text to make
the indicators more easily read, resulting in less user confusion. In the illustrated example, the
top segment text (in a recording state indicator) is re-rendered as the recording mode changes
to reflect the current mode (e.g., standby, recording, playback).
The recording mode may default to a standby mode (with the state
indicator indicating a standby state), and may return to the standby mode when exiting record
or playback modes. When in a standby mode, the camera and/or microphone of the user
device are turned on and active, but are not recording video or audio. When the camera is
active, the live streaming view of the camera may be presented in real time via the video
display area 404A. When the recording control is activated, the camera and/or microphone
of the user device are on and active, and are recording video or audio. Optionally, the video
may also be live streamed and displayed in the video display area during the recording
process. Optionally, the video may be displayed in a full screen mode. When the playback
control is activated, the video display area 404A will display the playback of the recording of
the user, and the audio track will be played back via the user device speakers.
A status indicator may be provided that indicates (e.g., via a change in
color, icon, and/or text) whether the camera/microphone is in standby mode, record mode, or
playback mode. In addition, or instead, the color of the controls may themselves be changed
when activated or deactivated. Figure 4F illustrates example display states of the controls
402A (standby 402F, recording 404F, playback 406F). The controls/indicators are optionally
in the form of a multi-segment, circular control set to provide dense, yet easy to access and
well organized record and playback related controls.
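The three recording modes described above (standby, recording, playback) can be sketched as a small state machine (a minimal sketch with hypothetical names; the disclosure does not mandate any particular implementation):

```python
# Hypothetical sketch of the recording modes: the control set defaults to
# standby (camera/microphone live but not capturing), and returns to standby
# when record or playback modes are exited.
class RecordingControls:
    MODES = ("standby", "recording", "playback")

    def __init__(self):
        self.mode = "standby"   # default mode on entry

    def record(self):
        # camera/microphone active and capturing video/audio
        self.mode = "recording"

    def playback(self):
        # play the user's recording back in the video display area
        self.mode = "playback"

    def stop(self):
        # exiting record or playback returns to the standby mode
        self.mode = "standby"

    def is_capturing(self):
        return self.mode == "recording"
```

The state indicator and control colors described above would be re-rendered from `mode` on each transition.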
Other segments of the record/playback/pause/cam/clear/microphone
controls/indicators 402A can be used during the user’s training session to perform various
functions. The cam control is configured to turn the user’s camera on and off. Activation of
the clear control causes the previously recorded video(s) of the user to be erased, enabling the
user to start over (e.g., re-record user responses to challenges). Optionally, when a user
records the user responses to challenges, the recording is automatically erased when the user
navigates to another challenge. This technique both ensures user privacy (as there will not be
permanent recordings of the user), and reduces memory utilization for such recordings.
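The automatic-erase behavior might be sketched as follows (hypothetical names; a minimal sketch, not the disclosed implementation):

```python
# Hypothetical sketch: a user's practice recording is erased automatically
# when the user navigates to another challenge, preserving privacy and
# reducing memory utilization for such recordings.
class PracticeSession:
    def __init__(self):
        self.challenge = None
        self.recording = None

    def record_response(self, data):
        self.recording = data

    def navigate_to(self, challenge):
        if challenge != self.challenge:
            self.recording = None   # auto-erase on navigation
        self.challenge = challenge
```

Activating the clear control would have the same effect as the auto-erase, setting `recording` back to `None` without changing the challenge.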
Activation of the send control enables the video to be stored on the user’s
device in a file or location of the user’s choice. The user may then transmit (e.g., via email,
messaging service, or otherwise) the recording to a destination entered or selected by the
user. The mic segment is an indicator that is configured to indicate (via color, flashing, or
otherwise) when the microphone is on.
Turning to the learning step interface 420A, the learning step interface
420A enables the user to linearly progress through one or more pre-configured combinations
to aid the user in following a recommended multi-step learning path/process. The user may
use the next control 424A and the previous control 422A to navigate through the linear
learning path. Activation of the next control 424A advances the user through each next step
of the pre-configured learning path, loading the left video area 409A video player and the
right side text area with video and text content respectively appropriate for that step. When
the last step of the path is reached, activation of the next control 424A will navigate the user
to a next section/display combination, which may be the next learning object or the start of
the assessment process user interfaces.
The previous control 422A enables the user to return to a previous step
view in the linear learning path, and loads the content into the left video area 409A video
player and the right side text area 405A with video and text content respectively appropriate
for the previous step. When on the very first step of the learning path, activation of the
previous control 422A may navigate the user to the previous section/display combination,
which may be the previous learning object (e.g., starting with the challenge view), or back to
a briefing view if the user were on the first learning object.
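The next/previous boundary behavior just described can be sketched as follows (hypothetical names and step labels drawn from the recommended path; a sketch only):

```python
# Hypothetical sketch of linear learning-path navigation: stepping past
# either end of the path leaves it for the adjacent section/display
# combination (e.g., the next learning object, or a briefing view).
STEPS = ["challenge", "observe", "focus", "practice", "review"]

def next_step(index):
    if index + 1 < len(STEPS):
        return ("step", index + 1)
    return ("next_section", None)      # e.g., next learning object / assessment

def previous_step(index):
    if index > 0:
        return ("step", index - 1)
    return ("previous_section", None)  # e.g., previous learning object / briefing
```

Activating the next control 424A or previous control 422A would then load the video and text content appropriate for the resulting step.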
In addition or instead, the user may utilize the other relevant controls
discussed herein to adapt a learning process to the user’s individual learning style.
A pre-configured path may correspond to a recommended path for new
users that enables new users to linearly progress through recommended display combinations,
such as those discussed above with respect to Table 1 or Table 2 (e.g., challenge, observe,
focus, practice, and review). Such a recommended learning path may be a combination of
views (e.g., challenge, observe, focus, practice, and review) that will present information to
the user in an easy to comprehend format, which aids new users unfamiliar with the many
options on the training display combination controls. Progressing through each step of the
path provides the user a complete view of the content presented in a logical and efficient
manner.
For example, the first step of a learning path may be a challenge step
which presents the user with the actual spoken challenge that the user will practice
responding to. In the challenge step, the video area 409A video player may be loaded with a
corresponding module challenge video, and the text area may be left blank.
At the observe step, the video area 409A video player may be loaded with
a corresponding module role model video and the text area with the full script. This enables
the user to view and listen to the role model video while following along in the text script.
At the focus step, the video area 409A video player may be loaded with a
corresponding module role model video and the text area with the corresponding key
elements (e.g., in numbered or bullet fashion) in the role model video, where the key
elements should be included in a user response to a corresponding challenge, in the context
of the user’s own language.
At the practice step, the video area 409A video player may be loaded with
a corresponding module challenge video, and the text panel may be loaded with a
record/playback control. The practice step enables the user to verbally practice, aloud, in
responding to the questions or statements in the challenge video. The user may optionally
record the user’s responses (video and/or audio recording), as similarly discussed elsewhere
herein.
At the review step, the user’s spoken response may be reviewed to ensure
that the response includes the corresponding key elements. To assist the user in the review
step, the video area 409A video player may be loaded with a corresponding module
challenge video, and the text panel may be loaded with the key elements. Optionally, the
record/playback control may be displayed, enabling the user to playback a recording of the
user’s challenge responses while the key elements are displayed.
As the user navigates or is navigated forwards or backwards through the
learning path, the relevant content for each step will be automatically loaded to the video area
409A video player and the text area 405A. As noted above, optionally when the video
loaded to the video player changes from a first video to a second video, the newly loaded
video will be automatically played by the video player.
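The five-step loading behavior described above can be summarized in a small mapping (hypothetical identifiers; a sketch of the described behavior, not the disclosed implementation):

```python
# Hypothetical sketch: each step of the recommended learning path maps to
# the content loaded into the video area (409A) and the text area (405A).
# The video auto-plays whenever the loaded video changes.
STEP_CONTENT = {
    "challenge": ("challenge_video", None),
    "observe":   ("role_model_video", "full_script"),
    "focus":     ("role_model_video", "key_elements"),
    "practice":  ("challenge_video", "record_playback_control"),
    "review":    ("challenge_video", "key_elements"),
}

def load_step(step, current_video):
    video, text = STEP_CONTENT[step]
    autoplay = (video != current_video)   # auto-play only on a video change
    return video, text, autoplay
```

Navigating from the challenge step to the observe step, for instance, swaps the challenge video for the role model video (triggering auto-play) and fills the text area with the full script.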
As noted elsewhere herein, the recommended learning path is simply one
option that moves the user through the content in an efficient and direct manner. Once the
user has become familiar with the learning path and/or the disclosed system, the user may
desire to customize the user’s experience. Such customization may be reflected in the
learning step interface 420A as illustrated in Figure 4H.
For example, if the user recorded herself using the record/playback
control, the user might want to load the role model video in the video area 409A video player
to enable the user to compare the user’s performance to the challenge responses in the role
model video by playing the role model video and then the user’s own recorded version.
By way of further example, while the recommended learning path takes
the user through a sequential multi-step process, the user may want to see other combinations
of content that are not on the predefined path, and may control the content and content flow
accordingly. For example, the user might want to perform a theater-style read through of a
script of a challenge response, where the user plays a challenge video in the video area 409A
video player to provide the user with the challenge cues, and has the script displayed on the
text area 405A. The user may then do a “read through” by reading the words in the script
one or more times to better learn how to respond to the challenges.
When a user diverges from a predefined multi-step process path, the
learning step interface 420A may indicate that the user is performing a custom process which
may reflect the user’s current flow. Optionally, the learning step interface 420A may be
rendered to indicate recommended steps that are being bypassed via the user’s current flow.
For example, with reference to Figure 4H, the learning step interface 420A has a flow arrow
labeled “custom view” that indicates the custom flow, where the learning step interface 420A
is rendered to indicate what step the user will be navigated to if the user activates the
previous or next controls (the challenge and focus steps respectively in this example). In this
example, the “custom view” is placed above the “observe” state to indicate that the
navigation will correspond to the navigation that would occur if the user was on the observe
state. When in a “custom view”, in one example, activation of the previous display
combination/step control 422A in the learning step interface 420A will navigate the user to
the challenge step with the role model video playing in the video area 409A video player, and
the text area 405A may be blank. Activation of the
next display combination/step control 424A will navigate the user to the focus step, with the
role model video playing in the video area 409A video player, and the text area 405A
displaying key elements text. These are re-entry points into the predefined multi-step
process, where the user activates the previous display combination/step control 422A
or the next display combination/step control 424A to return to the predefined multi-step
process path.
Optionally, the user may continue to load whatever content the user
would like in the video area 409A video player and the text area 405A using the controls
under each area – wandering on and off the learning path in a freestyle form as may be
desirable for more experienced users.
It is understood that while the foregoing user interfaces and process
described herein relate to training content for a police officer, such interfaces may be
similarly used to train other users, such as bankers, sales personnel, call center operators,
teachers, and the like. Further, the use of the user interfaces, and in particular, the dynamic
navigation flow interface, is not limited to use with training content but may be utilized with
other content, particularly content that includes multiple components, subcomponents, and
sub-subcomponents. Further, the linear learning paths are optionally fully customizable and
may be altered to suit specific industry or learning needs. For example, the learning path
may be different for bankers, law enforcement, retail personnel, etc., because the people,
situations, and training needs may be very different.
Thus, for example, methods and systems disclosed herein may optionally
be used as part of an enhanced cloud-based network training system to efficiently and
consistently embed behavior, skills and knowledge into trainees using enhanced navigation
tools and processes. For example, a computer-based system is disclosed that provides
deliberate verbal practice and rehearsal to provide behavioral training. The combination
of animation, audible speech, and text further reinforces the training content. It is
understood that, optionally, the animations disclosed herein may be replaced with real actors.
However, advantageously, the use of animation enables more precise control of speakers in
the videos. For example, during the animation creation process, user interfaces may be
provided that enable an animator to select from a menu a desired face, skin tone, ethnic
group, body type, clothing, and/or background. Further, the storage of animation files may
occupy less memory storage space than a live action video file.
By way of illustration, as discussed herein, the system is configurable to
train users to respond to “challenges” (a statement or question by another or a scenario
for which a response is to be provided). The training system can include multiple training
modules, wherein a module can include one or more challenges directed to a specific subject.
Once the user has completed a practice component including the module’s challenges, the
user may be evaluated on how well the user responds to challenges, as described herein.
The practice component optionally includes a briefing component (e.g.,
where the user will be textually told/shown what the user is to learn) and role model
component where a role model video is rendered demonstrating how a response to a
challenge is properly done, with an accurate articulation of the relevant elements, with the
appropriate delivery style (e.g., command presence). Optionally, a key elements component
displays more significant elements (e.g., key elements) that the user is to learn to respond to a
given challenge, where the more significant elements are embedded in phrases including
contextual language. For example, the phrases may optionally include complete role model
language that can be used in responding to the challenge. Further, optionally the more
significant elements are visually emphasized with respect to the contextual language. It
should be noted that optionally, the language of a “key element” within a given phrase does
not have to be contiguous language. Each phrase is optionally separately displayed (e.g.,
displayed on a separate line), with a visual emphasis element (e.g., a bullet or the like)
displayed at the beginning of each phrase. Further, although certain examples herein refer to
command presence in the context of law enforcement training, other ‘delivery styles’ and
delivery style scorings may be utilized for other training scenarios (e.g., for a customer
assistance agent, for a call center, for a banker, for retail personnel, etc.). Thus, in the law
enforcement context, the delivery style may be command presence, but in other applications
the delivery style may be confidence, empathy, body language, and/or other appropriate
response style and scoring of the same.
Unless the context indicates otherwise, the term “video” refers to an
analog or digital video comprising actual objects and people and/or animations thereof with
an audio track. For example, as discussed herein, an audio video presentation can include an
animated avatar or other character that recites the role model language, challenges,
instructions or other verbal content. In addition, while the avatar is reciting role model
language, the key elements may optionally be textually displayed, without the surrounding
contextual language. Thus, the avatar acts as an automated coach that demonstrates to the
user, in a consistent and accurate manner, an example of how to respond to a challenge. The
video component of a given user interface may optionally be automatically played when the
user interface is displayed and/or may be played in response to a user activating a play control.
The methods, systems, and/or user interfaces described herein may
optionally be utilized in conjunction with methods, systems, and/or user interfaces described
in U.S. Application No. 15/335,182, titled “SYSTEMS AND METHODS FOR
COMPUTERIZED INTERACTIVE SKILL TRAINING”, the content of which is
incorporated herein by reference in its entirety.
The methods and processes described herein may have fewer or additional
steps or states and the steps or states may be performed in a different order. Not all steps or
states need to be reached. The methods and processes described herein may be embodied in,
and fully or partially automated via, software code modules executed by one or more general
purpose computers, gaming consoles, smart televisions, etc. The code modules may be
stored in any type of computer-readable medium or other computer storage device. Some or
all of the methods may alternatively be embodied in whole or in part in specialized computer
hardware. The systems described herein may optionally include displays, user input devices
(e.g., touchscreen, keyboard, mouse, voice recognition, etc.), network interfaces, etc.
The results of the disclosed methods may be stored in any type of
computer data repository, such as relational databases and flat file systems that use volatile
and/or non-volatile memory (e.g., magnetic disk storage, optical storage, EEPROM and/or
solid state RAM).
The various illustrative logical blocks, modules, routines, and algorithm
steps described in connection with the embodiments disclosed herein can be implemented as
electronic hardware, computer software, or combinations of both. To clearly illustrate this
interchangeability of hardware and software, various illustrative components, blocks,
modules, and steps have been described above generally in terms of their functionality.
Whether such functionality is implemented as hardware or software depends upon the
particular application and design constraints imposed on the overall system. The described
functionality can be implemented in varying ways for each particular application, but such
implementation decisions should not be interpreted as causing a departure from the scope of
the disclosure.
Moreover, the various illustrative logical blocks and modules described in
connection with the embodiments disclosed herein can be implemented or performed by a
machine, such as a processor device, a digital signal processor (DSP), an application specific
integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable
logic device, discrete gate or transistor logic, discrete hardware components, or any
combination thereof designed to perform the functions described herein. A processor device
can be a microprocessor, but in the alternative, the processor device can be a controller,
microcontroller, or state machine, combinations of the same, or the like. A processor device
can include electrical circuitry configured to process computer-executable instructions. In
r embodiment, a processor device includes an FPGA or other programmable device
that performs logic operations without processing computer-executable instructions. A
processor device can also be implemented as a combination of computing devices, e.g., a
combination of a DSP and a microprocessor, a plurality of microprocessors, one or more
microprocessors in conjunction with a DSP core, or any other such configuration. Although
described herein primarily with respect to digital technology, a processor device may also
include primarily analog components. For example, some or all of the rendering techniques
described herein may be implemented in analog circuitry or mixed analog and digital
circuitry. A computing environment can include any type of computer system, including, but
not limited to, a computer system based on a microprocessor, a mainframe computer, a
digital signal processor, a portable computing device, a device controller, or a computational
engine within an appliance, to name a few.
The elements of a method, process, routine, or algorithm described in
connection with the embodiments disclosed herein can be embodied directly in hardware, in
a software module executed by a processor device, or in a combination of the two. A
software module can reside in RAM memory, flash memory, ROM memory, EPROM
memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other
form of a non-transitory computer-readable storage medium. An exemplary storage medium
can be coupled to the processor device such that the processor device can read information
from, and write information to, the storage medium. In the alternative, the storage medium
can be integral to the processor device. The processor device and the storage medium can
reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor
device and the storage medium can reside as discrete components in a user terminal.
Conditional language used herein, such as, among others, "can," "could,"
"might," "may," “e.g.,” and the like, unless specifically stated otherwise, or otherwise
understood within the context as used, is generally intended to convey that certain
embodiments include, while other embodiments do not include, certain features, elements
and/or steps. Thus, such conditional language is not generally intended to imply that
features, elements and/or steps are in any way required for one or more embodiments or that
one or more embodiments necessarily include logic for deciding, with or without other input
or prompting, whether these features, elements and/or steps are included or are to be
performed in any particular embodiment. The terms “comprising,” “including,” “having,”
and the like are synonymous and are used inclusively, in an open-ended fashion, and do not
exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is
used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to
connect a list of elements, the term “or” means one, some, or all of the elements in the list.
Disjunctive language such as the phrase “at least one of X, Y, Z,” unless
specifically stated otherwise, is otherwise understood with the context as used in general to
present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X,
Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not,
imply that certain embodiments require at least one of X, at least one of Y, or at least one of
Z to each be present.
While the phrase “click” may be used with respect to a user selecting a
control, menu selection, or the like, other user inputs may be used, such as voice commands,
text entry, gestures, etc. For example, a click may be in the form of a user touch (via finger
or stylus) on a touch screen, or in the form of a user moving a cursor (using a mouse or
keyboard navigation keys) to a displayed object and activating a physical control (e.g., a
mouse button or keyboard key). User inputs may, by way of example, be provided via an
interface or in response to a prompt (e.g., a voice or text prompt). By way of example, an
interface may include text fields, wherein a user provides input by entering text into the field.
By way of further example, a user input may be received via a menu selection (e.g., a drop
down menu, a list or other arrangement via which the user can check via a check box or
otherwise make a selection or selections, a group of individually selectable icons, a menu
selection made via an interactive voice response system, etc.). When the user provides an
input or activates a control, a corresponding computing system may perform a corresponding
operation (e.g., store the user input, process the user input, provide a response to the user
input, etc.). Some or all of the data, inputs and instructions provided by a user may
optionally be stored in a system data store (e.g., a database), from which the system may
access and retrieve such data, inputs, and instructions. The notifications and user interfaces
described herein may be provided via a Web page, a dedicated or non-dedicated phone
application, computer application, a short messaging service message (e.g., SMS, MMS,
etc.), instant messaging, email, push notification, audibly, and/or otherwise.
The user terminals described herein may be in the form of a mobile
communication device (e.g., a cell phone, a VoIP equipped mobile device, etc.), laptop,
tablet computer, interactive television, game console, media streaming device, head-wearable
display, virtual reality display/headset, augmented reality display/headset, networked watch,
etc. The user terminals may optionally include displays, user input devices (e.g.,
touchscreen, keyboard, mouse, voice recognition, etc.), network interfaces, etc.
While the above detailed description has shown, described, and pointed
out novel features as applied to various embodiments, it can be understood that various
omissions, substitutions, and changes in the form and details of the devices or algorithms
illustrated can be made without departing from the spirit of the disclosure. As can be
recognized, certain embodiments described herein can be embodied within a form that does
not provide all of the features and benefits set forth herein, as some features can be used or
practiced separately from others.
Claims (30)
1. A system, comprising: a processing device; a computer readable medium that stores programmatic instructions that, when executed by the processing device, are configured to cause the system to perform operations comprising: detect a selection of a multimedia module; render, during a first mode of operation, a dynamic navigation flow control in association with first multimedia content of the multimedia module, wherein the rendered dynamic navigation flow control indicates a first current navigation position; store a first result to memory obtained by execution of an interactive event during the first mode of operation; based at least on the first result, enter a second mode of operation; and re-render, during the second mode of operation, the dynamic navigation flow control in association with second multimedia content of the multimedia module, wherein the re-rendered dynamic navigation flow control indicates a second navigation position corresponding to a second current navigation position.
2. The system as defined in Claim 1, wherein the dynamic navigation flow control comprises a plurality of links to a first user interface, the first user interface comprising: a first panel comprising a video player configured to selectively play video content, where a plurality of items of video content to be played via the first panel video player are preloaded; a second panel configured to selectively display at least: first static content comprising text, or a video player configured to play real-time video of a user while the video of the user is being recorded; wherein activation of a first link of the dynamic navigation flow control causes: a first video, included in the plurality of items of preloaded video content, to be displayed by the video player in the first panel, and first text to be displayed in the second panel; wherein activation of a second link of the dynamic navigation flow control causes: a second video to be displayed by the video player in the first panel, and second text to be displayed in the second panel.
3. The system as defined in Claim 1, wherein the dynamic navigation flow control comprises a plurality of links to respective interfaces, wherein activation of a given link causes a respective interface to be rendered.
4. The system as defined in Claim 1, wherein the dynamic navigation flow control comprises a plurality of components, wherein: during the first mode, a first component is expanded to render one or more subcomponents of the first component, and a second component is rendered without rendering any subcomponents of the second component, and during the second mode, the second component is expanded to render one or more subcomponents of the second component, and the first component is rendered without rendering any subcomponents of the first component.
5. The system as defined in Claim 1, the operations further comprising enabling a hierarchical navigation interface to be rendered in a sidebar visually displaced from the dynamic navigation flow control.
6. The system as defined in Claim 1, wherein the dynamic navigation flow control rendered during the first mode of operation in association with the first multimedia content, is rendered in a user interface in association with: a first control positioned on a bottom left-hand side of the user interface and accessible to a left user thumb when displayed on a handheld device comprising a touch display, and a second control positioned on a bottom right-hand side of the user interface and accessible to a right user thumb when displayed on the handheld device comprising the touch display.
7. The system as defined in Claim 1, wherein the multimedia module comprises briefing, practice, and evaluation media.
8. The system as defined in Claim 1, wherein the multimedia module comprises animated characters whose lip, hand, and limb movements are synchronized with an audio track.
9. The system as defined in Claim 1, wherein the system is configured to render a first interface comprising: a multi-segment video recording-playback set of controls and indicators comprising: a standby indicator, a camera activation control, a video content erase control, a play control, a microphone indicator; wherein: at least partly in response to activation of the record control, the record control, the camera control, and the microphone indicator are visually emphasized, and at least partly in response to activation of the playback control, the playback control, the camera control, and the microphone indicator are visually emphasized.
10. The system as defined in Claim 1, wherein the system is configured to render a process step interface displaced from the dynamic navigation flow control, the process step interface comprising: a recommended process flow interface, including: a plurality of sequential process state entries, a previous state control, which when activated, causes the user to be navigated to a previous process state in the recommended process flow interface, a next state control, which when activated, causes the user to be navigated to a subsequent process state in the recommended process flow interface; wherein the system is configured to detect when a user deviates from the recommended process flow interface, and in response, generate a corresponding deviation notification, and modify the recommended process flow interface in the process step interface to indicate a process flow state being skipped by the user.
11. Non-transitory computer readable medium that stores programmatic instructions that, when executed by a processing device, are configured to cause the processing device to perform operations comprising: detect a selection of a multimedia module via a user input device; enable, during a first mode of operation, a dynamic navigation flow control to be rendered in association with first multimedia content of the multimedia module, wherein the rendered dynamic navigation flow control indicates a first current navigation position; enable a first result to be stored to memory, the first result obtained by execution of an interactive event during the first mode of operation; enable a second mode of operation to be entered based at least in part on the first result; and enable the dynamic navigation flow control to be re-rendered, during the second mode of operation, in association with second multimedia content of the multimedia module, wherein the re-rendered dynamic navigation flow control indicates a second navigation position corresponding to a second current navigation position.
12. The non-transitory computer readable medium as defined in Claim 11, wherein the dynamic navigation flow control comprises a plurality of links to a first user interface, the first user interface comprising: a first area comprising a video player configured to selectively play video content; a second area configured to selectively display at least: first static content comprising text, or a video player configured to play real-time video of a user while the video of the user is being recorded; wherein activation of a first link of the dynamic navigation flow control causes: a first video to be displayed by the video player in the first area, and first text to be displayed in the second area; wherein activation of a second link of the dynamic navigation flow control causes: a second video to be displayed by the video player in the first area, and second text to be displayed in the second area.
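As an illustrative, non-limiting sketch (not part of the claims), each link in claim 12 drives both display areas at once: a video for the first area and text for the second. The link identifiers and media names below are hypothetical.

```python
# Hypothetical link table: each link maps to a video (first area)
# and accompanying text (second area).
LINKS = {
    "link1": {"video": "intro.mp4", "text": "Welcome briefing"},
    "link2": {"video": "demo.mp4", "text": "Demo transcript"},
}

def activate_link(link_id):
    """On link activation, return what each area should display."""
    entry = LINKS[link_id]
    return {
        "first_area_video": entry["video"],   # played by the video player
        "second_area_text": entry["text"],    # shown as static text content
    }
```

A single activation thus updates both areas consistently, which is the coupling the claim describes.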
13. The non-transitory computer readable medium as defined in Claim 11, wherein the dynamic navigation flow control comprises a plurality of links to respective interfaces, wherein activation of a given link causes a respective interface to be rendered.
14. The non-transitory computer readable medium as defined in Claim 11, wherein the dynamic navigation flow control comprises a plurality of components, wherein: during the first mode, a first component is expanded to render one or more subcomponents of the first component, and a second component is rendered without rendering any subcomponents of the second component, during the second mode, the second component is expanded to render one or more subcomponents of the second component, and the first component is rendered without rendering any subcomponents of the first component.
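As an illustrative, non-limiting sketch (not part of the claims), the accordion-style behavior of claim 14, where only the component for the active mode is expanded to show its subcomponents, could be rendered like this; the component and subcomponent names are hypothetical.

```python
def render_flow_control(components, active_mode):
    """components: {component_name: [subcomponent, ...]} in display order.
    Only the component matching active_mode is expanded; the others are
    rendered collapsed, without their subcomponents."""
    rendered = []
    for name, subcomponents in components.items():
        rendered.append(name)
        if name == active_mode:
            rendered.extend("  " + s for s in subcomponents)  # expanded
        # collapsed components contribute only their own entry
    return rendered

components = {
    "Briefing": ["Intro video", "Key points"],
    "Practice": ["Record response", "Self review"],
}
render_flow_control(components, "Briefing")   # first mode
render_flow_control(components, "Practice")   # second mode
```

Switching `active_mode` between the two calls reproduces the first-mode/second-mode expansion swap the claim recites.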
15. The non-transitory computer readable medium as defined in Claim 11, the operations further comprising enabling a hierarchical navigation interface to be rendered in a sidebar visually displaced from the dynamic navigation flow control.
16. The non-transitory computer readable medium as defined in Claim 11, wherein the dynamic navigation flow control rendered during the first mode of operation in association with the first multimedia content is rendered in a user interface in association with: a first control positioned towards a bottom left-hand side of the user interface and accessible to a left user thumb when displayed on a handheld device comprising a touch display, and a second control positioned on a bottom right-hand side of the user interface and accessible to a right user thumb when displayed on the handheld device comprising the touch display.
17. The non-transitory computer readable medium as defined in Claim 11, wherein the multimedia module comprises briefing, practice, and evaluation media.
18. The non-transitory computer readable medium as defined in Claim 11, wherein the multimedia module comprises animated characters whose lip, hand, and limb movements are synchronized with an audio track.
19. The non-transitory computer readable medium as defined in Claim 11, the operations further configured to cause the processing device to render a first interface comprising: a multi-segment video recording-playback set of controls and indicators comprising: a standby indicator, a camera activation control, a video content erase control, a play control, a microphone indicator; wherein: at least partly in response to activation of the record control, the record control, the camera control, and the microphone indicator are visually emphasized, at least partly in response to activation of the playback control, the playback control, the camera control, and the microphone indicator are visually emphasized.
20. The non-transitory computer readable medium as defined in Claim 11, wherein the operations are further configured to cause the processing device to render a process step interface displaced from the dynamic navigation flow control, the process step interface comprising: a recommended process flow interface, including: a plurality of sequential process state entries, a previous state control, which when activated, causes the user to be navigated to a previous process state in the recommended process flow interface, a next state control, which when activated, causes the user to be navigated to a subsequent process state in the recommended process flow interface; wherein the operations are configured to detect when a user deviates from the recommended process flow interface, and in response, generate a corresponding deviation notification, and modify the recommended process flow interface in the process step interface to indicate a process flow state being skipped by the user.
21. A computer implemented method, the method comprising: detecting, using a computerized system, a selection of a multimedia module, wherein the selection is made via a user input device; using the computerized system, enabling during a first mode of operation, a dynamic navigation flow control to be rendered in association with first multimedia content of the multimedia module, wherein the rendered dynamic navigation flow control indicates a first current navigation position; using the computerized system, enabling a first result to be stored to memory, the first result obtained by execution of an interactive event during the first mode of operation; using the computerized system, enabling a second mode of operation to be entered based at least in part on the first result; and using the computerized system, enabling the dynamic navigation flow control to be re-rendered, during the second mode of operation, in association with second multimedia content of the multimedia module, wherein the re-rendered dynamic navigation flow control indicates a second navigation position corresponding to a second current navigation position.
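As an illustrative, non-limiting sketch (not part of the claims), the mode progression of claim 21, where a result from an interactive event in the first mode is stored and the second mode is entered based at least in part on that result, can be modeled as a small state machine; the threshold value, mode names, and class structure are hypothetical.

```python
class ModuleSession:
    """Hypothetical session for one multimedia module."""

    PASS_THRESHOLD = 0.7  # hypothetical criterion for advancing

    def __init__(self):
        self.mode = "first"
        self.results = []
        self.navigation_position = 0  # shown by the navigation flow control

    def complete_event(self, result):
        self.results.append(result)  # store the result to memory
        if self.mode == "first" and result >= self.PASS_THRESHOLD:
            self.mode = "second"             # enter second mode of operation
            self.navigation_position += 1    # control re-renders at new position

session = ModuleSession()
session.complete_event(0.9)  # result high enough to enter the second mode
```

On the transition, the dynamic navigation flow control would be re-rendered with the new position, matching the re-rendering step of the claim.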
22. The computer implemented method as defined in Claim 21, wherein the dynamic navigation flow control comprises a plurality of links to a first user interface, the first user interface comprising: a first area comprising a video player configured to selectively play video content; a second area configured to selectively display at least: first static content comprising text, or a video player configured to play real-time video of a user while the video of the user is being recorded; wherein activation of a first link of the dynamic navigation flow control causes: a first video to be displayed by the video player in the first area, and first text to be displayed in the second area; wherein activation of a second link of the dynamic navigation flow control causes: a second video to be displayed by the video player in the first area, and second text to be displayed in the second area.
23. The computer implemented method as defined in Claim 21, wherein the dynamic navigation flow control comprises a plurality of links to respective interfaces, wherein activation of a given link causes a respective interface to be rendered.
24. The computer implemented method as defined in Claim 21, wherein the dynamic navigation flow control comprises a plurality of components, wherein: during the first mode, a first component is expanded to render one or more subcomponents of the first component, and a second component is rendered without rendering any subcomponents of the second component, during the second mode, the second component is expanded to render one or more subcomponents of the second component, and the first component is rendered without rendering any subcomponents of the first component.
25. The computer implemented method as defined in Claim 21, the method further comprising enabling a hierarchical direct navigation interface to be rendered, the direct navigation interface visually displaced from the dynamic navigation flow control.
26. The non-transitory computer readable medium as defined in Claim 11, wherein the dynamic navigation flow control rendered during the first mode of operation in association with the first multimedia content is rendered in a user interface in association with: a first control positioned towards a bottom left-hand side of the user interface and accessible to a left user thumb when displayed on a handheld device comprising a touch display, and a second control positioned on a bottom right-hand side of the user interface and accessible to a right user thumb when displayed on the handheld device comprising the touch display.
27. The computer implemented method as defined in Claim 21, wherein the multimedia module comprises briefing, practice, and evaluation media.
28. The computer implemented method as defined in Claim 21, wherein the multimedia module comprises animated characters whose lip, hand, and limb movements are synchronized with an audio track.
29. The computer implemented method as defined in Claim 21, the method further comprising: rendering a first interface comprising: a multi-segment video recording-playback set of controls and indicators comprising: a standby indicator, a camera activation control, a video content erase control, a play control, a microphone indicator; wherein: at least partly in response to activation of the record control, the record control, the camera control, and the microphone indicator are visually emphasized, at least partly in response to activation of the playback control, the playback control, the camera control, and the microphone indicator are visually emphasized.
30. The computer implemented method as defined in Claim 21, the method further comprising: rendering a process step interface separate from the dynamic navigation flow control, the process step interface comprising: a recommended process flow interface, including: a plurality of sequential process state entries, a previous state control, which when activated, causes the user to be navigated to a previous process state in the recommended process flow interface, a next state control, which when activated, causes the user to be navigated to a subsequent process state in the recommended process flow interface; detecting when a user deviates from the recommended process flow interface, and in response, generating a corresponding deviation notification, and modifying the recommended process flow interface in the process step interface to indicate a process flow state being skipped by the user.

[Drawing: touch screen display, processor, volatile/non-volatile memory, power management]
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US62/664,819 | 2018-04-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
NZ792078A true NZ792078A (en) | 2022-09-30 |