NL2019178B1 - Interactive display system, and method of interactive display - Google Patents
- Publication number
- NL2019178B1 (application NL2019178A)
- Authority
- NL
- Netherlands
- Prior art keywords
- image
- user
- head
- feature
- interactive display
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/453—Help systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
In a system and method of interactive display, a display unit is mounted in a fixed position relative to a user’s head. An orientation of the head is detected. A portion of a working environment image is displayed based on the detected orientation of the head, wherein the working environment image represents a real-life working environment. The working environment image comprises different features having different positions in the image. If the detected orientation of the head corresponds to a viewing direction aimed at the position of a feature, a marker image is displayed at or near said position. When the detected orientation continues to correspond to a viewing direction aimed at said position of said feature, the marker image is removed and a feature detail image is displayed at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.
Description
P33111NL00/ME
Interactive display system, and method of interactive display
FIELD OF THE INVENTION
The invention relates to the field of interactive display systems, and more specifically to methods of interactive display for training personnel or employees. An environment is created by obtaining images from a real-life environment, such as by filming a 360-degree video using six or more cameras and capturing still images, the real-life environment being a typical environment where the personnel would normally work, such as a drilling platform or any other environment, such as an industrial environment.
BACKGROUND OF THE INVENTION
Usually, new employees study from books. In addition, they may watch the occasional video or YouTube® film and attend many presentations. A lot can be learned in these ways, but it takes a lot of time to become familiar with the essentials of the actual work. Also, there is no efficient tracking and testing of an individual’s understanding of the subject matter.
Therefore, a need exists to provide a training which can be tailored to specific circumstances. A further need exists to provide a training which can address all aspects of the work to be trained. A still further need exists for an individualized training which can be done at low expenses. Also, a need exists for better training and testing the knowledge gained at low expenses.
In particular, a need exists for training personnel on board drilling platforms or other offshore platforms to recognize safety hazards. More in particular, a need exists for training for so-called “drops”: objects located above the personnel which may potentially fall down on them. It has been found that it is very difficult to train awareness of these drops.
SUMMARY OF THE INVENTION
Thus, it would be desirable to provide an interactive display system and method which can be tailored to specific circumstances. It would also be desirable to provide an interactive display system and method which can address all aspects of the work to be trained. It would further be desirable to provide an interactive display system and method which allows for an individualized training at relatively low costs. It would further be desirable to provide an interactive display system and method which allows for individualized training and testing progress tracking, wherein the progress and participation of users is logged and companies are enabled to track statistics of an individual’s progress, or the progress of groups of individuals.
To better address one or more of these concerns, in a first aspect of the present invention an interactive display system is provided. The interactive display system comprises: a database containing image data of:
- a virtual reality working environment image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image; and
- a plurality of feature detail images, wherein each feature detail image corresponds to a feature having a specific position in the working environment image;
a display unit configured to be mounted in a fixed position relative to a user’s head; an orientation sensor configured to be coupled to the user’s head; and a user interface component configured for:
- retrieving a working environment image from the database;
- detecting an orientation of the user’s head by the orientation sensor;
- displaying a portion of the working environment image on the display unit based on the detected orientation of the user’s head;
- if the detected orientation of the user’s head corresponds to a viewing direction aimed at the position of a feature, displaying a marker image in the working environment image at or near said position; and
- when the detected orientation continues to correspond to a viewing direction aimed at said position of said feature, removing the marker image and displaying the feature detail image in the working environment image at or near said position.
With the system according to the present invention, it is possible for new hired personnel to experience an actual working environment without having to be there physically. A working environment image is shot in-situ, at an existing, real-life site, and can be either a still image or a moving image, possibly augmented with audio recorded at an existing site. The working environment image is projected on a display unit to be viewed by a user. The display unit may be mounted in a fixed position to a user’s head. To provide the user with an experience of the working environment, an orientation sensor is coupled to the user’s head to allow a detection of an orientation of the user’s head.
The detected orientation of the user’s head determines which portion of the working environment image, retrieved from a database, is displayed on the display unit. Thus, the user may find himself/herself looking at a part of a working environment depending on the orientation of his/her head. For example, when the user looks up, he/she will be shown an upper portion of the working environment image on the display unit. When the user looks left, he/she will be shown a left portion of the working environment image on the display unit, and so on. The working environment image may be a 360-degree image in any direction, so that the user may also take a look behind, above, and below him/her.
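As a minimal sketch (not part of the patent text), the mapping from a detected head orientation to the displayed portion of an equirectangular 360-degree working environment image could look as follows; the function name, the 90-degree field of view and the pixel layout are illustrative assumptions:

```python
# Hypothetical sketch (not part of the patent text): map a detected head
# orientation (yaw, pitch in degrees) to the top-left corner of the window
# of an equirectangular 360-degree image that is shown on the display unit.
# The function name, 90-degree field of view and pixel layout are assumptions.

def viewport_origin(yaw_deg, pitch_deg, img_w, img_h, fov_deg=90.0):
    """Return (x, y) of the displayed portion's top-left corner in pixels."""
    cx = (yaw_deg % 360.0) / 360.0 * img_w        # centre column: yaw spans full width
    cy = (90.0 - pitch_deg) / 180.0 * img_h       # centre row: looking up -> smaller y
    vw = fov_deg / 360.0 * img_w                  # viewport width in pixels
    vh = fov_deg / 180.0 * img_h                  # viewport height in pixels
    x = (cx - vw / 2.0) % img_w                   # wrap around horizontally (360 degrees)
    y = min(max(cy - vh / 2.0, 0.0), img_h - vh)  # clamp at top/bottom of the image
    return x, y
```

Looking up then yields a window higher in the image, and looking left a window further to the left, matching the behaviour described above.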
The working environment image contains one or more features, i.e. specific locations in the working environment image each having a different position therein. Such positions have been predefined in the working environment image in a preprocessing step. Furthermore, each of said features has a feature detail image associated with it. Accordingly, different positions in the working environment image each have a feature detail image associated with it. The feature detail image may be a pop-up detail image or drawing or text image, and may be accompanied by a further feature detail image, whether a still image or a moving image, an audio fragment or any other type of media related to the action.
If the detected orientation of the user’s head corresponds to a viewing direction aimed at a predefined position of a feature, a marker image is displayed in the working environment image at or near said position. A marker image overlays the working environment image at its display location. A marker image may have or be a geometrical shape, a character or any other selected shape, may have a specific colour, may be still or moving, etc. In general, a marker image is different from the part of the working environment image it overlays. A marker image is intended to draw the attention of the user, and to invite the user to maintain the orientation of his/her head for some time.
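The hit test behind this step could be sketched as follows, under the illustrative assumption that each predefined feature position is stored as an angular region of the image; the region bounds and names are hypothetical:

```python
# Hypothetical sketch: a feature "position" is a predefined angular region of
# the working environment image; the marker is shown while the detected gaze
# direction falls inside that region. Bounds and names are illustrative.

def gaze_hits_feature(yaw, pitch, feature_region):
    """feature_region = (yaw_min, yaw_max, pitch_min, pitch_max), degrees."""
    yaw_min, yaw_max, pitch_min, pitch_max = feature_region
    return yaw_min <= yaw % 360.0 <= yaw_max and pitch_min <= pitch <= pitch_max

# Example region: a lamp unit occupying yaw 40-60 degrees, pitch 20-35 degrees.
lamp_region = (40.0, 60.0, 20.0, 35.0)
```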
When the detected orientation corresponds to a viewing direction aimed at said position of said feature for some time, i.e. after a predetermined period of time, the marker image may be removed, and the feature detail image associated with the position in the working environment image is displayed in the working environment image at or near said position.
The feature detail image may comprise one or more images, still or moving. The feature detail image may contain visual and/or textual and/or audible information relating to the feature. The feature detail image may, for example, show an enlarged view of the feature it is associated with. The feature detail image may also show a list of properties or explanations of the feature it is associated with. The feature detail image may also provide a list of multiple choice questions about the feature.
Accordingly, the system of the present invention may provide an immersive training. This training is not restricted to any job, installation, environment, culture or language. It can be tailored to any knowledge level. Training can be followed by an unlimited number of tests until a user is confident enough to take his/her examination. Examinations can be taken in the same way. Examinations may be randomized every time they are taken. Thus, there is no chance to cheat, and the training program may be completely impartial.
The system is in particular very useful for training risks associated with “drops” and training an awareness of these drops. In an embodiment, the features relate to objects (or “drops”) positioned above the user, and the user is trained to look up in order to spot these “drops” and become aware of the risks which they pose. A drop can be any mechanical part which, in case of failure of a connection, may drop down and injure or kill personnel. Drops may include parts of hoisting systems, sub-systems positioned overhead, drilling pipes or other pipes positioned overhead, and other mechanical parts which are positioned overhead and which are not welded to the drilling derrick (or, more generally, to a frame extending above the user) but fastened with a fastener.
The detected user behavior may be tracked, logged, stored and evaluated. The interactive display system may provide training score cards and test score cards. Depending on the user behavior, further feature detail images can be displayed. Also, feature detail images may provide user feedback, or incentives to locate other features in the working environment image, such as for identifying steps of a working sequence.
As an example, a company can ensure that everything is inspected as it should be. When using weekly, monthly or yearly checklists, filling in paper checklists on a clipboard is often a nuisance: it is hard to write with gloves on, it might be raining, and it is hard to read and tick boxes when hanging on a ladder. By repeating a checklist over and over again in a replica of the actual environment provided with the system according to the present invention, it can be ensured that personnel reach a level of familiarization that minimizes reliance on such checklists.
Training can be done in comfort at home, or in a classroom. Safe and effective operations can be created.
In an embodiment of the interactive display system of the present invention, the step of displaying the feature detail image in the working environment image at or near said position is performed after a predetermined period of time. Preferably, the predetermined period of time is triggered by a starting time of displaying the marker image. Accordingly, the fact that a user orients his/her head such that his/her view is aimed at a specific feature first triggers a display of a marker image, which will alert the user that more information is available about the specific feature. The triggering of the display of the marker image starts a time counter. When the time counter has counted a predetermined period of time, the marker image is removed (whereby the part of the original working environment image within the previous boundaries of the marker image is displayed again), and the feature detail image is displayed. The predetermined period of time is selected to be brief enough to prevent the user from changing the orientation of his/her head, and may be less than 2 seconds or less than 1 second.
On the other hand, if, within said predetermined period of time, the orientation of the user’s head changes from an orientation corresponding to a viewing direction aimed at said position of said feature to an orientation not corresponding to such a viewing direction, the displaying of the marker image is terminated, and the part of the original working environment image within the previous boundaries of the marker image is displayed again.
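The timing behaviour of this embodiment, including the fallback just described, could be sketched as a small state machine; the class, the mode names and the one-second dwell period are illustrative assumptions, not part of the patent text:

```python
# Hypothetical sketch of the marker/detail timing described above: displaying
# the marker starts a time counter; if the gaze stays on the feature for DWELL
# seconds the marker is replaced by the feature detail image, and if the gaze
# leaves earlier the marker is removed and the original image part restored.
# The class, mode names and the 1-second dwell period are assumptions.

DWELL = 1.0  # predetermined period of time, e.g. less than 2 seconds

class FeatureState:
    def __init__(self):
        self.mode = "idle"          # "idle" | "marker" | "detail"
        self.marker_started = 0.0   # time at which the marker appeared

    def update(self, gaze_on_feature, now):
        if self.mode == "idle" and gaze_on_feature:
            self.mode, self.marker_started = "marker", now
        elif self.mode == "marker":
            if not gaze_on_feature:
                self.mode = "idle"                     # restore original image part
            elif now - self.marker_started >= DWELL:
                self.mode = "detail"                   # remove marker, show detail
        return self.mode
```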
In an alternative embodiment of the interactive display system of the present invention, the user interface component further is configured for, after displaying the marker image in the working environment image at or near said position, detecting a user input at a user input device. Only after detecting the user input, the user interface component is controlled to perform the step of displaying the feature detail image in the working environment image at or near said position. Here, the user, after having found a position of a feature in the working environment image, may actively operate a user input device to display the feature detail image. The marker image may be removed at such step. When the user input device is not operated, no feature detail image appears.
In an embodiment, the interactive display system of the present invention further comprises an evaluation component, which is configured for recording detected orientations of the user’s head, and/or recording the user input.
Detected orientations of the user’s head, and/or the user input, may be evaluated, e.g. determined to be correct or incorrect, by the interactive display system through the evaluation component. Depending on the recorded data, further feature detail images can be displayed, and further operation of the user input device may be required. Also, feature detail images may provide user feedback, or incentives to locate other features in the working environment image, such as for identifying steps of a working sequence.
In an embodiment of the interactive display system of the present invention, the user interface component further is configured for, after displaying the feature detail image in the working environment image at or near said position, detecting a user input at a user input device, and the evaluation component is further configured for recording the user input. The feature detail image may provide questions and answers to the user relating to the associated feature. The user selects answers by the user input at the user input device. The answers are recorded, and can be used to assess the level of knowledge and skills of the user.
In an embodiment of the interactive display system of the present invention, the evaluation component further is configured for, for predetermined features, recording whether detected orientations of the user’s head correspond to viewing directions aimed at the positions of the predetermined features. Accordingly, it can be recorded whether a user is able to find a number of, or all relevant features in a working environment image.
Some of such orientations of the user’s head may correspond to viewing directions aimed at the positions of the predetermined features by coincidence, without the user even noticing such features. To increase the reliability that the user consciously observes the features, the evaluation component may further be configured for recording a time period of the orientation of the user’s head having a viewing direction aimed at the positions of the predetermined features. A very short time period indicates that the probability that the user actually consciously observed the corresponding feature is low.
In order to establish a high probability that the user has actually consciously viewed a feature, in an embodiment of the interactive display system of the present invention the evaluation component further is configured for determining whether each time period exceeds a predetermined time period threshold. The time period threshold is selected to be sufficiently long, for example at least 2 seconds.
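A minimal sketch of this evaluation step, assuming a hypothetical log of continuous viewing periods per feature (all names are illustrative):

```python
# Hypothetical sketch of this evaluation step: per feature, the continuous
# viewing periods are logged; a feature counts as consciously observed only
# if at least one period exceeds the threshold. Names are illustrative.

THRESHOLD = 2.0  # predetermined time period threshold, seconds

def observed_features(dwell_log, threshold=THRESHOLD):
    """dwell_log: {feature_id: [continuous viewing periods in seconds]}."""
    return {fid for fid, periods in dwell_log.items()
            if any(p > threshold for p in periods)}
```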
In an embodiment of the interactive display system of the present invention, the evaluation component is further configured for generating performance data of the user based on detected orientations of the user’s head and/or the user input. The performance data may take the form of a performance score card or a test score card containing alphanumerical and/or graphical data.
Herein, the user interface component and the evaluation component comprise a processing unit having instructions loaded into it for performing the steps of the invention. The user interface component may use the same processing unit as the evaluation component, or a different one, possibly at a different location.
In an embodiment of the interactive display system of the invention, the orientation sensor is coupled to the display unit. Accordingly, the user may mount both the display unit and the orientation sensor on his/her head in one simple operation.
In a low-cost, powerful embodiment of the interactive display system of the present invention, the display unit, the orientation sensor and the user interface component are comprised by a smartphone device. The smartphone preferably is coupled to a head mounted device. The head mounted device may comprise a touch screen for providing said user input. However, user input may also be provided by a predefined handling of the smartphone or other user input device, or by sound or speech through a microphone, for example.
Instead of a smartphone, another mobile device and/or virtual reality viewing device may be employed in the present invention.
In a second aspect of the present invention, a method of interactive display is provided. The method comprises:
- mounting a display unit in a fixed position relative to a user’s head;
- detecting an orientation of the user’s head;
- displaying a portion of a working environment image on the display unit based on the detected orientation of the user’s head, wherein the working environment image is a virtual reality image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image;
- if the detected orientation of the user’s head corresponds to a viewing direction aimed at the position of a feature, displaying a marker image in the working environment image at or near said position; and
- when the detected orientation continues to correspond to a viewing direction aimed at said position of said feature, removing the marker image and displaying a feature detail image in the working environment image at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.
These and other aspects of the invention will be more readily appreciated as the same becomes better understood by reference to the following detailed description, considered in connection with the accompanying drawings, in which like reference symbols designate like parts.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 depicts an embodiment of an interactive display system of the present invention.
Figure 2 depicts a flow diagram of a method of interactive display of the present invention.
Figure 3a depicts a part of a working environment image as viewable by a user.
Figure 3b depicts a part of the working environment image according to Figure 3a, with a marker image associated with a feature of the working environment image.
Figure 3c depicts a part of the working environment image according to Figure 3a, with a feature detail image associated with a feature of the working environment image.
DETAILED DESCRIPTION OF EMBODIMENTS
Figure 1 shows components of an embodiment of an interactive display system of the present invention. Figure 1 depicts a head 10 of a user wearing a head mounted device 12. The head mounted device 12 comprises a smartphone 14 as seen at the back side thereof. The smartphone 14 comprises a display unit at the front side of the smartphone 14, opposite to the back side. The display unit is a screen facing the eyes of the user. The display unit takes a fixed position relative to the head 10 of the user. Other head mounted devices may be used, in particular mobile and virtual reality, VR, viewing devices.
The head mounted device 12 may be provided with optics to convert a first image shown on the display unit to a second image suitable to be seen by the eyes of the user.
The head mounted device 12 may comprise a user input device comprising a touch screen 16a and/or one or more buttons 16b. The user input device 16a, 16b is connected to the smartphone 14 to allow the smartphone to receive a user input signal from the user input device 16a, 16b. A user input device may alternatively be provided separately from the head mounted device 12.
As further indicated in Figure 1, the smartphone 14 comprises a memory 18 storing image data. The combination of the memory 18 and the image data may be referred to as a database. The database contains image data of one or more virtual reality working environment images shot in-situ, and representing a real-life working environment, wherein each working environment image comprises different features having different positions in the working environment image. These positions have been identified before, and are known. The database further contains a plurality of feature detail images, wherein each feature detail image corresponds to a feature having a specific position in the working environment image(s).
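The database contents could be sketched, under illustrative assumptions about field names, the angular encoding of feature positions, and file names (none of which appear in the patent), as:

```python
# Hypothetical sketch of the database contents: one working environment image
# with predefined feature positions, each linked to a feature detail image.
# Field names, the angular-region encoding and file names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Feature:
    feature_id: str
    region: tuple      # (yaw_min, yaw_max, pitch_min, pitch_max) in degrees
    detail_image: str  # key or path of the associated feature detail image

@dataclass
class WorkingEnvironmentImage:
    image_path: str    # the in-situ 360-degree recording
    features: list = field(default_factory=list)

db = WorkingEnvironmentImage(
    image_path="drill_floor_360.mp4",
    features=[Feature("lamp_308a", (40.0, 60.0, 20.0, 35.0), "lamp_detail.png")],
)
```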
It is noted that the image data, or part of the image data, need not be stored in the memory of the smartphone 14. The image data can also be stored at a remote location, and retrieved from the remote location through a network telecommunication path established between the smartphone 14 and the remote location.
As further indicated in Figure 1, the head mounted device 12 or the smartphone 14 may comprise an orientation sensor 20, such as a set of accelerometers or a gyroscope.
The smartphone 14 is configured to load and run software, to thereby constitute a user interface component and/or an evaluation component. In general, the user interface component and the evaluation component comprise a processing unit, embodied in the smartphone 14, having instructions loaded into it for performing the steps as explained by reference to Figure 2. It is noted that the user interface component and the evaluation component need not be constituted by the smartphone 14, or may be only partially constituted by the smartphone 14, where the user interface component and/or the evaluation component is/are constituted in whole or in part by one or more processing units, having instructions loaded into them, located at a remote location, at least separate from the head mounted device 12. Then, at least part of the user interface functions and evaluation functions may be performed at the remote location, and at least a function of displaying the working environment image(s), the feature detail image(s) and marker images is performed by the smartphone 14. Data communication between the remote location and the smartphone 14 takes place through a network telecommunication path established between the smartphone 14 and the remote location.
It is noted that instead of the smartphone 14 merely a display unit may be mounted in the head mounted device 12.
Figure 2 depicts a flow diagram of functions or steps performed by the user interface component and the evaluation component, referring to the embodiment of the interactive display system of Figure 1.
In a step 201, a working environment image, WEI, is retrieved from the database. In a step 202, an orientation of the user’s head 10 is detected by the orientation sensor 20. In a step 203, a portion of the WEI is displayed on a display unit, such as the display unit of the smartphone 14, based on the orientation of the user’s head as detected by the orientation sensor 20 in step 202. In a decision step 204, it is determined whether (Y) or not (N) the detected orientation of the user’s head corresponds to a viewing direction aimed at a position of a feature of the WEI. If this is the case (Y), in a step 205 a marker image is displayed in the WEI at or near said position. If this is not the case (N), the flow returns to step 202.
Upon display of the marker image, step 205, a timer is started. In a decision step 206, it is determined whether (Y) or not (N) a predetermined time period has lapsed. If this is the case (Y), in a step 207 the marker image is removed. If this is not the case (N), the flow returns to step 204. If, within said predetermined period of time after the flow has returned from step 206 to step 204, the orientation of the user’s head 10 changes from an orientation corresponding to a viewing direction aimed at said position of said feature to an orientation not corresponding to such a viewing direction, then from step 204 the flow will transfer to step 202 and, according to a step 208, the displaying of the marker image is ended, if the marker image was displayed before according to step 205.
In a step 209 following step 207, the feature detail image is displayed in the WEI at or near said position. In a step 210 following step 209, a user input at a user input device, such as an answer to a question posed in, or in association with, the feature detail image, is detected. In a step 211 following step 210, the user input is recorded in a memory for further assessment. After step 211, the flow returns to step 202.
Parallel to step 205 and following, in a step 212 detected orientations of the user’s head 10 corresponding to viewing directions aimed at the positions of the predetermined features are recorded in a memory. In a decision step 213, it is determined whether, for the predetermined features, the recorded detected orientations of the user’s head correspond to viewing directions aimed at the positions of the (or all) predetermined features. If this is the case (Y), in a step 214 this is recorded.
Also parallel to step 205 and following, in a step 215 a time period of the orientation of the user’s head having a viewing direction aimed at the positions of the predetermined features is recorded in a memory. In a decision step 216, it is determined whether (Y) each time period exceeds a predetermined time period threshold. If this is the case (Y), in a step 217 this is recorded in a memory.
The recordings in steps 214 and 217 can be used in an assessment of the training of the user. The recordings may be used to generate a report, such as a customized report, which may be automatically uploaded in an existing database.
Alternatively to steps 206 to 207, as indicated by dashed lines, in a decision step 218, after step 205 of displaying the marker image in the WEI at or near said position, it is determined whether (Y) or not (N) a user input is detected at a user input device. If this is the case (Y), the flow continues with step 209 of displaying the feature detail image in the WEI at or near said position. If this is not the case (N), the flow continues with step 218 until timed out.
As illustrated in Figure 3a, a user may view on the display device a portion of a working environment image 300. In this example, the working environment image shows a portal 302 in a working environment, the portal 302 having two supporting structures 304 and a beam structure 306 supported by the supporting structures 304. The beam structure 306 carries two lamp units 308a, 308b.
For lamp unit 308a, it has been defined that a specific part 310, indicated by dashed lines, of the working environment image 300 is regarded as the “position” of the feature of the lamp unit 308a in the working environment image 300.
If the detected orientation of the user’s head 10 corresponds to a viewing direction aimed at said position of the feature of the lamp unit 308a, a marker image 312 is displayed in the working environment image 300, as illustrated in Figure 3b.
When predetermined further conditions have been met, such as the lapse of a predetermined period of time, the marker image 312 is removed and a predetermined feature detail image 314 of the lamp unit 308a is displayed in the working environment image 300, as illustrated in Figure 3c. Different feature detail images are possible. As an example, also a feature detail image 315 is possible, containing text associated with the feature. Also, both feature detail images 314, 315, and even more feature detail images, are possible.
Through the working environment image 300 and the feature detail images such as 314, 315, interacting with the user, the user may be trained to perform a specific task, without entering the real working environment. The performance of the user in the training may be recorded and evaluated.
As another example, a lifting process of a tube from a location A to a location B may be performed, wherein a user fulfils the function of a supervisor. When the user has identified a first step in the lifting process, the user interface component may continue to provide the possibility for the user to orient his/her head such that it corresponds to a viewing direction aimed at a position of a second step of the lifting process. If the user correctly identifies all steps of the lifting process in sequence, the user is evaluated as performing the lifting process correctly.
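The evaluation of the lifting-process example reduces to checking that the identified steps match the expected sequence. A minimal sketch, with illustrative step names that are not taken from the disclosure:

```python
def evaluate_sequence(expected_steps, identified_steps):
    """Sketch of the supervisor-training evaluation: the user passes only
    when every step of the lifting process has been identified, in the
    correct order.  Step names are hypothetical."""
    return list(identified_steps) == list(expected_steps)
```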
As another example, a process of testing the line-up of a manifold can be performed. The user is to indicate the valves which need to be in an open or a closed position. If the user does not line up the valves properly, he/she fails the test. However, if he/she completes the line-up fully and correctly, he/she passes the test. This can be done with or without a time limit; a time limit causes the user to have to work under time pressure.
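The pass/fail logic of the manifold line-up test can be sketched as a comparison of required and indicated valve positions, with an optional time limit. Valve names, position values, and the function signature are assumptions for illustration only.

```python
def lineup_test(required_positions, user_positions,
                time_taken_s=None, time_limit_s=None):
    """Sketch of the manifold line-up test: the user passes only when the
    line-up is complete (every valve indicated) and correct (each valve in
    its required open/closed position), and, when a time limit is
    configured, the line-up was completed within that limit."""
    complete = set(user_positions) == set(required_positions)
    correct = complete and all(
        user_positions[valve] == position
        for valve, position in required_positions.items()
    )
    within_time = time_limit_s is None or (
        time_taken_s is not None and time_taken_s <= time_limit_s
    )
    return correct and within_time
```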
As explained in detail above, in a system and method of interactive display, a display unit is mounted in a fixed position relative to a user’s head. An orientation of the head is detected. A portion of a working environment image is displayed based on the detected orientation of the head, wherein the working environment image represents a real-life working environment. The working environment image comprises different features having different positions in the image. If the detected orientation of the head corresponds to a viewing direction aimed at the position of a feature, a marker image is displayed at or near said position. When the detected orientation of the head continues to correspond to a viewing direction aimed at said position, for example for a predetermined period of time, the marker image is removed and a feature detail image is displayed at or near said position, wherein the feature detail image corresponds to the feature having said position in the working environment image.
As required, detailed embodiments of the present invention are disclosed herein. However, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Further, the terms and phrases used herein are not intended to be limiting, but rather, to provide an understandable description of the invention.
The terms "a"/"an", as used herein, are defined as one or more than one. The term plurality, as used herein, is defined as two or more than two. The term another, as used herein, is defined as at least a second or more. The terms including and/or having, as used herein, are defined as comprising (i.e., open language, not excluding other elements or steps). Any reference signs in the claims should not be construed as limiting the scope of the claims or the invention.
The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. A single processor or other unit may fulfil the functions of several items recited in the claims. On the other hand, a function recited in the claims may be performed by multiple processors in communication with each other.
The terms software, program, software application, and the like, as used herein, are defined as a sequence of instructions designed for execution on a computer system. A program, computer program, software or software application may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library and/or any other sequence of instructions designed for execution on a computer system. A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but it may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Claims (23)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2019178A NL2019178B1 (en) | 2017-07-05 | 2017-07-05 | Interactive display system, and method of interactive display |
PCT/NL2018/050433 WO2019009712A1 (en) | 2017-07-05 | 2018-07-04 | Interactive display system, and method of interactive display |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2019178B1 true NL2019178B1 (en) | 2019-01-16 |
Family
ID=59656132
Country Status (2)
Country | Link |
---|---|
NL (1) | NL2019178B1 (en) |
WO (1) | WO2019009712A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109947546B (en) * | 2019-03-13 | 2021-08-20 | 北京乐我无限科技有限责任公司 | Task execution method and device, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140139551A1 (en) * | 2012-11-21 | 2014-05-22 | Daniel McCulloch | Augmented reality help |
US20150268469A1 (en) * | 2013-12-10 | 2015-09-24 | The Boeing Company | Systems and methods for providing interactive production illustration information |
US20170072305A1 (en) * | 2015-09-16 | 2017-03-16 | Gree, Inc. | Virtual image display program, virtual image display apparatus, and virtual image display method |
AU2017100357A4 (en) * | 2017-03-28 | 2017-04-27 | Suegeo Pty Ltd | Interactive safety training and assessment |
US20170148214A1 (en) * | 2015-07-17 | 2017-05-25 | Ivd Mining | Virtual reality training |
Also Published As
Publication number | Publication date |
---|---|
WO2019009712A1 (en) | 2019-01-10 |