NL2019178B1 - Interactive display system, and method of interactive display

Interactive display system, and method of interactive display

Info

Publication number
NL2019178B1
Authority
NL
Netherlands
Prior art keywords
image
user
head
feature
interactive display
Prior art date
Application number
NL2019178A
Other languages
Dutch (nl)
Inventor
Hendrik Gerardus Kiewik Jeroen
Nghia Trieu Sung
Klunder Reinder
Original Assignee
Cap R&D B V
Priority date
Filing date
Publication date
Application filed by Cap R&D B V filed Critical Cap R&D B V
Priority to NL2019178A priority Critical patent/NL2019178B1/en
Priority to PCT/NL2018/050433 priority patent/WO2019009712A1/en
Application granted granted Critical
Publication of NL2019178B1 publication Critical patent/NL2019178B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G06F9/453 Help systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

In a system and method of interactive display, a display unit is mounted in a fixed position relative to a user’s head. An orientation of the head is detected. A portion of a working environment image is displayed based on the detected orientation of the head, wherein the working environment image represents a real-life working environment. The working environment image comprises different features having different positions in the image. If the detected orientation of the head corresponds to a viewing direction aimed at the position of such a feature, a marker image is displayed at or near that position. When the detected orientation remains aimed at the position of the feature, the marker image is removed and a feature detail image is displayed at or near that position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.

Description

P33111NL00/ME
Interactive display system, and method of interactive display
FIELD OF THE INVENTION
The invention relates to the field of interactive display systems, and more specifically to methods of interactive display for training personnel or employees. An environment is created by obtaining images from a real-life environment, such as by filming a 360-degree video using six or more cameras and taking still images, the real-life environment being a typical environment where the personnel would normally work, such as a drilling platform or any other, e.g. industrial, environment.
BACKGROUND OF THE INVENTION
Usually, new employees study from books. In addition, they watch the occasional video or YouTube® film and many presentations. A lot can be learned in these ways, but it takes a lot of time to become familiar with the actual practice. Also, there is no efficient way of tracking and testing an individual’s understanding of the subject matter.
Therefore, a need exists to provide training which can be tailored to specific circumstances. A further need exists to provide training which can address all aspects of the work to be trained. A still further need exists for individualized training which can be provided at low cost. Also, a need exists for better training, and for testing the knowledge gained, at low cost.
In particular, a need exists for training personnel on board drilling platforms or other offshore platforms in safety hazards. More in particular, a need exists for training for so-called “drops”. These are objects located above the personnel which may potentially fall down on the personnel. It was found that it is very difficult to train awareness of these drops.
SUMMARY OF THE INVENTION
Thus, it would be desirable to provide an interactive display system and method which can be tailored to specific circumstances. It would also be desirable to provide an interactive display system and method which can address all aspects of the work to be trained. It would further be desirable to provide an interactive display system and method which allows for individualized training at relatively low cost. It would further be desirable to provide an interactive display system and method which allows for individualized training and testing with progress tracking, wherein the progress and participation of users is logged and companies are enabled to track statistics of an individual’s progress, or the progress of groups of individuals.
To better address one or more of these concerns, in a first aspect of the present invention an interactive display system is provided. The interactive display system comprises: a database containing image data of: - a virtual reality working environment image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image; and - a plurality of feature detail images, wherein each feature detail image corresponds to a feature having a specific position in the working environment image; a display unit configured to be mounted in a fixed position relative to a user’s head; an orientation sensor configured to be coupled to the user’s head; and a user interface component configured for: - retrieving a working environment image from the database; - detecting an orientation of the user’s head by the orientation sensor; - displaying a portion of the working environment image on the display unit based on the detected orientation of the user’s head; - if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said feature, displaying a marker image in the working environment image at or near said position; - when the detected orientation corresponds to a viewing direction aimed at said position of said feature, removing the marker image and displaying the feature detail image in the working environment image at or near said position.
With the system according to the present invention, it is possible for newly hired personnel to experience an actual working environment without having to be there physically. A working environment image is shot in-situ, at an existing, real-life site, and can be either a still image or a moving image, possibly augmented with audio recorded at the site. The working environment image is projected on a display unit to be viewed by a user. The display unit may be mounted in a fixed position relative to a user’s head. To provide the user with an experience of the working environment, an orientation sensor is coupled to the user’s head to allow detection of an orientation of the user’s head.
The detected orientation of the user’s head determines which portion of the working environment image, retrieved from a database, is displayed on the display unit. Thus, the user finds himself/herself looking at a part of the working environment depending on the orientation of his/her head. For example, when the user looks up, he/she will be shown an upper portion of the working environment image on the display unit. When the user looks left, he/she will be shown a left portion of the working environment image, and so on. The working environment image may be a 360 degrees image in any direction, so that the user may also take a look behind, above, and below him/her.
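As an illustration of this orientation-dependent display, consider the following minimal sketch in Python. It assumes an equirectangular still image and yaw/pitch angles supplied by the orientation sensor; the function name, parameters and axis conventions are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def viewport(equirect: np.ndarray, yaw_deg: float, pitch_deg: float,
             fov_deg: float = 90.0, out_w: int = 960, out_h: int = 540) -> np.ndarray:
    """Return the portion of a 360-degree equirectangular image seen by a
    virtual camera oriented along (yaw, pitch), as a pinhole projection."""
    H, W = equirect.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    # Pixel grid of the output view, centred on the optical axis.
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    z = np.full_like(x, f)
    # Rotate the view rays by pitch (about the x-axis), then yaw (about the y-axis).
    p, q = np.radians(pitch_deg), np.radians(yaw_deg)
    y1 = y * np.cos(p) - z * np.sin(p)
    z1 = y * np.sin(p) + z * np.cos(p)
    x1 = x * np.cos(q) + z1 * np.sin(q)
    z2 = -x * np.sin(q) + z1 * np.cos(q)
    # Convert the rays to spherical coordinates, then to source pixel indices.
    lon = np.arctan2(x1, z2)                      # longitude in [-pi, pi]
    lat = np.arctan2(y1, np.hypot(x1, z2))        # latitude in [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equirect[v, u]
```

Looking up, left or behind then simply means calling this function with a different pitch or yaw; the full sphere of image data is always available.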
The working environment image contains one or more features, i.e. specific locations in the working environment image each having a different position therein. Such positions have been predefined in the working environment image in a preprocessing step. Furthermore, each of said features has a feature detail image associated with it. Accordingly, different positions in the working environment image each have a feature detail image associated with it. The feature detail image may be a pop-up detail image or drawing or text image, and may be accompanied by a further feature detail image, whether a still image or a moving image, an audio fragment or any other type of media related to the feature.
If the detected orientation of the user’s head corresponds to a viewing direction aimed at a predefined position of a feature, a marker image is displayed in the working environment image at or near said position. A marker image overlays the working environment image at its display location. A marker image may have or be a geometrical shape, a character or any other selected shape, may have a specific colour, may be still or moving, etc. In general, a marker image is different from the part of the working environment image it overlays. A marker image is intended to draw the attention of the user, and to invite the user to maintain the orientation of his/her head for some time.
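The test of whether a viewing direction is aimed at a feature’s position can be implemented as a simple angular hit-test. A sketch, assuming each feature is stored as a circular angular region; the Feature fields and all names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """A predefined feature: an angular region of the working environment
    image, plus the media shown when the user looks at it."""
    name: str
    yaw_deg: float          # centre of the region, horizontal
    pitch_deg: float        # centre of the region, vertical
    radius_deg: float       # angular half-size of the region
    detail_image: str       # path of the associated feature detail image

def gaze_hits(feature: Feature, head_yaw: float, head_pitch: float) -> bool:
    """True when the viewing direction is aimed at the feature's position."""
    d_yaw = (head_yaw - feature.yaw_deg + 180) % 360 - 180   # wrap to [-180, 180)
    d_pitch = head_pitch - feature.pitch_deg
    return d_yaw**2 + d_pitch**2 <= feature.radius_deg**2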
When the detected orientation corresponds to a viewing direction aimed at said position of said feature for some time, then after a predetermined period of time the marker image may be removed, and the feature detail image associated with the position in the working environment image is displayed in the working environment image at or near said position.
The feature detail image may comprise one or more images, still or moving. The feature detail image may contain visual and/or textual and/or audible information relating to the feature. The feature detail image may, for example, show an enlarged view of the feature it is associated with. The feature detail image may also show a list of properties or explanations of the feature it is associated with. The feature detail image may also provide a list of multiple choice questions about the feature.
Accordingly, the system of the present invention may provide an immersive training. This training is not restricted to any job, installation, environment, culture or language. It can be tailored to any knowledge level. Training can be completed with unlimited tests until a user is confident enough to take his/her examination. Examinations can be taken in the same way, and may be randomized every time they are taken. Thus, there is no opportunity to cheat, and the training program may be completely impartial.
The system is in particular very useful for training risks associated with “drops” and training an awareness of these drops. In an embodiment, the features relate to objects (or “drops”) positioned above the user, and the user is trained to look up in order to spot these “drops” and become aware of the risks which they pose. A drop can be any mechanical part which, in case of failure of a connection, may drop down and injure or kill personnel. Drops may include parts of hoisting systems, sub-systems positioned overhead, drilling pipes or other pipes positioned overhead, and other mechanical parts which are positioned overhead and which are not welded to the drilling derrick or, more generally, to a frame extending above the user, but fastened with a fastener.
The detected user behavior may be tracked, logged, stored and evaluated. The interactive display system may provide training score cards and test score cards. Depending on the user behavior, further feature detail images can be displayed. Also, feature detail images may provide user feedback, or incentives to locate other features in the working environment image, such as for identifying steps of a working sequence.
As an example, a company can ensure that everything is inspected as it should be. When using weekly, monthly or yearly checklists, paper checklists on a clipboard are often a nuisance: it is hard to write with gloves on, it might be raining, and it is hard to read and tick boxes while hanging from a ladder. By repeating a checklist over and over again in a replica of the actual environment provided by the system according to the present invention, it can be ensured that personnel reach a level of familiarization that minimizes reliance on such checklists.
Training can be done in comfort at home, or in a classroom, contributing to safe and effective operations.
In an embodiment of the interactive display system of the present invention, the step of displaying the feature detail image in the working environment image at or near said position is performed after a predetermined period of time. Preferably, the predetermined period of time is triggered by a starting time of displaying the marker image. Accordingly, the fact that a user orients his/her head such that his/her view is aimed at a specific feature first triggers a display of a marker image, which alerts the user that more information is available about the specific feature. The triggering of the display of the marker image starts a time counter. When the time counter has counted a predetermined period of time, the marker image is removed (whereby the part of the original working environment image within the previous boundaries of the marker image is displayed again), and the feature detail image is displayed. The predetermined period of time is selected to be brief enough to prevent the user from changing the orientation of his/her head, and may be less than 2 seconds or less than 1 second.
On the other hand, if the orientation of the user’s head corresponding to a viewing direction aimed at said position of said feature changes, within said predetermined period of time, to an orientation of the user’s head not corresponding to a viewing direction aimed at said position of said feature, the displaying of the marker image is terminated, and the part of the original working environment image within the previous boundaries of the marker image is displayed again.
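Taken together, the two paragraphs above describe a small dwell-timer state machine. A sketch of such a timer, where the dwell value is an example within the stated bound of less than 1 or 2 seconds; the class, state names and return values are illustrative assumptions:

```python
import time

MARKER_DWELL_S = 1.0   # example value; the text suggests less than 1 or 2 seconds

class MarkerTimer:
    """Shows a marker while the gaze stays on a feature, promotes it to the
    feature detail image after the dwell period, and cancels on gaze loss."""

    def __init__(self):
        self.started_at = None

    def update(self, gaze_on_feature: bool) -> str:
        if not gaze_on_feature:
            self.started_at = None                  # marker removed, original image restored
            return "show_environment"
        if self.started_at is None:
            self.started_at = time.monotonic()      # marker shown, time counter starts
        if time.monotonic() - self.started_at >= MARKER_DWELL_S:
            return "show_detail"                    # marker removed, detail image shown
        return "show_marker"
```

Calling update() once per rendered frame is sufficient; the timer resets itself whenever the gaze leaves the feature before the dwell period has lapsed.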
In an alternative embodiment of the interactive display system of the present invention, the user interface component is further configured for, after displaying the marker image in the working environment image at or near said position, detecting a user input at a user input device. Only after detecting the user input is the user interface component controlled to perform the step of displaying the feature detail image in the working environment image at or near said position. Here, the user, after having found a position of a feature in the working environment image, may actively operate a user input device to display the feature detail image. The marker image may be removed in this step. When the user input device is not operated, no feature detail image appears.
In an embodiment, the interactive display system of the present invention further comprises an evaluation component, which is configured for recording detected orientations of the user’s head, and/or recording the user input.
Detected orientations of the user’s head, and/or the user input, may be evaluated, e.g. determined to be correct or incorrect, by the interactive display system through the evaluation component. Depending on the recorded data, further feature detail images can be displayed, and further operation of the user input device may be required. Also, feature detail images may provide user feedback, or incentives to locate other features in the working environment image, such as for identifying steps of a working sequence.
In an embodiment of the interactive display system of the present invention, the user interface component further is configured for, after displaying the feature detail image in the working environment image at or near said position, detecting a user input at a user input device, and the evaluation component is further configured for recording the user input. The feature detail image may provide questions and answers to the user relating to the associated feature. The user selects answers by the user input at the user input device. The answers are recorded, and can be used to assess the level of knowledge and skills of the user.
In an embodiment of the interactive display system of the present invention, the evaluation component further is configured for, for predetermined features, recording whether detected orientations of the user’s head correspond to viewing directions aimed at the positions of the predetermined features. Accordingly, it can be recorded whether a user is able to find a number of, or all relevant features in a working environment image.
Some of such orientations of the user’s head may correspond to viewing directions aimed at the positions of the predetermined features by coincidence, without the user even noticing such features. To increase the reliability that the user consciously observes the features, the evaluation component may further be configured for recording a time period of the orientation of the user’s head having a viewing direction aimed at the positions of the predetermined features. A very short time period indicates that the probability that the user actually consciously observed the corresponding feature is low.
In order to establish a high probability of the user having actually consciously viewed a feature, in an embodiment of the interactive display system of the present invention the evaluation component further is configured for determining whether each time period exceeds a predetermined time period threshold. The time period threshold is selected sufficiently long, for example at least 2 seconds.
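Such dwell-time evaluation amounts to accumulating per-feature gaze time and testing it against the threshold. A sketch, with illustrative class and method names:

```python
from collections import defaultdict

DWELL_THRESHOLD_S = 2.0   # "sufficiently long, for example at least 2 seconds"

class DwellEvaluator:
    """Accumulates per-feature gaze time and reports which features the user
    can be assumed to have consciously observed."""

    def __init__(self):
        self.dwell = defaultdict(float)

    def record(self, feature_name: str, frame_dt: float) -> None:
        self.dwell[feature_name] += frame_dt          # add one frame's duration

    def consciously_observed(self) -> set:
        return {name for name, t in self.dwell.items() if t >= DWELL_THRESHOLD_S}
```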
In an embodiment of the interactive display system of the present invention, the evaluation component is further configured for generating performance data of the user based on detected orientations of the user’s head and/or the user input. The performance data may take the form of a performance score card or a test score card containing alphanumerical and/or graphical data.
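Performance data could be aggregated along the following lines; the score-card fields and the answers layout are hypothetical, as the patent does not prescribe a format:

```python
def score_card(found: set, required: set, answers: dict) -> dict:
    """Compose a simple score card; the answers layout
    ({question: {"given": ..., "expected": ...}}) is hypothetical."""
    found_pct = 100.0 * len(found & required) / max(len(required), 1)
    correct = sum(1 for a in answers.values() if a["given"] == a["expected"])
    return {
        "features_found_pct": round(found_pct, 1),
        "questions_correct": f"{correct}/{len(answers)}",
        "missed_features": sorted(required - found),
    }
```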
Herein, the user interface component and the evaluation component comprise a processing unit having instructions loaded into it for performing the steps of the invention. The user interface component may use the same processing unit as the evaluation component, or a different one, possibly at a different location.
In an embodiment of the interactive display system of the invention, the orientation sensor is coupled to the display unit. Accordingly, the user may mount both the display unit and the orientation sensor on his/her head in one simple operation.
In a low-cost, powerful embodiment of the interactive display system of the present invention, the display unit, the orientation sensor and the user interface component are comprised by a smartphone device. The smartphone preferably is coupled to a head mounted device. The head mounted device may comprise a touch screen for providing said user input. However, user input may also be provided by a predefined handling of the smartphone or other user input device, or by sound or speech through a microphone, for example.
Instead of a smartphone, another mobile device and/or virtual reality viewing device may be employed in the present invention.
In a second aspect of the present invention, a method of interactive display is provided. The method comprises: mounting a display unit in a fixed position relative to a user’s head; detecting an orientation of the user’s head; displaying a portion of a working environment image on the display unit based on the detected orientation of the user’s head, wherein the working environment image is a virtual reality image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image; if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said feature, displaying a marker image in the working environment image at or near said position; when the detected orientation corresponds to a viewing direction aimed at said position of said feature, removing the marker image and displaying a feature detail image in the working environment image at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.
These and other aspects of the invention will be more readily appreciated as the same becomes better understood by reference to the following detailed description, considered in connection with the accompanying drawings, in which like reference symbols designate like parts.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 depicts an embodiment of an interactive display system of the present invention.
Figure 2 depicts a flow diagram of a method of interactive display of the present invention.
Figure 3a depicts a part of a working environment image as viewable by a user.
Figure 3b depicts a part of the working environment image according to Figure 3a, with a marker image associated with a feature of the working environment image.
Figure 3c depicts a part of the working environment image according to Figure 3a, with a feature detail image associated with a feature of the working environment image.
DETAILED DESCRIPTION OF EMBODIMENTS
Figure 1 shows components of an embodiment of an interactive display system of the present invention. Figure 1 depicts a head 10 of a user wearing a head mounted device 12. The head mounted device 12 comprises a smartphone 14 as seen at the back side thereof. The smartphone 14 comprises a display unit at the front side of the smartphone 14, opposite to the back side. The display unit is a screen facing the eyes of the user. The display unit takes a fixed position relative to the head 10 of the user. Other head mounted devices, in particular virtual reality (VR) viewing devices, whether mobile or dedicated, may be used.
The head mounted device 12 may be provided with optics to convert a first image shown on the display unit to a second image suitable to be seen by the eyes of the user.
The head mounted device 12 may comprise a user input device comprising a touch screen 16a and/or one or more buttons 16b. The user input device 16a, 16b is connected to the smartphone 14 to allow the smartphone to receive a user input signal from the user input device 16a, 16b. A user input device may alternatively be provided separately from the head mounted device 12.
As further indicated in Figure 1, the smartphone 14 comprises a memory 18 storing image data. The combination of the memory 18 and the image data may be referred to as a database. The database contains image data of one or more virtual reality working environment images shot in-situ and representing a real-life working environment, wherein each working environment image comprises different features having different positions in the working environment image. These positions have been identified beforehand, and are known. The database further contains a plurality of feature detail images, wherein each feature detail image corresponds to a feature having a specific position in the working environment image(s).
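One possible in-memory layout for such a database, reusing the Feature sketch given earlier; all names are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class WorkingEnvironmentImage:
    """A working environment image shot in-situ, plus its predefined features
    (Feature as in the earlier hit-test sketch)."""
    image: Any                                # decoded 360-degree pixel array
    features: List[Any] = field(default_factory=list)

# The database maps image names to records; it may live in the smartphone's
# memory (18), or at a remote location reached over a network telecommunication path.
Database = Dict[str, WorkingEnvironmentImage]

def retrieve_wei(db: Database, name: str) -> WorkingEnvironmentImage:
    return db[name]                           # retrieval step of the user interface component
```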
It is noted that the image data, or part of the image data, need not be stored in the memory of the smartphone 14. The image data can also be stored at a remote location, and retrieved from the remote location through a network telecommunication path established between the smartphone 14 and the remote location.
As further indicated in Figure 1, the head mounted device 12 or the smartphone 14 may comprise an orientation sensor 20, such as a set of accelerometers or a gyroscope.
The smartphone 14 is configured to load and run software, to thereby constitute a user interface component and/or an evaluation component. In general, the user interface component and the evaluation component comprise a processing unit, embodied in the smartphone 14, having instructions loaded into it for performing the steps as explained by reference to Figure 2. It is noted that the user interface component and the evaluation component need not be constituted by the smartphone 14, or may be only partially constituted by the smartphone 14, where the user interface component and/or the evaluation component is/are constituted in whole or in part by one or more processing units, having instructions loaded into them, located at a remote location, at least separate from the head mounted device 12. Then, at least part of the user interface functions and evaluation functions may be performed at the remote location, and at least a function of displaying the working environment image(s), the feature detail image(s) and marker images is performed by the smartphone 14. Data communication between the remote location and the smartphone 14 takes place through a network telecommunication path established between the smartphone 14 and the remote location.
It is noted that instead of the smartphone 14 merely a display unit may be mounted in the head mounted device 12.
Figure 2 depicts a flow diagram of functions or steps performed by the user interface component and the evaluation component, referring to the embodiment of the interactive display system of Figure 1.
In a step 201, a working environment image, WEI, is retrieved from the database. In a step 202, an orientation of the user’s head 10 is detected by the orientation sensor 20. In a step 203, a portion of the WEI is displayed on a display unit, such as the display unit of the smartphone 14, based on the orientation of the user’s head as detected by the orientation sensor 20 in step 202. In a decision step 204, it is determined whether (Y) or not (N) the detected orientation of the user’s head corresponds to a viewing direction aimed at a position of a feature of the WEI. If this is the case (Y), in a step 205 a marker image is displayed in the WEI at or near said position. If this is not the case (N), the flow returns to step 202.
Upon display of the marker image in step 205, a timer is started. In a decision step 206, it is determined whether (Y) or not (N) a predetermined time period has lapsed. If this is the case (Y), in a step 207 the marker image is removed. If this is not the case (N), the flow returns to step 204. If, within said predetermined period of time, the orientation of the user’s head 10 changes from one corresponding to a viewing direction aimed at said position of said feature to one not corresponding to such a viewing direction, then from step 204 the flow transfers to step 202 and, according to step 208, the displaying of the marker image is ended, if the marker image was displayed before according to step 205.
In a step 209 following step 207, the feature detail image is displayed in the WEI at or near said position. In a step 210 following step 209, a user input at a user input device, such as an answer to a question posed in, or in association with, the feature detail image, is detected. In a step 211 following step 210, the user input is recorded in a memory for further assessment. After step 211, the flow returns to step 202.
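Tying these steps together, the following event-loop sketch covers steps 201 to 211, reusing the viewport, gaze_hits and MarkerTimer sketches given earlier; sensor, display, input_dev and evaluator are hypothetical interfaces standing in for the orientation sensor, the display unit, the user input device and the evaluation component:

```python
def run_training(db, wei_name, sensor, display, input_dev, evaluator):
    """Event-loop sketch of steps 201 to 211 of Figure 2."""
    wei = retrieve_wei(db, wei_name)                        # step 201: retrieve WEI
    timer = MarkerTimer()
    while True:
        yaw, pitch = sensor.read()                          # step 202: head orientation
        display.show(viewport(wei.image, yaw, pitch))       # step 203: display portion
        hit = next((feat for feat in wei.features
                    if gaze_hits(feat, yaw, pitch)), None)  # step 204: aimed at a feature?
        state = timer.update(hit is not None)               # steps 205-208: marker and timer
        if state == "show_marker":
            display.overlay_marker(hit)                     # step 205: show marker image
        elif state == "show_detail":
            display.overlay_detail(hit)                     # steps 207, 209: detail shown
            answer = input_dev.poll()                       # step 210: detect user input
            if answer is not None:
                evaluator.record_answer(hit.name, answer)   # step 211: record for assessment
```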
Parallel to step 205 and following, in a step 212 detected orientations of the user’s head 10 corresponding to viewing directions aimed at the positions of the predetermined features are recorded in a memory. In a decision step 213, it is determined whether the recorded detected orientations of the user’s head correspond to viewing directions aimed at the positions of some or all of the predetermined features. If this is the case (Y), in a step 214 this is recorded.
Also parallel to step 205 and following, in a step 215 a time period of the orientation of the user’s head having a viewing direction aimed at the positions of the predetermined features is recorded in a memory. In a decision step 216, it is determined whether (Y) each time period exceeds a predetermined time period threshold. If this is the case (Y), in a step 217 this is recorded in a memory.
The recordings in steps 214 and 217 can be used in an assessment of the training of the user. The recordings may be used to generate a report, such as a customized report, which may be automatically uploaded to an existing database.
Alternatively to steps 206 to 207, as indicated by dashed lines, in a decision step 218, after step 205 of displaying the marker image in the WEI at or near said position, it is determined whether (Y) or not (N) a user input is detected at a user input device. If this is the case (Y), the flow continues with step 209 of displaying the feature detail image in the WEI at or near said position. If this is not the case (N), the flow remains at step 218 until a timeout occurs.
As illustrated in Figure 3a, a user may view on the display device a portion of a working environment image 300. In this example, the working environment image shows a portal 302 in a working environment, the portal 302 having two supporting structures 304 and a beam structure 306 supported by the supporting structures 304. The beam structure 306 carries two lamp units 308a, 308b.
For the lamp units 308a, 308b, it has been defined that a specific part 310 of the working environment image 300, indicated by dashed lines, is regarded as a “position” of the feature of the lamp unit 308a in the working environment image 300.
If the detected orientation of the user’s head 10 corresponds to a viewing direction aimed at said position of the feature of the lamp unit 308a, a marker image 312 is displayed in the working environment image 300, as illustrated in Figure 3b.
When predetermined further conditions have been met, such as the lapse of a predetermined period of time, the marker image 312 is removed and a predetermined feature detail image 314 of the lamp unit 308a is displayed in the working environment image 300, as illustrated in Figure 3c. Different feature detail images are possible. As an example, a feature detail image 315 is also possible, containing text associated with the feature. Also, both feature detail images 314, 315, and even more feature detail images, are possible.
Through the working environment image 300 and the feature detail images such as 314, 315, interacting with the user, the user may be trained to perform a specific task, without entering the real working environment. The performance of the user in the training may be recorded and evaluated.
As another example, a lifting process of a tube from a location A to a location B may be performed, wherein a user fulfils the function of a supervisor. When the user has identified a first step in the lifting process, the user interface component may then invite the user to orient his/her head such that it corresponds to a viewing direction aimed at a position of a second step of the lifting process, as sketched below. If the user correctly identifies all steps in the sequence of the lifting process, the user is evaluated as correctly performing the lifting process.
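Such a working sequence can be tracked with a simple cursor over the required order of steps; a sketch with illustrative names:

```python
class SequenceTracker:
    """Advances through the steps of a working sequence (e.g. a lifting
    process) as the user identifies each one."""

    def __init__(self, steps):
        self.steps = steps        # feature names, in the required order
        self.next_idx = 0

    def identify(self, feature_name: str) -> bool:
        """Register an identified feature; True if it was the expected next step."""
        if self.next_idx < len(self.steps) and feature_name == self.steps[self.next_idx]:
            self.next_idx += 1
            return True
        return False

    def completed(self) -> bool:
        return self.next_idx == len(self.steps)
```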
As another example, a process of testing the lining up of a manifold can be performed. The user is to indicate the valves which need to be in an open or a closed position. If the user does not line up the valves properly, he/she fails the test. However, if he/she can complete the lineup correctly, he/she passes the test. This can be done with or without a time limit, the latter causing the user to have to work under time pressure.
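The lineup test reduces to comparing the user’s valve states with the required states, optionally within a time limit; a sketch with illustrative names:

```python
import time

def lineup_test(user_states: dict, required_states: dict,
                started_at: float, time_limit_s: float = None) -> bool:
    """Pass only if every valve is in its required open/closed position and,
    when a time limit is set, the lineup was finished in time."""
    complete_and_correct = user_states == required_states
    in_time = time_limit_s is None or (time.monotonic() - started_at) <= time_limit_s
    return complete_and_correct and in_time
```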
As explained in detail above, in a system and method of interactive display, a display unit is mounted in a fixed position relative to a user’s head. An orientation of the head is detected. A portion of a working environment image is displayed based on the detected orientation of the head, wherein the working environment image represents a real-life working environment. The working environment image comprises different features having different positions in the image. If the detected orientation of the head corresponds to a viewing direction aimed at said position of said feature, a marker image is displayed at or near said position. When the detected orientation corresponds to a viewing direction aimed at said position of said feature, the marker image is removed and a feature detail image is displayed at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.
As required, detailed embodiments of the present invention are disclosed herein. However, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Further, the terms and phrases used herein are not intended to be limiting, but rather, to provide an understandable description of the invention.
The terms "a"/"an", as used herein, are defined as one or more than one. The term plurality, as used herein, is defined as two or more than two. The term another, as used herein, is defined as at least a second or more. The terms including and/or having, as used herein, are defined as comprising (i.e., open language, not excluding other elements or steps). Any reference signs in the claims should not be construed as limiting the scope of the claims or the invention.
The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The term coupled, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. A single processor or other unit may fulfil the functions of several items recited in the claims. On the other hand, a function recited in the claims may be performed by multiple processors in communication with each other.
The terms software, program, software application, and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system. A program, computer program, software or software application may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system. A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

Claims (23)

1. An interactive display system, comprising: a database containing image data of: - a virtual reality working environment image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image; and - a plurality of feature detail images, wherein each feature detail image corresponds to a feature having a specific position in the working environment image; a display unit configured to be mounted in a fixed position relative to a user’s head; an orientation sensor configured to be coupled to the user’s head; and a user interface component configured for: - retrieving a working environment image from the database; - detecting an orientation of the user’s head by the orientation sensor; - displaying a portion of the working environment image on the display unit based on the detected orientation of the user’s head; - if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said feature, displaying a marker image in the working environment image at or near said position; - when the detected orientation corresponds to a viewing direction aimed at said position of said feature, removing the marker image and displaying the feature detail image in the working environment image at or near said position.

2. An interactive display system according to claim 1, wherein the step of displaying the feature detail image in the working environment image at or near said position is performed after a predetermined period of time.

3. An interactive display system according to claim 2, wherein the predetermined period of time is triggered by a starting time of displaying the marker image.

4. An interactive display system according to claim 2 or 3, wherein, if the orientation of the user’s head corresponding to a viewing direction aimed at said position of said feature changes, within said predetermined period of time, to an orientation of the user’s head not corresponding to a viewing direction aimed at said position of said feature, the displaying of the marker image is terminated.

5. An interactive display system according to claim 1, wherein the user interface component is further configured for: - after displaying the marker image in the working environment image at or near said position, detecting a user input at a user input device; and - after detecting the user input, controlling the user interface component to perform the step of displaying the feature detail image in the working environment image at or near said position.

6. An interactive display system according to claim 1, further comprising an evaluation component, wherein the evaluation component is configured for: - recording detected orientations of the user’s head.

7. An interactive display system according to claim 1, further comprising an evaluation component, wherein the user interface component is further configured for: - after displaying the feature detail image in the working environment image at or near said position, detecting a user input at a user input device; and wherein the evaluation component is configured for: - recording the user input.

8. An interactive display system according to claim 6 or 7, wherein the evaluation component is further configured for: - for predetermined features, recording whether detected orientations of the user’s head correspond to viewing directions aimed at the positions of the predetermined features.

9. An interactive display system according to claim 8, wherein the evaluation component is further configured for recording a time period of the orientation of the user’s head having a viewing direction aimed at the positions of the predetermined features.

10. An interactive display system according to claim 9, wherein the evaluation component is further configured for determining whether each time period exceeds a predetermined time period threshold.

11. An interactive display system according to any one of claims 6 to 10, wherein the evaluation component is further configured for: - generating performance data of the user based on the detected orientations of the user’s head and/or the user input.

12. An interactive display system according to any one of the preceding claims, wherein the user interface component comprises a processing unit having instructions loaded into it for performing the steps of any one of the preceding claims.

13. An interactive display system according to any one of claims 6 to 12, wherein the evaluation component comprises a processing unit having instructions loaded into it for performing the steps of any one of claims 6 to 12.

14. An interactive display system according to any one of the preceding claims, wherein the working environment image is a still image or a moving image, recorded in-situ.

15. An interactive display system according to any one of the preceding claims, wherein the feature detail image is a still image or a moving image.

16. An interactive display system according to any one of the preceding claims, wherein the feature detail image is associated with visual and/or textual and/or audible information relating to the feature.

17. An interactive display system according to any one of the preceding claims, wherein the orientation sensor is coupled to the display unit.

18. An interactive display system according to any one of the preceding claims, wherein the display unit, the orientation sensor and the user interface component are comprised by a smartphone device.

19. An interactive display system according to claim 18, wherein the smartphone is coupled to a head mounted device.

20. An interactive display system according to claim 19, wherein the head mounted device comprises a touch screen for providing said user input.

21. An interactive display system according to any one of the preceding claims, wherein the features relate to “drops”, i.e. mechanical parts which are located above the user, which may potentially fall down on a user and cause injury or death, and which can only be seen by looking up, and wherein the system is configured for creating awareness of these drops by training the user to spot the drops above him/her by: - if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said drop, displaying a marker image of the drop in the working environment image at or near said position; - when the detected orientation corresponds to a viewing direction aimed at said position of said drop, removing the marker image of the drop and displaying the feature detail image of the drop in the working environment image at or near said position.

22. A method of interactive display, comprising: mounting a display unit in a fixed position relative to a user’s head; detecting an orientation of the user’s head; displaying a portion of a working environment image on the display unit based on the detected orientation of the user’s head, wherein the working environment image is a virtual reality image shot in-situ, and representing a real-life working environment, the working environment image comprising different features having different positions in the working environment image; if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said feature, displaying a marker image in the working environment image at or near said position; when the detected orientation corresponds to a viewing direction aimed at said position of said feature, removing the marker image and displaying a feature detail image in the working environment image at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.

23. A method of interactive display according to claim 22, wherein the features relate to drops, i.e. mechanical parts which are located above the user, which may potentially fall down on a user and cause injury or death, and which can only be observed by looking up, and wherein the method comprises creating awareness of these drops by training the user to observe all drops above him/her, the method comprising: if the detected orientation of the user’s head corresponds to a viewing direction aimed at said position of said drop, displaying a marker image in the working environment image at or near said position; when the detected orientation corresponds to a viewing direction aimed at said position of said drop, removing the marker image and displaying a feature detail image in the working environment image at or near said position, wherein the feature detail image corresponds to a feature having a specific position in the working environment image.
NL2019178A 2017-07-05 2017-07-05 Interactive display system, and method of interactive display NL2019178B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
NL2019178A NL2019178B1 (en) 2017-07-05 2017-07-05 Interactive display system, and method of interactive display
PCT/NL2018/050433 WO2019009712A1 (en) 2017-07-05 2018-07-04 Interactive display system, and method of interactive display

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2019178A NL2019178B1 (en) 2017-07-05 2017-07-05 Interactive display system, and method of interactive display

Publications (1)

Publication Number Publication Date
NL2019178B1 true NL2019178B1 (en) 2019-01-16

Family

ID=59656132

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2019178A NL2019178B1 (en) 2017-07-05 2017-07-05 Interactive display system, and method of interactive display

Country Status (2)

Country Link
NL (1) NL2019178B1 (en)
WO (1) WO2019009712A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109947546B (en) * 2019-03-13 2021-08-20 北京乐我无限科技有限责任公司 Task execution method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140139551A1 (en) * 2012-11-21 2014-05-22 Daniel McCulloch Augmented reality help
US20150268469A1 (en) * 2013-12-10 2015-09-24 The Boeing Company Systems and methods for providing interactive production illustration information
US20170072305A1 (en) * 2015-09-16 2017-03-16 Gree, Inc. Virtual image display program, virtual image display apparatus, and virtual image display method
AU2017100357A4 (en) * 2017-03-28 2017-04-27 Suegeo Pty Ltd Interactive safety training and assessment
US20170148214A1 (en) * 2015-07-17 2017-05-25 Ivd Mining Virtual reality training

Also Published As

Publication number Publication date
WO2019009712A1 (en) 2019-01-10

Similar Documents

Publication Publication Date Title
Jeelani et al. Development of virtual reality and stereo-panoramic environments for construction safety training
Wolf et al. Investigating hazard recognition in augmented virtuality for personalized feedback in construction safety education and training
US11012595B2 (en) Augmented reality
US20180357922A1 (en) Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems
CN108229791B (en) Electronic device and method for reporting sign-based training sessions
US20120139828A1 (en) Communication And Skills Training Using Interactive Virtual Humans
De Armas et al. Use of virtual reality simulators for training programs in the areas of security and defense: a systematic review
KR102137006B1 (en) Safety education training system using virtual reality device and method controlling thereof
Barot et al. V3S: A virtual environment for risk-management training based on human-activity models
US20080147585A1 (en) Method and System for Generating a Surgical Training Module
JP6382490B2 (en) Symbiotic helper
NL2019178B1 (en) Interactive display system, and method of interactive display
Olayiwola et al. Design and Usability Evaluation of an Annotated Video–Based Learning Environment for Construction Engineering Education
Boel et al. Applying educational design research to develop a low-cost, mobile immersive virtual reality serious game teaching safety in secondary vocational education
CN113064486B (en) VR education training method and device based on crime scene investigation
US11587451B2 (en) VR education system
Crego Critical incident management: Engendering experience through simulation
US11917324B1 (en) Anti-cheating methods in an extended reality environment
Choong et al. Augmented Reality (AR) Usability Evaluation Framework
O’Kane et al. Perception studies
US11990059B1 (en) Systems and methods for extended reality educational assessment
Uchiya et al. Development of Indoor Evacuation Training System Using VR HMD
Smith et al. Simulation Scriptwriting and Storyboarding Design Considerations for Production
Dalto New Technologies in Safety Training
Kleygrewe Immersed in Training: Advancing Police Practice with Virtual Reality