EP3649538A1 - Assembly and method for communicating by means of two visual output devices - Google Patents
Assembly and method for communicating by means of two visual output devices
- Publication number
- EP3649538A1 (application EP18736891.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- output device
- visual output
- image
- human
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 230000000007 visual effect Effects 0.000 title claims abstract description 222
- 238000000034 method Methods 0.000 title claims abstract description 15
- 230000005540 biological transmission Effects 0.000 claims abstract description 32
- 238000004891 communication Methods 0.000 claims abstract description 27
- 241000282414 Homo sapiens Species 0.000 claims description 167
- 210000003128 head Anatomy 0.000 claims description 25
- 210000001508 eye Anatomy 0.000 claims description 22
- 238000011156 evaluation Methods 0.000 claims description 20
- 230000004807 localization Effects 0.000 claims description 17
- 210000001525 retina Anatomy 0.000 claims description 8
- 230000004044 response Effects 0.000 claims description 6
- 230000008859 change Effects 0.000 claims description 5
- 238000006243 chemical reaction Methods 0.000 claims description 5
- 230000008054 signal transmission Effects 0.000 claims description 5
- 238000005259 measurement Methods 0.000 claims description 4
- 230000001419 dependent effect Effects 0.000 claims 1
- 238000012544 monitoring process Methods 0.000 claims 1
- 239000011521 glass Substances 0.000 description 100
- 230000033001 locomotion Effects 0.000 description 17
- 241000282412 Homo Species 0.000 description 7
- 230000008921 facial expression Effects 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 6
- 238000011161 development Methods 0.000 description 4
- 230000018109 developmental process Effects 0.000 description 4
- 230000004886 head movement Effects 0.000 description 4
- 230000001755 vocal effect Effects 0.000 description 4
- 230000001133 acceleration Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000005070 sampling Methods 0.000 description 3
- 210000000697 sensory organ Anatomy 0.000 description 3
- 230000004888 barrier function Effects 0.000 description 2
- 230000000295 complement effect Effects 0.000 description 2
- 230000008094 contradictory effect Effects 0.000 description 2
- 201000003152 motion sickness Diseases 0.000 description 2
- 238000003909 pattern recognition Methods 0.000 description 2
- 208000037175 Travel-Related Illness Diseases 0.000 description 1
- 230000009471 action Effects 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 210000000887 face Anatomy 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 206010025482 malaise Diseases 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012634 optical imaging Methods 0.000 description 1
- 230000000149 penetrating effect Effects 0.000 description 1
- 230000002207 retinal effect Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/56—Display arrangements
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/56—Display arrangements
- G01S7/62—Cathode-ray tube displays
- G01S7/6245—Stereoscopic displays; Three-dimensional displays; Pseudo-three dimensional displays
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/52—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
- G01S7/56—Display arrangements
- G01S7/62—Cathode-ray tube displays
- G01S7/6281—Composite displays, e.g. split-screen, multiple images
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S15/00—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
- G01S15/02—Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
- G01S15/06—Systems determining the position data of a target
- G01S15/46—Indirect determination of position data
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0141—Head-up displays characterised by optical features characterised by the informative content of the display
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B2027/0178—Eyeglass type
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
Definitions
- the invention relates to an arrangement and a method which facilitate the communication between two people using two visual output devices.
- a communication server 12 and a network 14 support a person in need of assistance from a distance.
- the communication server 12 has a first receiving device 18.
- the person in need of assistance wears spectacles 28, to which a stereo camera 30 and a lamp 34 are attached and which have a second acceleration sensor 46.
- an optical output device is arranged.
- the images from the camera 30 in the glasses 28 are transmitted to the server 12.
- the communication system 10 may further include a mobile terminal 48 and a stationary computer 52.
- the user of the computer 52 or the user of the smartphone 48 can then see what the wearer of the glasses 28 sees and what is picked up by the camera 30 and transmitted to the server 12.
- the user of the computer 52 or the user of the smartphone 48 can support the wearer of the glasses 28 acoustically and visually (i.e. by video telephony). In one embodiment, an image evaluation unit 20 of the communication system 10 evaluates images from the camera 30.
- Deformation sensors for detecting the movement of the upper half of the face of the wearer are disclosed, wherein a camera is fixed on the head-mounted display, for example by means of a curved support, such that the camera is aligned with the lower half of the face, which is not covered by the head-mounted display, in order to also detect the movements of the lower half of the face.
- DE 102014018056 A1 describes virtual reality glasses with a display device as a near-eye display for displaying a virtual reality, and a detection means for detecting a predetermined head movement of the wearer of the virtual reality glasses, upon which a camera image of the wearer's environment captured by a camera system is displayed on the display device.
- the camera system is arranged directly on the virtual reality glasses, facing in the viewing direction of the wearer. After the head movement, the camera image of the real environment can also be partially overlaid with the virtual environment.
- a disadvantage of these virtual reality glasses is that the real environment is displayed only after the wearer has carried out the predetermined head movement, such as a nodding motion. Thus, another person cannot tell whether this nodding movement is being executed as approval in a communication or to activate the display.
- DE 10201410721 A1 describes a display device, for example a three-dimensional screen or three-dimensional glasses, for displaying a virtual reality, with a gesture recognition device with two integrated cameras for determining a movement of a hand, wherein the display device displays a representation of the hand.
- DE 202009010719 U1 describes a communication system with a person operating a transmitting station and a person operating a receiving station, wherein the person operating the transmitting station instructs the person operating the receiving station via executable instructions, and the latter transfers images of an object recorded with a camera to the transmitting station, thereby supporting the person there, for example, in a purchase decision.
- DE 202016008297 U1 discloses a computer-based storage medium for
- the object of the invention is to provide an arrangement and a method which facilitate the exchange of messages between two people.
- the arrangement according to the invention comprises
- the first visual output device comprises a first presentation device.
- the second visual output device comprises a second presentation device. Both the first and the second visual output device can be carried by one human each.
- Each camera system is capable of producing an image of the real environment of the camera system.
- the first camera system is positioned or can be positioned so that the image generated by the first camera system completely or at least partially shows a human wearing the first visual output device.
- the second camera system is positioned or can be positioned so that the image generated by the second camera system completely or at least partially shows a person wearing the second visual output device.
- the image transmission device is capable of transmitting an image which the first camera system has generated to the second visual output device. Accordingly, the image transmission device is capable of transmitting an image which the second camera system has generated to the first visual output device.
- the respective display device of a visual output device is able to present a representation comprising an image which has been transmitted to this visual output device.
- the presentation device presents this representation in a form in which a person wearing this visual output device can visually perceive it.
- the image transmission device is capable of transmitting images of the real environment that the first camera system has generated to the second visual output device. It is able to transmit images of the real environment which the second camera system has generated to the first visual output device.
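The following is a minimal sketch of such an image transmission device, assuming a simple TCP channel between the two output devices and already-encoded (e.g. JPEG) frames; the names FRAME_PORT, send_frame and recv_frame are illustrative assumptions, not terms from the patent.

```python
# Sketch of the image transmission device: frames captured on one output
# device are length-prefixed and relayed to the other device over TCP.
import socket
import struct

FRAME_PORT = 5600  # assumed port for the image channel


def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    """Send one encoded camera frame, prefixed with its 4-byte length."""
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)


def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame; blocks until complete."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return _recv_exact(sock, length)


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the image channel")
        buf += chunk
    return buf
```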
- the method according to the invention determines how messages are exchanged between a first person and a second person, and is carried out using a solution according to the invention.
- the solution according to the invention can be used in a situation in which two people have to exchange messages with each other in order to solve a task together, for example to jointly control, regulate or monitor a technical system, or to jointly assess an environmental condition.
- the two people talk to each other.
- the two people are in different rooms, and an acoustic barrier, such as a soundproof wall, may be present between the two people. Or ambient sounds drown out spoken words. Or the two people do not speak a common language.
- the arrangement according to the invention and the method according to the invention improve the exchange of messages, especially in this situation.
- the arrangement according to the invention and the method according to the invention can be used in combination with acoustic message transmission, for example by means of a microphone and headphones, or instead of acoustic communication.
- the arrangement according to the invention and the method according to the invention enable the following type of message exchange:
- the first camera system generates at least one image that shows the first human.
- the second camera system generates at least one image that shows the second human.
- the images from the first camera system are transmitted to the second visual output device, and the images from the second camera system are transmitted to the first visual output device.
- the first person thus sees images of the second person, and conversely the second person sees images of the first person.
- One person sees the gestures and facial expressions of the other person. These gestures and facial expressions can complement spoken words. It is well known that the risk of misunderstandings between two people is reduced if gestures and facial expressions are added to verbal communication.
- gestures and facial expressions can also take the place of spoken words, for example if the two people do not speak a common language. Thanks to the two visual output devices and the two camera systems, the two people need not be able to see each other directly.
- the first camera system is capable of producing an image of a human wearing the first visual output device.
- the first camera system and the first output device can be designed such that the first output device can be moved freely relative to the first camera system, and in particular the distance and the orientation between the first output device and the first camera system can be changed freely. As a result, a human who carries the first visual output device, or at least components of it, on his head is not restricted in his movements by the first camera system.
- it is not necessary for this person to carry a camera of the first camera system on his body in addition to the first visual output device.
- the first display device is capable of generating a first virtual object.
- the second presentation device is capable of generating a second virtual object.
- These two generated virtual objects represent the same information, in a form visually perceivable by a human.
- the first presentation device is able to present a common representation that a person wearing the first visual output device can visually perceive.
- This common representation includes an image transmitted to the first visual output device and the first virtual object.
- the second presentation device is able to present a common representation which a person wearing the second visual output device can visually perceive.
- This shared representation includes an image transmitted to the second visual output device and the second virtual object.
- the two virtual objects may be the same or different. Even with different virtual objects they represent the same information.
- each presentation device generates at least one virtual object and presents this virtual object together with an image which has been transmitted to the visual output device via the image transmission device. It is possible that at least one presentation device generates a plurality of virtual objects and displays this plurality of virtual objects together with the received image.
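As a sketch of how a presentation device might combine a transmitted camera image with a generated virtual object, the following assumes both are 8-bit RGB numpy arrays of equal size and uses plain alpha blending; the function name and the alpha default are assumptions, not specified by the patent.

```python
# Sketch of a presentation device compositing a virtual object onto a
# transmitted camera image (assumed 8-bit RGB numpy arrays, alpha in [0, 1]).
import numpy as np


def composite(camera_image: np.ndarray, virtual_object: np.ndarray,
              alpha: float = 0.7) -> np.ndarray:
    """Blend the virtual object over the camera image.

    Both arrays must share the same (H, W, 3) shape; alpha=1.0 makes the
    virtual object fully opaque, smaller values leave the real environment
    partially visible.
    """
    blended = (alpha * virtual_object.astype(np.float32)
               + (1.0 - alpha) * camera_image.astype(np.float32))
    return blended.clip(0, 255).astype(np.uint8)
```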
- the visual output device is able to output information in a form visually perceptible by a human and thus to represent a virtual reality.
- a variety of information can be displayed.
- the presentation device presents this virtual reality on a screen or on the retina of a human eye. Because this screen is part of the head-worn visual output device, the screen moves with the head when the human moves it. This feature ensures that the human always has the presented virtual reality in view.
- a visual output device with its own output device camera eliminates the need to switch between a representation of the real environment and a representation of a virtual reality. Such switching may confuse a person carrying the visual output device, particularly when the switching is abrupt or when the human is moving relative to the depicted real environment but not relative to the depicted virtual reality.
- the first visual object, which the first display device shows, and the second visual object, which the second display device shows, display the same information in a visually perceptible form. If two people wear the two visual output devices, both therefore perceive the same information.
- the solution according to the arrangement additionally comprises a signal transmission device.
- This signal transmission device is able to transmit a first signal to the first visual output device. It can transmit a second signal to the second visual output device. Both signals contain the same information.
- the first display device is capable of generating the first virtual object depending on the first signal.
- the second display device is capable of generating the second virtual object depending on the second signal.
- This embodiment ensures that the two visual objects are based on the same signal and thus in fact the same information is displayed on both visual output devices.
- the arrangement comprises a sensor.
- the arrangement is in data connection to a sensor.
- the sensor is capable of measuring a value of a variable magnitude.
- the sensor is able to generate a signal which depends on the measured value.
- the signal transmission device is capable of transmitting the generated signal to both visual output devices, preferably at the same time.
- the measured value which the sensor has measured is transmitted to both visual output devices.
- Each display device generates a virtual object depending on the received signals. This virtual object shows the same measured value.
- two people carrying the two visual output devices can exchange messages about this reading. It is not necessary for one of the two people to look at the sensor or at a physical output device that displays the measured reading in order to read it. This is especially important if the sensor is located in an area that is difficult to reach or dangerous for people.
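A minimal sketch of the signal transmission device broadcasting one measured value to both visual output devices at the same time; the SignalTransmitter class and the receive_signal method on the devices are assumed interfaces, not part of the patent.

```python
# Sketch: one sensor reading is packaged once and delivered to every
# output device, so both display devices build their virtual object
# from the identical measured value.
import json
import time


class SignalTransmitter:
    def __init__(self, output_devices):
        self.output_devices = output_devices  # e.g. [first_device, second_device]

    def broadcast(self, sensor_id: str, value: float) -> None:
        signal = json.dumps({"sensor": sensor_id,
                             "value": value,
                             "timestamp": time.time()})
        for device in self.output_devices:
            device.receive_signal(signal)  # assumed device interface
```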
- each virtual object is a virtual replica of the physical output device.
- the sensor whose signal is transmitted to both visual output devices is, for example, an active sonar system, a passive sonar system, a towed antenna, a radar system, a geoposition receiver, a speedometer and/or a wind direction or wind speed gauge, in particular on board a watercraft.
- the arrangement comprises a first camera system and a second camera system.
- the first visual output device comprises a first output device camera.
- the second visual output device includes a second output device camera.
- the first output device camera is capable of producing an image of the real environment of the first visual output device.
- the second output device camera is capable of producing an image of the real environment of the second visual output device.
- the first output device camera can be attached to the head of a person, who carries the first visual output device.
- the second output device camera can be attached to the head of a person, who carries the second visual output device.
- each visual output device has an output device camera.
- This output device camera is capable of producing an image of the real environment.
- the output device camera of a visual output device shows what a human wearing this output device on his head would see if he did not wear the output device.
- An image of the output device camera can be displayed to the person carrying this output device camera on his head, or to the person carrying the other visual output device of the arrangement on his head.
- the image transmission device is additionally able to transmit images from the real environment which the first output device camera has generated to the second visual output device. It is additionally capable of transmitting images of the real environment which the second output device camera has generated to the first visual output device.
- the first presentation device is able to present a representation which comprises an image which was generated by the first output device camera. This representation can be visually perceived by a person wearing the first visual output device.
- the second display device is able to present a representation with an image which the second output device camera has generated. This representation can be visually perceived by a person wearing the second visual output device.
- an image that an output device camera has created is presented to a human who wears the visual output device with this output device camera.
- the presentation device presents this image to a human wearing the visual output device with this display device on his head. Because the human carries the visual output device on his head, the output device, and thus the presentation device, moves together with the head when the human moves it. This prevents the different sense organs of this person from providing contradictory information: on the one hand the eyes, which see the image shown, and on the other hand further sense organs that perceive the person's spatial position, orientation and movement in the room.
- the presentation device is capable of displaying an image of the real environment that a human wearing the output device would see if he did not wear the output device.
- This image of the real environment is generated by the output device camera and follows a head movement of the human carrying the camera.
- a visual output device with an output device camera allows a person, for example, to observe a machine or system: the output device camera captures an image of this machine or system, and the presentation device of this visual output device shows this image. It is possible to additionally display a virtual object which displays information in a humanly perceptible form.
- the first display device can be switched between at least two different modes.
- In one mode, the representation presented by the first display device comprises an image which has been transmitted by the image transmission device and was generated, for example, by the second camera system. In the other mode, this representation includes an image created by the first output device camera. Accordingly, the second display device can be switched between two different modes.
- the viewing direction of the first output device camera coincides with the standard viewing direction of a human carrying the first visual output device with the first output device camera.
- the viewing direction of the second output device camera coincides with the standard viewing direction of a person carrying the second visual output device with the second output device camera.
- the viewing direction of at least one output device camera coincides with the standard viewing direction of a person wearing the visual output device.
- the visual output device includes a carrier on which the display device and the output device camera are mounted.
- the output device camera is mounted on the carrier so that it faces away from the human face and faces outward in the standard viewing direction into the real environment.
- the images provided by the output device camera thus arranged will show the real environment from the same viewing direction from which the human would perceive the real environment if he did not wear the visual output device.
- the displayed images from the output device camera thereby match the spatial position and movement of the human head even better. Contradictory information from different sensory organs is prevented with even greater certainty. This avoids the impression, often perceived as unpleasant, that can otherwise arise for a human wearing the visual output device.
- the image transmission device and / or at least one visual output device comprises an image intensifier.
- This image intensifier is capable of amplifying an image produced by a camera system or an output device camera.
- the respective presentation device is able to present an image which has been amplified by the image intensifier.
- the image transmission device and / or at least one visual output device comprises a conversion device.
- This conversion device is capable of converting an image in the infrared light region into an image in the visible light region.
- the respective presentation device is able to present an image that has been converted by the conversion device.
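A sketch of such a conversion device, assuming the infrared image arrives as a single-channel intensity array; the normalisation and the red-to-blue false-colour ramp are illustrative choices, not the patent's method.

```python
# Sketch of the conversion device: map a single-channel infrared intensity
# image onto a visible false-colour image.
import numpy as np


def ir_to_visible(ir_image: np.ndarray) -> np.ndarray:
    """Normalise an IR intensity image and map it to a red-to-blue ramp."""
    ir = ir_image.astype(np.float32)
    span = ir.max() - ir.min()
    norm = (ir - ir.min()) / span if span > 0 else np.zeros_like(ir)
    rgb = np.empty(ir.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (norm * 255).astype(np.uint8)          # warm regions -> red
    rgb[..., 1] = 0
    rgb[..., 2] = ((1.0 - norm) * 255).astype(np.uint8)  # cold regions -> blue
    return rgb
```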
- the arrangement comprises at least one input device.
- the first visual output device and / or the second visual output device is in data communication with the or an input device. It is possible that each visual output device is in data connection with one input device each.
- the or each input device is capable of detecting an input of a human, in particular a human who is wearing a visual output device of the arrangement.
- the presentation device of a visual output device is capable of altering the presentation presented in response to an input acquired with the associated input device. For example, the appearance of a virtual object is changed.
- Using the input device, a person can change a presented representation with an image without having to operate the visual output device itself.
- for example, the human can increase or decrease the imaging scale or the brightness of the image.
- the input device comprises a remote control for the first camera system and/or the second camera system. A person wearing the first visual output device can thereby change the images that the second camera system generates and which are transmitted to the first visual output device and presented by the first display device. Accordingly, a person wearing the second visual output device can change the images from the first camera system.
- the input device may comprise, for example, a mouse, a joystick, a switch or a touchpad.
- the input device may be configured to detect a head movement of the person, for example by means of a motion sensor or an acceleration sensor, as sketched below.
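As one hedged illustration of detecting a head movement as user input, the sketch below watches a stream of vertical acceleration samples for a nod-like spike; the threshold and window size are assumptions chosen only for illustration.

```python
# Sketch: detect a deliberate head nod from acceleration samples,
# as one possible realisation of the input device.
from collections import deque

NOD_THRESHOLD = 12.0   # m/s^2, assumed vertical acceleration peak
WINDOW = 10            # number of recent samples to inspect


class NodDetector:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)

    def feed(self, vertical_acceleration: float) -> bool:
        """Return True once a nod-like spike appears in the recent window."""
        self.samples.append(vertical_acceleration)
        return max(self.samples) > NOD_THRESHOLD
```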
- the input device may also include a visual evaluation unit, which detects a human gesture and derives therefrom a user input of this person, for example by pattern recognition.
- actuation of the input device causes both
- This embodiment with the input device allows the two people to communicate acoustically as well as visually without the two humans needing to be within earshot or sight.
- the arrangement comprises a voice recognition device.
- This speech recognition device recognizes a speech input of a human wearing a visual output device having a speech input unit.
- the arrangement generates information from the recognized voice input.
- the display devices each generate a virtual object containing this information.
- each presentation device presents the virtual object for the voice input together with an image from a camera system or an output device camera. It is also possible that the voice recognition device recognizes a voice input that was made with the one visual output device.
- the presentation device of the other visual output device generates a virtual object which depends on the recognized speech input and presents this virtual object together with an image. Both embodiments make it possible to additionally represent spoken words visually on the visual output device, for example a verbal statement or a verbal reference.
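A small sketch of turning a recognised voice input into a virtual-object description that both presentation devices can render identically; the dictionary fields are assumptions, and any speech recognition engine that yields a transcript string could feed it.

```python
# Sketch: package a recognised utterance as a virtual-object description
# that is sent to both presentation devices.
def voice_input_to_virtual_object(transcript: str) -> dict:
    """Describe a text label showing the spoken words."""
    return {"type": "text_label",
            "text": transcript,
            # both devices receive the same description, so the spoken
            # words are shown identically on both screens
            "style": {"highlight": transcript.endswith("!")}}
```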
- the first visual output device and / or the second visual output device belong to a communication device.
- at least one component each of the first visual output device and / or the second visual output device belong to a communication device.
- This communication device can be carried on the head of a person who carries a visual output device of the arrangement.
- the communication device further comprises a voice input unit, in particular a microphone, and a voice output unit, in particular a headphone.
- the first visual output device encloses an optically dense space in front of the eyes of a human wearing the first visual output device.
- the second visual output device encloses a visually dense space in front of the eyes of a person wearing the second visual output device. It is possible that both visual output devices of the arrangement each enclose a visually dense space in front of the eyes of a human. The or each optically dense space prevents light from the real environment from penetrating into the optically dense space.
- At least one visual output device encloses an optically dense space in front of the eyes of a human wearing the output device.
- the visual output device prevents light from entering the optically dense space from the real environment. Because an optically dense space is provided, the representation presented is the only visual information for a human who carries the visual output device.
- the visual output device prevents light impressions from outside from being superimposed on the presented representation. Such light impressions can cause the person to become confused, to recognize certain segments of the presented representation only poorly or not at all, or to suffer overloaded and quickly fatigued eyes. This unwanted effect can occur in particular when the impressions vary greatly or rapidly over time due to changing ambient brightness.
- a presentation device can be designed in such a way that it displays pictures of differing brightness differently.
- a visual output device presents an image from a camera system and / or from an output device camera together with at least one virtual object.
- at least one virtual object and thus a virtual reality is overlaid with an image of the real environment.
- the real environment is rendered weaker or stronger than the virtual reality information. This allows a person who wears the visual output device to perceive a partially transparent overlay of the virtual reality with the depicted real environment.
- At least one screen is adjacent to the optically dense space provided by a visual output device in front of the eyes of a human wearing this output device.
- the or each screen is located in front of at least one eye of a human wearing this visual output device.
- the presentation device is able to present the representation with the transmitted image from a camera system and / or from an output device camera on this screen.
- At least one visual output device presents a
- the visual output device includes a single screen adjacent to the optically dense space and simultaneously positioned in front of both eyes of a human carrying the output device.
- the output device comprises two screens. Each screen adjoins the optically dense space and is positioned in front of one eye of the human.
- At least one display device comprises a so-called retina projector.
- This retina projector projects the representation with the image onto the retina of at least one eye of a human wearing the visual output device.
- the presentation device functions as a so-called retinal projector.
- This retina projector projects the representation with the image of the real environment directly onto the retina of an eye of the human wearing the visual output device.
- the arrangement comprises a first camera system and a second camera system.
- the first camera system is capable of producing at least one image of a human wearing the first visual output device.
- the second camera system is capable of producing at least one image of a human carrying the second visual output device.
- the first person with the first visual output device can move freely relative to the first camera system, and the second person with the second visual output device can move freely relative to the second camera system.
- the arrangement comprises at least one localization device which corresponds to the first camera system and / or to the second camera system. It is possible that each camera system is assigned a localization device.
- the associated camera system includes a camera and an actuator for this camera.
- the localization device is able to determine the position of a visual output device in the room.
- the associated camera system generates images of the person carrying this visual output device.
- the actuator is capable of moving the camera of this associated camera system, depending on localization device signals that locate the visual output device.
- At least one camera system comprises an actuator, and the arrangement comprises a location device.
- the locating device can detect the location of a human, in particular a human who carries a visual output device of the arrangement. It is possible that a visual output device is mechanically connected to a position transmitter and the locating device receives and evaluates signals from this position transmitter.
- the actuator is capable of moving at least one camera of the camera system depending on signals from the localization device. This allows the moving camera to follow the movements of a person carrying the visual output device.
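The following sketches the control loop implied here: the localization device reports positions of the output device, and the actuator pans the camera toward them. The 2-D geometry, the position stream and the set_pan call are assumptions standing in for the real hardware interfaces.

```python
# Sketch of camera tracking: slew a fixed camera toward the last reported
# position of the visual output device.
import math


def pan_angle_to(target_x: float, target_y: float,
                 camera_x: float, camera_y: float) -> float:
    """Bearing from the fixed camera to the located output device, in radians."""
    return math.atan2(target_y - camera_y, target_x - camera_x)


def track(localizer, actuator, camera_pos=(0.0, 0.0)):
    """Continuously pan the camera toward each reported (x, y) position."""
    for x, y in localizer:  # assumed stream of positions from the localizer
        actuator.set_pan(pan_angle_to(x, y, *camera_pos))  # assumed actuator call
```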
- corresponding markings are arranged on the visual output device or on the human body.
- the localization device comprises an image recognition unit which detects these markings.
- That visual output device whose position is to be determined in space is mechanically connected to a position transmitter.
- the location facility includes this location transmitter and a receiver.
- the position transmitter on the visual output device is capable of transmitting a position signal.
- the receiver of the localization device is able to receive this position signal.
- The localization device can furthermore drive the actuator depending on a received position signal.
- the visual output device comprises a transmitter which transmits a position signal to the localization device. This transmitter on the output device can emit ultrasonic signals, for example.
- the design with a transmitter and a receiver makes it possible in many applications to reliably determine the current position of a person who carries a visual output device with the transmitter, in particular when an image generated by a camera system shows another human in addition to the human with the visual output device, or under poor visibility conditions.
- the position signal comprises an identifier of the visual output device, so that the localization device is able to reliably distinguish the received position signal of the visual output device from other signals.
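A sketch of a position signal that carries the output device's identifier so the localization device can discard foreign signals; the binary packet layout is an assumption, not defined by the patent.

```python
# Sketch: encode and decode a position signal that carries the sending
# output device's identifier alongside its 3-D position.
import struct


def encode_position_signal(device_id: int, x: float, y: float, z: float) -> bytes:
    return struct.pack(">Ifff", device_id, x, y, z)


def decode_position_signal(packet: bytes, expected_id: int):
    device_id, x, y, z = struct.unpack(">Ifff", packet)
    if device_id != expected_id:
        return None  # signal from some other device: ignore it
    return (x, y, z)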
- the arrangement comprises an image evaluation unit.
- This image evaluation unit can automatically evaluate an image which was generated by a camera system of the arrangement. By evaluating the image, the image evaluation unit can automatically determine optically detectable attributes of a human who is shown in this image.
- the image evaluation unit is furthermore able to identify a data record for a human in a data memory, among a predefined set of data records with information about different people. The image evaluation unit identifies this data record as a function of the ascertained optically detectable attributes. Or the image evaluation unit automatically determines that no record belongs to the person shown in the image.
- at least one camera system comprises a pattern recognition device or an image recognition device, which is preferably realized in software. This device determines, from at least one image from the camera system, preferably a plurality of images, information about a human who is shown in the images. If the images come from the first camera system, they show the first human wearing the first visual output device. If they come from the second camera system, they show the second person wearing the second visual output device. In some applications this person is difficult to recognize, especially in low light conditions. The information obtained is presented to the other person. This reduces the risk that one person does not know with whom he or she is exchanging messages, or unknowingly exchanges messages with an unauthorized person.
- this information about a person shown in the images is transmitted with visually ascertainable information
- This information about different people is stored in a database.
- the different people are
- a unique identification of the human and / or a portrait of the human being without a visual output device is stored in the database.
- This identification and/or this portrait is presented to the person who carries the other output device, to which the images showing the person with the visual output device are transmitted.
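A hedged sketch of the identification step: optically detectable attributes of the imaged person are compared against the predefined set of data records, returning the best match or None for an unknown person. The attribute vectors and the distance threshold are placeholders for whatever feature extraction is actually used.

```python
# Sketch of the image evaluation unit matching optically detectable
# attributes against stored records.
import math


def identify(attributes, records, max_distance=0.6):
    """Return the best-matching record, or None if nobody matches.

    `attributes` and each record["attributes"] are assumed to be
    equal-length numeric feature vectors.
    """
    best, best_dist = None, max_distance
    for record in records:  # the predefined set of data records
        dist = math.dist(attributes, record["attributes"])
        if dist < best_dist:
            best, best_dist = record, dist
    return best  # None -> no record belongs to the person shown
```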
- the arrangement comprises a third visual output device with a third display device and a third camera system.
- the third camera system is positioned to produce an image of a human wearing the third visual output device.
- the image transfer device is capable of transferring images to any visual output device. Thus three people who carry the three visual output devices can exchange messages with each other.
- the image transmission facility may provide a direct wireless transmission channel between the two visual output devices. It is also possible that the image transmission device comprises a relay station, for example in the form of a computer or server. Between the first visual output device and the relay station a first wireless transmission channel is provided, between the second visual output device and the relay station a second transmission channel.
- a wireless transmission channel for example, electromagnetic waves, mobile radio, Bluetooth, WLAN, near-field communication and / or optical directional radio can be used.
- the provided transmission channel between the two output devices consists of a wired transmission link and, for each visual output device, a wireless transmission link. If the solution according to the invention is used on board a watercraft, the electrical system of the watercraft can be used to provide the wired part of the transmission channel.
- each visual output device comprises virtual reality glasses (VR glasses).
- VR glasses are also called video glasses, helmet displays or VR helmets.
- the visual output device can also be designed as augmented reality glasses (AR glasses).
- Each camera system may include a single camera or multiple cameras. If a camera system includes several cameras, these cameras preferably have different viewing directions and/or different viewing angles.
- each camera is capable of producing static or moving optical images of the real environment, in particular a video sequence.
- each camera repeatedly generates images, for example at a predetermined sampling rate or sampling frequency.
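A minimal sketch of a camera repeatedly generating images at a predetermined sampling rate; camera.grab() is an assumed stand-in for the real camera driver call.

```python
# Sketch: yield frames from a camera at a fixed sampling frequency.
import time


def capture_loop(camera, rate_hz: float = 30.0):
    period = 1.0 / rate_hz
    while True:
        frame = camera.grab()  # assumed driver call returning one image
        yield frame
        time.sleep(period)     # crude pacing at the sampling rate
```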
- the or each camera is configured as a digital camera with a CCD chip.
- a lens system guides light on this CCD chip.
- the presentation device uses data from this CCD chip to represent the image of the real environment in the common representation.
- At least one camera may be configured as a 3D camera, which comprises spaced-apart lenses or similar optical imaging units.
- each camera system comprises a fixed camera
- each camera system comprises a mobile camera.
- the cameras of the camera systems are spatially separate from the visual output device.
- the first camera system generates an image of a first person who carries the first visual output device. This image is transmitted to the second visual output device.
- the first presentation device presents an image that was generated by the second camera system.
- the first display device may optionally present an image from the second camera system or an image from the first camera system. Thanks to this configuration, a first person wearing the first visual output device can selectively see the second person carrying the second output device, or himself. This embodiment allows the first person to see and check his own gestures and facial expressions. It is also possible that a display device simultaneously presents images of both camera systems, in one embodiment additionally with virtual objects.
- FIG. 1 shows a first person who carries a visual output device according to the invention in the form of first VR glasses and an input device, as well as two digital cameras, an evaluation computer and a localization device.
- FIG. 2 shows a schematic representation of what the VR glasses present to the first person wearing the first VR glasses on the screen.
- the invention is used on board a manned watercraft, wherein the watercraft may be an overwater vehicle or an underwater vehicle.
- Two crew members of the watercraft use the two visual output devices of an arrangement according to the invention.
- each visual output device has the form of a virtual reality glasses (VR glasses).
- For example, Oculus Rift® VR glasses are used, extended in accordance with the invention.
- a first person M.1 wears a first VR glasses 101.
- These VR glasses 101 comprise a carrier, which comprises a preferably elastic, length-adjustable tension belt 107 and a frame 106.
- the tension belt 107 is guided around the head K.1 of the person M.1 and carries the frame 106.
- the tension belt 107 ensures a secure fit of the VR glasses 101.
- the frame 106 carries a plate-shaped and preferably flexible holding element 105.
- In the holding element 105, two camera lenses 103 are embedded, which belong to two digital cameras of the first VR glasses 101.
- Each digital camera is capable of producing an image of the real environment of VR glasses 101.
- the viewing direction of each digital camera preferably coincides with the standard viewing direction of the person M.1 who wears the VR glasses 101.
- the two digital cameras with the lenses 103 form virtually the human's "eyes of reality" and act as the first output device camera of the embodiment.
- the signals from the camera lenses 103 are recorded on CCD chips. As a result, the optical images which the camera lenses 103 generate can be recorded.
- a computer 115 of the display device is mounted on the frame 106. The computer 115 evaluates the signals from the camera lenses 103 and generates an image of the real environment.
- a screen 211 is provided on the inside of the holding element 105, that is to say on the surface of the holding element 105 facing the human M.1.
- The computer 115 automatically causes this image to be presented on the or each screen 211.
- the computer 115 generates a stereoscopic representation of the real environment in front of the person M.1 who wears the first VR glasses 101, and uses signals from both digital cameras 103 for this purpose. This stereoscopic representation is presented on the screen 211.
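As a sketch of assembling a stereoscopic representation from the two digital cameras, the following simply places the two images side by side; real VR glasses would additionally warp each half for their lenses, so this layout is an assumption made for illustration.

```python
# Sketch: build a side-by-side stereoscopic frame from the left and right
# output-device cameras.
import numpy as np


def stereo_frame(left_image: np.ndarray, right_image: np.ndarray) -> np.ndarray:
    """Place the left and right camera images next to each other."""
    assert left_image.shape == right_image.shape
    return np.concatenate([left_image, right_image], axis=1)
```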
- FIG. 2 shows by way of example an image 215 presented on a screen 211.
- the person M.1 who wears the VR glasses 101 looks at another person M.2, for example another crew member of the vessel.
- the image 215 shows the head K.2 and the upper body of this other human M.2.
- the other person M.2 wears second VR glasses 301, which are identical to the first VR glasses 101.
- the person M.1 who wears the VR glasses 101 can perceive gestures and the facial expressions of the other person M.2 without having to take off the first VR glasses 101. This allows the two people M.1 and M.2 to communicate visually with each other. This visual communication can complement acoustic communication, or even replace it when acoustic communication is not possible.
- the first VR glasses 101 are opaque, i.e. they enclose a visually dense space in front of the eyes of the person M.1 who wears the VR glasses 101.
- This optically dense space is bounded by the holding element 105, the frame 106 and the head K.1 of the human M.1.
- the holding element 105 is configured to be completely opaque apart from the two camera lenses 103.
- the first VR glasses 101 prevents light from the real environment from entering the optically dense space in front of the head K.1 of the first human M.1.
- a receiving device 117 is arranged on the frame 106 or on the holding element 105 of the first VR glasses 101. This receiving device is able to receive signals wirelessly. The computer 115 of the display device generates images from these signals, which are then presented on the screen 211.
- Two digital cameras 123 continuously take images of the first human M.1, preferably at a predetermined sampling rate.
- the two cameras belong to the first camera system 121 of the embodiment.
- Two actuators 132 are capable of moving the stationary cameras 123. Thanks to these actuators 132, the digital cameras 123 follow a movement of the first human M.1 in space, so that the images from the cameras 123 show the human M.1.
- a controller 134 controls these actuators 132.
- an evaluation unit (not shown) evaluates the images that the cameras 123 generate, and thereby determines a movement of the first human M.1.
- a transmitter 128 is mounted on the frame 106 of the first VR glasses 101. This transmitter 128 continuously transmits position signals.
- the transmitter 128 sends a signal which distinguishes the first VR glasses 101 from all other devices on board the vessel which also emit signals.
- a receiver 130 receives the position signals from the transmitter 128.
- a transmission unit 124 transmits the received position signals to the controller 134.
- the controller 134 evaluates the received signals and controls the actuator 132 based on the received and evaluated signals.
- the first human M.1 visually exchanges messages with a second human M.2.
- the second person M.2 carries on his head K.2 a second visual output device in the form of second VR glasses 301.
- the second VR glasses 301 are constructed in the same way as the first VR glasses 101.
- FIG. 2 shows these second VR glasses 301 from the front, more precisely an image of the second VR glasses 301 in the image 215, which is presented on the screen 211.
- The standard viewing direction of the second human M.2, and thus the viewing directions of the two digital cameras of the second VR glasses 301, are directed at the viewer.
- the following components of the second VR glasses 301 can be seen in FIG. 2:
- a transmitter 328 which transmits position signals
- a receiving device 317 which corresponds to the receiving device 117 of the first VR glasses 101.
- two stationary digital cameras 123 generate images of the first human M.1 carrying the first VR glasses 101. Thanks to the actuators 132, the two stationary digital cameras 123 follow the movements of the first human M.1.
- An evaluation computer 125 receives signals from the two cameras 123 and generates processed images of the first human M.1. For example, the evaluation computer 125 automatically brightens or darkens the images to compensate for excessive variations in their brightness.
- a transmitter 127 automatically transmits these processed images.
- a transmitter (not shown) transmits the processed images, which the first camera system 121 has produced, directly and wirelessly to the second VR glasses 301, which the second person M.2 carries.
- the evaluation computer 125 transmits the processed images to a central computer 220, preferably wired via the electrical system of the watercraft.
- the central computer 220 is connected to a transmitter 127.
- This transmitter 127 then transmits the images wirelessly to the second VR glasses 301.
- the receiving device 317 of the second VR glasses 301 receives the transmitted signals with the processed images from the first human M.1.
- the second human M.2 carrying the second VR glasses 301 is shown the processed images from the cameras 123 showing the first human M.1.
- the second human M.2 is thus enabled to perceive gestures and facial expressions of the first human M.1 without having to take off the second VR glasses 301.
- Fig. 2 shows schematically how the first human M.1 sees the second human M.2 on a screen 211.
- a sonar system with an underwater antenna (not shown) is arranged on board the vessel.
- This sonar system locates a sound source that emits sound waves under water, and in particular determines the direction and/or the distance from the vessel to this sound source.
- This sonar system generates signals depending on the measured values (e.g. direction, distance, and sound intensity as a function of time and/or frequency). These signals are transmitted via the wired electrical system of the vessel to the central computer 220. With the aid of the transmitter 127, the central computer 220 transmits these signals with measured values of the sonar system simultaneously to the two receiving devices 117 and 317 of the two VR glasses 101 and 301, for example by local radio.
- the receiving device 117 receives the signals at a fixed frequency.
- the computer 115 of the first display device evaluates the received signals and generates virtual objects in the form of virtual instruments 213.
- the computer 115 of the first display device generates a common representation which simultaneously presents the image 215 of the real environment, showing the human M.2, and several virtual objects in the form of virtual instruments 213.
- the receiving device 317 of the second VR glasses 301 receives the same signals at the same frequency.
- the computer 315 of the second display device evaluates the received signals and also generates virtual objects in the form of virtual instruments.
- the virtual instruments shown on the screen 211 of the first VR glasses 101 show the same information as the virtual instruments on the screen of the second VR glasses 301, and can be graphically identical or graphically different. It is possible that a person M.1, M.2 changes the virtual instruments shown, for example enlarges or reduces them.
- the computer 115 updates the virtual instruments 213 at the frequency at which the receiving devices 117 and 317 receive the signals.
- the or at least some of the signals contain presentation information. If a measured value is outside a predetermined range, for example if the distance to a sound source falls below a predetermined limit, then the corresponding virtual instrument 213 on the screen 211 of the first VR glasses 101 and the corresponding virtual instrument on the screen of the second VR glasses 301 are highlighted. As a result, the attention of the human M.1 is directed to this virtual instrument 213 for a relevant measurement. The same applies to the second person M.2.
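The highlighting rule can be sketched as follows, assuming each virtual instrument is described by a small dictionary; the field names and the example limits are illustrative, not from the patent.

```python
# Sketch: flag a virtual instrument for highlighting on both VR glasses
# when its measured value leaves the predetermined range.
def render_instrument(name: str, value: float, lower: float, upper: float) -> dict:
    out_of_range = not (lower <= value <= upper)
    return {"instrument": name,
            "value": value,
            # an out-of-range value is emphasised to draw the wearer's
            # attention, e.g. a distance falling below a predetermined limit
            "highlighted": out_of_range}


# Example: distance to a sound source drops below an assumed 100 m limit.
print(render_instrument("sound_source_distance_m", 80.0, 100.0, 5000.0))
```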
- the person M.1 looks at the human M.2, for example because the crew member M.2 has previously addressed the crew member M.1.
- the human M.1 sees on the screen 211 on the one hand an image 215 which shows the human M.2, and on the other hand the virtual instruments 213, cf. Fig. 2.
- the human M.1 can thereby perceive the measured values from the sonar system, which are displayed with the aid of the virtual instruments 213, and at the same time communicate visually with the human M.2.
- the two humans M.1 and M.2 see the same information (the same measurements from the sonar system), represented by virtual instruments.
- the virtual instruments are presented to the two humans M.1 and M.2, each in a common representation together with an image of the other human M.2 or M.1.
- the common representation presented to the human M.1 shows the virtual instruments 213 and an image of the second human M.2, cf. Fig. 2.
- the common representation presented to the human M.2 shows the virtual instruments for the same measurements and an image of the first human M.1. Both the images of the humans M.2, M.1 and the virtual instruments are constantly updated.
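The constantly updated common representation can be pictured as a per-frame composition step; the following sketch assumes hypothetical callbacks and frame placeholders rather than any implementation from the patent:

```python
# A minimal sketch: every frame combines the latest image of the other
# human with the current virtual instruments into one common representation.
def compose_frame(camera_image, instruments):
    # The common representation contains both the image and the instruments.
    return {"image": camera_image, "instruments": list(instruments)}

def update_loop(get_image, get_instruments, present, frames):
    # In the assembly this would run at the signal reception frequency.
    for _ in range(frames):
        present(compose_frame(get_image(), get_instruments()))

update_loop(lambda: b"<frame showing M.2>",
            lambda: ["distance: 850 m", "bearing: 42 deg"],
            print,
            frames=3)
```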
- the first human M.1 carries an input device 109 in his left hand.
- the second human M.2 carries a corresponding input device (not shown).
- the human M.1 operates a knob or button or a touch-sensitive panel on the input device 109.
- the input device 109 has a motion sensor or an acceleration sensor that registers a certain left-hand movement.
- the first human can select a virtual instrument 213 in the common representation on the screen 211.
- This selection is transmitted by a transmitter 136 of the first VR glasses 101 to a receiver (not shown) of the central computer 220.
- the transmitter 136 is mounted on the input device 109.
- This selection and highlighting are communicated to the second VR glasses 301.
- the second human M.2 also sees in the common representation the virtual instrument 213 highlighted, which the first human M.1 has selected with the aid of the input device 109.
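A minimal sketch of this selection relay, under the assumption of a simple message containing only an instrument identifier; class and field names are ours, not the patent's:

```python
# A minimal sketch: the first headset reports the selected instrument to
# the central computer, which forwards it so that both headsets highlight
# the same instrument. The message format is an assumption.
class Headset:
    def __init__(self, name):
        self.name = name
        self.highlighted_id = None  # id of the currently emphasized instrument

    def apply_selection(self, instrument_id):
        self.highlighted_id = instrument_id

class SelectionRelay:
    """Stands in for the relay role of the central computer 220."""
    def __init__(self, headsets):
        self.headsets = headsets

    def forward(self, instrument_id):
        for headset in self.headsets:
            headset.apply_selection(instrument_id)

hs_101, hs_301 = Headset("101"), Headset("301")
SelectionRelay([hs_101, hs_301]).forward("sonar-distance")
assert hs_101.highlighted_id == hs_301.highlighted_id == "sonar-distance"
```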
- the first human M.1 can visually give explanations relating to the highlighted instrument, for example with gestures that the second human M.2 sees on the screen of the second VR glasses 301.
- the two people M.1, M.2 additionally each carry a voice input unit, for example a microphone, and a voice output unit (not shown).
- the voice output unit that the first human M.1 uses may be integrated into the frame 106 or the straps 107 of the first VR glasses 101. Accordingly, the voice output unit used by the second human M.2 may be integrated into the second VR glasses 301. Thanks to the voice input units and the voice output units, the two people M.1 and M.2 can additionally communicate acoustically with one another, even if considerable ambient noise makes acoustic communication without aids difficult. In particular, a human can give verbal information about the virtual object that he or she has previously selected.
- the first human M.1 can select a virtual instrument 213 with the aid of the input device 109 and point the second human M.2 to the displayed sensor value.
- the first person M.1 can instruct the second human M.2 to perform a certain action.
- two camera lenses 103 of two digital cameras are embedded in the holding element 105 of the first VR glasses 101.
- two camera lenses 303 are embedded in the holding member 305 of the second VR glasses 301.
- the viewing direction of each digital camera preferably coincides with the standard viewing direction of the person M.1 who wears the VR glasses 101.
- the two digital cameras with the lenses 103 virtually form the "eyes of the reality" of the human being.
- the first human M.1 can switch between the following two representations:
- the first person M.1 can choose the representation with the real environment in front of him if he is not communicating with a second person M.2, or if he wants to move and make sure that he does not run into an obstacle.
- in both representations the virtual instruments 213 are displayed, so that the first person always sees a common representation on the screen 211.
- the second human M.2, who wears the second VR glasses 301, can choose between the following two representations:
- a representation produced by the stationary cameras 123, showing the first human M.1 and constructed according to the representation shown schematically in Fig. 2, and
- the first VR glasses 101 can transmit to the central computer 220 the images of the real environment which the two cameras have produced with the lenses 103.
- the central computer 220 transmits these images via the transmitter 127 to the second VR glasses 301.
- the second human M.2 can therefore choose between three representations:
- the first human M.1 can thus show the second human M.2 something that is located in front of the first human M.1, for example a machine or an installation aboard the watercraft. Accordingly, in one embodiment, the first human M.1 can likewise select between three representations. In all three representations, the virtual instruments 213 are preferably superimposed in each case.
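The choice between the three image sources can be sketched as a simple switch; the enum values mirror the three representations described above, while all identifiers are illustrative assumptions:

```python
# A minimal sketch of switching between the three image sources.
from enum import Enum, auto

class ImageSource(Enum):
    STATIONARY_CAMERAS = auto()     # view of the other human (cameras 123)
    OWN_CAMERAS = auto()            # real environment in front of the wearer
    OTHER_HEADSET_CAMERAS = auto()  # relayed view from the other VR glasses

def select_image(source, feeds):
    # The virtual instruments are superimposed on whichever image is chosen.
    return feeds[source]

feeds = {s: f"<frame from {s.name}>".encode() for s in ImageSource}
print(select_image(ImageSource.OWN_CAMERAS, feeds))
```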
- a person M.1, M.2 may consider it disadvantageous that, in the image, he sees the other human M.2, M.1 only with VR glasses 101, 301 in front of the eyes.
- the following embodiment offers a possible remedy for this problem.
- a set of data records is stored in a database.
- Each record relates to one crew member of the vessel and preferably includes an identifier, a portrait, and a plurality of optically detectable attributes of that crew member, the portrait showing the crew member without a visual output device.
- An image evaluation unit automatically searches the images of a human M.1, M.2 generated by the first or second camera system for optically detectable attributes of this person. The image evaluation unit then searches the database for a record with matching optically detectable attributes and thereby identifies the crew member shown in the images.
- the image evaluation unit retrieves the portrait of this crew member and transmits the portrait to the visual output device worn by the other person. The display device of this other visual output device presents the portrait on its screen. It is also possible that the image evaluation unit determines that no data record contains matching optically detectable attributes.
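A minimal sketch of this record lookup, using a set-based attribute match as a stand-in for the pattern recognition the patent leaves open; the record structure and matching rule are simplifying assumptions:

```python
# A minimal sketch of the database lookup: match observed attributes against
# stored crew records and return the stored portrait, or None if no record
# contains matching attributes.
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class CrewRecord:
    identifier: str
    portrait: bytes                        # portrait without a visual output device
    attributes: Set[str] = field(default_factory=set)

def find_portrait(observed: Set[str], database: List[CrewRecord]) -> Optional[bytes]:
    for record in database:
        # A record matches when all of its stored attributes were observed.
        if record.attributes and record.attributes <= observed:
            return record.portrait
    return None  # no data record contains matching attributes

db = [CrewRecord("M.2", b"<portrait of M.2>", {"tall", "red_jacket"})]
print(find_portrait({"tall", "red_jacket", "beard"}, db))
```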
- 101 first VR glasses: acts as the first visual output device, includes the …
- first camera system: includes the two cameras 123, the actuators …
- evaluation computer: receives signals from the digital cameras 123, transmits evaluated images to the central computer 220
- localization device receiver: receives position signals from the transmitter 128 on the first VR glasses 101
- 132 localization device actuators: move the digital cameras 123 in response to position signals
- control device of the localization device: receives position signals from the receiver 130, drives the actuators 132 in response to the received position signals
- 211 screen of the first VR glasses 101: held by the holding element 105, belongs to the display device, presents the common representation with the image 215 and the virtual instruments 213
- 220 central computer: connected to the transmitter 127, transmits images of the cameras 123 to the second VR glasses 301 and images of the human M.2 to the first VR glasses 101, and signals from the sonar system to both VR glasses 101 and 301
- 301 second VR glasses: acts as the second visual output device, includes the …
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102017114905.8A DE102017114905A1 (en) | 2017-07-04 | 2017-07-04 | Communication device and operating system |
DE102017114914.7A DE102017114914A1 (en) | 2017-07-04 | 2017-07-04 | Visual output device and operating system |
PCT/EP2018/067892 WO2019007934A1 (en) | 2017-07-04 | 2018-07-03 | Assembly and method for communicating by means of two visual output devices |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3649538A1 true EP3649538A1 (en) | 2020-05-13 |
Family
ID=64950631
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18736893.1A Withdrawn EP3649539A1 (en) | 2017-07-04 | 2018-07-03 | Visual output device with a camera and presentation method |
EP18736891.5A Withdrawn EP3649538A1 (en) | 2017-07-04 | 2018-07-03 | Assembly and method for communicating by means of two visual output devices |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18736893.1A Withdrawn EP3649539A1 (en) | 2017-07-04 | 2018-07-03 | Visual output device with a camera and presentation method |
Country Status (2)
Country | Link |
---|---|
EP (2) | EP3649539A1 (en) |
WO (2) | WO2019007934A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111031292A (en) * | 2019-12-26 | 2020-04-17 | 北京中煤矿山工程有限公司 | Coal mine safety production real-time monitoring system based on VR technique |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6774869B2 (en) * | 2000-12-22 | 2004-08-10 | Board Of Trustees Operating Michigan State University | Teleportal face-to-face system |
DE102004019989B3 (en) * | 2004-04-23 | 2005-12-15 | Siemens Ag | Arrangement and method for carrying out videoconferencing |
US20070030211A1 (en) * | 2005-06-02 | 2007-02-08 | Honeywell International Inc. | Wearable marine heads-up display system |
DE202009010719U1 (en) | 2009-08-07 | 2009-10-15 | Eckardt, Manuel | communication system |
EP2611152A3 (en) * | 2011-12-28 | 2014-10-15 | Samsung Electronics Co., Ltd. | Display apparatus, image processing system, display method and imaging processing thereof |
US9390561B2 (en) * | 2013-04-12 | 2016-07-12 | Microsoft Technology Licensing, Llc | Personal holographic billboard |
US10262462B2 (en) * | 2014-04-18 | 2019-04-16 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality |
DE102014107211A1 (en) | 2014-05-22 | 2015-11-26 | Atlas Elektronik Gmbh | Device for displaying a virtual reality as well as measuring device |
US20160054791A1 (en) * | 2014-08-25 | 2016-02-25 | Daqri, Llc | Navigating augmented reality content with a watch |
US10032388B2 (en) * | 2014-12-05 | 2018-07-24 | Illinois Tool Works Inc. | Augmented and mediated reality welding helmet systems |
DE102014018056A1 (en) | 2014-12-05 | 2016-06-09 | Audi Ag | Method of operating a virtual reality glasses and virtual reality glasses |
US9904054B2 (en) | 2015-01-23 | 2018-02-27 | Oculus Vr, Llc | Headset with strain gauge expression recognition system |
US9910275B2 (en) * | 2015-05-18 | 2018-03-06 | Samsung Electronics Co., Ltd. | Image processing for head mounted display devices |
DE102015006612B4 (en) | 2015-05-21 | 2020-01-23 | Audi Ag | Method for operating data glasses in a motor vehicle and system with data glasses |
DE202016000449U1 (en) | 2016-01-26 | 2016-03-08 | Johannes Schlemmer | Communication system for the remote support of seniors |
US10019131B2 (en) | 2016-05-10 | 2018-07-10 | Google Llc | Two-handed object manipulations in virtual reality |
2018
- 2018-07-03 WO PCT/EP2018/067892 patent/WO2019007934A1/en unknown
- 2018-07-03 EP EP18736893.1A patent/EP3649539A1/en not_active Withdrawn
- 2018-07-03 WO PCT/EP2018/067894 patent/WO2019007936A1/en unknown
- 2018-07-03 EP EP18736891.5A patent/EP3649538A1/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
EP3649539A1 (en) | 2020-05-13 |
WO2019007934A1 (en) | 2019-01-10 |
WO2019007936A1 (en) | 2019-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3426366B1 (en) | Determining a position, aligning a virtual reality headset, and an amusement ride with a virtual reality headset | |
DE102017110283A1 (en) | CONTROLLING FUNCTIONS AND EXPENSES OF AUTONOMOUS VEHICLES BASED ON A POSITION AND ATTENTION FROM OCCUPANTS | |
EP3286614B1 (en) | Operating system for a machine of the food industry | |
DE102014109079A1 (en) | DEVICE AND METHOD FOR DETECTING THE INTEREST OF A DRIVER ON A ADVERTISING ADVERTISEMENT BY PURSUING THE OPERATOR'S VIEWS | |
DE202013012457U1 (en) | Digital device | |
EP3286532A1 (en) | Method for detecting vibrations of a device and vibration detection system | |
DE102010038341A1 (en) | Video surveillance system and method for configuring a video surveillance system | |
WO2011051009A1 (en) | System for providing notification of positional information | |
DE102014213021A1 (en) | Localization of an HMD in the vehicle | |
DE102014006732A1 (en) | Image overlay of virtual objects in a camera image | |
EP3117261A1 (en) | Assistance system for a piece of agricultural machinery, and method for assisting an operator | |
DE102014222355A1 (en) | Fatigue detection with sensors of data glasses | |
DE102005045973A1 (en) | Device for camera-based tracking has processing arrangement for determining movement information for evaluating images, whereby information characterizes object position change within time interval | |
EP3649538A1 (en) | Assembly and method for communicating by means of two visual output devices | |
DE202018006462U1 (en) | Display system to be carried by an animal and to be attached to the head | |
DE112016003273T5 (en) | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM | |
WO2017220667A1 (en) | Method and device for modifying the affective visual information in the field of vision of an user | |
DE102019103360A1 (en) | Method and device for operating a display system with data glasses | |
DE112019003962T5 (en) | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM AND INFORMATION PROCESSING SYSTEM | |
DE112018003820T5 (en) | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS AND PROGRAM | |
DE102019131640A1 (en) | Method and device for operating a display system with data glasses | |
DE102009043252A1 (en) | Device and method for assistance for visually impaired persons with three-dimensionally spatially resolved object detection | |
DE112021003465T5 (en) | INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM | |
DE102017200337A1 (en) | motor vehicle | |
DE102015220683A1 (en) | Driver information system with a drive device and a display device and method for operating a driver information system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20200204 |
|
AK | Designated contracting states |
Kind code of ref document: A1 |

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) |
DAX | Request for extension of the european patent (deleted) |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20210707 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20211118 |