SE1950623A1 - System for providing a telepresence - Google Patents

System for providing a telepresence

Info

Publication number
SE1950623A1
Authority
SE
Sweden
Prior art keywords
display
video data
distance
spatial relationship
user
Prior art date
Application number
SE1950623A
Other languages
Swedish (sv)
Inventor
Elijs Dima
Joakim Edlund
Mattias Andersson
Mårten Sjöström
Original Assignee
Elijs Dima
Joakim Edlund
Mattias Andersson
Sjoestroem Maarten
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elijs Dima, Joakim Edlund, Mattias Andersson, Mårten Sjöström
Priority to SE1950623A
Publication of SE1950623A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G05D1/243
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1689 Teleoperation
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/35 Nc in input of data, input till input file format
    • G05B2219/35506 Camera images overlayed with graphics, model
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/39 Robotics, robotics to robotics hand
    • G05B2219/39449 Pendant, pda displaying camera images overlayed with graphics, augmented reality
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/39 Robotics, robotics to robotics hand
    • G05B2219/39451 Augmented reality for robot programming
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40169 Display of actual situation at the remote site
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The present invention relates to a system (1) for providing a telepresence, comprising a display (2), at least one camera (3) to obtain video data, a distance determining arrangement (5) to obtain distance data between objects, a selection arrangement (6) adapted to select objects displayed on the display (2), and processing circuitry (9). The processing circuitry (9) is adapted to display at least part of the video data on the display (2), mark a first object (8a) indicated by the selection arrangement (6) upon detecting an actuation of the selection arrangement (6), obtain a marking of a second object (8b), determine the positions of the first and second object (8a, 8b) based on the video data and the distance data, calculate a spatial relationship (10) between the first and second object (8a, 8b) based on the determined positions, and display the spatial relationship (10) between the first and second object (8a, 8b) on the display (2) in association to the displayed at least part of the video data.

Description

System for providing a telepresence

Technical field
The present invention relates to a system for providing a telepresence.

Background
There are many systems today for assisting a user in remotely controlling a situation. Some of the systems are:

US20160196692A1 discloses a head mounted device that can display 3D stereoscopic images. The document relates to generating an augmented reality environment having one or more virtual objects combined with a reality-based environment. A user may move and rotate his/her head in order to move a virtual laser pointer. The method is to be used with a mobile phone, where the camera is used to capture the reality-based environment. Interactions with objects in the augmented reality are interactions with virtual objects.

US20160048204A1 discloses hands-free selection of virtual objects. Gaze swipe gestures are used to select objects. Head movements are tracked by a head mounted display to detect whether a virtual pointer has swiped across two or more edges of a virtual object. This may be done in combination with the user directing his/her gaze at the object.

Unity3D discloses a video that shows a head mounted display with a pointer that is fixed in the center of the image; a user selects and moves objects by moving his head so that the pointer is located on the object to be selected. The objects are in a virtual environment.

US20190039862A1 discloses a remote-control device for a crane or other construction machine or truck. The remote-control device may receive the machine surroundings by means of at least one camera that is provided in the region of the remote-controlled machine, see [0024]. A plurality of cameras is advantageous to be able to observe the pieces of working equipment from different perspectives, see [0027]. A control station mimicking a control station of the machine may be arranged so that a user can remotely control the machine from a control station similar to an actual control station.

There is always a need to improve and provide new systems for remote control.
Summary
It is an aim of the present disclosure to provide an improved system for providing a telepresence.

This aim is achieved by a system as defined in claim 1.

The disclosure provides a system for providing a telepresence. The system comprises a display, at least one camera arranged to capture video data of a scenery so that video data of the scenery is obtained, a distance determining arrangement for determining a distance to an object in the scenery to obtain distance data, a selection arrangement adapted to be actuated by the user to select objects displayed on the display, and processing circuitry arranged in communication with the at least one camera and the distance determining arrangement. The processing circuitry is adapted to display at least part of the video data on the display, mark a first object, in the at least part of the video data, indicated by the selection arrangement upon detecting an actuation of the selection arrangement, obtain a marking of a second object, in the at least part of the video data, determine the positions of the first and second object, calculate a spatial relationship between the first and second object based on the determined positions, and display the spatial relationship between the first and second object on the display in association to the displayed at least part of the video data. With this system it is possible to select an object and then get assisting information regarding the spatial relationship between the selected first object and the second object. An operator controlling a remote site is thus provided with information that assists in determining the spatial relationship between objects in the scenery.
According to some aspects, the display is arranged in a headset. The user thus wears a headset and gets the feeling of being present at the remote site.

According to some aspects, the headset comprises at least one movement sensor for detecting movement of the headset, and the processing circuitry is adapted to receive sensor data from the at least one movement sensor, to detect a movement of the headset based on the received sensor data, to determine the size and direction of the movement, and to move the displayed part of the video data in correspondence to the determined size and direction of the movement of the headset. The user wearing the headset can thus look around in the scenery using the headset.

According to some aspects, the processing circuitry is adapted to arrange a cursor at a predetermined fixed position on the display, the fixed position being fixed relative to the display. A fixed cursor has the advantage that the user does not have to keep track of its position; it is always placed in the same position.

According to some aspects, the cursor is maintained in its fixed position on the display as the displayed part of the video data is moved. The user can thus use the headset to look around in the scenery, and the cursor will remain fixed so that the user may move her/his head so that the cursor is indicating objects.

According to some aspects, the selection arrangement comprises the cursor and an actuation device such that objects are selected by the user moving their head such that the cursor is indicating an object and then actuating the actuation device. The user can thus use the headset to move the cursor so that it is indicating a desired object and then actuate the actuation device.

According to some aspects, the selection arrangement comprises an eye tracking sensor and an actuation device such that objects are selected by the user directing their eyes at an object and then actuating the actuation device. A cursor may be arranged where the user is looking so that it is clear to the user what she/he is marking. In this case, no head movement of the user is necessary. Users can select objects in the video data by gazing at them and actuating the actuation device.

According to some aspects, the actuation device is any one of: a button which is actuated by pressing it, a movement sensor which is actuated by a predetermined movement, or an eye tracking sensor which is actuated by a predetermined eye movement. The actuation device is thus easy to use.

According to some aspects, the selection arrangement comprises an eye tracking sensor such that objects are selected with predetermined eye movements. The marking of an object may thus be done with eye tracking only, which leaves the hands of the operator free to control other things.
According to some aspects, the processing circuitry is adapted to repeatedly update the determining of the positions of the first and second object based on the video data and distance data, the calculating of the spatial relationship between the first and second object, and the displaying of the spatial relationship, to account for movement of the first and second object. The objects may physically change position, so the system repeatedly updates the relevant steps to provide an up-to-date spatial relationship.

According to some aspects, the processing circuitry is adapted to display the spatial relationship in association to the first and second object in the displayed at least part of the video data. The spatial relationship is thus displayed where it is easy for the user to apprehend which objects the spatial relationship is referring to.

According to some aspects, the processing circuitry is adapted to move the displayed spatial relationship between the first and second object correspondingly when the displayed part of the video data is moved in correspondence to the determined size and direction of the movement of the headset. The displayed spatial relationship thus moves together with the scenery in the video data.

According to some aspects, the spatial relationship is a distance between the first and second object, and the distance is displayed in the at least part of the video data as a line for assisting the user in determining relationships between objects. The distance is thus easy to see for the user. If the first or second object is moving closer to the other, it will be clear to the user that the distance is shortening because the line becomes shorter and shorter.

According to some aspects, the spatial relationship comprises the shortest path to bring the second object to the position of the first object, and the path is displayed in the at least part of the video data for assisting the user in controlling the movement of the second object to the position of the first object. This is useful if the user is an operator who is to move one of the objects to the position of the other object. The user is then presented with guidance on how to move the object in an efficient way.

According to some aspects, the spatial relationship is in cylindrical coordinates, the spatial relationship is a distance and a difference in angle between the first and second object, and the processing circuitry is adapted to display the distance and angle over the at least part of the video data as a three-dimensional cylinder sector for assisting the user in determining relationships between objects. Depending on what kind of scenery is viewed, different coordinate systems may be used.

According to some aspects, the spatial relationship is in Cartesian coordinates and the processing circuitry is adapted to display the spatial relationship as a hyperrectangle for assisting the user in determining relationships between objects.

According to some aspects, the spatial relationship is in spherical coordinates and the processing circuitry is adapted to display the spatial relationship as a spherical sector for assisting the user in determining relationships between objects.

According to some aspects, the processing circuitry is adapted to display a grid of lines over the displayed at least part of the video data for assisting the user in determining relationships between objects. The grid is useful for the user to more easily see different relationships and proportions between objects.

According to some aspects, at least some of the spatial relationship is expressed in written numbers or visualized by graphic elements such as the size of bars, a graph or a cylinder, each of which may display one coordinate in the selected coordinate system. This further assists the user.

According to some aspects, the distance determining arrangement for determining a distance to an object in the scenery to obtain distance data is an additional camera which is used in combination with the at least one camera to obtain stereoscopic video data from which the distance data is calculated. In this way, a stereoscopic view can be provided to the user and the video data can also be used to determine distances between objects.

According to some aspects, the distance determining arrangement for determining a distance to an object in the scenery to obtain distance data is any one of: a lidar, a time-of-flight camera and a radar. In this way, only one camera is required and the distance determining arrangement determines the distances between objects in the video data.
Brief description of the drawings
The invention will now be explained more closely by the description of different embodiments of the invention and with reference to the appended figures.
Fig. 1 shows an example of how the system may be arranged.
Fig. 2 shows an example view on a display. The two bottom images represent each part of a stereoscopic image.

Fig. 3 shows another example view on a display. The two bottom images represent each part of a stereoscopic image.

Fig. 4 shows another example view on a display. The two bottom images represent each part of a stereoscopic image with a fixed cursor.

Fig. 5 shows another example view on a display. The two bottom images represent each part of a stereoscopic image with a cursor fixed on the display and with a cursor fixed at a marked first object.

Fig. 6 shows another example view on a display where the spatial relationship is indicated as a line between the first and second object and as information in text in association to the line.

Fig. 7 shows another example view on a display where the spatial relationship is indicated as a suggested path to move the second object to reach the first object and as information in text in association to the line. Additional data may be displayed in relation to the path in information spaces, here illustrated as hexagons.

Fig. 8 shows another example view on a display where the spatial relationship is indicated as information in text and on a grid.

Fig. 9 shows another example view on a display where the spatial relationship is indicated as information in text and on a grid in cylindrical coordinates.

Fig. 10 shows another example view on a display where the spatial relationship is indicated as information in text and as a hyperrectangle in the Cartesian coordinate system.

Fig. 11 shows another example view on a display where the spatial relationship is indicated as information in text and as a cylinder sector in a cylindrical coordinate system.

Fig. 12 shows another example view on a display where the spatial relationship is indicated as information in text and on a grid in cylindrical coordinates.
Detailed description
The present invention is not limited to the embodiments disclosed but may be varied and modified within the scope of the following claims.

Aspects of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings. The devices and methods disclosed herein can, however, be realized in many different forms and should not be construed as being limited to the aspects set forth herein. Like numbers in the drawings refer to like arrangements throughout.

The terminology used herein is for the purpose of describing particular aspects of the disclosure only and is not intended to limit the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
The disclosure provides a system 1 for providing a telepresence. Figure 1 shows an example of how the system 1 may be arranged. In the example, a user 7 is wearing a headset 11 with a display 2, and the system 1 comprises one or more cameras 3 and an arrangement 5 for determining a distance to objects. A scenery 4 is filmed with the camera 3, and at least part of the video is displayed to the user 7 on the display 2. The system 1 determines a spatial relationship between objects 8a, 8b, and the spatial relationship is at least partly displayed to the user 7 for assisting the user 7 in analyzing the arrangement of the objects in the scenery 4. The first object 8a and the second object 8b thus have their real-life equivalents in the first object 13 and the second object 14 in the scenery 4.

The system 1 comprises a display 2. The display 2 may be any kind of display 2 capable of displaying a video to a user 7. According to some aspects, the display 2 is arranged in a headset 11. The user 7 thus wears a headset 11 and gets the feeling of being present at the remote site. The headset 11 is for example a VR headset or an AR headset. The display 2 may also be the display 2 of any kind of mobile communication device such as a smart phone.

The system 1 comprises at least one camera 3 arranged to capture video data of a scenery 4 so that video data of the scenery 4 is obtained. The camera 3 may be any kind of camera 3 capable of capturing video data of the scenery 4. The camera 3 has a resolution such that objects can be marked in the video data and/or distances computed. The camera 3 is for example an HD (1280x720) camera or a full HD (1920x1080) camera.
The system 1 comprises a distance determining arrangement 5 for determining a distance to an object in the scenery 4 to obtain distance data. There are many alternative devices that are capable of determining the distance between a distance determining device and an object. One distance determining arrangement 5 is to use two or more cameras. Thus, according to some aspects, the distance determining arrangement 5 for determining a distance to an object in the scenery 4 to obtain distance data is an additional camera which is used in combination with the at least one camera 3 to obtain stereoscopic video data from which the distance data is calculated. In this way, a stereoscopic view can be provided to the user 7 and the video data can also be used to determine distances between objects. Figure 2 shows an example view on a display 2. The two bottom images represent each part of a stereoscopic image. The dashed square in the figures represents an example of where spatial relationship 10 data can be illustrated. In figure 3, it is shown that the images shown in the display 2 are a part of the video data; the entire scenery 4 is not visible on the display 2. It should be noted that some cameras can render a stereoscopic image using only one camera. One such camera is a plenoptic camera, which captures everything required to render stereoscopic video. There are also some special lenses that allow a camera to record stereoscopic video directly. An alternative is to use a Kinect, which is a camera with a depth sensor. Certain processing can also be done to create faux stereoscopic data based on assumed object sizes and/or colors in the video. In such a case, the distance determining arrangement 5 is comprised in the at least one camera 3. Another alternative is to capture depth with a secondary device, by which any pixel can be re-projected to a secondary view, which may be used as the second stereo view.
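The disparity-to-depth step implied by the stereoscopic alternative can be sketched as follows. This is a minimal illustration rather than the claimed implementation: it assumes a rectified stereo pair, uses OpenCV's block-matching stereo, and the focal length and baseline are placeholder calibration values.

```python
import cv2
import numpy as np

# Assumed calibration values (placeholders, not from the disclosure).
FOCAL_LENGTH_PX = 1000.0   # focal length in pixels
BASELINE_M = 0.12          # distance between the two cameras in metres

def depth_from_stereo(left_bgr, right_bgr):
    """Estimate a per-pixel depth map (metres) from a rectified stereo pair."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Block-matching stereo; disparities are returned in fixed point (x16).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    # Classic pinhole relation: Z = f * B / d
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
    return depth
```

The depth map can then be sampled at the pixel of a marked object to obtain the distance data used when determining positions.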
The distance determining arrangement 5 for determining a distance to an object in the scenery 4 to obtain distance data may also be any one of: a lidar, a time-of-flight camera and a radar. Other alternatives for the distance determining arrangement 5 are a plenoptic camera, a light field camera, an echolocator, or a magnetic field sensor. In this way, only one camera is required and the distance determining arrangement 5 determines the distances between objects in the video data. If it is not desired to display a stereoscopic video to a user 7 of the system 1 on the display 2, it may be a less expensive alternative to use a distance determining arrangement 5 which only measures the distance so that only one camera is needed. The choice of camera 3 and distance determining arrangement 5 is up to the system 1 designer because different kinds may be advantageous in different setups.

The system 1 comprises a selection arrangement 6 adapted to be actuated by the user 7 to select objects displayed on the display 2. In figure 1, the selection arrangement 6 is a button on a joystick that the user 7 is holding, but the selection arrangement 6 may be many kinds of arrangements capable of being actuated by a user 7 to make a selection. Alternative selection arrangements 6 are described more closely below.

The system 1 comprises processing circuitry 9 arranged in communication with the at least one camera 3 and the distance determining arrangement 5. The processing circuitry 9 comprises any kind of processing circuitry 9, such as a single processor, a multi-core processor, several processors or several processors with multiple cores. The processing circuitry 9 may also be a Field Programmable Gate Array (FPGA) or a custom printed circuit. The processing circuitry 9 may also be located at different geographical locations so that the processes of the processing circuitry 9 are performed in the cloud.

The processing circuitry 9 is adapted to obtain the video data and the distance data. The video data and the distance data may be obtained through wireless or wired communication with the at least one camera 3 and the distance determining arrangement 5. The communication may be direct or go through other, intermediate, systems.

The processing circuitry 9 is adapted to display at least part of the video data on the display 2. The processing circuitry 9 is thus in communication with the display 2. The communication may be wireless or wired. How processing circuitry 9 sends data to be displayed will not be described in detail because it is known to the skilled person.

The processing circuitry 9 is adapted to mark a first object 8a, in the at least part of the video data, indicated by the selection arrangement 6 upon detecting an actuation of the selection arrangement 6. In other words, when a user 7 actuates the selection arrangement 6, the object indicated by the selection arrangement 6 is selected. The marking of the first object 8a may comprise that the marking of the object is visualized by the processing circuitry 9 on the display 2. In figure 4, an example marking is illustrated where a cursor 12 is arranged just over the object to indicate that it has been marked. The indication may also be, for example, to highlight the object. It may also be the case that a selected object is not visually marked on the display 2.
The processing circuitry 9 is adapted to obtain a marking of a second object 8b, in the at least part of the video data. The marking of the second object 8b may be done in the same way as the marking of the first object 8a. It may also be the case that the marking of the second object is received from an external system. The marking is then, for example, received as spatial coordinates of the position of the second object 14. The processing circuitry 9 may be space synchronized, i.e. calibrated to relate the coordinate system of the processing circuitry 9 to the coordinate system of the external system, or the coordinates may be an absolute position, received for example as Real Time Kinematic, RTK, GPS coordinates. The marking of the second object 14 may also be obtained by having a special feature arranged on the object in real life, such as a QR code or a graphic pattern, which is automatically recognized by the processing circuitry 9 in the part of the video data.
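Where the second object carries a QR code, as mentioned above, the automatic recognition could for instance use OpenCV's built-in QR detector. A minimal sketch, assuming a standard BGR video frame; the helper name is illustrative only.

```python
import cv2

def find_marked_object(frame_bgr):
    """Return (decoded text, pixel centre) of a QR-tagged object, or None if no code is found."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame_bgr)
    if points is None:
        return None
    # 'points' holds the four corners of the code; use their centroid
    # as the image position of the marked object.
    centre = points.reshape(-1, 2).mean(axis=0)
    return data, (float(centre[0]), float(centre[1]))
```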
When the marking of the second object 14 is received from an external system, the processing circuitry translates the received position into a position in the video data. The transform from the received position into a position in the video data is a mathematical transform, i.e. a calculation, based on the knowledge of how the at least one camera 3 is positioned and/or how the video viewing position is related to the received object position. For multiple-camera systems, the cameras are calibrated to relate each camera's coordinate system to each other camera and/or to the scenery 4.

As long as the received object position and the camera positions can be related to each other in one of their coordinate systems, or both to a third related coordinate system, then the positions can be transformed between the coordinate systems via calculations.
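One common way to realize such a transform is the pinhole camera model: the externally received position is first brought into the camera frame by a rigid transform and then projected to pixel coordinates. The sketch below assumes a calibrated camera with intrinsic matrix K and extrinsics R, t relating the external frame to the camera frame; it is an illustration, not the specific calculation used by the system.

```python
import numpy as np

def world_to_pixel(p_world, R, t, K):
    """Project a 3D point from an external coordinate system into the image.

    R (3x3) and t (3,) relate the external frame to the camera frame,
    K (3x3) is the camera intrinsic matrix.
    """
    p_cam = R @ np.asarray(p_world, dtype=float) + t   # rigid transform
    if p_cam[2] <= 0:
        return None                                    # point is behind the camera
    uvw = K @ p_cam                                    # pinhole projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```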
The processing circuitry 9 is adapted to determine the positions of the first and second object. The positions may be absolute positions or positions relative to other objects in the video data. The determining of the position of the first object 8a is based on the video data and the distance data. The determining of the position of the second object may be based on the same data if the second object is acquired in the same way. Otherwise, the determining of the second position is done as described above.

The processing circuitry 9 is adapted to calculate a spatial relationship 10 between the first and second object 8a, 8b based on the determined positions. Depending on whether the positions are absolute positions or positions relative to other objects in the video data, the spatial relationship 10 will be either a real spatial relationship 10 that is consistent with the real scenery 4 or relative to what is viewed in the display 2.
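As a minimal illustration of this step, the spatial relationship 10 can be represented by the displacement vector and the Euclidean distance between the two determined positions; the disclosure leaves the concrete representation open, so the sketch below is only one possible choice with made-up example values.

```python
import numpy as np

def spatial_relationship(pos_first, pos_second):
    """Return displacement vector and scalar distance between two 3D positions."""
    p1 = np.asarray(pos_first, dtype=float)
    p2 = np.asarray(pos_second, dtype=float)
    displacement = p1 - p2        # vector from the second object to the first object
    distance = np.linalg.norm(displacement)
    return displacement, distance

# Example: two object positions obtained from video and distance data (illustrative values).
vec, dist = spatial_relationship((1.0, 0.5, 3.2), (0.2, 0.4, 2.1))
```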
The processing circuitry 9 is adapted to display the spatial relationship 10 between the first and second object 8a, 8b on the display 2 in association to the displayed at least part of the video data. With this system 1 it is possible to select an object and then get assisting information regarding the spatial relationship 10 between the selected first object 8a and the second object 8b. An operator controlling a remote site is thus provided with information that assists in determining the spatial relationship 10 between objects in the scenery 4. If the user 7 is to control the movement of the second object 14, it may be sufficient to supply relative information in the display 2, rather than seeing the scenario in reality, because the user 7 will in any case apprehend when the objects are closing in on each other and whether any rotations are needed.
Different examples of spatial relationships 10 are further discussed below.
According to some aspects, when the display 2 is arranged in a headset 11, the headset 11 comprises at least one movement sensor for detecting movement of the headset 11, and the processing circuitry 9 is adapted to receive sensor data from the at least one movement sensor, to detect a movement of the headset 11 based on the received sensor data, to determine the size and direction of the movement, and to move the displayed part of the video data in correspondence to the determined size and direction of the movement of the headset 11. In other words, the processing circuitry 9 displays part of the filmed scenery 4 and the user 7 may move the displayed part by moving their head with the headset 11. The user 7 wearing the headset 11 can thus look around in the scenery 4 using the headset 11. The movement sensor is for example an IR sensor, a camera, or an Inertial Measurement Unit, IMU, a unit consisting of both an accelerometer and a gyro, and possibly a magnetometer.
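A simple way to picture the moving of the displayed part is to treat the wide camera frame as a panorama and let the headset's yaw and pitch select which crop is shown. The sketch below assumes a fixed pixels-per-degree mapping and small rotations; a real headset renderer would typically reproject the view rather than crop it.

```python
import numpy as np

PIXELS_PER_DEGREE = 20.0   # assumed mapping from head rotation to image shift

def select_view(frame, yaw_deg, pitch_deg, view_w, view_h):
    """Crop the part of the wide frame that corresponds to the current head pose."""
    h, w = frame.shape[:2]
    cx = w / 2 + yaw_deg * PIXELS_PER_DEGREE     # horizontal pan follows yaw
    cy = h / 2 - pitch_deg * PIXELS_PER_DEGREE   # vertical pan follows pitch
    x0 = int(np.clip(cx - view_w / 2, 0, w - view_w))
    y0 = int(np.clip(cy - view_h / 2, 0, h - view_h))
    return frame[y0:y0 + view_h, x0:x0 + view_w]
```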
Figure 4 shows another example view on a display 2. The two bottom images represent each part of a stereoscopic image 15a, 15b with a fixed cursor 12 in the displayed video data. According to some aspects, the processing circuitry 9 is adapted to arrange a cursor 12 at a predetermined fixed position on the display 2, the fixed position being fixed relative to the display 2. The cursor 12 thus always stays in the fixed position, but it may also be fixated on a marked object. A fixed cursor 12 has the advantage that the user 7 does not have to keep track of its position; it is always in the same position on the display 2. According to some aspects, the cursor 12 is maintained in its fixed position on the display 2 as the displayed part of the video data is moved. The user 7 can thus use the headset 11 to look around in the scenery 4, and the cursor 12 will remain fixed so that the user 7 may move her/his head so that the cursor 12 is indicating objects. In figure 5, it is illustrated that a cursor 12 can also be put to indicate a marked object by arranging it in association with the object. There may thus be one cursor 12 which is fixed relative to the display 2 and one cursor 12 that is fixed relative to the marked object.

According to some aspects, the selection arrangement 6 comprises the cursor 12 and an actuation device such that objects are selected by the user 7 moving their head such that the cursor 12 is indicating an object and then actuating the actuation device. In other words, the cursor 12 is arranged in a fixed location relative to the display 2; when the user 7 moves their head around, the cursor 12 can be put over an object the user 7 wishes to mark. The user 7 actuates the actuation device to mark the indicated object. The user 7 can thus use the headset 11 to move the cursor 12 so that it is indicating a desired object and then actuate the actuation device.

Another example is that the selection arrangement 6 may comprise an eye tracking sensor and an actuation device such that objects are selected by the user 7 directing their eyes at an object and then actuating the actuation device. The user 7 thus marks an object by looking at it and then actuating the actuation device. A cursor 12 may be arranged where the user 7 is looking so that it is clear to the user 7 what she/he is marking. The cursor 12 thus follows the gaze of the user 7, i.e. the cursor 12 is arranged by the processing circuitry 9 on the part of the display 2 where the user 7 is looking. In this case, the user 7 does not need to move their head; the eyes can be used to indicate an object in the video data.

The actuation device may be any kind of device with which the user 7 can indicate a selection. According to some aspects, the actuation device is any one of: a button which is actuated by pressing it, a movement sensor which is actuated by a predetermined movement, or an eye tracking sensor which is actuated by a predetermined eye movement. The actuation device is thus easily used by the user 7. A movement sensor, for example, detects movement of a part of the controller or a movement of a part of the user's body, e.g. a head nod, a head shake or an arm gesture. The movement sensor is for example a lever. The predetermined eye movement is for example a blink or quickly looking away and back again.

The selection arrangement 6 may comprise only an eye tracking sensor. Thus, according to some aspects, the selection arrangement 6 comprises an eye tracking sensor such that objects are selected with predetermined eye movements. A cursor 12 may be arranged where the user 7 is looking to make it clear to the user 7 which object the system 1 interprets they are looking at. The marking of an object may thus be done with eye tracking only, which leaves the hands of the operator free to control other things. The actuation by the user 7 is thus the predetermined eye movements.
The first and second object 13, 14 may physically change position, so the system 1 may repeatedly update the relevant steps to provide an up-to-date spatial relationship 10. Thus, according to some aspects, the processing circuitry 9 is adapted to repeatedly update the determining of the positions of the first and second object 8a, 8b based on the video data and distance data, the calculating of the spatial relationship 10 between the first and second object, and the displaying of the spatial relationship 10, to account for movement of the first and second object. This is particularly important if the user 7 controls the movement of one of the objects; the user 7 then requires continuous feedback on the movement so that they do not bump into other objects in the real-world scenery 4.
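In essence, the repeated updating is a render loop: re-estimate the positions, recompute the relationship, redraw the overlay. A hedged sketch follows; determine_positions, compute_relationship and draw_overlay are hypothetical helpers standing in for the steps described earlier, and the update period is an arbitrary choice.

```python
import time

def run_update_loop(get_frame, get_distance_data, determine_positions,
                    compute_relationship, draw_overlay, period_s=0.033):
    """Keep the displayed spatial relationship up to date as objects move."""
    while True:
        frame = get_frame()                      # latest video frame
        distance_data = get_distance_data()      # latest distance measurements
        p_first, p_second = determine_positions(frame, distance_data)
        relationship = compute_relationship(p_first, p_second)
        draw_overlay(frame, relationship)        # redraw line/text on the display
        time.sleep(period_s)                     # roughly 30 updates per second
```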
Figures 6 to 12 illustrate different ways to present the spatial relationship 10 and also to provide assisting information to the user 7. Figure 6 shows another example view on a display 2 where the spatial relationship 10 is indicated as a line between the first and second object 8a, 8b and as information in text in association to the line. According to some aspects, the processing circuitry 9 is adapted to display the spatial relationship 10 in association to the first and second object in the displayed at least part of the video data. The spatial relationship 10 is thus displayed where it is easy for the user 7 to see which objects the spatial relationship 10 is referring to.

When the user 7 is wearing a headset 11 with the display 2 and can look around in the video data using movement sensors, the processing circuitry 9 may be adapted to move the displayed spatial relationship 10 between the first and second object 8a, 8b correspondingly when the displayed part of the video data is moved in correspondence to the determined size and direction of the movement of the headset 11. The displayed spatial relationship 10 thus moves correspondingly with the scenery 4 in the video data.

As can be seen in figure 6, the spatial relationship 10 is for example the distance between the first and second object 8a, 8b, and the distance may be displayed in the at least part of the video data as a line for assisting the user 7 in determining relationships between objects. The distance is thus easy to apprehend for the user 7. If the first or second object 13, 14 is moving closer to the other, it will be apparent to the user 7 that the distance is shortening because the line becomes shorter and shorter.
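Drawing the distance as a line, as in figure 6, can be sketched with OpenCV: project both determined 3D positions to pixel coordinates (for example with a projection like the one sketched earlier), connect them, and label the line with the metric distance. Colour, font and label format are arbitrary choices, not part of the disclosure.

```python
import cv2

def draw_distance_line(frame, px_first, px_second, distance_m):
    """Overlay a line between two image points and annotate it with the distance."""
    p1 = tuple(int(v) for v in px_first)
    p2 = tuple(int(v) for v in px_second)
    cv2.line(frame, p1, p2, color=(0, 255, 0), thickness=2)
    midpoint = ((p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2)
    cv2.putText(frame, f"{distance_m:.2f} m", midpoint,
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```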
Figure 7 shows another example view on a display 2 where the spatial relationship 10 is indicated as a suggested path to move the second object 14 to reach the first object 13 and as information in text in association to the line. Additional data may be displayed in relation to the path. In the example of figure 7, the data may be illustrated in the hexagons. Thus, according to some aspects, the spatial relationship 10 comprises the shortest path to bring the second object 14 to the position of the first object 13, and the path is displayed in the at least part of the video data for assisting the user 7 in controlling the movement of the second object 14 to the position of the first object 13. The path may be non-linear to take into account obstacles that would be in the way of moving the first and second object together via a straight path. This is advantageous if the user 7 is an operator who is to move one of the first and second object to the other. The user 7 will then be presented with a good option for how to move the object in an efficient way while avoiding obstacles.

According to some aspects, the processing circuitry 9 is adapted to display a grid of lines over the displayed at least part of the video data for assisting the user 7 in determining relationships between objects. The grid is useful for the user 7 to more easily see different relationships and proportions between objects. The grid can be in different coordinate systems, e.g. Cartesian, circular or oval coordinates. As can be seen in figures 8 and 9, Cartesian or polar coordinate systems may be used. Cartesian, cylindrical and spherical coordinate systems may also be used in a 3D grid. In the examples of the figures, virtual floors are displayed to assist the user 7 in comprehending the space of the scenery 4. Virtual walls may also be used. In figures 8 and 9 the spatial relationship 10 is indicated as information in text on a grid.
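The grid overlay can be illustrated by sampling lines on a virtual floor plane in world coordinates and projecting them into the image with the camera calibration. The sketch below uses cv2.projectPoints and assumes a known camera pose (rvec, tvec) and intrinsic matrix K; grid spacing and extent are placeholders, and the floor is assumed to lie in front of the camera.

```python
import cv2
import numpy as np

def draw_floor_grid(frame, rvec, tvec, K, spacing=0.5, extent=5.0):
    """Overlay a Cartesian grid lying on the z=0 floor plane of the world frame."""
    dist_coeffs = np.zeros(5)                       # assume no lens distortion
    ticks = np.arange(-extent, extent + spacing, spacing)
    for t in ticks:
        segments = [((t, -extent, 0.0), (t, extent, 0.0)),   # lines along y
                    ((-extent, t, 0.0), (extent, t, 0.0))]    # lines along x
        for a, b in segments:
            pts3d = np.float32([a, b]).reshape(-1, 1, 3)
            pts2d, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist_coeffs)
            (u1, v1), (u2, v2) = pts2d.reshape(-1, 2)
            cv2.line(frame, (int(u1), int(v1)), (int(u2), int(v2)),
                     (200, 200, 200), 1)
    return frame
```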
Figure 10 shows another example view on a display 2 where the spatial relationship 10 is indicated as information in text and as a hyperrectangle in the Cartesian coordinate system. According to some aspects, the spatial relationship 10 is in Cartesian coordinates and the processing circuitry 9 is adapted to display the spatial relationship 10 as a hyperrectangle for assisting the user 7 in determining relationships between objects.

According to some aspects, the spatial relationship 10 is in spherical coordinates and the processing circuitry 9 is adapted to display the spatial relationship 10 as a spherical sector for assisting the user 7 in determining relationships between objects.

Figure 11 shows another example view on a display 2 where the spatial relationship 10 is indicated as information in text and as a cylinder sector in a cylindrical coordinate system. According to some aspects, the spatial relationship 10 is in cylindrical coordinates, the spatial relationship 10 is a distance and a difference in angle between the first and second object 8a, 8b, and the processing circuitry 9 is adapted to display the distance and angle over the at least part of the video data as a three-dimensional cylinder sector for assisting the user 7 in determining relationships between objects. Depending on what kind of scenery 4 is viewed, different coordinate systems may be used.
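Expressed in code, the cylindrical form of the spatial relationship reduces to converting the two Cartesian positions into radius, angle and height and taking the differences. A small sketch, assuming the cylinder axis is the vertical z axis through a chosen origin:

```python
import math

def cylindrical_relationship(p_first, p_second):
    """Offset between two 3D points as (radial distance, angle difference in degrees, height difference)."""
    r1 = math.hypot(p_first[0], p_first[1])
    r2 = math.hypot(p_second[0], p_second[1])
    phi1 = math.atan2(p_first[1], p_first[0])
    phi2 = math.atan2(p_second[1], p_second[0])
    # Wrap the angle difference into (-180, 180] degrees.
    d_phi = math.degrees((phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi)
    return r1 - r2, d_phi, p_first[2] - p_second[2]
```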
The spherical sector and the cylinder sector may be hollow.
Figure 12 shows another example view on a display 2 where the spatial relationship 10 is indicated as information in text and on a grid in cylindrical coordinates. The figure shows a cylindrical coordinate system with the origin placed at a viewer position, at an object position or at a system position. The intent of this figure is to show that the viewing position, and therefore the position or setting of the coordinate system, can be varied.
At least some of the spatial relationship 10 may be expressed in written numbers or visualized by graphic elements such as the size of bars, a graph or a cylinder. The bars may be straight or curved. Each of these may display one coordinate in the selected coordinate system. This further assists the user 7.

Reference list
1. System
2. Display
3. Camera
4. Scenery
5. Distance determining arrangement
6. Selection arrangement
7. User
8. Objects on the display
   a. First object
   b. Second object
9. Processing circuitry
10. Spatial relationship
11. Headset
12. Cursor
13. First object
14. Second object
15. Video data on display
   a. First part of stereoscopic video
   b. Second part of stereoscopic video

Claims (14)

1. A system (1) for providing a telepresence, the system (1) comprises:
- a display (2),
- at least one camera (3) arranged to capture video data of a scenery (4) so that video data of the scenery (4) is obtained,
- a distance determining arrangement (5) for determining a distance to an object in the scenery (4) to obtain distance data,
- a selection arrangement (6) adapted to be actuated by a user (7) to select objects (8) displayed on the display (2),
- processing circuitry (9) arranged in communication with the at least one camera (3) and the distance determining arrangement (5), the processing circuitry (9) being adapted to:
   o display at least part of the video data on the display (2),
   o mark a first object (8a), in the at least part of the video data, indicated by the selection arrangement (6) upon detecting an actuation of the selection arrangement (6),
   o obtain a marking of a second object (8b), in the at least part of the video data,
   o determine the positions of the first and second object (8a, 8b),
   o calculate a spatial relationship (10) between the first and second object (8a, 8b) based on the determined positions, and
   o display the spatial relationship (10) between the first and second object (8a, 8b) on the display (2) in association to the displayed at least part of the video data.
2. The system (1) according to claim 1, wherein the display (2) is arranged in a headset (11).
3. The system (1) according to claim 2, wherein the headset (11) comprises at least one movement sensor for detecting movement of the headset (11) and wherein the processing circuitry (9) is adapted to:
   o receive sensor data from the at least one movement sensor,
   o detect a movement of the headset (11) based on the received sensor data,
   o determine the size and direction of the movement,
   o move the displayed part of the video data in correspondence to the determined size and direction of the movement of the headset (11).
4. The system (1) according to any one of the preceding claims, wherein the processing circuitry (9) is adapted to:
   o arrange a cursor (12) at a predetermined fixed position on the display (2), the fixed position being fixed relative to the display (2).

5. The system (1) according to claim 3 and 4, wherein the cursor (12) is maintained in its fixed position on the display (2) as the displayed part of the video data is moved.

6. The system (1) according to claim 5, wherein the selection arrangement (6) comprises the cursor (12) and an actuation device such that objects are selected by the user (7) moving their head such that the cursor (12) is indicating an object and then actuating the actuation device.

7. The system (1) according to any one of claims 1 to 3, wherein the selection arrangement (6) comprises an eye tracking sensor and an actuation device such that objects are selected by the user (7) directing their eyes at an object and then actuating the actuation device.

8. The system (1) according to claim 6 or 7, wherein the actuation device is any one of: a button which is actuated by pressing it, a movement sensor which is actuated by a predetermined movement, an eye tracking sensor which is actuated by a predetermined eye movement.

9. The system (1) according to any one of claims 1 to 3, wherein the selection arrangement (6) comprises an eye tracking sensor such that objects are selected with predetermined eye movements.

10. The system (1) according to any one of the preceding claims, wherein the processing circuitry (9) is adapted to:
   o repeatedly update the determining positions of the first and second object (8a, 8b) based on the video data and distance data, the calculating spatial relationship (10) between the first and second object (8a, 8b) and the displaying the spatial relationship (10) to account for movement of the first and second object.

11. The system (1) according to any of the preceding claims, wherein the processing circuitry (9) is adapted to:
   o display the spatial relationship (10) in association to the first and second object (8a, 8b) in the displayed at least part of the video data.

12. The system (1) according to claim 3 and 11, wherein the processing circuitry (9) is adapted to:
   o move the displayed spatial relationship (10) between the first and second object (8a, 8b) correspondingly when the displayed part of the video data is moved in correspondence to the determined size and direction of the movement of the headset (11).

13. The system (1) according to any one of the preceding claims, wherein the distance determining arrangement (5) for determining a distance to an object in the scenery (4) to obtain distance data is an additional camera which is used in combination with the at least one camera (3) to obtain stereoscopic video data from which the distance data is calculated.

14. The system (1) according to any one of claims 1 to 12, wherein the distance determining arrangement (5) for determining a distance to an object in the scenery (4) to obtain distance data is any one of: a lidar, a time-of-flight camera and radar.
SE1950623A 2019-05-27 2019-05-27 System for providing a telepresence SE1950623A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
SE1950623A SE1950623A1 (en) 2019-05-27 2019-05-27 System for providing a telepresence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1950623A SE1950623A1 (en) 2019-05-27 2019-05-27 System for providing a telepresence

Publications (1)

Publication Number Publication Date
SE1950623A1 (en) 2020-11-28

Family

ID=74849282

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1950623A SE1950623A1 (en) 2019-05-27 2019-05-27 System for providing a telepresence

Country Status (1)

Country Link
SE (1) SE1950623A1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010037163A1 (en) * 2000-05-01 2001-11-01 Irobot Corporation Method and system for remote control of mobile robot
US8442661B1 (en) * 2008-11-25 2013-05-14 Anybots 2.0, Inc. Remotely controlled self-balancing robot including a stabilized laser pointer
US20110301786A1 (en) * 2010-05-12 2011-12-08 Daniel Allis Remote Vehicle Control System and Method
US20120072023A1 (en) * 2010-09-22 2012-03-22 Toyota Motor Engineering & Manufacturing North America, Inc. Human-Robot Interface Apparatuses and Methods of Controlling Robots
US20120215380A1 (en) * 2011-02-23 2012-08-23 Microsoft Corporation Semi-autonomous robot that supports multiple modes of navigation
US20130231779A1 (en) * 2012-03-01 2013-09-05 Irobot Corporation Mobile Inspection Robot
US20150190925A1 (en) * 2014-01-07 2015-07-09 Irobot Corporation Remotely Operating a Mobile Robot
US20180164802A1 (en) * 2015-03-06 2018-06-14 Alberto Daniel Lacaze Point-and-Click Control of Unmanned, Autonomous Vehicle Using Omni-Directional Visors
US20190051054A1 (en) * 2017-08-08 2019-02-14 Smart Picture Technologies, Inc. Method for measuring and modeling spaces using markerless augmented reality

Similar Documents

Publication Publication Date Title
JP5966510B2 (en) Information processing system
US8711218B2 (en) Continuous geospatial tracking system and method
JP6619871B2 (en) Shared reality content sharing
US11865448B2 (en) Information processing apparatus and user guide presentation method
EP3117290B1 (en) Interactive information display
US20160089980A1 (en) Display control apparatus
WO2014141504A1 (en) Three-dimensional user interface device and three-dimensional operation processing method
WO2017213070A1 (en) Information processing device and method, and recording medium
US11228737B2 (en) Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium
EP3005303B1 (en) Method and apparatus for rendering object for multiple 3d displays
CN108027700A (en) Information processor
US10437874B2 (en) Searching image content
KR20170062439A (en) Control device, control method, and program
US9001186B2 (en) Method and device for combining at least two images to form a panoramic image
JP2019125215A (en) Information processing apparatus, information processing method, and recording medium
CN113467731A (en) Display system, information processing apparatus, and display control method for display system
CN113677412A (en) Information processing apparatus, information processing method, and program
SE1950623A1 (en) System for providing a telepresence
US11475606B2 (en) Operation guiding system for operation of a movable device
JP6699944B2 (en) Display system
US10642349B2 (en) Information processing apparatus
US20220230357A1 (en) Data processing
JP2018074420A (en) Display device, display system, and control method for display device
JP2020031413A (en) Display device, mobile body, mobile body control system, manufacturing method for them, and image display method
KR20230124363A (en) Electronic apparatus and method for controlling thereof

Legal Events

Date Code Title Description
NAV Patent application has lapsed