WO2017056632A1 - Information processing apparatus and information processing method - Google Patents
Information processing apparatus and information processing method
- Publication number
- WO2017056632A1 (PCT application PCT/JP2016/070483)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ghost
- information processing
- processing apparatus
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1069—Session establishment or de-establishment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1083—In-session procedures
- H04L65/1089—In-session procedures by adding media; by removing media
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/30—Profiles
- H04L67/306—User profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/52—Network services specially adapted for the location of the user terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/482—End-user interface for program selection
- H04N21/4828—End-user interface for program selection for searching program descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- The technology disclosed in this specification relates to an information processing apparatus and information processing method for matching users with each other, for example, for matching a user who provides first-person video with a user who views that video.
- Technologies are known in which a user accesses a field of view other than his or her own (a scene seen from a moving body other than the user).
- For example, a mobile camera system has been proposed that remotely acquires images captured by a mobile camera mounted on a moving body such as a vehicle (see, for example, Patent Document 1).
- An image processing system has been proposed that provides a head-mounted display wearer with information similar to the visual information acquired by a person wearing glasses fitted with an imaging/sensing wireless device (see, for example, Patent Document 2).
- An image display system has been proposed in which a display device showing the image captured by a mobile imaging device can designate to that device the viewpoint position, line-of-sight direction, and speed to be used for shooting (see, for example, Patent Document 3).
- A telepresence technique has also been proposed that provides an interface for operating a remote object while conveying a sense of being at the place through the vision of a remote robot (see, for example, Patent Document 4).
- The purpose of the technology disclosed in this specification is to provide an excellent information processing apparatus and information processing method capable of matching users with each other.
- The technology disclosed in this specification was made in consideration of the above problems. Its first aspect is an information processing apparatus including a control unit that controls the connection between a first apparatus that transmits an image and a second apparatus that receives the image, according to which of the two apparatuses takes the lead.
- When the first apparatus leads the connection, the control unit notifies the first apparatus, which is in a waiting state, upon receiving a connection request from the second apparatus, and starts image transmission from the first apparatus to the second apparatus.
- When the second apparatus leads the connection, the control unit notifies the first apparatus of the connection request from the second apparatus and starts image transmission from the first apparatus to the second apparatus.
- When a plurality of second apparatuses lead connections to the first apparatus, the control unit notifies the first apparatus only when the connection requests from the plurality of second apparatuses satisfy a predetermined start condition, and then starts image transmission from the first apparatus to the plurality of second apparatuses.
- Together with the start of image transmission from the first apparatus to the second apparatus, the control unit controls intervention from the second apparatus into the first apparatus.
- A further aspect is an information processing method having a control step that controls the connection between a first apparatus that transmits an image and a second apparatus that receives the image, according to which of the two becomes the leading device.
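Purely as an illustration (not part of the disclosed claims), the connection control in these aspects can be sketched as a small matching server. All names here (`MatchingServer`, `start_threshold`, and so on) are hypothetical, and the real mediation between the devices would involve actual signaling and streaming.

```python
from dataclasses import dataclass, field

@dataclass
class Body:
    body_id: str
    waiting: bool = False          # True after a Body-initiated start (waiting for Ghosts)
    connected_ghosts: list = field(default_factory=list)

class MatchingServer:
    """Hypothetical server mediating connections between Bodies and Ghosts."""

    def __init__(self, start_threshold: int = 1):
        self.bodies: dict[str, Body] = {}
        self.pending: dict[str, list[str]] = {}   # body_id -> queued ghost_ids
        self.start_threshold = start_threshold    # e.g. N requests required to start

    def body_initial_start(self, body_id: str) -> None:
        # Body leads: register and wait for incoming Ghost connection requests.
        self.bodies[body_id] = Body(body_id, waiting=True)

    def ghost_request(self, ghost_id: str, body_id: str) -> bool:
        """Returns True when image transmission starts for this request."""
        body = self.bodies.setdefault(body_id, Body(body_id))
        if body.waiting:
            # Body-initiated: notify the waiting Body and start transmission at once.
            body.connected_ghosts.append(ghost_id)
            return True
        # Ghost-initiated: queue until the predetermined start condition is satisfied.
        queue = self.pending.setdefault(body_id, [])
        queue.append(ghost_id)
        if len(queue) >= self.start_threshold:
            body.connected_ghosts.extend(queue)
            queue.clear()
            return True          # notify Body, begin image transmission
        return False             # keep waiting
```

With `start_threshold=1` a Ghost-initiated request starts immediately; a higher threshold models the aspect in which the Body is notified only after enough Ghosts have requested a connection.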
- Another aspect is an information processing apparatus including a selection unit that selects, based on position information of first apparatuses, a first apparatus that transmits an image to a second apparatus.
- The selection unit presents a UI indicating the positions of the first apparatuses on a map.
- The selection unit may select the first apparatus in further consideration of user behavior.
- The selection unit may present on the UI only the first apparatuses extracted based on a user's behavior.
- The selection unit may present, using the UI, information related to intervention in the first apparatus.
- A thirteenth aspect of the technology disclosed in this specification is an information processing method including a selection step of selecting, based on position information of first apparatuses, a first apparatus that transmits an image to a second apparatus.
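As an illustrative sketch only (function and field names such as `select_bodies` are hypothetical), the position-based selection behind a map UI can be thought of as filtering registered first apparatuses by distance from a point of interest, optionally restricted by the reported user activity:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def select_bodies(bodies, center, radius_km, activity=None):
    """Return Body ids near `center`, nearest first, optionally filtered by
    the user's current activity, for display as icons on a map UI."""
    lat0, lon0 = center
    hits = []
    for b in bodies:
        if activity and b.get("activity") != activity:
            continue
        d = haversine_km(lat0, lon0, b["lat"], b["lon"])
        if d <= radius_km:
            hits.append((d, b["id"]))
    return [bid for _, bid in sorted(hits)]
```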
- Another aspect is an information processing apparatus including a selection unit that selects, based on the behavior of a first apparatus's user, a first apparatus that transmits an image to a second apparatus.
- The selection unit presents a UI indicating information about the image transmitted from the first apparatus.
- The selection unit may present on the UI information related to the first apparatus or its user.
- The selection unit may present on the UI only images transmitted from first apparatuses extracted based on a user's behavior.
- An eighteenth aspect of the technology disclosed in this specification is an information processing method including a selection step of selecting, based on the behavior of a first apparatus's user, a first apparatus that transmits an image to a second apparatus.
- A nineteenth aspect of the technology disclosed in this specification is an information processing apparatus including a selection unit that selects, based on information about a second apparatus or its user, the second apparatus to which the first apparatus transmits an image.
- The selection unit presents a UI indicating information related to the second apparatus or its user.
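As a purely illustrative sketch of this aspect (all names, such as `select_ghosts` and the profile fields, are hypothetical assumptions, not part of the disclosure), the Body-side selection of which second apparatuses receive the image could rank candidate Ghosts by profile information:

```python
def select_ghosts(ghost_profiles, required_skill=None, min_rating=0.0, limit=None):
    """Rank candidate Ghosts by profile info (skill tags, rating) and pick
    which of them may receive the first-person image."""
    candidates = [
        g for g in ghost_profiles
        if g.get("rating", 0.0) >= min_rating
        and (required_skill is None or required_skill in g.get("skills", ()))
    ]
    candidates.sort(key=lambda g: g["rating"], reverse=True)
    return [g["id"] for g in (candidates[:limit] if limit else candidates)]
```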
- FIG. 1 is a diagram illustrating an overview of a view information sharing system 100 to which the technology disclosed in this specification is applied.
- FIG. 2 is a diagram schematically showing a one-to-N network topology.
- FIG. 3 is a diagram schematically showing an N-to-1 network topology.
- FIG. 4 is a diagram schematically showing an N-to-N network topology.
- FIG. 5 is a diagram illustrating a functional configuration example of the image providing apparatus 101 and the image display apparatus 102.
- FIG. 6 is a diagram schematically illustrating a start flow by Body initial start.
- FIG. 7 is a diagram schematically showing a start flow by ghost initial start.
- FIG. 8 is a diagram showing a UI display example for selecting a Body.
- FIG. 9 is a diagram showing a UI display example for selecting a Body.
- FIG. 10 is a diagram illustrating a UI display example for selecting a Body.
- FIG. 11 is a diagram showing a UI display example for selecting the Body.
- FIG. 12 is a diagram illustrating a UI display example for selecting a Body.
- FIG. 13 is a diagram illustrating a tag displayed on the Body selection UI.
- FIG. 14A is a diagram illustrating a UI display example for selecting a Body.
- FIG. 14B is a diagram illustrating a UI display example for selecting a Body.
- FIG. 15 is a diagram illustrating an example of a UI used by Body to select ghost.
- FIG. 1 shows an overview of a view information sharing system 100 to which the technology disclosed in this specification is applied.
- the view information sharing system 100 shown in the figure is configured by a combination of an image providing apparatus 101 that provides an image obtained by photographing a site and an image display apparatus 102 that displays an image provided from the image providing apparatus 101.
- the image providing apparatus 101 is specifically configured by a see-through head mounted display with a camera that is worn on the head of an observer 111 who is actually active at the site.
- the "see-through type" head-mounted display here is basically an optical transmission type, but may be a video see-through type.
- the camera mounted on the head-mounted display captures the direction of the sight line of the observer 111 and provides a first person video (FPV: First Person View).
- The image display apparatus 102 is located away from the site, that is, apart from the image providing apparatus 101, and the two apparatuses communicate via a network.
- The term "apart" here covers not only remote places but also situations where the devices are only slightly separated (for example, by several meters) within the same room. Data may also be exchanged between the image providing apparatus 101 and the image display apparatus 102 via a server (not shown).
- The image display device 102 is, for example, a head-mounted display worn by a person 112 (a viewer of the captured image) who is not at the site. With an immersive head-mounted display, the viewer 112 can experience the same scene as the observer 111 more realistically; however, a see-through head-mounted display may also be used for the image display device 102.
- The image display device 102 is not limited to a head-mounted display and may be, for example, a wristwatch-type display. Nor does it need to be a wearable terminal at all: it may be a multi-function information terminal such as a smartphone or tablet, a general monitor display such as a computer screen or television receiver, a game machine, or a projector that projects an image onto a screen.
- Since the observer 111 is actually at the site and active with his or her own body, the observer 111 (or the image providing apparatus 101), the user of the image providing apparatus 101, is hereinafter also called "Body". In contrast, the viewer 112 is not active on the spot but gains awareness of the site by watching the observer 111's first-person video; the viewer 112 (or the image display device 102), the user of the image display device 102, is hereinafter also called "Ghost".
- The start flow of JackIn is broadly divided into the case where the Body takes the initiative (Body initial start) and the case where the Ghost takes the initiative (Ghost initial start). Details of the JackIn start flow are described later.
- The view information sharing system 100 has the basic functions of transmitting first-person video from the Body to the Ghost, viewing and experiencing it on the Ghost side, and enabling communication between Body and Ghost.
- The Ghost can interact with the Body through remote intervention: "visual intervention" that intervenes in the Body's field of view, "auditory intervention" that intervenes in the Body's hearing, "physical intervention" that operates or stimulates the Body's body or a part of it, and "alternative conversation" in which the Ghost speaks on site in place of the Body.
- There are thus multiple communication channels; the details of "visual intervention", "auditory intervention", "physical intervention", and "alternative conversation" are described later.
- Through these channels, the Ghost can instruct the Body on how to act in the field.
- The view information sharing system 100 can be utilized, for example, at medical sites such as surgery, at construction sites such as civil engineering work, for instruction and guidance in operating aircraft and helicopters, for guiding car drivers, and for coaching or instruction in sports.
- A Body carries out JackIn with a corresponding Ghost (Body initial start) when it wants to share its field of view with others, or when it wants to receive (or must receive) support, instructions, or guidance from others for the work it is currently doing.
- Conversely, a Ghost carries out JackIn with a corresponding Body (Ghost initial start) when it wants to watch on-site video (another person's first-person video) without going out, or when it wants to provide (or must provide) support, instructions, or guidance for work being done by another person.
- FIG. 1 depicts a one-to-one network topology of Body and Ghost, in which only one image providing apparatus 101 and one image display apparatus 102 exist.
- Also assumed are a one-to-N network topology in which one Body and multiple (N) Ghosts JackIn simultaneously, as shown in FIG. 2, and an N-to-1 network topology in which multiple (N) Bodies and one Ghost JackIn simultaneously, as shown in FIG. 3.
- A network topology (not shown) is further assumed in which one device, while JackIn to a Body as a Ghost, simultaneously functions as a Body for another Ghost, with three or more devices daisy-chained.
- In any topology, a server (not shown) may be interposed between Body and Ghost.
- FIG. 5 shows a functional configuration example of the image providing apparatus 101 and the image display apparatus 102.
- The image providing apparatus 101 is a device provided for use by the user who plays the role of Body (the observer 111).
- The image providing apparatus 101 includes an imaging unit 501, an image processing unit 502, a display unit 503 serving as an output unit, a first audio output unit 504, a drive unit 505, a second audio output unit 506, a position detection unit 507, a communication unit 508, a control unit 509, and an authentication unit 510.
- the imaging unit 501 is composed of a camera that shoots a first person video of Body.
- The imaging unit 501 is attached to the head of the observer 111 so as to photograph the Body's, that is, the observer 111's, line-of-sight direction.
- An omnidirectional camera may be used as the imaging unit 501 to provide a 360-degree all-around image of the Body's surroundings.
- The all-around image does not necessarily need to cover 360 degrees; part of the field of view may be missing.
- It may also be a hemispherical image that excludes the floor surface, which contains little information (the same applies hereinafter).
- the image processing unit 502 processes the image signal output from the imaging unit 501.
- Since the Body looks around and changes its line-of-sight direction of its own volition, the Ghost would otherwise watch video with intense shaking, raising concerns about health hazards such as VR sickness or motion sickness.
- The image processing unit 502 therefore artificially constructs the surrounding space from the continuous images of the Body's first-person video captured by the imaging unit 501.
- Specifically, the image processing unit 502 performs real-time space recognition based on SLAM (Simultaneous Localization and Mapping) technology on the video (all-around image) captured by the imaging unit 501, spatially joins the current video frame with past video frames, and renders video from the viewpoint of a virtual camera controlled by the Ghost.
- The video rendered from the virtual camera viewpoint is a pseudo out-of-body viewpoint video rather than the Body's first-person video. The Ghost can thus observe the Body's surroundings independently of the Body's movement, so the image shaking can be stabilized to prevent VR sickness, and the Ghost can look at places the Body is not focusing on.
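The key idea above, decoupling what the Ghost sees from the Body's head motion, can be illustrated with a simplified sketch (assumptions: an equirectangular stitched panorama and a hypothetical `view_window` helper; the disclosure's actual SLAM-based rendering is far richer):

```python
def view_window(panorama_w, panorama_h, yaw_deg, pitch_deg, fov_deg=90):
    """Map a Ghost-controlled virtual-camera orientation onto a crop of a
    stitched 360-degree equirectangular panorama. Because the crop depends
    only on the Ghost's yaw/pitch, not on the Body's current head pose,
    the viewed region stays stable while the Body looks around."""
    win_w = int(panorama_w * fov_deg / 360.0)
    win_h = int(panorama_h * fov_deg / 180.0)
    cx = int((yaw_deg % 360.0) / 360.0 * panorama_w)
    cy = int((90.0 - pitch_deg) / 180.0 * panorama_h)  # pitch +90 = straight up
    x0 = (cx - win_w // 2) % panorama_w                # wraps around horizontally
    y0 = max(0, min(panorama_h - win_h, cy - win_h // 2))
    return x0, y0, win_w, win_h
```

A renderer would cut this window out of the stitched frame buffer each display refresh, driven by the Ghost's head tracking rather than by the Body's camera motion.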
- The display unit 503 displays information sent from the image display device 102, realizing intervention by the Ghost in the Body's field of view.
- Specifically, the display unit 503 superimposes an AR (Augmented Reality) image expressing the consciousness of the Ghost, who shares the first-person experience with the Body, onto the field of view of the observer 111 (that is, the real-world landscape).
- The AR image includes, for example, a pointer or annotation indicating a location pointed to by the Ghost. The Ghost can therefore intervene in the Body's field of view through this communication and interact with the Body in the field.
- The first audio output unit 504 consists of, for example, earphones or headphones, and lets the Body hear information sent from the image display device 102, realizing intervention by the Ghost in the Body's hearing. The image display device 102 transmits information about the consciousness of the Ghost sharing the first-person experience with the Body; on the image providing apparatus 101 side, the received information is converted into an audio signal and output from the first audio output unit 504 so that the Body, that is, the observer 111, hears it. Alternatively, an audio signal of the Ghost speaking during the first-person experience may be transmitted from the image display device 102 as-is.
- In that case, the received audio signal is output unchanged from the first audio output unit 504 for the Body, that is, the observer 111, to hear.
- The volume, quality, output timing, and the like of the sound output from the first audio output unit 504 may be adjusted as appropriate.
- Image information and character information received from the image display device 102 may also be converted into an audio signal and output as sound from the first audio output unit 504. The Ghost can therefore intervene in the Body's hearing through this communication and interact with the Body in the field.
- The drive unit 505 operates or stimulates the Body's body or a part of it, realizing intervention by the Ghost in the Body's body.
- The drive unit 505 includes, for example, an actuator that applies a tactile sensation or a slight electrical stimulus (not harmful to health) to the body of the observer 111.
- Alternatively, the drive unit 505 consists of a device that assists or restrains body movement by driving a power suit or exoskeleton worn by the observer 111 on an arm, hand, leg, or the like (see, for example, Patent Document 5). The Ghost can therefore intervene in the Body's body through this communication and interact with the Body in the field.
- the second audio output unit 506 is composed of, for example, a wearable speaker worn by Body, and outputs information or an audio signal received from the image display device 102 to the outside.
- The sound output from the second audio output unit 506 is heard at the site as if the Body itself were speaking. The Ghost can therefore talk with people at the Body's location or give voice instructions in place of the Body (alternative conversation).
- the position detection unit 507 detects current position information of the image providing apparatus 101 (that is, Body) using, for example, a GPS (Global Positioning System) signal.
- the detected position information is used, for example, when searching for a Body at a location desired by ghost (described later).
- The communication unit 508 is interconnected with the image display device 102 via a network, transmits the first-person video captured by the imaging unit 501 together with spatial information, and carries the communication between the two devices.
- the communication means of the communication unit 508 may be either wireless or wired, and is not limited to a specific communication standard.
- The authentication unit 510 performs authentication processing on the image display device 102 (or its user, the Ghost) interconnected via the network, and determines which output units may output information from the image display device 102. The control unit 509 then controls the output operations of those output units according to the authentication result.
- The control unit 509 has functions corresponding to, for example, a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit).
- For example, depending on the authentication result, the control unit 509 may execute only display output from the display unit 503, or may execute audio output from the first audio output unit 504 together with display output from the display unit 503.
- The range of intervention from the Ghost that the Body permits is defined as the permission level.
- The range in which the Ghost intervenes in the Body is defined as the mission level (described later).
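A minimal sketch of how permission and mission levels could gate intervention, assuming (this ordering is an assumption of the sketch, not stated by the disclosure) that the channels can be ranked by intrusiveness:

```python
from enum import IntEnum

class InterventionLevel(IntEnum):
    """Hypothetical ordered intervention channels: higher = more intrusive."""
    NONE = 0
    VISUAL = 1       # pointers/annotations overlaid on the Body's view
    AUDITORY = 2     # voice sent to the Body's earphones
    BODY = 3         # actuator/exoskeleton stimulation
    ALTERNATIVE_CONVERSATION = 4  # Ghost speaks aloud on site via wearable speaker

def allowed(permission: InterventionLevel, mission: InterventionLevel) -> bool:
    """A Ghost intervention goes through only if the Body's permission
    level covers the Ghost's mission level."""
    return mission <= permission
```

The control unit 509 (or a mediating server) would consult such a check before routing each intervention to the corresponding output unit.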
- The view information sharing system 100 can also be configured so that the above-described processing by the authentication unit 510 and the control unit 509 is executed not by the image providing apparatus 101 but by a server (not shown) interposed between the image providing apparatus 101 and the image display apparatus 102.
- The image display apparatus 102 is a device provided for use by the user who plays the role of Ghost (the viewer 112).
- the image display apparatus 102 includes a communication unit 511, an image decoding unit 512, a display unit 513, a user input unit 514, and a position / orientation detection unit 515.
- the communication unit 511 is interconnected with the image providing apparatus 101 via the network, and receives first person video from the image providing apparatus 101 and communicates with the image providing apparatus 101.
- the communication means of the communication unit 511 may be either wireless or wired and is not limited to a specific communication standard, but is assumed to be consistent with the communication unit 508 on the image providing apparatus 101 side.
- the image decoding unit 512 decodes the image signal received from the image providing apparatus 101 by the communication unit 511.
- The display unit 513 displays the all-around image (the Body's first-person video) decoded by the image decoding unit 512. Note that the processing (described above) for rendering the pseudo out-of-body viewpoint video from the Body's first-person video may be performed by the image decoding unit 512 instead of the image processing unit 502 on the image providing apparatus 101 side.
- the position / orientation detection unit 515 detects the position and orientation of the viewer's 112 head.
- the detected position and orientation correspond to the current viewpoint position and line-of-sight direction of ghost.
- The position and orientation of the viewer 112 detected by the position/orientation detection unit 515 can be used to control the viewpoint position and line-of-sight direction of the virtual camera (described above) when creating the pseudo out-of-body viewpoint video from the Body's first-person video.
- The display unit 513 consists of, for example, a head-mounted display worn by the viewer 112 as the Ghost. With an immersive head-mounted display, the viewer 112 can experience the same scene as the observer 111 more realistically.
- The video watched by the viewer 112, that is, the Ghost, is not the Body's raw first-person video but the surrounding space artificially constructed from its continuous images (the pseudo out-of-body viewpoint video described above). The display angle of view of the display unit 513 can also be moved by controlling the virtual camera to follow the viewpoint position and line-of-sight direction of the viewer 112 detected by Ghost head tracking, that is, by the position/orientation detection unit 515.
- a wearable terminal such as a see-through type head mounted display or a watch type display may be used instead of the immersive type head mounted display.
- The display unit 513 need not be a wearable terminal; it may be a multifunctional information terminal such as a smartphone or tablet, a general monitor display such as a computer screen or television receiver, a game machine, or a projector that projects an image onto a screen.
- The user input unit 514 is a device for the viewer 112, as ghost, to input ghost's own intention and consciousness while observing Body's first-person video displayed on the display unit 513.
- the user input unit 514 includes a coordinate input device such as a touch panel, a mouse, and a joystick.
- Ghost can directly indicate a place of particular interest by touching the screen displaying Body's first-person video or by clicking on it with the mouse. However, an instruction given merely in terms of pixel coordinates of the video being viewed loses its meaning, because the captured video on the Body side changes constantly. Therefore, the user input unit 514 specifies, by image analysis or the like, the position information in three-dimensional space corresponding to the pixel position that ghost designated by touch, click, or the like, and transmits that three-dimensional position information to the image providing apparatus 101. Thus, ghost can perform pointing that is fixed with respect to space rather than to pixel coordinates.
- Alternatively, the user input unit 514 may capture ghost's eye movement from a face image of ghost captured by a camera or from an electro-oculogram, determine the location at which ghost is gazing, and transmit information identifying that location to the image providing apparatus 101.
- Also in this case, the user input unit 514 specifies, by image analysis or the like, the position information in three-dimensional space corresponding to the pixel position at which ghost gazes, and transmits that position information to the image providing apparatus 101. Thus, ghost can perform pointing that is fixed with respect to space rather than to pixel coordinates.
- the user input unit 514 includes a character input device such as a keyboard.
- Ghost can input, as character information, the intention or consciousness he or she wants to convey to Body while having the same first-person experience as Body.
- The user input unit 514 may transmit the character information input by ghost to the image providing apparatus 101 as it is, or may convert it into another signal format such as an audio signal before transmitting it to the image providing apparatus 101.
- the user input unit 514 includes a voice input device such as a microphone, and inputs the voice uttered by ghost.
- the user input unit 514 may transmit the input sound from the communication unit 511 to the image providing apparatus 101 as an audio signal.
- the user input unit 514 may recognize the input voice, convert it into character information, and transmit it to the image providing apparatus 101 as character information.
- Ghost may point to an object using a demonstrative such as "that" or "this" while watching Body's first-person video.
- In that case, the user input unit 514 specifies, through language analysis and image analysis, the position information in three-dimensional space of the object indicated by the demonstrative, and transmits that position information to the image providing apparatus 101. Thus, ghost can perform pointing that is fixed with respect to space rather than to pixel coordinates.
- The user input unit 514 may be a gesture input device that inputs ghost's body and hand gestures.
- the means for capturing the gesture is not particularly limited.
- For example, the user input unit 514 may include a camera that captures the motion of ghost's limbs and an image recognition device that processes the captured image. To facilitate image recognition, markers may be attached to ghost's body.
- The user input unit 514 may transmit the input gesture from the communication unit 511 to the image providing apparatus 101 as, for example, a control signal that intervenes in Body's body.
- Alternatively, the user input unit 514 may convert the input gesture into image information that intervenes in Body's field of view (such as coordinate information, an AR image to be superimposed, or character information) or into an audio signal that intervenes in Body's hearing, and transmit it from the communication unit 511 to the image providing apparatus 101.
- In this case as well, the user input unit 514 specifies, by image analysis or the like, the position information in three-dimensional space corresponding to the pixel position designated by ghost's gesture, and transmits that position information to the image providing apparatus 101. Thus, ghost can perform pointing that is fixed with respect to space rather than to pixel coordinates.
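The spatially fixed pointing described above requires converting the pixel position designated by ghost into a position in three-dimensional space. As a minimal sketch only, assuming a pinhole camera model and a known depth for the designated pixel (the specification does not fix the conversion method, and all function and parameter names here are illustrative, not from the disclosure), the conversion could look like this:

```python
import numpy as np

def unproject_pixel(u, v, depth, fx, fy, cx, cy, cam_pose):
    """Convert a pixel (u, v) with known depth into a world-space point.

    fx, fy, cx, cy: assumed pinhole intrinsics of the Body-side camera.
    cam_pose: 4x4 camera-to-world transform at the moment of pointing.
    The returned world point stays valid even after Body's view moves,
    which is what lets ghost's pointing be "fixed with respect to space"
    rather than tied to pixel coordinates.
    """
    # Back-project the pixel into the camera coordinate frame.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    p_cam = np.array([x, y, depth, 1.0])
    # Transform into the world frame using the camera pose.
    return (cam_pose @ p_cam)[:3]
```

The resulting three-dimensional position, not the pixel coordinate, would then be what the user input unit 514 transmits to the image providing apparatus 101.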
- JackIn, developed in the view information sharing system 100, resembles general AR technology in that AR images are displayed in a superimposed manner. However, JackIn differs from ordinary AR technology provided by a computer in that a human (Ghost) augments another human (Body).
- JackIn also resembles telepresence (described above). However, ordinary telepresence is an interface for viewing the world from the viewpoint of a machine such as a robot, whereas JackIn differs in that a human (Ghost) views from the viewpoint of another human (Body). Telepresence is premised on the human being the master and the machine the slave, with the slave machine faithfully reproducing the human's movements. In contrast, when a human (Ghost) does JackIn to another human (Body), Body does not necessarily move according to Ghost; JackIn is an interface that allows independence.
- The video provided from the image providing apparatus 101 to the image display apparatus 102 is not necessarily the real-time video observed by Body on site (that is, live video captured by the imaging unit 501).
- the image providing apparatus 101 may include a large-capacity storage device (not shown) that records past videos, and the past videos may be distributed from the image providing apparatus 101.
- Past video recorded by the image providing apparatus 101 may be accumulated on a JackIn server (tentative name) that controls JackIn between Body and ghost, or on another recording server, and the past video may be streamed from these servers to ghost (the image display apparatus 102).
- When viewing past video, ghost is not allowed any intervention in Body, including visual-field and auditory intervention. This is because the video ghost is watching is not of the site where Body is currently working, and intervention based on past video would hinder Body's current work.
- “permission” and “mission” are defined in order to realize appropriate matching between Body and ghost.
- the range in which Body allows the intervention from ghost is defined as “permission”, and the intervention from ghost is limited to the range specified by permission.
- the range of operations in which ghost intervenes in Body is defined as “mission”, and the range in which ghost intends to intervene in Body is limited to the range specified by mission.
- (Level 1) Only view exchange (transmission of first-person video) is allowed. In this case, the image providing apparatus 101 only transmits the captured image of the imaging unit 501 and does not operate any output unit.
- (Level 2) View exchange and visual-field intervention are allowed. In this case, the image providing apparatus 101 transmits the captured image of the imaging unit 501 and performs only the display output of the display unit 503.
- (Level 3) Auditory intervention is further allowed. In this case, the image providing apparatus 101 transmits the captured image of the imaging unit 501 and performs the display output of the display unit 503 and the audio output from the first audio output unit 504.
- (Level 4) All interventions, including physical intervention and alternative conversation, are allowed. In this case, the image providing apparatus 101 can further drive the drive unit 505 and output audio externally from the second audio output unit 506.
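The four permission levels above form a strictly ordered set, where each level adds an output of the image providing apparatus 101 to those of the level below. Purely as an illustrative sketch (the enum, function, and string names are not from the specification), this could be modeled as:

```python
from enum import IntEnum

class Permission(IntEnum):
    """Intervention levels a Body may allow, as in levels 1 to 4 above."""
    VIEW_EXCHANGE = 1   # first-person video transmission only
    VISUAL = 2          # + visual-field intervention (display unit 503)
    AUDITORY = 3        # + auditory intervention (first audio output unit 504)
    FULL = 4            # + physical intervention / alternative conversation
                        #   (drive unit 505, second audio output unit 506)

def allowed_outputs(permission: Permission) -> set:
    """Outputs the image providing apparatus 101 may operate at each level."""
    outputs = {"imaging_501"}
    if permission >= Permission.VISUAL:
        outputs.add("display_503")
    if permission >= Permission.AUDITORY:
        outputs.add("audio_504")
    if permission >= Permission.FULL:
        outputs.update({"drive_505", "audio_out_506"})
    return outputs
```

Using an ordered integer level reflects the cumulative nature of the permissions: a level 4 permission implies everything a level 2 permission allows.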
- Each Body may give an individual permission to each ghost instead of giving a uniform permission to all ghosts.
- Body may set permission according to the user attribute of ghost.
- The user attributes mentioned here include not only personal information such as age, gender, relationship to Body (family, friend, boss or subordinate, etc.), birthplace, occupation, and qualifications, but also rating information on the skill of the work to be supported, the track record as a past ghost (assistant, instructor, etc.) (for example, hours of experience so far), evaluations (reviews), and reputation among other Bodies (postings, voting results, etc.).
- Rather than setting permissions according to attributes, Body may set permissions on an individual basis (for example, a permission for Mr. A and a permission for Mr. B). In other words, a permission may be set for each combination of Body and ghost.
- Body may set a permission based on its human relationship with the user, or based on Body's own personal assessment of ghost's ability.
- There may also be a method of granting a temporary permission to ghost through one-to-one negotiation or arbitration between Body and ghost (a certain ghost is given a high-level permission only for a predetermined period, and when the period elapses, the permission returns to its original level).
- Body may be able to set a user who prohibits JackIn to himself.
- (Example 1) Others are allowed only view sharing (level 1 permission). (Example 2) Friends are allowed up to visual-field intervention and auditory intervention (level 2 or 3 permission). (Example 3) Close friends, or those who are authenticated or qualified, are specially allowed physical intervention (level 4 permission), or are temporarily allowed alternative conversation.
- (Example 4) A ghost paying 5 dollars is allowed only view sharing (level 1 permission). (Example 5) A ghost paying 10 dollars is allowed up to visual-field intervention and auditory intervention (level 2 or 3 permission). (Example 6) A ghost paying 100 dollars is allowed physical intervention (level 4 permission), or is temporarily allowed alternative conversation.
- the range of operations in which ghost intervenes in Body is defined as “mission”, and the range in which ghost can intervene in Body is limited to the range specified in mission.
- A ghost's mission is set, for example, within the range of the duties and abilities that the ghost itself bears. A mission is preferably permitted or certified by, for example, an authoritative institution, rather than decided arbitrarily by each individual ghost.
- Missions of different levels, as exemplified below, can be defined according to the mission, duties, occupation, or qualifications imposed on ghost, the rating of intervention skills, the track record as a past ghost (assistant, instructor, etc.) (for example, hours of experience as ghost), evaluations (reviews), reputation by Bodies (postings, voting results, etc.), and the like.
- (Level 1) Only view exchange (transmission of first-person video) is performed. In this case, the image display apparatus 102 only displays the image received from the image providing apparatus 101.
- (Level 2) View exchange and visual-field intervention are performed. In this case, the image display apparatus 102 displays the image received from the image providing apparatus 101 and transmits, to the image providing apparatus 101 side, information on the image to be displayed there (the image to be superimposed in the field of view).
- (Level 3) Auditory intervention is additionally performed. In this case, the image display apparatus 102 further transmits information on the sound to be output by the image providing apparatus 101 (the sound to be heard by Body).
- (Level 4) All interventions, including physical intervention and alternative conversation, are performed. In this case, the image display apparatus 102 further transmits information for operating the drive unit 505 and information on the sound to be output externally from the second audio output unit 506.
- When Body starts JackIn with ghost, Body may filter based on ghost's personal information and attribute information, and may further determine whether to accept JackIn, and the range in which ghost can intervene, by matching the permission specified by Body against the mission held by ghost. For example, the filtering process is effective when Body takes the initiative in starting JackIn toward a large number of unspecified ghosts (Body initiative start).
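The matching of Body's permission against ghost's mission could be sketched as follows. This is an assumption-laden illustration: the specification does not define the exact rule, and the "cap at the lower of the two levels" reading, the function name, and the blocked-user set are all choices made here for the sketch:

```python
def accept_jackin(body_permission: int, ghost_mission: int,
                  blocked_users: set, ghost_id: str):
    """Decide whether to accept a JackIn request and at what level.

    A request is refused outright for users Body has blocked; otherwise
    the intervention range is capped at the lower of Body's permission
    and ghost's certified mission (levels 1 to 4 above).
    """
    if ghost_id in blocked_users:
        return None                      # JackIn prohibited for this user
    return min(body_permission, ghost_mission)
```

Under this rule, a ghost whose mission exceeds Body's permission is limited to the permission, and a ghost whose mission is lower than the permission cannot intervene beyond its own certified level.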
- Such filtering processing may be performed on the Body side (that is, the image providing apparatus 101), or by a JackIn server (tentative name) that controls JackIn between many Bodies and many ghosts.
- JackIn start flow: In the view information sharing system 100, JackIn is a situation in which ghost is immersed in Body's first-person experience and interacts with Body.
- JackIn is roughly divided into a case initiated by Body (Body initiative start) and a case initiated by ghost (Ghost initiative start).
- JackIn is basically started by the action of Ghost entering Body (jack in). Therefore, when Body wants to start JackIn on its own initiative, Body requests that desired (or a predetermined number of) ghosts enter, and then starts work in a waiting state.
- FIG. 6 schematically shows the start flow of a Body initiative start. In the figure, only one ghost is drawn for simplicity, but a plurality of ghosts are assumed to exist.
- Body starts “acceptance” for accepting ghost, and starts work.
- The form in which Body recruits ghosts to JackIn is arbitrary.
- For example, Body may recruit ghosts by posting comments such as "Need help!", "Someone teach me how to drive a car", or "Tell me the way to XX" on an SNS (Social Networking Service).
- Ghost may also charge a fee for the service of doing JackIn and providing support, instructions, guidance, or navigation for Body's work.
- Body may present the amount of money that can be paid when recruiting ghost through SNS or the like.
- ghost applying for the recruitment sends a JackIn request.
- An external device (such as a wearable terminal worn by the user of the image providing apparatus 101) receives the JackIn request from ghost (the image display apparatus 102) on behalf of Body (the image providing apparatus 101) and notifies Body.
- Upon connecting with ghost, Body either mechanically determines whether connection is possible based on selection criteria such as ghost's past track record and evaluation, or the user decides directly.
- When a plurality of ghosts have done JackIn, it is also assumed that the permission and mission set differ for each ghost.
- JackIn is basically started according to the same sequence as in FIG.
- A situation is assumed in which unspecified persons are asked to provide light work support such as advice or assistance.
- In this case, Body recruits ghosts who will do JackIn via SNS or the like and starts work in a waiting state. Each time the wearable terminal receives a JackIn request from a ghost, it notifies Body. Upon connecting with a ghost, Body either mechanically determines whether connection is possible based on selection criteria such as the ghost's past track record and evaluation, or the user decides directly. In addition, when a plurality of ghosts have done JackIn, it is also assumed that the permission and mission set differ for each ghost.
- The procedure in which a single (or a specific small number of) ghost takes the initiative in JackIn is basically realized by the action of Ghost entering Body (jack in), and resembles the operation of placing a call from ghost to Body.
- FIG. 7 schematically shows the start flow of a Ghost initiative start.
- A JackIn request is transmitted from ghost to Body, the JackIn state is entered, first-person video is transmitted from Body to ghost, and intervention in Body by ghost is performed.
- Body either mechanically determines whether connection is possible based on selection criteria such as ghost's past track record and evaluation, or the user decides directly. At that time, Body may set a permission for the ghost that has done JackIn, or ghost may set its own mission.
- The image providing apparatus 101 and the image display apparatus 102 may each present the user with a UI (User Interface) for setting a permission or a mission.
- Body can set the start condition of JackIn in advance.
- For example, the wearable terminal may be set not to notify Body every time a JackIn request is received from a ghost, but to notify Body only when the start condition is satisfied.
- the number of ghosts who have applied can be set as the start condition.
- In this case, the wearable terminal notifies Body when the number of ghosts that have sent JackIn requests reaches a predetermined number or more. For example, only when the number of ghosts reaches 100 or more is the first-person video distributed from Body at the site.
- As a usage example, a Body participating in a festival posts "I'm at the festival now", and video distribution starts when 100 or more ghosts who want to watch have gathered.
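The start condition described above (hold incoming JackIn requests and begin distribution only once a predetermined number of ghosts have applied) could be sketched as follows; the class and method names are illustrative, not from the specification:

```python
class JackInAcceptance:
    """Hold incoming JackIn requests until a start condition is met.

    Models the example above: Body is notified (and first-person video
    distribution starts) only when the number of applying ghosts
    reaches `threshold`, e.g. 100 in the festival example.
    """
    def __init__(self, threshold: int = 100):
        self.threshold = threshold
        self.applicants = set()
        self.started = False

    def request(self, ghost_id: str) -> bool:
        """Register a JackIn request; return True once distribution has started."""
        self.applicants.add(ghost_id)
        if not self.started and len(self.applicants) >= self.threshold:
            self.started = True          # notify Body, begin video distribution
        return self.started
```

This logic could live on the wearable terminal or on the JackIn server, matching the two placements of filtering described earlier.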
- Ghost can select, or filter and then select, the Body to do JackIn to.
- The Body selection process may be performed by each ghost, or a JackIn server (tentative name) that controls JackIn between Bodies and ghosts may be interposed in the selection process.
- As a premise of selection and filtering, a mechanism is required for notifying each ghost, or the JackIn server, of the position and action of each Body.
- On the Body side, that is, in the image providing apparatus 101, the current position is measured by GPS or the like, or the action the user is currently performing is recognized by activity recognition, and the result is reported to ghost or the JackIn server.
- However, Body's action recognition need not be automated; it may rely on Body's own character input (posting) or voice input. The following description does not limit the mechanism for specifying each Body's position and action.
- FIG. 8 shows an example of a UI by which ghost selects a Body based on Body position information.
- In the figure, icons (or characters) indicating the current position of each Body are displayed on a map of the currently designated range.
- Such a UI is displayed, for example, on the display unit 513 of the image display apparatus 102, and the user, that is, ghost, can select the Body to do JackIn to by designating the icon at the desired position with a UI operation such as touch or click.
- the map display area can be changed by an operation such as dragging or moving the cursor.
- A UI screen like that shown in FIG. 8 may be displayed on the screen of another terminal possessed by ghost instead of on the display unit 513 of the image display apparatus 102 itself.
- FIG. 9 shows an example of a UI by which ghost selects a Body based on Body's action in addition to position information.
- This figure is a display example in which "person watching fireworks" has been entered in the search field, limiting the JackIn target to "persons watching fireworks".
- For example, a JackIn server (tentative name) that controls JackIn between Body and ghost searches, from among the Body group displayed on the map, for Bodies matching the keyword entered in the search field (in this case, Body's action).
- On the UI screen shown in FIG. 9, only the Bodies extracted by the action "watching fireworks" are displayed.
- Input to the search field can be performed by character input or voice input.
- Compared with FIG. 8, among the displayed Body icons, those of persons who are not watching fireworks disappear, so ghost can narrow down the Bodies to select from.
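The narrowing-down just described (filtering the mapped Bodies by an action keyword, and implicitly by the displayed map range) might be sketched as follows; the data layout and field names are assumptions made for illustration, not part of the disclosure:

```python
def filter_bodies(bodies, keyword=None, region=None):
    """Filter the Body list shown on the map UI.

    bodies: list of dicts like {"id": ..., "position": (lat, lon),
            "action": "watching fireworks"} (field names are assumptions).
    keyword: search-field text matched against the recognized action.
    region: ((lat_min, lat_max), (lon_min, lon_max)) map viewport, or None.
    """
    result = []
    for b in bodies:
        if keyword and keyword not in b["action"]:
            continue                     # e.g. keep only "watching fireworks"
        if region:
            (lat_min, lat_max), (lon_min, lon_max) = region
            lat, lon = b["position"]
            if not (lat_min <= lat <= lat_max and lon_min <= lon <= lon_max):
                continue
        result.append(b)
    return result
```

Such filtering could run on the ghost side or on the JackIn server, consistent with the two placements of the selection process mentioned above.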
- FIG. 10 shows another example of UI that ghost selects based on the position information of Body.
- This figure is a modification of the UI shown in FIG. 8; a tag indicating Body's action or the like is attached to each Body icon. Even without a keyword search as in the case shown in FIG. 9, ghost can identify each Body's current action from the displayed tag contents and select the Body to do JackIn to.
- However, if tags were always displayed on all icons, the display would become cluttered and the map hard to read. Therefore, the number of tags displayed at the same time may be limited; for example, a tag may be displayed only for an icon tentatively selected by touch, click, hovering, or the like.
- The tag may include whether acceptance is currently open (described above), information on the permission (the range in which intervention is allowed), and fee information (whether view sharing is free or paid, and the fee if paid).
- FIG. 13 shows a display example of tags attached to the Body icon.
- In this example, the tag indicates whether Body permits each intervention operation, such as visual-field intervention, auditory intervention, physical intervention, and alternative conversation. By referring to the tag, ghost can easily determine what it will be able to do at that place by doing JackIn to the Body.
- FIG. 11 shows still another example of a UI for selecting a Body by ghost.
- In this UI, thumbnails of each Body's first-person video are displayed in a detailed catalog format.
- the thumbnail of each first person video may be a real-time video or a representative image (still image).
- tag information such as the action of the body, the current position of the body, the acceptance status, permission setting, and fee information may be displayed together with the thumbnail of each first person video.
- FIG. 12 shows still another example of a UI for selecting a Body by ghost.
- This figure is a modification of the UI shown in FIG. 11, in which the thumbnails of each Body's first-person video are displayed in a list format instead of a catalog format.
- tag information such as Body action, Body current position, acceptance status, permission setting, fee information, and the like may be displayed together with the thumbnail of each first person video.
- FIG. 12 is a display example in which the Bodies to do JackIn to are limited to "persons watching fireworks".
- For example, a JackIn server (tentative name) that controls JackIn between Body and ghost searches for Bodies matching the keyword entered in the search field (in this case, Body's action).
- In this selection UI, Bodies are searched without being tied to a place, so Bodies "watching fireworks" in remote places such as Hokkaido and Okinawa may be displayed simultaneously as search results.
- the video provided from the Body side is not limited to the real-time video that the Body is observing in the field, but may be a recorded past video.
- When viewing past video, ghost is not allowed any intervention in Body, including visual-field and auditory intervention. Therefore, in the UI examples shown in FIGS. 11 and 12, it is preferable to indicate in the thumbnail of each first-person video whether it is real-time video or recorded past video, in order to prevent intervention caused by ghost's misunderstanding.
- Ghost can do JackIn while visually checking the action each Body is performing, based on the displayed thumbnails of first-person videos. Furthermore, with the UI shown in FIG. 12, ghost can smoothly do JackIn to a Body performing the designated action.
- With the map-based Body selection UIs shown in FIGS. 8 to 10, ghost can efficiently select a Body tied to a place, while with the Body selection UIs shown in FIGS. 11 and 12, which display thumbnails of first-person videos in catalog or list format, a Body can be efficiently selected while its activity is visually checked.
- For example, these two types of Body selection UI may be overlaid, with the UIs switched by tabs.
- As shown in FIG. 14A, when the "MAP" tab is selected, the map-based Body selection UI is displayed in front, and ghost can select a Body to do JackIn to, tied to a place.
- As shown in FIG. 14B, when the "Activity" tab is selected, the Body selection UI displaying a catalog of thumbnails of each Body's first-person video is displayed in front, and ghost can select a Body while visually checking each Body's action.
- E. Ghost selection by Body: Body wants ghosts who will do JackIn and assist when Body wants to receive (or must receive) support, instructions, guidance, or navigation from others. For example, Body may recruit ghosts by posting comments such as "Need help!", "Someone teach me how to drive a car", or "Tell me the way to XX" on an SNS (Social Networking Service).
- Ghost may monetize a service of doing JackIn and providing support, instructions, guidance, or navigation for Body's work.
- In that case, the amount Body can pay may be presented together with the recruitment.
- A ghost wishing to apply can refer to the recruiting Bodies through, for example, the UI screens shown in FIGS. 8 to 12.
- Here, description of the UI on the ghost side is omitted.
- FIG. 15 shows an example of a UI by which Body selects a ghost.
- The illustrated UI consists of a list of ghosts to be selected, with information on each ghost displayed.
- The listed ghosts are users who have applied to Body's recruitment.
- Alternatively, a JackIn server (tentative name) that controls JackIn between Body and ghost may have selected the listed ghosts in accordance with the contents of Body's recruitment.
- Each ghost listed in the illustrated UI is a user who has applied for JackIn to Body while designating an action such as "person watching fireworks".
- The ghost information displayed on the illustrated ghost selection UI includes personal information such as age, gender, relationship to Body (family, friend, boss or subordinate, etc.), birthplace, occupation, and qualifications, as well as rating information on the skill to be supported, the track record as a past ghost (assistant, instructor, etc.) (for example, hours of experience), evaluations (reviews), and reputation among other Bodies (postings, voting results, etc.). Further, when displaying the ghost list on the ghost selection UI, the display order of ghosts may be sorted based on the correspondence between permission and mission, past track record, evaluation, reputation, and the like. Body can select, through a ghost selection UI like that of FIG. 15, a ghost from which to receive support, instructions (such as coaching in a sports competition), guidance, or navigation.
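The sorting of the ghost list described above (ordering candidates by skill correspondence, past track record, and evaluation) might be sketched as follows; the field names, the dictionary layout, and the tie-break order are assumptions made for illustration:

```python
def sort_ghost_list(ghosts, required_skill=None):
    """Order applying ghosts for display on Body's selection UI.

    ghosts: list of dicts such as {"name": ..., "skills": {...},
            "hours": ..., "review": ...} (field names are assumptions).
    Ghosts whose skills match the work are ranked first, then ordered
    by past track record (hours of experience) and evaluation (review).
    """
    def key(g):
        matches = (required_skill in g.get("skills", set())
                   if required_skill else True)
        return (matches, g.get("hours", 0), g.get("review", 0.0))
    return sorted(ghosts, key=key, reverse=True)
```

A real implementation would presumably also weigh reputation and the permission/mission correspondence mentioned in the text; the sketch keeps only the clearly stated criteria.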
- The technology disclosed in this specification can be used for work support in various industrial fields, for example at medical sites such as surgery, at construction sites such as civil engineering, in piloting airplanes and helicopters, in navigation for car drivers, and in sports instruction.
- In this specification, the description has focused on an embodiment of a system in which a ghost sharing the first-person video of a Body, who works on site with his or her body, intervenes in that Body's vision, hearing, body, and the like.
- the gist of the technology disclosed in the present specification is not limited to this.
- the technology disclosed in the present specification can be similarly applied to various information processing apparatuses that display information on support, instructions, guidance, and guidance from others in the field of view of a person.
- the technology disclosed in the present specification can also be configured as follows.
- (1) An information processing apparatus including a control unit that controls a connection between a first device that transmits an image and a second device that receives the image, depending on which of the first device and the second device takes the initiative.
- (2) When the first device takes the initiative in connecting with the second device, the control unit receives a connection request from the second device, notifies the first device in a waiting state, and starts image transmission from the first device to the second device. The information processing apparatus according to (1) above.
- (3) When the second device takes the initiative in connecting with the first device, the control unit notifies the first device of the connection request from the second device and starts image transmission from the first device to the second device. The information processing apparatus according to (1) above.
- (4) When a plurality of the second devices take the initiative in connecting with the first device, the control unit notifies the first device only when the connection requests from the plurality of second devices satisfy a predetermined start condition, and starts image transmission from the first device to the plurality of second devices. The information processing apparatus according to (1) above.
- (5) The control unit controls intervention from the second device to the first device together with the start of image transmission from the first device to the second device. The information processing apparatus according to any one of (1) to (4) above.
- (6) An information processing method having a control step of controlling a connection between a first device that transmits an image and a second device that receives the image, depending on which of the first device and the second device takes the initiative.
- (7) An information processing apparatus including a selection unit that selects a first device that transmits an image to a second device, based on position information of the first device.
- (8) The selection unit presents a UI indicating the position of the first device on a map. The information processing apparatus according to (7) above.
- (9) The selection unit selects the first device further in consideration of the action of its user. The information processing apparatus according to (7) or (8) above.
- (10) The selection unit presents, on the UI, only the first devices extracted based on the user's action. The information processing apparatus according to (8) above.
- (11) The selection unit presents the action of the user of the first device on the UI.
- (12) The selection unit presents, using the UI, information related to intervention in the first device.
- An information processing method including a selection step of selecting a first device that transmits an image to a second device based on position information of the first device.
- An information processing apparatus including a selection unit that selects a first apparatus that transmits an image to a second apparatus based on a user's action of the first apparatus.
- the selection unit presents a UI indicating information about an image transmitted from the first device.
- the selection unit presents information on the first device or its user on the UI.
- The selection unit presents, on the UI, only images transmitted from the first devices extracted based on the user's action.
- An information processing method including a selection step of selecting a first device that transmits an image to a second device based on an action of a user of the first device.
- An information processing apparatus including a selection unit that selects a second device to which the first device transmits an image, based on information about the second device or its user.
- the selection unit presents a UI indicating information on the second device or its user.
- An information processing method including a selection step of selecting a second device to which the first device transmits an image, based on an action of the user of the first device.
- DESCRIPTION OF SYMBOLS: 100... view information sharing system, 101... image providing apparatus, 102... image display apparatus, 501... imaging unit, 502... image processing unit, 503... display unit, 504... first audio
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- User Interface Of Digital Computer (AREA)
- Closed-Circuit Television Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
FIG. 1 shows an overview of a visibility information sharing system 100 to which the technology disclosed herein is applied. The illustrated visibility information sharing system 100 is composed of a combination of an image providing apparatus 101 that provides images captured at a site and an image display apparatus 102 that displays the images provided by the image providing apparatus 101.
FIG. 5 shows an example functional configuration of the image providing apparatus 101 and the image display apparatus 102.
JackIn offers multiple communication channels, such as "visual intervention", "auditory intervention", "bodily intervention", and "alternative conversation". Accordingly, by starting JackIn with a Ghost, a Body can share its field of view with the Ghost and, through visual intervention and the like, receive assistance, instructions, guidance, and navigation from the Ghost for the task it is currently performing. Likewise, by starting JackIn with a Body, a Ghost can have the Body's first-person experience without going to the site, and can provide assistance, instructions, guidance, and navigation for the Body's work through visual intervention and the like.
First, permission is described. Each Body can set, as appropriate, its own permission with a different level of allowed intervention, as exemplified below.
(Level 2) Only field-of-view exchange and visual intervention are allowed. In this case, the image providing apparatus 101 transmits the image captured by the imaging unit 501 and performs only display output on the display unit 503.
(Level 3) Auditory intervention is additionally allowed. In this case, the image providing apparatus 101 transmits the image captured by the imaging unit 501, and performs display output on the display unit 503 as well as audio output from the first audio output unit 504.
(Level 4) All interventions, including bodily intervention and alternative conversation, are allowed. In this case, the image providing apparatus 101 can additionally drive the drive unit 505 and output audio externally from the second audio output unit 506.
(Example 2) Friends are allowed up to visual and auditory intervention (Level 2 or 3 permission).
(Example 3) Close friends, or persons who have been authenticated or hold qualifications, are specially allowed bodily intervention (Level 4 permission), or are temporarily allowed alternative conversation.
(Example 5) A Ghost who pays 10 dollars is allowed up to visual and auditory intervention (Level 2 or 3 permission).
(Example 6) A Ghost who pays 100 dollars is allowed bodily intervention (Level 4 permission), or is temporarily allowed alternative conversation.
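The graduated permission levels above amount to a cumulative access-control table. The following is an illustrative sketch only, not part of the disclosure: the names (`Permission`, `allowed_channels`, the channel strings) and the assumption of a Level 1 that allows field-of-view exchange alone are this example's own.

```python
from enum import IntEnum

class Permission(IntEnum):
    """Intervention levels a Body may grant, per the levels described above."""
    VIEW_EXCHANGE = 1  # assumed Level 1: field-of-view sharing only
    VISUAL = 2         # Level 2: + visual intervention
    AUDITORY = 3       # Level 3: + auditory intervention
    FULL = 4           # Level 4: + bodily intervention / alternative conversation

# Channels unlocked cumulatively at each level.
_CHANNELS = {
    Permission.VIEW_EXCHANGE: {"view_exchange"},
    Permission.VISUAL: {"visual_intervention"},
    Permission.AUDITORY: {"auditory_intervention"},
    Permission.FULL: {"bodily_intervention", "alternative_conversation"},
}

def allowed_channels(permission: Permission) -> set:
    """Return every communication channel permitted at `permission` and below."""
    channels = set()
    for level in Permission:
        if level <= permission:
            channels |= _CHANNELS[level]
    return channels
```

For instance, `allowed_channels(Permission.AUDITORY)` contains visual and auditory intervention but not bodily intervention, mirroring Level 3 above.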
Next, mission is described. In the present embodiment, the range of operations by which a Ghost intervenes in a Body is defined as a "mission", and the range within which a Ghost can intervene in a Body is limited to the range prescribed by the mission. A Ghost's mission is set, for example, within the range of the duties and abilities the Ghost itself carries. A mission is preferably not something each Ghost decides arbitrarily on its own, but rather something permitted or certified by, for example, an authoritative institution. Missions of different levels, as exemplified below, can be defined according to the duties, functions, or occupation imposed on the Ghost, its qualifications, a rating of its intervention skill, its past track record as a Ghost (such as an assistant or instructor), including hours of experience, its evaluations (reviews), its reputation among Bodies (posts, voting results, and the like), and so on.
(Level 2) Performs up to field-of-view exchange and visual intervention. In this case, the image display apparatus 102 displays the image received from the image providing apparatus 101 and transmits information on the image to be displayed on the image providing apparatus 101 side (the image to be superimposed so as to intervene in the field of view).
(Level 3) Additionally performs auditory intervention. In this case, the image display apparatus 102 further transmits information on the audio to be output by the image providing apparatus 101 (the audio the Body is to hear).
(Level 4) Performs all interventions, including bodily intervention and alternative conversation. In this case, the image display apparatus 102 further transmits information for operating the drive unit 505 and information on the audio to be output externally from the second audio output unit 506.
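Since the mission levels mirror the permission levels, one natural reading is that the intervention actually performed in a session is capped by both the Body's permission and the Ghost's mission. The `min`-based matching rule below is this example's assumption, not language quoted from the disclosure:

```python
def effective_level(body_permission: int, ghost_mission: int) -> int:
    """Intervention level actually usable in a JackIn session.

    Assumption for illustration only: a Ghost may not exceed its
    certified mission level, and a Body is never intervened in beyond
    its permission, so the session runs at the smaller of the two.
    """
    return min(body_permission, ghost_mission)
```

Under this reading, a Level 4 Ghost paired with a Level 2 Body is still limited to visual intervention, and vice versa.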
JackIn is the situation, in the visibility information sharing system 100, in which a Ghost is immersed in a Body's first-person experience and interacts with the Body.
A Ghost can select or filter the Body it wants to JackIn to, based on the Body's current position or the action (work) the Body is currently performing. The Body selection process may be carried out by each individual Ghost, or a JackIn server (tentative name) that governs JackIn between Bodies and Ghosts may take part in the selection process.
When a Body wants to (or must) receive assistance, instructions, guidance, or navigation from others for the work it is currently performing, it recruits Ghosts to JackIn to it and assist it. For example, the Body may recruit Ghosts by posting comments such as "Need Help!", "Could someone teach me how to drive a car?", or "Please tell me the way to ○○" on an SNS (Social Networking Service).
(1) An information processing apparatus comprising a control unit that controls a connection between a first apparatus that transmits an image and a second apparatus that receives the image, depending on which of the first apparatus or the second apparatus takes the initiative.
(2) When the first apparatus connects to the second apparatus on its own initiative, the control unit, upon receiving a connection request from the second apparatus, notifies the first apparatus, which is in a wait state, and causes image transmission from the first apparatus to the second apparatus to start,
The information processing apparatus according to (1) above.
(3) When the second apparatus connects to the first apparatus on its own initiative, the control unit notifies the first apparatus of the connection request from the second apparatus and causes image transmission from the first apparatus to the second apparatus to start,
The information processing apparatus according to (1) above.
(4) When a plurality of the second apparatuses connect to the first apparatus on their own initiative, the control unit notifies the first apparatus only when the connection requests from the plurality of second apparatuses satisfy a predetermined start condition, and causes image transmission from the first apparatus to the plurality of second apparatuses to start,
The information processing apparatus according to (1) above.
(5) The control unit controls intervention from the second apparatus in the first apparatus along with the start of image transmission from the first apparatus to the second apparatus,
The information processing apparatus according to any one of (1) to (4) above.
(6) An information processing method including a control step of controlling a connection between a first apparatus that transmits an image and a second apparatus that receives the image, depending on which of the first apparatus or the second apparatus takes the initiative.
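The control logic of (1) through (4) can be pictured as a small mediation server. Everything in the sketch below (the `JackInServer` class, the queue-based wait state, the numeric `start_condition`) is illustrative scaffolding under the stated assumptions, not the patented implementation:

```python
class JackInServer:
    """Mediates connections between image-providing Bodies and image-viewing Ghosts."""

    def __init__(self, start_condition=1):
        self.waiting_bodies = []    # Bodies that took the initiative and now wait
        self.pending_requests = {}  # body_id -> list of Ghost connection requests
        self.start_condition = start_condition  # e.g. minimum Ghost count, as in (4)

    def body_initiates(self, body_id):
        """Body-led case, as in (2): the Body enters a wait state for Ghost requests."""
        self.waiting_bodies.append(body_id)

    def ghost_requests(self, ghost_id, body_id):
        """Ghost-led cases, as in (3) and (4): notify the Body once conditions are met."""
        self.pending_requests.setdefault(body_id, []).append(ghost_id)
        if body_id in self.waiting_bodies:
            # A waiting Body is notified immediately and transmission starts.
            return f"start transmission {body_id} -> {ghost_id}"
        if len(self.pending_requests[body_id]) >= self.start_condition:
            ghosts = self.pending_requests[body_id]
            return f"start transmission {body_id} -> {len(ghosts)} ghosts"
        return "pending"
```

In the Body-led flow a single request suffices, while in the many-Ghosts flow transmission is held back until the start condition is satisfied.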
(7) An information processing apparatus comprising a selection unit that selects, based on position information of a first apparatus, the first apparatus that transmits an image to a second apparatus.
(8) The selection unit presents a UI indicating the position of the first apparatus on a map,
The information processing apparatus according to (7) above.
(9) The selection unit selects the first apparatus with further consideration of the behavior of its user,
The information processing apparatus according to (7) or (8) above.
(10) The selection unit presents, on the UI, only first apparatuses extracted based on the behavior of their users,
The information processing apparatus according to (8) above.
(11) The selection unit presents, on the UI, the behavior of the user of the first apparatus,
The information processing apparatus according to (8) above.
(12) The selection unit presents, on the UI, information about intervention in the first apparatus,
The information processing apparatus according to (8) above.
(13) An information processing method including a selection step of selecting, based on position information of a first apparatus, the first apparatus that transmits an image to a second apparatus.
(14) An information processing apparatus comprising a selection unit that selects, based on the behavior of the user of a first apparatus, the first apparatus that transmits an image to a second apparatus.
(15) The selection unit presents a UI indicating information about the image to be transmitted from the first apparatus,
The information processing apparatus according to (14) above.
(16) The selection unit presents, on the UI, information about the first apparatus or its user,
The information processing apparatus according to (15) above.
(17) The selection unit presents, on the UI, only images to be transmitted from first apparatuses extracted based on the behavior of their users,
The information processing apparatus according to (15) above.
(18) An information processing method including a selection step of selecting, based on the behavior of the user of a first apparatus, the first apparatus that transmits an image to a second apparatus.
(19) An information processing apparatus comprising a selection unit that selects, based on information about a second apparatus or its user, the second apparatus to which a first apparatus transmits an image.
(20) The selection unit presents a UI indicating information about the second apparatus or its user,
The information processing apparatus according to (19) above.
(21) An information processing method including a selection step of selecting, based on the behavior of the user of a first apparatus, the second apparatus to which an image is transmitted from the first apparatus.
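The selection described in (7) through (18) boils down to filtering candidate first devices by position information and user behavior before presenting them on a UI. A minimal sketch, assuming hypothetical device records and a haversine distance helper (none of these names come from the disclosure):

```python
import math

def select_bodies(devices, center, radius_km, behavior=None):
    """Return ids of devices within `radius_km` of `center`, optionally
    matching a user behavior.

    `devices` is a list of dicts like
    {"id": "b1", "lat": 35.0, "lon": 139.0, "behavior": "cycling"}.
    """
    def distance_km(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in kilometres.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    selected = []
    for d in devices:
        if distance_km(center[0], center[1], d["lat"], d["lon"]) > radius_km:
            continue
        if behavior is not None and d.get("behavior") != behavior:
            continue
        selected.append(d["id"])
    return selected
```

A map UI, as in (8), would then plot only the returned devices; the behavior filter corresponds to the extraction in (9) and (10).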
101…Image providing apparatus, 102…Image display apparatus
501…Imaging unit, 502…Image processing unit, 503…Display unit
504…First audio output unit, 505…Drive unit
506…Second audio output unit, 507…Position detection unit, 508…Communication unit
509…Control unit, 510…Authentication unit
511…Communication unit, 512…Image decoding unit, 513…Display unit
514…User input unit, 515…Position and posture detection unit
Claims (20)
- An information processing apparatus comprising a control unit that controls a connection between a first apparatus that transmits an image and a second apparatus that receives the image, depending on which of the first apparatus or the second apparatus takes the initiative.
- When the first apparatus connects to the second apparatus on its own initiative, the control unit, upon receiving a connection request from the second apparatus, notifies the first apparatus, which is in a wait state, and causes image transmission from the first apparatus to the second apparatus to start,
The information processing apparatus according to claim 1. - When the second apparatus connects to the first apparatus on its own initiative, the control unit notifies the first apparatus of the connection request from the second apparatus and causes image transmission from the first apparatus to the second apparatus to start,
The information processing apparatus according to claim 1. - When a plurality of the second apparatuses connect to the first apparatus on their own initiative, the control unit notifies the first apparatus only when the connection requests from the plurality of second apparatuses satisfy a predetermined start condition, and causes image transmission from the first apparatus to the plurality of second apparatuses to start,
The information processing apparatus according to claim 1. - The control unit controls intervention from the second apparatus in the first apparatus along with the start of image transmission from the first apparatus to the second apparatus,
The information processing apparatus according to claim 1. - An information processing method including a control step of controlling a connection between a first apparatus that transmits an image and a second apparatus that receives the image, depending on which of the first apparatus or the second apparatus takes the initiative.
- An information processing apparatus comprising a selection unit that selects, based on position information of a first apparatus, the first apparatus that transmits an image to a second apparatus.
- The selection unit presents a UI indicating the position of the first apparatus on a map,
The information processing apparatus according to claim 7. - The selection unit selects the first apparatus with further consideration of the behavior of its user,
The information processing apparatus according to claim 7. - The selection unit presents, on the UI, only first apparatuses extracted based on the behavior of their users,
The information processing apparatus according to claim 8. - The selection unit presents, on the UI, the behavior of the user of the first apparatus,
The information processing apparatus according to claim 8. - The selection unit presents, on the UI, information about intervention in the first apparatus,
The information processing apparatus according to claim 8. - An information processing method including a selection step of selecting, based on position information of a first apparatus, the first apparatus that transmits an image to a second apparatus.
- An information processing apparatus comprising a selection unit that selects, based on the behavior of the user of a first apparatus, the first apparatus that transmits an image to a second apparatus.
- The selection unit presents a UI indicating information about the image to be transmitted from the first apparatus,
The information processing apparatus according to claim 14. - The selection unit presents, on the UI, information about the first apparatus or its user,
The information processing apparatus according to claim 15. - The selection unit presents, on the UI, only images to be transmitted from first apparatuses extracted based on the behavior of their users,
The information processing apparatus according to claim 15. - An information processing method including a selection step of selecting, based on the behavior of the user of a first apparatus, the first apparatus that transmits an image to a second apparatus.
- An information processing apparatus comprising a selection unit that selects, based on information about a second apparatus or its user, the second apparatus to which a first apparatus transmits an image.
- The selection unit presents a UI indicating information about the second apparatus or its user,
The information processing apparatus according to claim 19.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187003490A KR102512855B1 (ko) | 2015-09-30 | 2016-07-11 | 정보 처리 장치 및 정보 처리 방법 |
JP2017542951A JPWO2017056632A1 (ja) | 2015-09-30 | 2016-07-11 | 情報処理装置及び情報処理方法 |
CN201680055341.6A CN108141565A (zh) | 2015-09-30 | 2016-07-11 | 信息处理设备及信息处理方法 |
US15/760,060 US20180278888A1 (en) | 2015-09-30 | 2016-07-11 | Information processing device and information processing method |
EP16850809.1A EP3358837A4 (en) | 2015-09-30 | 2016-07-11 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD |
US16/381,593 US10771739B2 (en) | 2015-09-30 | 2019-04-11 | Information processing device and information processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-195193 | 2015-09-30 | ||
JP2015195193 | 2015-09-30 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/760,060 A-371-Of-International US20180278888A1 (en) | 2015-09-30 | 2016-07-11 | Information processing device and information processing method |
US16/381,593 Division US10771739B2 (en) | 2015-09-30 | 2019-04-11 | Information processing device and information processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017056632A1 true WO2017056632A1 (ja) | 2017-04-06 |
Family
ID=58427403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/070483 WO2017056632A1 (ja) | 2015-09-30 | 2016-07-11 | 情報処理装置及び情報処理方法 |
Country Status (6)
Country | Link |
---|---|
US (2) | US20180278888A1 (ja) |
EP (1) | EP3358837A4 (ja) |
JP (1) | JPWO2017056632A1 (ja) |
KR (1) | KR102512855B1 (ja) |
CN (1) | CN108141565A (ja) |
WO (1) | WO2017056632A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023547945A (ja) * | 2020-11-11 | 2023-11-14 | 北京字跳▲網▼絡技▲術▼有限公司 | ホットスポットリストの表示方法、装置、電子機器および記憶媒体 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020214864A1 (en) | 2019-04-17 | 2020-10-22 | Prestacom Services Llc | User interfaces for tracking and finding items |
EP3963433A4 (en) | 2019-04-28 | 2023-01-25 | Apple Inc. | PRODUCTION OF TOUCH OUTPUT SEQUENCES ASSOCIATED WITH AN OBJECT |
CN110455304A (zh) * | 2019-08-05 | 2019-11-15 | 深圳市大拿科技有限公司 | 车辆导航方法、装置及系统 |
WO2022067316A1 (en) | 2020-09-25 | 2022-03-31 | Apple Inc. | User interfaces for tracking and finding items |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998038798A1 (en) * | 1997-02-26 | 1998-09-03 | Mitsubishi Denki Kabushiki Kaisha | Device, system, and method for distributing video data |
JP2003345909A (ja) * | 2002-05-28 | 2003-12-05 | Tokio Deguchi | 学業指導方法および学業指導システム |
JP2012133534A (ja) * | 2010-12-21 | 2012-07-12 | Mitsubishi Electric Corp | 遠隔作業支援システム、遠隔作業支援端末及び遠隔作業支援方法 |
WO2012127799A1 (ja) * | 2011-03-23 | 2012-09-27 | パナソニック株式会社 | 通信サーバ、通信方法、記録媒体、および、集積回路 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004222254A (ja) | 2002-12-27 | 2004-08-05 | Canon Inc | 画像処理システム、方法及びプログラム |
US20040254982A1 (en) * | 2003-06-12 | 2004-12-16 | Hoffman Robert G. | Receiving system for video conferencing system |
JP2005222254A (ja) | 2004-02-04 | 2005-08-18 | Haisaabu Ueno:Kk | キャッシュレジスタ装置 |
JP4926400B2 (ja) | 2004-12-27 | 2012-05-09 | 京セラ株式会社 | 移動カメラシステム |
JP5245257B2 (ja) | 2006-11-22 | 2013-07-24 | ソニー株式会社 | 画像表示システム、表示装置、表示方法 |
WO2008129765A1 (ja) * | 2007-04-17 | 2008-10-30 | Panasonic Corporation | 監視機器制御システム |
CN101163160B (zh) * | 2007-11-05 | 2011-04-06 | 中兴通讯股份有限公司 | 网络电视系统中融合多方网络游戏业务的方法及系统 |
US10875182B2 (en) * | 2008-03-20 | 2020-12-29 | Teladoc Health, Inc. | Remote presence system mounted to operating room hardware |
US10808882B2 (en) * | 2010-05-26 | 2020-10-20 | Intouch Technologies, Inc. | Tele-robotic system with a robot face placed on a chair |
JP5750935B2 (ja) * | 2011-02-24 | 2015-07-22 | 富士ゼロックス株式会社 | 情報処理システム、情報処理装置、サーバ装置およびプログラム |
US20130027561A1 (en) * | 2011-07-29 | 2013-01-31 | Panasonic Corporation | System and method for improving site operations by detecting abnormalities |
US8761933B2 (en) * | 2011-08-02 | 2014-06-24 | Microsoft Corporation | Finding a called party |
US20130249947A1 (en) * | 2011-08-26 | 2013-09-26 | Reincloud Corporation | Communication using augmented reality |
JP5741358B2 (ja) | 2011-10-04 | 2015-07-01 | トヨタ自動車株式会社 | 樹脂成形部品及び製造方法 |
JP5114807B1 (ja) | 2011-10-04 | 2013-01-09 | 株式会社新盛インダストリーズ | プリンター |
JP2013078893A (ja) | 2011-10-04 | 2013-05-02 | Canon Inc | 記録装置および記録方法 |
JP2013191464A (ja) | 2012-03-14 | 2013-09-26 | Sharp Corp | 有機エレクトロルミネッセンス素子及びその製造方法、液晶表示装置。 |
JP5334145B1 (ja) * | 2012-06-29 | 2013-11-06 | トーヨーカネツソリューションズ株式会社 | 物品のピッキング作業の支援システム |
JP2014104185A (ja) | 2012-11-28 | 2014-06-09 | Sony Corp | 運動補助装置及び運動補助方法 |
US20160132046A1 (en) * | 2013-03-15 | 2016-05-12 | Fisher-Rosemount Systems, Inc. | Method and apparatus for controlling a process plant with wearable mobile control devices |
US9699500B2 (en) * | 2013-12-13 | 2017-07-04 | Qualcomm Incorporated | Session management and control procedures for supporting multiple groups of sink devices in a peer-to-peer wireless display system |
KR102159353B1 (ko) * | 2014-04-24 | 2020-09-23 | 현대모비스 주식회사 | 어라운드 뷰 시스템의 동작방법 |
US9818225B2 (en) * | 2014-09-30 | 2017-11-14 | Sony Interactive Entertainment Inc. | Synchronizing multiple head-mounted displays to a unified space and correlating movement of objects in the unified space |
US10187692B2 (en) * | 2014-12-15 | 2019-01-22 | Rovi Guides, Inc. | Methods and systems for distributing media guidance among multiple devices |
CN104657099B (zh) | 2015-01-15 | 2019-04-12 | 小米科技有限责任公司 | 屏幕投射方法、装置及系统 |
US9690103B2 (en) * | 2015-02-16 | 2017-06-27 | Philip Lyren | Display an image during a communication |
US9298283B1 (en) * | 2015-09-10 | 2016-03-29 | Connectivity Labs Inc. | Sedentary virtual reality method and systems |
KR101844885B1 (ko) * | 2016-07-11 | 2018-05-18 | 엘지전자 주식회사 | 차량 운전 보조장치 및 이를 포함하는 차량 |
US11062243B2 (en) * | 2017-07-25 | 2021-07-13 | Bank Of America Corporation | Activity integration associated with resource sharing management application |
KR102188721B1 (ko) * | 2020-04-27 | 2020-12-08 | 현대모비스 주식회사 | 탑-뷰 영상 생성 장치 및 그 방법 |
-
2016
- 2016-07-11 JP JP2017542951A patent/JPWO2017056632A1/ja not_active Abandoned
- 2016-07-11 CN CN201680055341.6A patent/CN108141565A/zh active Pending
- 2016-07-11 EP EP16850809.1A patent/EP3358837A4/en not_active Withdrawn
- 2016-07-11 WO PCT/JP2016/070483 patent/WO2017056632A1/ja active Application Filing
- 2016-07-11 US US15/760,060 patent/US20180278888A1/en not_active Abandoned
- 2016-07-11 KR KR1020187003490A patent/KR102512855B1/ko active IP Right Grant
-
2019
- 2019-04-11 US US16/381,593 patent/US10771739B2/en active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP3358837A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023547945A (ja) * | 2020-11-11 | 2023-11-14 | 北京字跳▲網▼絡技▲術▼有限公司 | ホットスポットリストの表示方法、装置、電子機器および記憶媒体 |
JP7407340B2 (ja) | 2020-11-11 | 2023-12-28 | 北京字跳▲網▼絡技▲術▼有限公司 | ホットスポットリストの表示方法、装置、電子機器および記憶媒体 |
Also Published As
Publication number | Publication date |
---|---|
US10771739B2 (en) | 2020-09-08 |
JPWO2017056632A1 (ja) | 2018-07-19 |
EP3358837A1 (en) | 2018-08-08 |
EP3358837A4 (en) | 2019-07-31 |
CN108141565A (zh) | 2018-06-08 |
US20180278888A1 (en) | 2018-09-27 |
KR20180063040A (ko) | 2018-06-11 |
KR102512855B1 (ko) | 2023-03-23 |
US20190238793A1 (en) | 2019-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10009542B2 (en) | Systems and methods for environment content sharing | |
US10771739B2 (en) | Information processing device and information processing method | |
JP6822413B2 (ja) | Server apparatus, information processing method, and computer program | |
Kurata et al. | Remote collaboration using a shoulder-worn active camera/laser | |
TWI610097B (zh) | Electronic system, portable display device and guiding device | |
JP6822410B2 (ja) | Information processing system and information processing method | |
US20160188585A1 (en) | Technologies for shared augmented reality presentations | |
JP2019020908A (ja) | Information processing method, information processing program, information processing system, and information processing apparatus | |
CN113678206B (zh) | Rehabilitation training system for higher brain dysfunction and image processing device | |
WO2017064926A1 (ja) | Information processing apparatus and information processing method | |
US20230351644A1 (en) | Method and device for presenting synthesized reality companion content | |
JP6919568B2 (ja) | Information terminal device and control method therefor, information processing device and control method therefor, and computer program | |
WO2017068928A1 (ja) | Information processing apparatus, control method therefor, and computer program | |
CN118235104A (zh) | Intent-based user interface control for electronic devices | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16850809 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017542951 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187003490 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15760060 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016850809 Country of ref document: EP |