WO2020032239A1 - Information output device, design support system, information output method, and information output program - Google Patents
- Publication number
- WO2020032239A1 (application PCT/JP2019/031576)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- detection
- unit
- space
- user
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/18—Details relating to CAD techniques using virtual or augmented reality
Definitions
- the present invention relates to an information output device, a design support system, an information output method, and an information output program for outputting information about a user who visually recognizes a virtual reality space.
- Patent Literature 1 discloses a layout design support device that allows a user to view a virtual reality space corresponding to a layout related to a floor plan of a building, and that receives, from the user, operations for changing the layout in the virtual reality space.
- the layout design support apparatus analyzes the tendency of the layout change based on the layout change operation, and specifies a recommended layout mode.
- Patent Literature 1 merely specifies a recommended layout mode based on a user's layout change operation, and cannot specify a user's perception of a space.
- the object of the present invention is to provide an information output device, a design support system, an information output method, and an information output program that make it possible to identify a user's perception of a space.
- the inventor of the present invention found that a user's perception of a space can be identified based on the detection status of a predetermined reaction of the user in a virtual reality (VR: Virtual Reality) space or an augmented reality (AR: Augmented Reality) space.
- An information output device includes an acquisition unit that acquires detection information indicating a detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space, and an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit.
- the acquisition unit may acquire detection information that associates the detection status of the predetermined reaction while the user views a moving image showing the predetermined space with information indicating the playback position of the moving image, and the output unit may output detection information indicating the detection status of the predetermined reaction for each playback position of the moving image.
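As an illustrative sketch of this association (not part of the disclosed embodiments; the function name, field shapes, and one-second granularity are assumptions), detection status per playback position might be represented as a simple mapping:

```python
# Sketch: associate each playback position (in seconds) of a VR moving
# image with whether the predetermined reaction was detected there.
# The one-second bucketing is an illustrative assumption.

def detection_by_position(events, duration_s):
    """events: playback positions (seconds) where a reaction was detected."""
    detected = set(int(t) for t in events)
    return {pos: (pos in detected) for pos in range(duration_s)}

status = detection_by_position([2.4, 7.1], duration_s=10)
print([pos for pos, hit in status.items() if hit])
```

The output unit could then render each playback position in a different display mode depending on its flag.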
- the output unit may play back the moving image on a display unit and display, in association with each other, information indicating the detection status of the predetermined reaction at each playback position of the moving image and information indicating the current playback position of the moving image.
- the output unit may display information indicating playback positions where the predetermined reaction was detected in a display mode different from that of information indicating playback positions where the predetermined reaction was not detected.
- the output unit may cause the display unit to display, from among a plurality of images corresponding to the moving image, one or more images corresponding to playback positions where the predetermined reaction was detected.
- the output unit may display, from among a plurality of images different from the images included in the moving image, one or more images corresponding to the images at playback positions where the predetermined reaction in the moving image was detected.
- the output unit may receive a selection of an image from among one or more images corresponding to the one or more playback positions where the predetermined reaction was detected, and display an image similar to the selected image.
- the output unit may specify a position in the predetermined space when the predetermined reaction is detected, and display information indicating the specified position on a map indicating the predetermined space.
- the acquisition unit may acquire the detection information corresponding to each of a plurality of users, and the output unit may output information indicating the predetermined space in association with the detection information corresponding to each of the plurality of users acquired by the acquisition unit.
- the output unit may receive a selection of at least one of the plurality of users, and output information indicating the predetermined space and detection information corresponding to the selected user in association with each other.
- the acquisition unit may acquire prediction information indicating playback positions of the moving image at which the predetermined reaction is predicted to be detected when the user views the moving image showing the predetermined space, and the output unit may cause a display unit to display the detection information acquired by the acquisition unit together with the prediction information.
- the acquisition unit may acquire first detection information, which is the detection information when the user views a first predetermined space, and second detection information, which is the detection information when the user views a second predetermined space, and the output unit may cause the display unit to display the first detection information and the second detection information acquired by the acquisition unit.
- a storage unit may store information indicating an emotion of the user corresponding to each of a plurality of detection patterns of the predetermined reaction, and the output unit may output information indicating the emotion of the user corresponding to the detection pattern included in the detection information.
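A minimal sketch of such a lookup follows (the pattern-to-emotion pairs and names below are invented placeholders for illustration, not taken from this disclosure):

```python
# Hypothetical mapping from detection patterns of the predetermined
# reaction to an emotion label held in the storage unit.
EMOTION_BY_PATTERN = {
    ("look_around", "look_around"): "uncertain",  # repeated searching
    ("gaze",): "focused",
    ("look_around", "gaze"): "decided",           # searched, then committed
}

def emotion_for(detection_pattern):
    # Return None when the pattern is not registered in storage.
    return EMOTION_BY_PATTERN.get(tuple(detection_pattern))

print(emotion_for(["look_around", "gaze"]))
```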
- the storage unit may store, for each of a plurality of predetermined spaces, result information that associates information indicating the predetermined space with the acquired detection information, together with information indicating whether the result information is to be published.
- when the output unit receives, from a terminal, a request to acquire result information stored in the storage unit and marked for publication, the output unit may output the result information to the terminal.
- the information output device may further include a charging unit that charges a user of the terminal for output of the result information.
- the acquisition unit may acquire detection information indicating a detection state of a look-around operation, which is an operation of the user looking around the predetermined space, as detection information indicating a detection state of the predetermined reaction.
- the acquisition unit may treat, as the look-around operation, the user swinging his or her head in a predetermined direction while viewing the predetermined space, and acquire detection information indicating the detection status of the look-around operation.
- the acquisition unit may treat, as the look-around operation, a movement of the user's line of sight in a predetermined pattern while the user views the predetermined space, and acquire detection information indicating the detection status of the look-around operation.
- the acquisition unit may acquire detection information indicating a detection state of a gaze operation, which is an operation in which the user gazes at the predetermined space for a predetermined time or more, as detection information indicating a detection state of the predetermined reaction.
- the acquisition unit may acquire detection information indicating a detection state of the brain wave of the user when the user visually recognizes the predetermined space as detection information indicating a detection state of the predetermined reaction.
- a design support system includes a display device worn by a user and an information output device. The display device includes a display unit, a display control unit that causes the display unit to display a predetermined space that is a virtual reality space or an augmented reality space, and a detection information generation unit that generates detection information indicating a detection status of a predetermined reaction of the user when the user views the predetermined space.
- the information output device includes an acquisition unit that acquires the detection information generated by the detection information generation unit, and an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit.
- the display device may further include a space information acquisition unit that acquires information indicating the predetermined space from an information publishing device that publishes information, and the display control unit may cause the display unit to display the space indicated by the information acquired by the space information acquisition unit.
- An information output method is a computer-implemented method that includes acquiring detection information indicating a detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space, and outputting information indicating the predetermined space in association with the acquired detection information.
- An information output program causes a computer to function as an acquisition unit that acquires detection information indicating a detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space, and an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit.
- FIG. 1 is a diagram illustrating an outline of a design support system according to a first embodiment.
- FIG. 2 is a diagram illustrating the configuration of the display device according to the first embodiment. FIG. 3 is a diagram illustrating the configuration of the information output device according to the first embodiment.
- FIG. 4 is a sequence diagram illustrating the flow of processing until the information output device outputs recognition result information in the first embodiment. FIG. 5 is a diagram illustrating a display example of recognition result information of one user. FIG. 6 is a diagram illustrating a display example of recognition result information of one or more users. FIG. 7 is a diagram illustrating a display example of the map corresponding to the VR space.
- FIG. 1 is a diagram illustrating an outline of a design support system S according to the first embodiment.
- the design support system S is a system that includes a generation device 1, a display device 2, and an information output device 3, and supports design of a space related to a building.
- the generation device 1 is, for example, the computer 1A or the imaging device 1B.
- the designer D who designs the space operates the generating device 1 to generate a model of the virtual reality space as information indicating the virtual reality space or a moving image indicating the virtual reality space ((1) in FIG. 1).
- Each of the model of the virtual reality space and the moving image showing the virtual reality space corresponds to an actual space or a space to be designed.
- the moving image showing the virtual reality space is, for example, VR content that allows the user to freely look around while standing still or moving within the virtual reality space, or a moving image related to such VR content, such as a 360° look-around moving image that provides an equivalent experience, or a 180° look-around moving image.
- a virtual reality space is referred to as a VR space
- a model of the virtual reality space is referred to as a VR model
- a moving image indicating the virtual reality space is referred to as a VR moving image.
- the VR model and the VR moving image are collectively referred to as VR information.
- the generating device 1 uploads, for example, the VR information to the publishing device 4 that publishes the information ((2) in FIG. 1).
- the display device 2 is, for example, a wearable device that allows a user to view a VR space, such as VR goggles or a VR headset.
- the VR goggles or VR headset may be of an assembled type that makes a smartphone or the like function as VR goggles or a VR headset.
- the user U, who is a user of the space or a test subject, operates the display device 2 to display, on the display unit of the display device 2, the VR space indicated by the VR information uploaded to the publishing device 4 ((3) in FIG. 1). Then, the user U wears the display device 2 and views the VR space displayed on the display unit of the display device 2.
- the display device 2 detects a predetermined reaction of the user U while the user U is viewing the VR space ((4) in FIG. 1).
- the predetermined reaction is a look-around operation or a gaze operation by the user U, or the generation of a θ wave from the user U.
- the display device 2 may be a personal computer. In this case, it is assumed that a device capable of detecting the θ wave among the brain waves of the user U is connected to the display device 2.
- the display device 2 may detect the predetermined reaction based on the device detecting the generation of a θ wave.
- the display device 2 may also detect the θ wave in a state where no device capable of detecting the θ wave among the brain waves of the user U is connected.
- for example, the display device 2 may store a database of the personal computer operations performed by the user U while viewing the VR space at times when a θ wave was detected, and determine that a θ wave has been generated when the user U performs an operation that strongly tends to accompany θ-wave generation.
- the information output device 3 is, for example, a mobile terminal such as a smartphone or a personal computer.
- the information output device 3 acquires, from the display device 2, detection information indicating a detection status of a predetermined reaction of the user U when the user U visually recognizes the VR space ((5) in FIG. 1).
- the information output device 3 outputs information that associates the information indicating the predetermined space with the acquired detection information as recognition result information indicating the result of the space recognition of the user U ((6) in FIG. 1).
- for example, the information output device 3 outputs, as shown in FIG., recognition result information that associates moving image information indicating a VR moving image with information indicating the detection status of the predetermined reaction at each playback position of the VR moving image, or recognition result information in which information indicating the detection status of the predetermined reaction is displayed on a map of the VR space.
- the predetermined reaction is a reaction in which the user U attempts to recognize the space.
- when the look-around operation, the gaze operation, or the θ wave of the user U is detected, it is considered that the user U is proceeding while recognizing that he or she is in a space he or she has recognized, or that the recognized space lies ahead of his or her line of sight and the user U is proceeding while recognizing that he or she is not yet in that space.
- the designer D can grasp the user U's perception of the space included in the VR space by checking the recognition result information, output by the information output device 3, in which the information indicating the VR space is associated with the acquired detection information.
- the designer D can evaluate the actual space corresponding to the VR space or the space to be designed, and use it for designing the space.
- the configurations of the display device 2 and the information output device 3 will be described.
- FIG. 2 is a diagram illustrating a configuration of the display device 2 according to the first embodiment.
- the display device 2 includes an input unit 21, a display unit 22, a detection unit 23, a storage unit 24, and a control unit 25.
- the input unit 21 comprises, for example, buttons and a contact sensor arranged over the display unit 22, and receives operation inputs from the user of the display device 2.
- the display unit 22 is configured by, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display.
- the display unit 22 displays various information under the control of the control unit 25.
- the detection unit 23 is, for example, a three-dimensional acceleration sensor, and detects the acceleration applied to the display device 2. Upon detecting the acceleration, the detection unit 23 outputs information indicating the detected acceleration to the control unit 25.
- the storage unit 24 is, for example, a ROM (Read Only Memory) and a RAM (Random Access Memory).
- the storage unit 24 stores various programs for causing the display device 2 to function.
- the storage unit 24 stores a program that causes the control unit 25 of the display device 2 to function as an acquisition unit 251, a display control unit 252, a detection information generation unit 253, and a transmission unit 254 described below.
- the control unit 25 is, for example, a CPU (Central Processing Unit).
- the control unit 25 controls functions related to the display device 2 by executing various programs stored in the storage unit 24.
- the control unit 25 functions as an acquisition unit 251 as a spatial information acquisition unit, a display control unit 252, a detection information generation unit 253, and a transmission unit 254 by executing a program stored in the storage unit 24. Details of these functions will be described later.
- when the display device 2 is a personal computer, it is assumed that a device that is attached to the head of the user U and detects the acceleration applied to the head of the user U, and a device that detects the brain waves (θ waves) of the user U, are connected to the display device 2.
- FIG. 3 is a diagram illustrating a configuration of the information output device 3 according to the first embodiment.
- the information output device 3 includes an input unit 31, a display unit 32, a storage unit 33, and a control unit 34.
- the input unit 31 comprises, for example, buttons and a contact sensor arranged over the display unit 32, and receives operation inputs from the user of the information output device 3.
- the display unit 32 is configured by, for example, a liquid crystal display or an organic EL display. The display unit 32 displays various information under the control of the control unit 34.
- the storage unit 33 is, for example, a ROM and a RAM.
- the storage unit 33 stores various programs for causing the information output device 3 to function.
- the storage unit 33 stores an information output program that causes the control unit 34 of the information output device 3 to function as an acquisition unit 341 and an output unit 342 described below.
- the control unit 34 is, for example, a CPU.
- the control unit 34 controls the functions of the information output device 3 by executing various programs stored in the storage unit 33.
- the control unit 34 functions as the acquisition unit 341 and the output unit 342 by executing the information output program stored in the storage unit 33.
- FIG. 4 is a sequence diagram illustrating a flow of processing until the information output device 3 outputs recognition result information in the first embodiment.
- the generation device 1 generates a VR space model or a VR moving image as VR space information according to an operation from the designer D (S1).
- the generation device 1 generates a VR space model or a VR moving image, but is not limited thereto.
- the generation device 1 may generate, as VR space information, a plurality of still images corresponding to the VR space indicated by the VR space model, a plurality of moving images, or a plurality of still images indicating a part of the VR moving image.
- the generating device 1 uploads the generated VR space information to the publishing device 4 according to the operation of the operator of the generating device 1 (S2).
- the generation device 1 may upload the VR moving image to a moving image sharing site.
- the acquisition unit 251 of the display device 2 acquires the VR space information generated by the generation device 1 from the disclosure device 4 (S3).
- the acquisition unit 251 acquires the VR space information from the disclosure device 4, but is not limited thereto.
- the acquisition unit 251 may acquire VR space information from the generation device 1 via a storage medium such as an SD (registered trademark) card.
- the display control unit 252 of the display device 2 causes the display unit 22 to display the VR space indicated by the obtained VR space information (S4).
- the display control unit 252 may display the VR space indicated by the VR space information on the display unit 22 while receiving the VR space information stored in the external device.
- the detection information generation unit 253 of the display device 2 generates detection information indicating a detection state of a predetermined reaction to the VR space of the user U who is wearing the display device 2 and viewing the VR space (S5).
- the predetermined reaction is, as described above, a look-around operation, a gaze operation, or the generation of a θ wave.
- description will be given focusing on a case where the look-around operation or the gaze operation is detected.
- the detection information generation unit 253 generates detection information indicating a detection state of a look-around operation and a gaze operation of the user U who is viewing the VR space as detection information indicating a predetermined reaction.
- the look-around operation is a swing operation in which the user U swings his or her head in a predetermined direction while viewing the VR space.
- the predetermined direction is, for example, a direction horizontal to the ground and perpendicular to the traveling direction.
- the gaze operation is an operation in which the user U gazes at the VR space for a certain time or more.
- hereinafter, the predetermined direction is referred to as the lateral direction.
- the look-around operation is likely to be detected, for example, when the user U is in a “selection state”, that is, a state in which the user U is searching for information on a target place.
- the gaze operation is more likely to be detected, for example, in a “progressing state”, in which the user U has found information on the target place and determined the traveling direction, or in an “arrival state”, in which the user U is traveling while looking directly at the target place itself. For this reason, the detection information generation unit 253 may classify the predetermined reaction according to whether the state at the time of the look-around or gaze operation is the “selection state”, the “progressing state”, or the “arrival state”.
- the detection information generation unit 253 may also identify the predetermined reaction based on only some of the three states (“selection state”, “progressing state”, and “arrival state”), depending on the configuration of the building or the purpose of the user. In this way, the detection information generation unit 253 can classify and quantify the user's search behavior using the look-around operation.
- the detection information generation unit 253 identifies the traveling direction based on, for example, the acceleration indicated by the acceleration information output from the detection unit 23. The detection information generation unit 253 then treats a state in which the lateral acceleration indicated by the acceleration information exceeds a first threshold as a detection state of the look-around operation, and treats a state in which the lateral acceleration remains below a predetermined threshold for a predetermined time or more as a detection state of the gaze operation. Note that the detection information generation unit 253 may also treat a state in which the vertical acceleration indicated by the acceleration information exceeds a second threshold as a detection state of the look-around operation.
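The thresholding just described might be sketched as follows (the threshold values, sampling granularity, and function name are illustrative assumptions; the disclosure does not specify concrete numbers):

```python
# Classify a stream of lateral-acceleration samples into detection states:
# "look_around" when the lateral acceleration exceeds a first threshold,
# "gaze" when it stays below a (smaller) threshold for a minimum duration.
def classify(samples, first_threshold=2.0, gaze_threshold=0.3,
             gaze_min_samples=5):
    events = []     # (sample index, state) pairs
    calm_run = 0    # consecutive samples below gaze_threshold
    for i, a in enumerate(samples):
        if abs(a) > first_threshold:
            events.append((i, "look_around"))
            calm_run = 0
        elif abs(a) < gaze_threshold:
            calm_run += 1
            if calm_run == gaze_min_samples:
                events.append((i, "gaze"))
        else:
            calm_run = 0
    return events

samples = [0.1, 0.1, 2.5, 0.1, 0.1, 0.1, 0.1, 0.1]
print(classify(samples))  # → [(2, 'look_around'), (7, 'gaze')]
```

A vertical-acceleration channel could be handled the same way against the second threshold.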
- the detection information generation unit 253 can thereby also detect the look-around operation when the user looks up at a stairwell or a tall building within a structure.
- the case where the detection information generation unit 253 detects the look-around operation and the gaze operation based on the acceleration indicated by the acceleration information output from the detection unit 23 has been described as an example, but the detection is not limited thereto.
- the detection information generation unit 253 may instead detect the look-around operation and the gaze operation based on changes in the region of the VR space viewed by the user U.
- the detection information generation unit 253 stores, in the storage unit 24 as the detection information, position information indicating the position of the user U in the VR space in association with acceleration information indicating the lateral acceleration, which indicates the detection status of the look-around operation or the gaze operation at each position.
- for a VR moving image, the detection information generation unit 253 stores, in the storage unit 24 as detection information, playback position information indicating each playback position in the VR moving image in association with acceleration information indicating the lateral acceleration, which indicates the detection status of the look-around operation or the gaze operation at each playback position.
- The detection information generation unit 253 may detect that the user U has performed the look-around operation when the lateral acceleration exceeds the first threshold, and may detect that the user U has performed the gaze operation when the lateral acceleration thereafter remains below the predetermined threshold for a certain time or more.
- The detection information generation unit 253 may specify the position in the VR space or the playback position of the VR moving image at the time when the look-around operation or the gaze operation of the user U is detected, and store in the storage unit 24, as the detection information, information in which the detection state at the time of detection is associated with the position information indicating the position in the VR space or the reproduction position information indicating the playback position of the VR moving image.
- It is assumed that the detection information includes space identification information for identifying the VR space information (VR space model or VR moving image) corresponding to the VR space that the user U is viewing, type information indicating the type of operation of the user U (look-around operation or gaze operation), and a user ID for identifying the user U.
- The space identification information is, for example, the name of a building or the like corresponding to the VR space indicated by the VR space information, and is set by the designer D.
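- One possible layout for a single entry of the detection information described above, combining the fields named in the text (user ID, space identification information, type information, detection state, and either a position or a playback position), might look like the following; the field names and types are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DetectionRecord:
    """One entry of the detection information (illustrative sketch only)."""
    user_id: str                  # identifies the user U
    space_id: str                 # space identification info, e.g. a building name
    reaction_type: str            # type info: "look_around" or "gaze"
    lateral_acceleration: float   # detection state at this point
    position: Optional[Tuple[float, float, float]] = None  # (x, y, z) in a VR space model
    playback_position: Optional[float] = None              # seconds into a VR moving image
```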
- the VR space information may be visually recognized by a plurality of users U. In this case, the detection information generation unit 253 generates detection information corresponding to each of the plurality of users U.
- The look-around operation is, for example, a swing operation in which the user U swings his/her head in a predetermined direction when visually recognizing the VR space, but is not limited thereto.
- the look-around operation may be a movement of the user's line of sight in a predetermined pattern when the user visually recognizes the VR space.
- The display device 2 is provided with a line-of-sight detection sensor that detects the line of sight of the user U.
- the detection information generation unit 253 detects that the pattern of the line of sight of the user U output from the line of sight detection sensor is a predetermined pattern.
- The predetermined pattern is, for example, a pattern in which the line of sight, with the traveling direction taken as 0 degrees, moves from within a first predetermined angle range (for example, from 60 degrees to 120 degrees) to within a second predetermined angle range within a predetermined time.
- This predetermined pattern is called a saccade.
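- The pattern check might be sketched as follows, assuming gaze angles in degrees with the traveling direction at 0 degrees. The second angle range and the time limit are not specified above, so the values used here are placeholders.

```python
# Illustrative check for the saccade-like "predetermined pattern" described
# above. SECOND_RANGE and MAX_INTERVAL are assumed values, not from the text.

FIRST_RANGE = (60.0, 120.0)     # first predetermined angle range (from the text)
SECOND_RANGE = (240.0, 300.0)   # assumed second predetermined angle range
MAX_INTERVAL = 0.2              # assumed "predetermined time" in seconds

def is_saccade(gaze_trace):
    """gaze_trace: list of (timestamp_sec, angle_deg) samples."""
    first_times = [t for t, a in gaze_trace
                   if FIRST_RANGE[0] <= a <= FIRST_RANGE[1]]
    for t2, angle in gaze_trace:
        if SECOND_RANGE[0] <= angle <= SECOND_RANGE[1]:
            # a fast jump from the first range into the second range
            if any(0 < t2 - t1 <= MAX_INTERVAL for t1 in first_times):
                return True
    return False
```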
- the detection information generation unit 253 may specify the position in the VR space or the reproduction position of the VR moving image when the user U performs the swing motion and the gaze pattern indicates the saccade.
- The detection information generation unit 253 may further store in the storage unit 24, as the detection information, information in which the detection state of the look-around operation at the time when the swing motion is performed and the line-of-sight pattern indicates a saccade is associated with the position information indicating the position in the VR space or the reproduction position information indicating the playback position of the VR moving image.
- By taking the saccade detection status into account, the display device 2 can detect the timing at which the user U is trying to recognize the space to which he or she is moving in the VR space with higher accuracy than by using the swing motion alone.
- the detection information generation unit 253 causes the storage unit 24 to store the detection information including the position information indicating the position in the VR space or the reproduction position information indicating the reproduction position of the VR moving image when a predetermined reaction is detected.
- the detection information generation unit 253 may include an image indicating the VR space or a display image of the VR moving image displayed by the display control unit 252 as a capture image in the detection information.
- the detection information generation unit 253 may include, in the detection information, a moving image or a VR moving image indicating a VR space within a predetermined period including a timing at which a predetermined reaction is detected.
- The display device 2 may include an electroencephalogram detection unit that detects the brain waves of the user wearing the display device 2 in order to detect the generation of a θ wave, which is an example of the predetermined reaction. The detection information generation unit 253 may then generate detection information indicating the detection state of the θ wave.
- The detection information generation unit 253 specifies the position in the VR space or the playback position of the VR moving image at the time when the θ wave, which is a type of brain wave, is detected.
- The detection information generation unit 253 stores in the storage unit 24, as detection information, information in which the detection state of the θ wave is associated with the position information indicating the position in the VR space or the reproduction position information indicating the playback position of the VR moving image.
- the predetermined response may be that the line of sight of the user U is a predetermined pattern and the user is in a sweating state.
- The predetermined reaction may be that the line of sight of the user U follows a predetermined pattern, the user is in a sweating state, and the θ wave is detected.
- the display device 2 may include a sweat detection unit that detects the sweat of the user wearing the display device 2 in order to detect the sweat of the user.
- The detection information generation unit 253 may generate, as detection information indicating the detection state of the predetermined reaction, information indicating the state of perspiration when the user U visually recognizes the VR space.
- the display device 2 may not include the detecting unit 23.
- the transmission unit 254 of the display device 2 transmits, to the information output device 3, one or more detection information indicating a detection status of a predetermined reaction when one or more users U visually recognize the VR space (S6).
- The acquisition unit 341 of the information output device 3 transmits, to the display device 2, an acquisition request for the detection information.
- the transmission unit 254 of the display device 2 transmits one or more pieces of detection information stored in the storage unit 24 to the information output device 3.
- The acquisition unit 341 of the information output device 3 receives, from the display device 2, one or more pieces of detection information indicating the detection state of the predetermined reaction when one or more users U visually recognize the VR space, and acquires the VR space information corresponding to the detection information from the disclosure device 4 (S7).
- The acquiring unit 341 acquires detection information including position information indicating the position in the VR space at the time when the predetermined reaction was detected and information indicating the detection state of the predetermined reaction.
- Alternatively, the acquiring unit 341 acquires detection information including reproduction position information indicating the playback position of the VR moving image at the time when the predetermined reaction was detected and information indicating the detection state of the predetermined reaction.
- The acquisition unit 341 causes the storage unit 33 to store, as recognition result information indicating the result of the space recognition of the user U, information in which the VR space information is associated with the detection information.
- the acquisition unit 341 causes the storage unit 33 to store, for each of the plurality of VR space information, recognition result information in which the VR space information is associated with the acquired detection information.
- The output unit 342 associates the VR space information with the detection information acquired by the acquisition unit 341 and outputs the result as recognition result information indicating the result of the space recognition of the user U.
- the output unit 342 causes the display unit 32 to display the recognition result information stored in the storage unit 33 in response to receiving the display operation of the recognition result information via the input unit 31 (S8).
- FIG. 5 is a diagram illustrating a display example of recognition result information of one user.
- a display example of recognition result information when a user visually recognizes a VR moving image as a VR space indicated by VR space information will be described. Further, a display example of the recognition result information will be described assuming that the predetermined reaction is a look-around operation.
- The output unit 342 displays, on the display unit 32, the user IDs included in the detection information stored in the storage unit 33, and accepts selection of a user ID via the input unit 31. For example, the output unit 342 receives a selection of any of the three user IDs.
- When the output unit 342 receives the selection of one user ID, the output unit 342 specifies the VR moving image corresponding to the space identification information included in the detection information corresponding to that user ID. Then, the output unit 342 outputs information indicating the detection state of the look-around operation at each playback position of the specified VR moving image. Specifically, as shown in FIG. 5B, the output unit 342 plays back the specified VR moving image on the display unit 32, and also causes the display unit 32 to display a graph G, which is information indicating the detection state of the look-around operation at each playback position of the VR moving image, and a playback position mark M, which is information indicating the current playback position of the VR moving image.
- The horizontal axis of the graph G indicates the playback position of the VR moving image, and the vertical axis indicates the magnitude of the lateral acceleration. Note that when the display device 2 detects the θ wave instead of the look-around operation, the vertical axis of the graph G indicates the detection state of the θ wave.
- The output unit 342 may extract still images at predetermined time intervals (for example, every 1 second or every 3 seconds) from the specified VR moving image and cause the display unit 32 to display them. Further, the output unit 342 may extract and display on the display unit 32 still images corresponding to positions where the user who viewed the VR moving image stopped, gazed at the space, or performed a predetermined operation. When the detection information includes a captured image, the output unit 342 may cause the display unit 32 to display the captured image.
- The predetermined operation is, for example, an operation in which the user shakes his/her head vertically as if nodding, or an operation of a button serving as the input unit 21 of the display device 2, but is not limited thereto.
- The predetermined operation may also be performed by voice input via a microphone (not shown). In this case, it is assumed that operation information indicating that the predetermined operation has been performed is associated with the recognition result information.
- The output unit 342 may display, on the graph G displayed on the display unit 32, information indicating the playback position of the VR moving image at the time when the look-around operation was detected.
- In FIG. 5C, the character strings "01", "02", "03", and "04" are displayed as information indicating the playback positions of the VR moving image at which the look-around operation was detected (at which the lateral acceleration was equal to or more than a predetermined amount). Note that when the display device 2 detects the θ wave instead of the look-around operation, the output unit 342 may display, on the graph G displayed on the display unit 32, information indicating the playback positions of the VR moving image at which a θ wave having a predetermined amplitude or more was detected.
- The output unit 342 may display, among the information indicating the playback positions of the moving image, information indicating the playback positions at which the look-around operation was detected in a display mode different from that of information indicating the playback positions at which it was not detected. For example, as shown in FIG. 5C, the output unit 342 displays the areas indicating the playback positions corresponding to the character strings "01", "02", "03", and "04", at which the look-around operation was detected, in a color different from that of the areas indicating the playback positions at which the look-around operation was not detected. Further, as illustrated in FIG. 5D, the output unit 342 may cause the display unit 32 to display detection-time images i1, i2, and i3, which are one or more images corresponding to the playback positions at which the look-around operation was detected, among the plurality of images corresponding to the moving image.
- The designer D who views the screen displayed on the display unit 32 can easily grasp the spaces in which the look-around operation was detected among the spaces corresponding to the VR moving image.
- The output unit 342 may display, among the information indicating the playback positions of the moving image, information indicating the playback positions at which a θ wave having a predetermined amplitude or more was detected in a display mode different from that of information indicating the playback positions at which such a θ wave was not detected.
- The output unit 342 may display, from among a plurality of other images different from the VR moving image corresponding to the selected user ID, one or more images corresponding to the image at the playback position at which the predetermined reaction was detected in the VR moving image.
- the other plurality of images include a plurality of images corresponding to other VR moving images and a plurality of images stored in a database in advance.
- The plurality of images stored in the database are, for example, images extracted and stored in advance from other moving images, or images provided as search results by third-party image posting/storage services or web search sites.
- The one or more images corresponding to the image at the playback position at which the predetermined reaction was detected are, for example, an image showing a space that is a proposed improvement of the space included in that image, or an image at the time when a similar predetermined reaction was detected.
- The output unit 342 displays, near the detection-time images corresponding to the VR moving image corresponding to the selected user ID, buttons for displaying the plurality of images stored in the database or detection-time images included in other VR moving images. Then, in response to a button being pressed, the output unit 342 displays the detection-time images corresponding to another VR moving image or to the database. By doing so, the designer D can check the detection-time images in a plurality of VR moving images and grasp what tendencies the spaces in which a predetermined reaction such as the look-around operation is detected have.
- Tag information indicating an attribute of an image may be associated with each of the plurality of images included in the VR moving image.
- The tag information is, for example, information specifying the type of building shown in the image, or position specifying information specifying a position inside the building.
- the types of buildings are, for example, detached houses and commercial facilities.
- the position specifying information is, for example, an entrance, a corridor, a room, and the like.
- the output unit 342 receives selection of an image at the time of detection and receives a pressing operation of a button for displaying an image at the time of detection included in another VR moving image.
- the output unit 342 displays one or more detection-time images corresponding to the tag information associated with the selected image among the detection-time images corresponding to other VR moving images.
- In this way, the designer D can check images that are highly likely to be similar to the detection-time image he or she selected, and grasp the tendency of the spaces in which a predetermined reaction such as the look-around operation is detected.
- The output unit 342 may display a button for selecting a detection-time image and displaying detection-time images different from the selected image. Then, the output unit 342 displays one or more detection-time images corresponding to the tag information associated with the selected image. By doing so, the designer D can easily compare images different from the detection-time image with the detection-time image, and consider what differences exist between the spaces in which a predetermined reaction such as the look-around operation is detected and the spaces in which it is not.
- The output unit 342 may display, near the detection-time images, a button for displaying images similar to these images. The output unit 342 may then receive a selection of one of the detection-time images together with a pressing operation of the button, and display an image similar to the selected detection-time image. For example, the output unit 342 may search for an image similar to the selected detection-time image from a plurality of other VR moving images stored in the storage unit 33, or input the selected image to an external search engine and search for similar images.
- The output unit 342 may cause the display unit 32 to display, at one time, the VR moving image, the graph G that is information indicating the detection state of the look-around operation at each playback position of the moving image, and one or more detection-time images corresponding to the playback positions at which the look-around operation was detected.
- The output unit 342 may receive selection of at least one of the one or more users U whose detection information was acquired by the acquisition unit 341, and output the VR space indicated by the VR space information in association with the detection information corresponding to the selected user U.
- FIG. 6 is a diagram illustrating a display example of recognition result information of one or more users.
- The output unit 342 displays, on the display unit 32, the user IDs included in the detection information stored in the storage unit 33, and receives, via the input unit 31, selection of one or more users U from among the plurality of users U.
- The output unit 342 superimposes and displays, as the recognition result information corresponding to each of the selected one or more users U, the graphs G that are information indicating the detection states of the look-around operation and the gaze operation at each playback position of the VR moving image.
- FIG. 6B is a display example of the graphs G when two users U are selected. As shown in FIG. 6B, since the graphs indicating the detection states of the look-around operations of the two users U are displayed superimposed, the designer D can easily grasp at which playback positions the look-around operation tends to be performed.
- As illustrated in FIG. 6, the output unit 342 may display the detection information corresponding to each of the user IDs included in the detection information stored in the storage unit 33 in the first area A1 of the display unit 32, and display the detection information corresponding to the selected user ID superimposed in the second area A2 of the display unit 32.
- the output unit 342 may cause the display unit 32 to display, at the same timing, information indicating prediction of the user's look-around operation and gaze operation by the designer D or the like, and detection information corresponding to the selected user ID.
- The acquisition unit 341 acquires, via the input unit 31, prediction information indicating the playback positions at which the user U is expected to perform the look-around operation and the gaze operation when viewing the VR moving image. For example, a playback position corresponding to a place from which the user can see scenery similar to the scenery seen from a place where the look-around operation and the gaze operation were actually performed is predicted as a playback position at which the look-around operation and the gaze operation will be performed.
- the prediction information is information in which reproduction position information indicating each reproduction position is associated with a detection state (lateral acceleration) predicted at each reproduction position.
- The output unit 342 superimposes and displays the graph G, which is information indicating the detection state of the look-around operation at each playback position of the VR moving image, on a graph G2, which is information indicating the prediction result of the look-around operation at each playback position of the VR moving image. In this way, the designer D can compare the predicted state of the look-around operation with the detected state of the actual look-around operation and study the space.
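- The comparison between the predicted detection states (graph G2) and the actual detection states (graph G) could be sketched as follows; the playback positions, acceleration values, and the threshold used here are illustrative assumptions.

```python
# Illustrative comparison of predicted vs. actual look-around detection per
# playback position. The data values and threshold are placeholders.

predicted = {0.0: 0.2, 5.0: 2.5, 10.0: 0.3}   # playback sec -> predicted lateral accel
actual    = {0.0: 0.1, 5.0: 2.8, 10.0: 2.2}   # playback sec -> measured lateral accel

def mismatches(threshold=2.0):
    """Playback positions where prediction and detection disagree about
    whether a look-around operation occurred."""
    return sorted(
        p for p in predicted
        if (predicted[p] >= threshold) != (actual[p] >= threshold)
    )
```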
- The acquisition unit 341 may acquire two pieces of detection information: first detection information, which is the detection information when the user U visually recognizes a first VR space, and second detection information, which is the detection information when the user U visually recognizes a second VR space different from the first VR space. Then, the output unit 342 may cause the display unit 32 to display the first detection information and the second detection information at the same timing.
- For example, the first VR space is a building before a renewal, and the second VR space is the building after the renewal. Alternatively, the first VR space may be a first study plan for a building, and the second VR space may be a second study plan for the same building.
- The output unit 342 may display the first detection information and the second detection information side by side on the display unit 32, or may display them superimposed. The output unit 342 may also cause the display unit 32 to display a detection-time image of the first VR moving image and a detection-time image of the second VR moving image corresponding to the playback positions at which the look-around operation indicated by each piece of detection information was detected. In this way, the designer D can study in which situations the look-around operation is detected while checking its detection state in, for example, the spaces before and after a renewal or the spaces corresponding to each of a plurality of plans.
- The storage unit 33 may store information indicating the user's emotion corresponding to each of a plurality of detection patterns of the look-around operation. The output unit 342 may then analyze the detection information, specify one of the plurality of detection patterns stored in the storage unit 33, and output information indicating the emotion of the user U corresponding to the specified detection pattern in association with the detection information. By doing so, the designer D can grasp what kind of emotion the user U has in the VR space.
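- The mapping from detection patterns to emotions stored in the storage unit 33 might be sketched as a simple lookup table; the pattern names and emotion labels below are purely illustrative assumptions.

```python
# Hypothetical pattern-to-emotion table; keys and labels are placeholders.
pattern_to_emotion = {
    "frequent_short_swings": "anxious / disoriented",
    "single_slow_sweep": "curious / exploring",
    "none": "settled",
}

def emotion_for(pattern):
    # default to "unknown" when the analyzed pattern is not stored
    return pattern_to_emotion.get(pattern, "unknown")
```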
- The output unit 342 may specify the position in the VR space at the time when the look-around operation by the user U was detected.
- the acquisition unit 341 acquires map information indicating the VR space in advance, and causes the storage unit 33 to store the map information.
- the output unit 342 displays information indicating the specified position on a map indicated by the map information stored in the storage unit 33.
- FIG. 7 is a diagram illustrating a display example of a map corresponding to the VR space.
- The output unit 342 displays, based on the position information indicating the position in the VR space included in the detection information and the detection state of the look-around operation or the gaze operation at that position, information indicating the detection state at that position on the map corresponding to the VR space. For example, as shown in FIG. 7, the output unit 342 displays a mark M2 indicating the detection state on the map corresponding to the VR space. By doing so, the designer D can easily confirm the positions at which the look-around operation or the gaze operation was detected.
- the movement trajectory of the user U in the VR space may be displayed on the map shown in FIG.
- Further, the arrangement positions of objects such as furniture, product shelves, and signs arranged in the VR space may be received in advance from the designer D.
- the output unit 342 may display a mark indicating the object on the map shown in FIG. 7 based on the arrangement position of the object arranged in the VR space.
- The storage unit 33 stores, in association with each other, the reproduction position information indicating the playback positions of the VR moving image and the position information indicating the positions in the VR space corresponding to the VR moving image.
- The output unit 342 specifies the playback position of the VR moving image at the time when the look-around operation by the user U was detected, based on the detection information corresponding to the VR moving image.
- The output unit 342 refers to the storage unit 33 and specifies the position information associated with the reproduction position information indicating the specified playback position, as well as the detection state associated with that reproduction position information, thereby specifying the position in the VR space corresponding to the playback position and the detection state at that position. Then, the output unit 342 displays information indicating the detection state at the specified position on the map indicated by the map information stored in the storage unit 33.
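- The lookup described above, from playback positions to positions in the VR space and then to marks on the map, can be sketched as follows; the table contents are illustrative assumptions.

```python
# Illustrative tables standing in for the associations held by the storage
# unit 33; keys and values are placeholders.

playback_to_position = {
    # playback position (sec) -> (x, y) position on the map
    12.0: (3.5, 1.0),
    27.5: (7.0, 4.2),
}

detection_states = {
    # playback position (sec) -> lateral acceleration at detection
    12.0: 2.4,
    27.5: 3.1,
}

def marks_for_map():
    """Return (map position, detection state) pairs, as for mark M2 in FIG. 7."""
    return [
        (playback_to_position[p], detection_states[p])
        for p in detection_states
        if p in playback_to_position
    ]
```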
- The output unit 342 may display the VR moving image and the graph G shown in FIGS. 5B to 5D side by side with the map corresponding to the VR space shown in FIG. 7, and may display information indicating the detection state on the map. In this way, the designer D can easily confirm the positions on the map corresponding to the VR moving image.
- FIG. 8 is a diagram illustrating the detection state of the θ wave of a user who viewed a VR moving image and the detection state of the swing operation serving as the look-around operation.
- FIG. 8A shows the detection state of the θ wave, and FIG. 8B shows the detection state of the swing operation and the gaze operation.
- the horizontal axis of the graph shown in FIG. 8A and the horizontal axis of the graph shown in FIG. 8B are common time axes.
- The vertical axis in FIG. 8A indicates the magnitude of the θ wave, and the vertical axis in FIG. 8B indicates the lateral acceleration.
- As shown in FIG. 8, it can be confirmed that the detection state of the swing operation and the gaze operation changes greatly in response to the detection of the θ wave, and a certain correlation is observed between them.
- the detection information generation unit 253 may generate the detection information based on a predetermined reaction detected while the user is not walking.
- the information output device 3 analyzes, for example, a correlation between a look-around motion detected in a series of actions while the user is seated and an electroencephalogram.
- A state in which the line of sight moves slowly rather than as quickly as a saccade, or in which gazing is frequent, may be defined as a stable state in which the space has few visual segments, and an evaluation result of the interior or the like based on this definition may be output.
- The design support system S uses the generation of the θ wave as the predetermined reaction, but an electroencephalogram other than the θ wave may be used.
- For example, the α wave, which indicates relaxation, often shows a value equal to or more than a certain value during the look-around operation. Therefore, the information output device 3 may estimate that the user is in a relaxed state based on the look-around operation alone.
- Similarly, the β wave, which indicates the degree of concentration, often shows a value equal to or more than a certain value when a certain time or more has elapsed in the gaze operation. Therefore, the information output device 3 may estimate that the user is concentrating based on the gaze operation alone.
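- The behavior-only estimation described above might be sketched as follows, treating the look-around operation as a proxy for a raised α wave and a sustained gaze as a proxy for a raised β wave; the duration threshold is a placeholder assumption.

```python
# Illustrative behavior-only estimation standing in for EEG measurement.
GAZE_MIN_DURATION = 3.0  # assumed seconds of continuous gazing

def estimate_state(look_around_detected, gaze_duration):
    """Estimate user state from behavior alone (sketch, not the disclosure)."""
    states = []
    if look_around_detected:
        states.append("relaxed")        # α wave tends to rise here
    if gaze_duration >= GAZE_MIN_DURATION:
        states.append("concentrating")  # β wave tends to rise here
    return states or ["neutral"]
```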
- As described above, the information output device 3 acquires detection information indicating the detection state of the predetermined reaction of the user when the user visually recognizes the VR space, and outputs the VR space information and the acquired detection information in association with each other.
- Since the look-around operation and the generation of the θ wave, which are predetermined reactions in a space, correlate with space recognition, the information output device 3 outputs the VR space information and the acquired detection information in association with each other. By doing so, the designer who checks these pieces of information can grasp the user's perception of the spaces included in the VR space.
- ⁇ Second embodiment> [Publish recognition result information] Subsequently, a second embodiment will be described.
- the designer D who designs the space confirms the recognition result information.
- The recognition result indicated by the recognition result information is also useful for a resident of a house or the like who examines an indoor layout, such as the arrangement of furniture in a space. Therefore, the information output device 3 according to the second embodiment differs from the first embodiment in that it provides a public service for publishing recognition result information.
- an information output device 3 according to the second embodiment will be described. The description of the same parts as those in the first embodiment will be omitted as appropriate.
- FIG. 9 is a diagram showing an outline of a design support system S according to the second embodiment.
- the information output device 3 is, for example, a server, and is communicably connected to the terminal 5 used by the second user U2.
- the second user U2 of the terminal 5 is a user of a public service that discloses recognition result information.
- the information output device 3 outputs recognition result information to the terminal 5.
- the processing flow from (1) to (5) shown in FIG. 9 is the same as the processing flow in the design support system S according to the first embodiment shown in (1) to (5) in FIG.
- The design support system S according to the second embodiment outputs recognition result information in response to the information output device 3 receiving an acquisition request for recognition result information from the terminal 5 ((6) and (7) in FIG. 9).
- FIG. 10 is a diagram showing the configuration of the information output device 3 according to the second embodiment.
- the information output device 3 according to the second embodiment is different from the information output device 3 according to the first embodiment in that the information output device 3 does not include the input unit 31 and the display unit 32.
- the control unit 34 of the information output device 3 according to the second embodiment is different from the information output device 3 according to the first embodiment in further including a charging unit 343.
- The information output device 3 accepts, from a person related to the VR space information corresponding to the recognition result information (for example, the designer D or a manager), a setting indicating whether disclosure of the recognition result information and the VR space information is permitted.
- The information output device 3 associates the recognition result information stored in the storage unit 33 with a permission flag indicating whether disclosure is permitted. For example, the value of the permission flag is 1 when disclosure of the recognition result information and the VR space information is permitted, and 0 when it is not.
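As a sketch of the permission-flag bookkeeping described above, the following illustrates filtering stored recognition result records down to those whose disclosure is permitted. The record layout, identifiers, and field names are hypothetical, not taken from the specification:

```python
# Hypothetical record store: each recognition result carries a permission flag,
# 1 permitting disclosure and 0 prohibiting it, as described above.
ALLOW, DENY = 1, 0

results = {
    "space-001": {"vr_space": "office-a.vr", "detections": [3.2, 7.8], "permission": ALLOW},
    "space-002": {"vr_space": "lobby-b.vr", "detections": [1.1], "permission": DENY},
}

def publishable(store):
    """Return only the recognition result entries whose disclosure is permitted."""
    return {rid: rec for rid, rec in store.items() if rec["permission"] == ALLOW}

print(sorted(publishable(results)))  # only space-001 survives the filter
```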
- The charging unit 343 charges the second user U2 using the terminal 5 for the output of recognition result information.
- The charging unit 343 may treat the fee for outputting recognition result information during a predetermined period as a fixed amount, and bill the second user U2 that fixed amount every predetermined period.
- Alternatively, the charge may be applied after the recognition result information has been output.
- The charging unit 343 may return at least a part of the profit obtained by charging to the person related to the VR space information corresponding to the recognition result information.
- The output unit 342 receives, from the terminal 5 used by the second user U2, a request to acquire recognition result information stored in the storage unit 33.
- The output unit 342 presents to the terminal 5 a partial image or space identification information indicating the VR space information corresponding to one or more pieces of recognition result information permitted to be disclosed, and accepts an acquisition request for at least one piece of recognition result information from among them.
- Upon receiving the acquisition request, the output unit 342 outputs at least one of the screens shown in FIGS. 5B to 5D to the terminal 5 as the recognition result information.
- The charging unit 343 may change the amount charged according to the form in which the recognition result information is output to the terminal 5. For example, when outputting the screen shown in FIG. 5B, the charging unit 343 may charge a first amount, or may not charge at all.
- Since the screens shown in FIGS. 5C and 5D are considered more convenient than the screen shown in FIG. 5B, the charging unit 343 may charge a second amount, higher than the first amount, when outputting them.
- When the function of searching for a detection-time image corresponding to other VR space information, or for images similar to the detection-time image, is used, the convenience is considered higher still than when the screen shown in FIG. 5C or 5D is output, so the charging unit 343 may charge a third amount higher than the second amount.
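The tiered pricing just described can be sketched as follows; the concrete amounts and output-form names are illustrative assumptions only:

```python
# Hedged sketch of the tiered charging: first < second < third amount,
# ordered by the convenience of the output form (amounts are invented).
FIRST, SECOND, THIRD = 100, 300, 500

def charge_for(output_form: str) -> int:
    """Map an output form to the amount billed to the second user U2."""
    if output_form == "detection_image":       # screen of FIG. 5B
        return FIRST
    if output_form == "vr_moving_image":       # screens of FIG. 5C / 5D
        return SECOND
    if output_form == "similar_image_search":  # cross-space similarity search
        return THIRD
    raise ValueError(f"unknown output form: {output_form}")

# The ordering mirrors the convenience ranking in the text.
assert charge_for("detection_image") < charge_for("vr_moving_image") < charge_for("similar_image_search")
```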
- A person related to the VR space information corresponding to the recognition result information may wish to disclose only the detection-time image without disclosing the VR moving image indicating the VR space information. Therefore, when accepting permission of disclosure from that person, the information output device 3 may also accept a selection as to whether only the detection-time image is to be disclosed, and
- store information indicating the selected content in the storage unit 33 in association with the recognition result information. In this case, the output unit 342 may control whether to output the screen illustrated in FIG. 5B based on the information indicating the selected content.
- As described above, the information output device 3 according to the second embodiment charges the second user U2 using the terminal 5 for the output of recognition result information, and can thus obtain compensation for that output. Furthermore, since the information output device 3 outputs to the terminal 5 only the VR space information corresponding to recognition result information permitted to be disclosed, it can prevent recognition result information that is not intended for disclosure from being published externally.
- The display device 2 is a wearable device that allows the user to view a VR space, such as VR goggles or a VR headset, but is not limited thereto.
- The display device 2 may be an augmented reality projection device that projects a VR space onto the real space.
- In that case, the user wears on the head a detection device provided with an acceleration sensor, and the information output device 3 may acquire the acceleration detected by the detection device while the VR space is projected onto the real space as detection information of a look-around motion or a gaze motion.
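A minimal sketch of turning head-mounted accelerometer samples into look-around/gaze detection information, assuming a simple magnitude threshold; the threshold value and the sample data are illustrative, not taken from the specification:

```python
# Classify lateral-acceleration samples from a head-mounted sensor into a
# look-around (head-swing) state vs. a gaze state. The 0.5 m/s^2 threshold
# is an assumed value for illustration only.
def classify(lateral_accel, threshold=0.5):
    """Label each sample: 'look_around' when lateral acceleration is large,
    'gaze' when the head is nearly still."""
    return ["look_around" if abs(a) > threshold else "gaze" for a in lateral_accel]

samples = [0.1, 0.9, -1.2, 0.05, 0.02]
print(classify(samples))  # ['gaze', 'look_around', 'look_around', 'gaze', 'gaze']
```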
- The present invention is not limited to this, and may also be applied when the user views an AR space instead of a VR space. That is, when the user views an AR space in which virtual objects are arranged in the real space, the display device 2 may detect the user's predetermined reaction, generate detection information in which the detection of the predetermined reaction is associated with a captured image indicating the AR space, and transmit the detection information to the information output device 3. The information output device 3 may then output the information indicating the AR space in association with the acquired detection information. For example, the information output device 3 may cause the display unit 32 to display the captured image indicating the AR space at the time the predetermined reaction was detected, in association with information indicating that the user performed the predetermined reaction.
- The information output device 3 may further include a determination unit that determines, based on the position at which the user U showed a predetermined reaction, a position in the VR space suitable for placing an object such as a home appliance, furniture, a houseplant, or a sign.
- For example, the information output device 3 determines whether a position is suitable for placing the object based on the size of the space corresponding to the position at which the user U showed the predetermined reaction.
- The determination unit may determine the position immediately before or immediately after the user U showed the predetermined reaction, that is, a position at a break in the space, as a position suitable for placing a sign.
- The information output device 3 may also determine whether a position is suitable for placing an object based on the viewing frequency of each of a plurality of positions at which the user U showed the predetermined reaction. For example, the information output device 3 determines a position with a high viewing frequency, among the positions at which the user U performed a gaze motion as the predetermined reaction, as a position suitable for designer furniture, and determines a position with a low viewing frequency, among the positions at which the user U performed a look-around motion as the predetermined reaction, as a position suitable for furniture with a plainer design.
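The viewing-frequency rule above can be sketched as a simple partition of candidate positions; the counts and the threshold are hypothetical values, not from the specification:

```python
# Partition positions by how often the user viewed them: frequently viewed
# positions suit showcase (designer) furniture, rarely viewed ones suit
# plainer furniture. Counts and the threshold of 10 are invented.
view_counts = {"pos_a": 14, "pos_b": 2, "pos_c": 9}

def suggest(counts, high=10):
    """Split positions into (high-visibility, low-visibility) groups."""
    showcase = [p for p, n in counts.items() if n >= high]  # high viewing frequency
    utility = [p for p, n in counts.items() if n < high]    # low viewing frequency
    return showcase, utility

showcase, utility = suggest(view_counts)
print(showcase, utility)  # ['pos_a'] ['pos_b', 'pos_c']
```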
- The information output device 3 may also determine whether a position at which the user U showed no predetermined reaction is suitable for placing an object, further based on the viewing frequency of that position. The information output device 3 may then cause the display unit 32 to display positions determined to be suitable for placing the object as candidate placement positions.
- The information output device 3 may generate a VR space in which a virtual object is arranged at a candidate placement position, and output a moving image of that VR space so that it can be compared with a moving image of a VR space in which no virtual object is arranged. Furthermore, the information output device 3 may identify the user U's predetermined reactions while viewing the VR space with the virtual object and the VR space without it, and use those reactions to evaluate the two spaces. In this way, the information output device 3 can also support the placement of objects.
- The design support system S has been described as supporting the design of a space related to a building, but the present invention is not limited to this.
- In the above embodiments, the information output device 3 reproduces and outputs a VR moving image as shown in FIGS. 5B to 5D, but is not limited thereto.
- The control unit 34 of the information output device 3 may function as a reception unit and accept a comment or annotation from the user of the information output device 3 for each playback position. The control unit 34 then stores the accepted comment or annotation and the corresponding playback position of the VR moving image in the storage unit 33, and, when the VR moving image is played back, may display the comment or annotation in association with that playback position. In this way, the designer D can grasp what impression the user of the information output device 3 who viewed the VR moving image had of the space shown in it.
- The specific form in which the apparatuses are distributed or integrated is not limited to the above embodiments; all or part of them may be functionally or physically distributed or integrated in arbitrary units.
- New embodiments produced by arbitrary combinations of the plurality of embodiments are also included in the embodiments of the present invention, and a new embodiment produced by such a combination retains the effects of the original embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Health & Medical Sciences (AREA)
- Neurosurgery (AREA)
- Neurology (AREA)
- General Health & Medical Sciences (AREA)
- Dermatology (AREA)
- Biomedical Technology (AREA)
- Computer Hardware Design (AREA)
- Pure & Applied Mathematics (AREA)
- Architecture (AREA)
- Civil Engineering (AREA)
- Evolutionary Computation (AREA)
- Computational Mathematics (AREA)
- Mathematical Optimization (AREA)
- Mathematical Analysis (AREA)
- Structural Engineering (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The output unit may cause a display unit to display, among a plurality of images corresponding to the moving image, one or more images corresponding to playback positions at which the predetermined reaction was detected.
The output unit may accept a selection of an image from among the one or more images corresponding to each of the one or more playback positions at which the predetermined reaction was detected, and display images similar to the selected image.
The acquisition unit may acquire detection information indicating the detection state of a look-around motion, which is a motion in which the user looks around the predetermined space, as the detection information indicating the detection status of the predetermined reaction.
The acquisition unit may treat a predetermined pattern of movement of the user's line of sight when the user views the predetermined space as the look-around motion, and acquire the detection information indicating the detection state of that look-around motion.
The acquisition unit may acquire detection information indicating the detection state of the user's brain waves when the user views the predetermined space, as the detection information indicating the detection status of the predetermined reaction.
[Overview of the design support system S]
FIG. 1 is a diagram showing an overview of the design support system S according to the first embodiment. The design support system S comprises a generation device 1, a display device 2, and an information output device 3, and supports the design of a space related to a building.
The configurations of the display device 2 and the information output device 3 are described below.
First, the configuration of the display device 2 is described. FIG. 2 is a diagram showing the configuration of the display device 2 according to the first embodiment.
The display device 2 comprises an input unit 21, a display unit 22, a detection unit 23, a storage unit 24, and a control unit 25.
The display unit 22 is composed of, for example, a liquid crystal display or an organic EL (Electro-Luminescence) display, and displays various information under the control of the control unit 25.
The detection unit 23 is, for example, a three-axis acceleration sensor, and detects the acceleration applied to the display device 2. Upon detecting acceleration, the detection unit 23 outputs information indicating the detected acceleration to the control unit 25.
Next, the configuration of the information output device 3 is described. FIG. 3 is a diagram showing the configuration of the information output device 3 according to the first embodiment.
The input unit 31 is composed of, for example, buttons and a contact sensor arranged over the display unit 32, and accepts operation input from the user of the information output device 3.
The display unit 32 is composed of, for example, a liquid crystal display or an organic EL display, and displays various information under the control of the control unit 34.
The functions of the control unit 25 and the control unit 34 are described below with reference to a sequence diagram showing the flow of processing when the information output device 3 outputs recognition result information, in which the VR space information and the acquired detection information are associated. FIG. 4 is a sequence diagram showing the flow of processing up to the point where the information output device 3 according to the first embodiment outputs the recognition result information.
FIG. 5 is a diagram showing a display example of the recognition result information of a single user. Here, a display example of the recognition result information when the user views a VR moving image as the VR space indicated by the VR space information is described, assuming that the predetermined reaction is a look-around motion.
The output unit 342 may accept a selection of at least one of the one or more users U acquired by the acquisition unit 341, and output the VR space indicated by the VR space information in association with the detection information corresponding to the selected user U. FIG. 6 is a diagram showing a display example of the recognition result information of one or more users.
The output unit 342 may identify the position in the VR space at which the look-around motion by the user U was detected. The acquisition unit 341 acquires map information indicating the VR space in advance and stores it in the storage unit 33. The output unit 342 then displays information indicating the identified position on the map indicated by the stored map information. FIG. 7 is a diagram showing a display example of a map corresponding to the VR space.
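A sketch of marking detected look-around positions on a floor map, in the spirit of FIG. 7; the grid representation and the coordinates are illustrative assumptions:

```python
# Mark the VR-space positions at which a look-around motion was detected on a
# simple 2-D character grid standing in for the floor map (data is invented).
detections = [(2, 1), (0, 3)]  # (x, y) grid cells where a look-around occurred

def render(width, height, marks):
    """Draw a width x height map with '*' at each detection position."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x, y in marks:
        grid[y][x] = "*"
    return "\n".join("".join(row) for row in grid)

print(render(4, 4, detections))
```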
Next, the relationship between brain waves and look-around or gaze motions is described. FIG. 8 is a diagram showing the detection status of theta waves of a user viewing a VR moving image and the detection status of a head-swing motion as a look-around motion. FIG. 8(a) shows the detection status of theta waves, and FIG. 8(b) shows the detection status of head-swing and gaze motions. The horizontal axes of the graphs in FIGS. 8(a) and 8(b) share a common time axis. The vertical axis in FIG. 8(a) indicates the magnitude of the theta wave, and the vertical axis in FIG. 8(b) indicates the lateral acceleration. As shown in FIG. 8, it can be confirmed that the detection status of head-swing and gaze motions changes significantly in response to the detection of theta waves, and a certain correlation is observed between theta waves and head-swing or gaze motions.
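The correlation observed in FIG. 8 can be illustrated with a Pearson correlation over two series sharing a time axis. The data below are synthetic, invented purely for illustration; the specification reports only a qualitative correlation:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

theta = [0.2, 0.3, 0.9, 1.1, 0.4, 0.2]  # synthetic theta-wave magnitude
accel = [0.1, 0.2, 0.8, 1.0, 0.3, 0.1]  # synthetic lateral head acceleration
print(round(pearson(theta, accel), 2))  # near 1.0 for strongly correlated series
```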
The detection information generation unit 253 may generate detection information based on a predetermined reaction detected while the user is not walking. In this case, the information output device 3 may, for example, analyze the correlation between look-around motions and brain waves detected during a series of actions while the user is seated, and output a result of evaluating a room or the like by defining a state with frequent looking around as an unstable state with many spatial segmentations, and a state with little looking around (such as the line of sight moving slowly rather than at saccade speed) or with frequent gazing as a stable state with few spatial segmentations.
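The seated-evaluation rule above might be sketched as follows, where the counting threshold is an assumption rather than a value from the specification:

```python
# Classify a space as stable or unstable from counts of detected reactions,
# following the rule in the text: many look-arounds suggest an unstable,
# highly segmented space; few look-arounds or frequent gazing suggests a
# stable one. The threshold of 5 is an invented value.
def evaluate(look_arounds: int, gazes: int, limit: int = 5) -> str:
    if gazes >= look_arounds or look_arounds <= limit:
        return "stable"
    return "unstable"

print(evaluate(look_arounds=8, gazes=1))  # unstable
print(evaluate(look_arounds=2, gazes=6))  # stable
```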
As described above, the information output device 3 according to the first embodiment acquires detection information indicating the detection status of the user's predetermined reaction when the user views the VR space, and outputs the VR space information in association with the acquired detection information. Since the occurrence of a look-around motion or theta waves, which are predetermined reactions in a space, correlates with spatial recognition, outputting the VR space information in association with the acquired detection information allows a designer reviewing this information to grasp the user's recognition of the spaces contained in the VR space.
[Disclosing the recognition result information]
Next, the second embodiment is described. In the first embodiment, the designer D who designs a space reviews the recognition result information, but the recognition result indicated by the recognition result information is also useful for residents of a house or the like who are examining an indoor layout, such as the arrangement of furniture in the space. Therefore, the information output device 3 according to the second embodiment differs from the first embodiment in that it provides a publication service that discloses recognition result information. The information output device 3 according to the second embodiment is described below. Descriptions of the parts that are the same as in the first embodiment are omitted as appropriate.
As described above, the information output device 3 according to the second embodiment charges the second user U2 using the terminal 5 for the output of recognition result information, and can thus obtain compensation for that output. Furthermore, since the information output device 3 outputs to the terminal 5 the VR space information corresponding to recognition result information permitted to be disclosed, it can prevent recognition result information that is not intended for disclosure from being published externally.
Claims (24)
- An acquisition unit that acquires detection information indicating the detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space; and
an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit,
an information output apparatus comprising the above. - The acquisition unit acquires the detection information in which the detection status of the predetermined reaction when the user views a moving image showing the predetermined space is associated with information indicating a playback position of the moving image, and
the output unit outputs the detection information indicating the detection status of the predetermined reaction for each playback position of the moving image,
the information output apparatus according to claim 1. - The output unit plays back the moving image and causes a display unit to display it, and causes the display unit to display, in association with each other, information indicating the detection status of the predetermined reaction at each playback position of the moving image and information indicating the current playback position of the moving image,
the information output apparatus according to claim 2. - The output unit displays, among the information indicating playback positions of the moving image, information indicating playback positions at which the predetermined reaction was detected in a display mode different from that of information indicating playback positions at which the predetermined reaction was not detected,
the information output apparatus according to claim 2 or 3. - The output unit causes the display unit to display, among a plurality of images corresponding to the moving image, one or more images corresponding to playback positions at which the predetermined reaction was detected,
the information output apparatus according to any one of claims 2 to 4. - The output unit displays, among a plurality of images different from the images included in the moving image, one or more images corresponding to the image at the playback position in the moving image at which the predetermined reaction was detected,
the information output apparatus according to any one of claims 2 to 5. - The output unit accepts a selection of an image from among one or more images corresponding to each of the one or more playback positions at which the predetermined reaction was detected, and displays images similar to the selected image,
the information output apparatus according to any one of claims 2 to 6. - The output unit identifies the position in the predetermined space at which the predetermined reaction was detected, and displays information indicating the identified position on a map showing the predetermined space,
the information output apparatus according to any one of claims 2 to 7. - The acquisition unit acquires the detection information corresponding to each of a plurality of the users, and
the output unit outputs the information indicating the predetermined space in association with the detection information corresponding to each of the plurality of users acquired by the acquisition unit,
the information output apparatus according to any one of claims 1 to 8. - The output unit accepts a selection of at least one of the plurality of users, and outputs the information indicating the predetermined space in association with the detection information corresponding to the selected user,
the information output apparatus according to claim 9. - The acquisition unit acquires prediction information indicating a playback position of the moving image at which the predetermined reaction is predicted to be detected when the user views the moving image showing the predetermined space, and
the output unit causes a display unit to display the detection information acquired by the acquisition unit and the prediction information,
the information output apparatus according to any one of claims 1 to 10. - The acquisition unit acquires first detection information, which is the detection information when the user views a first predetermined space, and second detection information, which is the detection information when the user views a second predetermined space, and
the output unit causes a display unit to display the first detection information and the second detection information acquired by the acquisition unit,
the information output apparatus according to any one of claims 1 to 11. - Further comprising a storage unit that stores information indicating the user's emotion corresponding to each of a plurality of detection patterns of the predetermined reaction, wherein
the output unit outputs information indicating the user's emotion corresponding to the detection pattern included in the detection information,
the information output apparatus according to any one of claims 1 to 12. - The acquisition unit causes a storage unit to store, for each of a plurality of predetermined spaces, result information in which information indicating the predetermined space is associated with the acquired detection information, together with information indicating whether the result information is to be disclosed, and
the output unit, upon accepting from a terminal an acquisition request for result information to be disclosed that is stored in the storage unit, outputs the result information to that terminal,
the information output apparatus according to any one of claims 1 to 13. - Further comprising a charging unit that charges the user of the terminal for the output of the result information,
the information output apparatus according to claim 14. - The acquisition unit acquires, as the detection information indicating the detection status of the predetermined reaction, detection information indicating the detection state of a look-around motion, which is a motion in which the user looks around the predetermined space,
the information output apparatus according to any one of claims 1 to 15. - The acquisition unit treats, as the look-around motion, a head-swing motion in which the user swings the head in a predetermined direction when viewing the predetermined space, and acquires the detection information indicating the detection state of the look-around motion,
the information output apparatus according to claim 16. - The acquisition unit treats, as the look-around motion, a predetermined pattern of movement of the user's line of sight when the user views the predetermined space, and acquires the detection information indicating the detection state of the look-around motion,
the information output apparatus according to claim 16 or 17. - The acquisition unit acquires, as the detection information indicating the detection status of the predetermined reaction, detection information indicating the detection state of a gaze motion, which is a motion in which the user gazes at the predetermined space for a predetermined time or longer,
the information output apparatus according to any one of claims 1 to 18. - The acquisition unit acquires, as the detection information indicating the detection status of the predetermined reaction, detection information indicating the detection state of the user's brain waves when the user views the predetermined space,
the information output apparatus according to any one of claims 1 to 19. - A design support system comprising a display device worn by a user and an information output apparatus, wherein
the display device comprises:
a display unit;
a display control unit that causes the display unit to display a predetermined space that is a virtual reality space or an augmented reality space; and
a detection information generation unit that generates detection information indicating the detection status of a predetermined reaction of the user when the user views the predetermined space,
and wherein
the information output apparatus comprises:
an acquisition unit that acquires the detection information generated by the detection information generation unit; and
an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit,
thus constituting
the design support system. - The display device
further comprises a space information acquisition unit that acquires information indicating the predetermined space from an information disclosure device that discloses information, and
the display control unit causes the display unit to display the predetermined space indicated by the information acquired by the space information acquisition unit,
the design support system according to claim 21. - Executed by a computer:
a step of acquiring detection information indicating the detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space; and
a step of outputting information indicating the predetermined space in association with the acquired detection information,
an information output method comprising the above steps. - Causing a computer to function as:
an acquisition unit that acquires detection information indicating the detection status of a predetermined reaction of a user when the user views a predetermined space that is a virtual reality space or an augmented reality space, and
an output unit that outputs information indicating the predetermined space in association with the detection information acquired by the acquisition unit,
an information output program for causing the computer to function as the above.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020535906A JP7312465B2 (ja) | 2018-08-09 | 2019-08-09 | Information output device, design support system, information output method, and information output program |
US17/167,604 US11798597B2 (en) | 2018-08-09 | 2021-02-04 | Information output apparatus, information output method and design support system |
JP2023109399A JP7479735B2 (ja) | 2018-08-09 | 2023-07-03 | Information output device, design support system, information output method, and information output program |
JP2024066628A JP2024091779A (ja) | 2018-08-09 | 2024-04-17 | Information output device, design support system, information output method, and information output program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-150679 | 2018-08-09 | ||
JP2018150679 | 2018-08-09 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/167,604 Continuation US11798597B2 (en) | 2018-08-09 | 2021-02-04 | Information output apparatus, information output method and design support system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020032239A1 true WO2020032239A1 (ja) | 2020-02-13 |
Family
ID=69414256
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/031576 WO2020032239A1 (ja) | 2018-08-09 | 2019-08-09 | Information output device, design support system, information output method, and information output program |
Country Status (3)
Country | Link |
---|---|
US (1) | US11798597B2 (ja) |
JP (3) | JP7312465B2 (ja) |
WO (1) | WO2020032239A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023228931A1 * | 2022-05-26 | 2023-11-30 | 株式会社ジオクリエイツ | Information processing system, information processing apparatus, information processing method, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003186904A (ja) * | 2001-12-17 | 2003-07-04 | Nippon Telegr & Teleph Corp <Ntt> | Content introduction method, content introduction apparatus, content introduction program, and medium recording the program |
JP2009005094A (ja) * | 2007-06-21 | 2009-01-08 | Mitsubishi Electric Corp | Mobile terminal |
JP2012104037A (ja) * | 2010-11-12 | 2012-05-31 | Renesas Electronics Corp | Content reproduction system, server, content reproduction method, and program |
JP6298561B1 (ja) * | 2017-05-26 | 2018-03-20 | 株式会社コロプラ | Program executed by a computer capable of communicating with a head-mounted device, information processing apparatus for executing the program, and method executed by a computer capable of communicating with a head-mounted device |
JP2018097437A (ja) * | 2016-12-08 | 2018-06-21 | 株式会社テレパシージャパン | Wearable information display terminal and system including the same |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000172740A (ja) | 1998-12-10 | 2000-06-23 | Matsushita Electric Works Ltd | Design support system with viewpoint information display function |
JP5319564B2 (ja) | 2010-01-18 | 2013-10-16 | 大成建設株式会社 | Layout design support apparatus and method |
US20150309316A1 (en) * | 2011-04-06 | 2015-10-29 | Microsoft Technology Licensing, Llc | Ar glasses with predictive control of external device based on event input |
JP6124517B2 (ja) | 2012-06-01 | 2017-05-10 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and panoramic moving image display method |
US9699433B2 (en) * | 2013-01-24 | 2017-07-04 | Yuchen Zhou | Method and apparatus to produce re-focusable vision with detecting re-focusing event from human eye |
JP6380814B2 (ja) * | 2013-07-19 | 2018-08-29 | ソニー株式会社 | Detection apparatus and method |
US10269184B2 (en) | 2014-07-23 | 2019-04-23 | Sony Corporation | Information processing apparatus, information processing method, and image display system |
KR101741739B1 (ko) * | 2016-02-19 | 2017-05-31 | 광주과학기술원 | Apparatus and method for a brain-computer interface |
WO2017158776A1 (ja) | 2016-03-16 | 2017-09-21 | 株式会社ジオクリエイツ | Phase specifying apparatus, portable terminal, phase specifying method, and program |
US20180095635A1 (en) * | 2016-10-04 | 2018-04-05 | Facebook, Inc. | Controls and Interfaces for User Interactions in Virtual Spaces |
US11921921B2 (en) * | 2016-11-11 | 2024-03-05 | Matthew Hurst | Electroencephalograph-based user interface for virtual and augmented reality systems |
- 2019-08-09 WO PCT/JP2019/031576 patent/WO2020032239A1/ja active Application Filing
- 2019-08-09 JP JP2020535906A patent/JP7312465B2/ja active Active
- 2021-02-04 US US17/167,604 patent/US11798597B2/en active Active
- 2023-07-03 JP JP2023109399A patent/JP7479735B2/ja active Active
- 2024-04-17 JP JP2024066628A patent/JP2024091779A/ja active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023228931A1 (ja) * | 2022-05-26 | 2023-11-30 | 株式会社ジオクリエイツ | Information processing system, information processing apparatus, information processing method, and program |
WO2023228342A1 (ja) * | 2022-05-26 | 2023-11-30 | 株式会社ジオクリエイツ | Information processing system, information processing apparatus, information processing method, and program |
Also Published As
Publication number | Publication date |
---|---|
US11798597B2 (en) | 2023-10-24 |
JP7312465B2 (ja) | 2023-07-21 |
JP7479735B2 (ja) | 2024-05-09 |
JP2023123787A (ja) | 2023-09-05 |
JPWO2020032239A1 (ja) | 2021-08-12 |
US20210249050A1 (en) | 2021-08-12 |
JP2024091779A (ja) | 2024-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7169405B2 (ja) | Determining localization for a mixed reality system | |
US11010726B2 (en) | Information processing apparatus, control method, and storage medium | |
US9978174B2 (en) | Remote sensor access and queuing | |
US9536350B2 (en) | Touch and social cues as inputs into a computer | |
CN103760968B (zh) | Digital signage display content selection method and apparatus | |
US20170150230A1 (en) | Information processing apparatus, information processing method, and program | |
Winkler et al. | Pervasive information through constant personal projection: the ambient mobile pervasive display (AMP-D) | |
US20130174213A1 (en) | Implicit sharing and privacy control through physical behaviors using sensor-rich devices | |
JP6720385B1 (ja) | Program, information processing method, and information processing terminal | |
WO2018092545A1 (ja) | Information processing apparatus, information processing method, and program | |
EP2904767A2 (en) | Information processing device, display control method, and program | |
KR20150036713A (ko) | 검출된 물리적 표시를 통한 사용자 관심 결정 | |
JP2014149832A (ja) | Object display method, object providing method, and system therefor | |
JP2024091779A (ja) | Information output device, design support system, information output method, and information output program | |
JP2010061452A (ja) | Terminal device, information processing method, and program | |
US20130229342A1 (en) | Information providing system, information providing method, information processing apparatus, method of controlling the same, and control program | |
WO2013024667A1 (ja) | Point-of-interest extraction device, point-of-interest extraction method, and computer-readable recording medium | |
KR20180088005A (ko) | VR video authoring tool and VR video authoring apparatus | |
JP6318289B1 (ja) | Related information display system | |
US20230316675A1 (en) | Traveling in time and space continuum | |
JP7266984B2 (ja) | Server device | |
US20190096108A1 (en) | Phase specifying method, phase specifying apparatus and non-transitory computer-readable storage medium | |
WO2021152834A1 (ja) | Lifelog management device, control method, and storage medium | |
JP2014116891A (ja) | Information display system, server device, information processing device, control method for the server device, control method for the information processing device, and program | |
CN114047814B (zh) | Interactive experience system and method | |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19846509; Country of ref document: EP; Kind code of ref document: A1
ENP | Entry into the national phase | Ref document number: 2020535906; Country of ref document: JP; Kind code of ref document: A
NENP | Non-entry into the national phase | Ref country code: DE
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/04/2021)
122 | Ep: pct application non-entry in european phase | Ref document number: 19846509; Country of ref document: EP; Kind code of ref document: A1