WO2017221492A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2017221492A1
WO2017221492A1
Authority
WO
WIPO (PCT)
Prior art keywords
display object
user
display
information
information processing
Application number
PCT/JP2017/011963
Other languages
French (fr)
Japanese (ja)
Inventor
貴広 岡山
邦在 鳥居
遼 深澤
Original Assignee
Sony Corporation (ソニー株式会社)
Application filed by Sony Corporation
Publication of WO2017221492A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/038: Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/10: Intensity circuits
    • G09G 5/36: Display of a graphic pattern, e.g. using an all-points-addressable [APA] memory

Definitions

  • The present disclosure relates to an information processing apparatus, an information processing method, and a program.
  • Patent Document 1 discloses a technique for displaying an object based on a real-space image so that it is superimposed on the real-space image on a non-transmissive display, or superimposed on the real-space background on a transmissive (see-through) display.
  • The present disclosure proposes an information processing apparatus including a control unit that notifies the user of first notification information related to a display object when the user's line of sight is directed toward the display object.
  • The present disclosure also proposes an information processing method including notifying the user of first notification information related to a display object when the user's line of sight is directed toward the display object.
  • Further, a program for realizing a function of notifying the user of first notification information related to a display object when the user's line of sight is directed toward the display object is provided.
  • FIG. 2 is an explanatory diagram illustrating an example of a display object displayed on the display unit 16 by the output control unit 104 according to the embodiment. FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by the display of an icon. FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape. FIG. 5 is an explanatory diagram showing an example of a label displayed by the output control unit 104 according to the embodiment. FIG. 6 is an explanatory diagram for explaining the clipping volume according to the embodiment.
  • FIG. 7 is an explanatory diagram for explaining an operation example 1 according to the embodiment.
  • FIG. 8 is an explanatory diagram for explaining an operation example 2 according to the embodiment.
  • FIG. 9 is an explanatory diagram for explaining an operation example 3 according to the embodiment.
  • FIG. 10 is an explanatory diagram for explaining an operation example 4 according to the embodiment.
  • FIG. 11 is an explanatory diagram for explaining an operation example 5 according to the embodiment.
  • FIG. 12 is an explanatory diagram for explaining an operation example 6 according to the embodiment.
  • FIG. 13 is an explanatory diagram for explaining an operation example 7 according to the embodiment.
  • FIG. 14 is an explanatory diagram for explaining an application example 1 in which the embodiment is applied to a group chat application for performing voice chat with other users.
  • FIG. 15 is an explanatory diagram for explaining an application example 2 in which the embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.
  • FIG. 16 is an explanatory diagram for explaining an application example 3 in which the embodiment is applied to a store search application for searching for information regarding stores existing in the field of view.
  • FIG. 1 is a block diagram illustrating a configuration example of an information processing apparatus 1 according to an embodiment of the present disclosure.
  • As illustrated in FIG. 1, the information processing apparatus 1 according to the present embodiment includes a control unit 10, a sensor unit 12, a storage unit 14, a display unit 16, a sound output unit 17, and a communication unit 18.
  • the information processing apparatus 1 can be realized in various forms such as an HMD (Head Mounted Display), a mobile phone, a smartphone, a tablet PC (Personal Computer), and a projector.
  • In the following, the information processing apparatus 1 is described, as an example, as an eyeglass-type HMD having a transmissive display unit 16.
  • the control unit 10 controls each component of the information processing apparatus 1.
  • the information processing apparatus 1 can provide various applications to the user under the control of the control unit 10.
  • An example of an application provided by the information processing apparatus 1 will be described later.
  • For example, the information processing apparatus 1 may provide a group chat application for performing voice chat with other users, an SNS (Social Networking Service) application for interacting with other users, a store search application, and the like.
  • The control unit 10 also functions as a line-of-sight recognition unit 101, a voice recognition unit 102, an image recognition unit 103, and an output control unit 104, as shown in FIG. 1.
  • the line-of-sight recognition unit 101 acquires information on the user's line of sight based on information acquired by the sensor unit 12 described later.
  • the information regarding the user's line of sight according to the present embodiment may include, for example, the direction of the user's line of sight, the position of the user's line of sight, and the focal position.
  • the position of the user's line of sight may be, for example, a position (for example, a coordinate position) where the display unit 16 and the user's line of sight intersect in the display unit 16.
  • the line-of-sight recognition unit 101 may acquire information related to the user's line of sight based on, for example, a user's eye image acquired by a camera included in the sensor unit 12 described below. Further, the line-of-sight recognition unit 101 may acquire information on the user's line of sight based on an infrared sensor included in the sensor unit 12 described later or information acquired by another sensor.
  • The line-of-sight recognition unit 101 may detect, from the information acquired by the sensor unit 12, a reference point of the eye (for example, a point corresponding to an unmoving part of the eye, such as the inner corner of the eye or a corneal reflection) and a moving point of the eye (for example, a point corresponding to a moving part of the eye, such as the iris or pupil). The line-of-sight recognition unit 101 may then acquire information on the user's line of sight based on the position of the moving point relative to the reference point. Note that the method by which the line-of-sight recognition unit 101 acquires information on the user's line of sight is not limited to the above; any gaze detection technique capable of detecting the line of sight may be used, as in the sketch below.
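  • As an illustration of the reference-point/moving-point idea described above, the following is a minimal Python sketch, not the patent's implementation: the reference point is taken to be a corneal-reflection (glint) position and the moving point a pupil center, both in eye-image pixels, and the per-user calibration constants are assumptions.

```python
def estimate_gaze_offset(reference_pt, moving_pt):
    """Offset of the moving point (e.g. pupil center) relative to the
    fixed reference point (e.g. corneal reflection), in image pixels."""
    dx = moving_pt[0] - reference_pt[0]
    dy = moving_pt[1] - reference_pt[1]
    return dx, dy

def offset_to_display_position(offset, calib_scale, calib_bias):
    """Map the eye-image offset to a display coordinate using a
    per-user affine calibration (scale and bias are assumptions;
    a real system would fit them against known gaze targets)."""
    x = offset[0] * calib_scale[0] + calib_bias[0]
    y = offset[1] * calib_scale[1] + calib_bias[1]
    return x, y

# Example: glint at (120, 80), pupil at (128, 83) in the eye image.
offset = estimate_gaze_offset((120, 80), (128, 83))
print(offset_to_display_position(offset, (40.0, 40.0), (640.0, 360.0)))
```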
  • the speech recognition unit 102 recognizes a user's utterance (speech) collected by a microphone included in the sensor unit 12 described later, converts the speech into a character string, and acquires the utterance text.
  • The speech recognition unit 102 may also perform natural language processing, such as morphological analysis and character-string pattern matching, on the utterance text to obtain, for example, a designation word that designates a display object and a command directed at the display object.
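  • A minimal sketch of how a designation word and a command might be pulled out of recognized utterance text by string pattern matching, as described above; the vocabularies and the tokenization are illustrative assumptions, not specified by the patent.

```python
import re

# Hypothetical vocabularies for illustration only.
KNOWN_LABELS = {"bike", "menu"}
KNOWN_COMMANDS = {"rotate", "move", "search"}

def parse_utterance(utterance_text):
    """Split recognized speech such as 'Bike Rotate' into a
    (designation word, command) pair by matching known words."""
    tokens = re.findall(r"[a-z]+", utterance_text.lower())
    label = next((t for t in tokens if t in KNOWN_LABELS), None)
    command = next((t for t in tokens if t in KNOWN_COMMANDS), None)
    return label, command

print(parse_utterance("Bike Rotate"))  # ('bike', 'rotate')
```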
  • The image recognition unit 103 analyzes an image acquired by a camera or stereo camera included in the sensor unit 12 described later, and acquires real-space information. For example, the image recognition unit 103 may acquire information on the three-dimensional shape of the real space by applying a stereo matching method to a plurality of images acquired at the same time, an SfM (Structure from Motion) method to a plurality of images acquired in time series, or the like. In addition, the image recognition unit 103 may recognize an object (real object), a marker, or the like existing in the real space by matching feature point information prepared in advance against feature point information detected from the captured image, and may acquire information on that object or marker.
  • the marker recognized by the image recognition unit 103 may be a specific pattern of texture information or a set of image feature point information expressed by, for example, a two-dimensional code.
  • the object from which the image recognition unit 103 acquires information may be an operation body such as a user's hand, a finger, or an object held by the hand.
  • the output control unit 104 controls output according to the present embodiment based on information acquired by the line-of-sight recognition unit 101, the voice recognition unit 102, and the image recognition unit 103.
  • the output control unit 104 controls display (display output) by the display unit 16 to be described later, and controls sound output by the sound output unit 17 to be described later.
  • The output control unit 104 may display a display object based on information on the real space (the three-dimensional shape of the real space, information on objects existing in the space, markers, and the like) acquired by the image recognition unit 103.
  • For example, the output control unit 104 can display a display object so that it is perceived to be present at a (three-dimensional) position in the real space, according to the real-space information.
  • FIG. 2 is an explanatory diagram illustrating an example of a display object that the output control unit 104 displays on the display unit 16.
  • the marker M10 shown in FIG. 2 is a marker that exists in real space.
  • the output control unit 104 acquires the display object A10 corresponding to the marker M10 from the storage unit 14 based on the information of the marker M10 acquired by the image recognition unit 103. Then, as illustrated in FIG. 2, the output control unit 104 may display the display object A10 so as to be perceived as being present at a position corresponding to the marker M10 (for example, on the marker M10).
  • the display object display by the output control unit 104 is not limited to the above example.
  • For example, the output control unit 104 may acquire a display object corresponding to a real object from the storage unit 14 based on information on the real object acquired by the image recognition unit 103, and display the display object at a position corresponding to the real object. Further, the output control unit 104 may display a display object so that it is perceived to exist on a plane in the real space, based on information on the three-dimensional shape of the real space acquired by the image recognition unit 103.
  • The output control unit 104 may also display a display object based on information acquired from another device via the communication unit 18. For example, based on position information of another user acquired via the communication unit 18, an avatar indicating that user may be displayed as a display object so that it is perceived to exist in the direction where that user is. In such a case, the output control unit 104 may perform this display control based on direction information acquired from a geomagnetic sensor or the like included in the sensor unit 12. A sketch of such anchored placement follows.
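  • The marker-based placement described above (display object A10 anchored at marker M10) might look like the following sketch. The registry, the Pose type, and the small vertical offset are hypothetical stand-ins for the storage unit 14 lookup and the pose handling.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in real-space (world) coordinates
    # orientation omitted for brevity

# Hypothetical registry mapping recognized marker IDs to display
# objects, standing in for the storage unit 14 lookup in the text.
MARKER_TO_OBJECT = {"M10": "A10"}

def place_objects(recognized_markers):
    """For each recognized marker, anchor its display object so it is
    perceived to sit on the marker (here: at the marker pose plus a
    small vertical offset, an illustrative choice)."""
    placements = []
    for marker_id, pose in recognized_markers:
        obj = MARKER_TO_OBJECT.get(marker_id)
        if obj is not None:
            x, y, z = pose.position
            placements.append((obj, Pose((x, y + 0.05, z))))
    return placements

print(place_objects([("M10", Pose((0.3, 0.0, 1.2)))]))
```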
  • the output control unit 104 notifies the user of notification information related to the display object.
  • For example, the output control unit 104 may perform the notification by display, by controlling the display unit 16, or by sound output, by controlling the sound output unit 17. Specific examples in which the output control unit 104 notifies the user of notification information will be described later.
  • the output control unit 104 may cause the user to be notified of the first notification information related to the display object when the user's line of sight is directed to the display object. According to such a configuration, the user can easily grasp information related to the display object in which he / she is interested.
  • the first notification information notified to the user may be information indicating whether or not an input to the display object can be accepted, for example.
  • According to such a configuration, the user can easily grasp whether or not a display object accepts the user's input, and it becomes possible to prevent the user from, for example, attempting to input to a display object that cannot accept input.
  • The first notification information may be information indicating whether or not input by voice (for example, input of a command) to the display object can be accepted. If it is unclear whether voice input can be accepted, then when a voice command is input but not executed on the display object, it is difficult for the user to grasp why the command was not executed. In particular, since voice input generally requires voice recognition processing, it is difficult for the user to determine whether the command was not executed because the voice recognition processing failed or because the display object cannot accept voice input. As a result, the user may meaninglessly repeat voice input to a display object that cannot accept it. By notifying the user of information indicating whether or not voice input can be accepted, as described above, such pointless repetition can be prevented.
  • The acceptance of input to the display object indicated by the first notification information is not limited to acceptance of input by voice; it may concern various inputs such as touch input, input using an operation body, and input by line of sight.
  • the first notification information is not limited to the above, and may indicate various information.
  • the first notification information may be information indicating that the display object has been selected. According to such a configuration, for example, when a plurality of display objects are displayed, the user can grasp which object has been selected. The selection of the display object will be described later.
  • the first notification information may indicate whether or not an interaction (for example, voice chat) with the user indicated by the display object is possible.
  • Further, when the user performs an input, the output control unit 104 may notify the user of second notification information related to the selected display object.
  • control related to the display object may vary depending on the type of command, display object, and application.
  • the control related to the display object may be display control of the display object such as rotation or movement, or may be control for starting an interaction (for example, voice chat) with the user indicated by the display object.
  • For example, when input is performed on a display object that cannot accept input, the output control unit 104 may notify the user of second notification information indicating that the display object did not accept the input. According to such a configuration, even if the user performs input while a display object that cannot accept input is selected, the user can grasp that the input was not accepted. As a result, it is possible to prevent the user from, for example, further repeating input to a display object that cannot accept voice input.
  • The second notification information may be notified to the user explicitly, or more strongly than the first notification information.
  • For example, when the first notification information and the second notification information are notified by display, the second notification information may be notified by a display with higher visibility (more conspicuous) than the first notification information.
  • When the first notification information and the second notification information are notified by sound output, the second notification information may be notified by a louder sound (with a higher sound pressure level) than the first notification information.
  • Alternatively, the first notification information may be notified by display and the second notification information by sound output. According to such a configuration, it becomes easier for the user to grasp, for example, that the display object does not accept input.
  • the output control unit 104 may cause the user to be notified of the third notification information related to the display object when the user's line of sight is not directed to the display object. According to such a configuration, the user can more easily grasp information related to the display object in which he is interested.
  • the third notification information may be information indicating whether or not an input can be accepted with respect to the display object, for example.
  • the third notification information may be information indicating whether or not a display object can be selected, or information indicating whether or not an interaction with the user indicated by the display object is possible.
  • The third notification information may be notified to the user implicitly, or more weakly than the first notification information.
  • For example, when the first notification information and the third notification information are notified by display, the third notification information may be notified by a display having lower visibility (less conspicuous) than the first notification information.
  • Similarly, when they are notified by sound output, the third notification information may be notified by a quieter sound (with a lower sound pressure level) than the first notification information. According to such a configuration, for example, when a plurality of display objects are displayed, the user is less likely to receive a cluttered impression from the notifications.
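  • The relative strengths of the three notification classes could be planned as in the sketch below. The ordering (second strongest, third weakest) follows the text; the concrete visibility levels and sound levels are illustrative assumptions.

```python
from enum import Enum, auto

class Notification(Enum):
    FIRST = auto()   # gaze directed at the object: explicit
    SECOND = auto()  # input rejected while selected: strongest
    THIRD = auto()   # gaze not directed at the object: implicit/weak

def plan_notification(kind):
    """Return illustrative (display visibility, sound level) settings
    per notification class, mirroring second > first > third."""
    if kind is Notification.SECOND:
        return {"visibility": "high", "sound_db": 70}
    if kind is Notification.FIRST:
        return {"visibility": "medium", "sound_db": 60}
    return {"visibility": "low", "sound_db": 0}  # THIRD: subtle, display only

print(plan_notification(Notification.SECOND))
```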
  • the notification method of the first notification information, the second notification information, and the third notification information will be specifically described.
  • the first notification information, the second notification information, and the third notification information may be collectively referred to as notification information.
  • the output control unit 104 may notify the notification information by displaying a predetermined icon at a position corresponding to the position of the display object.
  • FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by icon display. As shown in FIG. 3, an icon T12, which indicates that voice input to the display object A10 can be accepted, is displayed in the vicinity of the display object A10 (at a position corresponding to the display object A10).
  • Notification of notification information by icon display is not limited to the above example.
  • For example, when voice input to a display object cannot be accepted, an icon including an X (cross) mark may be displayed near the display object.
  • the output control unit 104 may notify the notification information by displaying the display object in a predetermined shape.
  • FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape.
  • As shown in FIG. 4, the display object A10 is displayed in a shape wearing a headset T14, indicating that voice input to the display object A10 can be accepted.
  • Such notification by an item of equipment on the display object has low visibility, and is therefore better suited, for example, to notification of the third notification information.
  • notification of notification information by displaying a display object in a predetermined shape is not limited to the above example.
  • For example, an avatar may be displayed in a gesture pose indicating rejection appropriate to the cultural sphere (for example, making an X mark with the hands).
  • the notification by the avatar pose (an example of the shape) has high visibility, and is more suitable for the notification of the first notification information or the second notification information, for example.
  • the output control unit 104 may notify the notification information by displaying the display object in an animation.
  • For example, the notification information may be notified by an animation of an avatar. If interaction with the user indicated by the avatar is not possible, the avatar may look in a direction other than the front (look away), shake its head, appear busy at work (for example, facing a desk), move away from the user, or hide behind something.
  • the notification of the first notification information or the third notification information may be notified by an animation of the avatar.
  • the output control unit 104 may notify the notification information by displaying a display object under a predetermined display condition.
  • a display object that is displayed when input to the display object cannot be received may be displayed with higher transparency than a display object that is displayed when input to the display object can be received.
  • Further, a display object displayed when input to it cannot be accepted may be rendered so as to blend into the surrounding environment more than a display object displayed when input can be accepted; that is, the display object can be blended into the real space so as to match its surroundings.
  • the display object that is displayed when it is impossible to accept input to the display object may be displayed in monochrome.
  • notification of notification information by displaying a display object under a predetermined display condition can also reduce visibility, and is suitable for notification of third notification information, for example.
  • the output control unit 104 may notify the notification information by displaying the display object under display conditions that enhance (emphasize) the visibility of the display object.
  • the output control unit 104 may notify the notification information by displaying the display object under a display condition such as glow expression that flashes around the display object or blinking.
  • notification of notification information based on display conditions that enhance the visibility of a display object can make the display object stand out from other display objects. Therefore, notification of notification information based on display conditions that improve the visibility of a display object is more suitable, for example, for notification information indicating selection or notification information indicating that control based on input has been performed.
  • the display conditions according to the present embodiment are not limited to the above, and may include various conditions relating to luminance, transparency, color, size, and the like.
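  • A sketch of how such display conditions (transparency, monochrome rendering, glow) might be composed per display object follows; the numeric values and flag names are assumptions for illustration.

```python
def style_for_object(accepts_input, is_emphasized):
    """Compose display conditions per the text: objects that cannot
    accept input are rendered more transparently (and desaturated,
    less conspicuous); emphasized states use a glow expression."""
    style = {"alpha": 1.0, "monochrome": False, "glow": False}
    if not accepts_input:
        style["alpha"] = 0.4        # higher transparency
        style["monochrome"] = True  # blends in, suits third notification
    if is_emphasized:
        style["glow"] = True        # stands out, suits selection feedback
    return style

print(style_for_object(accepts_input=False, is_emphasized=False))
print(style_for_object(accepts_input=True, is_emphasized=True))
```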
  • the notification information notification method by the output control unit 104 has been described.
  • the determination of whether or not the user's line of sight has been directed to the display object, which is used to determine whether or not the output control unit 104 notifies the first notification information, will be described.
  • the determination as to whether or not the user's line of sight is directed to the display object may be made, for example, based on whether or not the position of the line of sight is included in the display range of the display object on the display unit 16.
  • the determination as to whether or not the user's line of sight is directed to the display object may be made based on whether or not the line of sight intersects with a three-dimensional area in which the display object is perceived to exist in real space.
  • Note that the determination of whether or not the user's line of sight is directed to a display object may be performed, for example, based on a gaze state in which the line-of-sight position stays within a predetermined range for a predetermined time.
  • An object to which the user's line of sight is directed (an object ahead of the user's line of sight) may hereinafter be referred to as a gaze object; a sketch of such dwell-based detection follows.
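  • Dwell-based gaze determination as described above might be implemented as in this sketch. The radius and dwell-time thresholds are assumptions, since the text only says the line-of-sight position stays within a predetermined range for a predetermined time.

```python
class DwellDetector:
    """Declare a gaze state once the line-of-sight position stays
    within a radius for a minimum time (thresholds are assumptions)."""

    def __init__(self, radius_px=30.0, dwell_sec=0.8):
        self.radius = radius_px
        self.dwell = dwell_sec
        self._anchor = None
        self._start = None

    def update(self, pos, t):
        """Feed (x, y) gaze positions with timestamps in seconds;
        returns True once the gaze has dwelled long enough."""
        if self._anchor is None:
            self._anchor, self._start = pos, t
            return False
        dx = pos[0] - self._anchor[0]
        dy = pos[1] - self._anchor[1]
        if dx * dx + dy * dy > self.radius ** 2:
            self._anchor, self._start = pos, t  # moved away: restart
            return False
        return (t - self._start) >= self.dwell

d = DwellDetector()
for i in range(10):
    print(d.update((400 + i, 300), i * 0.1))  # True from t >= 0.8 s
```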
  • the output control unit 104 manages the selection state of the display object. For example, when it is determined that the user's line of sight is directed to the display object, the output control unit 104 may set the display object to a selected state. When the display object is brought into a selected state by directing the user's line of sight to the display object, first notification information indicating that the display object has been selected may be notified as described above.
  • The output control unit 104 may also set a display object to the selected state based on the voice recognition result of the voice recognition unit 102. For example, when a designation word for designating a display object acquired by the voice recognition unit 102 matches the label (meta information, such as text indicating the display object) attached to a display object, the output control unit 104 may set that display object to the selected state.
  • the output control unit 104 may display a label at a position corresponding to the position of the display object.
  • the label may be associated with the display object and stored in advance in the storage unit 14 described later, or may be acquired from the outside via the communication unit 18.
  • FIG. 5 is an explanatory diagram illustrating an example of a label displayed by the output control unit 104.
  • As shown in FIG. 5, “Bike”, text indicating the display object A22, is associated with the display object A22 as a label L22 and displayed at a position corresponding to the position of the display object A22.
  • Likewise, “Menu”, text indicating the display object A24, is associated with the display object A24 as a label L24 and displayed at a position corresponding to the position of the display object A24.
  • By grasping the labels, the user can designate by voice the display object to be selected, as in the sketch below.
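  • A minimal sketch of selecting the display object whose label matches the spoken designation word; case-insensitive exact matching and the dictionary record shape are assumptions.

```python
def select_by_designation(designation, objects):
    """Select display objects whose label (meta text such as 'Bike'
    or 'Menu') matches the spoken designation word."""
    word = designation.strip().lower()
    return [o for o in objects if o["label"].lower() == word]

objects = [
    {"id": "A22", "label": "Bike", "selected": False},
    {"id": "A24", "label": "Menu", "selected": False},
]
for obj in select_by_designation("bike", objects):
    obj["selected"] = True  # set matching object to the selected state
print(objects)
```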
  • The output control unit 104 may also set display objects existing in the user's field of view to the selected state. For example, the output control unit 104 may extract the display objects existing in the user's current field of view and set them to the selected state, and, when a command is acquired by the voice recognition unit 102, perform control based on the command on the display objects in the selected state.
  • the display object existing in the user's field of view may be a display object included in a clipping volume, for example.
  • FIG. 6 is an explanatory diagram for explaining the clipping volume.
  • a point C1 illustrated in FIG. 6 indicates the position of the user.
  • As shown in FIG. 6, the clipping volume V1 is the space, within a pyramid having the point C1 as its apex, between a front clipping plane P1 located at a first distance from the point C1 and a rear clipping plane P2 located at a second distance from the point C1, the second distance being larger than the first.
  • whether or not the output control unit 104 displays a display object can also be determined by whether or not it is included in the clipping volume.
  • Note that the first and second distances of the clipping volume used for this display determination may be the same as, or different from, the first and second distances of the clipping volume used for selecting display objects.
  • In addition, the output control unit 104 need not set to the selected state a display object that is included in the clipping volume but is hidden by a real object or another display object and therefore not displayed. Further, the output control unit 104 may change the shape of the above-described clipping volume based on information on the user's line of sight.
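  • A containment test for the clipping volume might look like the sketch below. The patent describes a pyramid between the front and rear clipping planes; the sketch approximates it with a circular cone for brevity, and the numeric parameters are assumptions.

```python
import math

def in_clipping_volume(p, eye, forward, near, far, half_fov_deg):
    """Test whether point p lies between the front plane at `near`
    and the rear plane at `far` along the unit view direction
    `forward`, and within the view angle (cone approximation)."""
    v = [p[i] - eye[i] for i in range(3)]
    depth = sum(v[i] * forward[i] for i in range(3))
    if not (near <= depth <= far):
        return False  # nearer than P1 or farther than P2
    dist = math.sqrt(sum(c * c for c in v))
    cos_angle = depth / dist if dist > 0 else 1.0
    return cos_angle >= math.cos(math.radians(half_fov_deg))

eye, fwd = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(in_clipping_volume((0.2, 0.0, 2.0), eye, fwd, 0.5, 5.0, 30.0))  # True
print(in_clipping_volume((0.0, 0.0, 0.2), eye, fwd, 0.5, 5.0, 30.0))  # False
```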
  • the output control unit 104 may display various icons, buttons, and the like that are used by the user for input and operation.
  • the sensor unit 12 shown in FIG. 1 is a sensor device that senses information around the information processing apparatus 1.
  • the sensor unit 12 may include one or more microphones that collect ambient sounds.
  • the sensor unit 12 may include a camera that acquires surrounding images, a stereo camera, and a camera that acquires user's eye images.
  • the sensor unit 12 may include an infrared sensor, a distance sensor, a human sensor, a biological information sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, and the like.
  • the storage unit 14 stores a program and parameters for each component of the information processing apparatus 1 to function.
  • the storage unit 14 may store feature point information used by the image recognition unit 103, a display object used by the output control unit 104, a label (meta information) associated with the display object, and the like.
  • the display unit 16 is a display that is controlled by the output control unit 104 and displays various information.
  • the display unit 16 may be, for example, a transmission type (optical see-through type) and a binocular spectacles type display. With such a configuration, the output control unit 104 can display the display object so that it is perceived to exist at an arbitrary three-dimensional position in the real space.
  • the sound output unit 17 is a device that outputs sound such as a speaker or an earphone.
  • the sound output unit 17 has a function of converting an acoustic signal into sound and outputting the sound according to the control of the output control unit 104.
  • the communication unit 18 is a communication interface that mediates communication with other devices.
  • the communication unit 18 supports an arbitrary wireless communication protocol or wired communication protocol, and establishes a communication connection with another device via a communication network (not shown). Thereby, for example, the information processing apparatus 1 can perform communication for providing various applications to the user.
  • Operation examples: The configuration example of the information processing apparatus 1 according to the present embodiment has been described above. Next, several operation examples of the information processing apparatus 1 according to the present embodiment will be described with reference to FIGS. 7 to 13. The operation examples described below may be combined or switched according to user input.
  • FIG. 7 is an explanatory diagram for explaining an operation example 1 according to the present embodiment.
  • the user can specify a display object by voice and input a command to the display object.
  • First, the output control unit 104 displays display objects and the labels associated with them (S102). Subsequently, when voice is input by the user (YES in S104), the voice recognition unit 102 acquires a designation word for designating a display object and a command, and a label is searched for based on the designation word (S106). In step S104, the user may utter a designation word (Bike) and a command (Rotate) in succession, for example, “Bike Rotate”; a sketch of this flow follows.
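  • Operation example 1 could be wired together as in this sketch: split the utterance into a designation word and a command, search the labels, and apply the command to the matching object. The object records and command handling are hypothetical.

```python
def handle_utterance(utterance, objects):
    """Split 'Bike Rotate' into a designation word and a command,
    find the object whose label matches the designation, and record
    the command against it (a placeholder for real control)."""
    words = utterance.split()
    if len(words) < 2:
        return None
    designation, command = words[0].lower(), words[1].lower()
    for obj in objects:
        if obj["label"].lower() == designation:
            obj.setdefault("history", []).append(command)
            return obj["id"], command
    return None  # no label matched the designation word

objects = [{"id": "A22", "label": "Bike"}, {"id": "A24", "label": "Menu"}]
print(handle_utterance("Bike Rotate", objects))  # ('A22', 'rotate')
```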
  • FIG. 8 is an explanatory diagram for explaining an operation example 2 according to the present embodiment.
  • the user can input commands to all display objects that can accept voice input by voice.
  • First, the output control unit 104 displays display objects and icons indicating whether or not each display object can accept voice input (S122). Subsequently, when voice is input by the user (YES in S124), the voice recognition unit 102 acquires a command, and the display objects that can accept voice input are controlled based on the command (S126). In step S124, the user may utter only a command, for example. On the other hand, if no voice is input (NO in S124), the process ends.
  • FIG. 9 is an explanatory diagram for explaining an operation example 3 according to the present embodiment.
  • the user can select a display object by gazing and can input a command to the selected display object by voice.
  • the output control unit 104 displays a display object (S202). Subsequently, it is determined whether or not the user is in a gaze state based on the information regarding the line of sight (S204). When it is in the gaze state (YES in S204) and there is a display object ahead of the line of sight (YES in S206), the display object (gaze object) is set to the selected state (S208).
  • In step S208, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected, or of notification information indicating whether or not voice input to the gaze object is possible. Further, when the gaze object is gazed at again, its selected state may be released (deselected).
  • Subsequently, when the user inputs a command by voice (YES in S212), the selected gaze object is controlled based on the command (S214).
  • In step S214, if voice input to the selected gaze object is not possible, the output control unit 104 may notify the user of second notification information indicating that the gaze object did not accept the input.
  • FIG. 10 is an explanatory diagram for explaining an operation example 4 according to the present embodiment.
  • In this operation example, the user can select a display object by gazing at it, switch it to the voice input mode by further gazing, and then input a command by voice to the display object that has been switched to the voice input mode.
  • the output control unit 104 displays a display object (S222). Subsequently, it is determined whether or not the user is in the gaze state (first gaze state) based on the information regarding the line of sight (S224). When it is in the gaze state (YES in S224) and there is a display object ahead of the line of sight (YES in S226), the display object (gaze object) is set to the selected state (S228).
  • In step S228, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected.
  • Subsequently, it is determined whether or not the selected display object is in a further gaze state (second gaze state) (S234).
  • Note that the time used for the determination may differ between the determination of the first gaze state in step S224 and the determination of the second gaze state in step S234.
  • When the selected display object is gazed at further, the control unit 10 switches the selected display object (gaze object) to the voice input mode (S234).
  • Subsequently, when the user inputs a command by voice, control based on the command is performed on the gaze object that has been selected and switched to the voice input mode (S234).
  • In this way, the user switches a display object to the voice input mode by a two-step gaze, which makes it possible to suppress voice input to the wrong display object; a state-machine sketch follows.
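  • The two-step gaze of operation example 4 can be viewed as a small state machine, sketched below; the state and event names are assumptions, not terms from the patent.

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    SELECTED = auto()     # after the first gaze (S228)
    VOICE_INPUT = auto()  # after the second gaze (S234)

class TwoStepGaze:
    """First dwell selects the object; a second dwell on the same
    object switches it to voice input mode, which helps avoid
    commanding the wrong object."""

    def __init__(self):
        self.mode = Mode.IDLE
        self.target = None

    def on_dwell(self, object_id):
        if self.mode is Mode.SELECTED and object_id == self.target:
            self.mode = Mode.VOICE_INPUT
        else:  # first gaze, or a gaze at a different object: (re)select
            self.mode, self.target = Mode.SELECTED, object_id
        return self.mode

    def on_command(self, command):
        if self.mode is Mode.VOICE_INPUT:
            return f"apply '{command}' to {self.target}"
        return "ignored: no object in voice input mode"

g = TwoStepGaze()
g.on_dwell("A10")
g.on_dwell("A10")
print(g.on_command("rotate"))  # apply 'rotate' to A10
```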
  • FIG. 11 is an explanatory diagram for explaining an operation example 5 according to the present embodiment.
  • the user can switch one or more selected display objects to the voice input mode by selecting one or more display objects by gazing and further gazing at the voice input icon.
  • the output control unit 104 displays a display object and a voice input icon for switching the display object to the voice input mode (S242).
  • When the user gazes and a display object exists ahead of the line of sight, that display object is set to the selected state (S248).
  • Each time the processing of steps S244 to S248 is repeated, one more display object is set to the selected state.
  • Subsequently, when the user gazes at the voice input icon and inputs a command by voice, control based on the command is performed on the gaze objects that have been selected and switched to the voice input mode (S256).
  • the user can select a plurality of display objects based on the line of sight, and can simultaneously input by voice to the plurality of display objects.
  • FIG. 12 is an explanatory diagram for explaining an operation example 6 according to the present embodiment.
  • the user can select all the display objects existing in the field of view.
  • the output control unit 104 displays a display object (S302).
  • the output control unit 104 extracts display objects existing in the current user's field of view and sets them in a selected state (S304).
  • the process of step S304 may be performed based on whether or not the display object is included in the clipping volume described with reference to FIG.
  • the user can select a plurality of display objects existing in the field of view, and can simultaneously input by voice to the plurality of display objects.
  • FIG. 13 is an explanatory diagram for explaining an operation example 7 according to the present embodiment.
  • the user can group objects by moving an object with his / her hand or the like, and can simultaneously input voice to the grouped display objects.
  • First, the output control unit 104 displays display objects on a plane or the like in front of the user (S402). Subsequently, a plurality of display objects are grouped by an operation in which the user pinches and moves a display object with an operation body such as the hand (S404). In step S404, the output control unit 104 may perform control to move the display position of a display object based on the positional relationship between the operation body recognized by the image recognition unit 103 and the display object.
  • Subsequently, the output control unit 104 displays a voice-input-enabled icon at the center of the screen (S412).
  • The voice-input-enabled icon displayed in step S412 is an icon indicating that voice input is possible, and differs from the voice input icon displayed in step S242 of FIG. 11.
  • In this way, the user can move a plurality of display objects so that they fit in the field of view, group them, and then perform voice input on the grouped display objects existing in the field of view.
  • FIG. 14 is an explanatory diagram for explaining an application example 1 in which the present embodiment is applied to a group chat application for performing a voice chat with another user.
  • the output control unit 104 lists (displays) avatars (an example of display objects) indicating other users in front of the user (S502). Subsequently, an avatar is selected by the user (S506).
  • the selection of the avatar in step S506 may be a selection based on the user's line of sight as described with reference to FIGS. 9 to 11, for example.
  • Subsequently, when a command for starting a conversation (for example, “Start Chat”) is input by voice (YES in S508), voice chat with all of the users indicated by the selected avatars is started under the control of the control unit 10 (S512). In step S512, avatars that are not selected may be moved out of the field of view and no longer displayed, for example.
  • After voice chat is started in step S512, if the user pinches and moves an avatar using an operation body such as the hand, the control unit 10 may perform control so that the user indicated by that avatar is excluded from the voice chat.
  • FIG. 15 is an explanatory diagram for explaining an application example 2 in which the present embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.
  • the output control unit 104 displays a friend avatar (an example of a display object) corresponding to the user of the exchange partner and a label (name) corresponding to the friend avatar (S522).
  • the output control unit 104 may perform the process of step S522, for example, triggered by the user's line of sight facing upward.
  • A friend avatar may be displayed using the corresponding user's photograph or icon, and may be displayed in the direction where that user exists.
  • a friend avatar is selected by the user (S526).
  • The selection of the friend avatar in step S526 may be, for example, a selection by designating a label by voice as described with reference to FIG. 7, or a selection based on the user's line of sight as described with reference to FIGS. 9 to 11.
  • When a user is active (in a state where interaction such as conversation is possible), the face of the friend avatar corresponding to that user may be displayed facing toward the user. Conversely, when a user is not active (in a state where interaction such as conversation is not possible), the face of the friend avatar corresponding to that user may be displayed facing away.
  • FIG. 16 is an explanatory diagram for explaining an application example 3 in which the present embodiment is applied to a store search application for, for example, a user walking in a city to search for information related to stores existing in the field of view.
  • the output control unit 104 displays an icon (an example of a display object) associated with a store that exists in the user's field of view (S542).
  • The icon associated with a store may be, for example, an icon indicating the type of the store, and may be displayed so as to be perceived as being present in the vicinity of the store, like a so-called POP (point-of-purchase) advertisement.
  • the user inputs a command requesting a search (for example, “Search” or the like) by voice while watching the store of interest (with the store in view) (YES in S544).
  • Subsequently, the output control unit 104 searches for information on the display objects existing in the user's field of view and updates the display according to the search results (S546). For example, the output control unit 104 may increase the visibility of the icon associated with a store for which information has been acquired (for example, with a glow expression), making it easy to understand that information about the store has been obtained. The acquired information may be displayed when the icon associated with that store is further selected.
  • FIG. 17 is a block diagram illustrating an example of a hardware configuration of the information processing apparatus according to the present embodiment. Note that the information processing apparatus 900 illustrated in FIG. 17 can realize the information processing apparatus 1 illustrated in FIG. 1, for example. Information processing by the information processing apparatus 1 according to the present embodiment is realized by cooperation between software and hardware described below.
  • the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a.
  • the information processing apparatus 900 includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, a communication device 913, and a sensor 915.
  • the information processing apparatus 900 may include a processing circuit such as a DSP or an ASIC in place of or in addition to the CPU 901.
  • the CPU 901 functions as an arithmetic processing unit and a control unit, and controls the overall operation in the information processing apparatus 900 according to various programs. Further, the CPU 901 may be a microprocessor.
  • the ROM 902 stores programs used by the CPU 901, calculation parameters, and the like.
  • the RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
  • the CPU 901 can form the control unit 10 shown in FIG.
  • the CPU 901, ROM 902, and RAM 903 are connected to each other by a host bus 904a including a CPU bus.
  • The host bus 904a is connected via a bridge 904 to an external bus 904b such as a PCI (Peripheral Component Interconnect/Interface) bus.
  • the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be mounted on one bus.
  • The input device 906 is realized by a device through which the user inputs information, such as a mouse, keyboard, touch panel, button, microphone, switch, or lever.
  • the input device 906 may be, for example, a remote control device using infrared rays or other radio waves, or may be an external connection device such as a mobile phone or a PDA that supports the operation of the information processing device 900.
  • the input device 906 may include, for example, an input control circuit that generates an input signal based on information input by the user using the above-described input means and outputs the input signal to the CPU 901.
  • a user of the information processing apparatus 900 can input various data and instruct a processing operation to the information processing apparatus 900 by operating the input device 906.
  • the output device 907 is formed of a device that can notify the user of the acquired information visually or audibly. Examples of such devices include CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, display devices such as lamps, audio output devices such as speakers and headphones, printer devices, and the like.
  • the output device 907 outputs results obtained by various processes performed by the information processing device 900.
  • the display device visually displays results obtained by various processes performed by the information processing device 900 in various formats such as text, images, tables, and graphs.
  • the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it aurally.
  • The output device 907 can form, for example, the display unit 16 and the sound output unit 17 shown in FIG. 1.
  • the storage device 908 is a data storage device formed as an example of a storage unit of the information processing device 900.
  • the storage apparatus 908 is realized by, for example, a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
  • the storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deletion device that deletes data recorded on the storage medium, and the like.
  • the storage device 908 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
  • the drive 909 is a storage medium reader / writer, and is built in or externally attached to the information processing apparatus 900.
  • the drive 909 reads information recorded on a removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903.
  • the drive 909 can also write information to a removable storage medium.
  • The connection port 911 is an interface for connecting an external device, for example a connection port for an external device capable of transmitting data by USB (Universal Serial Bus).
  • the communication device 913 is a communication interface formed by a communication device or the like for connecting to the network 920, for example.
  • the communication device 913 is, for example, a communication card for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB).
  • the communication device 913 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communication, or the like.
  • the communication device 913 can transmit and receive signals and the like according to a predetermined protocol such as TCP / IP, for example, with the Internet and other communication devices.
  • the communication device 913 can form, for example, the communication unit 18 illustrated in FIG.
  • the network 920 is a wired or wireless transmission path for information transmitted from a device connected to the network 920.
  • the network 920 may include a public line network such as the Internet, a telephone line network, and a satellite communication network, various LANs including the Ethernet (registered trademark), a wide area network (WAN), and the like.
  • the network 920 may include a dedicated line network such as an IP-VPN (Internet Protocol-Virtual Private Network).
  • the sensor 915 is various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, a distance measuring sensor, and a force sensor.
  • the sensor 915 acquires information on the state of the information processing apparatus 900 itself, such as the posture and movement speed of the information processing apparatus 900, and information on the surrounding environment of the information processing apparatus 900, such as brightness and noise around the information processing apparatus 900.
  • Sensor 915 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the device.
  • the sensor 915 can form, for example, the sensor unit 12 shown in FIG.
  • each of the above components may be realized using a general-purpose member, or may be realized by hardware specialized for the function of each component. Therefore, it is possible to change the hardware configuration to be used as appropriate according to the technical level at the time of carrying out this embodiment.
  • a computer program for realizing each function of the information processing apparatus 900 according to the present embodiment as described above can be produced and mounted on a PC or the like.
  • a computer-readable recording medium storing such a computer program can be provided.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
  • the above computer program may be distributed via a network, for example, without using a recording medium.
  • the number of computers that execute the computer program is not particularly limited.
  • the computer program may be executed by a plurality of computers (for example, a plurality of servers) in cooperation with each other.
  • a single computer or a combination of a plurality of computers is also referred to as a “computer system”.
  • In the above embodiment, an example was described in which a single device includes the control unit that performs control related to notification and the display unit and sound output unit that perform the output related to notification; however, the present technology is not limited to such an example.
  • For example, the device having the function of the control unit may be different from the device having the functions of the display unit and sound output unit that perform the output related to the notification.
  • In that case, the control unit may generate a control signal related to the notification and, by transmitting the control signal to the other device, cause a display unit or sound output unit included in that device to perform the output related to the notification.
  • The control signal generated by the control unit to control display by the display unit is not limited to a video signal (a so-called RGB signal or the like) that the display unit can reproduce directly.
  • For example, the control signal may be a streaming packet for streaming reproduction, or data (such as HTML data) including arrangement information and content information from which an image can be obtained by a rendering process.
  • The notification method according to the present technology is not limited to display or sound output.
  • When the information processing apparatus has a vibration output function, a light emission function, or the like, notification may be performed by vibration output or by light emission.
  • In the above embodiment, an example in which the information processing apparatus is an eyeglass-type HMD having a transmissive display unit has been mainly described; however, the present technology is not limited to such an example.
  • For example, the present technology may be applied to an information processing apparatus (such as a video see-through HMD) that displays, on a display unit, an image generated by superimposing a display object on an image of the real space acquired by a camera.
  • The present technology may also be applied to a head-up display that displays an image on the windshield of an automobile or the like, or to a stationary display device.
  • The present technology may also be applied to an information processing apparatus that renders an image in which a display object is arranged in a virtual space, and displays the image on a non-transmissive display unit with the virtual space as a background.
  • In the above embodiment, an example in which a display object is displayed with the real space as a background has been described; however, a display object may also be displayed with a virtual space as a background.
  • Each step in the above embodiment does not necessarily have to be processed in time series in the order described in the flowcharts.
  • Each step in the processing of the above embodiment may be processed in an order different from the order described in the flowcharts, or may be processed in parallel.
  • (1) An information processing apparatus including: a control unit that causes first notification information related to a display object to be notified to a user when the user's line of sight is directed toward the display object.
  • The information processing apparatus according to (1), wherein the display object is displayed based on real space information.
  • (3) The information processing apparatus wherein the first notification information indicates whether or not an input to the display object can be accepted.
  • The information processing apparatus according to (3), wherein the input is an input by voice.
  • The information processing apparatus wherein, when the first notification information indicates that the input cannot be accepted and the input is performed, the control unit causes notification indicating that the display object has not accepted the input.
  • The information processing apparatus according to (1) or (2), wherein the first notification information indicates that the display object has been selected.
  • (7) The information processing apparatus wherein the control unit causes third notification information related to the display object to be notified to the user when the user's line of sight is not directed toward the display object.
  • The information processing apparatus according to (7), wherein the control unit causes the first notification information and the third notification information to be notified by display, and the third notification information is notified by a display having lower visibility than the first notification information.
  • An information processing method including: causing, by a processor, first notification information related to a display object to be notified to a user when the user's line of sight is directed toward the display object.
  • (15) A program for causing a computer system to realize a function of notifying a user of first notification information related to a display object when the user's line of sight is directed toward the display object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To provide an information processing device, an information processing method, and a program. [Solution] An information processing device provided with a control unit which causes first notification information relating to a display object to be provided to a user when the user's line of sight is directed to the display object.

Description

Information processing apparatus, information processing method, and program
The present disclosure relates to an information processing apparatus, an information processing method, and a program.
There are techniques for superimposing an object on a background (a real space or a virtual space) and presenting it to a user. For example, Patent Literature 1 discloses a technique for displaying an object based on a real-space image by superimposing it on the real-space image on a non-transmissive display, or by superimposing it on the real-space background on a transmissive (see-through) display.
JP 2014-106681 A
It is conceivable that a user selects an object displayed as described above (a display object), or inputs a command or the like to the display object. However, it may be difficult for the user to grasp information relating to the selection status of the display object, information relating to whether or not the display object can accept input, and the like.
In view of this, the present disclosure proposes a new and improved information processing apparatus, information processing method, and program that allow a user to more easily grasp information related to a display object.
According to the present disclosure, there is provided an information processing apparatus including a control unit that causes first notification information related to a display object to be notified to a user when the user's line of sight is directed toward the display object.
According to the present disclosure, there is also provided an information processing method including causing, by a processor, first notification information related to a display object to be notified to a user when the user's line of sight is directed toward the display object.
According to the present disclosure, there is also provided a program for causing a computer system to realize a function of notifying a user of first notification information related to a display object when the user's line of sight is directed toward the display object.
As described above, according to the present disclosure, a user can more easily grasp information related to a display object.
Note that the above effects are not necessarily limiting; together with or in place of the above effects, any of the effects described in this specification, or other effects that can be grasped from this specification, may be exhibited.
FIG. 1 is a block diagram illustrating a configuration example of an information processing apparatus 1 according to an embodiment of the present disclosure.
FIG. 2 is an explanatory diagram illustrating an example of a display object displayed on the display unit 16 by the output control unit 104 according to the embodiment.
FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by icon display.
FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape.
FIG. 5 is an explanatory diagram illustrating an example of labels displayed by the output control unit 104 according to the embodiment.
FIG. 6 is an explanatory diagram for explaining a clipping volume according to the embodiment.
FIG. 7 is an explanatory diagram for explaining operation example 1 according to the embodiment.
FIG. 8 is an explanatory diagram for explaining operation example 2 according to the embodiment.
FIG. 9 is an explanatory diagram for explaining operation example 3 according to the embodiment.
FIG. 10 is an explanatory diagram for explaining operation example 4 according to the embodiment.
FIG. 11 is an explanatory diagram for explaining operation example 5 according to the embodiment.
FIG. 12 is an explanatory diagram for explaining operation example 6 according to the embodiment.
FIG. 13 is an explanatory diagram for explaining operation example 7 according to the embodiment.
FIG. 14 is an explanatory diagram for explaining application example 1, in which the embodiment is applied to a group chat application for voice chatting with other users.
FIG. 15 is an explanatory diagram for explaining application example 2, in which the embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.
FIG. 16 is an explanatory diagram for explaining application example 3, in which the embodiment is applied to a store search application for searching for information on stores within the field of view.
FIG. 17 is an explanatory diagram illustrating a hardware configuration example.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
The description will be given in the following order.
 <<1. Configuration example>>
 <<2. Operation examples>>
  <2-1. Operation example 1>
  <2-2. Operation example 2>
  <2-3. Operation example 3>
  <2-4. Operation example 4>
  <2-5. Operation example 5>
  <2-6. Operation example 6>
  <2-7. Operation example 7>
 <<3. Application examples>>
  <3-1. Application example 1>
  <3-2. Application example 2>
  <3-3. Application example 3>
 <<4. Hardware configuration example>>
 <<5. Conclusion>>
 <<1. Configuration example>>
First, a configuration example of an embodiment of the present disclosure will be described with reference to FIG. 1. FIG. 1 is a block diagram illustrating a configuration example of an information processing apparatus 1 according to an embodiment of the present disclosure. As illustrated in FIG. 1, the information processing apparatus 1 according to the present embodiment includes a control unit 10, a sensor unit 12, a storage unit 14, a display unit 16, a sound output unit 17, and a communication unit 18.
The information processing apparatus 1 according to the present embodiment can be realized in various forms such as an HMD (Head Mounted Display), a mobile phone, a smartphone, a tablet PC (Personal Computer), and a projector. In the following, the case where the information processing apparatus 1 is an eyeglass-type HMD having a transmissive display unit 16 is described as an example.
The control unit 10 controls each component of the information processing apparatus 1. For example, under the control of the control unit 10, the information processing apparatus 1 can provide various applications to the user. Examples of applications provided by the information processing apparatus 1 will be described later; for example, the information processing apparatus 1 may provide a group chat application for voice chatting with other users, an SNS (Social Networking Service) application for interacting with other users, a store search application, and the like.
As illustrated in FIG. 1, the control unit 10 also functions as a line-of-sight recognition unit 101, a speech recognition unit 102, an image recognition unit 103, and an output control unit 104.
The line-of-sight recognition unit 101 acquires information on the user's line of sight based on information acquired by the sensor unit 12 described later. The information on the user's line of sight according to the present embodiment may include, for example, the direction of the user's line of sight, the position of the user's line of sight, a focal position, and the like. The position of the user's line of sight may be, for example, the position (for example, a coordinate position) at which the user's line of sight intersects the display unit 16.
The line-of-sight recognition unit 101 may acquire the information on the user's line of sight based on, for example, an image of the user's eyes acquired by a camera included in the sensor unit 12 described later. The line-of-sight recognition unit 101 may also acquire the information on the user's line of sight based on information acquired by an infrared sensor included in the sensor unit 12 described later or by another sensor.
The line-of-sight recognition unit 101 may detect, from the information acquired by the sensor unit 12, a reference point of the eye (for example, a point corresponding to a non-moving part of the eye, such as the inner corner of the eye or a corneal reflection) and a moving point of the eye (for example, a point corresponding to a moving part of the eye, such as the iris or the pupil). The line-of-sight recognition unit 101 may then acquire the information on the user's line of sight based on the position of the moving point relative to the reference point. Note that the method by which the line-of-sight recognition unit 101 according to the present embodiment acquires the information on the user's line of sight is not limited to the above, and any line-of-sight detection technique capable of detecting a line of sight may be used.
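By way of a non-limiting illustration of the reference-point/moving-point approach described above, the following Python sketch maps the offset between an eye reference point (for example, a corneal reflection) and an eye moving point (for example, the pupil center) to a gaze position on the display via a calibrated affine model. The linear calibration model, all names (GazeEstimator, calibrate), and the numeric values are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

class GazeEstimator:
    """Hypothetical sketch: estimates where the line of sight meets the
    display from the offset between an eye reference point and an eye
    moving point, using an affine model fitted during calibration."""

    def __init__(self):
        self.model = None  # 3x2 affine matrix filled in by calibrate()

    def calibrate(self, offsets, screen_points):
        # offsets: N x 2 (moving point - reference point) vectors;
        # screen_points: N x 2 known fixation targets on the display.
        ones = np.ones((len(offsets), 1))
        X = np.hstack([np.asarray(offsets, float), ones])   # N x 3
        Y = np.asarray(screen_points, float)                 # N x 2
        self.model, *_ = np.linalg.lstsq(X, Y, rcond=None)   # least squares

    def gaze_position(self, reference_point, moving_point):
        # Offset of the moving point relative to the reference point.
        offset = np.subtract(moving_point, reference_point)
        return np.append(offset, 1.0) @ self.model  # (x, y) on the display

# Usage: calibrate with a few known fixation targets, then query.
est = GazeEstimator()
est.calibrate(offsets=[(0, 0), (10, 0), (0, 8), (10, 8)],
              screen_points=[(640, 360), (1180, 360), (640, 700), (1180, 700)])
print(est.gaze_position(reference_point=(100, 80), moving_point=(105, 84)))
```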
The speech recognition unit 102 recognizes the user's utterance (speech) picked up by a microphone included in the sensor unit 12 described later, converts it into a character string, and acquires utterance text. The speech recognition unit 102 may also perform natural language processing such as morphological analysis and string pattern matching on the utterance text, and may acquire, for example, a designation word that designates a display object, a command that is an instruction to a display object, and the like from the utterance text.
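As a minimal sketch of the pattern-matching step described above, the following Python function splits recognized utterance text such as "Bike Rotate" into a designation word and a command. The command vocabulary and function names are hypothetical assumptions, not the disclosed implementation.

```python
# Hypothetical command vocabulary used only for this illustration.
KNOWN_COMMANDS = {"Rotate", "Move", "Open", "Close"}

def parse_utterance(utterance_text):
    """Split an utterance such as "Bike Rotate" into
    (designation_word, command); either part may be None."""
    tokens = utterance_text.strip().split()
    command = next((t for t in tokens if t in KNOWN_COMMANDS), None)
    others = [t for t in tokens if t != command]
    designation = others[0] if others else None
    return designation, command

print(parse_utterance("Bike Rotate"))  # -> ('Bike', 'Rotate')
print(parse_utterance("Rotate"))       # -> (None, 'Rotate')
```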
The image recognition unit 103 analyzes an image acquired by a camera or a stereo camera included in the sensor unit 12 described later, and acquires information on the real space. For example, the image recognition unit 103 may acquire information on the three-dimensional shape of the real space by performing a stereo matching method on a plurality of simultaneously acquired images, an SfM (Structure from Motion) method on a plurality of images acquired in time series, or the like. The image recognition unit 103 may also recognize an object existing in the real space (a real object), a marker, or the like by matching feature point information prepared in advance against feature point information detected from the captured image, and may acquire information on the object, the marker, or the like. Note that a marker recognized by the image recognition unit 103 may be texture information of a specific pattern or a set of image feature point information, expressed by, for example, a two-dimensional code. An object about which the image recognition unit 103 acquires information may be an operation body such as the user's hand, a finger, or an object held in the hand.
The output control unit 104 controls the output according to the present embodiment based on the information acquired by the line-of-sight recognition unit 101, the speech recognition unit 102, and the image recognition unit 103. For example, the output control unit 104 controls display (display output) by the display unit 16 described later, and controls sound output by the sound output unit 17 described later.
The output control unit 104 may display a display object based on, for example, the real space information acquired by the image recognition unit 103 (information on the three-dimensional shape of the real space, objects existing in the space, markers, and the like). As will be described later, when the display unit 16 is capable of giving binocular parallax to the user, the output control unit 104 can display the display object so that it is perceived as existing at a position (three-dimensional position) in the real space corresponding to the real space information.
FIG. 2 is an explanatory diagram illustrating an example of a display object that the output control unit 104 displays on the display unit 16. The marker M10 shown in FIG. 2 is a marker existing in the real space. Based on the information on the marker M10 acquired by the image recognition unit 103, the output control unit 104 acquires the display object A10 corresponding to the marker M10 from the storage unit 14. Then, as illustrated in FIG. 2, the output control unit 104 may display the display object A10 so that it is perceived as existing at a position corresponding to the marker M10 (for example, on the marker M10).
Note that the display of display objects by the output control unit 104 is not limited to the above example. For example, based on the real object information acquired by the image recognition unit 103, the output control unit 104 may acquire a display object corresponding to the real object from the storage unit 14 and display the display object at a position corresponding to the real object. Further, based on the information on the three-dimensional shape of the real space acquired by the image recognition unit 103, the output control unit 104 may display a display object so that it is perceived as existing on a plane in the real space.
The output control unit 104 may also display a display object based on information acquired from another apparatus via the communication unit 18. For example, based on positional information of another user acquired via the communication unit 18, an avatar indicating the other user may be displayed as a display object so that it is perceived as existing in the direction in which the other user is present. In such a case, the output control unit 104 may perform the above display control based on direction information acquired from a geomagnetic sensor or the like included in the sensor unit 12.
The output control unit 104 also causes notification information related to a display object to be notified to the user. The notification method may vary; for example, the output control unit 104 may cause notification by display by controlling the display unit 16, or by sound output by controlling the sound output unit 17. Specific examples in which the output control unit 104 causes notification information to be notified to the user will be described later.
For example, the output control unit 104 may cause first notification information related to a display object to be notified to the user when the user's line of sight is directed toward the display object. With such a configuration, the user can easily grasp information related to a display object in which the user is interested.
The first notification information notified to the user when the user's line of sight is directed toward a display object may be, for example, information indicating whether or not the display object can accept input. With such a configuration, the user can easily grasp whether or not the display object accepts the user's input, which makes it possible to prevent, for example, the user from attempting to input to a display object that cannot accept input.
The first notification information may also be information indicating whether or not the display object can accept input by voice (for example, input of a command). When it is unclear whether voice input can be accepted, it is difficult for the user to grasp why a command is not executed if, for example, a voice command is input but is not executed on the display object. Since voice input generally requires speech recognition processing, it is difficult for the user to judge whether the command is not executed because the speech recognition processing failed, or because the display object cannot accept voice input. As a result, the user may pointlessly repeat voice input to a display object that cannot accept voice input. However, by notifying the user of information indicating whether voice input can be accepted as described above, it is possible to prevent, for example, the user from pointlessly repeating voice input to a display object that cannot accept voice input.
Note that the acceptability of input to a display object indicated by the first notification information is not limited to the acceptability of input by voice, and may be the acceptability of various inputs such as touch input, input using an operation body, and input by line of sight.
Note that the first notification information is not limited to the above, and may indicate various kinds of information. For example, the first notification information may be information indicating that the display object has been selected. With such a configuration, when a plurality of display objects are displayed, for example, the user can grasp which object has been selected. The selection of display objects will be described later. In a group chat application or an SNS application as described later, the first notification information may indicate whether or not interaction (for example, voice chat) with the user indicated by the display object is possible.
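The following Python sketch illustrates one way the control logic described above could choose the content of the first notification information when the user's line of sight reaches an object. The DisplayObject fields, the function name, and the notification wording are all hypothetical assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class DisplayObject:
    # Hypothetical fields used only for this illustration.
    name: str
    accepts_voice_input: bool
    selected: bool = False

def first_notification(obj: DisplayObject) -> str:
    """Return the first notification information to present when the
    user's line of sight is directed at `obj` (illustrative wording)."""
    if obj.accepts_voice_input:
        return f"{obj.name}: voice input can be accepted"
    return f"{obj.name}: voice input cannot be accepted"

print(first_notification(DisplayObject("Bike", accepts_voice_input=True)))
print(first_notification(DisplayObject("Poster", accepts_voice_input=False)))
```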
Further, when input by the user (for example, input of a command by voice) is performed while a display object is selected, the output control unit 104 may cause second notification information related to the selected display object to be notified to the user.
For example, when a command is input while a display object capable of accepting input is selected, the output control unit 104 may perform control related to the display object based on the command, and may cause the user to be notified of second notification information indicating that the control has been performed. Note that the control related to the display object may vary depending on the command, the display object, and the type of application. For example, the control related to the display object may be display control of the display object, such as rotation or movement, or may be control for starting interaction (for example, voice chat) with the user indicated by the display object.
Further, when input is performed even though first notification information indicating that the display object cannot accept input has been notified, the output control unit 104 may cause the user to be notified of second notification information indicating that the display object did not accept the input. With such a configuration, even if the user performs input while a display object that cannot accept input is selected, the user can grasp that the input to the display object was not accepted. As a result, it is possible to prevent, for example, the user from further repeating input to a display object that cannot accept voice input.
Note that the second notification information may be notified to the user more explicitly or more strongly than the first notification information. For example, when the first notification information and the second notification information are notified by display, the second notification information may be notified by a display having higher visibility (more conspicuous) than the first notification information. When the first notification information and the second notification information are notified by sound output, the second notification information may be notified by a louder sound (having a higher sound pressure level) than the first notification information. Alternatively, the first notification information may be notified by display, and the second notification information by sound output. With such a configuration, it becomes easier for the user to grasp, for example, that the display object does not accept input.
The output control unit 104 may also cause third notification information related to a display object to be notified to the user when the user's line of sight is not directed toward the display object. With such a configuration, the user can more easily grasp information related to a display object in which the user is interested. The third notification information may be, for example, information indicating whether or not the display object can accept input. The third notification information may also be information indicating whether or not the display object can be selected, or information indicating whether or not interaction with the user indicated by the display object is possible.
Note that the third notification information may be notified to the user more implicitly or more weakly than the first notification information. For example, when the first notification information and the third notification information are notified by display, the third notification information may be notified by a display having lower visibility (less conspicuous) than the first notification information. When the first notification information and the third notification information are notified by sound output, the third notification information may be notified by a quieter sound (having a lower sound pressure level) than the first notification information. With such a configuration, when a plurality of display objects are displayed, for example, the user is less likely to receive a cluttered impression from the notifications.
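The relative prominence described above (second notification stronger than first, third weaker than first) could be captured in a lookup such as the following Python sketch. The concrete opacity and sound-level values are assumptions chosen only to illustrate the ordering; the disclosure itself specifies only the relative strengths.

```python
# Illustrative prominence table: second > first > third.
PROMINENCE = {
    "first":  {"display_opacity": 0.8, "sound_level_db": 60},
    "second": {"display_opacity": 1.0, "sound_level_db": 70},  # most explicit
    "third":  {"display_opacity": 0.4, "sound_level_db": 50},  # least obtrusive
}

def output_params(kind: str) -> dict:
    """Return assumed output parameters for a notification kind."""
    return PROMINENCE[kind]

print(output_params("second"))  # strongest presentation
print(output_params("third"))   # weakest presentation
```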
Next, methods of notifying the first notification information, the second notification information, and the third notification information will be described specifically. In the following, the first notification information, the second notification information, and the third notification information may be collectively referred to as notification information.
For example, the output control unit 104 may cause notification information to be notified by displaying a predetermined icon at a position corresponding to the position of the display object. FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by icon display. As shown in FIG. 3, an icon T12 indicating that voice input to the display object A10 can be accepted is displayed in the vicinity of the display object A10 (at a position corresponding to the display object A10).
Note that notification of notification information by icon display is not limited to the above example. For example, when input to a display object cannot be accepted, or when interaction with the user indicated by the display object is impossible, an icon including, for example, an X (cross) mark may be displayed in the vicinity of the display object.
The output control unit 104 may also cause notification information to be notified by displaying the display object in a predetermined shape. FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape. In FIG. 4, the display object A10 is displayed wearing a headset T14 indicating that voice input to the display object A10 can be accepted. As shown in FIG. 4, notification by part of the display object's equipment (an example of its shape) has low visibility and is therefore more suitable for, for example, notification of the third notification information.
Note that notification of notification information by displaying a display object in a predetermined shape is not limited to the above example. For example, when interaction with the user indicated by an avatar (an example of a display object) is impossible, the avatar may be displayed in a gesture pose expressing refusal appropriate to the cultural sphere (for example, making an X mark with the hands). Notification by an avatar's pose (an example of its shape) has high visibility and is therefore more suitable for, for example, notification of the first notification information or the second notification information.
The output control unit 104 may also cause notification information to be notified by animating the display object. For example, when the display object is an avatar representing a user, the notification information may be notified by an animation of the avatar. When interaction with the user indicated by the avatar is impossible, the avatar may look in a direction other than toward the user, shake its head, appear busy working (for example, facing a desk), move away from the user, or hide behind an object. For example, the first notification information or the third notification information may be notified by an animation of the avatar.
The output control unit 104 may also cause notification information to be notified by displaying the display object under predetermined display conditions. For example, a display object displayed when it cannot accept input may be displayed with higher transparency than a display object displayed when it can accept input. A display object displayed when it cannot accept input may also be displayed in a state that blends into the surrounding environment more than a display object displayed when it can accept input. For example, by rendering the display object with a more photorealistic drawing method, it can be blended into the real space and brought into a state that fits the surrounding environment. A display object displayed when it cannot accept input may also be displayed in monochrome. As described above, notification of notification information by displaying a display object under predetermined display conditions can also lower visibility, and is therefore also suitable for, for example, notification of the third notification information.
The output control unit 104 may also cause notification information to be notified by displaying the display object under display conditions that enhance (emphasize) its visibility. For example, the output control unit 104 may cause notification information to be notified by displaying the display object under display conditions such as a glow expression that makes the periphery of the display object shine, or blinking. Notification of notification information by display conditions that enhance the visibility of a display object in this way can make the display object stand out from other display objects. Therefore, such notification is more suitable for, for example, notification information indicating that the object has been selected, or notification information indicating that control based on input has been performed.
Note that the display conditions according to the present embodiment are not limited to the above, and may include various conditions relating to, for example, luminance, transparency, color, and size.
The method of notifying notification information by the output control unit 104 has been described above. Next, the determination of whether or not the user's line of sight is directed toward a display object, which is used to determine whether the output control unit 104 notifies the first notification information, will be described.
The determination of whether or not the user's line of sight is directed toward a display object may be made, for example, based on whether the position of the line of sight is included within the display range of the display object on the display unit 16. The determination may also be made based on whether the line of sight intersects the three-dimensional region in which the display object is perceived to exist in the real space.
Further, the determination of whether or not the user's line of sight is directed toward a display object may be made, for example, in a gaze state in which the position of the line of sight remains within a predetermined range for a predetermined time. In the gaze state, the object toward which the user's line of sight is directed (the object ahead of the user's line of sight) may hereinafter be referred to as a gaze object.
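A minimal Python sketch of the gaze-state judgment described above follows: the line of sight is treated as a gaze (fixation) once its position has stayed within a given radius for a given duration. The class name and the radius and dwell thresholds are assumptions; the disclosure specifies only "a predetermined range" and "a predetermined time".

```python
import math

class GazeDwellDetector:
    """Hypothetical sketch of the gaze-state judgment: returns True once
    the gaze position has remained within `radius_px` for `dwell_sec`."""

    def __init__(self, radius_px=30.0, dwell_sec=1.0):
        self.radius = radius_px
        self.dwell = dwell_sec
        self.anchor = None       # center of the current fixation candidate
        self.anchor_time = None

    def update(self, pos, t):
        """Feed one gaze sample (x, y) at time t; True means gaze state."""
        if self.anchor is None or math.dist(pos, self.anchor) > self.radius:
            self.anchor, self.anchor_time = pos, t  # restart the dwell timer
            return False
        return (t - self.anchor_time) >= self.dwell

det = GazeDwellDetector()
for i, p in enumerate([(100, 100), (104, 98), (101, 103), (99, 101)]):
    print(det.update(p, t=i * 0.4))  # becomes True once dwell >= 1.0 s
```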
The output control unit 104 also manages the selection state of display objects. For example, when it is determined that the user's line of sight is directed toward a display object, the output control unit 104 may set the display object to the selected state. When a display object is set to the selected state because the user's line of sight has been directed toward it, first notification information indicating that it has been selected may be notified, as described above.
The output control unit 104 may also set a display object to the selected state based on the result of speech recognition by the speech recognition unit 102. For example, the output control unit 104 may set a display object to the selected state when a designation word designating the display object, acquired by the speech recognition unit 102, matches a label added to the display object (meta information such as text indicating the display object).
The output control unit 104 may also display the label at a position corresponding to the position of the display object. For example, the label may be associated with the display object and stored in advance in the storage unit 14 described later, or may be acquired from outside via the communication unit 18.
FIG. 5 is an explanatory diagram illustrating an example of labels displayed by the output control unit 104. As shown in FIG. 5, "Bike", which is text indicating the display object A22, is associated with the display object A22 as a label L22 and is displayed at a position corresponding to the position of the display object A22. Similarly, "Menu", which is text indicating the display object A24, is associated with the display object A24 as a label L24 and is displayed at a position corresponding to the position of the display object A24. As shown in FIG. 5, even when a plurality of display objects are displayed, the user can grasp the labels and designate the display object to be selected by voice.
The output control unit 104 may also set display objects existing within the user's field of view to the selected state. For example, the output control unit 104 may extract display objects existing within the user's current field of view and set them to the selected state, and when a command is acquired by the speech recognition unit 102, perform control based on the command on the display objects set to the selected state.
Note that a display object existing within the user's field of view according to the present embodiment may be, for example, a display object included in a clipping volume. FIG. 6 is an explanatory diagram for explaining the clipping volume. A point C1 shown in FIG. 6 indicates the position of the user. As shown in FIG. 6, the clipping volume V1 is the space, within a pyramid having the point C1 as its apex, sandwiched between a front clipping plane P1 located at a first distance from the point C1 and a rear clipping plane P2 located at a second distance from the point C1 that is greater than the first distance.
Note that the determination of whether or not the output control unit 104 displays a display object may similarly be made based on whether the display object is included in a clipping volume. The first distance and the second distance of the clipping volume used for this display determination may be the same as, or different from, the first distance and the second distance of the clipping volume used for selecting display objects.
The output control unit 104 does not have to set to the selected state a display object that is included in the clipping volume but is hidden by an object existing in the real space or by another display object and is therefore not displayed. The output control unit 104 may also vary the shape of the above-described clipping volume based on the information on the user's line of sight.
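The following Python sketch illustrates one possible containment test for the pyramid-shaped clipping volume described above: a point belongs to the volume when its forward distance from the user lies between the front and rear clipping distances and it falls inside the pyramid's horizontal and vertical extents. The function name, the field-of-view angles, and the near and far values are assumptions for illustration.

```python
import numpy as np

def in_clipping_volume(point, eye, forward, up,
                       near=0.5, far=10.0,
                       half_fov_h_deg=30.0, half_fov_v_deg=20.0):
    """Hypothetical test: does `point` lie between the front clipping
    plane (distance `near` from the user position `eye`) and the rear
    clipping plane (distance `far`), inside the viewing pyramid?"""
    forward = np.asarray(forward, float); forward /= np.linalg.norm(forward)
    up = np.asarray(up, float); up /= np.linalg.norm(up)
    right = np.cross(forward, up)

    v = np.asarray(point, float) - np.asarray(eye, float)
    z = v @ forward                # forward distance from the user
    if not (near <= z <= far):
        return False
    # The pyramid's lateral extents grow linearly with forward distance.
    max_x = z * np.tan(np.radians(half_fov_h_deg))
    max_y = z * np.tan(np.radians(half_fov_v_deg))
    return abs(v @ right) <= max_x and abs(v @ up) <= max_y

print(in_clipping_volume(point=(0.5, 0.2, 3.0), eye=(0, 0, 0),
                         forward=(0, 0, 1), up=(0, 1, 0)))  # True
```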
The output control unit 104 may also display various icons, buttons, and the like used by the user for input and operation.
The sensor unit 12 shown in FIG. 1 is a sensor device that senses information around the information processing apparatus 1. For example, the sensor unit 12 may include one or more microphones that pick up surrounding sounds. The sensor unit 12 may also include a camera or a stereo camera that acquires images of the surroundings, and a camera that acquires images of the user's eyes. The sensor unit 12 may further include an infrared sensor, a distance sensor, a human detection sensor, a biological information sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, and the like.
The storage unit 14 stores programs and parameters for each component of the information processing apparatus 1 to function. For example, the storage unit 14 may store the feature point information used by the image recognition unit 103, the display objects used by the output control unit 104, the labels (meta information) associated with the display objects, and the like.
The display unit 16 is a display that is controlled by the output control unit 104 and displays various kinds of information. The display unit 16 may be, for example, a transmissive (optical see-through) binocular eyeglass-type display. With such a configuration, the output control unit 104 can display a display object so that it is perceived as existing at an arbitrary three-dimensional position in the real space.
The sound output unit 17 is a device that outputs sound, such as a speaker or earphones. The sound output unit 17 has a function of converting an acoustic signal into sound and outputting it under the control of the output control unit 104.
The communication unit 18 is a communication interface that mediates communication with other apparatuses. The communication unit 18 supports an arbitrary wireless or wired communication protocol and establishes communication connections with other apparatuses via a communication network (not shown). This enables the information processing apparatus 1 to perform, for example, communication for providing various applications to the user.
 <<2. Operation examples>>
The configuration example of the information processing apparatus 1 according to the present embodiment has been described above. Next, several operation examples of the information processing apparatus 1 according to the present embodiment will be described with reference to FIGS. 7 to 13. The operation examples described below may be combined, or may be switched according to user input.
  <2-1. Operation example 1>
FIG. 7 is an explanatory diagram for explaining operation example 1 according to the present embodiment. In operation example 1, the user can designate a display object by voice and input a command to the display object.
First, as shown in FIG. 7, the output control unit 104 displays display objects and the labels associated with the display objects (S102). Subsequently, when the user inputs speech (YES in S104), the speech recognition unit 102 acquires a designation word designating a display object and a command, and a label search is performed based on the designation word (S106). Note that in step S104, the user may utter a designation word (Bike) followed by a command (Rotate), for example "Bike Rotate".
When a label matching the designation word is found among the currently displayed labels (YES in S108), control based on the command is performed on the display object corresponding to the found label (S112).
On the other hand, when no speech is input (NO in S104), or when no label matching the designation word is found among the currently displayed labels (NO in S108), the processing ends.
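A condensed Python sketch of the flow of operation example 1 (steps S102 to S112) follows: look up the label matching the spoken designation word and apply the command to the corresponding display object. The label table, token handling, and return strings are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical label -> display object mapping for this illustration.
displayed = {"Bike": "bike_object", "Menu": "menu_object"}

def handle_utterance(utterance):
    tokens = utterance.split()
    if not tokens:                      # S104: no speech input -> end
        return None
    word, command = tokens[0], tokens[-1]
    target = displayed.get(word)        # S106/S108: search the labels
    if target is None:                  # no matching label -> end
        return None
    return f"apply '{command}' to {target}"  # S112: control by command

print(handle_utterance("Bike Rotate"))  # apply 'Rotate' to bike_object
print(handle_utterance("Car Rotate"))   # None (no label matches)
```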
  <2-2. Operation example 2>
FIG. 8 is an explanatory diagram for explaining operation example 2 according to the present embodiment. In operation example 2, the user can input a command by voice to all display objects that can accept voice input.
First, as shown in FIG. 8, the output control unit 104 displays display objects and icons indicating whether each display object can accept voice input (S122). Subsequently, when the user inputs speech (YES in S124), the speech recognition unit 102 acquires a command, and control based on the command is performed on the display objects that can accept voice input (S126). Note that in step S124, the user may utter only the command, for example. On the other hand, when no speech is input (NO in S124), the processing ends.
  <2-3. Operation example 3>
FIG. 9 is an explanatory diagram for explaining operation example 3 according to the present embodiment. In operation example 3, the user can select a display object by gazing at it, and input a command by voice to the selected display object.
First, as shown in FIG. 9, the output control unit 104 displays display objects (S202). Subsequently, whether or not the user is in the gaze state is determined based on the information on the line of sight (S204). When the user is in the gaze state (YES in S204) and a display object exists ahead of the line of sight (YES in S206), the display object (gaze object) is set to the selected state (S208).
Note that in step S208, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected, or of notification information indicating whether the gaze object can accept voice input. Further, when the gaze object enters the gaze state again, the selected state of the gaze object may be released (canceled).
Subsequently, when the user inputs a command by voice (YES in S212), control of the selected gaze object is performed based on the command (S214). Note that in step S214, when voice input to the selected gaze object is not possible, the output control unit 104 may cause the user to be notified of second notification information indicating that the gaze object did not accept the input.
When the user is not in the gaze state (NO in S204), when no display object exists ahead of the line of sight (NO in S206), or when no voice command is input (NO in S212), the processing ends.
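The following Python sketch condenses operation example 3 (steps S202 to S214): a gazed object is set to the selected state, a subsequent voice command is applied to it, and second notification information is issued when the object cannot accept voice input. The dictionary fields and return strings are hypothetical assumptions.

```python
def run_example3(gaze_object, voice_command):
    """Illustrative flow of operation example 3; not the disclosed code."""
    if gaze_object is None:                 # S206: no object under gaze
        return "end"
    selected = gaze_object                  # S208: set to the selected state
    if voice_command is None:               # S212: no voice command input
        return "end"
    if not selected["accepts_voice"]:       # S214: input was not accepted
        return f"second notification: {selected['name']} did not accept input"
    return f"{selected['name']} <- {voice_command}"  # S214: control by command

print(run_example3({"name": "Bike", "accepts_voice": True}, "Rotate"))
print(run_example3({"name": "Poster", "accepts_voice": False}, "Rotate"))
```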
  <2-4. Operation example 4>
FIG. 10 is an explanatory diagram for explaining operation example 4 according to the present embodiment. In operation example 4, the user selects a display object by gazing at it, switches it to a voice input mode by gazing further, and can then input a command by voice to the display object switched to the voice input mode.
First, as shown in FIG. 10, the output control unit 104 displays display objects (S222). Subsequently, whether or not the user is in the gaze state (a first gaze state) is determined based on the information on the line of sight (S224). When the user is in the gaze state (YES in S224) and a display object exists ahead of the line of sight (YES in S226), the display object (gaze object) is set to the selected state (S228).
 なお、ステップS228において、出力制御部104は、注視オブジェクトが選択されたことを示す通知情報をユーザに通知させてもよい。また、ステップS228において、注視オブジェクトに対する音声入力が不可能である場合、当該注視オブジェクトは選択状態に設定されなくてもよい。 In step S228, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected. In step S228, when voice input to the gaze object is impossible, the gaze object may not be set to the selected state.
 続いて、視線に関する情報に基づいて、選択された表示オブジェクトに対してさらなる注視状態(第二の注視状態)であるか否かの判定が行われる(S234)。なお、ステップS224における第一の注視状態に係る判定と、ステップS234における第二の注視状態に係る判定では、判定に用いられる時間が異なっていてもよい。 Subsequently, based on the information regarding the line of sight, it is determined whether or not the selected display object is in a further gaze state (second gaze state) (S234). Note that the time used for the determination may be different between the determination related to the first gaze state in step S224 and the determination related to the second gaze state in step S234.
 第二の注視状態である場合(S234においてYES)、制御部10により選択された表示オブジェクト(注視オブジェクト)が音声入力モードに切り替えられる(S234)。 If it is in the second gaze state (YES in S234), the display object (gaze object) selected by the control unit 10 is switched to the voice input mode (S234).
 続いて、ユーザが音声によりコマンドを入力すると(S236においてYES)、選択され、音声入力モードに切り替えられた注視オブジェクトに対する制御がコマンドに基づいて行われる(S234)。 Subsequently, when the user inputs a command by voice (YES in S236), control on the gaze object selected and switched to the voice input mode is performed based on the command (S234).
 第一の注視状態でない場合(S224においてNO)、視線の先に表示オブジェクトが存在しない場合(S226においてNO)、第二の注視状態でない場合(S232においてNO)、または音声コマンドが入力されない場合(S212においてNO)、処理は終了する。 When not in the first gaze state (NO in S224), when no display object exists at the tip of the line of sight (NO in S226), when not in the second gaze state (NO in S232), or when no voice command is input ( The process ends in S212.
 上述した動作例4によれば、ユーザは二段階の注視を行うことで、表示オブジェクトを音声入力モードに切り替えることが可能となるため、誤った表示オブジェクトに音声入力を行ってしまうことが抑制される。 According to the operation example 4 described above, the user can switch the display object to the voice input mode by performing a two-step gaze, and thus it is possible to suppress the voice input to the wrong display object. The
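One way to realize the two different determination times mentioned above is to compare the gaze dwell time against two thresholds. The sketch below assumes concrete threshold values purely for illustration; the disclosure does not specify any.

```python
# Illustrative two-stage gaze classification; the threshold values are assumed.
FIRST_GAZE_SEC = 0.5    # dwell time for the first gaze state (S224)
SECOND_GAZE_SEC = 1.2   # longer dwell time for the second gaze state (S232)

def classify_gaze(dwell_seconds):
    if dwell_seconds >= SECOND_GAZE_SEC:
        return "second"   # S234: switch the selected object to the voice input mode
    if dwell_seconds >= FIRST_GAZE_SEC:
        return "first"    # S228: set the gaze object to the selected state
    return "none"

print(classify_gaze(0.8))   # 'first'
print(classify_gaze(1.5))   # 'second'
```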
<2-5. Operation Example 5>
 FIG. 11 is an explanatory diagram for describing Operation Example 5 according to the present embodiment. In Operation Example 5, the user selects one or more display objects by gazing and can switch the selected display object(s) to the voice input mode by further gazing at a voice input icon.
 First, as shown in FIG. 11, the output control unit 104 displays display objects and a voice input icon for switching the display objects to the voice input mode (S242).
 Subsequently, whether the user is in the gaze state is determined on the basis of the information regarding the line of sight (S244). When the user is in the gaze state (YES in S244) and an unselected display object exists ahead of the line of sight (YES in S246), that display object (the gaze object) is set to the selected state (S248).
 Subsequently, the processing returns to step S244. Each time the processing of steps S244 to S248 is repeated, one display object is set to the selected state.
 When the user is in the gaze state (YES in S244) and no unselected display object exists ahead of the line of sight (NO in S246), whether the voice input icon exists ahead of the line of sight is determined (S250).
 When the voice input icon exists ahead of the line of sight (YES in S250), all the display objects selected at that point are switched to the voice input mode (S252).
 Subsequently, when the user inputs a command by voice (YES in S254), the gaze objects that have been selected and switched to the voice input mode are controlled on the basis of the command (S256).
 On the other hand, when no voice input icon exists ahead of the line of sight (NO in S250), or when no voice command is input (NO in S254), the processing ends.
 According to Operation Example 5 described above, the user can select a plurality of display objects on the basis of the line of sight and can input by voice to the plurality of display objects simultaneously.
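The select-then-confirm loop of steps S244 to S256 can be pictured with the following sketch. The encoding of gaze events as a list of target names, and the sentinel string for the voice input icon, are assumptions made only for this example.

```python
# Illustrative sketch of the FIG. 11 flow; gaze events name whatever lies ahead
# of the line of sight: an unselected object, the voice input icon, or None.

def operation_example_5(gaze_events, get_voice_command):
    selected = []
    for target in gaze_events:
        if target == "voice_input_icon":          # YES in S250
            break                                 # S252: switch all selected objects
        if target is not None and target not in selected:
            selected.append(target)               # S248: one object per iteration
    else:
        return []                                 # icon never gazed at (NO in S250)
    command = get_voice_command()                 # S254
    if command is None:
        return []
    for obj in selected:                          # S256: control every object at once
        print(f"apply command '{command}' to {obj}")
    return selected

operation_example_5(["memo", "photo", "voice_input_icon"], lambda: "close")
```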
<2-6. Operation Example 6>
 FIG. 12 is an explanatory diagram for describing Operation Example 6 according to the present embodiment. In Operation Example 6, the user can collectively select the display objects existing within the field of view.
 First, as shown in FIG. 12, the output control unit 104 displays display objects (S302). The output control unit 104 extracts the display objects existing within the user's current field of view and sets them to the selected state (S304). The processing of step S304 may be performed on the basis of whether each display object is included in the clipping volume described with reference to FIG. 6.
 Subsequently, when the user inputs a command by voice (YES in S306), the extracted and selected display objects are controlled on the basis of the command (S308). On the other hand, when no voice command is input (NO in S306), the processing ends.
 According to Operation Example 6 described above, the user can collectively select a plurality of display objects existing within the field of view and can input by voice to the plurality of display objects simultaneously.
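The containment test of step S304 can be pictured with a simple geometric check. The sketch below stands in for the clipping-volume test using an axis-aligned box, which is an assumption made only for illustration; the actual clipping volume is the one described with reference to FIG. 6.

```python
# Assumed stand-in for the S304 clipping-volume test: an axis-aligned box check.
def in_view(obj_pos, box_min, box_max):
    return all(lo <= p <= hi for p, lo, hi in zip(obj_pos, box_min, box_max))

objects = {"A": (0.2, 0.1, 1.5), "B": (3.0, 0.0, 1.0)}
selected = [name for name, pos in objects.items()
            if in_view(pos, (-1, -1, 0), (1, 1, 5))]   # S304
print(selected)   # only the objects inside the field of view are selected -> ['A']
```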
<2-7. Operation Example 7>
 FIG. 13 is an explanatory diagram for describing Operation Example 7 according to the present embodiment. In Operation Example 7, the user moves display objects with an operating body such as his or her own hand to group them and can then input by voice to the grouped display objects simultaneously.
 First, as shown in FIG. 13, the output control unit 104 displays display objects on, for example, a plane in front of the user (S402). Subsequently, the user groups a plurality of display objects by gathering them together through an operation of pinching and moving the display objects with an operating body such as his or her own hand (S404). Note that in step S404 the output control unit 104 may perform control to move the display positions of the display objects on the basis of the positional relationship between the operating body recognized by the image recognition unit 103 and the display objects.
 Subsequently, the user moves the field of view so that the grouped display objects are included in it (S406). Subsequently, when it is determined, on the basis of the information regarding the line of sight acquired by the line-of-sight recognition unit 101, that the user's focus is on the screen surface, the output control unit 104 displays a voice-input-enabled icon at the center of the screen (S412). Note that the voice-input-enabled icon displayed in step S412 indicates that voice input has become possible, and differs from the voice input icon displayed in step S242 of FIG. 11.
 Subsequently, when the user inputs a command by voice (YES in S414), the display objects existing within the field of view are controlled on the basis of the command (S416). On the other hand, when no voice command is input (NO in S414), the processing ends.
 According to Operation Example 7 described above, after moving a plurality of display objects so that they fit within the field of view and grouping them, the user can input by voice to the plurality of display objects within the field of view all at once.
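The pinch-and-move control of step S404 can be pictured as the pinched object's display position following the operating body. The helper below is an assumed illustration; the real control depends on the positional relationship recognized by the image recognition unit 103.

```python
# Assumed sketch of S404: only the pinched object follows the hand position.
def follow_hand(object_positions, pinched_name, hand_position):
    return {name: (hand_position if name == pinched_name else pos)
            for name, pos in object_positions.items()}

positions = {"memo": (0.4, 0.3), "clock": (-0.2, 0.6)}
positions = follow_hand(positions, "memo", (0.0, 0.0))   # drop at the grouping area
print(positions)   # {'memo': (0.0, 0.0), 'clock': (-0.2, 0.6)}
```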
<<3. Application Examples>>
 The configuration example and operation examples of the present embodiment have been described above. Subsequently, as application examples of the present embodiment, examples in which the present embodiment is applied to several applications will be described.
<3-1. Application Example 1>
 FIG. 14 is an explanatory diagram for describing Application Example 1, in which the present embodiment is applied to a group chat application for performing voice chat with other users.
 First, as shown in FIG. 14, the output control unit 104 displays, in front of the user, a list of avatars (an example of display objects) representing other users (S502). Subsequently, avatars are selected by the user (S506). The selection of avatars in step S506 may be, for example, a selection based on the user's line of sight as described with reference to FIGS. 9 to 11.
 Subsequently, when a command to start a conversation (for example, "Start Chat") is input by voice (YES in S508), voice chat with the users represented by all the selected avatars is started under the control of the control unit 10 (S512). In step S512, the avatars that have not been selected may cease to be displayed, for example by moving out of the field of view.
 On the other hand, when no voice command is input (NO in S508), the processing ends.
 Note that after the voice chat is started in step S512, when the user pinches an avatar with an operating body such as his or her own hand and moves it out of the field of view, the control unit 10 may perform control so that the user represented by that avatar is excluded from the voice chat.
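A compact sketch of this chat control follows. The "Start Chat" command word comes from the text above; the class and method names are assumptions introduced only for illustration.

```python
# Hypothetical group-chat control following FIG. 14.
class GroupChat:
    def __init__(self):
        self.participants = set()

    def handle_command(self, command, selected_avatars):
        if command == "Start Chat":                   # YES in S508
            self.participants = set(selected_avatars) # S512: chat starts

    def on_avatar_moved_out_of_view(self, avatar):
        # Pinching an avatar and moving it out of the field of view
        # excludes the corresponding user from the voice chat.
        self.participants.discard(avatar)

chat = GroupChat()
chat.handle_command("Start Chat", ["alice", "bob"])
chat.on_avatar_moved_out_of_view("bob")
print(chat.participants)   # {'alice'}
```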
<3-2. Application Example 2>
 FIG. 15 is an explanatory diagram for describing Application Example 2, in which the present embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.
 First, as shown in FIG. 15, the output control unit 104 displays friend avatars (an example of display objects) corresponding to the users with whom the user interacts, together with labels (names) corresponding to the friend avatars (S522). Note that the output control unit 104 may perform the processing of step S522 triggered by, for example, the user's line of sight turning upward. Further, a friend avatar may be displayed using a photograph or icon of the corresponding user, and may be displayed in the direction in which that user is present.
 Subsequently, a friend avatar is selected by the user (S526). The selection of the friend avatar in step S526 may be, for example, a selection by designating a label by voice as described with reference to FIG. 7, or a selection based on the user's line of sight as described with reference to FIGS. 9 to 11.
 Subsequently, when a command (for example, "Talk" or "Mail") is input by voice (YES in S528), control on the selected friend is performed on the basis of the command under the control of the control unit 10 (S530).
 On the other hand, when no voice command is input (NO in S528), the processing ends.
 Note that in the SNS application described above, when a user is active (in a state where interaction such as conversation is possible), the face of the friend avatar corresponding to that user may be displayed facing the user. Further, when a user is not active (in a state where interaction such as conversation is not possible), the face of the friend avatar corresponding to that user may be displayed turned away.
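The avatar orientation rule and the command handling described above can be pictured as follows. The command words "Talk" and "Mail" come from the text; the dispatch table and function names are assumptions for this sketch.

```python
# Illustrative sketch of the FIG. 15 SNS flow; all names besides the command
# words are assumed.

def avatar_orientation(active):
    # An active friend's avatar faces the user; an inactive one faces away.
    return "facing the user" if active else "facing away"

def handle_friend_command(command, friend):
    actions = {"Talk": f"start voice chat with {friend}",     # S528/S530
               "Mail": f"compose a mail to {friend}"}
    return actions.get(command, "unknown command")

print(avatar_orientation(active=True))      # facing the user
print(handle_friend_command("Talk", "alice"))
```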
<3-3. Application Example 3>
 FIG. 16 is an explanatory diagram for describing Application Example 3, in which the present embodiment is applied to a store search application with which, for example, a user walking around town searches for information regarding stores existing in the field of view.
 First, as shown in FIG. 16, the output control unit 104 displays icons (an example of display objects) associated with the stores existing in the user's field of view (S542). Note that an icon associated with a store may be, for example, an icon indicating the type of the store, and may be displayed so as to be perceived as present in the vicinity of the store, like a so-called POP (point of purchase) advertisement.
 Subsequently, while looking at a store of interest (keeping the store within the field of view), the user inputs a command requesting a search (for example, "Search") by voice (YES in S544).
 Subsequently, the output control unit 104 searches for information on the display objects existing in the user's field of view and updates the display according to the search results (S546). For example, the output control unit 104 may make it easy to see that information regarding a store has been acquired by raising the visibility of the icon associated with the store for which information was acquired as a result of the search (for example, with a glow expression). The acquired information may be displayed by further selecting the icon associated with the store for which the information was acquired.
 On the other hand, when no voice command is input (NO in S544), the processing ends.
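A minimal sketch of steps S544 to S546 follows: on the "Search" command, information is looked up only for the stores in the field of view, and the icons of stores for which information was found are highlighted. The lookup function and data are assumptions for this example.

```python
# Illustrative sketch of the FIG. 16 store search; all names are assumed.
def on_search(visible_stores, lookup):
    results = {}
    for store in visible_stores:             # restrict the search to the view
        info = lookup(store)
        if info is not None:
            results[store] = info
            print(f"raise visibility (glow) of the {store} icon")   # S546
    return results

catalog = {"cafe": "open until 22:00"}
print(on_search(["cafe", "bakery"], catalog.get))   # {'cafe': 'open until 22:00'}
```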
<<4. Hardware Configuration Example>>
 The embodiment of the present disclosure has been described above. Finally, the hardware configuration of the information processing apparatus according to the present embodiment will be described with reference to FIG. 17. FIG. 17 is a block diagram illustrating an example of the hardware configuration of the information processing apparatus according to the present embodiment. Note that the information processing apparatus 900 illustrated in FIG. 17 can realize, for example, the information processing apparatus 1 illustrated in FIG. 1. Information processing by the information processing apparatus 1 according to the present embodiment is realized by cooperation between software and the hardware described below.
 As shown in FIG. 17, the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a. The information processing apparatus 900 further includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, a communication device 913, and a sensor 915. The information processing apparatus 900 may include a processing circuit such as a DSP or an ASIC in place of, or in addition to, the CPU 901.
 The CPU 901 functions as an arithmetic processing device and a control device, and controls the overall operation in the information processing apparatus 900 according to various programs. The CPU 901 may also be a microprocessor. The ROM 902 stores programs, calculation parameters, and the like used by the CPU 901. The RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like. The CPU 901 can form, for example, the control unit 10 shown in FIG. 1.
 The CPU 901, the ROM 902, and the RAM 903 are connected to one another by the host bus 904a, which includes a CPU bus and the like. The host bus 904a is connected via the bridge 904 to the external bus 904b such as a PCI (Peripheral Component Interconnect/Interface) bus. Note that the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be implemented on a single bus.
 The input device 906 is realized by a device through which the user inputs information, such as a mouse, a keyboard, a touch panel, a button, a microphone, a switch, or a lever. The input device 906 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA that supports the operation of the information processing apparatus 900. Furthermore, the input device 906 may include, for example, an input control circuit that generates an input signal on the basis of information input by the user using the above input means and outputs the input signal to the CPU 901. By operating the input device 906, the user of the information processing apparatus 900 can input various data to the information processing apparatus 900 and instruct it to perform processing operations.
 The output device 907 is formed of a device capable of notifying the user of acquired information visually or audibly. Examples of such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices, and lamps; audio output devices such as speakers and headphones; and printer devices. The output device 907 outputs, for example, results obtained by various kinds of processing performed by the information processing apparatus 900. Specifically, the display device visually displays the results obtained by the various kinds of processing performed by the information processing apparatus 900 in various formats such as text, images, tables, and graphs. Meanwhile, the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs it audibly. The above output device can form, for example, the display unit 16 and the sound output unit 17 shown in FIG. 1.
 The storage device 908 is a device for data storage formed as an example of the storage unit of the information processing apparatus 900. The storage device 908 is realized by, for example, a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like. The storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, a deleting device that deletes data recorded on the storage medium, and the like. The storage device 908 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
 The drive 909 is a reader/writer for storage media, and is built into or externally attached to the information processing apparatus 900. The drive 909 reads information recorded on a mounted removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903. The drive 909 can also write information to the removable storage medium.
 The connection port 911 is an interface connected to external devices, and is a connection port to external devices capable of transmitting data via, for example, USB (Universal Serial Bus).
 The communication device 913 is, for example, a communication interface formed of a communication device or the like for connecting to a network 920. The communication device 913 is, for example, a communication card for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB). The communication device 913 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various kinds of communication, or the like. The communication device 913 can transmit and receive signals and the like to and from, for example, the Internet and other communication devices in accordance with a predetermined protocol such as TCP/IP. The communication device 913 can form, for example, the communication unit 18 shown in FIG. 1.
 Note that the network 920 is a wired or wireless transmission path for information transmitted from devices connected to the network 920. For example, the network 920 may include public line networks such as the Internet, telephone line networks, and satellite communication networks, various LANs (Local Area Networks) including Ethernet (registered trademark), WANs (Wide Area Networks), and the like. The network 920 may also include dedicated line networks such as an IP-VPN (Internet Protocol-Virtual Private Network).
 The sensor 915 covers various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, a distance measuring sensor, and a force sensor. The sensor 915 acquires information regarding the state of the information processing apparatus 900 itself, such as its attitude and moving speed, and information regarding the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it. The sensor 915 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the apparatus. The sensor 915 can form, for example, the sensor unit 12 shown in FIG. 1.
 An example of a hardware configuration capable of realizing the functions of the information processing apparatus 900 according to the present embodiment has been shown above. Each of the above components may be realized using general-purpose members, or may be realized by hardware specialized for the function of each component. Accordingly, the hardware configuration to be used can be changed as appropriate according to the technical level at the time of carrying out the present embodiment.
 Note that a computer program for realizing each function of the information processing apparatus 900 according to the present embodiment as described above can be produced and mounted on a PC or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. The above computer program may also be distributed, for example, via a network without using a recording medium. The number of computers that execute the computer program is not particularly limited. For example, a plurality of computers (for example, a plurality of servers) may execute the computer program in cooperation with one another. Note that a single computer, or a plurality of computers operating in cooperation, is also referred to as a "computer system".
<<5. Conclusion>>
 As described above, according to the embodiment of the present disclosure, the user can more easily grasp information related to a display object.
 The preferred embodiment(s) of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally also belong to the technical scope of the present disclosure.
 For example, the above description explains an example in which the same apparatus (the information processing apparatus 1) includes the control unit that performs control related to notification, the display unit that performs output related to notification, and the sound output unit, but the present technology is not limited to this example. The apparatus having the function of the control unit may differ from the apparatus having the functions of the display unit and the sound output unit that perform output related to notification. In that case, the control unit may generate a control signal related to the notification and transmit the control signal to the other apparatus, thereby causing the display unit or the sound output unit of the other apparatus to perform output related to the notification.
 Note that the control signal that the control unit generates to control display by the display unit is not limited to a video signal that the display unit can reproduce directly (a so-called RGB signal or the like). For example, the control signal generated by the control unit may be a streaming packet for streaming reproduction, or data including arrangement information and content information (such as HTML data) from which an image is obtained by performing rendering processing.
 Further, the notification method according to the present technology is not limited to output by display or sound. When the information processing apparatus has a vibration output function, a light emission function, or the like, the notification may be performed by vibration output or light emission.
 In the above embodiment, an example in which display objects are displayed on a glasses-type display device having a transmissive display unit has been described, but the present technology is not limited to this example. For example, the present technology may be applied to an information processing apparatus (such as a video see-through HMD) that causes a display unit to display an image generated by superimposing display objects on an image of the real space acquired by a camera. The present technology may also be applied to a head-up display that displays images on the windshield of an automobile or the like, or to a stationary display device. Further, the present technology may be applied to an information processing apparatus that renders an image in which display objects are arranged in a virtual space against the background of the virtual space and displays the image on a non-transmissive display unit. Note that although the above embodiment describes an example in which display objects are displayed against the background of the real space, when the present technology is applied to an information processing apparatus that performs display on a non-transmissive display unit, display objects may be displayed against the background of a virtual space.
 Further, the steps in the above embodiment do not necessarily have to be processed in time series in the order described in the flowcharts. For example, the steps in the processing of the above embodiment may be processed in an order different from the order described in the flowcharts, or may be processed in parallel.
 Further, the effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 Note that the following configurations also belong to the technical scope of the present disclosure.
(1)
 An information processing apparatus including a control unit that, when a user's line of sight is directed toward a display object, causes the user to be notified of first notification information related to the display object.
(2)
 The information processing apparatus according to (1), wherein the display object is displayed on the basis of information regarding a real space.
(3)
 The information processing apparatus according to (1) or (2), wherein the first notification information indicates whether the display object can accept input.
(4)
 The information processing apparatus according to (3), wherein the input is input by voice.
(5)
 The information processing apparatus according to (3) or (4), wherein, when the first notification information indicates that the input cannot be accepted and the input is performed, the control unit causes the user to be notified of second notification information indicating that the display object could not accept the input.
(6)
 The information processing apparatus according to (1) or (2), wherein the first notification information indicates that the display object has been selected.
(7)
 The information processing apparatus according to any one of (1) to (6), wherein, when the user's line of sight is not directed toward the display object, the control unit causes the user to be notified of third notification information related to the display object.
(8)
 The information processing apparatus according to (7), wherein the control unit causes the first notification information and the third notification information to be notified by display, and the third notification information is notified by a display having lower visibility than the first notification information.
(9)
 The information processing apparatus according to any one of (1) to (8), wherein the control unit causes the first notification information to be notified by displaying a predetermined icon at a position corresponding to the position of the display object.
(10)
 The information processing apparatus according to any one of (1) to (9), wherein the control unit causes the first notification information to be notified by displaying the display object in a predetermined shape.
(11)
 The information processing apparatus according to any one of (1) to (10), wherein the control unit causes the first notification information to be notified by displaying the display object in an animation.
(12)
 The information processing apparatus according to any one of (1) to (11), wherein the control unit causes the first notification information to be notified by displaying the display object under a predetermined display condition.
(13)
 The information processing apparatus according to (12), wherein the display condition includes a condition regarding at least one of luminance, transparency, color, and size.
(14)
 An information processing method including causing, by a processor, a user to be notified of first notification information related to a display object when the user's line of sight is directed toward the display object.
(15)
 A program for causing a computer system to realize a function of notifying a user of first notification information related to a display object when the user's line of sight is directed toward the display object.
DESCRIPTION OF SYMBOLS
1 information processing apparatus
10 control unit
12 sensor unit
14 storage unit
16 display unit
17 sound output unit
18 communication unit
101 line-of-sight recognition unit
102 voice recognition unit
103 image recognition unit
104 output control unit

Claims (15)

  1. An information processing apparatus including a control unit that, when a user's line of sight is directed toward a display object, causes the user to be notified of first notification information related to the display object.
  2. The information processing apparatus according to claim 1, wherein the display object is displayed on the basis of information regarding a real space.
  3. The information processing apparatus according to claim 1, wherein the first notification information indicates whether the display object can accept input.
  4. The information processing apparatus according to claim 3, wherein the input is input by voice.
  5. The information processing apparatus according to claim 3, wherein, when the first notification information indicates that the input cannot be accepted and the input is performed, the control unit causes the user to be notified of second notification information indicating that the display object could not accept the input.
  6. The information processing apparatus according to claim 1, wherein the first notification information indicates that the display object has been selected.
  7. The information processing apparatus according to claim 1, wherein, when the user's line of sight is not directed toward the display object, the control unit causes the user to be notified of third notification information related to the display object.
  8. The information processing apparatus according to claim 7, wherein the control unit causes the first notification information and the third notification information to be notified by display, and the third notification information is notified by a display having lower visibility than the first notification information.
  9. The information processing apparatus according to claim 1, wherein the control unit causes the first notification information to be notified by displaying a predetermined icon at a position corresponding to the position of the display object.
  10. The information processing apparatus according to claim 1, wherein the control unit causes the first notification information to be notified by displaying the display object in a predetermined shape.
  11. The information processing apparatus according to claim 1, wherein the control unit causes the first notification information to be notified by displaying the display object in an animation.
  12. The information processing apparatus according to claim 1, wherein the control unit causes the first notification information to be notified by displaying the display object under a predetermined display condition.
  13. The information processing apparatus according to claim 12, wherein the display condition includes a condition regarding at least one of luminance, transparency, color, and size.
  14. An information processing method including causing, by a processor, a user to be notified of first notification information related to a display object when the user's line of sight is directed toward the display object.
  15. A program for causing a computer system to realize a function of notifying a user of first notification information related to a display object when the user's line of sight is directed toward the display object.
PCT/JP2017/011963 2016-06-20 2017-03-24 Information processing device, information processing method, and program WO2017221492A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-121736 2016-06-20
JP2016121736 2016-06-20

Publications (1)

Publication Number Publication Date
WO2017221492A1 true WO2017221492A1 (en) 2017-12-28

Family

ID=60783980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/011963 WO2017221492A1 (en) 2016-06-20 2017-03-24 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2017221492A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10301675A (en) * 1997-02-28 1998-11-13 Toshiba Corp Multimodal interface device and multimodal interface method
JP2011077913A (en) * 2009-09-30 2011-04-14 Nhk Service Center Inc Apparatus and system for processing information transmission, and computer program to be used therefor
JP2015510629A (en) * 2012-01-12 2015-04-09 クゥアルコム・インコーポレイテッドQualcomm Incorporated Augmented reality using sound analysis and geometric analysis
US20140347262A1 (en) * 2013-05-24 2014-11-27 Microsoft Corporation Object display with visual verisimilitude
JP2015219441A (en) * 2014-05-20 2015-12-07 パナソニックIpマネジメント株式会社 Operation support device and operation support method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TETSURO CHINO ET AL.: "GAZE TO TALK: A Non-verbal Interface System with Speech Recognition, Gaze Detection, and Agent CG Output", INFORMATION PROCESSING SOCIETY OF JAPAN, 4 March 1998 (1998-03-04), Retrieved from the Internet <URL:http://www.interaction-ipsj.org/archives/paper1998/pdf1998/paper98-169.pdf> [retrieved on 20170605] *
YUICHI KOYAMA ET AL.: "Behavioral Design of Robot for Remote Active Listening Support in Video Communication", IEICE TECHNICAL REPORT, vol. 109, no. 375, 14 January 2010 (2010-01-14), pages 103-108 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11161047B2 (en) 2018-02-06 2021-11-02 Gree, Inc. Game processing system, method of processing game, and storage medium storing program for processing game
US10981067B2 (en) 2018-02-06 2021-04-20 Gree, Inc. Game processing system, method of processing game, and storage medium storing program for processing game
US10983590B2 (en) 2018-02-06 2021-04-20 Gree, Inc. Application processing system, method of processing application and storage medium storing program for processing application
US10981052B2 (en) 2018-02-06 2021-04-20 Gree, Inc. Game processing system, method of processing game, and storage medium storing program for processing game
US11083959B2 (en) 2018-02-06 2021-08-10 Gree, Inc. Game processing system, method of processing game, and storage medium storing program for processing game
US11110346B2 (en) 2018-02-06 2021-09-07 Gree, Inc. Game processing system, method of processing game, and storage medium storing program for processing game
JP2019136474A (en) * 2018-07-05 2019-08-22 グリー株式会社 Game processing system, game processing method, and game processing program
JP2019136475A (en) * 2018-07-06 2019-08-22 グリー株式会社 Game processing system, game processing method, and game processing program
JP2020198065A (en) * 2019-06-03 2020-12-10 アイドス インタラクティブ コープ Communication with augmented reality virtual agents
JP7440223B2 (en) 2019-06-03 2024-02-28 スクウェア エニックス、リミテッド Communication with virtual agents in augmented reality
CN115460328A (en) * 2019-06-07 2022-12-09 佳能株式会社 Information processing system and information processing method
JP2021034977A (en) * 2019-08-28 2021-03-01 大日本印刷株式会社 Server, reproduction device, content reproduction system, content reproduction method, and program
JP7293991B2 (en) 2019-08-28 2023-06-20 大日本印刷株式会社 Server, playback device, content playback system, content playback method, and program

Similar Documents

Publication Publication Date Title
WO2017221492A1 (en) Information processing device, information processing method, and program
JP6841241B2 (en) Information processing equipment, information processing methods, and programs
JP7020474B2 (en) Information processing equipment, information processing method and recording medium
JP6760271B2 (en) Information processing equipment, information processing methods and programs
WO2017130486A1 (en) Information processing device, information processing method, and program
US20180197342A1 (en) Information processing apparatus, information processing method, and program
WO2018216355A1 (en) Information processing apparatus, information processing method, and program
WO2016125672A1 (en) Information processing device, image processing method, and program
US11768576B2 (en) Displaying representations of environments
US20220191577A1 (en) Changing Resource Utilization associated with a Media Object based on an Engagement Score
WO2016088410A1 (en) Information processing device, information processing method, and program
EP3528024A1 (en) Information processing device, information processing method, and program
JP6575518B2 (en) Display control apparatus, display control method, and program
WO2018135057A1 (en) Information processing device, information processing method, and program
JP6922743B2 (en) Information processing equipment, information processing methods and programs
JP7468506B2 (en) Information processing device, information processing method, and recording medium
JP3621861B2 (en) Conversation information presentation method and immersive virtual communication environment system
US11934627B1 (en) 3D user interface with sliding cylindrical volumes
US20230370578A1 (en) Generating and Displaying Content based on Respective Positions of Individuals
WO2023069591A1 (en) Object-based dual cursor input and guiding system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17814964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17814964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP