WO2017221492A1 - Information processing device, information processing method, and program

Info

Publication number
WO2017221492A1
Authority
WO
Grant status
Application
Prior art keywords
display object
user
information
display
example
Application number
PCT/JP2017/011963
Other languages
French (fr)
Japanese (ja)
Inventor
貴広 岡山
邦在 鳥居
遼 深澤
Original Assignee
Sony Corporation (ソニー株式会社)


Classifications

    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/0346 — Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/038 — Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
    • G06F 3/0482 — Interaction techniques based on graphical user interfaces [GUI]: interaction with lists of selectable items, e.g. menus
    • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • G09G 5/00 — Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/10 — Intensity circuits
    • G09G 5/36 — Control arrangements characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory

Abstract

[Problem] To provide an information processing device, an information processing method, and a program. [Solution] An information processing device provided with a control unit which causes first notification information relating to a display object to be provided to a user when the user's line of sight is directed to the display object.

Description

Information processing apparatus, information processing method, and program

The present disclosure relates to an information processing apparatus, an information processing method, and a program.

Techniques for superimposing objects on a background (a real space or a virtual space) and presenting them to a user have been proposed. For example, Patent Document 1 discloses a technique that, based on an image of a real space, displays an object superimposed on the image of the real space on a non-transmissive display, or displays an object superimposed on the real-space background on a transmissive (see-through) display.

Patent Document 1: JP 2014-106681 A

A user may select the display objects described above, or may perform input such as commands (instructions) to them. However, it is sometimes difficult for the user to grasp information relating to the selection status of a display object, information on whether the display object can accept input, and the like.

In view of this, the present disclosure proposes a new and improved information processing apparatus, information processing method, and program that allow the user to more easily grasp information related to a display object.

According to the present disclosure, there is provided an information processing apparatus including a control unit that, when a user's line of sight is directed to a display object, notifies the user of first notification information relating to the display object.

Further, according to the present disclosure, there is provided an information processing method including notifying, by a processor, a user of first notification information relating to a display object when the user's line of sight is directed to the display object.

Further, according to the present disclosure, there is provided a program for causing a computer system to realize a function of notifying a user of first notification information relating to a display object when the user's line of sight is directed to the display object.

According to the present disclosure described above, the user can more easily grasp information relating to a display object.

Note that the above effect is not necessarily restrictive; together with, or instead of, the above effect, any of the effects shown herein, or other effects that may be grasped from this description, may be achieved.

FIG. 1 is a block diagram showing a configuration example of an information processing apparatus 1 according to an embodiment of the present disclosure.
FIG. 2 is an explanatory diagram showing an example of a display object that the output control unit 104 according to the embodiment displays on the display unit 16.
FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by displaying an icon.
FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape.
FIG. 5 is an explanatory diagram showing an example of a label displayed by the output control unit 104 according to the embodiment.
FIG. 6 is an explanatory diagram for explaining a clipping volume according to the embodiment.
FIG. 7 is an explanatory diagram for explaining Operation Example 1 according to the embodiment.
FIG. 8 is an explanatory diagram for explaining Operation Example 2 according to the embodiment.
FIG. 9 is an explanatory diagram for explaining Operation Example 3 according to the embodiment.
FIG. 10 is an explanatory diagram for explaining Operation Example 4 according to the embodiment.
FIG. 11 is an explanatory diagram for explaining Operation Example 5 according to the embodiment.
FIG. 12 is an explanatory diagram for explaining Operation Example 6 according to the embodiment.
FIG. 13 is an explanatory diagram for explaining Operation Example 7 according to the embodiment.
FIG. 14 is an explanatory diagram for explaining Application Example 1, in which the embodiment is applied to a group chat application for voice chat with other users.
FIG. 15 is an explanatory diagram for explaining Application Example 2, in which the embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.
FIG. 16 is an explanatory diagram for explaining Application Example 3, in which the embodiment is applied to a store locator for finding information about stores present in the field of view.
FIG. 17 is an explanatory diagram showing an example of the hardware configuration.

Preferred embodiments of the present disclosure will now be described in detail. In this specification and the drawings, components having substantially the same function and structure are denoted by the same reference numerals, and repeated explanation thereof is omitted.

The description will be made in the following order.
<< 1. Configuration example >>
<< 2. Operation Example >>
<2-1. Operation Example 1>
<2-2. Operation Example 2>
<2-3. Operation Example 3>
<2-4. Operation Example 4>
<2-5. Operation Example 5>
<2-6. Operation Example 6>
<2-7. Operation Example 7>
<< 3. Applications >>
<3-1. Application Example 1>
<3-2. Application Example 2>
<3-3. Application Example 3>
<< 4. Hardware configuration example >>
<< 5. Conclusion >>

<< 1. Configuration example >>
First, a configuration example of an embodiment of the present disclosure will be described with reference to FIG. 1. FIG. 1 is a block diagram showing a configuration example of an information processing apparatus 1 according to an embodiment of the present disclosure. As shown in FIG. 1, the information processing apparatus 1 according to this embodiment includes a control unit 10, a sensor unit 12, a storage unit 14, a display unit 16, an audio output unit 17, and a communication unit 18.

The information processing apparatus 1 according to this embodiment may be implemented in various forms, for example, an HMD (Head Mounted Display), a mobile phone, a smartphone, a tablet PC (Personal Computer), or a projector. In the following, an example is described in which the information processing apparatus 1 is an eyeglass-type HMD having a transmissive display unit 16.

The control unit 10 controls each component of the information processing apparatus 1. For example, under the control of the control unit 10, the information processing apparatus 1 can provide a variety of applications to the user. As will be described later in the application examples, the information processing apparatus 1 may provide, for example, a group chat application for voice chat with other users, an SNS (Social Networking Service) application for interacting with other users, a store locator application, and the like.

The control unit 10 also functions, as shown in FIG. 1, as a line-of-sight recognition unit 101, a voice recognition unit 102, an image recognition unit 103, and an output control unit 104.

The line-of-sight recognition unit 101 acquires information about the user's line of sight based on information acquired by the sensor unit 12 described later. The information on the user's line of sight according to the present embodiment may include, for example, the direction of the user's gaze, the position of the user's gaze, a focal position, and the like. The position of the user's gaze may be, for example, a position (e.g., coordinates) on the display unit 16 where the display unit 16 and the user's line of sight intersect.

The line-of-sight recognition unit 101 may acquire information about the user's line of sight based on, for example, an image of the user's eye acquired by a camera included in the sensor unit 12 described later. The line-of-sight recognition unit 101 may also acquire information about the user's line of sight based on information acquired by an infrared sensor or another sensor included in the sensor unit 12.

From the information acquired by the sensor unit 12, the line-of-sight recognition unit 101 may detect a reference point of the eye (a point corresponding to an immobile part of the eye, such as the inner corner of the eye or a corneal reflection) and a moving point of the eye (a point corresponding to a moving part of the eye, such as the iris or the pupil). The line-of-sight recognition unit 101 may then acquire information about the user's line of sight based on the position of the moving point relative to the reference point. The method of obtaining information about the user's line of sight by the line-of-sight recognition unit 101 according to the present embodiment is not limited to the above; for example, any line-of-sight detection technique capable of detecting the line of sight may be used.
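
To make the reference-point/moving-point scheme concrete, here is a minimal sketch (not from the patent itself): it assumes the eye camera yields a corneal-reflection reference point and a pupil-center moving point in pixels, and that a per-user calibration has produced an affine map from their offset to coordinates on the display unit 16. All constants and names are hypothetical.

```python
from typing import Tuple

# Hypothetical calibration constants mapping the pupil offset
# (eye-camera pixels) to display coordinates. In practice these
# would come from a per-user calibration procedure.
AX, BX, CX = 42.0, 0.0, 640.0   # x = AX*dx + BX*dy + CX
AY, BY, CY = 0.0, 38.0, 360.0   # y = AY*dx + BY*dy + CY

def gaze_position(reference_point: Tuple[float, float],
                  moving_point: Tuple[float, float]) -> Tuple[float, float]:
    """Estimate where the line of sight meets the display unit.

    reference_point: a point fixed relative to the eye (e.g. a corneal
    reflection); moving_point: e.g. the pupil center.
    Returns (x, y) coordinates on the display.
    """
    dx = moving_point[0] - reference_point[0]
    dy = moving_point[1] - reference_point[1]
    return (AX * dx + BX * dy + CX, AY * dx + BY * dy + CY)

# Example: pupil displaced 2 px right and 1 px down of the reflection.
print(gaze_position((100.0, 80.0), (102.0, 81.0)))  # -> (724.0, 398.0)
```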

The voice recognition unit 102 recognizes the user's speech (voice) picked up by a microphone included in the sensor unit 12 described later, converts it into a character string, and obtains a spoken text. The voice recognition unit 102 may further perform natural language processing, such as morphological analysis or character-string pattern matching, on the spoken text to acquire, for example, a specifying word that specifies a display object and a command such as an instruction to the display object.
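
As a rough illustration of the character-string pattern matching mentioned above (the patent's implementation may instead use morphological analysis or other NLP), the following sketch splits a spoken text such as "Bike Rotate" into a specifying word, matched against display-object labels, and a command from an assumed vocabulary.

```python
from typing import Optional, Tuple

# Assumed command vocabulary; the real set would be application-defined.
KNOWN_COMMANDS = {"rotate", "move", "start chat", "talk", "mail", "search"}

def parse_spoken_text(spoken_text: str,
                      labels: set) -> Tuple[Optional[str], Optional[str]]:
    """Split a recognized utterance into (specifying word, command).

    Tries every split point; the left part must match a display-object
    label and the right part a known command. Returns (None, command)
    for a bare command and (None, None) if nothing matches.
    """
    words = spoken_text.strip().split()
    lowered = [w.lower() for w in words]
    labels_lower = {l.lower() for l in labels}
    for i in range(len(words), -1, -1):
        left, right = " ".join(lowered[:i]), " ".join(lowered[i:])
        if right in KNOWN_COMMANDS and (i == 0 or left in labels_lower):
            return (" ".join(words[:i]) or None, right)
    return (None, None)

print(parse_spoken_text("Bike Rotate", {"Bike", "Menu"}))  # ('Bike', 'rotate')
print(parse_spoken_text("Rotate", {"Bike", "Menu"}))       # (None, 'rotate')
```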

The image recognition unit 103 analyzes an image acquired by a camera or stereo camera included in the sensor unit 12 described later, and acquires information on the real space. For example, by performing a stereo matching method on multiple images acquired at the same time, or an SfM (Structure from Motion) method on a plurality of images acquired in time series, the image recognition unit 103 may acquire three-dimensional shape information of the real space. Further, by matching feature point information prepared in advance against feature point information detected from the captured image, the image recognition unit 103 may recognize an object (real object), a marker, or the like present in the real space, and acquire information on that object or marker. Note that a marker recognized by the image recognition unit 103 may be, for example, a set of texture information or image feature point information of a specific pattern, represented by a two-dimensional code or the like. Further, the object whose information the image recognition unit 103 acquires may be the user's hand or finger, or an operating body performing an operation such as gripping with the hand.

The output control unit 104 controls output according to the present embodiment based on information obtained by the line-of-sight recognition unit 101, the voice recognition unit 102, and the image recognition unit 103. For example, the output control unit 104 controls display (display output) by the display unit 16 described later, and also controls sound output by the audio output unit 17 described later.

The output control unit 104 may display a display object based on, for example, real-space information obtained by the image recognition unit 103 (the three-dimensional shape of the real space, objects present in the space, information on markers, etc.). As described later, when the display unit 16 can provide binocular parallax to the user, the output control unit 104 can display the display object so that it is perceived to be present at a position (three-dimensional position) in the real space corresponding to the real-space information.

FIG. 2 is an explanatory diagram showing an example of a display object that the output control unit 104 causes the display unit 16 to display. A marker M10 shown in FIG. 2 is a marker present in the real space. Based on information on the marker M10 obtained by the image recognition unit 103, the output control unit 104 acquires a display object A10 corresponding to the marker M10 from the storage unit 14. Then, as shown in FIG. 2, the output control unit 104 may display the display object A10 so that it is perceived to be present at a position corresponding to the marker M10 (e.g., above the marker M10).
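
A minimal sketch of this marker-driven placement, with hypothetical names standing in for the storage unit 14 lookup: the recognized marker ID selects a display object, which is anchored slightly above the marker position as in FIG. 2. The offset value is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float  # position in real-space meters

# Stand-in for the storage unit 14: marker ID -> display object name.
MARKER_TO_OBJECT = {"M10": "A10"}

def place_display_object(marker_id: str, marker_pose: Pose,
                         offset_up: float = 0.15):
    """Return (display object, anchor pose) so that the object is
    perceived slightly above the marker, as in FIG. 2; None if the
    marker is unknown. The 0.15 m offset is an assumed value."""
    obj = MARKER_TO_OBJECT.get(marker_id)
    if obj is None:
        return None
    anchor = Pose(marker_pose.x, marker_pose.y + offset_up, marker_pose.z)
    return obj, anchor

print(place_display_object("M10", Pose(0.4, 0.9, 1.2)))
# ('A10', Pose(x=0.4, y=1.05, z=1.2))
```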

Note that the display of display objects by the output control unit 104 is not limited to the above example. For example, based on information on a real object obtained by the image recognition unit 103, the output control unit 104 may acquire a display object corresponding to the real object from the storage unit 14 and display the display object at a position corresponding to the real object. Further, based on three-dimensional shape information of the real space obtained by the image recognition unit 103, the output control unit 104 may display a display object so that it is perceived to exist on a plane in the real space.

Further, the output control unit 104 may display a display object based on information obtained from another device via the communication unit 18. For example, based on position information of another user acquired via the communication unit 18, an avatar indicating the other user may be displayed as a display object so that it is perceived to exist in the direction in which the other user is present. In this case, the output control unit 104 may perform the above display control based on direction information obtained from a geomagnetic sensor or the like included in the sensor unit 12.

Further, the output control unit 104 notifies the user of notification information related to a display object. Various notification methods are possible; for example, the output control unit 104 may perform notification by display by controlling the display unit 16, or by sound output by controlling the audio output unit 17. Examples in which the output control unit 104 notifies the user of notification information are described below.

For example, when the user's line of sight is directed to a display object, the output control unit 104 may notify the user of first notification information relating to the display object. With such a configuration, the user can easily grasp information related to a display object of interest.

The first notification information notified to the user when the user's line of sight is directed to the display object may be, for example, information indicating whether the display object can accept input. With such a configuration, the user can easily grasp whether or not the display object accepts user input, and can be prevented from, for example, attempting input to a display object that cannot accept it.

Further, the first notification information may be information indicating whether the display object can accept voice input (for example, command input). If it is not known whether voice input is accepted, then when a command input by voice to a display object is not executed, it is difficult for the user to know the cause. For example, since voice input generally involves voice recognition processing, it is difficult for the user to determine whether the command was not executed because the voice recognition processing failed, or because the display object cannot accept voice input. As a result, the user is likely to end up meaninglessly repeating voice input to a display object that cannot accept it. However, as described above, by notifying the user of information indicating whether voice input can be accepted, the user can be prevented from, for example, needlessly repeating voice input to a display object that cannot accept voice input.

Note that the acceptability of input to the display object indicated by the first notification information is not limited to the acceptability of voice input; it may be the acceptability of a variety of inputs, such as touch input, input using an operating body, and input by line of sight.

Note that the first notification information is not limited to the above and may indicate a variety of information. For example, the first notification information may be information indicating that the display object is selected. With such a configuration, when, for example, a plurality of display objects are displayed, the user can grasp which object is already selected. Selection of display objects is described later. Moreover, in the group chat application and the SNS application described later, the first notification information may indicate whether interaction (e.g., voice chat) with the user indicated by the display object is possible.

Further, when user input (e.g., input of a command by voice) is performed in a state in which a display object is selected, the output control unit 104 may notify the user of second notification information relating to the selected display object.

For example, when a command is input in a state in which a display object that can accept input is selected, the output control unit 104 performs control related to the display object based on the command, and may notify the user of second notification information indicating that the control has been performed. The control for the display object can vary depending on the command, the display object, and the kind of application. For example, the control for the display object may be display control of the display object, such as rotating or moving it, or may be control for initiating interaction (e.g., voice chat) with the user indicated by the display object.

Also, even when first notification information indicating that the display object cannot accept input has been notified, if input is nevertheless performed, the output control unit 104 may notify the user of second notification information indicating that the display object does not accept the input. With such a configuration, even when the user inputs to a display object that cannot accept input while it is in the selected state, the user can know that the input to the display object is not accepted. As a result, for example, the user can be prevented from further repeating voice input to a display object that cannot accept it.

Note that the second notification information may be notified to the user more explicitly or more strongly than the first notification information. For example, if the first notification information and the second notification information are both notified by display, the second notification information may be notified by a display with higher visibility (more noticeable) than the first notification information. Also, if both are notified by acoustic output, the second notification information may be notified by an acoustic output louder (having a higher sound pressure level) than the first notification information. Alternatively, the first notification information may be notified by display while the second notification information is notified by acoustic output. With such a configuration, for example, the user can more easily grasp that the display object does not accept the input.

Further, when the user's line of sight is not directed to a display object, the output control unit 104 may notify the user of third notification information relating to the display object. With such a configuration, the user can more easily grasp information related to display objects. Note that the third notification information may be, for example, information indicating whether the display object can accept input. The third notification information may also be information indicating whether the display object is selected, or information indicating whether interaction with the user indicated by the display object is possible.

Note that the third notification information may be notified to the user more implicitly or more weakly than the first notification information. For example, if the first notification information and the third notification information are both notified by display, the third notification information may be notified by a display with lower visibility (less conspicuous) than the first notification information. Also, if both are notified by acoustic output, the third notification information may be notified by an acoustic output quieter (having a lower sound pressure level) than the first notification information. With such a configuration, for example, when a plurality of display objects are displayed, the user is less likely to receive a cluttered impression from the notifications.
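
The division of labor among the three kinds of notification information can be summarized as a small decision rule. The sketch below is an interpretation of the text, not code from the patent; the strength weights are assumed values that merely encode the ordering second > first > third described above.

```python
from enum import Enum

class Level(Enum):
    FIRST = "first"    # gaze on object: e.g. input acceptability
    SECOND = "second"  # input attempted on a selected object: strongest
    THIRD = "third"    # gaze not on object: weakest

# Assumed relative strengths (display visibility or sound pressure):
# second > first > third, as described above.
STRENGTH = {Level.SECOND: 1.0, Level.FIRST: 0.6, Level.THIRD: 0.3}

def notification_level(gazed: bool, selected: bool,
                       input_attempted: bool) -> Level:
    """Choose which notification information applies to a display object."""
    if selected and input_attempted:
        return Level.SECOND
    if gazed:
        return Level.FIRST
    return Level.THIRD

lvl = notification_level(gazed=True, selected=True, input_attempted=True)
print(lvl, STRENGTH[lvl])  # Level.SECOND 1.0
```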

Next, notification methods for the first notification information, the second notification information, and the third notification information will be specifically described. In the following, the first notification information, the second notification information, and the third notification information may be collectively referred to as notification information.

For example, the output control unit 104 may notify the notification information by displaying a predetermined icon at a position corresponding to the position of the display object. FIG. 3 is an explanatory diagram for explaining an example in which notification information is notified by displaying an icon. As shown in FIG. 3, an icon T12 indicating that the display object A10 can accept voice input is displayed in the vicinity of the display object A10 (at a position corresponding to the display object A10).

Notification of notification information by displaying icons is not limited to the above example. For example, when the display object cannot accept input, or when interaction with the user indicated by the display object is not possible, an icon including an × (cross) mark, for example, may be displayed in the vicinity of the display object.

Further, the output control unit 104 may notify the notification information by displaying a display object in a predetermined shape. FIG. 4 is an explanatory diagram for explaining an example in which notification information is notified by displaying a display object in a predetermined shape. The display object A10 in FIG. 4 is displayed in a shape wearing a headset T14, indicating that the display object A10 can accept voice input. As shown in FIG. 4, a notification based on equipment of the display object (an example of shape) has low visibility and is suitable, for example, for notification of the third notification information.

Notification of notification information by displaying a display object in a predetermined shape is not limited to the above example. For example, when interaction with the user indicated by an avatar (an example of a display object) is not possible, the avatar may be displayed in a gesture or pose representing rejection appropriate to the user's culture (e.g., making an × mark with the hands). A notification by the avatar's pose (an example of shape) has high visibility and is more suitable for notification of, for example, the first notification information or the second notification information.

Further, the output control unit 104 may notify the notification information by animating the display object. For example, if the display object is an avatar indicating a user, the notification information may be notified by an animation of the avatar. When interaction with the user indicated by the avatar is not possible, the avatar may look in a direction other than the front (an unrelated direction), shake its head, appear to be busy or at work (e.g., facing a desk), move away from the user, or hide in the shadows. Note that, for example, the first notification information or the third notification information may be notified by such an avatar animation.

Further, the output control unit 104 may notify the notification information by displaying a display object under a predetermined display condition. For example, a display object displayed when it cannot accept input may be displayed with higher transparency than a display object displayed when it can accept input. Further, a display object displayed when it cannot accept input may be displayed so as to blend into the surrounding environment more than a display object displayed when it can accept input. For example, by drawing the display object in a more subdued manner so that it dissolves into the real space, a state in which it blends into the surrounding environment can be achieved. Further, a display object displayed when it cannot accept input may be displayed in black and white. As described above, notification of notification information by displaying the display object under a predetermined display condition can lower visibility, and is therefore suitable for notification of the third notification information.

Further, the output control unit 104 may notify the notification information by displaying a display object under a display condition that increases (emphasizes) the visibility of the display object. For example, the output control unit 104 may notify the notification information by displaying the display object with a glow expression that illuminates the periphery of the display object, a blinking display condition, or the like. Notification under a display condition that increases the visibility of the display object makes it possible to highlight the display object against other display objects. Therefore, such notification is suitable, for example, for notification information indicating that an object has been selected, or notification information indicating that control based on input has been performed.

The display conditions according to the present embodiment are not limited to the above and may include a variety of conditions on, for example, brightness, opacity, color, and size.

The notification methods of the notification information by the output control unit 104 have been described above. Next, the determination of whether the user's line of sight is directed to a display object, which the output control unit 104 uses to decide whether to notify the first notification information, will be described.

The determination of whether the user's line of sight is directed to a display object may be performed, for example, based on whether the position of the line of sight is included within the display range of the display object on the display unit 16. The determination may also be performed based on whether the line of sight intersects a three-dimensional region in which the display object is perceived to be present in the real space.

The determination of whether the user's line of sight is directed to a display object may also be performed, for example, when a gaze condition is met, i.e., when the position of the line of sight stays within a predetermined range for a predetermined time. In the gaze condition, the object to which the user's line of sight is directed (located ahead of the user's line of sight) may be referred to as the gaze object below.
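
Read as a dwell test, the gaze condition could be implemented along the following lines; the dwell time and radius are assumed values, not taken from the patent.

```python
import math
import time

DWELL_SECONDS = 1.0   # assumed "predetermined time"
RADIUS_PX = 30.0      # assumed "predetermined range" on the display

class GazeDwellDetector:
    """Reports a gaze state once gaze samples have stayed within
    RADIUS_PX of the start of the current fixation for DWELL_SECONDS."""

    def __init__(self):
        self.anchor = None       # (x, y) where the current fixation began
        self.start_time = None

    def update(self, x, y, now=None):
        now = time.monotonic() if now is None else now
        if self.anchor is None or math.dist(self.anchor, (x, y)) > RADIUS_PX:
            self.anchor, self.start_time = (x, y), now   # fixation restarts
            return False
        return now - self.start_time >= DWELL_SECONDS

detector = GazeDwellDetector()
for t in (0.0, 0.4, 0.8, 1.2):            # steady samples near (100, 100)
    gazing = detector.update(100.0, 101.0, now=t)
print(gazing)  # True: the gaze condition is met after 1.2 s
```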

Further, the output control unit 104 manages the selection state of display objects. For example, when it is determined that the user's line of sight is directed to a display object, the output control unit 104 may set the display object to the selected state. When a display object enters the selected state because the user's line of sight is directed to it, first notification information indicating that it has been selected may be notified, as described above.

Further, the output control unit 104 may set a display object to the selected state based on the result of speech recognition by the voice recognition unit 102. For example, when a specifying word that specifies a display object, acquired by the voice recognition unit 102, matches a label attached to the display object (meta information such as text indicating the display object), the output control unit 104 may set the display object to the selected state.

Further, the output control unit 104 may display the label at a position corresponding to the position of the display object. For example, the label may be associated with the display object and stored in advance in the storage unit 14 described later, or may be obtained from the outside via the communication unit 18.

FIG. 5 is an explanatory diagram showing an example of labels displayed by the output control unit 104. As shown in FIG. 5, the text "Bike" indicating the display object A22 is associated with the display object A22 as a label L22, and is displayed at a position corresponding to the position of the display object A22. Similarly, the text "Menu" indicating the display object A24 is associated with the display object A24 as a label L24, and is displayed at a position corresponding to the position of the display object A24. As shown in FIG. 5, even when a plurality of display objects are displayed, the user can grasp the labels and specify by voice the display object to be selected.

Further, the output control unit 104 may set display objects that exist within the field of view of the user to the selected state. For example, the output control unit 104 extracts display objects that exist within the current field of view of the user and sets them to the selected state; when a command is acquired by the voice recognition unit 102, control based on the command may be performed on the display objects set to the selected state.

A display object that exists within the field of view of the user according to the present embodiment may be, for example, a display object included in a clipping volume. FIG. 6 is an explanatory diagram for explaining the clipping volume. A point C1 shown in FIG. 6 indicates the position of the user. As shown in FIG. 6, the clipping volume V1 is the space within a pyramid having the point C1 as its apex, sandwiched between a front clipping plane P1 that exists at a first distance from the point C1 and a back clipping plane P2 that exists at a second distance from the point C1, the second distance being larger than the first distance.

The output control unit 104's determination of whether to display a display object may likewise be performed based on whether the object is included in a clipping volume. The first and second distances of the clipping volume used for the display determination may be the same as, or different from, the first and second distances of the clipping volume used for selecting display objects.

Further, the output control unit 104 may exclude from the selected state a display object that, although included in the clipping volume, is concealed by an object present in the real space or by another display object and is therefore not displayed. Further, the output control unit 104 may vary the shape of the above-described clipping volume based on the information about the user's line of sight.
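
The clipping-volume membership test of FIG. 6 amounts to a point-in-frustum check. The sketch below simplifies the four-sided pyramid to a cone-like volume given by a view direction and half-angle; the near/far distances and half-angle are assumed parameters, not values from the patent.

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def in_clipping_volume(c1: Vec3, view_dir: Vec3, point: Vec3,
                       near: float = 0.3, far: float = 10.0,
                       half_angle_deg: float = 30.0) -> bool:
    """True if `point` lies between the front plane (distance `near`)
    and back plane (distance `far`) of the viewing volume with apex c1,
    within `half_angle_deg` of the view direction."""
    v = tuple(p - c for p, c in zip(point, c1))
    d_len = math.sqrt(sum(x * x for x in view_dir))
    dist_along = sum(a * b for a, b in zip(v, view_dir)) / d_len
    if not (near <= dist_along <= far):
        return False                     # outside front/back planes
    v_len = math.sqrt(sum(x * x for x in v))
    if v_len == 0.0:
        return False
    cos_angle = dist_along / v_len       # angle off the view axis
    return cos_angle >= math.cos(math.radians(half_angle_deg))

user, forward = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(in_clipping_volume(user, forward, (0.5, 0.0, 2.0)))  # True
print(in_clipping_volume(user, forward, (5.0, 0.0, 2.0)))  # False: off-axis
```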

Further, the output control unit 104 may display various icons, buttons, and the like used by the user for input or operation.

The sensor unit 12 shown in FIG. 1 is a sensor device for sensing information around the information processing apparatus 1. For example, the sensor unit 12 may include one or more microphones for picking up ambient sounds. The sensor unit 12 may include a camera or stereo camera for acquiring images of the surroundings, and a camera for acquiring images of the user's eyes. The sensor unit 12 may also include an infrared sensor, a distance sensor, a motion sensor, a biometric information sensor, an acceleration sensor, a gyro sensor, a geomagnetic sensor, and the like.

The storage unit 14 stores programs and parameters with which each component of the information processing apparatus 1 functions. For example, the storage unit 14 may store feature point information used by the image recognition unit 103, display objects used by the output control unit 104, labels (meta information) associated with the display objects, and the like.

The display unit 16 is a display that is controlled by the output control unit 104 and displays various information. The display unit 16 may be, for example, a transmissive (optical see-through), binocular, glasses-type display. With such a configuration, the output control unit 104 can display a display object so that it is perceived to be present at an arbitrary three-dimensional position in the real space.

The audio output unit 17 is a device that outputs sound, such as a speaker or earphones. The audio output unit 17 has a function of converting an acoustic signal into sound under the control of the output control unit 104.

The communication unit 18 is a communication interface that mediates communication with other devices. The communication unit 18 supports an arbitrary wireless or wired communication protocol and establishes communication connections with other devices via a communication network (not shown). This makes it possible, for example, to perform the communication needed for the information processing apparatus 1 to provide various applications to the user.

<< 2. Operation Example >>
This completes the description of the configuration of the information processing apparatus 1 according to this embodiment. Next, some operation examples of the information processing apparatus 1 according to the present embodiment will be described with reference to FIGS. 7 to 13. The operation examples described below may be combined, and may be switched in response to user input.

<2-1. Operation Example 1>
FIG. 7 is an explanatory diagram for explaining Operation Example 1 of the present embodiment. In Operation Example 1, the user can specify a display object by voice and input a command for that display object.

First, as shown in FIG. 7, the output control unit 104 displays display objects and the labels associated with the display objects (S102). Then, when a voice is input by the user (YES in S104), the voice recognition unit 102 acquires a specifying word that specifies a display object and a command, and a search of the labels is performed based on the specifying word (S106). In step S104, the user may utter a specifying word (Bike) followed by a command (Rotate), as in, for example, "Bike Rotate".

If, among the currently displayed labels, a label that matches the specifying word is found (YES in S108), control based on the command is performed on the display object corresponding to the found label (S112).

On the other hand, if no voice is input (NO in S104), or if no label matching the specifying word is found among the currently displayed labels (NO in S108), the process ends.
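
Steps S102 to S112 can be strung together roughly as follows; this is a sketch, not the patent's implementation. The label table and command handlers are hypothetical, and a simple two-word split stands in for the voice recognition unit 102.

```python
# Sketch of Operation Example 1 (S102-S112); names are hypothetical.
LABELS = {"bike": "display_object_A22", "menu": "display_object_A24"}

def rotate(obj: str) -> str:
    return f"rotating {obj}"

COMMANDS = {"rotate": rotate}

def handle_utterance(spoken_text: str) -> str:
    """S106: search labels for the specifying word; S112: run the
    command on the matching display object; otherwise end (NO in S108)."""
    parts = spoken_text.lower().split()
    if len(parts) != 2:
        return "process ends"
    specifier, command = parts
    obj = LABELS.get(specifier)          # label search (S106/S108)
    handler = COMMANDS.get(command)
    if obj is None or handler is None:
        return "process ends"
    return handler(obj)                  # control based on the command (S112)

print(handle_utterance("Bike Rotate"))   # rotating display_object_A22
print(handle_utterance("Car Rotate"))    # process ends (no matching label)
```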

<2-2. Operation Example 2>
FIG. 8 is an explanatory diagram for explaining Operation Example 2 according to the present embodiment. In Operation Example 2, the user can input a command by voice to all display objects that can accept voice input.

First, as shown in FIG. 8, the output control unit 104 displays display objects and icons indicating whether each display object can accept voice input (S122). Then, when a voice is input by the user (YES in S124), a command is acquired by the voice recognition unit 102, and control based on the command is performed on the display objects that accept voice input (S126). In step S124, the user may, for example, utter only the command. On the other hand, if no voice is input (NO in S124), the process ends.

<2-3. Operation Example 3>
FIG. 9 is an explanatory diagram for explaining Operation Example 3 of the present embodiment. In Operation Example 3, the user can select a display object by gaze and input a command by voice for the selected display object.

First, as shown in FIG. 9, the output control unit 104 displays display objects (S202). Next, it is determined whether a gaze state exists based on the information about the line of sight (S204). If there is a gaze state (YES in S204) and a display object exists ahead of the gaze (YES in S206), the display object (gaze object) is set to the selected state (S208).

Note that, in step S208, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected, or may notify the user of notification information indicating whether the gaze object can accept voice input. Also, if the gaze object enters the gaze state again, the selected state of the gaze object may be released (canceled).

Next, when the user inputs a command by voice (YES in S212), control for the selected gaze object is performed based on the command (S214). Note that, in step S214, if voice input for the selected gaze object is not possible, the output control unit 104 may notify the user of second notification information indicating that the gaze object does not accept the input.

If there is no gaze state (NO in S204), if no display object exists ahead of the gaze (NO in S206), or if no voice command is input (NO in S212), the process ends.

<2-4. Operation Example 4>
FIG. 10 is an explanatory diagram for explaining Operation Example 4 of the present embodiment. In Operation Example 4, the user selects a display object by gaze, then switches the display object to the voice input mode by a further gaze, whereupon a command can be input by voice.

First, as shown in FIG. 10, the output control unit 104 displays display objects (S222). Next, it is determined whether a gaze state (a first gaze state) exists based on the information about the line of sight (S224). If there is a gaze state (YES in S224) and a display object exists ahead of the gaze (YES in S226), the display object (gaze object) is set to the selected state (S228).

Note that, in step S228, the output control unit 104 may notify the user of notification information indicating that the gaze object has been selected. Further, in step S228, if voice input to the gaze object is not possible, the gaze object may not be set to the selected state.

Then, based on the information about the line of sight, it is determined whether there is a further gaze state (a second gaze state) with respect to the selected display object (S232). Note that the determination of the first gaze state in step S224 and the determination of the second gaze state in step S232 may use different gaze times.

If there is a second gaze state (YES in S232), the selected display object (gaze object) is switched to the voice input mode by the control unit 10 (S234).

Next, when the user inputs a command by voice (YES in S236), control for the selected gaze object that has been switched to the voice input mode is performed based on the command (S238).

If there is no first gaze state (NO in S224), if no display object exists ahead of the line of sight (NO in S226), if there is no second gaze state (NO in S232), or if no voice command is input (NO in S236), the process ends.

According to Operation Example 4 described above, the user switches a display object to the voice input mode by performing a two-stage gaze, which suppresses accidentally performing voice input to the wrong display object.
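
The flow of FIG. 10 is essentially a small state machine. Below is a sketch with assumed dwell times (as noted above, the first and second gaze determinations may use different times); it is an interpretation, not the patent's code.

```python
# Sketch of Operation Example 4; dwell times are assumed values.
FIRST_GAZE_S = 0.8    # S224: dwell needed to select
SECOND_GAZE_S = 1.5   # S232: longer dwell to enter voice input mode

class TwoStageGaze:
    def __init__(self):
        self.state = "idle"       # idle -> selected -> voice_input
        self.gaze_started = None

    def update(self, gazing_at_object: bool, now: float) -> str:
        if not gazing_at_object:
            self.gaze_started = None      # gaze broken: reset the timer
            return self.state
        if self.gaze_started is None:
            self.gaze_started = now
        held = now - self.gaze_started
        if self.state == "idle" and held >= FIRST_GAZE_S:
            self.state, self.gaze_started = "selected", now   # S228
        elif self.state == "selected" and held >= SECOND_GAZE_S:
            self.state = "voice_input"                        # S234
        return self.state

sm = TwoStageGaze()
for t in (0.0, 0.5, 1.0, 1.5, 2.0, 2.6):
    state = sm.update(True, t)
print(state)  # voice_input: selection, then mode switch after further gaze
```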

<2-5. Operation Example 5>
FIG. 11 is an explanatory diagram for explaining Operation Example 5 of the present embodiment. In Operation Example 5, the user can select one or more display objects by gaze, and then switch the selected display objects to the voice input mode by further gazing at a voice input icon.

First, as shown in FIG. 11, the output control unit 104 displays display objects and a voice input icon for switching display objects to the voice input mode (S242).

Next, it is determined whether a gaze state exists based on the information about the line of sight (S244). If there is a gaze state (YES in S244) and an unselected display object exists ahead of the gaze (YES in S246), the display object (gaze object) is set to the selected state (S248).

Then, the process returns to step S244. Each time the processing of steps S244 to S248 is repeated, one display object is set to the selected state.

If there is a gaze state (YES in S244) but no unselected display object exists ahead of the gaze (NO in S246), it is determined whether the voice input icon exists ahead of the line of sight (S250).

If the voice input icon exists ahead of the line of sight (YES in S250), all display objects currently selected are switched to the voice input mode (S252).

Next, when the user inputs a command by voice (YES in S254), control for the selected gaze objects that have been switched to the voice input mode is performed based on the command (S256).

On the other hand, if the voice input icon does not exist ahead of the line of sight (NO in S250), or if no voice command is input (NO in S254), the process ends.

According to Operation Example 5 described above, the user can select a plurality of display objects based on the line of sight, and can perform voice input to the plurality of display objects at the same time.
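
The accumulation loop of steps S244 to S252 could look like the sketch below; resolving what lies ahead of the line of sight is assumed to happen elsewhere, and each call receives the current gaze target. Names are hypothetical.

```python
# Sketch of Operation Example 5 (S242-S256); names are hypothetical.
VOICE_INPUT_ICON = "voice_input_icon"

class MultiSelect:
    def __init__(self):
        self.selected = []    # display objects set to the selected state
        self.voice_mode = []  # objects switched to the voice input mode

    def on_gaze(self, target: str) -> None:
        """S246/S248: gazing at an unselected object selects it;
        S250/S252: gazing at the voice input icon switches all
        selected objects to the voice input mode."""
        if target == VOICE_INPUT_ICON:
            self.voice_mode = list(self.selected)
        elif target not in self.selected:
            self.selected.append(target)

    def on_voice_command(self, command: str):
        """S254/S256: apply the command to every object in voice mode."""
        return [f"{command} -> {obj}" for obj in self.voice_mode]

ms = MultiSelect()
for gaze_target in ("A22", "A24", VOICE_INPUT_ICON):
    ms.on_gaze(gaze_target)
print(ms.on_voice_command("Rotate"))
# ['Rotate -> A22', 'Rotate -> A24']
```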

<2-6. Operation Example 6>
FIG. 12 is an explanatory diagram for explaining Operation Example 6 of the present embodiment. In Operation Example 6, the user can select display objects present in the field of view as a group.

First, as shown in FIG. 12, the output control unit 104 displays display objects (S302). The output control unit 104 extracts the display objects that exist within the current field of view of the user and sets them to the selected state (S304). The processing in step S304 may be performed based on whether each display object is included in the clipping volume described with reference to FIG. 6.

Next, when the user inputs a command by voice (YES in S306), control for the extracted and selected display objects is performed based on the command (S308). On the other hand, if no voice command is input (NO in S306), the process ends.

According to Operation Example 6 described above, the user can select a plurality of display objects present in the field of view as a group, and can perform voice input to the plurality of display objects at the same time.

<2-7. Operation Example 7>
FIG. 13 is an explanatory diagram for explaining Operation Example 7 according to the present embodiment. In Operation Example 7, the user groups display objects by moving them with an operating body such as the user's own hand, and can then perform voice input to the grouped display objects at the same time.

First, as shown in FIG. 13, the output control unit 104 displays display objects on a plane or the like in front of the user's eyes (S402). Next, the user groups the display objects by gathering them together through an operation of pinching and moving them with an operating body such as the user's own hand (S404). Note that, in step S404, the output control unit 104 may perform control to move the display position of a display object based on the positional relationship between the operating body and the display object recognized by the image recognition unit 103.

Then, the user moves the line of sight so that the grouped display objects are included in the field of view (S406). Next, when it is determined, based on the information about the line of sight obtained by the line-of-sight recognition unit 101, that the user's focus matches the screen surface, the output control unit 104 displays a voice input icon at the center of the screen (S412). Note that the voice input icon displayed in step S412 is an icon indicating that voice input has been enabled, and differs from the voice input icon displayed in step S242 of FIG. 11.

Next, when the user inputs a command by voice (YES in S414), control for the display objects present within the field of view is performed based on the command (S416). On the other hand, if no voice command is input (NO in S414), the process ends.

According to Operation Example 7 described above, after grouping a plurality of display objects and moving them so that they fit into the field of view, the user can perform voice input all together to the plurality of display objects present in the field of view.

<< 3. Applications >>
A configuration example and operation examples of the present embodiment have been described above. Next, as application examples of the present embodiment, some examples in which the present embodiment is applied to applications will be described.

<3-1. Application Example 1>
FIG. 14 is an explanatory diagram for explaining Application Example 1, in which the present embodiment is applied to a group chat application for voice chat with other users.

First, as shown in FIG. 14, the output control unit 104 displays a list of avatars (an example of display objects) indicating other users, lined up in front of the user's eyes (S502). Then, avatars are selected by the user (S506). The selection of avatars in step S506 may be, for example, a selection based on the user's line of sight, as described with reference to FIGS. 9 to 11.

Then, when a command to start a conversation (e.g., "Start Chat") is input by voice (YES in S508), voice chat with the users indicated by all of the selected avatars is started under the control of the control unit 10 (S512). In step S512, the avatars that are not selected may cease to be displayed, for example by moving out of the field of view.

On the other hand, if no voice command is input (NO in S508), the process ends.

Note that, after the voice chat is started in step S512, the control unit 10 may perform control such that, when the user pinches an avatar with an operating body such as the user's own hand and moves it out of the field of view, the user indicated by that avatar is excluded from the voice chat.

<3-2. Application Example 2>
FIG. 15 is an explanatory diagram for explaining Application Example 2, in which the present embodiment is applied to an SNS (Social Networking Service) application for interacting with other users.

First, as shown in FIG. 15, the output control unit 104 displays friend avatars (an example of display objects) corresponding to the user's exchange partners, and labels (names) corresponding to the friend avatars (S522). The output control unit 104 may perform the processing of step S522 triggered by, for example, the user's line of sight being directed upward. Furthermore, the friend avatars may be displayed together with photos or icons of the corresponding users, and may be displayed in the direction in which those users are present.

Then, a friend avatar is selected by the user (S526). The selection of the friend avatar in step S526 may be, for example, a selection by specifying the label by voice as described with reference to FIG. 7, or a selection based on the user's line of sight as described with reference to FIGS. 9 to 11.

Then the command (e.g., "Talk", "Mail", etc.) is input by voice (YES in S528), the control of the controller 10, the control for the selected friend is performed based on the command ( S530).

On the other hand, if no voice command is input (NO in S528), the process ends.

Note that, in the SNS application described above, when a user is active (in a state in which interaction such as conversation is possible), the face of the friend avatar corresponding to that user may be displayed directed toward the user. Also, when a user is not active (in a state in which interaction such as conversation is not possible), the friend avatar corresponding to that user may be displayed with its face turned away.


<3-3. Application Example 3>
FIG. 16 is an explanatory diagram for explaining Application Example 3, in which the present embodiment is applied to a store locator with which, for example, a user walking around town searches for information on stores present in the field of view.

First, as shown in FIG. 16, the output control unit 104 displays icons (an example of display objects) associated with stores present in the field of view of the user (S542). Note that an icon associated with a store may be, for example, an icon indicating the type of the store, and may be displayed so as to be perceived to be present in the vicinity of the store, like so-called POP (point of purchase) advertising.

Then, the user, while looking toward a store of interest (while keeping the store in sight), inputs by voice a command requesting a search (e.g., "Search") (YES in S544).

Next, the output control unit 104 performs a search for information on the target display objects present in the field of view of the user, and updates the display in accordance with the search results (S546). For example, the output control unit 104 may increase the visibility (e.g., with a glow expression) of icons associated with stores whose information was acquired as a result of the search, making it easy to understand that information about those stores has been acquired. Further, when the user selects an icon associated with a store whose information has been acquired, the acquired information may be displayed.

On the other hand, if no voice command is input (NO in S544), the process ends.

<< 4. Hardware configuration example >>
An embodiment of the present disclosure has been described above. Finally, a hardware configuration of an information processing apparatus according to the present embodiment will be described with reference to FIG. 17. FIG. 17 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the present embodiment. The information processing apparatus 900 shown in FIG. 17 can realize, for example, the information processing apparatus 1 shown in FIG. 1. Processing by the information processing apparatus 1 according to this embodiment is realized by the cooperation of software and the hardware described below.

As shown in FIG. 17, the information processing apparatus 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, and a host bus 904a. The information processing apparatus 900 also includes a bridge 904, an external bus 904b, an interface 905, an input device 906, an output device 907, a storage device 908, a drive 909, a connection port 911, a communication device 913, and a sensor 915. The information processing apparatus 900 may have a processing circuit such as a DSP or an ASIC in place of, or together with, the CPU 901.

The CPU 901 functions as an arithmetic processing device and a control device, and controls the overall operation of the information processing apparatus 900 according to various programs. The CPU 901 may also be a microprocessor. The ROM 902 stores programs, operation parameters, and the like used by the CPU 901. The RAM 903 temporarily stores programs used in the execution of the CPU 901 and parameters that change as appropriate during that execution. The CPU 901 forms, for example, the control unit 10 shown in FIG. 1.

The CPU 901, the ROM 902, and the RAM 903 are connected to each other by the host bus 904a, which includes a CPU bus and the like. The host bus 904a is connected via the bridge 904 to the external bus 904b, such as a PCI (Peripheral Component Interconnect/Interface) bus. Note that the host bus 904a, the bridge 904, and the external bus 904b do not necessarily have to be configured separately, and these functions may be implemented on a single bus.

The input device 906 is realized by a device to which information is input by the user, such as a mouse, a keyboard, a touch panel, a button, a microphone, a switch, or a lever. The input device 906 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA that supports operation of the information processing apparatus 900. Furthermore, the input device 906 may include, for example, an input control circuit that generates an input signal based on the information input by the user using the above input means and outputs the signal to the CPU 901. By operating the input device 906, the user of the information processing apparatus 900 can input various data to the information processing apparatus 900 and instruct it to perform processing operations.

The output device 907 is formed of a device capable of visually or audibly notifying the user of acquired information. Such devices include display devices such as a CRT display device, a liquid crystal display device, a plasma display device, an EL display device, and a lamp; audio output devices such as a speaker and headphones; and a printer device. The output device 907 outputs, for example, results obtained by various kinds of processing performed by the information processing apparatus 900. Specifically, the display device visually displays the results obtained by the various kinds of processing performed by the information processing apparatus 900 in various formats such as text, images, tables, and graphs. The audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly. The display device and the audio output device form, for example, the display unit 16 and the sound output unit 17 shown in FIG. 1.

The storage device 908 is a device for data storage formed as an example of a storage unit of the information processing apparatus 900. The storage device 908 is realized by, for example, a magnetic storage device such as an HDD, a semiconductor storage device, an optical storage device, or a magneto-optical storage device. The storage device 908 may include a storage medium, a recording device that records data on the storage medium, a reading device that reads data from the storage medium, and a deleting device that deletes data recorded on the storage medium. The storage device 908 stores programs executed by the CPU 901, various data, and various data acquired from the outside.

The drive 909 is a reader/writer for a storage medium, and is built into the information processing apparatus 900 or externally attached. The drive 909 reads information recorded on a mounted removable storage medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, and outputs the information to the RAM 903. The drive 909 can also write information to the removable storage medium.

The connection port 911 is an interface connected to an external device, and is a connection port to an external device capable of data transmission, such as a USB (Universal Serial Bus) port.

The communication device 913 is a communication interface formed by, for example, a communication device for connecting to a network 920. The communication device 913 is, for example, a communication card for wired or wireless LAN (Local Area Network), LTE (Long Term Evolution), Bluetooth (registered trademark), or WUSB (Wireless USB). The communication device 913 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication. The communication device 913 can, for example, transmit and receive signals to and from the Internet and other communication devices in accordance with a predetermined protocol such as TCP/IP. The communication device 913 forms, for example, the communication unit 18 shown in FIG. 1.

The network 920 is a wired or wireless transmission path for information transmitted from devices connected to the network 920. For example, the network 920 may include a public line network such as the Internet, a telephone network, or a satellite communication network; various LANs (Local Area Networks) including Ethernet (registered trademark); and a WAN (Wide Area Network). The network 920 may also include a leased line network such as an IP-VPN (Internet Protocol-Virtual Private Network).

The sensor 915 is, for example, any of various sensors such as an acceleration sensor, a gyro sensor, a geomagnetic sensor, an optical sensor, a sound sensor, a ranging sensor, and a force sensor. The sensor 915 acquires information about the state of the information processing apparatus 900 itself, such as its posture and moving speed, and information about the surrounding environment of the information processing apparatus 900, such as the brightness and noise around it. The sensor 915 may also include a GPS sensor that receives GPS signals and measures the latitude, longitude, and altitude of the apparatus. The sensor 915 forms, for example, the sensor unit 12 shown in FIG. 1.

An example of the hardware configuration capable of realizing the functions of the information processing apparatus 900 according to the present embodiment has been shown above. Each of the components described above may be implemented using a general-purpose member, or may be implemented by hardware specialized for the function of that component. Accordingly, the hardware configuration to be used can be changed as appropriate according to the technical level at the time the present embodiment is carried out.

Note that a computer program for realizing each function of the information processing apparatus 900 according to the present embodiment as described above can be created and implemented on a PC or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. The computer program may also be distributed, for example, via a network without using a recording medium. The number of computers that execute the computer program is not particularly limited; for example, the computer program may be executed by a plurality of computers (for example, a plurality of servers) in cooperation with each other. Note that a single computer, or a plurality of computers working together, is also referred to as a "computer system".

<< 5. Conclusion >>
As described above, according to the embodiments of the present disclosure, the user can more easily grasp information relating to a display object.

Although preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes and modifications within the scope of the technical idea described in the claims, and it should be understood that these also naturally belong to the technical scope of the present disclosure.

For example, in the above description, the control unit that performs control according to a notification, the display unit that outputs the notification, and the sound output unit are included in the same apparatus (the information processing apparatus 1), but the present technology is not limited to this example. The apparatus having the function of the control unit may be different from the apparatus having the display unit that outputs the notification and the apparatus having the function of the sound output unit. In such a case, the control unit may generate a control signal according to the notification and transmit the control signal to the other apparatus, thereby causing the display unit and the sound output unit of the other apparatus to output the notification.

The control signal generated by the control unit to control display by the display unit is not limited to a video signal that the display unit can play directly (a so-called RGB signal or the like). For example, the control signal generated by the control unit may be streaming packets for performing streaming playback, an image obtained by performing rendering processing, or data including position information and content information (HTML data or the like).
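The signal variants listed above might be modeled as in the following sketch; the SignalKind names and the transport.send call are illustrative assumptions rather than an interface defined by the present technology.

```python
from enum import Enum, auto

class SignalKind(Enum):
    RGB_VIDEO = auto()        # directly playable video signal (so-called RGB signal)
    STREAM_PACKETS = auto()   # packets for streaming playback
    RENDERED_IMAGE = auto()   # image obtained by rendering processing
    LAYOUT_DATA = auto()      # position information plus content (HTML-like data)

def send_display_control(transport, kind: SignalKind, payload: bytes) -> None:
    """Transmit a control signal to another apparatus, whose display unit
    then performs the actual output of the notification."""
    transport.send({"kind": kind.name, "payload": payload})
```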

The notification method according to the present technology is not limited to output by display or sound. If the information processing apparatus has a vibration output function, a light emitting function, or the like, the notification may be performed by vibration output or light emission.

In the above embodiments, an example has been described in which the display object is displayed on a glasses-type display device having a transmissive display unit, but the present technology is not limited to this example. For example, the present technology may be applied to an information processing apparatus (a video see-through HMD or the like) that displays, on a display unit, an image generated by superimposing a display object on an image of real space acquired by a camera. The present technology may also be applied to a head-up display that displays an image on the windshield of an automobile, or to a stationary display device. The present technology may further be applied to an information processing apparatus that renders an image in which display objects are arranged in a virtual space, with the virtual space as the background, and displays the image on a non-transmissive display unit. In the above embodiments, an example has been described in which the display object is displayed with real space as the background; however, when the present technology is applied to an information processing apparatus that performs display on a non-transmissive display unit, the display object may be displayed with a virtual space as the background.
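For the video see-through case, superimposing a display object on a camera image amounts to an alpha blend; the following is a minimal sketch under the assumption of same-sized RGB and RGBA arrays, not an implementation prescribed by the disclosure.

```python
import numpy as np

def compose_see_through_frame(camera_rgb: np.ndarray,
                              overlay_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered display object (H x W x 4) over a camera image
    of real space (H x W x 3), as a video see-through HMD might."""
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = (overlay_rgba[..., :3].astype(np.float32) * alpha
               + camera_rgb.astype(np.float32) * (1.0 - alpha))
    return blended.astype(np.uint8)
```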

Each step in the above embodiments does not necessarily have to be processed in chronological order according to the order described in the flowcharts. For example, each step in the processing of the above embodiments may be processed in an order different from the order described in the flowcharts, or may be processed in parallel.

The effects described in this specification are merely illustrative or exemplary and are not limiting. In other words, the technology according to the present disclosure can exhibit, together with or instead of the above effects, other effects that are obvious to those skilled in the art from the description of this specification.

Note that the following configurations also belong to the technical scope of the present disclosure.
(1)
An information processing apparatus including a control unit that, when a line of sight of a user is directed to a display object, notifies the user of first notification information according to the display object.
(2)
The information processing apparatus according to (1), wherein the display object is displayed based on information of real space.
(3)
The information processing apparatus according to (1) or (2), wherein the first notification information indicates whether input to the display object can be accepted.
(4)
The information processing apparatus according to (3), wherein the input is input by voice.
(5)
The information processing apparatus according to (3) or (4), wherein, when the first notification information indicates that acceptance of the input is impossible and the input has been performed, the control unit notifies the user of second notification information indicating that the display object has not accepted the input.
(6)
The information processing apparatus according to (1) or (2), wherein the first notification information indicates that the display object is selected.
(7)
The information processing apparatus according to any one of (1) to (6), wherein, when the line of sight of the user is not directed to the display object, the control unit notifies the user of third notification information according to the display object.
(8)
The information processing apparatus according to (7), wherein the control unit performs notification by displaying the first notification information and the third notification information, and
the third notification information is displayed with lower visibility than the first notification information.
(9)
The information processing apparatus according to any one of (1) to (8), wherein the control unit notifies the first notification information by displaying a predetermined icon at a position corresponding to the position of the display object.
(10)
The information processing apparatus according to any one of (1) to (9), wherein the control unit notifies the first notification information by displaying the display object in a predetermined shape.
(11)
The information processing apparatus according to any one of (1) to (10), wherein the control unit notifies the first notification information by displaying the display object with animation.
(12)
The information processing apparatus according to any one of (1) to (11), wherein the control unit notifies the first notification information by displaying the display object under a predetermined display condition.
(13)
The information processing apparatus according to (12), wherein the display condition includes a condition relating to at least one of brightness, opacity, color, and size.
(14)
An information processing method including:
notifying, by a processor, a user of first notification information according to a display object when a line of sight of the user is directed to the display object.
(15)
A program for causing a computer system to realize:
a function of notifying a user of first notification information according to a display object when a line of sight of the user is directed to the display object.

1 information processing apparatus, 10 control unit, 12 sensor unit, 14 storage unit, 16 display unit, 17 sound output unit, 18 communication unit, 101 line-of-sight recognition unit, 102 voice recognition unit, 103 image recognition unit, 104 output control unit

Claims (15)

  1. An information processing apparatus comprising a control unit that, when a line of sight of a user is directed to a display object, notifies the user of first notification information according to the display object.
  2. The information processing apparatus according to claim 1, wherein the display object is displayed based on information of real space.
  3. The information processing apparatus according to claim 1, wherein the first notification information indicates whether input to the display object can be accepted.
  4. The information processing apparatus according to claim 3, wherein the input is input by voice.
  5. The information processing apparatus according to claim 3, wherein, when the first notification information indicates that acceptance of the input is impossible and the input has been performed, the control unit notifies the user of second notification information indicating that the display object has not accepted the input.
  6. The information processing apparatus according to claim 1, wherein the first notification information indicates that the display object is selected.
  7. The information processing apparatus according to claim 1, wherein, when the line of sight of the user is not directed to the display object, the control unit notifies the user of third notification information according to the display object.
  8. The information processing apparatus according to claim 7, wherein the control unit performs notification by displaying the first notification information and the third notification information, and
     the third notification information is displayed with lower visibility than the first notification information.
  9. The information processing apparatus according to claim 1, wherein the control unit notifies the first notification information by displaying a predetermined icon at a position corresponding to the position of the display object.
  10. The information processing apparatus according to claim 1, wherein the control unit notifies the first notification information by displaying the display object in a predetermined shape.
  11. The information processing apparatus according to claim 1, wherein the control unit notifies the first notification information by displaying the display object with animation.
  12. The information processing apparatus according to claim 1, wherein the control unit notifies the first notification information by displaying the display object under a predetermined display condition.
  13. The information processing apparatus according to claim 12, wherein the display condition includes a condition relating to at least one of brightness, opacity, color, and size.
  14. An information processing method comprising:
     notifying, by a processor, a user of first notification information according to a display object when a line of sight of the user is directed to the display object.
  15. A program for causing a computer system to realize:
     a function of notifying a user of first notification information according to a display object when a line of sight of the user is directed to the display object.
PCT/JP2017/011963 2016-06-20 2017-03-24 Information processing device, information processing method, and program WO2017221492A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2016121736 2016-06-20
JP2016-121736 2016-06-20

Publications (1)

Publication Number Publication Date
WO2017221492A1 (en) 2017-12-28

Family

ID=60783980

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/011963 WO2017221492A1 (en) 2016-06-20 2017-03-24 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2017221492A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10301675A (en) * 1997-02-28 1998-11-13 Toshiba Corp Multimodal interface device and multimodal interface method
JP2011077913A (en) * 2009-09-30 2011-04-14 Nhk Service Center Inc Apparatus and system for processing information transmission, and computer program to be used therefor
JP2015510629A * 2012-01-12 2015-04-09 Qualcomm Incorporated Augmented reality using sound analysis and geometric analysis
US20140347262A1 (en) * 2013-05-24 2014-11-27 Microsoft Corporation Object display with visual verisimilitude
JP2015219441A * 2014-05-20 2015-12-07 Panasonic Intellectual Property Management Co., Ltd. Operation support device and operation support method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TETSURO CHINO ET AL.: "GAZE TO TALK: A Non-verbal Interface System with Speech Recognition, Gaze Detection, and Agent CG Output", INFORMATION PROCESSING SOCIETY OF JAPAN, 4 March 1998 (1998-03-04), Retrieved from the Internet <URL:http://www.interaction-ipsj.org/archives/paper1998/pdf1998/paper98-169.pdf> [retrieved on 20170605] *
YUICHI KOYAMA ET AL.: "Behavioral Design of Robot for Remote Active Listening Support in Video Communication", IEICE TECHNICAL REPORT, vol. 109, no. 375, 14 January 2010 (2010-01-14), pages 103 - 108 *

Similar Documents

Publication Publication Date Title
US20140267010A1 (en) System and Method for Indicating a Presence of Supplemental Information in Augmented Reality
US20130044128A1 (en) Context adaptive user interface for augmented reality display
US20120299950A1 (en) Method and apparatus for providing input through an apparatus configured to provide for display of an image
US8325214B2 (en) Enhanced interface for voice and video communications
US20140237366A1 (en) Context-aware augmented reality object commands
US20140375683A1 (en) Indicating out-of-view augmented reality images
US20140361988A1 (en) Touch Free Interface for Augmented Reality Systems
US20130169682A1 (en) Touch and social cues as inputs into a computer
US20130328763A1 (en) Multiple sensor gesture recognition
US9140554B2 (en) Audio navigation assistance
US20140306994A1 (en) Personal holographic billboard
JP2009087026A (en) Video display device
US20130179303A1 (en) Method and apparatus for enabling real-time product and vendor identification
US20140049558A1 (en) Augmented reality overlay for control devices
US9075435B1 (en) Context-aware notifications
US20110254859A1 (en) Image processing system, image processing apparatus, image processing method, and program
US20160212538A1 (en) Spatial audio with remote speakers
US20140168262A1 (en) User Interface for Augmented Reality Enabled Devices
US20150206329A1 (en) Method and system of augmented-reality simulations
US20160054565A1 (en) Information processing device, presentation state control method, and program
US20140129937A1 (en) Methods, apparatuses and computer program products for manipulating characteristics of audio objects by using directional gestures
CN101141611A Method and system for informing a user of gestures made by others out of the user's line of sight
US20150215450A1 (en) Terminal device and content displaying method thereof, server and controlling method thereof
US20140354534A1 (en) Manipulation of virtual object in augmented reality via thought
US20150227222A1 (en) Control device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17814964

Country of ref document: EP

Kind code of ref document: A1