EP1039364A2 - User interface method, information processing apparatus, and program storage medium - Google Patents

User interface method, information processing apparatus, and program storage medium

Info

Publication number
EP1039364A2
Authority
EP
European Patent Office
Prior art keywords
character
user
agent
rule
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP00103254A
Other languages
German (de)
French (fr)
Other versions
EP1039364A3 (en)
Inventor
Mahoro Anabuki (Mixed Reality Systems Lab. Inc.)
Yuko Moriya (Mixed Reality Systems Lab. Inc.)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Mixed Reality Systems Laboratory Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc, Mixed Reality Systems Laboratory Inc filed Critical Canon Inc
Publication of EP1039364A2
Publication of EP1039364A3
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Abstract

There is disclosed a system for displaying an agent character, which does not give any anxious feeling to the user and is easy to use, in consideration of the position/posture of the user. Upon displaying the agent character, the distance from the user and the line-of-sight direction of the user are taken into consideration, and the agent character is inhibited from being located at the closest distance position in front of the user. When the user does not communicate with the agent character, the agent character is located in a far zone in front of the user. When the user communicates with the agent character, the agent character is located in a middle-distance zone in front of the user.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to a user interface in an interactive application environment and, more particularly, to an agent character supported user interface.
  • The present invention especially relates to an agent layout determination method in a "three-dimensional space (a virtual reality space and a mixed reality space obtained by superposing the virtual reality space onto a real space)", which does not impose negative influences (coerced and anxious feelings) on the user.
  • BACKGROUND OF THE INVENTION
  • Along with the advance of multimedia information presentation technologies, a huge volume of information is presented to the user, and it often becomes difficult for the user to understand what kind of request should be input to the computer, or what the computer system requires of him or her.
  • Under these circumstances, Windows, an operating system available from Microsoft Corp., and Word, a word-processing application program, conventionally display an agent character that gives guidance to the user in response to a user request or when a predetermined condition of the program is satisfied.
  • An agent character provided by such a program is designed for a two-dimensional display environment and does not take into account the position on the display screen at which the user is gazing.
  • Known technologies pertaining to agent characters include, for example, Japanese Patent Laid-Open No. 9-153145 (laid-open date: June 10, 1997); "Computer which sees, hears, and talks with smile (Multimodal interactive system)", Electrotechnical Laboratory Press Meeting Reference; and Lewis Johnson, Jeff Rickel, Randy Stiles, & Allen Munro, "Integrating Pedagogical Agents into Virtual Environments", Presence, Vol. 7, No. 6, pp. 523-546.
  • An "agent display apparatus" disclosed by Japanese Patent Laid-Open No. 9-153145 is a technique for agent display implemented as a user interface which makes processes in correspondence with user's purpose, favor, and skill level. More specifically, a plurality of types of agent states (appearances (costumes of agent), modes of expression, and emotional expressions) are prepared, and a combination of agent states is determined in correspondence with user's purpose, favor, and skill level, thus displaying an agent character in the determined combination.
  • However, the technique disclosed by this reference does not essentially consider any three-dimensional space.
  • "Computer which sees, hears, and talks with smile (Multimodal interactive system)", Electrotechnical Laboratory Press Meeting Reference was released at a home page
    "http://www.etl.go.jp/etl/mm-press/mmpress.html"
    (updated November 9, 1996), and pertains to an agent implementation technique present in a three-dimensional space. More specifically, this paper is a study about a prototype agent which comprises real-time image recognition, speech recognition, and speech synthesis functions. This paper aims at exploring specific specifications that a computer must satisfy to implement smooth interactions, and acknowledging technical problems to be solved. In this paper, an agent is present in a virtual space in a display, and interacts with the user via the window (screen). However, in this reference, the agent is always displayed at a given position, and never changes its position in consideration of its position relative to the user.
  • As for an agent character display technique in a three-dimensional space, which is an objective of the present invention, "Integrating Pedagogical Agents into Virtual Environments" discloses a study of a system which reconstructs a life-size virtual three-dimensional work space and allows the user to learn a task process with the help of an agent. When the user, who wears an HMD, selects a given menu item, the agent responds by synthetic voice; when the user selects a task demonstration menu item (Show Me!), the agent responds by its actions.
  • However, in this prior art the agent makes only predetermined actions, and the display layout of the agent character does not consider the position of the agent character relative to the user.
  • In order to realize a user interface in a display environment of a three-dimensional image, the agent character must also be displayed as a three-dimensional image. That is, the agent character is also given a three-dimensional display position and scale.
  • Fig. 1 explains a case wherein an agent character given a certain scale value is displayed in an unexpected size and surprises the user. More specifically, since the agent character is given a three-dimensional display position and scale in a three-dimensional image space, if the user wants to communicate with the agent character at a close distance in a three-dimensional display environment in which the agent character is given the same size as a human being, the program that presents the agent character displays the "face" of the agent character in the same size as a human face, as shown in Fig. 1. In some cases, the face of the agent character fills the entire display screen, making the user feel anxious or coerced.
  • Even when the display size of the agent character is not so large, the agent character may occlude an object the user wants to see, depending on the viewpoint position of the user, as shown in Fig. 2.
  • Such problems do not normally occur in a two-dimensional display space; they occur only in a three-dimensional display space, when the viewpoint position/posture of the user and the position of the agent character are not taken into consideration.
  • The aforementioned Electrotechnical Laboratory Press Meeting Reference avoids such problems only because of its limitations, i.e., because the agent is always displayed at a predetermined position.
  • Also, in "Integrating Pedagogical Agents into Virtual Environments", since the agent makes only predetermined actions, as described above, the above problems do not occur.
  • In other words, in an environment in which a three-dimensional agent character freely moves in a three-dimensional space, the aforementioned prior arts cannot avoid the problems shown in Figs. 1 and 2.
  • SUMMARY OF THE INVENTION
  • The present invention has been proposed to solve the conventional problems, and has as its object to provide a user interface or an agent character layout determination method which considers the position of the character relative to the user so as not to impose negative influences on the user.
  • In order to achieve the above object, according to the present invention, a user interface method which three-dimensionally displays a character designated by a program, and allows the user to interact with a computer system while communicating with the displayed character, comprises the steps of:
  • estimating a region of interest of the user;
  • estimating a display purpose of information in an application program; and
  • determining a three-dimensional display pattern of the character by applying a rule, which aims at preventing display of the character from being against user's interest, to the estimated region of interest of the user and the estimated display purpose of information.
  • By considering the region of interest of the user, e.g., the position or line-of-sight direction of the user, the user can be free from any negative influences.
  • According to a preferred aspect of the present invention, in the rule a reference value used to determine if display of the character is against user's interests is changed in accordance with a distance from the user.
  • According to a preferred aspect of the present invention, in the rule a reference value used to determine if display of the character is against user's interests is changed in accordance with a distance from the user on a front side of a line-of-sight direction of the user.
  • According to a preferred aspect of the present invention, display of the character is inhibited in a first zone which has a distance from the user less than a first distance, and display of the character is permitted in a second zone which has the distance from the user that falls within a range from the first distance to a second distance larger than the first distance, and a third zone which is farther than the second distance.
  • According to a preferred aspect of the present invention, in the rule when the user and character do not communicate with each other, the character stays in the third zone which is farther than the second distance.
  • According to a preferred aspect of the present invention, in the rule when the user and character communicate with each other, the character is located in the second zone.
  • According to a preferred aspect of the present invention, in the rule the position/posture of the user are detected, and the character is located so as to prevent the region of interest from being occluded by the character from the detected position/posture of the user.
  • According to a preferred aspect of the present invention, in the rule the position/posture of the user are detected, and the character is located so as to prevent the character from being occluded by the region of interest from the detected position/posture of the user.
  • According to a preferred aspect of the present invention, in the rule the character is located not to fall in the region of interest.
  • According to a preferred aspect of the present invention, in the rule when an object to be manipulated by the character is present in the region of interest, the character is located within a range in which the object is accessible.
  • According to a preferred aspect of the present invention, if there are a plurality of different layout positions of the character, which satisfy the rule, the character is located at a position which can minimize a distance between the region of interest and the layout position of the character.
  • According to a preferred aspect of the present invention, if there are a plurality of different layout positions of the character, which satisfy the rule, and the region of interest is not present, the character is located at a layout position closest to the user.
  • In some cases, a plurality of layout candidates may be present depending on the two layout conditions.
  • According to a preferred aspect of the present invention, if there are still another plurality of different layout positions of the character, which satisfy the rule, the character is located at one of the plurality of different layout positions.
  • According to a preferred aspect of the present invention, if a layout position that satisfies the rule is not present, the character is located outside a field of view of the user.
  • According to a preferred aspect of the present invention, the rule further considers a size of the character.
  • According to a preferred aspect of the present invention, the rule further considers an attribute of the character.
  • According to a preferred aspect of the present invention, the size is determined in advance in correspondence with the character.
  • According to a preferred aspect of the present invention, the attribute is determined in advance in correspondence with the character.
  • According to a preferred aspect of the present invention, the rule further considers a situation of the character.
  • According to a preferred aspect of the present invention, the situation of the character considers whether the character is waiting for an instruction from the user, having a conversation with the user, or moving autonomously.
  • The above object can also be achieved by a storage medium which stores a program code for implementing the user interface or an information processing apparatus which stores and executes that program code.
  • Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
  • Fig. 1 is a view for explaining a problem of conventional agent character display;
  • Fig. 2 is a view for explaining another problem of conventional agent character display;
  • Fig. 3 is a block diagram showing the system arrangement when the present invention is applied to a library database application;
  • Fig. 4 is a block diagram showing the hardware arrangement of input/output devices of the system shown in Fig. 3;
  • Fig. 5 is a view for explaining the principle of agent layout according to the system shown in Fig. 3;
  • Fig. 6 is a view for explaining the principle of agent layout according to the system shown in Fig. 3;
  • Fig. 7 is a view for explaining the principle of agent layout according to the system shown in Fig. 3;
  • Fig. 8 is a block diagram showing the system arrangement of the system shown in Fig. 3 in more detail;
  • Fig. 9 is a flow chart for explaining the overall control sequence of an agent layout support module group 400;
  • Fig. 10 is a flow chart for explaining the control sequence of an agent layout module 401; and
  • Fig. 11 is a flow chart for explaining the control sequence of the agent layout module 401.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • A library information navigation system according to a preferred embodiment of the present invention will be described hereinafter with reference to the accompanying drawings.
  • In this system, as shown in Fig. 3, an agent layout support module group according to the present invention, which runs under the control of a general operating system, provides a user interface environment to a library information database application program, which also runs under the control of the operating system. The database application program and the agent layout support module group communicate with each other using known application program interfaces (APIs) via the operating system.
  • Fig. 4 shows the hardware environment of this system. Most of the modules of this system are stored in a workstation (or PC) group 100, to which various input/output devices are connected. An HMD 201, a stereoscopic display 202, or an immersive display 203 provides a three-dimensional image to the user; any one of these displays can be used. The three-dimensional image is provided by the workstation/PC group 100, and the agent character is embedded in this three-dimensional image.
  • A loudspeaker 204 and a headphone 205 provide audio information, including the voice of the agent. Either the loudspeaker 204 or the headphone 205 can be used.
  • A microphone 206 is used to collect the user's voice and sound from a real world 210.
  • Images of the user and real world 210 are captured by a camera 207.
  • The positions of the user and the real world 210 are detected by an infrared sensor 208. Since this system aims at providing library information, the real world 210 corresponds to bookshelves. The position of each bookshelf is recognized by detecting, with the sensor 208, infrared light emitted by an infrared emitter set on each bookshelf, and by analyzing the signal output from the sensor 208 in the workstation/PC group 100. The viewpoint position/posture of the user are detected by the infrared sensor 208 and a magnetic sensor 209.
  • In the system shown in Fig. 4, the virtual space presented to the user is output to and displayed on the HMD 201 or the like by merging a virtual three-dimensional image generated based on data from a library database 300 with the agent character, whose data are generated by the agent layout support module group. The database 300 stores distance information of each object together with a color image of the library. The three-dimensional position/posture of the user and the three-dimensional position of the real world are recognized by the various sensors; hence, a virtual space generation module 101 applies a convergence angle and a field angle corresponding to the detected position/posture of the user upon merging the virtual images.
  • A characteristic feature of this system lies in that an agent layout support module 400 optimally determines the display position/posture of the agent character in correspondence with the viewpoint position of the user.
  • Examples of rules in which an agent layout module of this system determines the display position/posture of the agent character will be explained below with reference to Figs. 5 to 7.
  • Fig. 5 shows an example of the layout of the user and a real object of interest of the user in a three-dimensional space (a virtual space or a space obtained by superposing the virtual space and real space).
  • The agent layout support module 400 determines the area where the agent can be laid out as follows. The module 400 defines an "exclusive zone", a "conversation zone", a "neighboring zone", and a "mutual recognition zone" centered on the user. These four zones may be set as concentric circles corresponding to distances from the user, or as zones with a predetermined shape centered on the user. In this example, if r (m) represents the distance from the user, the exclusive zone is set to satisfy 0 m < r < 0.5 m. Since the user wants nobody to enter this exclusive zone, he or she does not want the agent character to appear in this zone either. Outside the exclusive zone, the module 400 sets, in turn, the conversation zone (0.5 m < r < 1.5 m), the neighboring zone (1.5 m < r < 3 m), and the mutual recognition zone (3 m < r < 20 m).
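  • As a minimal illustrative sketch (not part of the patent text; the function name and the handling of boundary values are assumptions), the zone of a candidate point can be derived from its distance r to the user as follows:

```python
def classify_zone(r_m: float) -> str:
    """Map a distance r (in meters) between the user and a candidate point to one of
    the four zones listed above.  Boundary handling is an assumption; the patent text
    leaves the exact boundaries open."""
    if r_m < 0.5:
        return "exclusive"            # RULE 1 below: no agent character may appear here
    if r_m < 1.5:
        return "conversation"
    if r_m < 3.0:
        return "neighboring"
    if r_m < 20.0:
        return "mutual recognition"
    return "out of range"             # beyond the mutual recognition zone


# e.g. classify_zone(1.0) -> "conversation", classify_zone(10.0) -> "mutual recognition"
```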
  • In consideration of these zones, the module 400 determines the layout position of the agent character according to the following RULE 1 to RULE 8:
    • RULE 1: display no agent character in the "exclusive zone";
    • RULE 2: keep the agent character in the "mutual recognition zone" when no communication is made with the user;
    • RULE 3: locate the agent character in the "neighboring zone" or "conversation zone" when a communication is made with the user;
    • RULE 4: appropriately move the agent character not to occlude a target object when a communication is made with the user;
    • RULE 5: locate the character so that it is not occluded by the target object;
    • RULE 6: locate the character not to invade another object;
    • RULE 7: locate the character within the field of view of the user when the agent communicates with the user; locate the character near the field of view or outside the field of view when the agent does not communicate with the user; and
    • RULE 8: when there is a target object, locate the character where a manipulation system (e.g., the hand of a person) can access the object.
  • If more than one location satisfies RULE 1 to RULE 8, the module 400 applies the following rules:
  • RULE 9: locate the agent at a position closest to the object; and
  • RULE 10: locate the agent at a position closest to the user position when there is no object.
  • If more than one location still satisfies RULE 1 to RULE 10, the module 400 locates the agent at any one of these locations.
  • If there is no location that satisfies RULE 1 to RULE 8, the module 400 applies the following RULE:
  • RULE 11: locate the agent within the mutual recognition zone by ignoring the condition "within the neighboring zone/conversation zone".
  • If no appropriate location is found yet, the module 400 applies the following rule:
  • RULE 12: locate the character outside the field of view.
  • As for the level of character, the module 400 applies the following rule:
  • RULE 13: locate the character to float at the eye level of the user or the level of the target object if the agent can float; or locate the character to naturally stand on the ground if the agent cannot float.
  • As for the character size, the module 400 applies the following general rule:
  • RULE 14: set the character size to be equal to or smaller than the average person size so as to allow the user to recognize the existence of the agent in a manner equivalent to that of an actual person.
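  • The following sketch shows one way RULE 1 to RULE 3 and RULE 9 to RULE 12 could be combined to choose a display position from a set of candidate points. It reuses classify_zone() from the sketch above and assumes the candidates have already been screened for the occlusion and collision conditions (RULE 4 to RULE 8); it is an illustration, not a transcription of module 400.

```python
import math

def select_agent_position(candidates, user_pos, target_obj_pos, communicating):
    """candidates: (x, y, z) points that already satisfy RULE 4 to RULE 8.
    Returns the chosen point, or None when the agent should be placed
    outside the field of view (RULE 12)."""
    def zone_of(p):
        return classify_zone(math.dist(p, user_pos))

    if communicating:
        feasible = [p for p in candidates if zone_of(p) in ("conversation", "neighboring")]  # RULE 3
        if not feasible:
            # RULE 11: drop the neighboring/conversation condition, keep the mutual recognition zone
            feasible = [p for p in candidates if zone_of(p) == "mutual recognition"]
    else:
        feasible = [p for p in candidates if zone_of(p) == "mutual recognition"]             # RULE 2
    # RULE 1 is implicit: the exclusive zone never appears among the accepted zone names.

    if not feasible:
        return None                                                          # RULE 12
    if target_obj_pos is not None:
        return min(feasible, key=lambda p: math.dist(p, target_obj_pos))     # RULE 9
    return min(feasible, key=lambda p: math.dist(p, user_pos))               # RULE 10
```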
  • In the example shown in Fig. 5, when the above rules are applied to a case wherein an interaction with the user is required, a zone shown as an "agent layout candidate zone" in Fig. 6 is determined as a candidate zone where the agent is to be displayed.
  • In this example, in which the present invention is applied to the library database system, assume that the agent character is given a manipulation system for picking up a book; that is, in the VR or MR environment a book displayed as a virtual image is picked up by the virtual "hand" of the agent character, or a book desired by the user is pointed to by the virtual "hand" of the agent character. In such a case, the final agent position (except for its level) is determined by adding the conditions that "the manipulation system of the character" can "just access" the target object (a book in this example) and that "the agent is located at the position closest to the object". In this example, the agent position is determined as shown in Fig. 7.
  • Finally, the agent is located at the level of the target object or the level with which the agent can naturally stand on the ground in accordance with whether or not the agent can float.
  • Fig. 8 shows the arrangement of the system for checking the aforementioned rules to determine the display position of the agent, i.e., a program interface.
  • Referring to Fig. 8, the agent layout support module group 400 has an agent layout module 401, agent size determination module 402, and another agent module 403. These modules are coupled via application program interfaces (APIs; to be described later). Also, the agent layout support module group 400 is connected to the virtual space generation module 101 via predetermined APIs.
  • As described above, this system determines an optimal layout of the agent character in correspondence with the position/posture of the user. Note that a user measurement module (103, 104) computes the position/posture or the like of the user on the basis of the aforementioned sensor information, and passes such data onto a user information management module 105. This module 105 tracks and manages the position/posture of the user. The managed user information is sent to the agent layout module 401 in real time via APIs. The agent layout module 401 requires real space information and virtual space information as inputs in addition to the user information, as will be described later. The user information includes:
  • data 1: user position with respect to reference position; and
  • data 2: user posture
  • Note that the reference position indicates a reference used upon merging the real space and virtual space, and the position is expressed by coordinates (x, y, z). The posture indicates angles (roll, pitch, yaw), and the user position and posture indicate the viewpoint position and posture of the user.
  • The real space information is measured by a real space information measurement module, and is managed by a real space information management module. This managed real space information is also sent to the agent layout module 401 via APIs. The "real space information" includes:
  • data 1: position of real object with respect to reference position;
  • data 2: posture of real object;
  • data 3: size of real object; and
  • data 4: shape of real object.
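  • In illustrative form (the class and field names are assumptions; the patent only specifies the data items listed above and the (x, y, z)/(roll, pitch, yaw) conventions), the user information and real space information passed to the layout module 401 could be represented as:

```python
from dataclasses import dataclass
from typing import Tuple, Any

@dataclass
class UserInfo:
    position: Tuple[float, float, float]   # data 1: viewpoint position (x, y, z) w.r.t. the reference position
    posture: Tuple[float, float, float]    # data 2: viewpoint posture (roll, pitch, yaw)

@dataclass
class RealObjectInfo:
    position: Tuple[float, float, float]   # data 1: object position w.r.t. the reference position
    posture: Tuple[float, float, float]    # data 2: object posture (roll, pitch, yaw)
    size: Tuple[float, float, float]       # data 3: object size (extent along each axis; an assumption)
    shape: Any                             # data 4: shape model of the object (format not specified)
```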
  • On the other hand, the position/posture of the agent character, the layout of which has been determined, is sent from the layout module 401 to the virtual space generation module 101 via APIs; the module 101 generates an image of the agent, merges the agent character with the virtual space or aligns it with the real space, and outputs and displays the generated image on the HMD or the like.
  • Interface information in the APIs between the virtual space generation module 101 and agent layout support module group 400, i.e., "agent information" will be explained below.
  • The agent information is required to present the agent to the user, and is classified into three types: information output by the layout module 401, information used by the layout module 401, and other information. This information is passed onto an agent output module group including the virtual space generation module 101.
  • The information output from the layout module 401 to the virtual space generation module 101 includes:
  • data 1: agent position with respect to reference position; and
  • data 2: agent posture.
  • The information used by the layout determination module includes:
  • data 1: agent size; and
  • data 2: agent shape (pose).
  • Other information includes:
  • data 1: agent appearance (expression or the like); and
  • data 2: agent voice.
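  • Gathering the three groups of items above, the "agent information" passed over the APIs could be sketched as the following record (again illustrative; the actual API layout is not specified in the patent):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class AgentInfo:
    # output from the layout module 401 to the virtual space generation module 101
    position: Tuple[float, float, float]   # data 1: agent position w.r.t. the reference position
    posture: Tuple[float, float, float]    # data 2: agent posture
    # used by the layout module 401 when determining the layout
    size: float = 1.0                      # data 1: agent size in the space
    shape: str = "standing"                # data 2: agent shape (pose)
    # other information, passed on to the agent output module group
    appearance: str = "neutral"            # data 1: agent appearance (expression or the like)
    voice: bytes = b""                     # data 2: agent voice
```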
  • As described above, the size and shape of the agent are important factors for preventing the target object from being occluded, for preventing the user from experiencing a coerced feeling, and for allowing the user to easily recognize the agent.
  • The agent size is defined in advance in the agent system by the size determination module in accordance with an object to be handled by the agent and the task contents of the agent before the layout is determined.
  • The agent shape is a shape model of the agent character prepared in the agent system. This shape structure can be changed. That is, the pose of the agent can be changed. The shape (pose) is changed in correspondence with the agent situation.
  • Fig. 9 shows the overall processing sequence of the agent layout support module group 400.
  • In step S10, the system is started up and a predetermined initialization process is done.
  • In step S20, attributes of the agent are determined. Note that "floatability" and "personality/role" are assigned as "attributes of the agent" in this embodiment. This is because the layout reference must be changed in correspondence with the attributes of the agent. The attributes of the agent determined in step S20 are stored in a predetermined memory. The attribute data in the memory are read out in step S500 in Fig. 10, and are used in step S730. In step S730, if an agent is assigned a "floatability" value,
  • RULE 15: locate the face (or corresponding) portion of the agent at the level of the line-of-sight position of the user or the level of the target object is applied. On the other hand, if an agent cannot float,
  • RULE 16: locate the agent to naturally stand on the ground or an object (or hang on it and so forth) is applied. If a value "intrusive or obtrusive" is set as "personality/role",
  • RULE 17: locate the agent to possibly fall within the field of view of the user is applied; if a value "unobtrusive" is set,
  • RULE 18: locate the agent to possibly fall within a region around the field of view is applied.
  • Note that the attributes and layout rules based on the attributes should be determined depending on each specific module design.
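  • A sketch of how RULE 15 to RULE 18 might be evaluated in step S730 is shown below; the returned constraint tuples are placeholders, since the patent leaves the concrete representation to each module design:

```python
def attribute_constraints(floatable: bool, personality: str,
                          user_eye_level: float, target_level: float):
    """Derive placement constraints from the agent attributes determined in step S20."""
    constraints = []
    if floatable:
        # RULE 15: float the face (or corresponding) portion at the user's eye level
        # or at the level of the target object
        constraints.append(("level", (user_eye_level, target_level)))
    else:
        # RULE 16: stand naturally on the ground or on an object (or hang on it, and so forth)
        constraints.append(("level", "ground"))
    if personality in ("intrusive", "obtrusive"):
        constraints.append(("region", "within the field of view"))        # RULE 17
    else:
        constraints.append(("region", "around the field of view"))        # RULE 18
    return constraints
```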
  • After the attributes of the agent are determined, the agent size is determined in step S30. In step S30, the size determination module 402 is launched to determine the agent size. As can be seen from the flow chart shown in Fig. 9, this module 402 is launched upon starting up the agent system (after step S20) and after step S80 upon movement in the virtual space. In this embodiment, the agent size is not that on the screen but that in the space. Note that as agent attributes, a "space dependent/independent" size can be set in addition to "floatability" and "personality/role". When a "space independent" size is set, for example, when the agent is a copy of an existing person or the like, a given value is passed onto the layout module as the size. The agent size rules include:
  • RULE 19: set substantially the same scale as that of an object to be handled by the agent (for example, when the agent handles a miniature object, the agent is also set to have a miniature size; when it handles a normal object, the agent is set to have a size falling within the range from small animals to average person);
  • RULE 20: set a size equal to or smaller than that of an average person so as not to give anxious feeling to the user; and
  • RULE 21: ignore RULE 20 under the condition that the agent who handles a giant object far away from the user does not give any anxiety to the user.
  • The agent size determined according to the aforementioned rules is stored in the predetermined memory, and is read out in step S400 upon execution.
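  • An illustrative reading of RULE 19 to RULE 21 follows; the average-person height and the "far away" threshold are assumptions, not values from the patent:

```python
AVERAGE_PERSON_HEIGHT_M = 1.7        # assumed reference height for RULE 20

def determine_agent_size(object_height_m: float, object_distance_m: float,
                         space_independent_size_m: float = None) -> float:
    """Return the agent size in the space (step S30)."""
    if space_independent_size_m is not None:
        # "space independent" size, e.g. the agent is a copy of an existing person
        return space_independent_size_m
    size = object_height_m                          # RULE 19: same scale as the handled object
    if size > AVERAGE_PERSON_HEIGHT_M and object_distance_m > 20.0:
        return size                                 # RULE 21: giant object far from the user
    return min(size, AVERAGE_PERSON_HEIGHT_M)       # RULE 20: at most an average person
```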
  • The loop of step S40 → step S60 → step S80 → step S40 passes the determined "agent information" onto the virtual space generation module 101 so as to determine the agent shape and layout and to display the determined result in correspondence with an agent situation. An event that requires this loop is a change in agent situation.
  • The loop of step S30 → step S40 → step S60 → step S80 → step S30 is executed when the virtual space has changed.
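  • Expressed as a simple control loop (an illustrative reading of Fig. 9; the callback names are hypothetical), the two loops differ only in which step is re-entered:

```python
def run_agent_layout(events, steps):
    """steps: dict of step functions keyed by hypothetical names;
    events: iterable of event strings produced while waiting in step S80."""
    steps["initialize"]()               # step S10
    steps["determine_attributes"]()     # step S20: floatability, personality/role
    steps["determine_size"]()           # step S30
    for event in events:
        steps["determine_shape"]()      # step S40: check the agent situation, set the pose
        steps["determine_layout"]()     # step S60 (Figs. 10 and 11)
        steps["output_agent_info"]()    # step S80: pass the agent information to module 101
        if event == "virtual_space_changed":
            steps["determine_size"]()   # loop S30 -> S40 -> S60 -> S80 -> S30
        # a change in the agent situation simply re-enters the loop at step S40
```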
  • In step S40, the agent situation is checked, and the agent shape is determined. This is because the layout reference of the agent may change depending on the agent situation. The agent situation is determined by a command input from the user or by scenario, and is defined in the agent before the layout is determined. In this embodiment, an "instruction wait mode" from the user, "object manipulation mode" of the user, and "conversation mode" with the user are set as the "agent situation". That is, in the "instruction wait mode",
  • RULE 22: locate the agent not to be against user's interest (e.g., not to occlude an object) is applied. In the "object manipulation mode",
  • RULE 23: locate the agent at a position where the agent can naturally manipulate the object and which is not against user's interest is applied. In the "conversation mode",
  • RULE 24: locate the agent at the center of the field of view of the user is applied.
  • The "agent situation" determined in step S40 is stored in the predetermined memory, and is read out in step S600.
  • In step S60, the layout of the agent is finally determined. Details of step S60 are shown in the flow charts of Figs. 10 and 11.
  • More specifically, in step S100 in Fig. 10 the "user information", which is managed by the user information management module 105 and stored in the predetermined memory, is read out. With this information, the layout module 401 can detect the position/posture of the user. In step S200, the "real space information", which is managed by a real space management module 106 and stored at a predetermined memory address, is input. In step S300, virtual space information generated by the user application program is input from the virtual space generation module 101.
  • In step S400, the agent size, which was determined and stored in the memory in step S30, and the agent shape, which was determined in step S40, are acquired. In step S500, the agent attributes determined in step S20 are acquired. In step S600, the agent situation determined in step S40 is acquired. In step S700, the agent layout is determined by applying the aforementioned RULEs using the acquired user information, real space information, virtual space information, agent size, agent shape, agent attributes, agent situation, and the like. More specifically, the "attributes", "personality/role", and basic "shape" of the agent are determined before step S700 and do not change as the application program progresses. On the other hand, the user information, real space information, virtual space information, and agent situation do change as the program progresses. Hence, in step S700 the layout and display pattern of the agent, whose shape and size have already been roughly determined from the "attributes", "personality/role", and basic "shape", are determined in correspondence with changes in the "user information", "real space information", "virtual space information", "agent situation", and the like. Step S700 is described in detail in steps S710 to S750 in Fig. 11.
  • More specifically, in step S710 the shape of the agent, i.e., pose, is changed in correspondence with the acquired agent situation. In step S720, RULE 22 to RULE 24 are applied in accordance with the agent situation to determine zone candidates where the agent can be located.
  • In step S730, the candidates determined in step S720 are narrowed down to fewer candidates in accordance with the agent attributes using RULE 15 to RULE 18.
  • In step S740, RULE 1 to RULE 8 and RULE 22 to RULE 24 are applied on the basis of various kinds of input information such as the "user information", "real space information", "virtual space information", and the like, agent size, agent shape, and agent situation to narrow them down to still fewer, relevant candidates. In step S750, the display position/posture of the agent character is finally determined by applying RULE 9 to RULE 13.
  • In step S760, agent information is computed based on the layout position. In step S800 in Fig. 10, the agent information is output to the virtual space generation module 101.
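  • The candidate-narrowing flow of steps S720 to S750 can be summarised by the runnable sketch below; the Candidate fields, the example filters, and the fallback behaviour are assumptions made for illustration and stand in for the far more detailed RULEs of the embodiment.

```python
# A minimal, runnable sketch of the candidate-narrowing idea of steps S720 to S750.
from dataclasses import dataclass

@dataclass
class Candidate:
    position: tuple            # (x, y, z) placement in the space
    occludes_interest: bool    # would the agent hide the user's region of interest?
    reachable_by_agent: bool   # can the agent manipulate the target object from here?
    distance_to_interest: float

def narrow_candidates(candidates, filters):
    """Apply each filter in turn (S720 -> S730 -> S740), keeping a filter's
    result only when at least one candidate survives it."""
    for keep in filters:
        remaining = [c for c in candidates if keep(c)]
        if remaining:
            candidates = remaining
    return candidates

def final_layout(candidates):
    """S750: among the surviving candidates, pick the one closest to the
    region of interest; None means the agent stays out of the field of view."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: c.distance_to_interest).position

# Example with two illustrative filters standing in for the RULEs.
filters = [lambda c: not c.occludes_interest,   # in the spirit of RULE 22
           lambda c: c.reachable_by_agent]      # in the spirit of RULE 23
candidates = [Candidate((0.0, 0.0, 2.0), False, True, 1.5),
              Candidate((1.0, 0.0, 1.0), True,  True, 0.5)]
print(final_layout(narrow_candidates(candidates, filters)))  # -> (0.0, 0.0, 2.0)
```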
  • The present invention is not limited to the above embodiment.
  • For example, the present invention is applied to the library information database in the above embodiment. However, the present invention is not limited to such a specific application, and may be applied to any application field that requires an agent.
  • In the above embodiment, the application program and the agent are programs independent of the position system. However, the present invention is not limited to such a specific system; they may be integrated.
  • The present invention is not limited to a three-dimensional display environment. Even in a two-dimensional display environment, distance information can be saved, and the positional relationship between the agent and user can be considered on the basis of the saved distance information.
  • When the layout program (agent layout determination module) of the agent layout method is built into the operating system, a user application program such as the database application program need only describe calls to the aforementioned program interface, thus improving program productivity; a hypothetical sketch of this division of labour follows. In such a case, the program to be built into the operating system is preferably stored on a CD-ROM or the like, either solely or together with other programs. The CD-ROM is inserted into a workstation or PC on which the operating system has already been installed, in order to load the aforementioned module.
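  • Assuming the layout module is exposed through an operating-system-resident program interface, the application side could look like the following sketch; all class, method, and argument names are hypothetical and are not taken from the embodiment.

```python
# Purely hypothetical illustration of the division of labour described above.
# The patent only states that the application program describes calls to a
# program interface such as an API; the names below are assumptions.

class AgentLayoutInterface:
    """Stand-in for the program interface exported by the OS-resident
    agent layout determination module."""

    def determine_layout(self, user_info, real_space_info, virtual_space_info):
        # In the embodiment this call would run through steps S100 to S800;
        # here it simply returns a fixed placeholder layout.
        return {"position": (0.0, 0.0, 2.0), "posture": "facing_user"}

# The user application program (e.g. the library information database) only
# needs to call the interface; it never implements the layout rules itself.
def application_frame(layout_api, user_info, real_space_info, virtual_space_info):
    return layout_api.determine_layout(user_info, real_space_info, virtual_space_info)

print(application_frame(AgentLayoutInterface(), {}, {}, {}))
```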
  • Note that APIs have been exemplified as program interfaces in the above embodiment. However, the present invention is not limited to such specific interface, and any other existing or newly developed program interfaces can be used.
  • To restate, according to the present invention, a user interface environment which is not against user's interest in terms of manipulations can be provided.
  • As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.
  • There is disclosed a system for displaying an agent character, which does not give any anxious feeling to the user and is easy to use, in consideration of the position/posture of the user. Upon displaying the agent character, the distance from the user and the line-of-sight direction of the user are taken into consideration, and the agent character is inhibited from being located at the closest distance position in front of the user. When the user does not communicate with the agent character, the agent character is located in a far zone in front of the user. When the user communicates with the agent character, the agent character is located in a middle-distance zone in front of the user.
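  • As a concrete but hypothetical illustration of this zone behaviour, with zone boundaries of 1 m and 3 m assumed purely for illustration:

```python
# Hypothetical sketch of the distance-zone behaviour summarised above.
# The boundary values (1 m and 3 m) are assumptions chosen for illustration.
NEAR_LIMIT = 1.0    # closest zone in front of the user: display is inhibited here
MIDDLE_LIMIT = 3.0  # boundary between the middle-distance zone and the far zone

def allowed_zone(communicating):
    """Return the (min, max) distance range in which the agent may be placed."""
    if communicating:
        # Communicating with the user: middle-distance zone in front of the user.
        return (NEAR_LIMIT, MIDDLE_LIMIT)
    # Not communicating: far zone in front of the user.
    return (MIDDLE_LIMIT, float("inf"))

def placement_allowed(distance_to_user, communicating):
    lo, hi = allowed_zone(communicating)
    return lo <= distance_to_user < hi

print(placement_allowed(2.0, communicating=True))    # -> True
print(placement_allowed(0.5, communicating=False))   # -> False
```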

Claims (45)

  1. A user interface method which three-dimensionally displays a character designated by a program, and allows the user to interact with a computer system while communicating with the displayed character, comprising the steps of:
    estimating a region of interest of the user;
    estimating a display purpose of information in an application program; and
    determining a three-dimensional display pattern of the character by applying a rule, which aims at preventing display of the character from being against user's interest, to the estimated region of interest of the user and the estimated display purpose of information.
  2. The method according to claim 1, wherein the character is an agent character.
  3. The method according to claim 1, wherein the region of interest of the user is estimated by detecting a position/posture of the user.
  4. The method according to claim 1, wherein in the rule a reference value used to determine if display of the character is against user's interests is changed in accordance with a distance from the user.
  5. The method according to claim 1, wherein in the rule a reference value used to determine if display of the character is against user's interests is changed in accordance with a distance from the user on a front side of a line-of-sight direction of the user.
  6. The method according to claim 1, wherein in the rule display of the character is inhibited in a first zone which has a distance from the user less than a first distance, and display of the character is permitted in a second zone which has the distance from the user that falls within a range from the first distance to a second distance larger than the first distance, and a third zone which is farther than the second distance.
  7. The method according to claim 6, wherein in the rule when the user and character do not communicate with each other, the character stays in the third zone which is farther than the second distance.
  8. The method according to claim 6, wherein in the rule when the user and character communicate with each other, the character is located in the second zone.
  9. The method according to claim 1, wherein in the rule the position/posture of the user are detected, and the character is located so as to prevent the region of interest from being occluded by the character from the detected position/posture of the user.
  10. The method according to claim 1, wherein in the rule the position/posture of the user are detected, and the character is located so as to prevent the character from being occluded by the region of interest from the detected position/posture of the user.
  11. The method according to claim 1, wherein in the rule the character is located not to fall in the region of interest.
  12. The method according to claim 1, wherein in the rule when an object to be manipulated by the character is present in the region of interest, the character is located within a range in which the object is accessible.
  13. The method according to claim 1, wherein if there are a plurality of different layout positions of the character, which satisfy the rule, the character is located at a position which can minimize a distance between the region of interest and the layout position of the character.
  14. The method according to claim 13, wherein if there are still another plurality of different layout positions of the character, which satisfy the rule, the character is located at one of the plurality of different layout positions.
  15. The method according to claim 1, wherein if there are a plurality of different layout positions of the character, which satisfy the rule, and the region of interest is not present, the character is located at a layout position closest to the user.
  16. The method according to claim 14, wherein if there are still another plurality of different layout positions of the character, which satisfy the rule, the character is located at one of the plurality of different layout positions.
  17. The method according to claim 1, wherein if a layout position that satisfies the rule is not present, the character is located outside a field of view of the user.
  18. The method according to claim 1, wherein the rule further considers a size of the character.
  19. The method according to claim 1, wherein the rule further considers an attribute of the character.
  20. The method according to claim 18, wherein the size is determined in advance in correspondence with the character.
  21. The method according to claim 19, wherein the attribute is determined in advance in correspondence with the character.
  22. The method according to claim 19, wherein the attribute is determined in consideration of at least one of floatability, personality, and role.
  23. The method according to claim 1, wherein the rule further considers a situation of the character.
  24. The method according to claim 23, wherein the situation of the character considers if the character is waiting for an instruction from the user, having a conversation with the user, or is autonomously moving.
  25. The method according to claim 1, wherein the display pattern includes a size of the character.
  26. The method according to claim 1, wherein the display pattern includes a shape of the character.
  27. A storage medium which stores a program for implementing a user interface method which three-dimensionally displays a character designated by a program, and allows the user to interact with a computer system while communicating with the displayed character, said medium storing:
    the program step of estimating a region of interest of the user;
    the program step of estimating a display purpose of information in an application program; and
    the program step of determining a three-dimensional display pattern of the character by applying a rule, which aims at preventing display of the character from being against user's interest, to the estimated region of interest of the user and the estimated display purpose of information.
  28. A storage medium which stores a program that runs as an additional function program of an operating system on which the program runs, and implements a user interface method which three-dimensionally displays a character designated by a program, and allows the user to interact with a computer system while communicating with the displayed character, and said medium storing:
    the program step of estimating a region of interest of the user;
    the program step of estimating a display purpose of information in an application program; and
    the program step of determining a three-dimensional display pattern of the character by applying a rule, which aims at preventing display of the character from being against user's interest, to the estimated region of interest of the user and the estimated display purpose of information.
  29. An information processing apparatus which can execute a user application program that can provide a user interface by means of an agent character, and an agent layout program for determining a layout method of the agent character,
    said agent layout program having:
    a program code of estimating a region of interest of the user;
    a program code of estimating a display purpose of information in an application program; and
    a program code of determining a three-dimensional display pattern of the character by applying a rule, which aims at preventing display of the character from being against user's interest, to the estimated region of interest of the user and the estimated display purpose of information.
  30. An image generation apparatus comprising:
    agent character generation means for generating and displaying a predetermined agent character within a field of view of a user in a virtual or real space; and
    control means for controlling a display position of the agent character by said agent character generation means on the basis of a location and/or posture of the user in the virtual or real space.
  31. The apparatus according to claim 30, wherein said control means inhibits the agent character from displaying within a range of a predetermined distance to the user.
  32. The apparatus according to claim 30, wherein said control means changes the display position of the agent character on the basis of the presence/absence of an interaction with the user.
  33. The apparatus according to claim 32, wherein said control means changes a position of the agent character so as not to hide an object as a target of the user when there is an interaction with the user.
  34. The apparatus according to claim 32, wherein said control means displays the agent character within the field of view of the user when there is an interaction with the user, and displays the agent character at a position around or outside the field of view when there is no interaction with the user.
  35. The apparatus according to claim 32, wherein said control means controls to locate the agent character in the vicinity of a target object when the target object is present.
  36. The apparatus according to claim 30, wherein said control means changes the display position of the agent character in the virtual or real space in accordance with a line-of-sight position of the user.
  37. The apparatus according to claim 30, wherein said control means changes a size of the agent character on the basis of a size of a target object in the virtual or real space.
  38. An image generation method comprising:
    the agent character generation step of generating and displaying a predetermined agent character within a field of view of a user in a virtual or real space; and
    the control step of controlling a display position of the agent character in the agent character generation step on the basis of a location and/or posture of the user in the virtual or real space.
  39. The method according to claim 38, wherein the control step includes the step of inhibiting the agent character from displaying within a range of a predetermined distance to the user.
  40. The method according to claim 38, wherein the control step includes the step of changing the display position of the agent character on the basis of the presence/absence of an interaction with the user.
  41. The method according to claim 40, wherein the control step includes the step of changing a position of the agent character so as not to hide an object as a target of the user when there is an interaction with the user.
  42. The method according to claim 40, wherein the control step includes the step of displaying the agent character within the field of view of the user when there is an interaction with the user, and displaying the agent character at a position around or outside the field of view when there is no interaction with the user.
  43. The method according to claim 40, wherein the control step includes the step of controlling to locate the agent character in the vicinity of a target object when the target object is present.
  44. The method according to claim 38, wherein the control step includes the step of changing the display position of the agent character in the virtual or real space in accordance with a line-of-sight position of the user.
  45. The method according to claim 38, wherein the control step includes the step of changing a size of the agent character on the basis of a size of a target object in the virtual or real space.
EP00103254A 1999-03-26 2000-02-17 User interface method, information processing apparatus, and program storage medium Withdrawn EP1039364A3 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP08466199A JP3368226B2 (en) 1999-03-26 1999-03-26 Information processing method and information processing apparatus
JP8466199 1999-03-26

Publications (2)

Publication Number Publication Date
EP1039364A2 true EP1039364A2 (en) 2000-09-27
EP1039364A3 EP1039364A3 (en) 2007-10-31

Family

ID=13836918

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00103254A Withdrawn EP1039364A3 (en) 1999-03-26 2000-02-17 User interface method, information processing apparatus, and program storage medium

Country Status (3)

Country Link
US (1) US6559870B1 (en)
EP (1) EP1039364A3 (en)
JP (1) JP3368226B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015128212A1 (en) * 2014-02-28 2015-09-03 Thales System comprising a headset equipped with a display device and documentation display and management means
CN109086108A (en) * 2018-06-29 2018-12-25 福建天晴数码有限公司 A kind of method and system of interface layout rationality checking
US11023038B2 (en) 2015-03-05 2021-06-01 Sony Corporation Line of sight detection adjustment unit and control method

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW401548B (en) * 1996-12-20 2000-08-11 Sony Corp Method and apparatus for sending E-mail, method and apparatus for receiving E-mail, sending program supplying medium, receiving program supplying medium
JP3602061B2 (en) * 2001-02-02 2004-12-15 九州日本電気ソフトウェア株式会社 Three-dimensional graphics display device and method
US20020138607A1 (en) * 2001-03-22 2002-09-26 There System, method and computer program product for data mining in a three-dimensional multi-user environment
JP4298407B2 (en) 2002-09-30 2009-07-22 キヤノン株式会社 Video composition apparatus and video composition method
JP4268191B2 (en) 2004-12-14 2009-05-27 パナソニック株式会社 Information presenting apparatus, information presenting method, program, and recording medium
US20070176921A1 (en) * 2006-01-27 2007-08-02 Koji Iwasaki System of developing urban landscape by using electronic data
WO2009080995A2 (en) * 2007-12-14 2009-07-02 France Telecom Method of managing the displaying or the deleting of a representation of a user in a virtual environment
KR20090092153A (en) * 2008-02-26 2009-08-31 삼성전자주식회사 Method and apparatus for processing image
JP2010244322A (en) * 2009-04-07 2010-10-28 Bitto Design Kk Communication character device and program therefor
CA2686991A1 (en) * 2009-12-03 2011-06-03 Ibm Canada Limited - Ibm Canada Limitee Rescaling an avatar for interoperability in 3d virtual world environments
KR101874895B1 (en) * 2012-01-12 2018-07-06 삼성전자 주식회사 Method for providing augmented reality and terminal supporting the same
TWI493432B (en) * 2012-11-22 2015-07-21 Mstar Semiconductor Inc User interface generating apparatus and associated method
US9256348B2 (en) * 2013-12-18 2016-02-09 Dassault Systemes Americas Corp. Posture creation with tool pickup
EP3163421A4 (en) * 2014-06-24 2018-06-13 Sony Corporation Information processing device, information processing method, and program
JP6645058B2 (en) * 2015-07-17 2020-02-12 富士通株式会社 CG agent display method, CG agent display program, and information processing terminal
EP3364270A4 (en) * 2015-10-15 2018-10-31 Sony Corporation Information processing device and information processing method
JP2018097437A (en) * 2016-12-08 2018-06-21 株式会社テレパシージャパン Wearable information display terminal and system including the same
JP7077603B2 (en) 2017-12-19 2022-05-31 富士通株式会社 Judgment program, judgment method and image generator
JP7458749B2 (en) * 2019-11-11 2024-04-01 株式会社ソニー・インタラクティブエンタテインメント Terminal devices, server devices, information processing systems, and programs

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998008192A1 (en) 1996-08-02 1998-02-26 Microsoft Corporation Method and system for virtual cinematography
JPH10105736A (en) 1996-09-30 1998-04-24 Sony Corp Device and method for image display control, and information recording medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0612401A (en) 1992-06-26 1994-01-21 Fuji Xerox Co Ltd Emotion simulating device
US5347306A (en) * 1993-12-17 1994-09-13 Mitsubishi Electric Research Laboratories, Inc. Animated electronic meeting place
US5563988A (en) * 1994-08-01 1996-10-08 Massachusetts Institute Of Technology Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment
US5736982A (en) * 1994-08-03 1998-04-07 Nippon Telegraph And Telephone Corporation Virtual space apparatus with avatars and speech
JP3127084B2 (en) 1994-08-11 2001-01-22 シャープ株式会社 Electronic secretary system
CA2180891C (en) * 1995-07-12 2010-01-12 Junichi Rekimoto Notification of updates in a three-dimensional virtual reality space sharing system
US6219045B1 (en) * 1995-11-13 2001-04-17 Worlds, Inc. Scalable virtual world chat client-server system
US6331856B1 (en) * 1995-11-22 2001-12-18 Nintendo Co., Ltd. Video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
JPH09231413A (en) * 1996-02-22 1997-09-05 Suzuki Nobuo Physical constitution processor
US6154211A (en) * 1996-09-30 2000-11-28 Sony Corporation Three-dimensional, virtual reality space display processing apparatus, a three dimensional virtual reality space display processing method, and an information providing medium
JPH1173289A (en) 1997-08-29 1999-03-16 Sanyo Electric Co Ltd Information processor
US6522312B2 (en) * 1997-09-01 2003-02-18 Canon Kabushiki Kaisha Apparatus for presenting mixed reality shared among operators

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998008192A1 (en) 1996-08-02 1998-02-26 Microsoft Corporation Method and system for virtual cinematography
JPH10105736A (en) 1996-09-30 1998-04-24 Sony Corp Device and method for image display control, and information recording medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015128212A1 (en) * 2014-02-28 2015-09-03 Thales System comprising a headset equipped with a display device and documentation display and management means
FR3018119A1 (en) * 2014-02-28 2015-09-04 Thales Sa HELMET VISUALIZATION SYSTEM HAVING DISPLAY AND DOCUMENTATION MANAGEMENT MEANS
CN106068476A (en) * 2014-02-28 2016-11-02 泰勒斯公司 Including assembling display device head-mounted machine and file shows and the system of managing device
US11023038B2 (en) 2015-03-05 2021-06-01 Sony Corporation Line of sight detection adjustment unit and control method
CN109086108A (en) * 2018-06-29 2018-12-25 福建天晴数码有限公司 A kind of method and system of interface layout rationality checking
CN109086108B (en) * 2018-06-29 2021-04-27 福建天晴数码有限公司 Method and system for detecting reasonability of interface layout

Also Published As

Publication number Publication date
JP3368226B2 (en) 2003-01-20
JP2000276610A (en) 2000-10-06
EP1039364A3 (en) 2007-10-31
US6559870B1 (en) 2003-05-06

Similar Documents

Publication Publication Date Title
US6559870B1 (en) User interface method for determining a layout position of an agent, information processing apparatus, and program storage medium
Brooks The intelligent room project
Craig et al. Developing virtual reality applications: Foundations of effective design
US20200241730A1 (en) Position-dependent Modification of Descriptive Content in a Virtual Reality Environment
US5999185A (en) Virtual reality control using image, model and control data to manipulate interactions
US20040130579A1 (en) Apparatus, method, and program for processing information
US7292240B2 (en) Virtual reality presentation device and information processing method
Pingali et al. Steerable interfaces for pervasive computing spaces
WO2021163373A1 (en) 3d object annotation
KR101616591B1 (en) Control system for navigating a principal dimension of a data space
MacIntyre et al. Future multimedia user interfaces
US20050264555A1 (en) Interactive system and method
US10699490B2 (en) System and method for managing interactive virtual frames for virtual objects in a virtual environment
Sharma et al. Computer vision-based augmented reality for guiding manual assembly
KR20110082636A (en) Spatially correlated rendering of three-dimensional content on display components having arbitrary positions
CN115698909A (en) Augmented reality guidance
US20170257610A1 (en) Device and method for orchestrating display surfaces, projection devices, and 2d and 3d spatial interaction devices for creating interactive environments
Kjeldsen et al. Dynamically reconfigurable vision-based user interfaces
CN116210021A (en) Determining angular acceleration
US5577176A (en) Method and apparatus for displaying a cursor along a two dimensional representation of a computer generated three dimensional surface
CN108920230A (en) Response method, device, equipment and the storage medium of mouse suspension procedure
Grasset et al. Augmented Reality Collaborative Environment: Calibration\& Interactive Scene Editing
Ledermann An authoring framework for augmented reality presentations
Neves et al. Virtual environments and GIS
JPH1165814A (en) Interactive system and image display method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000529

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: CANON KABUSHIKI KAISHA

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

AKX Designation fees paid

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20080619

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20140502