WO2019155564A1 - Information providing system and information providing method - Google Patents

Information providing system and information providing method

Info

Publication number
WO2019155564A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
information
user
content
work
Prior art date
Application number
PCT/JP2018/004311
Other languages
French (fr)
Japanese (ja)
Inventor
恭平 海野
洋輝 大橋
克行 中村
谷田部 祐介
瑛 長坂
浩彦 佐川
栗原 恒弥
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所 filed Critical 株式会社日立製作所
Priority to PCT/JP2018/004311 priority Critical patent/WO2019155564A1/en
Publication of WO2019155564A1 publication Critical patent/WO2019155564A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • The present invention relates to a technique for providing information to a user by means of a display.
  • Patent Document 1 describes a machine learning device, numerical control device, machine tool system, manufacturing system, and machine learning method capable of displaying an optimum operation menu for each operator: a machine learning device 2 that detects the operator, communicates with a database in which the operator's information is registered, and learns the display of an operation menu based on that information, comprising a state observation unit 21 that observes the operation history of the operation menu and a learning unit 22 that learns the display of the operation menu based on the observed operation history.
  • According to the technique of Patent Document 1, an optimum operation menu for the user can be determined by machine learning based on the operation history of the operator (user).
  • An object of the present invention is to provide a technique for improving the convenience of displaying provided information in a configuration in which information is provided by display.
  • An information providing system according to one aspect of the present invention is a system for displaying provided information for a user on a display device. It has a display content generation unit that generates the display content of the provided information on the display device and causes the display device to display it, and a display optimization unit that, based on sensor information obtained from a sensor, estimates the user's reaction to the display mode of the display content on the display device and adjusts the display mode of the display content according to that reaction. The display content generation unit causes the display device to display the display content in the display mode adjusted by the display optimization unit.
  • According to the present invention, display content that is easy for the user to see and use can be acquired automatically, and the convenience of displaying provided information can be improved.
  • FIG. 6 is a flowchart for the case where learning is not performed for the display content optimization processing in the display optimization unit illustrated in FIGS. 1 and 2. FIGS. 7 and 8 are diagrams for explaining specific operation examples in the information providing system shown in FIGS. 1 and 2.
  • In this embodiment, a system using a GUI is taken as an example of an information providing system, and a system that provides work support by giving work instructions to workers in a factory using AR (Augmented Reality) will be described.
  • FIG. 1 is a block diagram of an information providing system according to this embodiment.
  • A worker who is the user wears, as the display device, wearable glasses (AR glasses) capable of AR display that superimposes information on the real space, and the provided information is displayed superimposed on the real space; the provided information could also be displayed in a virtual space.
  • the display device may be a VR (Virtual Reality) head-mounted display, an MR (Mixed Reality) display, a PC display, or the like.
  • The information providing system includes a sensor 101, an operation information recognition unit 102, a control unit 103, a display content generation unit 104, a display unit 105 serving as the display device, a work environment recognition unit 106, a display optimization unit 107, a work analysis unit 108, a user identification unit 109, a learning result storage unit 110, and a business information database 111. In this embodiment the display device is included in the information providing system, but the information providing system may instead be configured as a server separate from the display device and accessible from it; that is, the display device may or may not be included in the information providing system.
  • The sensor 101 comprises sensors such as an acceleration sensor, gyro sensor, camera, and microphone provided on the wearable glasses, and a myoelectric sensor attached to the user. The sensor 101 outputs the sensor information acquired by these sensors. An illuminance sensor, temperature sensor, and barometric pressure sensor may also be included.
  • The work environment recognition unit 106 acquires the sensor information output from the sensor 101 and, based on it, estimates a three-dimensional map of the work space forming the user's work environment and the user's posture, consisting of the position and orientation of the wearable glasses in that work space, and outputs work environment information composed of these. The work environment recognition unit 106 may estimate only one of the three-dimensional map and the user's posture. It also estimates, from the sensor information, conditions of the work environment such as the weather and whether it is day or night.
  • The operation information recognition unit 102 acquires the sensor information output from the sensor 101, estimates or recognizes the user's operations on the system based on it, and outputs the result as user operation information. For example, it recognizes an operation by estimating the user's action from a gesture captured by the camera serving as the sensor 101, or by performing speech recognition on voice input to the microphone serving as the sensor 101.
  • The control unit 103 acquires the operation information output from the operation information recognition unit 102 and controls the operation of the wearable glasses according to it. For example, the control unit 103 shows or erases the AR display and launches designated applications. It also outputs a control signal for displaying the provided information on the display unit 105.
  • The display content generation unit 104 generates the specific display content of the provided information on the display unit 105 based on the control signal output from the control unit 103, and causes the display unit 105 to display it.
  • As described later, the display optimization unit 107 adjusts the display position of the display content on the display unit 105 based on the work environment information output from the work environment recognition unit 106; by displaying the content on the display unit 105 in accordance with this adjustment, an AR display can be made to appear as if it exists in real space. The display content generation unit 104 thus causes the display unit 105 to display the generated display content in the display mode adjusted by the display optimization unit 107.
  • The work analysis unit 108 receives the video obtained from the wearable camera among the sensors 101 and the sensor information output from the myoelectric sensor, and estimates work information indicating what work the user is currently performing through image recognition processing and machine learning.
  • The work content is defined, for example, as a combination of a tool, a work target (screws, bolts, etc.), which part of the target device is being worked on, and an action (tightening/loosening), such as loosening the screws of the cover of the target device with a screwdriver or tightening bolts with a wrench (an illustrative record is sketched below).
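As a purely illustrative sketch (not part of the patent; field names are assumptions), such estimated work information could be represented as a small record:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkInfo:
    """Hypothetical record for the work content estimated by the work analysis unit 108."""
    tool: str    # e.g. "screwdriver", "wrench"
    target: str  # e.g. "screw", "bolt"
    part: str    # which part of the target device, e.g. "cover"
    action: str  # e.g. "tightening", "loosening"

# Example: loosening the screws of the cover of the target device with a screwdriver.
current_work = WorkInfo(tool="screwdriver", target="screw", part="cover", action="loosening")
```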
  • The display optimization unit 107 receives the sensor information output from the sensor 101, estimates the user's reaction to the display mode of the display content on the display unit 105 based on it, and adjusts the display mode of the display content according to this reaction.
  • The user's reaction is a bodily reaction, including, for example, body movement, a change in posture, and a change in line of sight. Gestures, changes in facial expression, and muttering to oneself may also be included, as may the progress of the work related to the display content.
  • The display optimization unit 107 also adjusts the display mode of the display content based on the work environment recognition information output from the work environment recognition unit 106. For example, the working posture differs between clear and rainy weather even for the same task, and the user's field of view differs between day and night, so adjusting on the basis of the work environment recognition information allows the display mode to be optimized. Adjusting the display mode in this way reflects what reaction the user had in what environment, improving the convenience of displaying the provided information.
  • The display optimization unit 107 further receives the operation information output from the operation information recognition unit 102 and the work environment recognition information output from the work environment recognition unit 106, and automatically optimizes the adjusted display mode of the display content by updating it through machine learning using the sensor information, the operation information, and the work environment recognition information.
  • The display mode of the display content consists of at least one parameter among the display position, the size, and the presence or absence of display.
  • The initial values of the parameters optimized by machine learning are downloaded from the learning result storage unit 110, and the parameters optimized by learning are uploaded back to the learning result storage unit 110 to update it.
  • Possible display mode attributes include the display position, size, color, changes (such as blinking), and transparency (a minimal parameter sketch follows).
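For concreteness, a minimal sketch of such a display-mode parameter set, with hypothetical names and defaults (the patent only lists the attributes):

```python
from dataclasses import dataclass

@dataclass
class DisplayMode:
    """Hypothetical parameter set adjusted by the display optimization unit 107."""
    position: tuple[float, float, float] = (0.0, 0.0, 1.0)  # placement in AR space, metres
    size: float = 1.0           # scale factor
    visible: bool = True        # presence/absence of the display
    color: str = "#FFFFFF"
    blinking: bool = False
    transparency: float = 0.0   # 0 = opaque, 1 = fully transparent
```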
  • The learning result storage unit 110 stores the parameters optimized by the display optimization unit 107. The display optimization unit 107 adjusts the display mode parameters for each model of the display unit 105 and for each user identified by the user identification unit 109, and the adjustment result is stored and managed in the learning result storage unit 110 as display mode information associated with the model and the user. This makes it possible to display the provided information in an appropriate display mode for each model when several types of display devices are used.
  • The learning result storage unit 110 may be provided in a local environment, such as a flash memory mounted on the wearable glasses, or may be centrally managed on another networked device or in the cloud (see the storage sketch below).
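One plausible shape for this per-user, per-model management, reusing the DisplayMode record sketched above (the keying scheme is an assumption; the patent only states that results are associated with model and user):

```python
# Learned parameters keyed by (user ID, display-device model), so that the same
# user gets appropriate initial parameters on whichever device model they pick up.
learning_results: dict[tuple[str, str], DisplayMode] = {}

def load_initial_mode(user_id: str, model: str) -> DisplayMode:
    """Return previously learned parameters, or defaults on a first session."""
    return learning_results.get((user_id, model), DisplayMode())

def store_result(user_id: str, model: str, mode: DisplayMode) -> None:
    """Upload the optimized parameters back to the learning result storage."""
    learning_results[(user_id, model)] = mode
```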
  • The user identification unit 109 identifies which pre-registered user is using the system. Any general personal authentication technique may be used, such as password authentication or biometric authentication such as fingerprint authentication.
  • The business information database 111 consists of work record data and work instruction data for work performed using the system, as well as user data on the users who perform the work, and is accessible from the system via the network 112.
  • When work is performed using the system, the system accesses the business information database 111 and refers to the stored data.
  • For example, the work record data may store a preferred posture for performing the work, and the work may be performed with reference to this posture.
  • The business information database 111 may also be built into the system.
  • FIG. 2 is a diagram showing an internal configuration example of the display optimization unit 107 illustrated in FIG. 1. In this example, reinforcement learning is used as the machine learning method.
  • As shown in FIG. 2, the display optimization unit 107 may include a state estimation unit 201, a reward calculation unit 202, a value function update unit 203, and a processing content determination unit 204.
  • The state estimation unit 201 estimates the state s for reinforcement learning based on the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, the current display content generated by the display content generation unit 104, and the work information output from the work analysis unit 108. Specifically, it outputs two kinds of information as parameters of the state s: (1) user view information, that is, how the display appears to the user, estimated from the wearable camera video that is the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, and the current display content generated by the display content generation unit 104; and (2) the work information output from the work analysis unit 108 (a data-layout sketch follows).
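A minimal sketch of how such a two-part state might be assembled, reusing the WorkInfo record above (all other names are hypothetical; the patent prescribes the information, not a data layout):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """State s for reinforcement learning, from the state estimation unit 201."""
    user_view: bytes     # (1) estimate of how the display appears to the user
    work_info: WorkInfo  # (2) current work content from the work analysis unit 108

def render_user_view(camera_frame: bytes, environment: dict, display_content: str) -> bytes:
    """Stub: a real system would composite the display content into the user's
    estimated field of view using the 3D map and posture; this is a placeholder."""
    return camera_frame + repr(environment).encode() + display_content.encode()

def estimate_state(camera_frame: bytes, environment: dict,
                   display_content: str, work_info: WorkInfo) -> State:
    return State(render_user_view(camera_frame, environment, display_content), work_info)
```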
  • The reward calculation unit 202 receives the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102, and calculates and outputs the reward r for reinforcement learning.
  • FIG. 3 is a diagram showing a specific example of the reward r calculated by the reward calculation unit 202 shown in FIG. 2. In FIG. 3, the condition column shows the condition for giving each reward, the input column shows the input information used to judge the condition, and the reward column shows each reward value. The reward values are merely examples and can be set to arbitrary values according to the actual application.
  • The reward calculation unit 202 has rewards set for user reaction types derived from the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102; when a condition corresponding to one of them is met, it adds the associated reward, thereby calculating the reward r for the display mode of the current display content. That is, the reward calculation unit 202 calculates the reward r using the per-reaction-type rewards for each combination of the work content targeted by the provided information displayed on the display unit 105, the work environment, and the display unit 105.
  • No. 1 to No. 4 are rewards obtained directly from the operation information output from the operation information recognition unit 102: for example, operation information indicating that the user erased the display, brought back an erased display, enlarged or reduced the display, or moved the display position. That the user changed the display by direct operation suggests that the current display mode is unfavorable to the user, so giving a negative reward allows reinforcement learning to automatically learn a display that is easy for the user to use.
  • No. 5 and No. 6 are rewards that, instead of relying on a direct operation, estimate the user's reaction to the current display mode from the sensor information output from the sensor 101 and the state s output from the state estimation unit 201, and feed it back. The reward calculation unit 202 estimates the user's motion from the user view information included in the state s and the head movement obtained from the sensor information, and gives a corresponding reward. For example, when the user is estimated to have moved the head forward or backward while directing the line of sight at the display content, as in No. 5, or to have peered into the back of a display with depth, as in No. 6, the display is considered hard for the user to see, and a corresponding negative reward is given. Learning from such estimated reactions, not only from direct operations, captures user intent that is not reflected in operations and enables a more optimal display.
  • No. 7 to No. 10 are rewards given when a series of work is completed; the details of the processing timing are described later with reference to FIG. 5. No. 7 gives a positive reward when none of the conditions No. 1 to No. 6 were detected during the series of work, since this suggests the current display mode was preferable for the user throughout; the positive reward teaches an easy-to-view display that requires no corrective actions.
  • No. 8 to No. 10 are rewards given when the user directly evaluates the viewability of the display on the system. The evaluation input interface may be any means, such as GUI, voice, or gesture, as long as the evaluation result can be input to the system as the user intends. The evaluation input by the user may use the three levels "+20", "0", and "-20" shown in FIG. 3, or may be divided into two levels or four or more levels. Explicitly and directly feeding back the user's evaluation of the display mode of the display unit 105 in this way makes it possible to learn a display more in line with the user's intention and improves the validity of the learning result. Moreover, by using reinforcement learning, the quality of each individual processing action can be learned not only from feedback on that action but also from such an overall good/bad evaluation.
  • No. 11 is a reward for evaluating the display order of content items within the display content. The better the icons of content the user frequently uses are arranged in appropriate places with respect to the user's viewpoint and line of sight included in the operation information, the better the usability is considered to be, so a higher reward is obtained (a toy reward-table sketch follows below).
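A toy sketch of this additive scheme; only the +20/0/-20 evaluation values come from the text, every other event name and value is an assumption:

```python
# Per-condition rewards in the spirit of FIG. 3 (values are examples only).
REWARDS = {
    "display_erased": -10,           # No. 1-4: direct corrective operations
    "display_restored": -10,
    "display_resized": -10,
    "display_moved": -10,
    "head_moved_at_display": -5,     # No. 5: leaned toward/away while looking
    "peered_into_depth": -5,         # No. 6: peered into a display with depth
    "no_correction_all_work": +10,   # No. 7: nothing detected during the work
    "user_rated_good": +20,          # No. 8-10: explicit three-level evaluation
    "user_rated_neutral": 0,
    "user_rated_bad": -20,
}

def compute_reward(events: list[str]) -> int:
    """Add the reward associated with every condition that was met (reward r)."""
    return sum(REWARDS[e] for e in events)

# Example: the user moved the display and later rated the display as bad.
r = compute_reward(["display_moved", "user_rated_bad"])  # -30
```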
  • The value function update unit 203 receives the state s output from the state estimation unit 201, the reward r output from the reward calculation unit 202, and the processing content a output from the processing content determination unit 204, and updates the value function from the state s, the reward r, and the processing content a. The update method may be, for example, Q-learning; any value function update method already proposed in the field of reinforcement learning, or proposed in the future, can be applied. The value function update unit 203 also calculates and outputs the value q of the processing content a based on the state s.
  • The initial value of the value function is the previous learning result for each user stored in the learning result storage unit 110; the value function after learning is uploaded to the learning result storage unit 110, updating the value function stored there (a generic Q-learning sketch follows).
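The patent names Q-learning only as one applicable method; as a generic illustration, the standard tabular update Q(s,a) ← Q(s,a) + α[r + γ·max_a′ Q(s′,a′) − Q(s,a)] looks like this:

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (illustrative values)
Q = defaultdict(float)   # tabular value function keyed by (state, action)

def q_update(s, a, r: float, s_next, actions) -> None:
    """One Q-learning step; states and actions must be hashable keys."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```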
  • The processing content determination unit 204 receives the state s output from the state estimation unit 201 and the value q output from the value function update unit 203, and based on the state s and the value q determines and outputs the processing content a concerning the display mode of the display content generated by the display content generation unit 104.
  • FIG. 4 is a diagram showing a specific example of the processing content a output from the processing content determination unit 204 shown in FIG. 2. The processing content a may be erasing the display, bringing back an erased display, enlarging or reducing the display, moving the display, or doing nothing. The processing content determination unit 204 determines the processing content for each display object. For moving the display, the movement direction and movement amount are also determined: for example, the direction can be discretized into eight directions (up, down, left, right, and the four diagonals) and the amount into steps such as 5 cm, 10 cm, and so on, so that the processing content is decided from a finite set (sketched below).
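A sketch of that discretized action set (hedged: the eight directions and 5 cm steps are the patent's examples, the encoding is not):

```python
from itertools import product

BASE_ACTIONS = ["erase", "restore", "enlarge", "shrink", "do_nothing"]
DIRECTIONS = ["up", "down", "left", "right",
              "up_left", "up_right", "down_left", "down_right"]
MOVE_STEPS_CM = [5, 10]  # illustrative discretized movement amounts

MOVE_ACTIONS = [("move", d, step) for d, step in product(DIRECTIONS, MOVE_STEPS_CM)]
ACTIONS = [(a,) for a in BASE_ACTIONS] + MOVE_ACTIONS  # finite action set per display object
```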
  • FIG. 5 is the flowchart for the case where learning is performed for the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2. One sequence runs from the start of work to the end of work. The start and end of work may be designated explicitly (ON/OFF) by a gesture or voice operation, or the work analysis unit 108 may determine the start and end of a specific task automatically.
  • In step 501, the display optimization unit 107 downloads the value function of the user recognized by the user identification unit 109 from the learning result storage unit 110 and sets it as the initial value.
  • When the display optimization unit 107 detects the start of work in step 502, based on the sensor information output from the sensor 101 and the work information output from the work analysis unit 108, it first estimates, in step 503, the state s for reinforcement learning based on the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, and the work information output from the work analysis unit 108.
  • In step 504, the display optimization unit 107 determines and outputs the processing content a that maximizes the value function, based on the state s estimated in step 503 and the latest value function downloaded from the learning result storage unit 110.
  • In step 505, the display content generation unit 104 causes the display unit 105 to display the generated display content in the display mode according to the processing content a.
  • In step 506, the display optimization unit 107 calculates and outputs the reward r for reinforcement learning based on the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102.
  • In step 507, the display optimization unit 107 updates the value function from the state s output from the state estimation unit 201, the reward r output from the reward calculation unit 202, and the processing content a output from the processing content determination unit 204.
  • In step 508, the display optimization unit 107 detects the end of the work based on the sensor information output from the sensor 101 and the work information output from the work analysis unit 108. When the user has input an evaluation of the display content during the series of work, as in No. 8 to No. 10 of FIG. 3, the display optimization unit 107 updates the value function in step 509 using the input evaluation result as a reward, and uploads it to the learning result storage unit 110 as the learning result.
  • If it is determined in step 508 that the work has not yet ended, the process returns to step 503 and the processing from step 503 onward is performed again.
  • Value function updates are performed at the two points of steps 507 and 509, but only one of them may be performed. A sketch of this loop follows.
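Pulling the steps together, a hedged sketch of one learning sequence, reusing Q, q_update, and ACTIONS from the sketches above; `env` is a stand-in object for the sensor and recognition/analysis units, since the patent specifies behavior rather than an API:

```python
def run_work_sequence(env) -> None:
    """One sequence from work start to work end (steps 503-509; the step 501
    download and the final upload to the learning result storage unit 110 are elided)."""
    s = env.state()                                    # step 503
    while not env.work_ended():                        # step 508 controls the loop
        a = max(ACTIONS, key=lambda act: Q[(s, act)])  # step 504: value-maximizing action
        env.apply(a)                                   # step 505: display in that mode
        r = env.reward()                               # step 506: reward from user reactions
        s_next = env.state()
        q_update(s, a, r, s_next, ACTIONS)             # step 507: value function update
        s = s_next
    # Step 509: fold the user's explicit end-of-work rating (Nos. 8-10) into the value function.
    q_update(s, ("do_nothing",), env.final_evaluation(), s, ACTIONS)
```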
  • Because the display mode is optimized by reinforcement learning, it can be optimized through simple reaction evaluation even when the user performs complex work. Furthermore, by calculating the reward r using the per-reaction-type rewards for each combination of the work content targeted by the provided information displayed on the display unit 105, the work environment, and the display unit 105, and executing reinforcement learning with it, more appropriate rewards can be set and better reinforcement learning becomes possible.
  • FIG. 6 is a flowchart in a case where learning is not performed on the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2.
  • When learning is not performed for the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2, as shown in FIG. 6, the reward calculation of step 506 and the value function updates of steps 507 and 509 in FIG. 5 are not performed, and the display optimization unit 107 performs only display optimization using the already learned value function q.
  • Switching between the flow shown in FIG. 5 and the flow shown in FIG. 6 can be done, for example, by executing the flow of FIG. 5 as the basic flow and shifting to the flow of FIG. 6 when the reward r calculated by the display optimization unit 107 no longer decreases, and by returning to the flow of FIG. 5 when the work information output from the work analysis unit 108 changes greatly while the flow of FIG. 6 is being executed (a sketch follows).
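As a hedged sketch, that switching rule could be expressed as follows; the stability test is one possible reading of "the reward no longer decreases", which the patent does not make concrete:

```python
def choose_learning_flow(recent_rewards: list[float], work_changed_greatly: bool,
                         learning_now: bool) -> bool:
    """True = run the learning flow of FIG. 5; False = the fixed flow of FIG. 6."""
    if learning_now and len(recent_rewards) >= 2 and recent_rewards[-1] >= recent_rewards[-2]:
        return False  # reward has stopped decreasing: switch to the learned policy only
    if not learning_now and work_changed_greatly:
        return True   # the work content changed greatly: resume learning
    return learning_now
```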
  • As described above, in this embodiment the display mode of the provided information is adjusted according to the user's reaction, so convenience can be improved even with respect to preferences that do not appear in the user's operation history.
  • In addition, because a display suited to each user is learned automatically by reinforcement learning, the display mode can be optimized through simple reaction evaluation even when the user performs complex work.
  • FIG. 7 is a diagram for explaining a specific operation example in the information providing system shown in FIGS. 1 and 2.
  • FIG. 8 is a diagram for explaining a specific operation example in the information providing system shown in FIGS. 1 and 2.
  • In the operation example, the work target 401 is selected from the work content information 402. Work instruction information 403 indicating the work procedure for the work target 401 is then displayed superimposed on the work target 401, and in this state the worker carries out the work on the work target 401 while viewing the work instruction information 403.
  • The case where reinforcement learning is used to learn the display content has been described as an example, but other machine learning methods, such as supervised learning, may be used. In that case, optimum display content corresponding to the user's field-of-view information is prepared as teacher data, and the optimum display content can be obtained by training on it.
  • The case where optimum learning is performed for each user has been described as an example, but learning may instead be performed per attribute, such as job title, rather than per individual. In that case, the display can be optimized using data already learned from people who perform the same work. A configuration that learns the optimum display over all users and all work, without distinguishing attribute or user, is also possible; in that case a universally suitable display can be learned regardless of the person or the work content.
  • In each of the configurations described above, part or all of the configuration may be implemented in hardware, or realized by a processor executing a program.
  • The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines in a product are necessarily shown. In practice, almost all components may be considered to be connected to one another.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

In a configuration that provides information using a display, the present invention improves the usability of the displayed provided information. The information providing system comprises: a display content generation unit 104 that generates the display content of the information to be provided and causes a display unit 105 to display it; and a display optimization unit 107 that, on the basis of sensor information acquired from a sensor 101, estimates the user's reaction to the manner in which the display content is displayed on the display unit 105 and adjusts the manner of display of the display content in accordance with this reaction. The display content generation unit 104 causes the display unit 105 to display the display content in the manner of display adjusted by the display optimization unit 107.

Description

Information providing system and information providing method

The present invention relates to a technique for providing information to a user by means of a display.

Patent Document 1 describes, as a machine learning device, numerical control device, machine tool system, manufacturing system, and machine learning method capable of displaying an optimum operation menu for each operator, a configuration comprising a machine learning device 2 that detects the operator, communicates with a database in which the operator's information is registered, and learns the display of an operation menu based on that information, including a state observation unit 21 that observes the operation history of the operation menu and a learning unit 22 that learns the display of the operation menu based on the operation history observed by the state observation unit.

JP 2017-138881 A
In recent years, superimposed display in three-dimensional space using AR (Augmented Reality) glasses has attracted attention as a mechanism for providing users with information for work assistance and the like. Another conventional display device is a display of information on the screen of a PC (Personal Computer) or the like.

In an information providing system that provides information by display in this way, not only the content of the provided information but also its display position and display size affect the user's convenience. It is also preferable that the user needs as few operations as possible to display the provided information at the optimum position and size.

According to the technique disclosed in Patent Document 1, an optimum operation menu for the user can be determined by machine learning based on the operation history of the operator (user).

However, good or bad convenience for the user does not always appear in the operation history; the user may perform the same operations whether the convenience of the display is high or low.

An object of the present invention is to provide a technique for improving the convenience of displaying provided information in a configuration in which information is provided by display.

An information providing system according to one aspect of the present invention is an information providing system that displays provided information for a user on a display device, comprising: a display content generation unit that generates the display content of the provided information on the display device and causes the display device to display it; and a display optimization unit that, based on sensor information obtained from a sensor, estimates the user's reaction to the display mode of the display content on the display device and adjusts the display mode of the display content according to that reaction, wherein the display content generation unit causes the display device to display the display content in the display mode adjusted by the display optimization unit.

According to the present invention, display content that is easy for the user to see and use can be acquired automatically, and the convenience of displaying provided information can be improved.
FIG. 1 is a block diagram of the information providing system according to this embodiment.
FIG. 2 is a diagram showing an internal configuration example of the display optimization unit shown in FIG. 1.
FIG. 3 is a diagram showing a specific example of the reward calculated by the reward calculation unit shown in FIG. 2.
FIG. 4 is a diagram showing a specific example of the processing content output from the processing content determination unit shown in FIG. 2.
FIG. 5 is a flowchart for the case where learning is performed for the display content optimization processing in the display optimization unit shown in FIGS. 1 and 2.
FIG. 6 is a flowchart for the case where learning is not performed for the display content optimization processing in the display optimization unit shown in FIGS. 1 and 2.
FIGS. 7 and 8 are diagrams for explaining specific operation examples in the information providing system shown in FIGS. 1 and 2.
Embodiments of the present invention will now be described with reference to the drawings.

In this embodiment, a system using a GUI is taken as an example of an information providing system, and a system that provides work support by giving work instructions to workers in a factory using AR (Augmented Reality) is described.

FIG. 1 is a block diagram of the information providing system according to this embodiment.

In this embodiment, a worker who is the user wears, as the display device, wearable glasses (AR glasses) capable of AR display that superimposes information on the real space, and the provided information is displayed superimposed on the real space; however, the provided information could also be displayed in a virtual space. The display device may also be a VR (Virtual Reality) head-mounted display, an MR (Mixed Reality) display, a PC display, or the like.
As shown in FIG. 1, the information providing system of this embodiment has a sensor 101, an operation information recognition unit 102, a control unit 103, a display content generation unit 104, a display unit 105 serving as the display device, a work environment recognition unit 106, a display optimization unit 107, a work analysis unit 108, a user identification unit 109, a learning result storage unit 110, and a business information database 111. In this embodiment the display device is included in the information providing system, but the information providing system may instead be configured as a server separate from the display device and accessible from it; that is, the display device may or may not be included in the information providing system.
The sensor 101 comprises sensors such as an acceleration sensor, gyro sensor, camera, and microphone provided on the wearable glasses, and a myoelectric sensor worn by the user. The sensor 101 outputs the sensor information acquired by these sensors. An illuminance sensor, temperature sensor, and barometric pressure sensor may also be included as the sensor 101.

The work environment recognition unit 106 acquires the sensor information output from the sensor 101 and, based on it, estimates a three-dimensional map of the work space forming the user's work environment and the user's posture, consisting of the position and orientation of the wearable glasses in that work space, and outputs work environment information composed of these. The work environment recognition unit 106 may estimate only one of the three-dimensional map of the work space and the user's posture. It also estimates, based on the sensor information output from the sensor 101, conditions of the work environment such as the weather and whether it is day or night.

The operation information recognition unit 102 acquires the sensor information output from the sensor 101, estimates or recognizes the user's operations on the system based on it, and outputs the result as user operation information. For example, the operation information recognition unit 102 recognizes an operation by estimating the user's action from a gesture captured by the camera serving as the sensor 101, or by performing speech recognition on voice input to the microphone serving as the sensor 101.

The control unit 103 acquires the operation information output from the operation information recognition unit 102 and controls the operation of the wearable glasses according to it. For example, the control unit 103 shows or erases the AR display and launches designated applications. It also outputs a control signal for displaying the provided information on the display unit 105.
The display content generation unit 104 generates the specific display content of the provided information on the display unit 105 based on the control signal output from the control unit 103, and causes the display unit 105 to display it. At that time, as described later, the display optimization unit 107 adjusts the display position of the display content on the display unit 105 based on the work environment information output from the work environment recognition unit 106, so that by displaying the content on the display unit 105 in accordance with this adjustment, an AR display can be made to appear as if it exists in real space. The display content generation unit 104 thus causes the display unit 105 to display the generated display content in the display mode adjusted by the display optimization unit 107.

The work analysis unit 108 receives the video obtained from the wearable camera among the sensors 101 and the sensor information output from the myoelectric sensor, and estimates work information indicating what work the user is currently performing through image recognition processing and machine learning. The work content is defined, for example, as a combination of a tool, a work target (screws, bolts, etc.), which part of the target device is being worked on, and an action (tightening/loosening), such as loosening the screws of the cover of the target device with a screwdriver or tightening bolts with a wrench.
The display optimization unit 107 receives the sensor information output from the sensor 101, estimates the user's reaction to the display mode of the display content on the display unit 105 based on it, and adjusts the display mode of the display content according to this reaction. The user's reaction is a bodily reaction, for example body movement, a change in posture, or a change in line of sight; gestures, changes in facial expression, and muttering to oneself may also be included, as may the progress of the work related to the display content. The display optimization unit 107 also adjusts the display mode of the display content based on the work environment recognition information output from the work environment recognition unit 106. For example, the working posture differs between clear and rainy weather even for the same task, and the user's field of view is expected to differ between daytime and nighttime, so adjusting the display mode based on the work environment recognition information allows it to be optimized. Adjusting the display mode in this way, based also on the work environment information output from the work environment recognition unit 106, reflects what reaction the user had in what environment and improves the convenience of displaying the provided information. Furthermore, the display optimization unit 107 receives the operation information output from the operation information recognition unit 102 and the work environment recognition information output from the work environment recognition unit 106, and automatically optimizes the adjusted display mode of the display content by updating it through machine learning using the sensor information, the operation information, and the work environment recognition information. The display mode of the display content is at least one parameter among the display position, the size, and the presence or absence of display. The initial values of the parameters optimized by machine learning are downloaded from the learning result storage unit 110, and the parameters optimized by learning are uploaded to the learning result storage unit 110 to update it. Possible display mode attributes include the display position, size, color, changes (such as blinking), and transparency.
The learning result storage unit 110 stores the parameters optimized by the display optimization unit 107. The display optimization unit 107 adjusts the display mode parameters for each model of the display unit 105 and for each user identified by the user identification unit 109, and the adjustment result is stored and managed in the learning result storage unit 110 as display mode information associated with the model and the user. This makes it possible to display the provided information in an appropriate display mode for each model when several types of display devices are used. The learning result storage unit 110 may be provided in a local environment, such as a flash memory mounted on the wearable glasses, or may be centrally managed on another networked device or in the cloud. Centralized management on another device or in the cloud has the advantage that storage capacity is easy to add as the amount of learning result data grows. Also, by centrally managing the results learned on multiple devices (for example, wearable glasses) for each user, even when the same user tries a different device, that device can acquire the display mode information from the learning result storage unit 110 over the network, use the previous learning data as initial values, and display the content in a display mode suited to the user. Moreover, if the learning results of another user who performs similar work are stored, they can also be used as initial values.

The user identification unit 109 identifies which pre-registered user is using the system. Any general personal authentication technique may be used, such as password authentication or biometric authentication such as fingerprint authentication.

The business information database 111 consists of work record data and work instruction data for work performed using the system, as well as user data on the users who perform the work, and is accessible from the system via the network 112. When work is performed using the system, the system accesses the business information database 111 and refers to the stored data. For example, the work record data may store a preferred posture for performing the work, and the work may be performed with reference to this posture. The business information database 111 may also be built into the system.
FIG. 2 is a diagram showing an internal configuration example of the display optimization unit 107 shown in FIG. 1. In this example, a case where reinforcement learning is used as the machine learning method is described.

As shown in FIG. 2, the display optimization unit 107 shown in FIG. 1 may include a state estimation unit 201, a reward calculation unit 202, a value function update unit 203, and a processing content determination unit 204.
The state estimation unit 201 estimates the state s for reinforcement learning based on the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, the current display content generated by the display content generation unit 104, and the work information output from the work analysis unit 108. Specifically, it outputs two kinds of information as parameters of the state s:
(1) user view information, that is, how the display appears to the user, estimated from the wearable camera video that is the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, and the current display content generated by the display content generation unit 104; and
(2) the work information output from the work analysis unit 108.
The reward calculation unit 202 receives the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102, and calculates and outputs the reward r for reinforcement learning.

FIG. 3 is a diagram showing a specific example of the reward r calculated by the reward calculation unit 202 shown in FIG. 2. In FIG. 3, the condition column shows the condition for giving each reward, the input column shows the input information used to judge the condition, and the reward column shows each reward value. The reward values are merely examples and can be set to arbitrary values according to the actual application.

As shown in FIG. 3, the reward calculation unit 202 has rewards set for user reaction types derived from the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102; when a condition corresponding to one of them is met, it adds the associated reward, thereby calculating the reward r for the display mode of the current display content. That is, the reward calculation unit 202 calculates the reward r using the per-reaction-type rewards for each combination of the work content targeted by the provided information displayed on the display unit 105, the work environment, and the display unit 105.
No. 1 to No. 4 are rewards obtained directly from the operation information output from the operation information recognition unit 102. For example, a reward is obtained directly from operation information indicating that the display was erased, an erased display was brought back, the display was enlarged or reduced, or the display position was moved. That the user changed the display by direct operation suggests that the current display mode is unfavorable to the user, so giving a negative reward allows reinforcement learning to automatically learn a display that is easy for the user to use.

No. 5 and No. 6 are rewards for estimating the user's reaction to the current display mode from the sensor information output from the sensor 101 and the state s output from the state estimation unit 201, giving feedback without a direct operation. The reward calculation unit 202 estimates the user's motion from the user view information included in the state s and the head movement obtained from the sensor information, and gives a corresponding reward. For example, when the user is estimated to have moved the head forward or backward while directing the line of sight at the display content, as in No. 5, or to have peered into the back of a display with depth, as in No. 6, the display is considered hard for the user to see, and a corresponding negative reward is given. By learning not only from direct operations but also from such information estimating the user's reaction, user intent that is not reflected in operations can be captured and a more optimal display achieved.

No. 7 to No. 10 are rewards given when a series of work is completed. The details of the processing timing are described later with reference to FIG. 5.
 No.7は、一連の作業中にNo.1~No.6のいずれの条件も検知されなかった場合に正の報酬を与える。一連の作業中にNo.1~No.6のいずれの条件も検知されなかったということは、現在の表示態様がその一連の作業においてユーザにとって好ましいものと考えられるため、正の報酬を与えることにより、ユーザが表示に対して操作やみ見やすくするための行動をする必要のない、見やすい表示を学習することができる。 No. No. 7 during the series of operations. 1-No. A positive reward is given if none of the conditions of 6 is detected. No. during the series of work. 1-No. The fact that none of the conditions of 6 is detected means that the current display mode is preferable for the user in the series of operations. Therefore, by giving a positive reward, the user can easily operate and view the display. It is possible to learn an easy-to-see display that does not require any action.
 No. 8 through No. 10 are rewards given when the user directly rates the visibility of the display to the system. The input interface for the rating may be any means, such as a GUI, voice, or gestures, as long as the rating can be entered into the system as the user intends. The rating entered by the user may have three levels, "+20", "0", and "-20", as shown in FIG. 3, or may be divided into two levels, or more finely into four or more. Explicitly and directly feeding back the user's rating of the display mode of the display unit 105 in this way makes it possible to learn a display that better matches the user's intention, and also improves the rationality of the learning results. Moreover, by using reinforcement learning, the quality of each individual processing content can be learned not only from feedback on each individual processing content but also from an overall good/bad rating as described here.
 No. 11 is a reward given when the display order of content within the display content is evaluated.
 The more appropriately the icons of content frequently used by the user are arranged within the display content according to the user's viewpoint and line of sight included in the operation information, the more usable the display presumably is for the user, so a higher reward is obtained.
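 By way of illustration only, the accumulation of reaction-type rewards described above could be organized as a table keyed by the (work content, work environment, display device) combination, as in the following Python sketch. The reaction names, reward values, and key values below are assumptions for illustration and are not the actual entries of FIG. 3.

```python
# A minimal sketch of reward accumulation per reaction type, keyed by the
# (work content, work environment, display device) combination. All concrete
# names and values are illustrative assumptions, not the entries of FIG. 3.
REWARD_TABLE = {
    ("inspection", "factory", "hmd"): {
        "display_erased": -10,            # cf. No. 1: user turned the display off
        "display_restored": -10,          # cf. No. 2: user brought back an erased display
        "display_scaled": -5,             # cf. No. 3: user enlarged/reduced the display
        "display_moved": -5,              # cf. No. 4: user moved the display
        "head_moved_toward_display": -5,  # cf. No. 5: estimated "hard to see" reaction
        "peeked_behind_display": -5,      # cf. No. 6: estimated "hard to see" reaction
        "no_negative_reaction": +10,      # cf. No. 7: nothing detected during the work
    },
}

def compute_reward(work, env, device, detected_reactions):
    """Sum the rewards associated with each detected reaction type."""
    table = REWARD_TABLE[(work, env, device)]
    return sum(table[r] for r in detected_reactions if r in table)

# Example: during inspection work the user moved and shrank the display.
r = compute_reward("inspection", "factory", "hmd",
                   ["display_moved", "display_scaled"])
print(r)  # -10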
 The value function update unit 203 receives the state s output from the state estimation unit 201, the reward r output from the reward calculation unit 202, and the processing content a output from the processing content determination unit 204, and updates the value function from the state s, the reward r, and the processing content a. For updating the value function there are methods such as Q-learning, and any value function update method already proposed in the field of reinforcement learning, or proposed in the future, can be applied. The value function update unit 203 also calculates and outputs the value q of the processing content a based on the state s. For the initial value of the value function, the previous learning result for each user stored in the learning result storage unit 110 is used; after learning, the value function is uploaded to the learning result storage unit 110, updating the value function stored there.
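 By way of illustration, a tabular Q-learning update of the kind the value function update unit 203 may employ can be sketched as follows. The learning rate, discount factor, and dictionary-based table are assumptions chosen for illustration; the specification itself leaves the concrete update method open.

```python
# A minimal tabular Q-learning sketch for the value function update:
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
# alpha, gamma, and the dict-of-dicts representation are illustrative choices.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed values)

Q = defaultdict(lambda: defaultdict(float))  # Q[state][action] -> value q

def update_value_function(s, a, r, s_next, actions):
    best_next = max((Q[s_next][a2] for a2 in actions), default=0.0)
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

# Example: a negative reward for an "erase" action lowers its value.
update_value_function(s="state0", a="erase", r=-10, s_next="state1",
                      actions=["erase", "show", "do_nothing"])
print(Q["state0"]["erase"])  # -1.0
```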
 The processing content determination unit 204 receives the state s output from the state estimation unit 201 and the value q output from the value function update unit 203, and based on the state s and the value q, determines and outputs the processing content a concerning the display mode of the display content generated by the display content generation unit 104.
 FIG. 4 is a diagram showing specific examples of the processing content a output from the processing content determination unit 204 shown in FIG. 2.
 As shown in FIG. 4, the processing content a output from the processing content determination unit 204 shown in FIG. 2 can be, for example, erasing the display, bringing back an erased display, enlarging or reducing the display, moving the display, or doing nothing. When a plurality of objects are currently displayed, the processing content determination unit 204 determines the processing content for each object. For moving the display, the movement direction and the movement amount are also determined: variations of direction and amount are defined in advance, for example eight directions (up, down, left, and right plus the diagonals) and amounts such as 5 cm, 10 cm, and so on, and the processing content is determined by selecting from among them.
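 Expressed as data, the action space of FIG. 4 could be enumerated per displayed object roughly as follows. The direction names and the 5 cm / 10 cm step sizes follow the example in the text; the representation itself is an illustrative assumption.

```python
from itertools import product

# Candidate processing contents per displayed object, following FIG. 4.
BASIC_ACTIONS = ["erase", "show", "enlarge", "reduce", "do_nothing"]

# Move actions: 8 directions (up/down/left/right plus diagonals) combined
# with predefined amounts, e.g. 5 cm and 10 cm, as in the text.
DIRECTIONS = ["up", "down", "left", "right",
              "up_left", "up_right", "down_left", "down_right"]
AMOUNTS_CM = [5, 10]

MOVE_ACTIONS = [("move", d, amt) for d, amt in product(DIRECTIONS, AMOUNTS_CM)]

ACTIONS = [(a,) for a in BASIC_ACTIONS] + MOVE_ACTIONS
print(len(ACTIONS))  # 5 + 8*2 = 21 candidate actions per object
```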
 The display content optimization processing in the display optimization unit 107 configured as described above will be explained below.
 First, the case where learning is performed during the optimization processing in the display optimization unit 107 will be described.
 FIG. 5 is a flowchart for the case where learning is performed in the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2.
 In the following processing, one sequence runs from the start of work to the end of work. The start and end of work may be designated explicitly, switched ON/OFF by gesture or voice operation, or the work analysis unit 108 may automatically determine the start and end of a specific piece of work.
 In step 501, the display optimization unit 107 first downloads the value function of the user recognized by the user identification unit 109 from the learning result storage unit 110 and sets it as the initial value.
 When the display optimization unit 107 detects the start of work in step 502 based on the sensor information output from the sensor 101 and the work information output from the work analysis unit 108, it first estimates, in step 503, the state s for reinforcement learning based on the sensor information output from the sensor 101, the work environment recognition information output from the work environment recognition unit 106, and the work information output from the work analysis unit 108.
 Next, in step 504, the display optimization unit 107 determines and outputs the processing content a that maximizes the value function, based on the state s estimated in step 503 and the most recent value function downloaded from the learning result storage unit 110.
 When the processing content a is output from the display optimization unit 107, the display content generation unit 104 causes, in step 505, the display unit 105 to display the generated display content in a display mode according to the processing content a.
 In step 506, the display optimization unit 107 calculates and outputs the reinforcement-learning reward r based on the state s output from the state estimation unit 201, the sensor information output from the sensor 101, and the operation information output from the operation information recognition unit 102.
 Next, in step 507, the display optimization unit 107 updates the value function from the state s output from the state estimation unit 201, the reward r output from the reward calculation unit 202, and the processing content a output from the processing content determination unit 204.
 After the display optimization unit 107 detects the end of the work in step 508 based on the sensor information output from the sensor 101 and the work information output from the work analysis unit 108, if the user enters a rating of the display content during the series of work, as shown by No. 8 through No. 10 in FIG. 3, the display optimization unit 107 updates, in step 509, the value function using the entered rating as a reward and uploads it to the learning result storage unit 110 as a learning result.
 If it is determined in step 508 that the work has not yet ended, the processing returns to step 503, and the processing from step 503 onward is performed again.
 In the series of flows described above, the value function is updated at two points, steps 507 and 509, but either one of the two alone is acceptable.
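 Putting steps 503 through 509 together, one work sequence can be sketched as follows. The callables passed in (estimate_state, apply_action, observe_reward, work_finished, final_evaluation) are hypothetical stand-ins for the units of FIGS. 1 and 2; the epsilon-greedy exploration is an assumed addition, since the specification only requires choosing the processing content that maximizes the value function, and the download/upload of the value function (step 501 and the upload in step 509) is omitted for brevity.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # assumed hyperparameters
Q = defaultdict(lambda: defaultdict(float))  # learned value function

def q_update(s, a, r, s_next, actions):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max((Q[s_next][a2] for a2 in actions), default=0.0)
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

def run_work_sequence(actions, estimate_state, apply_action, observe_reward,
                      work_finished, final_evaluation):
    """One work sequence following steps 503-509 of FIG. 5; the callables
    are hypothetical stand-ins for the units of FIGS. 1 and 2."""
    s = estimate_state()                              # step 503
    while True:
        # Step 504: choose the value-maximizing processing content a
        # (the epsilon-greedy exploration is an assumed addition).
        if random.random() < EPSILON:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a2: Q[s][a2])
        apply_action(a)                               # step 505 (display)
        r = observe_reward()                          # step 506
        s_next = estimate_state()
        q_update(s, a, r, s_next, actions)            # step 507
        if work_finished():                           # step 508
            # Step 509: the user's explicit rating (e.g. +20/0/-20) as reward;
            # the upload to the learning result storage unit 110 is omitted.
            q_update(s_next, a, final_evaluation(), s_next, actions)
            break
        s = s_next
```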
 Since the display mode is optimized by reinforcement learning in this way, it can be optimized through the evaluation of simple reactions even when the user performs complicated work. Furthermore, executing reinforcement learning with the reward r calculated from reaction-type rewards defined for each combination of the work content of the work targeted by the provided information shown on the display unit 105, the work environment, and the display unit 105 makes it possible to set more appropriate rewards and to achieve better reinforcement learning.
 Next, the case where learning is not performed during the optimization processing in the display optimization unit 107 will be described.
 FIG. 6 is a flowchart for the case where learning is not performed in the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2.
 When learning is not performed in the display content optimization processing in the display optimization unit 107 shown in FIGS. 1 and 2, as shown in FIG. 6, the reward calculation of step 506 and the value function updates of steps 507 and 509 shown in FIG. 5 are not performed, and the display optimization unit 107 only optimizes the display using the learned value function q.
 The switching between the flow shown in FIG. 5 and the flow shown in FIG. 6 can be done, for example, as follows: the flow shown in FIG. 5 is executed as the basic flow, and when the reward r calculated by the display optimization unit 107 stops decreasing, the processing moves to the flow shown in FIG. 6; conversely, while the flow shown in FIG. 6 is being executed, if the work information output from the work analysis unit 108 changes significantly, the processing returns to the flow shown in FIG. 5.
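 This switching criterion can be expressed as a small controller, sketched below. The criterion itself (reward no longer decreasing, work information changing significantly) follows the text; the moving-average test and the window size are illustrative assumptions.

```python
from statistics import mean

# A minimal sketch of the learn/optimize mode switch: run the learning flow
# (FIG. 5) until the reward stops decreasing, then switch to the inference-only
# flow (FIG. 6); fall back to learning when the work information changes
# significantly. WINDOW and the comparison rule are illustrative assumptions.
WINDOW = 10

class ModeController:
    def __init__(self):
        self.mode = "learn"            # start with the FIG. 5 flow
        self.recent_rewards = []

    def observe(self, reward, work_info_changed):
        self.recent_rewards = (self.recent_rewards + [reward])[-2 * WINDOW:]
        if self.mode == "learn" and len(self.recent_rewards) == 2 * WINDOW:
            older = mean(self.recent_rewards[:WINDOW])
            newer = mean(self.recent_rewards[WINDOW:])
            if newer >= older:         # reward r no longer decreasing
                self.mode = "optimize"
        elif self.mode == "optimize" and work_info_changed:
            self.mode = "learn"
        return self.mode
```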
 As described above, in the present embodiment the display mode of the provided information is adjusted according to the user's reactions, so that aspects of convenience, good or bad, that do not appear in the user's operation history are reflected, and convenience can be improved. In addition, since display content suited to each user is learned automatically by reinforcement learning to optimize the display mode, the display mode can be optimized through the evaluation of simple reactions even when the user performs complicated work.
 The information providing method in the information providing system described above will be explained below with specific examples.
 FIG. 7 is a diagram for explaining a specific operation example in the information providing system shown in FIGS. 1 and 2.
 Without the information providing system shown in FIGS. 1 and 2, as shown in FIG. 7(a), when work is performed on the work target 301, the work instruction information 302 is displayed in the center, and the display stays as it is even when the worker approaches the work target 301, so the work target 301 ends up hidden behind the work instruction information 302. The worker therefore has to change the display position and size of the work instruction information 302 so that the work target 301 is visible while the work instruction information 302 remains displayed, and then work on the work target 301 while looking at the work instruction information 302 in that state.
 With the information providing system shown in FIGS. 1 and 2, on the other hand, as shown in FIG. 7(b), when the worker approaches the work target 301, the display position and size of the work instruction information 302 are changed automatically so that the worker can work easily; as a result, the work target 301 is visible while the work instruction information 302 remains displayed, and the worker works on the work target 301 while looking at the work instruction information 302 in that state.
 FIG. 8 is a diagram for explaining another specific operation example in the information providing system shown in FIGS. 1 and 2.
 Without the information providing system shown in FIGS. 1 and 2, as shown in FIG. 8(a), when work is performed on the work target 401, the worker searches through the work content information 402 for the work procedure for the work target 401; the work instruction information 403 indicating the work procedure for the work target 401 is then displayed superimposed on the work target 401, and in that state the worker works on the work target 401 while looking at the work instruction information 403.
 With the information providing system shown in FIGS. 1 and 2, on the other hand, as shown in FIG. 8(b), when the worker merely turns his or her gaze to the work target 401, the work instruction information 403 indicating the work procedure for the work target 401 is displayed superimposed on the work target 401, and in that state the worker works on the work target 401 while looking at the work instruction information 403.
 Although the present embodiment has been described taking as an example the case where reinforcement learning is used for learning the display content, other machine learning methods, for example supervised learning, may be used. In that case, the optimal display content can be learned by, for example, preparing the optimal display content corresponding to the user's view information as teacher data and training on it.
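 As a rough illustration of this supervised alternative, pairs of view information and the display content a human labeled as optimal could be fitted with any standard classifier. The 1-nearest-neighbor rule, the toy feature encoding, and the action labels below are assumptions for illustration only.

```python
# A minimal supervised sketch: learn a mapping from the user's view
# information (crudely encoded as a feature vector) to the display action
# labeled as optimal. The 1-NN rule and toy features are assumptions.
def nearest_neighbor_predict(train, x):
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    features, label = min(train, key=lambda fx: dist(fx[0], x))
    return label

# Teacher data: (view features, optimal display action).
# Features here: (distance to work target in m, gaze-on-display flag).
train = [((1.5, 1), "center_large"),
         ((0.4, 1), "move_aside_small"),
         ((0.4, 0), "erase")]

print(nearest_neighbor_predict(train, (0.5, 1)))  # -> "move_aside_small"
```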
 Also, although the present embodiment has been described taking as an example the case where learning is optimized for each user, learning may instead be performed for each attribute level, such as job title, rather than for each individual. In that case, even when a user for whom no learning has been performed at all carries out work, the display can be optimized using data already learned from people who perform similar work. Furthermore, the configuration may learn the optimal display for all users and all work, without distinguishing attributes or users. In that case, a universally suitable display can be learned that does not depend on the person or the work content.
 The embodiment described above is an example; the present invention is not limited to it, and various modifications are possible. For example, the embodiment has been described in detail for ease of understanding, and the invention is not necessarily limited to one provided with all of the configurations described.
 Each of the configurations above may be implemented partly or entirely in hardware, or realized by a processor executing programs. The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines in a product are necessarily shown. In practice, almost all configurations may be considered interconnected.
101…sensor, 102…operation information recognition unit, 103…control unit, 104…display content generation unit, 105…display unit, 106…work environment recognition unit, 107…display optimization unit, 108…work analysis unit, 109…user identification unit, 110…learning result storage unit, 111…business information database, 112…network

Claims (14)

  1.  An information providing system for causing a display device to display provided information to be provided to a user, the system comprising:
     a display content generation unit that generates display content of the provided information on the display device and causes the display device to display the display content; and
     a display optimization unit that estimates, based on sensor information obtained from a sensor, the user's reaction to the display mode of the display content on the display device and adjusts the display mode of the display content according to the reaction,
     wherein the display content generation unit causes the display device to display the display content in the display mode adjusted by the display optimization unit.
  2.  The information providing system according to claim 1, further comprising a work environment recognition unit that estimates the user's work environment based on the sensor information,
     wherein the display optimization unit adjusts the display mode of the display content according to the work environment.
  3.  The information providing system according to claim 1, wherein the display optimization unit optimizes the display mode of the display content by reinforcement learning in which rewards are set for the reaction types of the user's reactions.
  4.  The information providing system according to claim 3, wherein the display optimization unit executes the reinforcement learning according to reward information in which the reward for each reaction type is predetermined for each combination of the work content of the work targeted by the provided information, the work environment, and the display device.
  5.  The information providing system according to claim 3, wherein, in the reinforcement learning, the display optimization unit further sets a reward for an explicit evaluation by the user of the display of the provided information on the display device.
  6.  The information providing system according to claim 1, wherein the display optimization unit adjusts the display mode for each model of the display device, stores display mode information indicating the adjustment result in association with the model, and, when the user uses the display device, applies the display mode information stored in association with the model of that display device.
  7.  The information providing system according to claim 1, wherein the display optimization unit adjusts the display mode for each user, stores display mode information indicating the adjustment result in association with the user, and, when the user uses the display device, applies the display mode information stored in association with that user.
  8.  The information providing system according to claim 6 or 7, wherein the display mode information is recorded in an information processing apparatus capable of communicating via a network, and
     a plurality of information providing systems can acquire and use the display mode information from the information processing apparatus.
  9.  The information providing system according to claim 1, wherein the user's reactions to the display mode of the display content include operations to enlarge and reduce the display content.
  10.  The information providing system according to claim 1, wherein the user's reactions to the display mode of the display content include movements of the head forward and backward with the line of sight directed at the display content.
  11.  The information providing system according to claim 1, wherein the display device is a device that performs augmented reality display, and
     the display of the provided information is a display of information superimposed on the real space.
  12.  The information providing system according to claim 1, wherein the display device is a device that performs virtual reality display, and
     the display of the provided information is a display of information presented in a virtual space.
  13.  The information providing system according to claim 1, further comprising the display device.
  14.  An information providing method for causing a display device to display provided information to be provided to a user, wherein:
     a display content generation means determines display content of the provided information on the display device;
     a display optimization means estimates, based on sensor information obtained from a sensor, the user's reaction to the display mode when the display content is displayed on the display device, and adjusts the display mode of the display content according to the reaction; and
     the display content generation means causes the display device to display the display content in the adjusted display mode.
PCT/JP2018/004311 2018-02-08 2018-02-08 Information providing system and information providing method WO2019155564A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/004311 WO2019155564A1 (en) 2018-02-08 2018-02-08 Information providing system and information providing method

Publications (1)

Publication Number Publication Date
WO2019155564A1 true WO2019155564A1 (en) 2019-08-15

Family

ID=67548923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/004311 WO2019155564A1 (en) 2018-02-08 2018-02-08 Information providing system and information providing method

Country Status (1)

Country Link
WO (1) WO2019155564A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007121686A (en) * 2005-10-28 2007-05-17 Casio Comput Co Ltd Screen generating device and program
JP2007265274A (en) * 2006-03-29 2007-10-11 Sendai Foundation For Applied Information Sciences Physiology adaptive display device
JP2010117823A (en) * 2008-11-12 2010-05-27 Samsung Electronics Co Ltd Information processor and program
JP2013077013A (en) * 2012-11-20 2013-04-25 Sony Corp Display device and display method
JP2017054208A (en) * 2015-09-07 2017-03-16 富士通株式会社 File editing device, file editing method and file editing program
JP2017138881A (en) * 2016-02-05 2017-08-10 ファナック株式会社 Machine learning device for learning display of operation menu, numerical control device, machine tool system, manufacturing system, and machine learning method
WO2017221525A1 (en) * 2016-06-23 2017-12-28 ソニー株式会社 Information processing device, information processing method, and computer program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022543935A (en) * 2020-02-25 2022-10-17 青▲島▼理工大学 Myoelectric Signal-Torque Matching Method Based on Multi-grain Parallelized CNN Model
JP7261427B2 (en) 2020-02-25 2023-04-20 青▲島▼理工大学 Myoelectric Signal-Torque Matching Method Based on Multi-grain Parallelized CNN Model


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18905640; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18905640; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)