WO2019021447A1 - Wearable terminal display system, wearable terminal display method and program - Google Patents

Wearable terminal display system, wearable terminal display method and program Download PDF

Info

Publication number
WO2019021447A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
wearable terminal
display
future
target
Prior art date
Application number
PCT/JP2017/027351
Other languages
French (fr)
Japanese (ja)
Inventor
Shunji Sugaya (菅谷 俊二)
Original Assignee
Optim Corporation (株式会社オプティム)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optim Corporation (株式会社オプティム)
Priority to PCT/JP2017/027351
Publication of WO2019021447A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • The present invention relates to a wearable terminal display system, a wearable terminal display method, and a program for displaying a calculated future prediction, as augmented reality, on the display board of a wearable terminal for a prediction target seen through that display board.
  • In recent years, future prediction has increasingly been supported by IT. For example, an apparatus for identifying and extracting documents that predict the future has been provided (Patent Document 1).
  • However, the system of Patent Document 1 cannot provide a future prediction merely by looking at the prediction target.
  • In view of the above problem, the present invention aims to provide a wearable terminal display system, a wearable terminal display method, and a program that identify a prediction target from an image of the field of view of a wearable terminal and display a future prediction, calculated according to the prediction target, as augmented reality on the display board of the wearable terminal.
  • The present invention provides the following solutions.
  • An invention according to a first feature provides a wearable terminal display system for displaying a future prediction of a prediction target on the display board of a wearable terminal, comprising: image acquisition means for acquiring an image of the prediction target that has entered the field of view of the wearable terminal; identification means for analyzing the image to identify the prediction target; calculation means for calculating a future prediction for the prediction target; and future prediction display means for displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
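As a purely illustrative aid (not part of the patent disclosure), the four means of the first feature can be sketched as a simple pipeline; all class and method names here are hypothetical:

```python
# Illustrative sketch of the four means of the first feature (hypothetical
# names, not the patent's implementation).

class WearableTerminalDisplaySystem:
    def __init__(self, camera, classifier, predictor, display):
        self.camera = camera          # image acquisition means
        self.classifier = classifier  # identification means
        self.predictor = predictor    # calculation means
        self.display = display        # future prediction display means

    def update(self):
        image = self.camera.capture()             # acquire image in the field of view
        target = self.classifier.identify(image)  # identify the prediction target
        if target is not None:
            prediction = self.predictor.calculate(target)  # compute the future prediction
            self.display.show_ar(target, prediction)       # overlay it as augmented reality
```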
  • An invention according to the first feature also provides a wearable terminal display method for displaying a future prediction of a prediction target on the display board of a wearable terminal, comprising: an image acquisition step of acquiring an image of the prediction target that has entered the field of view of the wearable terminal; an identification step of analyzing the image to identify the prediction target; a calculation step of calculating a future prediction for the prediction target; and a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
  • An invention according to the first feature further provides a program causing a computer to execute: an image acquisition step of acquiring an image of a prediction target that has entered the field of view of a wearable terminal; an identification step of analyzing the image to identify the prediction target; a calculation step of calculating a future prediction for the prediction target; and a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
  • Simply by bringing a prediction target into the field of view of the wearable terminal, a future prediction corresponding to that target can be displayed on the display board of the wearable terminal.
  • FIG. 1 is a schematic view of a wearable terminal display system.
  • FIG. 2 is an example in which a future prediction is calculated and displayed on the display board of a wearable terminal.
  • The wearable terminal display system of the present invention is a system that displays, on the display board of a wearable terminal, a calculated future prediction as augmented reality for a prediction target seen through the display board.
  • A wearable terminal is a terminal with a field of view, such as smart glasses or a head-mounted display.
  • FIG. 1 is a schematic view of a wearable terminal display system according to a preferred embodiment of the present invention.
  • The wearable terminal display system includes image acquisition means, identification means, calculation means, and future prediction display means, which are realized by a control unit reading a predetermined program.
  • Although not shown, determination means, change means, detection means, action result display means, position and direction acquisition means, estimation means, guideline display means, and selection acceptance means may likewise be provided. These may be application based, cloud based, or otherwise.
  • Each of the means described above may be realized by a single computer or by two or more computers (for example, a server and a terminal).
  • The image acquisition means acquires an image of a prediction target that has entered the field of view of the wearable terminal.
  • An image captured by the camera of the wearable terminal may be acquired, or the image may come from a device other than the wearable terminal, as long as such an image can be obtained.
  • The image may be a moving image or a still image. To display future predictions in real time, a real-time image is preferable.
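A minimal sketch of the image acquisition means, assuming the wearable terminal's camera is exposed as an ordinary OpenCV video device; the device index is an assumption:

```python
# Sketch only: grab one real-time frame from an assumed camera device.
import cv2

def acquire_frame(device_index: int = 0):
    """Return one frame from the camera, or None on failure."""
    cap = cv2.VideoCapture(device_index)
    try:
        ok, frame = cap.read()  # a single still frame; loop for moving images
        return frame if ok else None
    finally:
        cap.release()
```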
  • The identification means analyzes the image to identify the prediction target. For example, it identifies whether the prediction target is a Kagoshima black pig, a Yubari melon, or the parcel of land at Kaigai 1-chome, Minato-ku, Tokyo.
  • The prediction target can be identified from its color, shape, size, features, and the like. Of course, the prediction target is not limited to these examples.
  • Machine learning may be used to improve the accuracy of the image analysis. For example, machine learning is performed using past images of the prediction target as teacher data.
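As one hedged illustration of such machine learning, a classifier could be trained on past images (teacher data) using crude color and size features; the feature extraction and model choice below are assumptions for illustration only:

```python
# Sketch of the identification means: a k-nearest-neighbor classifier over
# deliberately simple features (mean color plus image size).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    return np.concatenate([image.reshape(-1, 3).mean(axis=0),  # mean B, G, R (color)
                           [h * w]])                           # crude size proxy

def train_identifier(past_images, labels):
    X = np.array([features(img) for img in past_images])
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X, labels)  # labels e.g. "Kagoshima black pig", "Yubari melon"
    return model

def identify(model, image) -> str:
    return model.predict([features(image)])[0]
```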
  • The calculation means calculates a future prediction corresponding to the prediction target.
  • A future prediction is, for example, the size to which a Kagoshima black pig will grow, the sugar content a Yubari melon will reach, or the price per tsubo at which the land at Kaigai 1-chome, Minato-ku, Tokyo is likely to sell. Of course, future predictions are not limited to these.
  • The future prediction corresponding to the prediction target may be calculated by referring to a database in which future predictions are registered in advance.
  • Alternatively, web content linked in advance to the prediction target may be accessed to calculate the future prediction. For example, the prediction can be calculated from web content by assigning a URL or the like that links the prediction target to the future prediction.
  • The future prediction may also be calculated from web content found by searching for the prediction target on the Internet. For example, past information is sometimes posted on information sites, so the prediction can be calculated from an Internet search. In some cases, future predictions can also be calculated from social networking services (SNS) or word-of-mouth review sites.
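The calculation strategies above might be combined as in the following sketch, which tries a pre-registered database first and then web content linked to the target; PREDICTION_DB, LINKED_URLS, and extract_prediction are hypothetical stand-ins:

```python
# Sketch of the calculation means: database lookup first, linked web content
# second; a web/SNS search could be a further fallback.
import urllib.request

PREDICTION_DB = {"Yubari melon": "expected sugar content: 12 degrees Brix"}   # pre-registered
LINKED_URLS = {"Kagoshima black pig": "https://example.com/kurobuta-growth"}  # assumed links

def calculate_prediction(target: str) -> str | None:
    if target in PREDICTION_DB:                   # 1) database registered in advance
        return PREDICTION_DB[target]
    if target in LINKED_URLS:                     # 2) web content linked to the target
        with urllib.request.urlopen(LINKED_URLS[target]) as response:
            return extract_prediction(response.read())
    return None                                   # 3) could fall back to web/SNS search

def extract_prediction(html: bytes) -> str:
    # Placeholder: real parsing of the page would go here.
    return html.decode(errors="ignore")[:100]
```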
  • The future prediction display means displays, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board. For example, as shown in FIG. 2, a future prediction drawn with broken lines is displayed as augmented reality for a prediction target drawn with solid lines seen through the display board.
  • For ease of understanding, solid lines represent real objects and broken lines represent augmented reality.
  • The future prediction displayed as augmented reality may be displayed so as to overlap the prediction target seen through the display board; however, since this makes the prediction target hard to see, the display of the future prediction may be switchable ON/OFF.
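A sketch of how the display means might render the prediction offset from the target with an ON/OFF switch, using OpenCV as a stand-in renderer (the patent does not prescribe any rendering library):

```python
# Sketch: annotate a frame with the future prediction, offset above the
# target's bounding box so the real target stays visible.
import cv2

def draw_prediction(frame, target_box, text, show: bool = True):
    """target_box = (x, y, w, h) of the prediction target on the display."""
    if not show:  # display of the future prediction switched OFF
        return frame
    x, y, w, h = target_box
    anchor = (x, max(0, y - 10))  # above the box rather than overlapping it
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 1)  # "broken line" stand-in
    cv2.putText(frame, text, anchor, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame
```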
  • The determination means determines whether the displayed future prediction has been viewed. Whether the future prediction has been viewed may be determined by acquiring the image being viewed and analyzing it. It may also be determined from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
  • The change means changes the future prediction to a viewed state when it is determined to have been viewed, and changes its degree of attention so that the future prediction will be viewed when it is determined not to have been viewed. In this way, it is possible to visually grasp which future predictions have and have not been viewed. For example, a future prediction may be marked as viewed by checking its check box or by stamping it. The degree of attention may be changed by changing the color or size of the future prediction, or by stamping it so that it stands out.
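One way to model the determination and change means, assuming gaze input is available from a line-of-sight sensor; the data structure is hypothetical:

```python
# Sketch of the determination and change means: mark a prediction as viewed
# when gazed at, otherwise raise its visual attention level.
from dataclasses import dataclass

@dataclass
class PredictionAnnotation:
    text: str
    viewed: bool = False
    attention: int = 0  # higher = rendered larger, brighter, or stamped

def update_annotation(ann: PredictionAnnotation, gazed_at: bool) -> None:
    if gazed_at:
        ann.viewed = True   # e.g. tick its check box or stamp it as viewed
    else:
        ann.attention += 1  # change color/size so the prediction gets noticed
```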
  • The detection means detects an action on the displayed future prediction.
  • An action is, for example, a gesture, a hand movement, or a gaze movement.
  • An action on the future prediction can be detected by acquiring the image being viewed and analyzing it. It may also be detected from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
  • The action result display means displays, on the display board of the wearable terminal, the result corresponding to the action as augmented reality for the prediction target seen through the display board.
  • For example, the display of the future prediction may be erased when an action to erase it is detected.
  • For example, a link attached to the future prediction may be opened when an action to open it is detected.
  • For example, a page of the future prediction may be turned when an action to turn the page is detected.
  • Of course, other actions may be used.
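A hedged dispatch sketch for the detection and action result display means; the action names and the annotation object are invented for illustration:

```python
# Sketch: route a detected action to its display result.
import webbrowser

def handle_action(action: str, annotation) -> None:
    """'annotation' is a hypothetical AR object with visible/url/page fields."""
    if action == "erase":
        annotation.visible = False       # erase the future prediction display
    elif action == "open_link" and getattr(annotation, "url", None):
        webbrowser.open(annotation.url)  # open the link attached to the prediction
    elif action == "turn_page":
        annotation.page += 1             # turn to the next page of the prediction
    # other actions may be handled here
```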
  • The position and direction acquisition means acquires the terminal position and the imaging direction of the wearable terminal.
  • For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal.
  • For example, when imaging is performed by the wearable terminal, the imaging direction can be acquired from the geomagnetic sensor or acceleration sensor of the wearable terminal. These may also be acquired from other sources.
  • The estimation means estimates the position of the prediction target based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged prediction target can be estimated.
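Under the assumption that a distance to the target is available (the patent does not specify how distance is obtained), the estimation could use a flat-earth approximation valid for short ranges:

```python
# Sketch of the estimation means: project the imaging direction (a compass
# bearing) from the terminal's GPS position over an assumed distance.
import math

EARTH_RADIUS_M = 6_371_000.0

def estimate_target_position(lat_deg, lon_deg, bearing_deg, distance_m):
    """Return (lat, lon) of the target; small-distance approximation."""
    d_north = distance_m * math.cos(math.radians(bearing_deg))
    d_east = distance_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(d_north / EARTH_RADIUS_M)
    dlon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon
```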
  • The identification means may also identify the prediction target from the target position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information improves the accuracy of identifying that a building is Senso-ji, the reliability of the future prediction displayed for it also improves.
  • The guideline display means displays, on the display board of the wearable terminal, a guideline for imaging the prediction target as augmented reality.
  • For example, a guideline such as a frame or crosshairs may be displayed. Having the image captured along the guideline makes image analysis easier.
  • The acquisition means may acquire only images captured along the guideline. By acquiring and analyzing only images captured along the guideline, the prediction target can be identified efficiently.
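A minimal sketch of guideline-based acquisition: only the region inside the guideline frame is passed on to analysis; the frame coordinates are arbitrary illustration values:

```python
# Sketch: crop the captured frame to the AR guideline so analysis sees only
# the region the user framed.
def crop_to_guideline(frame, guideline=(100, 100, 300, 300)):
    """guideline = (x, y, w, h) of the frame drawn on the display board."""
    x, y, w, h = guideline
    return frame[y:y + h, x:x + w]
```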
  • The selection acceptance means accepts selection of a selection target from among the prediction targets seen through the display board of the wearable terminal.
  • For example, the selection may be accepted by gazing at a prediction target seen through the display board of the wearable terminal for a certain period of time.
  • For example, the selection may be accepted by touching a prediction target seen through the display board of the wearable terminal.
  • For example, the selection may be accepted by placing a cursor on a prediction target seen through the display board of the wearable terminal.
  • Such selections may be detected with, for example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
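For instance, gaze-dwell selection could be modeled as follows, assuming a line-of-sight sensor reports which target is under the gaze point each frame; the dwell time is an illustrative value:

```python
# Sketch of the selection acceptance means: a target is selected once the
# gaze has rested on it for a fixed dwell time.
import time

DWELL_SECONDS = 1.5  # "a certain period of time" (illustrative value)

class DwellSelector:
    def __init__(self):
        self._target, self._since = None, None

    def update(self, gazed_target):
        """Call each frame with the target under the gaze point (or None)."""
        now = time.monotonic()
        if gazed_target != self._target:
            self._target, self._since = gazed_target, now  # gaze moved: restart timer
            return None
        if self._target is not None and now - self._since >= DWELL_SECONDS:
            return self._target  # selection accepted
        return None
```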
  • The future prediction display means may display, on the display board of the wearable terminal, the future prediction as augmented reality only for the selected selection target seen through the display board. Since the future prediction is displayed as augmented reality only for the selected target, the future prediction can be grasped precisely. If future predictions were displayed for all identified prediction targets, the display screen could become cluttered.

[Description of operation]
  • The wearable terminal display method according to the present invention is a method of displaying, on the display board of a wearable terminal, a calculated future prediction as augmented reality for a prediction target seen through the display board.
  • The wearable terminal display method includes an image acquisition step, an identification step, a calculation step, and a future prediction display step. Although not shown in the drawings, a determination step, a change step, a detection step, an action result display step, a position and direction acquisition step, an estimation step, a guideline display step, and a selection acceptance step may likewise be provided.
  • The image acquisition step acquires an image of a prediction target that has entered the field of view of the wearable terminal.
  • An image captured by the camera of the wearable terminal may be acquired, or the image may come from a device other than the wearable terminal, as long as such an image can be obtained.
  • The image may be a moving image or a still image. To display future predictions in real time, a real-time image is preferable.
  • The identification step analyzes the image to identify the prediction target. For example, it identifies whether the prediction target is a Kagoshima black pig, a Yubari melon, or the parcel of land at Kaigai 1-chome, Minato-ku, Tokyo.
  • The prediction target can be identified from its color, shape, size, features, and the like. Of course, the prediction target is not limited to these examples.
  • Machine learning may be used to improve the accuracy of the image analysis. For example, machine learning is performed using past images of the prediction target as teacher data.
  • The calculation step calculates a future prediction corresponding to the prediction target.
  • A future prediction is, for example, the size to which a Kagoshima black pig will grow, the sugar content a Yubari melon will reach, or the price per tsubo at which the land at Kaigai 1-chome, Minato-ku, Tokyo is likely to sell. Of course, future predictions are not limited to these.
  • The future prediction corresponding to the prediction target may be calculated by referring to a database in which future predictions are registered in advance.
  • Alternatively, web content linked in advance to the prediction target may be accessed to calculate the future prediction. For example, the prediction can be calculated from web content by assigning a URL or the like that links the prediction target to the future prediction.
  • The future prediction may also be calculated from web content found by searching for the prediction target on the Internet. For example, past information is sometimes posted on information sites, so the prediction can be calculated from an Internet search. In some cases, future predictions can also be calculated from social networking services (SNS) or word-of-mouth review sites.
  • The future prediction display step displays, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
  • For example, as shown in FIG. 2, a future prediction drawn with broken lines is displayed as augmented reality for a prediction target drawn with solid lines seen through the display board.
  • For ease of understanding, solid lines represent real objects and broken lines represent augmented reality.
  • The future prediction displayed as augmented reality may be displayed so as to overlap the prediction target seen through the display board; however, since this makes the prediction target hard to see, the display of the future prediction may be switchable ON/OFF.
  • The determination step determines whether the displayed future prediction has been viewed. Whether the future prediction has been viewed may be determined by acquiring the image being viewed and analyzing it. It may also be determined from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
  • The change step changes the future prediction to a viewed state when it is determined to have been viewed, and changes its degree of attention so that the future prediction will be viewed when it is determined not to have been viewed.
  • For example, a future prediction may be marked as viewed by checking its check box.
  • For example, a future prediction may be marked as viewed by stamping it.
  • The degree of attention may be changed by changing the color or size of the future prediction, or by stamping it so that it stands out.
  • The detection step detects an action on the displayed future prediction.
  • An action is, for example, a gesture, a hand movement, or a gaze movement.
  • An action on the future prediction can be detected by acquiring the image being viewed and analyzing it. It may also be detected from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
  • The action result display step displays, on the display board of the wearable terminal, the result corresponding to the action as augmented reality for the prediction target seen through the display board.
  • For example, the display of the future prediction may be erased when an action to erase it is detected.
  • For example, a link attached to the future prediction may be opened when an action to open it is detected.
  • For example, a page of the future prediction may be turned when an action to turn the page is detected.
  • Of course, other actions may be used.
  • The position and direction acquisition step acquires the terminal position and the imaging direction of the wearable terminal.
  • For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal.
  • For example, when imaging is performed by the wearable terminal, the imaging direction can be acquired from the geomagnetic sensor or acceleration sensor of the wearable terminal. These may also be acquired from other sources.
  • The estimation step estimates the position of the prediction target based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged prediction target can be estimated.
  • The identification step may also identify the prediction target from the target position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information improves the accuracy of identifying that a building is Senso-ji, the reliability of the future prediction displayed for it also improves.
  • The guideline display step displays, on the display board of the wearable terminal, a guideline for imaging the prediction target as augmented reality.
  • For example, a guideline such as a frame or crosshairs may be displayed. Having the image captured along the guideline makes image analysis easier.
  • The acquisition step may acquire only images captured along the guideline. By acquiring and analyzing only images captured along the guideline, the prediction target can be identified efficiently.
  • The selection acceptance step accepts selection of a selection target from among the prediction targets seen through the display board of the wearable terminal.
  • For example, the selection may be accepted by gazing at a prediction target seen through the display board of the wearable terminal for a certain period of time.
  • For example, the selection may be accepted by touching a prediction target seen through the display board of the wearable terminal.
  • For example, the selection may be accepted by placing a cursor on a prediction target seen through the display board of the wearable terminal.
  • Such selections may be detected with, for example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
  • The future prediction display step may display, on the display board of the wearable terminal, the future prediction as augmented reality only for the selected selection target seen through the display board. Since the future prediction is displayed as augmented reality only for the selected target, the future prediction can be grasped precisely. If future predictions were displayed for all identified prediction targets, the display screen could become cluttered.
  • The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program.
  • The program may be, for example, an application installed on a computer, may be provided in SaaS (Software as a Service) form from a computer via a network, or may be provided in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), or a DVD (DVD-ROM, DVD-RAM, etc.).
  • In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it.
  • The program may also be recorded in advance on a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from the storage device to the computer via a communication line.
  • Specific algorithms for the machine learning described above may include the nearest neighbor method, the naive Bayes method, decision trees, support vector machines, and reinforcement learning. Deep learning, in which a neural network is used to generate the feature quantities for learning by itself, may also be used.
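For reference, most of the listed algorithms map directly onto scikit-learn estimators, any of which could serve as the identification model given feature vectors X and labels y (reinforcement learning and deep learning would need other tooling, such as a neural network library); this mapping is an illustration, not the patent's implementation:

```python
# Sketch: the classical algorithms named above, as interchangeable models.
from sklearn.neighbors import KNeighborsClassifier  # nearest neighbor method
from sklearn.naive_bayes import GaussianNB          # naive Bayes method
from sklearn.tree import DecisionTreeClassifier     # decision tree
from sklearn.svm import SVC                         # support vector machine

def build_model(kind: str):
    return {
        "nearest_neighbor": KNeighborsClassifier(),
        "naive_bayes": GaussianNB(),
        "decision_tree": DecisionTreeClassifier(),
        "svm": SVC(),
    }[kind]
```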

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To identify a prediction subject from an image of the field of view of a wearable terminal, and display a future prediction calculated according to the prediction subject, as augmented reality on the display of the wearable terminal. [Solution] A wearable terminal display system which displays future predictions of a prediction subject on the display of a wearable terminal, the wearable terminal display system comprising: an image acquisition means which acquires an image of the prediction subject which has entered the field of view of the wearable terminal; an identifying means which identifies the prediction subject by performing image analysis on the image; a calculating means which calculates a future prediction for the prediction subject; and a future prediction display means which displays on the display of the wearable terminal, as augmented reality, the future prediction for the prediction subject which is visible through the display.

Description

Wearable terminal display system, wearable terminal display method and program
The present invention relates to a wearable terminal display system, a wearable terminal display method, and a program for displaying a calculated future prediction, as augmented reality, on the display board of a wearable terminal for a prediction target seen through that display board.
In recent years, future prediction has increasingly been supported by IT. For example, an apparatus for identifying and extracting documents that predict the future has been provided (Patent Document 1).
JP 2016-206751 A
However, the system of Patent Document 1 cannot provide a future prediction merely by looking at the prediction target.
In view of the above problem, the present invention aims to provide a wearable terminal display system, a wearable terminal display method, and a program that identify a prediction target from an image of the field of view of a wearable terminal and display a future prediction, calculated according to the prediction target, as augmented reality on the display board of the wearable terminal.
The present invention provides the following solutions.
An invention according to a first feature provides a wearable terminal display system for displaying a future prediction of a prediction target on the display board of a wearable terminal, the system comprising: image acquisition means for acquiring an image of the prediction target that has entered the field of view of the wearable terminal; identification means for analyzing the image to identify the prediction target; calculation means for calculating a future prediction for the prediction target; and future prediction display means for displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
An invention according to the first feature also provides a wearable terminal display method for displaying a future prediction of a prediction target on the display board of a wearable terminal, the method comprising: an image acquisition step of acquiring an image of the prediction target that has entered the field of view of the wearable terminal; an identification step of analyzing the image to identify the prediction target; a calculation step of calculating a future prediction for the prediction target; and a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
An invention according to the first feature further provides a program causing a computer to execute: an image acquisition step of acquiring an image of a prediction target that has entered the field of view of a wearable terminal; an identification step of analyzing the image to identify the prediction target; a calculation step of calculating a future prediction for the prediction target; and a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
Simply by bringing a prediction target into the field of view of the wearable terminal, a future prediction corresponding to that target can be displayed on the display board of the wearable terminal.
FIG. 1 is a schematic view of the wearable terminal display system. FIG. 2 is an example in which a future prediction is calculated and displayed on the display board of a wearable terminal.
The best mode for carrying out the present invention is described below. This is merely an example, and the technical scope of the present invention is not limited to it.
The wearable terminal display system of the present invention is a system that displays, on the display board of a wearable terminal, a calculated future prediction as augmented reality for a prediction target seen through the display board. A wearable terminal is a terminal with a field of view, such as smart glasses or a head-mounted display.
An outline of a preferred embodiment of the present invention is described with reference to FIG. 1, which is a schematic view of a wearable terminal display system according to a preferred embodiment of the present invention.
As shown in FIG. 1, the wearable terminal display system includes image acquisition means, identification means, calculation means, and future prediction display means, which are realized by a control unit reading a predetermined program. Although not shown, determination means, change means, detection means, action result display means, position and direction acquisition means, estimation means, guideline display means, and selection acceptance means may likewise be provided. These may be application based, cloud based, or otherwise. Each of the means described above may be realized by a single computer or by two or more computers (for example, a server and a terminal).
The image acquisition means acquires an image of a prediction target that has entered the field of view of the wearable terminal. An image captured by the camera of the wearable terminal may be acquired, or the image may come from a device other than the wearable terminal, as long as such an image can be obtained. The image may be a moving image or a still image. To display future predictions in real time, a real-time image is preferable.
The identification means analyzes the image to identify the prediction target. For example, it identifies whether the prediction target is a Kagoshima black pig, a Yubari melon, or the parcel of land at Kaigai 1-chome, Minato-ku, Tokyo. The prediction target can be identified from its color, shape, size, features, and the like. Of course, the prediction target is not limited to these examples. If identifying all the prediction targets in the image would take too long, only the prediction target at the center of the field of view of the wearable terminal may be identified; this greatly reduces the time required for identification. Machine learning may be used to improve the accuracy of the image analysis. For example, machine learning is performed using past images of the prediction target as teacher data.
The calculation means calculates a future prediction corresponding to the prediction target. A future prediction is, for example, the size to which a Kagoshima black pig will grow, the sugar content a Yubari melon will reach, or the price per tsubo at which the land at Kaigai 1-chome, Minato-ku, Tokyo is likely to sell. Of course, future predictions are not limited to these. The future prediction corresponding to the prediction target may be calculated by referring to a database in which future predictions are registered in advance. Alternatively, web content linked in advance to the prediction target may be accessed to calculate the future prediction. For example, the prediction can be calculated from web content by assigning a URL or the like that links the prediction target to the future prediction. The future prediction may also be calculated from web content found by searching for the prediction target on the Internet. For example, past information is sometimes posted on information sites, so the prediction can be calculated from an Internet search. In some cases, future predictions can also be calculated from social networking services (SNS) or word-of-mouth review sites.
The future prediction display means displays, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board. For example, as shown in FIG. 2, a future prediction drawn with broken lines is displayed as augmented reality for a prediction target drawn with solid lines seen through the display board. Here, for ease of understanding, solid lines represent real objects and broken lines represent augmented reality. By displaying the future prediction in augmented reality against the solid-line prediction target seen through the display board, the viewer can visually grasp what future prediction exists for the prediction target. The future prediction displayed as augmented reality may be displayed so as to overlap the prediction target seen through the display board; however, since this makes the prediction target hard to see, the display of the future prediction may be switchable ON/OFF.
The determination means determines whether the displayed future prediction has been viewed. Whether the future prediction has been viewed may be determined by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
The change means changes the future prediction to a viewed state when it is determined to have been viewed, and changes its degree of attention so that the future prediction will be viewed when it is determined not to have been viewed. In this way, it is possible to visually grasp which future predictions have and have not been viewed. For example, a future prediction may be marked as viewed by checking its check box or by stamping it. The degree of attention may be changed by changing the color or size of the future prediction, or by stamping it so that it stands out.
The detection means detects an action on the displayed future prediction. An action is, for example, a gesture, a hand movement, or a gaze movement. An action on the future prediction can be detected by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
The action result display means displays, on the display board of the wearable terminal, the result corresponding to the action as augmented reality for the prediction target seen through the display board. For example, the display of the future prediction may be erased when an action to erase it is detected; a link attached to the future prediction may be opened when an action to open it is detected; and a page of the future prediction may be turned when an action to turn the page is detected. Of course, other actions may be used.
The position and direction acquisition means acquires the terminal position and the imaging direction of the wearable terminal. For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal, and, when imaging is performed by the wearable terminal, the imaging direction can be acquired from its geomagnetic sensor or acceleration sensor. These may also be acquired from other sources.
The estimation means estimates the position of the prediction target based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged prediction target can be estimated.
The identification means may also identify the prediction target from the target position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information improves the accuracy of identifying that a building is Senso-ji, the reliability of the future prediction displayed for it also improves.
The guideline display means displays, on the display board of the wearable terminal, a guideline for imaging the prediction target as augmented reality. For example, a guideline such as a frame or crosshairs may be displayed. Having the image captured along the guideline makes image analysis easier.
The acquisition means may acquire only images captured along the guideline. By acquiring and analyzing only images captured along the guideline, the prediction target can be identified efficiently.
The selection acceptance means accepts selection of a selection target from among the prediction targets seen through the display board of the wearable terminal. For example, the selection may be accepted by gazing at a prediction target seen through the display board for a certain period of time, by touching it, or by placing a cursor on it. Such selections may be detected with, for example, a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
The future prediction display means may display, on the display board of the wearable terminal, the future prediction as augmented reality only for the selected selection target seen through the display board. Since the future prediction is displayed as augmented reality only for the selected target, the future prediction can be grasped precisely. If future predictions were displayed for all identified prediction targets, the display screen could become cluttered.

[Description of operation]
Next, the wearable terminal display method is described. The wearable terminal display method of the present invention is a method of displaying, on the display board of a wearable terminal, a calculated future prediction as augmented reality for a prediction target seen through the display board.
The wearable terminal display method includes an image acquisition step, an identification step, a calculation step, and a future prediction display step. Although not shown in the drawings, a determination step, a change step, a detection step, an action result display step, a position and direction acquisition step, an estimation step, a guideline display step, and a selection acceptance step may likewise be provided.
The image acquisition step acquires an image of a prediction target that has entered the field of view of the wearable terminal. An image captured by the camera of the wearable terminal may be acquired, or the image may come from a device other than the wearable terminal, as long as such an image can be obtained. The image may be a moving image or a still image. To display future predictions in real time, a real-time image is preferable.
The identification step analyzes the image to identify the prediction target. For example, it identifies whether the prediction target is a Kagoshima black pig, a Yubari melon, or the parcel of land at Kaigai 1-chome, Minato-ku, Tokyo. The prediction target can be identified from its color, shape, size, features, and the like. Of course, the prediction target is not limited to these examples. If identifying all the prediction targets in the image would take too long, only the prediction target at the center of the field of view of the wearable terminal may be identified; this greatly reduces the time required for identification. Machine learning may be used to improve the accuracy of the image analysis. For example, machine learning is performed using past images of the prediction target as teacher data.
The calculation step calculates a future prediction corresponding to the prediction target. A future prediction is, for example, the size to which a Kagoshima black pig will grow, the sugar content a Yubari melon will reach, or the price per tsubo at which the land at Kaigai 1-chome, Minato-ku, Tokyo is likely to sell. Of course, future predictions are not limited to these. The future prediction corresponding to the prediction target may be calculated by referring to a database in which future predictions are registered in advance. Alternatively, web content linked in advance to the prediction target may be accessed to calculate the future prediction. For example, the prediction can be calculated from web content by assigning a URL or the like that links the prediction target to the future prediction. The future prediction may also be calculated from web content found by searching for the prediction target on the Internet. For example, past information is sometimes posted on information sites, so the prediction can be calculated from an Internet search. In some cases, future predictions can also be calculated from social networking services (SNS) or word-of-mouth review sites.
The future prediction display step displays, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board. For example, as shown in FIG. 2, a future prediction drawn with broken lines is displayed as augmented reality for a prediction target drawn with solid lines seen through the display board. Here, for ease of understanding, solid lines represent real objects and broken lines represent augmented reality. By displaying the future prediction in augmented reality against the solid-line prediction target seen through the display board, the viewer can visually grasp what future prediction exists for the prediction target. The future prediction displayed as augmented reality may be displayed so as to overlap the prediction target seen through the display board; however, since this makes the prediction target hard to see, the display of the future prediction may be switchable ON/OFF.
The determination step determines whether the displayed future prediction has been viewed. Whether the future prediction has been viewed may be determined by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
The change step changes the future prediction to a viewed state when it is determined to have been viewed, and changes its degree of attention so that the future prediction will be viewed when it is determined not to have been viewed. In this way, it is possible to visually grasp which future predictions have and have not been viewed. For example, a future prediction may be marked as viewed by checking its check box or by stamping it. The degree of attention may be changed by changing the color or size of the future prediction, or by stamping it so that it stands out.
The detection step detects an action on the displayed future prediction. An action is, for example, a gesture, a hand movement, or a gaze movement. An action on the future prediction can be detected by acquiring the image being viewed and analyzing it, or from sensor information of the wearable terminal, sensor information from sensors worn by the viewer, and the like, for example a sensor that detects the line of sight, a motion sensor, or an acceleration sensor.
The action result display step displays, on the display board of the wearable terminal, the result corresponding to the action as augmented reality for the prediction target seen through the display board. For example, the display of the future prediction may be erased when an action to erase it is detected; a link attached to the future prediction may be opened when an action to open it is detected; and a page of the future prediction may be turned when an action to turn the page is detected. Of course, other actions may be used.
The position and direction acquisition step acquires the terminal position and the imaging direction of the wearable terminal. For example, the terminal position can be acquired from the GPS (Global Positioning System) of the wearable terminal, and, when imaging is performed by the wearable terminal, the imaging direction can be acquired from its geomagnetic sensor or acceleration sensor. These may also be acquired from other sources.
The estimation step estimates the position of the prediction target based on the terminal position and the imaging direction. If the terminal position and the imaging direction are known, the position of the imaged prediction target can be estimated.
The identification step may also identify the prediction target from the target position together with the image analysis. Identification accuracy can be improved by using position information. For example, if position information improves the accuracy of identifying that a building is Senso-ji, the reliability of the future prediction displayed for it also improves.
The guideline display step displays, on the display board of the wearable terminal, a guideline for imaging the prediction target as augmented reality. For example, a guideline such as a frame or crosshairs may be displayed. Having the image captured along the guideline makes image analysis easier.
The acquisition step may acquire only images captured along the guideline. By acquiring and analyzing only images captured along the guideline, the prediction target can be identified efficiently.
 選択受付ステップは、ウェアラブル端末の表示板を透過して見える予測対象に対して、選択対象の選択を受け付ける。例えば、ウェアラブル端末の表示板を透過して見える予測対象を一定時間見ることで選択対象の選択を受け付けてもよい。例えば、ウェアラブル端末の表示板を透過して見える予測対象にタッチして選択対象の選択を受け付けてもよい。例えば、ウェアラブル端末の表示板を透過して見える予測対象にカーソルを合わせることで選択対象の選択を受け付けてもよい。例えば、視線を検知するセンサ、モーションセンサ、加速度センサなど。 The selection receiving step receives selection of a selection target for the prediction target viewed through the display board of the wearable terminal. For example, the selection target selection may be accepted by looking at the prediction target seen through the display plate of the wearable terminal for a certain period of time. For example, the selection of the selection target may be received by touching the prediction target which is seen through the display plate of the wearable terminal. For example, the selection of the selection target may be received by positioning the cursor on the prediction target that is seen through the display plate of the wearable terminal. For example, a sensor that detects a sight line, a motion sensor, an acceleration sensor, and the like.
 The future prediction display step may also display the future prediction as augmented reality on the display board of the wearable terminal only for the selection target seen through the display board. Because the future prediction is displayed only for the selected target, the user can pinpoint the future prediction of interest. Displaying future predictions for every identified prediction target can clutter the screen of the display board.
 The means and functions described above are realized by a computer (including a CPU, an information processing device, and various terminals) reading and executing a predetermined program. The program may be, for example, an application installed on a computer, may be provided in SaaS (Software as a Service) form from a computer via a network, or may be provided in a form recorded on a computer-readable recording medium such as a flexible disk, a CD (CD-ROM, etc.), or a DVD (DVD-ROM, DVD-RAM, etc.). In that case, the computer reads the program from the recording medium, transfers it to an internal or external storage device, stores it, and executes it. Alternatively, the program may be recorded in advance in a storage device (recording medium) such as a magnetic disk, an optical disk, or a magneto-optical disk, and provided from that storage device to the computer via a communication line.
 Specific algorithms for the machine learning described above may include the nearest neighbor method, naive Bayes, decision trees, support vector machines, and reinforcement learning. Deep learning, in which a neural network generates the features for learning by itself, may also be used.
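 As a hedged sketch of swapping between the classical learners named above, one could use scikit-learn (an assumption; the specification names no library), with features and labels coming from the image-analysis step:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "nearest_neighbor": KNeighborsClassifier(n_neighbors=3),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(max_depth=8),
    "svm": SVC(kernel="rbf", probability=True),
}

def train(name, features, labels):
    """Fit one of the named classical learners on extracted image features."""
    model = CLASSIFIERS[name]
    model.fit(features, labels)
    return model
```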
 Although embodiments of the present invention have been described above, the present invention is not limited to these embodiments. The effects described in the embodiments of the present invention merely list the most preferable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments.

Claims (13)

  1.  A wearable terminal display system for displaying a future prediction of a prediction target on a display board of a wearable terminal, comprising:
     image acquisition means for acquiring an image of a prediction target within the field of view of the wearable terminal;
     identification means for analyzing the image to identify the prediction target;
     calculation means for calculating a future prediction of the prediction target; and
     future prediction display means for displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
  2.  The wearable terminal display system according to claim 1, wherein the identification means identifies only a prediction target in the center of the field of view of the wearable terminal.
  3.  The wearable terminal display system according to claim 1, wherein the calculation means calculates the future prediction of the prediction target with reference to a database in which future predictions are registered in advance.
  4.  The wearable terminal display system according to claim 1, wherein the calculation means accesses Web content linked in advance to the prediction target to calculate the future prediction.
  5.  The wearable terminal display system according to claim 1, wherein the calculation means searches the Internet for the prediction target and calculates the future prediction from the retrieved Web content.
  6.  The wearable terminal display system according to claim 1, further comprising:
     determination means for determining whether the displayed future prediction has been viewed; and
     change means for changing the future prediction to a viewed state when it is determined to have been viewed.
  7.  The wearable terminal display system according to claim 1, further comprising:
     determination means for determining whether the displayed future prediction has been viewed; and
     change means for changing the degree of attention so that the future prediction is viewed when it is determined not to have been viewed.
  8.  The wearable terminal display system according to claim 1, further comprising:
     detection means for detecting an action on the displayed future prediction; and
     action result display means for displaying, on the display board of the wearable terminal, a result corresponding to the action as augmented reality for the prediction target seen through the display board.
  9.  The wearable terminal display system according to claim 1, further comprising:
     position/direction acquisition means for acquiring a terminal position and an imaging direction of the wearable terminal; and
     estimation means for estimating a target position of the prediction target based on the terminal position and the imaging direction,
     wherein the identification means identifies the prediction target from the target position and the image analysis.
  10.  The wearable terminal display system according to claim 1, further comprising guideline display means for displaying, on the display board of the wearable terminal, a guideline for imaging the prediction target as augmented reality,
     wherein the image acquisition means acquires the image captured along the guideline.
  11.  The wearable terminal display system according to claim 1, further comprising selection acceptance means for accepting selection of a selection target from among the prediction targets seen through the display board of the wearable terminal,
     wherein the future prediction display means displays, on the display board of the wearable terminal, the future prediction as augmented reality only for the selection target seen through the display board.
  12.  A wearable terminal display method for displaying a future prediction of a prediction target on a display board of a wearable terminal, comprising:
     an image acquisition step of acquiring an image of a prediction target within the field of view of the wearable terminal;
     an identification step of analyzing the image to identify the prediction target;
     a calculation step of calculating a future prediction of the prediction target; and
     a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
  13.  A program for causing a computer to execute:
     an image acquisition step of acquiring an image of a prediction target within the field of view of the wearable terminal;
     an identification step of analyzing the image to identify the prediction target;
     a calculation step of calculating a future prediction of the prediction target; and
     a future prediction display step of displaying, on the display board of the wearable terminal, the future prediction as augmented reality for the prediction target seen through the display board.
PCT/JP2017/027351 2017-07-28 2017-07-28 Wearable terminal display system, wearable terminal display method and program WO2019021447A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/027351 WO2019021447A1 (en) 2017-07-28 2017-07-28 Wearable terminal display system, wearable terminal display method and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/027351 WO2019021447A1 (en) 2017-07-28 2017-07-28 Wearable terminal display system, wearable terminal display method and program

Publications (1)

Publication Number Publication Date
WO2019021447A1 true WO2019021447A1 (en) 2019-01-31

Family

ID=65041129

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/027351 WO2019021447A1 (en) 2017-07-28 2017-07-28 Wearable terminal display system, wearable terminal display method and program

Country Status (1)

Country Link
WO (1) WO2019021447A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002149478A (en) * 2000-08-29 2002-05-24 Fujitsu Ltd Method for automatically displaying update information and device for the same, and medium and program
JP2005176345A (en) * 2003-11-20 2005-06-30 Matsushita Electric Ind Co Ltd Device and method for cooperation control and service cooperation system
JP2007018456A (en) * 2005-07-11 2007-01-25 Nikon Corp Information display device and information display method
JP2007178124A (en) * 2005-12-26 2007-07-12 Aisin Aw Co Ltd Navigation system
WO2011126134A1 (en) * 2010-04-09 2011-10-13 サイバーアイ・エンタテインメント株式会社 Server system for real-time moving image collection, recognition, classification, processing, and delivery
JP2014531662A (en) * 2011-09-19 2014-11-27 アイサイト モバイル テクノロジーズ リミテッド Touch-free interface for augmented reality systems


Similar Documents

Publication Publication Date Title
US20200193487A1 (en) System and method to measure effectiveness and consumption of editorial content
CN110704684B (en) Video searching method and device, terminal and storage medium
CN107690657B (en) Trade company is found according to image
US20180247361A1 (en) Information processing apparatus, information processing method, wearable terminal, and program
JP6681342B2 (en) Behavioral event measurement system and related method
US9024972B1 (en) Augmented reality computing with inertial sensors
US20200042516A1 (en) Automated sequential site navigation
JP6267841B1 (en) Wearable terminal display system, wearable terminal display method and program
US20190102952A1 (en) Identifying augmented reality visuals influencing user behavior in virtual-commerce environments
KR101925701B1 (en) Determination of attention towards stimuli based on gaze information
JP2010061218A (en) Web advertising effect measurement device, web advertising effect measurement method, and program
US20140330814A1 (en) Method, client of retrieving information and computer storage medium
US9619707B2 (en) Gaze position estimation system, control method for gaze position estimation system, gaze position estimation device, control method for gaze position estimation device, program, and information storage medium
WO2014176938A1 (en) Method and apparatus of retrieving information
JP6887198B2 (en) Wearable device display system, wearable device display method and program
JP2017204134A (en) Attribute estimation device, attribute estimation method, and program
WO2018198320A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2019021446A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2019021447A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2018216221A1 (en) Wearable terminal display system, wearable terminal display method and program
WO2018216220A1 (en) Wearable terminal display system, wearable terminal display method and program
CN114972500A (en) Checking method, marking method, system, device, terminal, equipment and medium
WO2019003359A1 (en) Wearable terminal display system, wearable terminal display method, and program
JP6762470B2 (en) Wearable device display system, wearable device display method and program
JP6343412B1 (en) Map-linked sensor information display system, map-linked sensor information display method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17918995

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17918995

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP