WO2020202345A1 - Assistance method and assistance system - Google Patents

Assistance method and assistance system

Info

Publication number
WO2020202345A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
functional component
information terminal
augmented reality
camera
Application number
PCT/JP2019/014252
Other languages
French (fr)
Japanese (ja)
Inventor
山岡 大祐
田中 一彦
祐 瀧口
瞳 濱村
Original Assignee
本田技研工業株式会社 (Honda Motor Co., Ltd.)
Application filed by 本田技研工業株式会社 (Honda Motor Co., Ltd.)
Priority to JP2021511720A (JP7117454B2)
Priority to PCT/JP2019/014252
Priority to CN201980091159.XA
Publication of WO2020202345A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services


Abstract

This assistance method for assisting in understanding a function using an information terminal having a camera and a display comprises: a first display step of displaying a captured image obtained by the camera on the display; a specifying step of specifying a functional component of a vehicle included in the captured image displayed on the display in the first display step; and a second display step in which, when a user's intention to understand the function related to the functional component specified in the specifying step is detected, an augmented reality image showing the operating state when the functional component is actually operated is superimposed on the captured image obtained by the camera and displayed on the display.

Description

Support method and support system
The present invention relates to a support method and a support system for supporting the grasping of functions.
In recent years, vehicles have been provided with various functions, and users need to be able to grasp them. Patent Document 1 discloses that, in a mobile wireless communication terminal such as a smartphone, a captured image is displayed on a display, guidance (names) for components included in the captured image is superimposed on the display, and, when the superimposed guidance for a component is pressed, an operation manual for that component is displayed on the display.
Patent Document 1: Japanese Unexamined Patent Publication No. 2014-215845
However, as in Patent Document 1, merely displaying component guidance and an operation manual on the display makes it difficult for the user to easily grasp the vehicle functions that are exercised when the component is operated, for example what the component actually does when it is operated.
Therefore, an object of the present invention is to allow the user to grasp functions easily and intuitively.
A support method as one aspect of the present invention is a support method for supporting the grasping of functions using an information terminal having a camera and a display, and includes: a first display step of displaying a captured image obtained by the camera on the display; a specifying step of specifying a functional component included in the captured image displayed on the display in the first display step; and a second display step of, when a user's intention to grasp a function related to the functional component specified in the specifying step is detected, superimposing an augmented reality image showing an operating state when the functional component is actually operated on the captured image obtained by the camera and displaying it on the display.
According to the present invention, the user can be shown how a function is actually exercised, so the user can grasp the function more easily and intuitively than before.
The accompanying drawings are included in and constitute a part of the specification, illustrate an embodiment of the present invention, and are used together with the description to explain the principles of the invention.
FIG. 1 is a block diagram showing the configuration of the support system.
FIG. 2 is a sequence diagram showing the processing performed between the information terminal and the server device.
FIG. 3 is a flowchart showing the processing performed by the processing unit of the information terminal.
FIG. 4 is a diagram showing a user photographing the steering wheel inside the vehicle with the camera of the information terminal.
FIG. 5 is a diagram showing an example in which a captured image of the steering wheel is displayed on the display of the information terminal.
FIG. 6 is a diagram showing a user grasping a vehicle function through an information terminal on which an augmented reality image is displayed.
FIG. 7 is a diagram showing an information terminal on which a captured image and an augmented reality image are displayed.
FIG. 8 is a diagram showing an information terminal on which a captured image and an augmented reality image are displayed.
Hereinafter, an embodiment will be described in detail with reference to the accompanying drawings. Note that the following embodiment does not limit the invention according to the claims, and not all combinations of the features described in the embodiment are essential to the invention. Two or more of the features described in the embodiment may be combined arbitrarily. The same or similar configurations are given the same reference numerals, and duplicate descriptions are omitted.
<First Embodiment>
A first embodiment of the present invention will be described. FIG. 1 is a block diagram showing the configuration of the support system 100 of the present embodiment. The support system of the present embodiment is a system for assisting a user in grasping the functions of a vehicle, and may include an information terminal 10, a server device 20 (cloud), and a network NTW. Examples of the vehicle include four-wheeled vehicles and saddle-riding vehicles (motorcycles and three-wheeled vehicles); in the present embodiment, a four-wheeled vehicle is described as an example. As the information terminal 10, for example, a smartphone or a tablet terminal may be used; in the present embodiment, an example in which a tablet terminal is used as the information terminal 10 will be described. Smartphones and tablet terminals are both mobile terminals having various functions besides the call function, but they differ in display size; in general, a tablet terminal has a larger display than a smartphone.
First, the configuration of the information terminal 10 will be described. The information terminal 10 may include, for example, a processing unit 11, a storage unit 12, a camera 13, a display 14, a position detection sensor 15, a posture detection sensor 16, and a communication unit 17. The parts of the information terminal 10 are connected so as to be able to communicate with one another via a system bus 18.
The processing unit 11 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface to external devices, and the like. The storage unit 12 stores programs executed by the processor, data used by the processor for processing, and so on; the processing unit 11 can read a program stored in the storage unit 12 into a storage device such as a memory and execute it. In the present embodiment, the storage unit 12 stores an application program (support program) for assisting the user in grasping the functions of the vehicle, and the processing unit 11 reads this support program into a storage device such as a memory and executes it.
The camera 13 has a lens and an image sensor, and captures a subject to acquire a captured image. The camera 13 may be provided, for example, on the outer surface opposite to the outer surface on which the display 14 is provided. The display 14 notifies the user of information by displaying images. In the present embodiment, the display 14 can display the captured image acquired by the camera 13 in real time. The display 14 of the present embodiment includes, for example, a touch-panel LCD (Liquid Crystal Display) and has a function of receiving information input from the user in addition to the function of displaying images. However, the display 14 is not limited to this; it may have only the image display function, and an input unit (for example, a keyboard or a mouse) may be provided independently of the display 14.
The position detection sensor 15 detects the position and orientation of the information terminal 10. As the position detection sensor 15, for example, a GPS sensor that receives signals from GPS satellites to acquire the current position of the information terminal 10, or an orientation sensor that detects, based on geomagnetism or the like, the direction in which the camera 13 of the information terminal 10 is pointed, can be used. In the present embodiment, the "position of the information terminal 10" includes the orientation of the information terminal 10 in addition to its position. The posture detection sensor 16 detects the posture of the information terminal 10; for example, an acceleration sensor or a gyro sensor can be used.
The communication unit 17 is communicably connected to the server device 20 via the network NTW. Specifically, the communication unit 17 functions both as a receiving unit that receives information from the server device 20 via the network NTW and as a transmitting unit that transmits information to the server device 20 via the network NTW. In the present embodiment, the communication unit 17 transmits information (operation information) indicating the type of functional component operated by the user on the display 14 and the mode of that operation to the server device 20. The communication unit 17 can also receive, from the server device 20, data of an augmented reality (AR) image showing the operating state of the vehicle when a functional component of the vehicle is actually operated.
As a specific configuration of the processing unit 11, for example, a first acquisition unit 11a, a second acquisition unit 11b, a specifying unit 11c, a detection unit 11d, and a display control unit 11e may be provided. The first acquisition unit 11a acquires the data of the captured image obtained by the camera 13. The second acquisition unit 11b acquires augmented reality image data from the server device 20 via the communication unit 17. The specifying unit 11c analyzes the captured image by performing image processing such as pattern matching, and specifies the functional components of the vehicle included in the captured image displayed on the display 14. The detection unit 11d detects the user's intention to grasp a function related to a functional component specified by the specifying unit 11c. In the present embodiment, the detection unit 11d detects, as that intention, that the functional component specified by the specifying unit 11c has been operated by the user on the display 14; however, a method of detecting the user's voice or line of sight, or the operation of a button provided on the information terminal 10, may be used instead. The detection unit 11d also transmits, via the communication unit 17, information indicating the type of functional component for which an operation (the user's intention to grasp the function) was detected and the mode of that operation to the server device 20. The display control unit 11e displays the captured image acquired by the first acquisition unit 11a on the display 14. When augmented reality image data has been acquired by the second acquisition unit 11b, the display control unit 11e superimposes the augmented reality image on the captured image and displays it on the display 14, based on the position and posture of the information terminal 10 detected by the position detection sensor 15 and the posture detection sensor 16, respectively.
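For concreteness, the division of labor among units 11a to 11e can be pictured as follows. This is only an illustrative sketch in Python; the patent defines functional blocks, not code, so every name and type below is a hypothetical stand-in.

```python
# Illustrative sketch only; not the patent's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationInfo:
    component: str   # type of the operated functional component
    mode: str        # how it was operated (e.g. "tap")

@dataclass
class ProcessingUnit11:
    """Groups the roles of units 11a-11e of the information terminal 10."""
    camera: object   # camera 13
    comm: object     # communication unit 17
    display: object  # display 14 (touch panel)

    def acquire_frame(self):
        # 11a: latest captured image from the camera 13
        return self.camera.read()

    def acquire_ar_clip(self, op: OperationInfo):
        # 11b: AR image data fetched from the server device 20
        return self.comm.request_ar_clip(op)

    def specify_component(self, frame) -> Optional[str]:
        # 11c: image analysis such as pattern matching (see the S13 sketch)
        ...

    def detect_intent(self, touch) -> Optional[OperationInfo]:
        # 11d: a touch on a specified component counts as the user's intent
        ...

    def render(self, frame, ar_clip=None):
        # 11e: superimpose the AR clip (if any) on the frame and show it
        self.display.show(frame, overlay=ar_clip)
```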
Next, the configuration of the server device 20 will be described. The server device 20 may include a processing unit 21, a storage unit 22, and a communication unit 23. The processing unit 21 includes a processor typified by a CPU, a storage device such as a semiconductor memory, an interface to external devices, and the like. The storage unit 22 stores programs executed by the processor, data used by the processor for processing, and so on; the processing unit 21 can read a program stored in the storage unit 22 into a storage device such as a memory and execute it. The communication unit 23 is communicably connected to the information terminal 10 via the network NTW. Specifically, the communication unit 23 functions both as a receiving unit that receives information from the information terminal 10 via the network NTW and as a transmitting unit that transmits information to the information terminal 10 via the network NTW.
In the present embodiment, the storage unit 22 stores, for each of the plural types of functional components provided in the vehicle, data of an augmented reality image showing the operating state (function) of the vehicle when that functional component is actually operated. The processing unit 21 receives, from the information terminal 10 via the communication unit 23, the information indicating the type of functional component detected by the processing unit 11 (detection unit 11d) of the information terminal 10 and the mode of its operation, and, based on the received information, transmits the augmented reality image data stored in the storage unit 22 in association with that functional component to the information terminal 10 via the communication unit 23.
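Concretely, the server-side selection amounts to a keyed lookup in the storage unit 22. A minimal sketch, assuming the clips are stored per (functional component, operation mode) pair as the text describes; the key names and file paths are invented for illustration.

```python
# Hypothetical mapping; the patent only says AR images are stored per
# functional component and selected by the received operation information.
AR_LIBRARY = {
    ("acc_switch", "tap"): "clips/acc_demo.mp4",
    ("lkas_switch", "tap"): "clips/lkas_demo.mp4",
    ("distance_switch", "tap"): "clips/acc_gap_demo.mp4",
}

def select_ar_clip(component: str, mode: str) -> str:
    """Step 107: pick the AR image matching the operation information."""
    return AR_LIBRARY[(component, mode)]
```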
[Processing sequence of the support system]
Next, the processing sequence of the support system 100 will be described. FIG. 2 is a sequence diagram showing the processing performed between the information terminal 10 and the server device 20.
When execution of the support program is started on the information terminal 10, the information terminal 10 starts shooting with the camera 13 (step 101) and displays the captured image obtained by the camera 13 on the display 14 in real time (step 102). The information terminal 10 also starts detecting its position with the position detection sensor 15 and its posture with the posture detection sensor 16 (step 103). The information terminal 10 then analyzes the captured image obtained by the camera 13 and specifies the functional components of the vehicle included in the captured image displayed on the display 14 (step 104). When the information terminal 10 detects that a specified functional component has been operated by the user on the display 14 (step 105), it transmits operation information indicating the operated functional component and the mode of the operation to the server device 20 (step 106).
The storage unit 22 of the server device 20 stores, for each of the plural functional components provided in the vehicle, data of an augmented reality image (for example, a moving image) showing the operating state of the vehicle when the functional component is actually operated. Having received the operation information from the information terminal 10, the server device 20 selects, from the plural augmented reality images stored in the storage unit 22, the augmented reality image data corresponding to the operation information (functional component and operation mode) (step 107), and transmits the selected augmented reality image data to the information terminal 10 (step 108). The information terminal 10 superimposes the augmented reality image received from the server device 20 on the captured image obtained by the camera 13 and displays it on the display 14 (step 109). At this time, based on the position of the information terminal 10 detected by the position detection sensor 15 and the posture detected by the posture detection sensor 16, the information terminal 10 aligns the image captured by the camera 13 with the augmented reality image so that the augmented reality image moves in accordance with the movement of the information terminal 10.
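The step 105 to 109 round trip can be pictured as a simple request/response. The JSON wire format below is an assumption (the patent does not specify an encoding), and select_ar_clip is the hypothetical lookup sketched above.

```python
import json

def build_operation_info(component: str, mode: str) -> bytes:
    # step 106: terminal -> server, naming the operated component
    return json.dumps({"component": component, "mode": mode}).encode()

def handle_operation_info(payload: bytes) -> bytes:
    # steps 107-108: server selects the matching clip and returns its data
    req = json.loads(payload)
    path = select_ar_clip(req["component"], req["mode"])
    with open(path, "rb") as f:  # clip files are an assumed storage format
        return f.read()
```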
[Processing in the information terminal]
Next, the processing performed by the information terminal 10 when the support program is executed will be described. FIG. 3 is a flowchart showing the processing performed by the processing unit 11 of the information terminal 10.
A vehicle is provided with various functions, such as functions used while driving, functions for improving comfort in the vehicle, and functions for improving safety, and functional components for exercising these functions can be installed in the vehicle (in the vehicle interior). Functions used while driving include, for example, the turn signals (blinkers), wipers, parking brake, and transmission. Functions for improving comfort include, for example, the air conditioner, seat heaters, and audio system. Functions for improving safety include adaptive cruise control with inter-vehicle distance control (hereinafter ACC; Adaptive Cruise Control) and a lane keeping assistant system (hereinafter LKAS; Lane Keeping Assistant System). The support system 100 of the present embodiment may be configured to help the user grasp any of the various vehicle functions described above, but an example of supporting the grasping of safety functions such as ACC and LKAS is described below. Here, it is assumed that the functional components (switches) for exercising the ACC and LKAS functions are provided on the steering wheel.
In S11, the processing unit 11 (first acquisition unit 11a) causes the camera 13 to start shooting and acquires the captured image from the camera 13. In S12, the processing unit 11 (display control unit 11e) sequentially displays the captured images acquired from the camera 13 on the display 14. For example, FIG. 4 shows a user photographing the steering wheel 2 inside the vehicle with the camera 13 of the information terminal 10. In this case, the captured image of the steering wheel 2 obtained by the camera 13 is sequentially displayed on the display 14 of the information terminal 10. FIG. 4 shows the vehicle interior as seen by a seated user; in addition to the steering wheel 2, the windshield 1, a front pillar 3, the dashboard 4, and the meter panel 5 are also shown.
In S13, the processing unit 11 (specifying unit 11c) specifies the functional components of the vehicle included in the captured image displayed on the display 14. For example, the processing unit 11 first recognizes the components included in the captured image by performing known image processing. The storage unit 12 of the information terminal 10 stores feature information for each of the plural functional components provided in the vehicle, and the processing unit 11 determines whether there is a functional component whose features closely match those of a recognized component (that is, whose degree of matching exceeds a predetermined value). In this way, the processing unit 11 can specify the functional components included in the captured image.
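As a sketch of the S13 matching test: extracted features are compared against the stored feature information, and a component is specified only when the degree of matching exceeds a threshold. The set-overlap metric and the 0.8 value below are placeholders for the patent's unspecified "predetermined value".

```python
MATCH_THRESHOLD = 0.8  # stand-in for the "predetermined value"

def similarity(a, b) -> float:
    # Placeholder metric; a real system would compare image descriptors.
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def specify_component(recognized_features, feature_db: dict):
    """S13: return the id of the best-matching functional component, or None."""
    best_id, best_score = None, 0.0
    for component_id, stored in feature_db.items():
        score = similarity(recognized_features, stored)
        if score > best_score:
            best_id, best_score = component_id, score
    return best_id if best_score > MATCH_THRESHOLD else None
```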
In S14, the processing unit 11 determines whether a functional component has been specified in the captured image displayed on the display 14. If a functional component has been specified, the process proceeds to S15; if not, the process returns to S12.
In S15, the processing unit 11 superimposes an augmented reality image indicating the name of the functional component specified in S13 on the captured image obtained by the camera 13 and displays it on the display 14. At this time, based on the position and posture information of the information terminal 10 detected by the position detection sensor 15 and the posture detection sensor 16, the processing unit 11 displays the augmented reality image indicating the name of the functional component on the display 14 so that it is aligned with the position of the functional component in the captured image. For example, FIG. 5 shows an example in which the captured image of the steering wheel 2 is displayed on the display 14 of the information terminal 10. In the example shown in FIG. 5, an ACC switch 2a, an LKAS switch 2b, a cancel switch 2c, and an inter-vehicle distance setting switch 2d are specified in the captured image of the steering wheel 2, and augmented reality images 31 to 34 indicating their names are superimposed on the captured image of the steering wheel and displayed on the display 14.
In the present embodiment, the augmented reality images indicating the names of the plural functional components provided in the vehicle are stored in the storage unit 12; however, they may instead be stored in the storage unit 22 of the server device 20. In that case, the processing unit 11 transmits the identification information of the functional component specified in S13 to the server device 20 via the communication unit 17, and can receive the augmented reality image data indicating the name of that functional component from the server device 20 via the communication unit 17.
In S16, the processing unit 11 determines whether the functional component specified in S13 has been operated by the user on the display 14. In the present embodiment, since the touch-panel display 14 is used, the processing unit 11 determines whether the functional component has been operated on the display with the user's finger or the like. However, the determination is not limited to this; for example, when a non-touch-panel display is used, whether the functional component has been operated on the display may be determined via an input unit such as a mouse. If the user has operated the functional component on the display 14, the process proceeds to S17; if not, the process returns to S12.
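The S16 check reduces to hit-testing the touch point against the on-screen regions of the components specified in S13. A sketch, assuming the terminal tracks a bounding box per specified component (the patent does not describe this bookkeeping).

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # left edge of the on-screen bounding box, in pixels
    y: int  # top edge
    w: int  # width
    h: int  # height

def operated_component(tx: int, ty: int, boxes: dict):
    """S16: return the id of the component under the touch point, or None."""
    for component_id, b in boxes.items():
        if b.x <= tx <= b.x + b.w and b.y <= ty <= b.y + b.h:
            return component_id
    return None
```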
In S17, the processing unit 11 transmits information (operation information) indicating the type of functional component operated by the user on the display 14 and the mode of the operation to the server device 20 via the communication unit 17. In S18, the processing unit 11 (second acquisition unit 11b) receives, from the server device 20 via the communication unit 17, the augmented reality image corresponding to the operation information, selected from the plural augmented reality images stored in the storage unit 22 of the server device 20.
Here, the augmented reality image acquired in S18 is an augmented reality image (for example, a moving image) that shows the operating state of the vehicle when the functional component is actually operated, in order to explain the function of the vehicle; it may include, for example, at least one of a change in the state of the vehicle and a change in the surrounding environment of the vehicle when the functional component is actually operated. The "state of the vehicle" included in the augmented reality image is, for example, a virtual representation of the operation or function performed by the vehicle when the functional component is actually operated, and may include a visualization of invisible information that appears when the functional component is actually operated. Examples of invisible information include radio waves, such as millimeter-wave radar or laser radar, emitted from the vehicle when the ACC switch 2a is operated. The "surrounding environment of the vehicle" included in the augmented reality image is, for example, a virtual representation of the road, lanes, a preceding vehicle, and the like that change around the vehicle when the functional component is actually operated.
In S19, the processing unit 11 (display control unit 11e) superimposes the augmented reality image acquired in S18 on the captured image obtained by the camera 13 and displays it on the display 14. At this time, based on the position and posture information of the information terminal 10 detected by the position detection sensor 15 and the posture detection sensor 16, the processing unit 11 aligns the image captured by the camera 13 with the augmented reality image obtained in S18 so that the augmented reality image moves in accordance with the movement of the information terminal 10.
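One way to picture the S19 alignment: the overlay is shifted on screen so that it cancels out changes in the terminal's pose. The linear yaw/pitch-to-pixel model below is an assumption; the patent only states that the AR image moves in accordance with the terminal's detected position and posture.

```python
PIXELS_PER_DEGREE = 12.0  # assumed field-of-view scaling factor

def align_overlay(anchor_xy, pose_at_anchor, pose_now):
    """S19: keep the AR overlay visually fixed to the photographed scene.

    anchor_xy      -- pixel position where the overlay was first placed
    pose_at_anchor -- (yaw, pitch) in degrees when it was placed
    pose_now       -- current (yaw, pitch) from sensors 15 and 16
    """
    dx = (pose_at_anchor[0] - pose_now[0]) * PIXELS_PER_DEGREE
    dy = (pose_now[1] - pose_at_anchor[1]) * PIXELS_PER_DEGREE
    return (anchor_xy[0] + dx, anchor_xy[1] + dy)
```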
FIG. 6 shows the augmented reality image acquired in S18 displayed on the display 14 of the information terminal 10, with the user grasping a vehicle function through the information terminal 10. Specifically, the user photographs the outside of the vehicle through the windshield 1 with the camera 13 of the information terminal 10, and the augmented reality image acquired when a functional component was operated on the display 14 is displayed on the display 14, superimposed on the image captured by the camera 13.
For example, FIG. 7 shows an example in which the augmented reality image obtained in S18 when the ACC switch 2a is operated on the display 14 is superimposed on the image captured by the camera 13 and displayed on the display 14 of the information terminal 10. In the example shown in FIG. 7, a road 41, a lane 42, and a preceding vehicle 43 are displayed on the display 14 as augmented reality images. A depiction of radio waves 44 (for example, millimeter-wave radar) emitted from the front of the vehicle when ACC is active irradiating the preceding vehicle 43, together with an explanation 45 of the ACC function, is also displayed on the display 14 as an augmented reality image. The dashboard 4 displayed on the display 14 is the captured image taken by the camera 13. Such an augmented reality image is, for example, a moving image, and can be displayed on the display 14 so that its appearance changes according to the position, direction, and posture of the information terminal 10. When the inter-vehicle distance setting switch 2d is operated on the display 14, the distance to the preceding vehicle 43 displayed on the display 14 as an augmented reality image can be changed according to the inter-vehicle distance set by that operation.
FIG. 8 shows an example in which the augmented reality image obtained in S18 when the LKAS switch 2b is operated on the display 14 is superimposed on the image captured by the camera 13 and displayed on the display 14 of the information terminal 10. In the example shown in FIG. 8, a road 51 and a lane 52 are displayed on the display 14 as augmented reality images. The operation performed when LKAS is active, specifically a symbol 53 and an explanation 54 indicating that the vehicle has strayed from the lane 52, together with a symbol 55 and an explanation 56 indicating that operation of the steering wheel 2 will be assisted in that case, is displayed on the display 14 as an augmented reality image. The dashboard 4 displayed on the display 14 is the captured image taken by the camera 13. As in the example shown in FIG. 7, such an augmented reality image is, for example, a moving image, and can be displayed on the display 14 so that its appearance changes according to the position, direction, and posture of the information terminal 10.
 As described above, in the support system 100 of the present embodiment, when the user operates, on the display 14, a functional component included in the captured image shown on the display 14, an augmented reality image showing the operating state of the vehicle when that functional component is actually operated is superimposed on the captured image and displayed on the display 14. This allows the user to visually grasp how the vehicle functions when its functional components are operated. In other words, the support system 100 of the present embodiment makes it easy for the user to grasp the functions of the vehicle.
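 As a rough, non-authoritative sketch of this overall flow, the Python fragment below ties the steps together: show the camera frame, identify functional components in it, treat a tap inside a component's on-screen region as the user's intention to grasp its function, and overlay the augmented reality image acquired for that component (S18). Every name here (camera.capture, detector.identify, ar_store.fetch, and so on) is a hypothetical stand-in for the processing units of the terminal 10 and server device 20.

    from dataclasses import dataclass

    @dataclass
    class FunctionalComponent:
        component_id: str   # e.g. "ACC_SWITCH_2A" (illustrative identifier)
        bounds: tuple       # (x, y, w, h) region within the captured frame

    def support_loop(camera, display, detector, ar_store, touch_input):
        """Show camera frames and, when the user taps an identified
        functional component, superimpose its AR image on the frame."""
        while True:
            frame = camera.capture()                # first display step
            components = detector.identify(frame)   # identifying step
            display.show(frame, highlight=components)
            tap = touch_input.poll()                # (x, y) of a tap, or None
            if tap is None:
                continue
            for part in components:
                x, y, w, h = part.bounds
                if x <= tap[0] <= x + w and y <= tap[1] <= y + h:
                    # Second display step: overlay the AR image for this part.
                    ar_image = ar_store.fetch(part.component_id)
                    display.show(frame, overlay=ar_image)
                    break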
 <Other embodiments>
 In the above embodiment, the augmented reality image data is stored in the storage unit 22 of the server device 20, but the present invention is not limited to this; the augmented reality image data may instead be stored in the storage unit 12 of the information terminal 10. In this case, no data needs to be exchanged between the information terminal 10 and the server device 20, so the above support program can be executed even on an offline information terminal 10.
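 One plausible way to realize this variation is a local-first lookup: consult the terminal's own storage unit 12 first, and contact the server device 20 only as a fallback (or not at all when offline). The sketch below is hypothetical; the local_store and server interfaces are assumptions, not part of the embodiment.

    def fetch_ar_image(component_id, local_store, server=None):
        """Return AR image data for a functional component, preferring the
        terminal's own storage so the support program also works offline."""
        data = local_store.get(component_id)
        if data is not None:
            return data
        if server is not None:
            data = server.download_ar_image(component_id)
            local_store.put(component_id, data)  # cache for later offline use
            return data
        raise LookupError("no AR data for %s and the terminal is offline" % component_id)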
 Further, in the above embodiment, the support system 100 assists the user in grasping the functions of a vehicle, but the support system 100 is not limited to vehicle functions and can be applied to assist in grasping the functions of other objects. Any object whose state changes when a functional component is operated may be used, and the change in state may be electrical or mechanical.
 <Summary of Embodiment>
 1. The support method of the above embodiment is a support method for assisting in grasping a function by using an information terminal (e.g., 10) having a camera (e.g., 13) and a display (e.g., 14), the method comprising:
 a first display step of displaying a captured image obtained by the camera on the display;
 an identifying step of identifying a functional component (e.g., 2a to 2d) included in the captured image displayed on the display in the first display step; and
 a second display step of, when a user's intention to grasp a function related to the functional component identified in the identifying step is detected, displaying on the display an augmented reality image showing the operating state that results when the functional component is actually operated, superimposed on the captured image obtained by the camera.
 With this configuration, the user can visually and intuitively grasp how a function is exhibited by operating the functional component, without consulting a manual on the function or on the operation of the functional component. That is, the user can experience the function in simulation and can therefore grasp the function easily.
 2. In the support method of the above embodiment, in the second display step, at least one of the state of an object that exhibits the function when the functional component is actually operated and the surrounding environment of that object when the functional component is actually operated is displayed on the display as the augmented reality image.
 With this configuration, the function of the object can be grasped more visually, making it even easier for the user to grasp the function of the object.
 3. In the support method of the above embodiment, in the second display step, invisible information that arises from actual operation of the functional component is visualized and displayed on the display as the augmented reality image.
 With this configuration, the user can visually grasp by what means the function is exhibited, making it even easier to grasp the function.
 4. In the support method of the above embodiment, the invisible information includes radio waves emitted by actual operation of the functional component.
 With this configuration, visually representing the radio waves emitted to exhibit the function makes it even easier for the user to grasp the function.
 5. In the support method of the above embodiment, in the second display step, the user's operation of the functional component on the display is detected as the user's intention to grasp the function related to that functional component.
 With this configuration, the user's intention to grasp the function can be detected accurately.
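 A simple in-bounds test over the identified components is one way such detection could be realized; the sketch below assumes, purely for illustration, that each identified component carries its on-screen bounding box.

    def detect_intent(tap_x, tap_y, components):
        """Return the functional component whose on-screen region contains
        the tap, or None. A hit is treated as the user's intention to grasp
        that component's function."""
        for part in components:
            x, y, w, h = part.bounds
            if x <= tap_x <= x + w and y <= tap_y <= y + h:
                return part
        return None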
 6. In the support method of the above embodiment, the functional component is a component provided in a vehicle, and in the second display step, the augmented reality image showing the operating state of the vehicle when the functional component is actually operated is displayed on the display.
 With this configuration, the user can visually and intuitively grasp how the vehicle's functions are exhibited by operating the functional component, without consulting a manual on the vehicle's functions or on the operation of the functional component.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Therefore, to apprise the public of the scope of the present invention, the following claims are appended.
10: Information terminal, 11: Processing unit, 12: Storage unit, 13: Camera, 14: Display, 15: Position detection sensor, 16: Posture detection sensor, 17: Communication unit, 20: Server device, 21: Processing unit, 22: Storage unit, 23: Communication unit

Claims (7)

  1.  A support method for assisting in grasping a function by using an information terminal having a camera and a display, the method comprising:
     a first display step of displaying a captured image obtained by the camera on the display;
     an identifying step of identifying a functional component included in the captured image displayed on the display in the first display step; and
     a second display step of, when a user's intention to grasp a function related to the functional component identified in the identifying step is detected, displaying, on the display, an augmented reality image showing an operating state when the functional component is actually operated, superimposed on the captured image obtained by the camera.
  2.  The support method according to claim 1, wherein, in the second display step, at least one of a state of an object that exhibits the function when the functional component is actually operated and a surrounding environment of the object when the functional component is actually operated is displayed on the display as the augmented reality image.
  3.  The support method according to claim 1 or 2, wherein, in the second display step, invisible information that appears upon actual operation of the functional component is visualized and displayed on the display as the augmented reality image.
  4.  The support method according to claim 3, wherein the invisible information includes radio waves emitted by actual operation of the functional component.
  5.  The support method according to any one of claims 1 to 4, wherein, in the second display step, the user's operation of the functional component on the display is detected as the user's intention to grasp the function related to the functional component.
  6.  The support method according to any one of claims 1 to 5, wherein the functional component is a component provided in a vehicle, and in the second display step, the augmented reality image showing an operating state of the vehicle when the functional component is actually operated is displayed on the display.
  7.  A support system for assisting in grasping a function by using an information terminal having a camera and a display, wherein the information terminal comprises:
     first display means for displaying a captured image obtained by the camera on the display;
     identifying means for identifying a functional component included in the captured image displayed on the display by the first display means; and
     second display means for, when a user's intention to grasp a function related to the functional component identified by the identifying means is detected, displaying, on the display, an augmented reality image showing an operating state when the functional component is actually operated, superimposed on the captured image obtained by the camera.
PCT/JP2019/014252 2019-03-29 2019-03-29 Assistance method and assistance system WO2020202345A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2021511720A JP7117454B2 (en) 2019-03-29 2019-03-29 Support method and support system
PCT/JP2019/014252 WO2020202345A1 (en) 2019-03-29 2019-03-29 Assistance method and assistance system
CN201980091159.XA CN113396382A (en) 2019-03-29 2019-03-29 Support method and support system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/014252 WO2020202345A1 (en) 2019-03-29 2019-03-29 Assistance method and assistance system

Publications (1)

Publication Number Publication Date
WO2020202345A1 true WO2020202345A1 (en) 2020-10-08

Family

ID=72667270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/014252 WO2020202345A1 (en) 2019-03-29 2019-03-29 Assistance method and assistance system

Country Status (3)

Country Link
JP (1) JP7117454B2 (en)
CN (1) CN113396382A (en)
WO (1) WO2020202345A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015043180A (en) * 2013-08-26 2015-03-05 ブラザー工業株式会社 Image processing program
US20150061841A1 (en) * 2013-09-02 2015-03-05 Lg Electronics Inc. Mobile terminal and method of controlling the same
JP2015118556A (en) * 2013-12-18 2015-06-25 マイクロソフト コーポレーション Augmented reality overlay for control devices

Also Published As

Publication number Publication date
JPWO2020202345A1 (en) 2021-12-02
JP7117454B2 (en) 2022-08-12
CN113396382A (en) 2021-09-14

Similar Documents

Publication Publication Date Title
EP2826689B1 (en) Mobile terminal
US9855957B2 (en) Method for using a communication terminal in a motor vehicle while autopilot is activated and motor vehicle
US10843686B2 (en) Augmented reality (AR) visualization of advanced driver-assistance system
CN105966311B (en) Method for calibrating a camera, device for a vehicle and computer program product
US9881482B2 (en) Method and device for displaying information of a system
WO2021082483A1 (en) Method and apparatus for controlling vehicle
US9987927B2 (en) Method for operating a communication device for a motor vehicle during an autonomous drive mode, communication device as well as motor vehicle
JP2019109707A (en) Display control device, display control method and vehicle
JP2004061259A (en) System, method, and program for providing information
CN114764319A (en) Intelligent cabin multi-screen interaction system and method and readable storage medium
JP2018100008A (en) Vehicular display device
WO2020202345A1 (en) Assistance method and assistance system
WO2021029043A1 (en) Information provision system, information terminal, and information provision method
JP2009190675A (en) Operating device for vehicle
JP2022174351A (en) automatic parking assistance system
US10528310B2 (en) Content displaying method and electronic device
JP2014215327A (en) Information display apparatus, on-vehicle device and information display system
JP6586226B2 (en) Terminal device position estimation method, information display method, and terminal device position estimation device
JP6569249B2 (en) Inter-vehicle communication system
KR20100011704A (en) A method for displaying driving information of vehicles and an apparatus therefor
JP6424775B2 (en) Information display device
KR101856255B1 (en) Navigation display system
US20220129676A1 (en) Information providing method, non-transitory computer readable storage medium storing program, and information providing apparatus
JP2013217808A (en) On-vehicle apparatus
WO2021100708A1 (en) Terminal device, information processing method, and program for terminal device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19922728

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021511720

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19922728

Country of ref document: EP

Kind code of ref document: A1