WO2019065699A1 - Terminal device - Google Patents

Terminal device

Info

Publication number
WO2019065699A1
Authority
WO
WIPO (PCT)
Prior art keywords
driving vehicle
autonomous driving
unit
image
vehicle
Prior art date
Application number
PCT/JP2018/035601
Other languages
French (fr)
Japanese (ja)
Inventor
晴彦 高木
吉洋 安原
昌嗣 左近
真武 下平
里紗 夏川
Original Assignee
パイオニア株式会社 (Pioneer Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Publication of WO2019065699A1 publication Critical patent/WO2019065699A1/en

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/09: Arrangements for giving variable traffic instructions
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q: SELECTING
    • H04Q 9/00: Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • the present invention relates to a terminal device.
  • for example, when calling a taxi from home or while out, systems have been proposed that allow vehicle dispatch to be arranged from a terminal device owned by the user, such as a smartphone (see, for example, Patent Document 1).
  • Patent Document 1 also describes an autonomous driving vehicle; in the case of an autonomous driving vehicle, however, no human driver is on board, so when the vehicle arrives at the place designated at the time of dispatch or the like, it may stop at a position the user did not intend or at a position unsuitable for the intended use.
  • one of the problems to be solved by the present invention is to stop a vehicle capable of autonomous travel, as described above, at an appropriate position.
  • the invention according to claim 1 comprises: an acquisition unit that acquires recognition information recognized by an external recognition unit installed in a stopped autonomous driving vehicle; a display unit that displays an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired by the acquisition unit; an operation detection unit that detects an operation, performed on the displayed operation image, for moving the stop position; and a transmission unit that transmits information relating to the movement to the autonomous driving vehicle based on the detected operation.
  • the invention according to claim 7 is a method of operating an autonomous driving vehicle, executed by a terminal device for operating a stopped autonomous driving vehicle, comprising: an acquisition step of acquiring recognition information recognized by an external recognition unit installed in the vehicle; a display step of displaying, on a display unit, an operation image for moving the vehicle's stop position based on the acquired recognition information; and a transmission step of transmitting information relating to the movement to the vehicle based on an operation, detected by an operation detection unit, for moving the stop position with respect to the displayed operation image.
  • the invention according to claim 8 is characterized in that the method of operating an autonomous driving vehicle according to claim 7 is executed by a computer.
  • FIG. 1 is a schematic configuration diagram of a system including a terminal device according to the first embodiment of the present invention.
  • FIG. 9 is a functional configuration diagram of the smart glass shown in FIG. 8; FIG. 10 is an explanatory diagram of the stop position moving operation of the autonomous driving vehicle performed by a user via that smart glass; and FIG. 11 is an explanatory diagram of another method for that operation.
  • the acquisition unit acquires recognition information recognized by the external recognition unit installed in the autonomous driving vehicle, and the display unit displays an operation image for moving the stop position of the autonomous driving vehicle based on the acquired recognition information. The operation detection unit detects an operation, performed on the displayed operation image, for moving the stop position, and the transmission unit transmits information relating to the movement to the autonomous driving vehicle based on the detected operation. In this way, the user who summoned the autonomous driving vehicle can move the stop position of the stopped vehicle from their own terminal device, making it possible to stop the vehicle at an appropriate position according to the user's intention.
  • as the operation image, the display unit may display an image of the autonomous driving vehicle seen from above. This allows the vehicle to be moved as if manipulated from overhead, enabling an intuitive operation.
  • as the operation image, the display unit may also display an image of the autonomous driving vehicle seen from the side, allowing the vehicle to be moved left or right as seen from the side, again intuitively.
  • when the acquisition unit acquires information indicating an obstacle, the display unit may display information indicating that an obstacle has been detected.
  • the display unit may display the operation image including the movable range of the autonomous driving vehicle, so that the user can see how far the vehicle can be moved while performing the moving operation.
  • the display unit may display the predicted position of the autonomous driving vehicle based on the operation detected by the operation detection unit. This lets the user see the state after the movement in advance, so the vehicle can be moved to a more suitable position.
  • in the corresponding method, the acquisition step acquires recognition information recognized by the external recognition unit installed in the autonomous driving vehicle; the display step displays, on the display unit, an operation image for moving the vehicle's stop position based on the recognition information acquired in the acquisition step; and the transmission step transmits information relating to the movement to the vehicle based on the detected operation.
  • the above-described method for operating an autonomous driving vehicle may be executed by a computer.
  • in this way, the stop position of the autonomous driving vehicle can be moved by a computer via the terminal device of the user who summoned the vehicle, and the vehicle can be stopped at an appropriate position according to the user's intention.
  • a terminal device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 7.
  • the smartphone 1 as the terminal device according to the present embodiment can communicate with the vehicle control device 2 of the autonomous driving vehicle C.
  • the smartphone 1 and the vehicle control device 2 may directly communicate with each other by near field communication or the like, or may communicate with each other via a public network or the like.
  • the functional configuration of the smartphone 1 is shown in FIG. 2.
  • the smartphone 1 includes a control unit 11, a communication unit 12, a storage unit 13, a display unit 14, and an operation unit 15.
  • the control unit 11 is configured by, for example, a CPU (Central Processing Unit) and performs overall control of the smartphone 1.
  • the control unit 11 generates an operation image (described later) based on camera images and the like acquired from the vehicle control device 2 of the autonomous driving vehicle C, and displays it on the display unit 14. Based on an operation performed on the operation unit 15, the control unit 11 also generates movement information for moving the stop position of the stopped autonomous driving vehicle C.
  • the communication unit 12 as an acquisition unit and a transmission unit is configured by a wireless communication circuit or the like, and transmits information related to movement generated by the control unit 11 to the vehicle control device 2 of the autonomous driving vehicle C.
  • the communication unit 12 also receives information such as a camera image acquired by the external world recognition unit 3 installed in the autonomous driving vehicle C.
  • the storage unit 13 is configured of a storage device such as a non-volatile semiconductor memory, and stores an operating system (OS) operated by the control unit 11 and programs and data such as applications.
  • the display unit 14 is configured of, for example, a liquid crystal display, and displays various operation screens such as an application. Further, in the present embodiment, an operation image, which will be described later, generated by the control unit 11 is displayed.
  • the operation unit 15, serving as the operation detection unit, is configured by, for example, a touch panel laid over the display unit 14 and push buttons, and receives operations for applications and the like. In this embodiment, the moving operation of the autonomous driving vehicle C is performed via the operation image displayed on the display unit 14; by detecting the touch operations made on it, the operation unit 15 functions as the operation detection unit.
  • the autonomous driving vehicle C includes a vehicle control device 2, an external world recognition unit 3, and a vehicle position detection unit 4.
  • the vehicle control device 2 causes the autonomous driving vehicle C to travel autonomously (automatic driving) based on the detection results of the external recognition unit 3 and the own vehicle position detection unit 4 and on the map information for automatic driving held by the vehicle control device 2. The vehicle control device 2 also communicates with the smartphone 1, receiving information on the movement of the autonomous driving vehicle C and transmitting the information acquired by the external recognition unit 3.
  • the external recognition unit 3 is installed in the autonomous driving vehicle C and includes cameras that photograph the outside of the vehicle, such as the areas ahead of and behind it, and sensors that recognize the surrounding environment, such as LiDAR (Light Detection And Ranging) and radar.
  • the own vehicle position detection unit 4 includes devices such as a GPS (Global Positioning System) receiver that detects the current position of the autonomous driving vehicle C, a gyro sensor that detects the vehicle's attitude (orientation and the like), a speed sensor that detects its speed, and an acceleration sensor that detects its acceleration.
  • the flowchart shown in FIG. 3 is executed by the control unit 11 of the smartphone 1. Also, the operation according to this flowchart may be configured as, for example, an application installed on the smartphone 1. In that case, the application functions as an autonomous driving vehicle operation program.
  • first, in step S1, the results acquired by the external recognition unit 3 (recognition information), such as images photographed by the cameras, are acquired from the autonomous driving vehicle C via the communication unit 12.
  • the recognition information acquired in this step includes, for example, images of the surroundings of the autonomous driving vehicle C photographed by the cameras described above, and detection information about obstacles around the vehicle detected by LiDAR or radar (including whether obstacles are present and their distances) and about surrounding buildings, roads, and the like.
  • in step S2, the control unit 11 generates an operation image and displays it on the display unit 14.
  • the operation image is an image, generated based on the recognition information acquired in step S1, for operating the autonomous driving vehicle C from the smartphone 1. An example of the operation image will be described with reference to FIG. 4.
  • the diagram shown in FIG. 4 is an example of the operation image.
  • in the operation image shown in FIG. 4, an image of the autonomous driving vehicle C seen from above (a bird's-eye view image) is displayed at its approximate center.
  • apart from the image of the autonomous driving vehicle C (which may be an illustration or the like), this bird's-eye view image may simply show the movable area M and the immovable area N, described later, by color coding or the like.
  • alternatively, when the autonomous driving vehicle C has cameras photographing, for example, the four directions of front, rear, left, and right, an image representing the vehicle's surroundings may be displayed by combining the bird's-eye view with the images captured by those cameras, or by combining the bird's-eye view with an image generated from the recognition information detected by LiDAR or radar. When the bird's-eye view image represents the vehicle's surroundings in this way, the image may be generated on the autonomous driving vehicle C and transmitted to the smartphone 1 as recognition information.
  • the area around the autonomous driving vehicle C is divided into a movable area M (movable range), in which the vehicle can be moved, and an immovable area N, in which movement of the vehicle is not possible.
  • the movable area M is in front of and behind the self-driving vehicle C.
  • the immovable area N is an area other than the movable area M.
  • in this embodiment, movement of the autonomous driving vehicle C via the smartphone 1 is limited to the vehicle's front-rear direction, so areas outside the movable directions are treated as immovable. Even in the front-rear direction, when an obstacle is detected from camera images or from LiDAR or radar results, the position of that obstacle is treated as part of the immovable area N.
  • furthermore, the operation range (distance) over which the smartphone 1 can move the autonomous driving vehicle C may be determined in advance, and the area beyond that range may be set as the immovable area N.
  • while the example shown in FIG. 4 is a bird's-eye view of the autonomous driving vehicle C, the vehicle may instead be shown from the side, as in FIG. 5, with the image of the vehicle (which may be an illustration or the like) and the movable and immovable areas indicated by color coding or the like. The bird's-eye view of FIG. 4 and the side view of FIG. 5 may also be switchable: the switch may be made by operating an icon or the like displayed on the display unit 14, or automatically according to the type of obstacle recognized around the vehicle by the external recognition unit 3 and the distance to it.
  • in step S3, it is determined whether the user has operated the operation image displayed in step S2; if so (YES), the process proceeds to step S4.
  • operations on the operation image include, for example, the user touching, via the touch panel included in the operation unit 15, the position on the image to which they want to move the autonomous driving vehicle C (also called the desired movement position), or swiping the vehicle portion of the image to the desired position.
  • the autonomous driving vehicle C may be moved to the desired movement position by displaying a button or the like indicating the moving direction on the operation image and pressing the button.
  • the desired movement position must be specified within the movable area M; if the user attempts to specify a point in the immovable area N, the movement operation may be rejected and, for example, a warning may be displayed.
  • the moving amount may be directly input, such as “1 m behind”.
  • This input is not limited to key input and may be voice input.
  • with direct input, the entered information is output to the vehicle control device 2 as the movement information; with voice input, for example, the microphone of the smartphone 1 functions as the operation detection unit.
  • in step S4, the movement information is generated; it is information relating to the movement of the autonomous driving vehicle C based on the operation performed on the operation image in step S3.
  • when the bird's-eye view image has been acquired from the autonomous driving vehicle C, for example, the coordinates of the desired movement position in that image may be used as the movement information.
  • when the bird's-eye view image is generated on the vehicle side, for example by the vehicle control device 2, the vehicle control device 2 can calculate the movement amount from the coordinate information transmitted as the movement information.
  • otherwise, since the distance to obstacles and the predetermined movable range are known, the movement amount may be calculated from the relationship between the distance to the obstacle, the maximum position to which movement is possible, and the desired movement position. For example, the movement amount can be calculated from the ratio between the coordinates of the maximum movable position and those of the desired movement position, and the calculated amount can be used as the movement information.
  • in step S5, the movement information generated in step S4 is transmitted to the vehicle control device 2 of the autonomous driving vehicle C.
  • the vehicle control device 2 controls the accelerator, the brake, and the like of the automatically driven vehicle C based on the received movement information to move the vehicle C to the desired movement position.
  • when the movement is complete, the vehicle control device 2 may notify the smartphone 1; the smartphone 1 may display the completion on the display unit 14 and may also store the final movement position.
  • as is clear from the above description, step S1 functions as the acquisition step, step S2 as the generation step, and step S5 as the transmission step.
  • according to this embodiment, in the smartphone 1, the communication unit 12 acquires the recognition information recognized by the external recognition unit 3 installed in the autonomous driving vehicle C; the control unit 11 generates, based on that information, an operation image for moving the vehicle's stop position; and the display unit 14 displays the operation image. An operation to move the stop position is then performed on the operation unit 15 based on the displayed image, and the communication unit 12 transmits movement information to the vehicle based on that operation.
  • in this way, the user who summoned the autonomous driving vehicle C can use the smartphone 1 to finely adjust the vehicle's stop position, making it possible to stop the vehicle at an appropriate position according to the user's intention.
  • the communication unit 12 also acquires images captured by cameras that are installed in the autonomous driving vehicle C and photograph its exterior. This allows the vehicle to be moved based on, for example, a bird's-eye view image generated from multiple camera images, and allows the movable range M of the vehicle to be determined from obstacles detected in those images.
  • the communication unit 12 further acquires detection information about obstacles present around the autonomous driving vehicle C and about surrounding buildings, roads, and the like, so the vehicle can be moved based on the detection results of sensors such as LiDAR installed in it.
  • the control unit 11 includes the movable range M of the autonomous driving vehicle C in the operation image: the movable area M and the immovable area N are displayed on the operation image to indicate the movable range to the user. In addition, an icon W or a message indicating that an obstacle has been detected may be displayed together with the movable area M and the immovable area N. That is, when information indicating an obstacle is acquired, the control unit 11 causes the display unit 14 to display the icon W indicating obstacle detection, so that the user can recognize that there is an obstacle around the vehicle.
  • the bird's-eye view image of FIG. 4 and the side view image of FIG. 5 have been described as representative examples of the operation image, but operations using images such as the following are also possible.
  • FIG. 7 is an example of the image of the front camera installed in the autonomous driving vehicle C.
  • an image captured by the front camera is displayed on the display unit 14.
  • if the user wants the autonomous driving vehicle C to move from its current stop position to position A in the upper part of FIG. 7, the user gives the instruction by touching the portion at A in the upper part of FIG. 7.
  • the control unit 11 transmits, as movement information, coordinate information of the position (A) designated by the user in the front camera image to the vehicle control device 2 through the communication unit 12.
  • the vehicle control device 2 calculates the distance to the designated position from the front camera image and moves the autonomous driving vehicle C to that position.
  • although the example of FIG. 7 has been described using the front camera image, the autonomous driving vehicle C may be moved in the same way using images captured in other directions. The operation may also be performed while the user is inside the autonomous driving vehicle C; when the stop position is moved from inside the vehicle, not only the smartphone 1 as the terminal device but also an on-board device mounted on the vehicle may be used.
  • next, a terminal device according to a second embodiment of the present invention will be described with reference to FIGS. 8 to 11.
  • the same parts as those of the first embodiment described above are designated by the same reference numerals and the description thereof will be omitted.
  • FIG. 8 shows an example in which the stop position of the automatically driven vehicle C is moved using a smart glass.
  • the smart glass 20 is a glasses-type wearable device, a terminal device worn on the user's head. As shown in FIG. 8, the smart glass 20 includes a camera 23, a touch panel 24, and a display 25.
  • the display 25 is a transmissive display; a user wearing the smart glass 20 can see the scene in front of them as well as images projected on the display 25 positioned in front of their eyes.
  • the functional configuration of the smart glass 20 is shown in FIG. 9.
  • the smart glass 20 includes a control unit 21, a communication unit 22, and a storage unit 26 in addition to the camera 23, the touch panel 24 and the display 25 shown in FIG. 8.
  • the control unit 21 is configured by, for example, a CPU (Central Processing Unit) and performs overall control of the smart glass 20.
  • the control unit 21 generates an operation image based on the information acquired by the external recognition unit 3, such as camera images received from the vehicle control device 2 of the autonomous driving vehicle C, and displays it on the display 25.
  • the control unit 21 also generates movement information for moving the stop position of the stopped autonomous driving vehicle C based on gestures recognized in the images captured by the camera 23, as described later.
  • the communication unit 22, as an output unit, is configured by a wireless communication circuit or the like, and transmits the movement information generated by the control unit 21 to the vehicle control device 2 of the autonomous driving vehicle C. It also receives the information acquired by the external recognition unit 3, such as the camera images mentioned later, from the vehicle control device 2.
  • the storage unit 26 is configured by a storage device such as a non-volatile semiconductor memory, and stores the OS (Operating System) run by the control unit 21, as well as programs such as applications, and data.
  • the operation image in this embodiment is, for example, as shown in FIG. 10, which depicts the view through the smart glass 20 when the user looks at the autonomous driving vehicle C from the side. The immovable area N is projected as an image on the display 25 so that it is superimposed on the user's view including the vehicle. This is done, for example, by recognizing the autonomous driving vehicle C in the image captured by the camera 23 of the smart glass 20 and then identifying and displaying the range of the immovable area N.
  • the moving operation in this embodiment is performed by gestures of the user's hand. For example, one gesture moves the autonomous driving vehicle C to the right as seen from the user (arrow AR in FIG. 10), and another moves it to the left. FIG. 10 shows an example in which the user shakes a hand with the thumb pointing toward the user.
  • the camera 23 of the smart glass 20 captures the gesture, and the control unit 21 identifies the movement direction by recognizing the right hand or the left hand in the captured image. The movement amount of the autonomous driving vehicle C may be specified, for example, by predetermining the distance moved per shake of the hand; in that case, the movement information may include the fact that one gesture was recognized and the movement direction (a rough sketch of this mapping is given after this list).
  • alternatively, as shown in FIG. 11, the gesture may be one of holding up a hand with its back (the right hand R in FIG. 11) toward the user and pushing it toward the autonomous driving vehicle C. In this case, the movement amount may be determined, for example, by how long the pushing gesture continues: the vehicle moves while the gesture is held. Of course, movement beyond the movable area M is not possible.
  • although the smart glass 20 has been described in this embodiment, the same operation may be realized with a camera-equipped smartphone: the autonomous driving vehicle C is photographed with the smartphone, the immovable area N is superimposed on the captured image, and the gestures are likewise detected by the camera.
  • according to this embodiment, in the smart glass 20, the communication unit 22 acquires the recognition information recognized by the external recognition unit 3 installed in the autonomous driving vehicle C, and the control unit 21 generates, based on that information, an operation image for moving the vehicle's stop position. The control unit 21 then generates movement information by recognizing a predetermined gesture in the image captured by the camera 23 and transmits it to the vehicle. The smart glass 20 thus makes it possible to finely adjust the stop position of the autonomous driving vehicle C with an intuitive operation and to stop the vehicle at an appropriate position.
  • in the embodiments described above, movement of the autonomous driving vehicle C is limited to the vehicle's front-rear direction, but movement involving steering operation may also be included.
  • the present invention is not limited to the above embodiments; those skilled in the art can make various modifications based on conventionally known knowledge without departing from the gist of the present invention. As long as such modifications still provide the configuration of the terminal device of the present invention, they are of course included within its scope.
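As a rough sketch of the gesture mapping referenced from the list above: a recognized hand and a shake count, or the duration of a pushing gesture, are converted into movement information. This is purely illustrative; the hand-detection step itself is stubbed out, and the constants and names are assumptions, not values from the patent.

```python
from dataclasses import dataclass

METERS_PER_SHAKE = 0.5                 # assumed distance moved per hand shake
METERS_PER_SECOND_WHILE_PUSHING = 0.3  # assumed speed while a pushing gesture is held

@dataclass
class GestureMovement:
    direction: str    # "right" or "left" as seen from the user
    distance_m: float

def movement_from_shake(hand: str, shakes: int) -> GestureMovement:
    """One hand moves the vehicle one way, the other hand the other way;
    the control unit identifies the hand in the camera 23 image, and a
    fixed distance is assigned per shake."""
    if hand not in ("right", "left"):
        raise ValueError("hand must be 'right' or 'left'")
    return GestureMovement(direction=hand, distance_m=shakes * METERS_PER_SHAKE)

def movement_from_push(hand: str, held_s: float, movable_limit_m: float) -> GestureMovement:
    """A pushing gesture moves the vehicle for as long as it is held,
    clamped so the vehicle never leaves the movable area M."""
    distance = min(held_s * METERS_PER_SECOND_WHILE_PUSHING, movable_limit_m)
    return GestureMovement(direction=hand, distance_m=distance)

print(movement_from_shake("right", shakes=2))                       # 1.0 m to the right
print(movement_from_push("right", held_s=4.0, movable_limit_m=1.0)) # clamped to 1.0 m
```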

Abstract

According to the present invention, a vehicle capable of autonomous travel is stopped at a suitable position. In a smartphone (1), a communication unit (12) acquires recognition information recognized by an external recognition unit (3) installed in an autonomous driving vehicle (C); a control unit (11) generates, based on the recognition information acquired by the communication unit (12), an operation image for moving the stop position of the autonomous driving vehicle (C); and a display unit (14) displays the operation image. An operation for moving the stop position of the autonomous driving vehicle (C) is then performed on an operation unit (15) according to the operation image displayed on the display unit (14), and the communication unit (12) transmits movement information to the autonomous driving vehicle (C) based on the operation performed on the operation unit (15).

Description

Terminal device
The present invention relates to a terminal device.
For example, for calling a taxi from home or while out, systems have been proposed that allow vehicle dispatch to be arranged from a terminal device owned by the user, such as a smartphone (see, for example, Patent Document 1).

For vehicles capable of autonomous travel (autonomous driving vehicles), services in which a vehicle is summoned when needed, as with taxis or car sharing, are also being considered for the future.
Patent Document 1: Japanese Re-publication of PCT International Application No. 2016-002527
Patent Document 1 also describes an autonomous driving vehicle. In the case of an autonomous driving vehicle, however, no human driver is on board, so when the vehicle arrives at the place designated at the time of dispatch or the like, it may stop at a position the user did not intend or at a position unsuitable for the intended use.

One example of the problems to be solved by the present invention is to stop a vehicle capable of autonomous travel, as described above, at an appropriate position.
To solve the above problem, the invention according to claim 1 is characterized by comprising: an acquisition unit that acquires recognition information recognized by an external recognition unit installed in a stopped autonomous driving vehicle; a display unit that displays an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired by the acquisition unit; an operation detection unit that detects an operation, performed on the operation image displayed on the display unit, for moving the stop position of the autonomous driving vehicle; and a transmission unit that transmits information relating to the movement to the autonomous driving vehicle based on the operation detected by the operation detection unit.

The invention according to claim 7 is a method of operating an autonomous driving vehicle, executed by a terminal device for operating a stopped autonomous driving vehicle, characterized by comprising: an acquisition step of acquiring recognition information recognized by an external recognition unit installed in the autonomous driving vehicle; a display step of displaying, on a display unit, an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired in the acquisition step; and a transmission step of transmitting information relating to the movement to the autonomous driving vehicle based on an operation, detected by an operation detection unit, for moving the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit.

The invention according to claim 8 is characterized in that the method of operating an autonomous driving vehicle according to claim 7 is executed by a computer.
FIG. 1 is a schematic configuration diagram of a system including a terminal device according to a first embodiment of the present invention.
FIG. 2 is a functional configuration diagram of the smartphone shown in FIG. 1.
FIG. 3 is a flowchart of the stop position moving operation for the autonomous driving vehicle on the smartphone shown in FIG. 1.
FIG. 4 is a bird's-eye view image of the autonomous driving vehicle seen from above, as an example of the operation image.
FIG. 5 is an image of the autonomous driving vehicle seen from the side, as an example of the operation image.
FIG. 6 is an operation image displaying an icon that indicates obstacle detection.
FIG. 7 is an explanatory diagram of the stop position moving operation using a front camera image as the operation image.
FIG. 8 is an external perspective view of a smart glass as a terminal device according to a second embodiment of the present invention.
FIG. 9 is a functional configuration diagram of the smart glass shown in FIG. 8.
FIG. 10 is an explanatory diagram of the stop position moving operation of the autonomous driving vehicle performed by a user via the smart glass shown in FIG. 8.
FIG. 11 is an explanatory diagram of another method for the stop position moving operation shown in FIG. 10.
A terminal device according to an embodiment of the present invention will now be described. In this terminal device, an acquisition unit acquires recognition information recognized by an external recognition unit installed in an autonomous driving vehicle, and a display unit displays an operation image for moving the stop position of the autonomous driving vehicle based on the acquired recognition information. An operation detection unit detects an operation on the displayed operation image for moving the stop position, and a transmission unit transmits information relating to the movement to the autonomous driving vehicle based on the detected operation. In this way, the user who summoned the autonomous driving vehicle can move the stop position of the stopped vehicle from their own terminal device, making it possible to stop the vehicle at an appropriate position according to the user's intention.
As the operation image, the display unit may display an image of the autonomous driving vehicle seen from above. This allows the vehicle to be moved as if manipulated from overhead, enabling an intuitive operation.

As the operation image, the display unit may also display an image of the autonomous driving vehicle seen from the side. This allows the vehicle to be moved left or right as seen from the side, again enabling an intuitive operation.

When the acquisition unit acquires information indicating an obstacle, the display unit may display information indicating that an obstacle has been detected, so that the user can recognize that there is an obstacle around the autonomous driving vehicle.

The display unit may also display the operation image including the movable range of the autonomous driving vehicle, so that the user can see how far the vehicle can be moved while performing the moving operation.

The display unit may further display the predicted position of the autonomous driving vehicle based on the operation detected by the operation detection unit. This lets the user see the state after the movement in advance, so the vehicle can be moved to a more suitable position.
In a method of operating an autonomous driving vehicle according to an embodiment of the present invention, an acquisition step acquires recognition information recognized by an external recognition unit installed in the autonomous driving vehicle, and a display step displays, on a display unit, an operation image for moving the vehicle's stop position based on the acquired recognition information. A transmission step then transmits information relating to the movement to the vehicle based on an operation, detected by an operation detection unit, for moving the stop position with respect to the displayed operation image. In this way, the user who summoned the autonomous driving vehicle can move the stop position of the stopped vehicle from their own terminal device and stop the vehicle at an appropriate position according to the user's intention.

The above method of operating an autonomous driving vehicle may also be executed by a computer, with the same effect: the stop position can be moved by a computer via the terminal device of the user who summoned the vehicle, and the vehicle can be stopped at an appropriate position according to the user's intention.
A terminal device according to a first embodiment of the present invention will be described with reference to FIGS. 1 to 7. As shown in FIG. 1, the smartphone 1 serving as the terminal device of this embodiment can communicate with the vehicle control device 2 of the autonomous driving vehicle C. The smartphone 1 and the vehicle control device 2 may communicate directly, for example by near-field wireless communication, or via a public network or the like.

The functional configuration of the smartphone 1 is shown in FIG. 2. The smartphone 1 includes a control unit 11, a communication unit 12, a storage unit 13, a display unit 14, and an operation unit 15.
The control unit 11 is configured by, for example, a CPU (Central Processing Unit) and performs overall control of the smartphone 1. The control unit 11 generates an operation image (described later) based on camera images and the like acquired from the vehicle control device 2 of the autonomous driving vehicle C and displays it on the display unit 14. Based on an operation performed on the operation unit 15, the control unit 11 also generates movement information for moving the stop position of the stopped autonomous driving vehicle C.

The communication unit 12, serving as the acquisition unit and the transmission unit, is configured by a wireless communication circuit or the like, and transmits the movement information generated by the control unit 11 to the vehicle control device 2 of the autonomous driving vehicle C. The communication unit 12 also receives information such as camera images acquired by the external recognition unit 3 installed in the autonomous driving vehicle C.

The storage unit 13 is configured by a storage device such as a non-volatile semiconductor memory, and stores the OS (Operating System) run by the control unit 11, as well as programs such as applications, and data.

The display unit 14 is configured by, for example, a liquid crystal display and displays various operation screens for applications and the like. In this embodiment, it also displays the operation image, described later, generated by the control unit 11.

The operation unit 15, serving as the operation detection unit, is configured by, for example, a touch panel laid over the display unit 14 and push buttons, and receives operations for applications and the like. In this embodiment, the moving operation of the autonomous driving vehicle C is performed via the operation image displayed on the display unit 14; by detecting the touch operations made on it, the operation unit 15 functions as the operation detection unit.
The autonomous driving vehicle C includes a vehicle control device 2, an external recognition unit 3, and an own vehicle position detection unit 4. The vehicle control device 2 causes the autonomous driving vehicle C to travel autonomously (automatic driving) based on the detection results of the external recognition unit 3 and the own vehicle position detection unit 4 and on the map information for automatic driving held by the vehicle control device 2. The vehicle control device 2 also communicates with the smartphone 1, receiving information on the movement of the autonomous driving vehicle C and transmitting the information acquired by the external recognition unit 3.

The external recognition unit 3 is installed in the autonomous driving vehicle C and includes cameras that photograph the outside of the vehicle, such as the areas ahead of and behind it, and sensors that recognize the surrounding environment, such as LiDAR (Light Detection And Ranging) and radar.

The own vehicle position detection unit 4 includes devices such as a GPS (Global Positioning System) receiver that detects the current position of the autonomous driving vehicle C, a gyro sensor that detects the vehicle's attitude (orientation and the like), a speed sensor that detects its speed, and an acceleration sensor that detects its acceleration.
Next, the operation of moving the stop position of the autonomous driving vehicle C in this embodiment (the method of operating the autonomous driving vehicle) will be described with reference to the flowchart of FIG. 3, which is executed by the control unit 11 of the smartphone 1. The operation according to this flowchart may be implemented, for example, as an application installed on the smartphone 1; in that case, the application functions as an autonomous driving vehicle operation program.
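As a rough illustration of how the flow of FIG. 3 might be organized in such an application, the following Python sketch wires steps S1 through S5 together. This is a minimal sketch only: the patent defines no API, and every name, field, and type below is an assumption.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class RecognitionInfo:
    """Step S1 payload as described in the text: camera images plus obstacle
    detections around the vehicle (field names are illustrative assumptions)."""
    camera_images: list = field(default_factory=list)         # raw image bytes per camera
    obstacle_distance_m: dict = field(default_factory=dict)   # e.g. {"front": 3.2, "rear": 5.0}

@dataclass
class MovementInfo:
    """Step S4 output: information relating to the movement (illustrative)."""
    direction: str      # "front" or "rear"; movement is limited to this axis
    distance_m: float

def run_stop_position_flow(
    fetch_recognition: Callable[[], RecognitionInfo],
    show_operation_image: Callable[[RecognitionInfo], None],
    wait_for_operation: Callable[[], Optional[MovementInfo]],
    send_movement_info: Callable[[MovementInfo], None],
) -> None:
    """Steps S1 to S5 of FIG. 3, with the communication unit 12 and the
    display and operation units 14/15 passed in as plain callables."""
    info = fetch_recognition()          # S1: acquire recognition information
    show_operation_image(info)          # S2: generate and display the operation image
    operation = wait_for_operation()    # S3: detect a user operation (None = no operation)
    if operation is not None:           # S3 answered YES
        send_movement_info(operation)   # S4 result transmitted in S5
```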
First, in step S1, the results acquired by the external recognition unit 3 (recognition information), such as images photographed by the cameras, are acquired from the autonomous driving vehicle C via the communication unit 12. The recognition information acquired in this step includes, for example, images of the surroundings of the autonomous driving vehicle C photographed by the cameras described above, and detection information about obstacles around the vehicle detected by LiDAR or radar (including whether obstacles are present and their distances) and about surrounding buildings, roads, and the like.
Next, in step S2, the control unit 11 generates an operation image and displays it on the display unit 14. The operation image is an image, generated based on the recognition information acquired in step S1, for operating the autonomous driving vehicle C from the smartphone 1. An example of the operation image will be described with reference to FIG. 4.

FIG. 4 shows an example of the operation image. At its approximate center is displayed an image of the autonomous driving vehicle C seen from above (a bird's-eye view image). Apart from the image of the vehicle (which may be an illustration or the like), this bird's-eye view image may simply show the movable area M and the immovable area N, described later, by color coding or the like. Alternatively, when the autonomous driving vehicle C has cameras photographing, for example, the four directions of front, rear, left, and right, an image representing the vehicle's surroundings may be displayed by combining the bird's-eye view with the images captured by those cameras, or by combining the bird's-eye view with an image generated from the recognition information detected by LiDAR or radar. When the bird's-eye view image represents the vehicle's surroundings in this way, the image may be generated on the autonomous driving vehicle C and transmitted to the smartphone 1 as recognition information.
In the operation image shown in FIG. 4, the area around the autonomous driving vehicle C is divided into a movable area M (movable range), in which the vehicle can be moved, and an immovable area N, in which it cannot. In the example of FIG. 4, the movable area M lies in front of and behind the vehicle, and the immovable area N is everything else. In this embodiment, movement of the autonomous driving vehicle C via the smartphone 1 is limited to the vehicle's front-rear direction, so areas outside the movable directions are treated as immovable. Even in the front-rear direction, when an obstacle is detected from camera images or from LiDAR or radar results, the position of that obstacle is treated as part of the immovable area N. Furthermore, the operation range (distance) over which the smartphone 1 can move the vehicle may be determined in advance, and the area beyond that range may also be set as the immovable area N.
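The rule just described (movement only along the front-rear axis, cut off by detected obstacles and by a preset operation range) can be expressed compactly; in this sketch the preset limit and the dictionary keys are assumptions, not values taken from the patent.

```python
def movable_range_m(obstacle_distance_m: dict, preset_limit_m: float = 3.0) -> dict:
    """Usable forward/backward distances (the movable area M). Anything
    beyond a detected obstacle, or beyond the preset operation range,
    belongs to the immovable area N."""
    front = min(obstacle_distance_m.get("front", float("inf")), preset_limit_m)
    rear = min(obstacle_distance_m.get("rear", float("inf")), preset_limit_m)
    return {"front": front, "rear": rear}

# An obstacle 1.8 m ahead caps forward movement below the preset 3 m range.
print(movable_range_m({"front": 1.8, "rear": 6.0}))  # {'front': 1.8, 'rear': 3.0}
```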
Although the example in FIG. 4 is a bird's-eye view of the autonomous driving vehicle C, the vehicle may instead be shown from the side, as in FIG. 5. In that case, the image of the vehicle (which may be an illustration or the like) and the movable area M and immovable area N are indicated by color coding or the like. The bird's-eye view of FIG. 4 and the side view of FIG. 5 may also be switchable: the switch may be made by operating an icon or the like displayed on the display unit 14, or automatically according to the type of obstacle recognized around the vehicle by the external recognition unit 3 and the distance to it.

Next, in step S3, it is determined whether the user has operated the operation image displayed in step S2; if so (YES), the process proceeds to step S4. Operations on the operation image include, for example, the user touching, via the touch panel included in the operation unit 15, the position on the image to which they want to move the autonomous driving vehicle C (also called the desired movement position), or swiping the vehicle portion of the image to the desired position. Alternatively, a button or the like indicating the movement direction may be displayed on the operation image, and pressing it may move the vehicle toward the desired position. The desired movement position must be specified within the movable area M; if the user attempts to specify a point in the immovable area N, the operation may be rejected and, for example, a warning may be displayed.
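Validating a touched desired movement position against the movable area M, with rejection and a warning for points in the immovable area N, might look like the following sketch; representing the touched point as a signed offset along the front-rear axis is an assumption made for illustration.

```python
def validate_touch(desired_offset_m: float, movable: dict) -> bool:
    """Accept a desired movement position only inside the movable area M.
    Positive offsets are toward the front, negative toward the rear."""
    limit = movable["front"] if desired_offset_m >= 0 else movable["rear"]
    return abs(desired_offset_m) <= limit

movable = {"front": 1.8, "rear": 3.0}
print(validate_touch(1.5, movable))   # True: inside the movable area M
if not validate_touch(2.5, movable):  # would enter the immovable area N
    print("Warning: position is outside the movable area")
```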
The movement amount may also be input directly, for example as "1 m back." This input is not limited to key input and may be voice input. With direct input, the entered information is output to the vehicle control device 2 as the movement information; with voice input, for example, the microphone of the smartphone 1 functions as the operation detection unit.
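Direct input such as "1 m back" needs only a small parser before the text is forwarded as movement information; the accepted phrasings below are assumptions chosen for illustration.

```python
import re
from typing import Optional

def parse_direct_input(text: str) -> Optional[float]:
    """Parse commands like '1 m back' or '0.5 m forward' into a signed
    distance in metres (positive = forward); None if not understood."""
    m = re.match(r"\s*(\d+(?:\.\d+)?)\s*m\s+(forward|ahead|back|behind)\s*$", text.lower())
    if not m:
        return None
    distance = float(m.group(1))
    return distance if m.group(2) in ("forward", "ahead") else -distance

print(parse_direct_input("1 m back"))       # -1.0
print(parse_direct_input("0.5 m forward"))  # 0.5
```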
Next, in step S4, the movement information is generated. The movement information is information relating to the movement of the autonomous driving vehicle C, based on the operation performed on the operation image in step S3. In this embodiment, when the bird's-eye view image has been acquired from the autonomous driving vehicle C, for example, the coordinates of the desired movement position in that image may be used as the movement information. When the bird's-eye view image is generated on the vehicle side, for example by the vehicle control device 2, the vehicle control device 2 can calculate the movement amount from the coordinate information transmitted as the movement information.

When the bird's-eye view image has not been acquired from the vehicle but has been generated by the control unit 11, or in the case of the side view image shown in FIG. 5, the distance to obstacles detected by LiDAR or the like and the predetermined movable range (distance) are known, so the movement amount can be calculated from the relationship between the distance to the obstacle, the maximum position to which movement is possible, and the desired movement position. For example, the movement amount can be calculated from the ratio between the coordinates of the maximum movable position and those of the desired movement position, and the calculated amount can be used as the movement information.
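The ratio-based calculation described above can be written out directly. In this sketch, pixel coordinates are assumed to be measured from the vehicle's position in the operation image, and the real distance to the maximum movable position is assumed known (from the LiDAR obstacle distance or the preset range).

```python
def movement_amount_m(desired_px: float, max_movable_px: float, max_movable_m: float) -> float:
    """Movement amount from the ratio between the desired movement position
    and the maximum movable position, both in image pixels measured from
    the vehicle, where the real distance to the maximum position is known."""
    if max_movable_px <= 0:
        raise ValueError("maximum movable position must lie away from the vehicle")
    ratio = min(desired_px / max_movable_px, 1.0)  # clamp inside the movable area M
    return ratio * max_movable_m

# Desired point at 120 px; the movable area M ends at 200 px, known to be 1.8 m away.
print(movement_amount_m(120, 200, 1.8))  # 1.08
```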
 次に、ステップS5において、ステップS4で生成された移動情報を自動運転車両Cの車両制御装置2に送信する。 Next, in step S5, the movement information generated in step S4 is transmitted to the vehicle control device 2 of the automatically driven vehicle C.
 The vehicle control device 2 controls the accelerator, the brake, and the like of the autonomous driving vehicle C based on the received movement information to move the vehicle to the desired movement position. When the movement is complete, the vehicle control device 2 may notify the smartphone 1 of the completion, and the smartphone 1 may indicate on the display unit 14 that the movement has finished. The smartphone 1 may also store the final movement position.
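 Although the embodiment specifies only the terminal side, the receiving end might look like the following sketch; vehicle.drive, notify, and the message layout are hypothetical stand-ins, since the actual control interfaces of the vehicle control device 2 are not described in the source.

    import json

    def handle_movement_info(message, vehicle, notify):
        # Vehicle control device 2 (sketch): move by the received amount,
        # then optionally report completion back to the smartphone 1.
        info = json.loads(message)
        vehicle.drive(info["direction"], info["amount_m"])  # accelerator/brake control
        notify("movement finished")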
 As is apparent from the above description, step S1 functions as the acquisition step, step S2 as the generation step, and step S5 as the transmission step.
 According to this embodiment, in the smartphone 1, the communication unit 12 acquires the recognition information recognized by the external world recognition unit 3 installed in the autonomous driving vehicle C, the control unit 11 generates an operation image for moving the stop position of the autonomous driving vehicle C based on the acquired recognition information, and the display unit 14 displays that operation image. An operation to move the stop position of the autonomous driving vehicle C is then performed on the operation unit 15 based on the operation image displayed on the display unit 14, and the communication unit 12 transmits the movement information to the autonomous driving vehicle C based on that operation. In this way, the user who summoned the autonomous driving vehicle C can use the smartphone 1 to fine-tune the vehicle's stop position, and the autonomous driving vehicle C can be stopped at an appropriate position that matches the user's intention.
 Furthermore, by displaying on the display unit 14, as the operation image, an image of the autonomous driving vehicle C viewed from above, the user can operate as if moving the autonomous driving vehicle C seen from overhead, which makes the operation intuitive.
 Likewise, by displaying on the display unit 14, as the operation image, an image of the autonomous driving vehicle C viewed from the side, the user can operate as if moving the autonomous driving vehicle C left or right as seen from the side, which also makes the operation intuitive.
 The communication unit 12 also acquires images captured by the cameras that are installed in the autonomous driving vehicle C and photograph its surroundings. This makes it possible to move the autonomous driving vehicle C based on, for example, a bird's-eye view image generated from the plurality of images captured by those cameras. The movable range M of the autonomous driving vehicle C can also be determined based on obstacles detected in the captured images.
 The communication unit 12 also acquires detection information, obtained by sensors installed in the autonomous driving vehicle C, about obstacles around the vehicle and about buildings, roads, and the like in its vicinity. This makes it possible to move the autonomous driving vehicle C based on the detection results of sensors such as the lidar installed in the vehicle.
 Furthermore, the control unit 11 includes the movable range M of the autonomous driving vehicle C in the operation image. This allows the user to perform the movement operation while being aware of the range within which the autonomous driving vehicle C can move.
 In the embodiment described above, the movable area M and the immovable area N are displayed on the operation image to show the user the range within which movement is possible. As shown in FIG. 6, however, when an obstacle is detected from the acquired detection information, an icon W, a message, or the like indicating that an obstacle has been detected may also be displayed, together with the movable area M and the immovable area N if desired. That is, when information indicating an obstacle is acquired, the control unit 11 causes the display unit 14 to display the icon W indicating the detection of the obstacle. This allows the user to recognize that there is an obstacle around the autonomous driving vehicle C.
 The bird's-eye view image of FIG. 4 and the side image of FIG. 5 were described above as representative examples of the operation image, but operation is also possible with operation images such as those described below.
 FIG. 7 shows an example of an image from the front camera installed in the autonomous driving vehicle C. The upper part of FIG. 7 shows the image captured by the front camera as displayed on the display unit 14. When the user wants to move the vehicle from its current stop position to position A in the upper part of FIG. 7, the user gives an instruction, for example by touching the portion corresponding to A on the display unit 14. An image predicting the vehicle's position after the move (the predicted movement position) is then displayed, for example with the rear wheels of the autonomous driving vehicle C placed at that position (lower part of FIG. 7).
 While the display of the lower part of FIG. 7 is shown on the display unit 14, the user confirms the operation, for example with a confirm button, if the user accepts the movement to that position. When the movement position is confirmed, the control unit 11 transmits the coordinate information of the position (A) designated by the user in the front camera image to the vehicle control device 2 via the communication unit 12 as the movement information. The vehicle control device 2 calculates the distance from the image captured by the front camera to the designated position and moves the autonomous driving vehicle C to that position.
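 A brief sketch of this confirm-and-send step, assuming a hypothetical send function in place of the communication unit 12 and an illustrative message layout; the distance conversion happens on the vehicle side and is outside this fragment.

    def confirm_move(selected_px, send):
        # The user pressed the confirm button for position A; transmit the
        # pixel coordinates so the vehicle control device 2 can convert
        # them into a travel distance on its side.
        movement_info = {
            "type": "front_camera_pixel",
            "coordinates": selected_px,
        }
        send("vehicle_control_device_2", movement_info)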
 Although the example of FIG. 7 was described using a front image captured by the front camera, the same processing can of course also be performed with a rear image captured by a rear camera.
 This embodiment has been described on the assumption that the operation is performed in the vicinity of the autonomous driving vehicle C. However, when dispatch of the autonomous driving vehicle C to the user's home is requested, for example, the movement operation may be performed from inside the home using the operation image, or while the user is aboard the autonomous driving vehicle C. When the stop position is moved from inside the autonomous driving vehicle C, the terminal device is not limited to the smartphone 1 and may be an on-board device mounted on the autonomous driving vehicle C.
 Next, a terminal device according to a second embodiment of the present invention will be described with reference to FIGS. 8 to 11. The same parts as in the first embodiment are given the same reference numerals, and their description is omitted.
 FIG. 8 shows an example in which the stop position of the autonomous driving vehicle C is moved using smart glasses. The smart glasses 20 are a glasses-type wearable device, a terminal device worn on the user's head. As shown in FIG. 8, the smart glasses 20 include a camera 23, a touch panel 24, and a display 25. The display 25 is a transmissive display, so the user wearing the smart glasses can see the scenery in front of them together with the image projected on the display 25 positioned in front of their eyes.
 FIG. 9 shows the functional configuration of the smart glasses 20. In addition to the camera 23, the touch panel 24, and the display 25 shown in FIG. 8, the smart glasses 20 include a control unit 21, a communication unit 22, and a storage unit 26.
 The control unit 21 is configured of, for example, a CPU (Central Processing Unit) and performs overall control of the smart glasses 20. The control unit 21 generates an operation image and displays it on the display 25 based on the information acquired by the external world recognition unit 3, such as camera images obtained from the vehicle control device 2 of the autonomous driving vehicle C. The control unit 21 also generates information concerning the movement for shifting the stop position of the stopped autonomous driving vehicle C, based on the operations described later.
 The communication unit 22, serving as an output unit, is configured of a wireless communication circuit or the like, and transmits the movement information generated by the control unit 21 to the vehicle control device 2 of the autonomous driving vehicle C. It also receives from the vehicle control device 2 the information acquired by the external world recognition unit 3, such as the camera images described later.
 The storage unit 26 is configured of a storage device such as a nonvolatile semiconductor memory, and stores the OS (Operating System) run by the control unit 21 as well as programs and data such as applications.
 In the case of the movement operation of the autonomous driving vehicle C using the smart glasses shown in FIG. 8, the operation image is, for example, as shown in FIG. 10. In FIG. 10, the immovable area N is projected as an image on the display 25 and is thereby superimposed on the user's view of the autonomous driving vehicle C seen from the side through the smart glasses 20. This is achieved, for example, by recognizing the autonomous driving vehicle C in the image captured by the camera 23 of the smart glasses 20 and thereby identifying and displaying the extent of the immovable area N.
 In this embodiment, the operation image is operated by gestures of the user's hands. For example, as shown in FIG. 10, when the user makes a gesture of waving the left hand L from left to right, the autonomous driving vehicle C moves to the right as seen from the user (arrow AR in FIG. 10). When the user makes a gesture of waving the right hand R from right to left, the autonomous driving vehicle C moves to the left as seen from the user. FIG. 10 shows an example of waving an upright hand with the thumb facing the user.
 The camera 23 of the smart glasses 20 captures the gesture, and the control unit 21 distinguishes the right hand from the left hand in the captured image and recognizes the gesture, thereby identifying the movement direction. In this embodiment, therefore, the camera 23 functions as the operation detection unit. The movement amount of the autonomous driving vehicle C may be specified, for example, by predetermining the amount of movement corresponding to one wave of the hand. The movement information then only needs to include the fact that one gesture was recognized and the movement direction.
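 As an illustrative sketch, the following fragment maps an already-recognized wave gesture to movement information in the manner of FIG. 10; gesture recognition on the camera 23 frames is assumed to have produced a hand label and a wave direction, and GESTURE_STEP_M is an assumed value, since the source only states that the amount per wave is predetermined.

    GESTURE_STEP_M = 0.5  # assumed fixed movement per hand wave

    def movement_from_gesture(hand, direction):
        # Left hand waved left-to-right: move right as seen from the user.
        if hand == "left" and direction == "left_to_right":
            return {"direction": "right", "amount_m": GESTURE_STEP_M}
        # Right hand waved right-to-left: move left as seen from the user.
        if hand == "right" and direction == "right_to_left":
            return {"direction": "left", "amount_m": GESTURE_STEP_M}
        return None  # unrecognized gesture: no movement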
 The gestures are not limited to those shown in FIG. 10; any gesture of moving a hand or arm left or right may be used, as long as it allows a user standing beside the autonomous driving vehicle C to move the vehicle left or right as seen from the user.
 As shown in FIG. 11, a gesture of pushing the autonomous driving vehicle C may also be used: while viewing the vehicle from behind or in front through the smart glasses 20, the user holds up a hand (the right hand R in FIG. 11) with its back facing the user. In this case, the movement amount of the autonomous driving vehicle C may be determined, for example, by the duration of the pushing gesture; that is, the autonomous driving vehicle C moves while the gesture continues. Naturally, movement beyond the movable area M is not permitted.
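 A minimal sketch of this duration-based movement, assuming hypothetical vehicle.creep and is_pushing interfaces; the speed, polling rate, and range values are illustrative only, with the travel clamped to the movable area M.

    import time

    def push_gesture_move(vehicle, is_pushing, speed_mps=0.3, max_travel_m=2.0):
        # Creep forward while the push gesture continues; stop at the
        # edge of the movable area M.
        moved = 0.0
        while is_pushing() and moved < max_travel_m:
            step = min(speed_mps * 0.1, max_travel_m - moved)
            vehicle.creep(step)   # small forward increment
            moved += step
            time.sleep(0.1)       # re-check the gesture at 10 Hz
        return moved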
 Although this embodiment has been described with the smart glasses 20, a smartphone equipped with a camera may instead photograph the autonomous driving vehicle C, superimpose the immovable area N on the captured image, and detect the gestures with the same camera.
 According to this embodiment, in the smart glasses 20, the communication unit 22 acquires the recognition information recognized by the external world recognition unit 3 installed in the autonomous driving vehicle C, and the control unit 21 generates an operation image for moving the stop position of the autonomous driving vehicle C based on the acquired recognition information. The control unit 21 then recognizes a predetermined gesture in the image captured by the camera 23, and generates and transmits the movement information to the autonomous driving vehicle C. In this way, the smart glasses 20 allow the stop position of the autonomous driving vehicle C to be finely adjusted by intuitive operation, so that the autonomous driving vehicle C can be stopped at an appropriate position.
 In the two embodiments described above, the movement of the autonomous driving vehicle C was limited to its front-rear direction, but movements involving steering operation may also be included.
 The present invention is not limited to the above embodiments. That is, those skilled in the art can carry out various modifications in accordance with conventionally known knowledge without departing from the gist of the present invention. As long as such modifications still include the configuration of the terminal device of the present invention, they are, of course, within the scope of the present invention.
1 Smartphone (terminal device)
2 Vehicle control device
3 External world recognition unit
11 Control unit (generation unit)
12 Communication unit (acquisition unit, transmission unit)
14 Display unit
15 Operation unit (operation detection unit)
20 Smart glasses (terminal device)
21 Control unit (generation unit)
22 Communication unit (acquisition unit, transmission unit)
23 Camera (operation detection unit)
25 Display
C Autonomous driving vehicle
M Movable area (movable range)
N Immovable area
W Icon (information indicating obstacle detection)
S1 Acquire information from the autonomous driving vehicle (acquisition step)
S2 Generate and display the operation image (generation step)
S5 Transmit (transmission step)

Claims (8)

  1. A terminal device comprising:
     an acquisition unit that acquires recognition information recognized by an external world recognition unit installed in a stopped autonomous driving vehicle;
     a display unit that displays an operation image for moving a stop position of the autonomous driving vehicle based on the recognition information acquired by the acquisition unit;
     an operation detection unit that detects an operation of moving the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit; and
     a transmission unit that transmits information related to the movement to the autonomous driving vehicle based on the operation detected by the operation detection unit.
  2. The terminal device according to claim 1, wherein the display unit displays, as the operation image, an image of the autonomous driving vehicle viewed from above.
  3. The terminal device according to claim 1 or 2, wherein the display unit displays, as the operation image, an image of the autonomous driving vehicle viewed from a side.
  4. The terminal device according to any one of claims 1 to 3, wherein, when the acquisition unit acquires information indicating an obstacle, the display unit displays information indicating the detection of the obstacle.
  5. The terminal device according to any one of claims 1 to 4, wherein the display unit displays the operation image including a movable range of the autonomous driving vehicle.
  6. The terminal device according to any one of claims 1 to 5, wherein the display unit displays a predicted movement position of the autonomous driving vehicle based on the operation detected by the operation detection unit.
  7. An autonomous driving vehicle operation method executed by a terminal device for operating a stopped autonomous driving vehicle, comprising:
     an acquisition step of acquiring recognition information recognized by an external world recognition unit installed in the autonomous driving vehicle;
     a display step of causing a display unit to display an operation image for moving a stop position of the autonomous driving vehicle based on the recognition information acquired in the acquisition step; and
     a transmission step of transmitting information related to the movement to the autonomous driving vehicle based on an operation detected by an operation detection unit that detects an operation of moving the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit.
  8. An autonomous driving vehicle operation program that causes a computer to execute the autonomous driving vehicle operation method according to claim 7.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-188223 2017-09-28
JP2017188223 2017-09-28

Publications (1)

Publication Number Publication Date
WO2019065699A1 (en)

Family

ID=65902112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/035601 WO2019065699A1 (en) 2017-09-28 2018-09-26 Terminal device

Country Status (1)

Country Link
WO (1) WO2019065699A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014065392A (en) * 2012-09-25 2014-04-17 Aisin Seiki Co Ltd Portable terminal, remote control system, remote control method, and program
JP2016007959A (en) * 2014-06-25 2016-01-18 富士通テン株式会社 Device for vehicle, vehicle control system, and vehicle control method
WO2017068698A1 (en) * 2015-10-22 2017-04-27 日産自動車株式会社 Parking support method and parking support device



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18861873

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18861873

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP