WO2019065699A1 - Terminal device - Google Patents

Terminal device

Info

Publication number
WO2019065699A1
Authority
WO
WIPO (PCT)
Prior art keywords
driving vehicle
autonomous driving
unit
image
vehicle
Prior art date
Application number
PCT/JP2018/035601
Other languages
English (en)
Japanese (ja)
Inventor
晴彦 高木
吉洋 安原
昌嗣 左近
真武 下平
里紗 夏川
Original Assignee
パイオニア株式会社
Priority date
Filing date
Publication date
Application filed by パイオニア株式会社
Publication of WO2019065699A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/09 Arrangements for giving variable traffic instructions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom

Definitions

  • The present invention relates to a terminal device.
  • When calling a taxi from home or while away, there has been proposed a system or the like capable of arranging dispatch from a terminal device carried by the user, such as a smartphone (see, for example, Patent Document 1).
  • Patent Document 1 also describes an autonomous driving vehicle. In the case of an autonomous driving vehicle, since no human driver is on board, the vehicle may stop at a position that does not match the user's intention, or at a position unsuitable for use, when it arrives at the designated place at the time of dispatch.
  • One of the problems to be solved by the present invention is, for example, to stop a vehicle capable of autonomous traveling as described above at an appropriate position.
  • The invention according to claim 1 is a terminal device for operating a stopped autonomous driving vehicle, comprising: an acquisition unit that acquires recognition information recognized by an external world recognition unit installed in the stopped autonomous driving vehicle; a display unit that displays an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired by the acquisition unit; an operation detection unit that detects an operation to move the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit; and a transmission unit that transmits information related to the movement to the autonomous driving vehicle based on the operation detected by the operation detection unit.
  • The invention according to claim 7 is an autonomous driving vehicle operation method executed by a terminal device for operating a stopped autonomous driving vehicle, the method comprising: an acquisition step of acquiring recognition information recognized by an external world recognition unit installed in the autonomous driving vehicle; a display step of displaying, on a display unit, an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired in the acquisition step; and a transmission step of transmitting information related to the movement to the autonomous driving vehicle based on an operation, detected by an operation detection unit, to move the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit.
  • The invention according to claim 8 is characterized in that the autonomous driving vehicle operation method according to claim 7 is executed by a computer.
  • FIG. 1 is a schematic block diagram of a system including the terminal device according to the first embodiment of the present invention.
  • FIG. 9 is a functional block diagram of the smart glasses shown in FIG. 8. FIG. 10 is an explanatory diagram of the user's operation, via the smart glasses shown in FIG. 8, for moving the stop position of the autonomous driving vehicle. FIG. 11 is an explanatory diagram of another method for the stop-position movement operation shown in FIG. 10.
  • The acquisition unit acquires recognition information recognized by the external world recognition unit installed in the autonomous driving vehicle, and the display unit displays an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired by the acquisition unit.
  • The operation detection unit detects an operation to move the stop position of the autonomous driving vehicle with respect to the operation image displayed on the display unit, and the transmission unit transmits information related to the movement to the autonomous driving vehicle based on the operation detected by the operation detection unit.
  • By doing this, the stop position of a stopped autonomous driving vehicle can be moved from the terminal device possessed by the user who called the vehicle, and the autonomous driving vehicle can be stopped at an appropriate position according to the user's intention.
  • An image of the autonomous driving vehicle viewed from above may be displayed on the display unit as the operation image.
  • An image of the autonomous driving vehicle viewed from the side may be displayed on the display unit as the operation image.
  • The display unit may display information indicating detection of an obstacle.
  • The display unit may display the operation image including the movable range of the autonomous driving vehicle. By doing this, when the user moves the autonomous driving vehicle, the moving operation can be performed while recognizing the movable range.
  • The display unit may display the predicted movement position of the autonomous driving vehicle based on the operation detected by the operation detection unit. By doing this, the user can grasp in advance the state after the movement and move the autonomous driving vehicle to a more appropriate position.
  • In the autonomous driving vehicle operation method, the acquiring step acquires recognition information recognized by the external world recognition unit installed in the autonomous driving vehicle, and the displaying step displays, on the display unit, an operation image for moving the stop position of the autonomous driving vehicle based on the recognition information acquired in the acquiring step.
  • The transmitting step then transmits information on the movement to the autonomous driving vehicle based on the detected operation.
  • The above-described autonomous driving vehicle operation method may be executed by a computer.
  • By using a computer, the stop position of the autonomous driving vehicle can be moved from the terminal device possessed by the user who called the vehicle, and the autonomous driving vehicle can be stopped at an appropriate position according to the user's intention.
  • A terminal device according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 7.
  • The smartphone 1 as the terminal device according to the present embodiment can communicate with the vehicle control device 2 of the autonomous driving vehicle C.
  • The smartphone 1 and the vehicle control device 2 may communicate directly by near field communication or the like, or may communicate via a public network or the like.
  • The functional configuration of the smartphone 1 is shown in FIG. 2.
  • The smartphone 1 includes a control unit 11, a communication unit 12, a storage unit 13, a display unit 14, and an operation unit 15.
  • The control unit 11 is configured of, for example, a CPU (Central Processing Unit) and controls the overall operation of the smartphone 1.
  • The control unit 11 generates an operation image (described later) based on camera images and the like acquired from the vehicle control device 2 of the autonomous driving vehicle C and causes the display unit 14 to display it. Further, based on operations performed on the operation unit 15, the control unit 11 generates information related to movement for moving the stop position of the stopped autonomous driving vehicle C.
  • The communication unit 12, serving as the acquisition unit and the transmission unit, is configured of a wireless communication circuit or the like and transmits the movement information generated by the control unit 11 to the vehicle control device 2 of the autonomous driving vehicle C.
  • The communication unit 12 also receives information, such as camera images, acquired by the external world recognition unit 3 installed in the autonomous driving vehicle C (a sketch of such an exchange follows below).
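  • The publication does not specify how the smartphone 1 and the vehicle control device 2 exchange data; the following is only a minimal sketch assuming a line-delimited JSON protocol over a local socket, where the message fields and the `get_recognition_info`/`move` message types are hypothetical.

```python
import json
import socket

# Hypothetical smartphone-side helpers: receive recognition information from,
# and send movement information to, the vehicle control device 2.
# The wire format below is an assumption, not taken from the publication.

def request_recognition_info(vehicle_addr: tuple) -> dict:
    """Fetch the latest recognition information (acquisition-unit role)."""
    with socket.create_connection(vehicle_addr, timeout=5.0) as sock:
        sock.sendall(json.dumps({"type": "get_recognition_info"}).encode() + b"\n")
        return json.loads(sock.makefile().readline())

def send_movement_info(vehicle_addr: tuple, movement: dict) -> None:
    """Send movement information generated by the control unit 11 (transmission-unit role)."""
    with socket.create_connection(vehicle_addr, timeout=5.0) as sock:
        sock.sendall(json.dumps({"type": "move", **movement}).encode() + b"\n")
```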
  • The storage unit 13 is configured of a storage device such as a non-volatile semiconductor memory and stores the operating system (OS) run by the control unit 11 as well as programs and data such as applications.
  • The display unit 14 is configured of, for example, a liquid crystal display and displays various operation screens of applications and the like. In the present embodiment, it also displays the operation image, described later, generated by the control unit 11.
  • The operation unit 15, serving as the operation detection unit, is configured of, for example, a touch panel provided so as to be superimposed on the display unit 14, or push buttons, through which applications and the like are operated. In the present embodiment, the moving operation of the autonomous driving vehicle C is also performed through the operation image displayed on the display unit 14; that is, by detecting touch operations, the operation unit 15 functions as the operation detection unit.
  • The autonomous driving vehicle C includes the vehicle control device 2, the external world recognition unit 3, and the own vehicle position detection unit 4.
  • The vehicle control device 2 causes the autonomous driving vehicle C to travel autonomously (automatic driving) based on the results detected by the external world recognition unit 3 and the own vehicle position detection unit 4 and on the map information for automatic driving possessed by the vehicle control device 2. The vehicle control device 2 also communicates with the smartphone 1, receiving information on the movement of the autonomous driving vehicle C and transmitting the information acquired by the external world recognition unit 3.
  • The external world recognition unit 3 is installed in the autonomous driving vehicle C and includes cameras that photograph the outside of the vehicle, such as the front and rear, and sensors, such as LiDAR (Light Detection And Ranging) and radar, that recognize the surrounding environment.
  • The own vehicle position detection unit 4 includes devices such as a GPS (Global Positioning System) receiver that detects the current position of the autonomous driving vehicle C, a gyro sensor that detects the posture (direction, etc.) of the vehicle, a velocity sensor that detects its speed, and an acceleration sensor that detects its acceleration.
  • The flowchart shown in FIG. 3 is executed by the control unit 11 of the smartphone 1. The operation according to this flowchart may be configured, for example, as an application installed on the smartphone 1, in which case the application functions as an autonomous driving vehicle operation program.
  • In step S1, the results (recognition information) acquired by the external world recognition unit 3, such as images photographed by the cameras, are acquired from the autonomous driving vehicle C via the communication unit 12.
  • The recognition information acquired in this step includes, for example, images of the surroundings of the autonomous driving vehicle C photographed by the above-described cameras, the presence or absence of obstacles around the vehicle detected by the LiDAR or radar together with the distance to those obstacles, and detection information on buildings, roads, and the like around the vehicle. An illustrative container for this information is sketched below.
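  • As a concrete illustration of what such recognition information might contain, here is a minimal sketch; the field names are assumptions, not taken from the publication.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative container for the recognition information listed above:
# camera images, obstacles with distances, and surrounding structures.

@dataclass
class Obstacle:
    distance_m: float      # distance from the vehicle C to the obstacle
    bearing_deg: float     # direction of the obstacle relative to the vehicle heading

@dataclass
class RecognitionInfo:
    camera_images: Dict[str, bytes] = field(default_factory=dict)  # e.g. {"front": jpeg_bytes}
    obstacles: List[Obstacle] = field(default_factory=list)        # from LiDAR/radar
    structures: List[str] = field(default_factory=list)            # buildings, roads, ...
```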
  • In step S2, the control unit 11 generates an operation image and displays it on the display unit 14.
  • The operation image is an image, generated based on the recognition information acquired in step S1, for operating the autonomous driving vehicle C with the smartphone 1. An example of the operation image will be described with reference to FIG. 4.
  • The diagram shown in FIG. 4 is an example of the operation image.
  • An image of the autonomous driving vehicle C viewed from above (a bird's-eye view image) is displayed at its approximate center.
  • In this bird's-eye view image, a movable area M and an immovable area N, described later, may be displayed color-coded or otherwise distinguished around the image of the autonomous driving vehicle C (which may be an illustration or the like).
  • When the autonomous driving vehicle C has cameras that capture, for example, the four directions of front, rear, left, and right of the vehicle, an image representing the situation around the vehicle, obtained by combining the bird's-eye view image with the images captured by those cameras, may be displayed. Alternatively, an image representing the surroundings of the vehicle, generated from the bird's-eye view image and the recognition information detected by the LiDAR or radar, may be displayed.
  • When using a bird's-eye view image representing the surrounding situation of the autonomous driving vehicle C, the image may be generated by the vehicle and transmitted to the smartphone 1 as recognition information.
  • In the operation image, the area around the autonomous driving vehicle C is divided into the movable area M (movable range), an area in which the vehicle can move, and the immovable area N, an area in which movement of the vehicle is not possible.
  • In the present embodiment, the movable area M lies in front of and behind the autonomous driving vehicle C, and the immovable area N is the area other than the movable area M, because movement of the vehicle by the smartphone 1 is limited to the longitudinal direction of the vehicle.
  • When an obstacle is detected, the position where the obstacle exists is also regarded as part of the immovable area N.
  • Further, the movement operation range (distance) of the autonomous driving vehicle C by the smartphone 1 may be determined in advance, with the area beyond that range set as the immovable area N. One way such a range could be derived is sketched below.
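  • The text leaves the exact derivation of the movable area open; the following sketch assumes it is computed along the vehicle's longitudinal axis by clamping a predetermined operation range with the nearest detected obstacle ahead and behind. The parameter names and the safety margin are illustrative.

```python
from typing import Optional, Tuple

def movable_range(max_range_m: float,
                  nearest_front_obstacle_m: Optional[float],
                  nearest_rear_obstacle_m: Optional[float],
                  margin_m: float = 0.5) -> Tuple[float, float]:
    """Return (max_backward_m, max_forward_m): the extent of area M in each direction."""
    forward = max_range_m
    if nearest_front_obstacle_m is not None:
        # Stop short of the obstacle ahead by the safety margin.
        forward = min(forward, max(0.0, nearest_front_obstacle_m - margin_m))
    backward = max_range_m
    if nearest_rear_obstacle_m is not None:
        backward = min(backward, max(0.0, nearest_rear_obstacle_m - margin_m))
    return backward, forward
```

For example, with a predetermined 5 m operation range and an obstacle detected 2 m ahead, `movable_range(5.0, 2.0, None)` yields `(5.0, 1.5)`: area M would extend 5 m backward but only 1.5 m forward.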
  • Although the image in FIG. 4 is a bird's-eye view of the autonomous driving vehicle C, an image of the vehicle viewed from the side, as in FIG. 5, may be used instead (and may likewise be an illustration or the like).
  • The bird's-eye view image of FIG. 4 and the side image of FIG. 5 may also be switched between.
  • The switching may be performed by operating an icon or the like indicating image switching displayed on the display unit 14, or automatically according to the type of obstacle recognized around the autonomous driving vehicle C by the external world recognition unit 3 and the distance to the obstacle.
  • In step S3, it is determined whether the user has performed an operation on the operation image displayed in step S2. If an operation has been performed (YES), the process proceeds to step S4.
  • Examples of operations on the operation image include a touch operation, via the touch panel included in the operation unit 15, on the position on the operation image to which the user wants to move the autonomous driving vehicle C (also referred to as the desired movement position), and a swipe operation dragging the portion showing the vehicle to the desired position on the operation image.
  • Alternatively, buttons or the like indicating the movement direction may be displayed on the operation image, and the vehicle moved to the desired movement position by pressing them.
  • The desired movement position is specified within the range of the movable area M; if the user attempts to specify a point in the immovable area N, the movement operation may be rejected and, for example, a warning displayed.
  • The movement amount may also be input directly, such as "1 m backward".
  • This input is not limited to key input and may be voice input.
  • The directly input information is output to the vehicle control device 2 as movement information; a sketch of parsing such input follows below.
  • In the case of voice input, for example, the microphone of the smartphone 1 functions as the operation detection unit.
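  • A parser for such directly input amounts might look as follows; the accepted phrasing ("1 m forward/backward") and the returned dictionary shape are assumptions for illustration.

```python
import re
from typing import Optional

def parse_direct_input(text: str) -> Optional[dict]:
    """Turn input like '1 m backward' into movement information, or None if unrecognized."""
    m = re.match(r"\s*(\d+(?:\.\d+)?)\s*m\s*(forward|backward)\s*$", text.lower())
    if m is None:
        return None  # caller may display a warning for unrecognized input
    distance = float(m.group(1))
    sign = 1.0 if m.group(2) == "forward" else -1.0
    return {"kind": "distance", "delta_m": sign * distance}
```

With voice input, the recognized transcript would be passed through the same parser.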
  • In step S4, the control unit 11 generates movement information, that is, information related to the movement of the autonomous driving vehicle C based on the operation performed on the operation image in step S3.
  • For example, the coordinates of the desired movement position in the bird's-eye view image may be used as the movement information; the vehicle control device 2 can then calculate the movement amount based on the coordinate information transmitted as the movement information.
  • Alternatively, the movement amount may be calculated based on the relationship between the distance to the obstacle, the maximum position to which movement is possible, and the desired movement position. For example, the movement amount can be calculated from the ratio of the coordinate of the desired movement position to the coordinate of the maximum movable position, and the calculated amount used as the movement information, as in the sketch below.
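  • The ratio calculation described above can be written out as follows; coordinates are assumed to be measured in pixels from the vehicle's position along its longitudinal axis in the bird's-eye view image.

```python
def movement_amount_m(desired_px: float, max_movable_px: float,
                      max_movable_m: float) -> float:
    """Movement amount = (desired coordinate / maximum movable coordinate) * maximum movable distance."""
    if max_movable_px <= 0.0:
        return 0.0
    # Clamp the ratio so the result never leaves the movable area M.
    ratio = max(-1.0, min(1.0, desired_px / max_movable_px))
    return ratio * max_movable_m
```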
  • In step S5, the movement information generated in step S4 is transmitted to the vehicle control device 2 of the autonomous driving vehicle C.
  • The vehicle control device 2 controls the accelerator, brake, and the like of the autonomous driving vehicle C based on the received movement information to move the vehicle to the desired movement position.
  • When the movement ends, the vehicle control device 2 may transmit notice of the end of the movement to the smartphone 1.
  • In the smartphone 1, the end of the movement may then be displayed on the display unit 14.
  • The smartphone 1 may also store the final movement position.
  • As described above, step S1 functions as an acquisition step, step S2 as a generation step, and step S5 as a transmission step.
  • In the smartphone 1 according to the present embodiment, the communication unit 12 acquires recognition information recognized by the external world recognition unit 3 installed in the autonomous driving vehicle C, the control unit 11 generates an operation image for moving the stop position of the vehicle based on the recognition information acquired by the communication unit 12, and the display unit 14 displays the operation image. Then, the operation unit 15 detects an operation to move the stop position of the vehicle performed on the operation image displayed on the display unit 14, and the communication unit 12 transmits the movement information to the autonomous driving vehicle C based on the operation detected by the operation unit 15.
  • By doing this, the smartphone 1 possessed by the user who called the autonomous driving vehicle C can finely adjust the stop position of the vehicle, making it possible to stop the autonomous driving vehicle C at an appropriate position according to the user's intention.
  • The communication unit 12 acquires images captured by the cameras that are installed in the autonomous driving vehicle C and that photograph the outside of the vehicle.
  • By doing this, the autonomous driving vehicle C can be moved based on, for example, a bird's-eye view image generated from a plurality of images captured by the cameras installed in the vehicle.
  • In addition, the movable range M of the autonomous driving vehicle C can be determined based on obstacles detected in the images captured by the cameras.
  • The communication unit 12 also acquires detection information, obtained in the autonomous driving vehicle C, on obstacles existing around the vehicle and on buildings, roads, and the like around the vehicle. By doing this, the autonomous driving vehicle C can be moved based on the detection results of sensors such as the LiDAR installed in the vehicle.
  • Further, the control unit 11 includes the movable range M of the autonomous driving vehicle C in the operation image.
  • The movable area M and the immovable area N are displayed on the operation image to indicate the movable range to the user.
  • In addition, an icon W indicating that an obstacle has been detected, or a message, may be displayed.
  • The icon W and the like may be displayed together with the movable area M and the immovable area N. That is, when acquiring information indicating an obstacle, the control unit 11 causes the display unit 14 to display the icon W indicating detection of the obstacle. By doing this, the user can recognize that there is an obstacle around the autonomous driving vehicle C.
  • In the above description, the bird's-eye view image shown in FIG. 4 and the side image shown in FIG. 5 were described as representative examples of the operation image, but operations using images such as the following are also possible.
  • FIG. 7 is an example using the image of the front camera installed in the autonomous driving vehicle C.
  • In this case, the image captured by the front camera is displayed on the display unit 14.
  • When the user wants the vehicle to move from its current stop position to the position A in the upper part of FIG. 7, the user gives the instruction by touching the portion A in the upper part of FIG. 7.
  • The control unit 11 then transmits, as the movement information, the coordinate information of the position (A) designated by the user in the front camera image to the vehicle control device 2 through the communication unit 12.
  • The vehicle control device 2 calculates the distance to the designated position from the image captured by the front camera and moves the autonomous driving vehicle C to that position; one common way such a distance could be estimated is sketched below.
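  • The publication does not say how that distance is computed. One common approach for a designated point on flat ground is the pinhole ground-plane model, sketched here; the camera height, focal length, principal point, and pitch are assumed calibration values.

```python
import math

def ground_distance_m(v_px: float, cam_height_m: float,
                      focal_px: float, cy_px: float,
                      pitch_rad: float = 0.0) -> float:
    """Distance to a ground point imaged at pixel row v_px (rows grow downward)."""
    # Angle of the ray below the optical axis, plus the camera's downward pitch.
    angle = math.atan2(v_px - cy_px, focal_px) + pitch_rad
    if angle <= 0.0:
        return math.inf  # at or above the horizon: the ray never meets the ground
    return cam_height_m / math.tan(angle)
```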
  • Although FIG. 7 has been described using the front camera image, the autonomous driving vehicle C may also be moved using an image from another camera, such as a rear camera. The operation may also be performed while the user is inside the autonomous driving vehicle C.
  • When the stop-position movement operation is performed from inside the autonomous driving vehicle C, not only the smartphone 1 as the terminal device but also an on-vehicle device mounted on the vehicle may be used.
  • Next, a terminal device according to the second embodiment of the present invention will be described with reference to FIGS. 8 to 11.
  • The same parts as those of the first embodiment described above are designated by the same reference numerals, and their description is omitted.
  • FIG. 8 shows an example in which the stop position of the autonomous driving vehicle C is moved using smart glasses.
  • The smart glasses 20 are a glasses-type wearable device, a terminal device worn on the head of the user. As shown in FIG. 8, the smart glasses 20 include a camera 23, a touch panel 24, and a display 25.
  • The display 25 is a transmissive display, so the user wearing the smart glasses 20 can see the view in front of them as well as the image projected on the display 25 positioned in front of their eyes.
  • The functional configuration of the smart glasses 20 is shown in FIG. 9.
  • The smart glasses 20 include a control unit 21, a communication unit 22, and a storage unit 26 in addition to the camera 23, the touch panel 24, and the display 25 shown in FIG. 8.
  • The control unit 21 is configured of, for example, a CPU (Central Processing Unit) and controls the overall operation of the smart glasses 20.
  • The control unit 21 generates an operation image based on the information acquired by the external world recognition unit 3, such as camera images acquired from the vehicle control device 2 of the autonomous driving vehicle C, and displays it on the display 25.
  • The control unit 21 also generates movement information for moving the stop position of the autonomous driving vehicle C based on gestures recognized in the images captured by the camera 23, as described later.
  • The communication unit 22, serving as an output unit, is configured of a wireless communication circuit or the like and transmits the movement information generated by the control unit 21 to the vehicle control device 2 of the autonomous driving vehicle C. It also receives the information acquired by the external world recognition unit 3, such as the camera images mentioned later, from the vehicle control device 2.
  • The storage unit 26 is configured of a storage device such as a non-volatile semiconductor memory and stores the operating system (OS) run by the control unit 21 as well as programs and data such as applications.
  • The operation image is, for example, as shown in FIG. 10.
  • FIG. 10 shows the autonomous driving vehicle C viewed from the side through the smart glasses 20.
  • The immovable area N is projected as an image on the display 25 so that it is superimposed on the view including the autonomous driving vehicle C. This is done, for example, by recognizing the autonomous driving vehicle C in the image captured by the camera 23 of the smart glasses 20 and thereby specifying and displaying the range of the immovable area N, as sketched below.
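  • A sketch of that overlay step follows, assuming the vehicle's bounding box in the camera frame is already available from some object detector (not shown); shading everything outside the horizontal strip through the vehicle as area N is an illustrative simplification of the side view in FIG. 10.

```python
import cv2
import numpy as np

def draw_immovable_area(frame: np.ndarray,
                        vehicle_box: tuple,  # (x, y, w, h) from a detector
                        alpha: float = 0.4) -> np.ndarray:
    """Tint the regions the vehicle cannot reach (area N) in the camera frame."""
    x, y, w, h = vehicle_box
    overlay = frame.copy()
    # In a side view the vehicle moves horizontally, so everything above and
    # below the strip containing the vehicle is treated as immovable here.
    overlay[:y, :] = (0, 0, 255)
    overlay[y + h:, :] = (0, 0, 255)
    return cv2.addWeighted(overlay, alpha, frame, 1.0 - alpha, 0.0)
```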
  • In the present embodiment, the moving operation is performed by a gesture of the user's hand.
  • When the user shakes the right hand R, the autonomous driving vehicle C moves to the right as viewed from the user (arrow AR in FIG. 10).
  • When the user shakes the left hand, the autonomous driving vehicle C moves to the left as viewed from the user.
  • FIG. 10 shows an example in which the user shakes the hand with the thumb pointing toward themselves.
  • The camera 23 of the smart glasses 20 photographs the gesture, and the control unit 21 specifies the movement direction by recognizing the right hand or the left hand in the captured image.
  • The movement amount of the autonomous driving vehicle C may be specified by, for example, predetermining the amount of movement per single shake of the hand.
  • In this case, the movement information may include the fact that one gesture was recognized and the movement direction, as in the sketch below.
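  • Mapping a recognized shake gesture to movement information might then look like this; the 0.5 m step per shake and the dictionary fields are assumptions.

```python
STEP_M = 0.5  # assumed predetermined movement per single shake of the hand

def gesture_to_movement(hand: str, shake_count: int) -> dict:
    """hand is 'right' or 'left' as recognized by the control unit 21."""
    direction = 1.0 if hand == "right" else -1.0  # right hand -> rightward as seen by the user
    return {"kind": "distance",
            "delta_m": direction * shake_count * STEP_M,
            "frame": "user_view"}  # direction is expressed relative to the user's view
```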
  • As another operation method, the gesture may be one of holding up the hand with its back (the right hand R in FIG. 11) facing the user and pushing it toward the autonomous driving vehicle C.
  • The movement amount of the autonomous driving vehicle C in this case may be determined, for example, by the duration of the pushing gesture; that is, the vehicle moves while the gesture continues. Of course, movement beyond the movable area M cannot be performed. A sketch of such duration-based movement follows.
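  • Duration-based movement could be sketched as follows; `gesture_active` and `send_increment` stand in for the gesture recognizer and the transmission path to the vehicle control device 2, both hypothetical here.

```python
import time
from typing import Callable

def move_while_pushing(gesture_active: Callable[[], bool],
                       send_increment: Callable[[float], None],
                       limit_m: float,              # extent of the movable area M
                       speed_m_s: float = 0.3,
                       period_s: float = 0.1) -> float:
    """Stream small movement commands while the push gesture continues."""
    moved = 0.0
    while gesture_active() and moved < limit_m:
        step = min(speed_m_s * period_s, limit_m - moved)  # never exceed area M
        send_increment(step)
        moved += step
        time.sleep(period_s)
    return moved
```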
  • Although the smart glasses 20 have been described in the present embodiment, the autonomous driving vehicle C may instead be photographed with a camera-equipped smartphone, the immovable area N superimposed on the captured image, and the gesture likewise detected by the camera.
  • In the smart glasses 20 according to the present embodiment, the communication unit 22 acquires recognition information recognized by the external world recognition unit 3 installed in the autonomous driving vehicle C, and the control unit 21 generates an operation image for moving the stop position of the vehicle based on the recognition information acquired by the communication unit 22. Then, the control unit 21 generates movement information for the autonomous driving vehicle C by recognizing a predetermined gesture in the image captured by the camera 23, and transmits it.
  • By doing this, the smart glasses 20 make it possible to finely adjust the stop position of the autonomous driving vehicle C by an intuitive operation and to stop the vehicle at an appropriate position.
  • In the embodiments described above, the movement of the autonomous driving vehicle C is limited to its front-rear direction, but movement including steering operation may also be permitted.
  • The present invention is not limited to the above embodiments. That is, those skilled in the art can make various modifications in accordance with conventionally known findings without departing from the gist of the present invention. As long as such a modification still provides the configuration of the terminal device of the present invention, it is, of course, included in the scope of the present invention.

Abstract

According to the present invention, a vehicle capable of autonomous traveling is stopped at an appropriate position. In a smartphone (1), a communication unit (12) acquires recognition information recognized by an external world recognition unit (3) installed in an autonomous driving vehicle (C); a control unit (11) generates an operation image for moving the stop position of the autonomous driving vehicle (C) based on the recognition information acquired by the communication unit (12); and a display unit (14) displays the operation image. In addition, an operation unit (15) detects an operation for moving the stop position of the autonomous driving vehicle (C) according to the operation image displayed on the display unit (14), and the communication unit (12) transmits movement information to the autonomous driving vehicle (C) based on the operation performed on the operation unit (15).
PCT/JP2018/035601 2017-09-28 2018-09-26 Dispositif terminal WO2019065699A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-188223 2017-09-28
JP2017188223 2017-09-28

Publications (1)

Publication Number Publication Date
WO2019065699A1 (fr) 2019-04-04

Family

ID=65902112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/035601 WO2019065699A1 (fr) 2017-09-28 2018-09-26 Dispositif terminal

Country Status (1)

Country Link
WO (1) WO2019065699A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014065392A (ja) * 2012-09-25 2014-04-17 Aisin Seiki Co Ltd 携帯端末、遠隔操作システム、遠隔操作方法、及びプログラム
JP2016007959A (ja) * 2014-06-25 2016-01-18 富士通テン株式会社 車両用装置、車両制御システム、車両制御方法
WO2017068698A1 (fr) * 2015-10-22 2017-04-27 日産自動車株式会社 Dispositif d'aide au stationnement et procédé d'aide au stationnement


Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18861873; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 18861873; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
    Ref country code: JP