WO2019207944A1 - Information processing device, program and information processing method

Information processing device, program and information processing method

Info

Publication number
WO2019207944A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
user
location
image data
unit
Application number
PCT/JP2019/007190
Other languages
French (fr)
Japanese (ja)
Inventor
Shinobu Oikawa
Original Assignee
DENSO CORPORATION
Application filed by DENSO CORPORATION
Publication of WO2019207944A1


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/005 - Traffic control systems for road vehicles including pedestrian guidance indicator
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/123 - Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams

Definitions

  • The present disclosure relates to an information processing device, a program, and an information processing method, and particularly to an information processing device, a program, and an information processing method used in a service for hailing vehicles.
  • Patent Document 1 discloses a navigation device with which a user can easily identify a vehicle coming to a meeting place.
  • The navigation device generates voice guidance information indicating from which direction the vehicle will enter the meeting place, based on the position information and route information of the set meeting place.
  • This device transmits the generated guidance information to the user's terminal. By listening to the voice guidance, the user can easily identify the vehicle.
  • With the device of Patent Document 1, however, when the meeting place is crowded with many taxis, it may be difficult for the user to tell which vehicle is the one that was called.
  • The present disclosure aims to provide an information processing apparatus that makes it easy for a user to recognize the called vehicle, thereby solving the above problem.
  • In one aspect of the present disclosure, the information processing apparatus includes: a position acquisition unit that acquires position information of a vehicle being called to a meeting place and position information of a user; a vehicle location calculation unit that calculates, from the vehicle position information and the user position information acquired by the position acquisition unit, the location at which the vehicle appears in user image data captured by the user's camera; and an image presentation unit that presents the vehicle location calculated by the vehicle location calculation unit on the user image data.
  • In another aspect, the information processing apparatus includes: a position acquisition unit that acquires position information of a vehicle being called to a meeting place and position information of a user; a user location calculation unit that calculates, from the vehicle and user position information acquired by the position acquisition unit, the location at which the user appears in vehicle exterior image data acquired from an external camera of the vehicle; and an image presentation unit that presents the user location calculated by the user location calculation unit on the vehicle exterior image data.
  • In another aspect, an information processing program executed by the information processing apparatus causes a position acquisition unit to acquire position information of a vehicle being called to a meeting place and position information of a user, causes a vehicle location calculation unit to calculate, from the vehicle position information and the user position information acquired by the position acquisition unit, the location at which the vehicle appears in user image data captured by the user's camera, and causes an image presentation unit to present the vehicle location calculated by the vehicle location calculation unit on the user image data.
  • In another aspect, an information processing method executed by the information processing apparatus acquires position information of a vehicle being called to a meeting place and position information of a user, calculates, from the acquired vehicle and user position information, the location at which the vehicle appears in user image data captured by the user's camera, and presents the calculated vehicle location on the user image data.
  • According to the above information processing apparatus, information processing program, and information processing method, first, the position information of the vehicle being called to the meeting place and the position information of the user are acquired. The location at which the vehicle appears in the image data from the user's camera is then calculated from these pieces of position information, and the calculated vehicle location is presented on the image data. This makes it possible to provide an information processing apparatus that allows the user to easily identify the called vehicle.
  • In the drawings: FIG. 1 is a system configuration diagram of an information processing system according to the first embodiment of the present disclosure; FIG. 2 is a block diagram showing the control configurations of the user terminal, automobile terminal, and server shown in FIG. 1; FIG. 3 is a block diagram showing the functional configuration of the information processing system according to the first embodiment; FIG. 4 is a flowchart of a vehicle presentation process according to the first embodiment; FIG. 5 is a screen example of the vehicle presentation process shown in FIG. 4; FIG. 6 is a block diagram showing the functional configuration of an information processing system according to the second embodiment; FIG. 7 is a flowchart of a vehicle-and-user presentation process according to the second embodiment; FIG. 8 is a screen example of the vehicle-and-user presentation process shown in FIG. 7; FIG. 9 is a block diagram showing the functional configuration of an information processing system according to the third embodiment; FIG. 10 is a block diagram showing the functional configuration of an information processing system according to the fourth embodiment; FIG. 11 is a block diagram showing the functional configuration of an information processing system according to the fifth embodiment; and FIG. 12 is a screen example of the information processing system according to the fifth embodiment.
  • The information processing system X of the present embodiment is a system that allows the user to confirm a car, such as a taxi or a hired car, when calling it.
  • the information processing system X includes a user terminal 1, an automobile terminal 2, and a server 3, which are connected by a network 4.
  • The user terminal 1 can use web services provided by the server 3, and is a device such as a camera-equipped smartphone, a mobile phone, a PDA (Personal Data Assistant), a PC (Personal Computer), a personal navigation device, or a home appliance.
  • In the present embodiment, an example in which the user terminal 1 is a smartphone will be described.
  • This smartphone functions as the information processing apparatus of this embodiment by installing and executing application software (hereinafter referred to simply as “application”), which will be described later.
  • the car terminal 2 is an in-car terminal of a car that uses services on the web provided by the server 3.
  • the in-vehicle terminal is a smartphone, a mobile phone, a PDA, a car navigation device, or the like.
  • In the present embodiment, the automobile terminal 2 is also a smartphone.
  • The smartphone is placed on the holder H on the dashboard D of the automobile (vehicle) and used for car navigation by the driver (operator) of the automobile.
  • The outside and inside of the vehicle can be imaged with the out-camera and the in-camera of the smartphone, respectively.
  • The holder H is driven as a motorized pan head under control from the automobile terminal 2 and can change the imaging direction of the smartphone's camera.
  • The server 3 is a server that performs management in the vehicle-hailing service. Specifically, the server 3 receives a request to call a car from the user terminal 1, searches for the automobile terminal 2 of a car that can be dispatched, and mediates between the two terminals. For example, the server 3 estimates user characteristics from information such as the user's purchase history, viewing history, and travel history, and matches fares, drivers, vehicles, and the like based on those characteristics.
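  • The patent does not disclose the matching algorithm itself; as a minimal illustrative sketch (all names and weights below are hypothetical, not from the patent), such matching could rank dispatchable vehicles by proximity, discounted by how well the driver/vehicle profile fits preferences inferred from the user's history:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GNSS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6_371_000 * math.asin(math.sqrt(a))

def score_vehicle(vehicle: dict, user: dict) -> float:
    """Lower is better: distance in metres, discounted for every tag
    (fare class, vehicle type, driver attribute) that matches the
    preferences estimated from the user's history."""
    d = haversine_m(user["lat"], user["lon"], vehicle["lat"], vehicle["lon"])
    overlap = len(set(vehicle["tags"]) & set(user["preferred_tags"]))
    return d * (0.8 ** overlap)

def match(vehicles: list, user: dict) -> dict:
    """Pick the best-scoring dispatchable vehicle for this user."""
    return min(vehicles, key=lambda v: score_vehicle(v, user))
```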
  • The server 3 can also distribute to the user introductions of the drivers registered in the service and their vehicle types, together with image data of each vehicle's appearance, streaming data, markup-language data, audio data, and the like (hereinafter simply "image data etc.").
  • the image data or the like may be a still image or a moving image.
  • the server 3 can also display word-of-mouth, rating, etc. for each vehicle.
  • the server 3 may set an application to be installed in the user terminal 1, obtain various information, manage the operation of the application, and distribute the application itself.
  • The network 4 is a mobile phone network, a WAN (Wide Area Network) such as the Internet, or an IP network such as a LAN (Local Area Network) including WiFi (registered trademark) or another wireless LAN.
  • In the embodiments described below, the system configuration may be the same.
  • FIG. 2A shows the configuration of the user terminal 1
  • FIG. 2B shows the configuration of the automobile terminal 2
  • FIG. 2C shows the configuration of the server 3.
  • The user terminal 1 includes a control unit 10, a storage unit 11, an input unit 12, a display unit 13, a connection unit 14, a sensor group 15, a camera 16, a voice input/output unit 18, and the like. Each unit is connected to the control unit 10, which controls its operation.
  • The automobile terminal 2 includes a control unit 20, a storage unit 21, an input unit 22, a display unit 23, a connection unit 24, an external camera 26, an in-vehicle camera 27, a voice input/output unit 28, and the like. Each unit is connected to the control unit 20, which controls its operation.
  • The server 3 includes a control unit 30, a storage unit 31, a connection unit 34, and the like. Each unit is connected to the control unit 30, which controls its operation.
  • The control units 10, 20, and 30 are information processing units including a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU (Graphics Processing Unit), a TPU (Tensor Processing Unit), a DFP (Data Flow Processor), a DSP (Digital Signal Processor), and other ASICs (Application Specific Integrated Circuits).
  • Each control unit 10, 20, 30 reads a program stored in the auxiliary storage unit of the corresponding storage unit 11, 21, 31, loads it into the main storage unit, and executes it, thereby operating as each of the functional blocks (functional units) described later.
  • The control units 10, 20, and 30 control the user terminal 1, the automobile terminal 2, and the server 3 as a whole, respectively.
  • The storage units 11, 21, and 31 each include a main storage unit such as a RAM (Random Access Memory) and an auxiliary storage unit such as a ROM (Read Only Memory), an SSD (Solid State Drive), an HDD (Hard Disk Drive), or a flash memory card, and are non-transitory tangible storage media that may also include optical recording media and the like.
  • Programs for controlling the operation of the user terminal 1, the automobile terminal 2, and the server 3 are stored in the auxiliary storage units of the storage units 11, 21, and 31, respectively.
  • This program is an OS (Operating System), an application, and the like.
  • the storage units 11, 21, and 31 also store various data.
  • the input units 12 and 22 are a touch panel, a pointing device such as a mouse, a button, a keyboard, an acceleration sensor, a line-of-sight sensor, a biometric authentication sensor, and the like.
  • the input unit 12 acquires input to the web page form, various instructions, and the like by the user.
  • Display units 13 and 23 are an LCD (Liquid Crystal Display), an organic EL display, an LED (Light Emitting Diode), or the like.
  • connection units 14, 24, and 34 are connection means including a wireless transceiver and a LAN interface for connecting to the network 4.
  • the connection units 14, 24, and 34 may include USB (Universal Serial Bus), RS-232C, various flash memory cards, SIM cards, interfaces to Bluetooth (registered trademark), and the like.
  • Sensor groups 15 and 25 are GNSS (Global Navigation Satellite System) position information sensors, gyro sensors, magnetic sensors, altitude sensors, and the like.
  • The sensor group 15 detects the position (coordinates) on the map, orientation, speed, acceleration, and the like (hereinafter "position etc.") of its own device.
  • the camera 16 is provided in the user terminal 1 to take an image of the user and the user's surroundings.
  • the camera 16 is a camera in which an imaging element such as a CCD (Charge-Coupled Device) image sensor or a CMOS (Complementary MOS) image sensor and an optical element such as a lens are combined.
  • The camera 16 captures a still image or a moving image, mainly in the direction the user points it, and stores the image data in the storage unit 11.
  • The voice input/output units 18 and 28 include a microphone and an A/D (Analog-to-Digital) converter for voice input, and a D/A (Digital-to-Analog) converter, an amplifier, a speaker, a vibration motor, and the like for voice output. With these, the voice input/output units 18 and 28 handle voice calls between the user and the driver, input of voice commands, and notifications at the time of calling, during navigation, and so on.
  • the external camera 26 is a camera for external imaging of the vehicle.
  • the smartphone's out camera is used for external imaging.
  • Image data captured by this out-camera is stored in the storage unit 21.
  • the in-vehicle camera 27 is a camera for in-vehicle imaging of the vehicle.
  • the in-camera of a smart phone is used for in-vehicle imaging.
  • Image data captured by the in-camera is also stored in the storage unit 21.
  • For the external camera 26 and the in-vehicle camera 27, whether a still image or a moving image is captured, the image resolution, the zoom magnification, and the like can be changed via the application installed in the user terminal 1. These settings may also be changeable from the automobile terminal 2.
  • Next, the functional configuration of the information processing system Xa according to the first embodiment of the present disclosure will be described with reference to FIG. 3. In the following embodiments as well, information processing systems X, user terminals 1, servers 3, and the like whose functional configurations differ are indicated by appended lowercase letters a, b, c, and so on.
  • the control unit 10a of the user terminal 1a includes a position acquisition unit 100, a vehicle location calculation unit 110, and an image presentation unit 120.
  • the storage unit 11a of the user terminal 1a stores position information 300 and user image data 310.
  • The automobile terminal 2a stores position information 302.
  • the server 3 stores map data 340.
  • The position acquisition unit 100 acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300.
  • The meeting place is a position designated by the user through the app and communicated to the driver of the automobile via the server 3. The meeting place can be indicated on the map data 340.
  • the position information 302 is acquired from the automobile terminal 2 via the server 3.
  • the user position information 300 is acquired by the sensor group 15 of the own device.
  • The vehicle location calculation unit 110 calculates, based on the vehicle position information 302 and the user position information 300 acquired by the position acquisition unit 100, the location at which the vehicle appears in the user image data 310 that captures the user's surroundings with the user's camera 16.
  • Specifically, the vehicle location calculation unit 110 calculates the position of the called vehicle on the map data 340 from the position information 302, and calculates the user's position on the map data 340 and the imaging direction of the camera 16 from the position information 300.
  • The vehicle location calculation unit 110 then collates the image with the map data 340 by perspective projection transformation, affine transformation, or the like, and recognizes the called vehicle with a DNN (Deep Neural Network) or the like. Thereby, the vehicle location calculation unit 110 can grasp the location at which the vehicle appears in the user image data 310.
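  • The projection geometry is not specified in the patent; as a minimal sketch, assuming a pinhole camera, a compass-derived heading, and GNSS fixes for both parties (the function and parameter names below are hypothetical), the pixel column at which the vehicle should appear can be estimated as follows:

```python
import math

EARTH_R = 6_371_000  # mean Earth radius in metres

def vehicle_pixel_column(user_lat, user_lon, veh_lat, veh_lon,
                         heading_deg, h_fov_deg, image_width):
    """Estimate the pixel column where the vehicle appears in the user's
    camera image, or None when it is outside the horizontal field of view."""
    # East/north offsets via an equirectangular approximation, adequate
    # over the few hundred metres relevant near a meeting place.
    east = math.radians(veh_lon - user_lon) * EARTH_R * math.cos(math.radians(user_lat))
    north = math.radians(veh_lat - user_lat) * EARTH_R
    bearing = math.degrees(math.atan2(east, north)) % 360  # 0 deg = north
    rel = (bearing - heading_deg + 180) % 360 - 180        # angle off the optical axis
    if abs(rel) > h_fov_deg / 2:
        return None  # off-screen: the user would be asked to turn the camera
    # Pinhole model: focal length in pixels derived from the horizontal FOV.
    f = (image_width / 2) / math.tan(math.radians(h_fov_deg) / 2)
    return int(image_width / 2 + f * math.tan(math.radians(rel)))
```

A detector such as the DNN mentioned above would then confirm and refine the vehicle's bounding box around this predicted column.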
  • the image presentation unit 120 presents the vehicle location calculated by the vehicle location calculation unit 110 on the user image data 310.
  • This presentation is performed, for example, as an AR (Augmented Reality) display superimposed on the user image data 310.
  • the image presentation unit 120 can also show the building or road name recognized by the image recognition in the user image data 310. Further, the image presentation unit 120 can acquire the map data 340 from the server 3, arrange the vehicle position information 302 and the user position information 300 on the map, and show them on the display unit 13.
  • the image presentation unit 120 can also show the user the direction of arrival of the called vehicle.
  • the position information 300 is information such as the position of the own apparatus (user position) acquired by the sensor group 15 of the user terminal 1a that is the own apparatus.
  • The position indicated by the position information 300 may be away from the meeting place.
  • the position information 302 is information such as the position of the automobile terminal 2a (the position of the vehicle) acquired by the sensor group 25 of the automobile terminal 2a.
  • the position information 302 is acquired from the vehicle terminal 2 a of the vehicle being called by the position acquisition unit 100 of the user terminal 1 via the server 3.
  • User image data 310 is image data of a still image or a moving image captured by the camera 16.
  • That is, for the AR display, the user images the scenery around the user, including the surrounding roads and buildings, in which the called car is to appear.
  • The control unit 10a of the user terminal 1 executes programs such as the app on the OS (Operating System) stored in the storage unit 11a, thereby functioning as the position acquisition unit 100, the vehicle location calculation unit 110, the image presentation unit 120, and so on.
  • some or all of the functions executed by the control units 10, 20, and 30 may be configured in hardware by one or a plurality of ICs, DSPs, programmable logic circuits, and the like.
  • In this way, each unit of the above-described user terminal 1 serves as a hardware resource that executes the method of the present disclosure.
  • In the vehicle presentation process of the present embodiment, first, the position information 302 of the vehicle being called to the meeting place and the user's position information 300 are acquired.
  • Next, the location at which the vehicle appears in the user image data 310 acquired from the user's camera is calculated from the acquired position information 302 and position information 300.
  • The calculated vehicle location is then presented on the user image data 310 so that the user can easily recognize the called vehicle.
  • In the following, the control unit 10a of the user terminal 1 mainly executes programs such as the app stored in the storage unit, using hardware resources in cooperation with each unit.
  • Step S101 First, the position acquisition unit 100 performs a position acquisition process.
  • the user activates the application on the user terminal 1, connects to the server 3, and requests the vehicle to be called.
  • the server 3 sets a vehicle to be called, notifies the vehicle terminal 2 of the vehicle, and enables communication with the user terminal 1.
  • Specifically, the position acquisition unit 100 acquires the position information 302 from the automobile terminal 2a of the vehicle being called to the meeting place and stores it in the storage unit 11a.
  • The position acquisition unit 100 also acquires the position information 300 of the user terminal 1a from the sensor group 15 and stores it in the storage unit 11a. (Step S102) Next, the vehicle location calculation unit 110 performs a vehicle location calculation process.
  • Particularly while calling a vehicle and waiting for it, the user is interested in where the vehicle is currently running, so the user image data 310 of various locations may be captured and given the AR display.
  • When the called vehicle comes close, the image presentation unit 120 may notify the user to that effect. This notification is performed, for example, by displaying a pop-up on the app screen.
  • Specifically, the vehicle location calculation unit 110 calculates, based on the vehicle position information 302 and the user position information 300, the direction from which the vehicle will approach once it is close, and instructs the user to point the camera 16 of the user terminal in that direction. This instruction may be given on the app screen.
  • The user points the camera 16 in the instructed direction, for example aligning it with a landmark the user knows.
  • Alternatively, the vehicle location calculation unit 110 may give this instruction to the user by voice or vibration through the voice input/output unit 18.
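  • As a continuation of the earlier sketch (again hypothetical, not from the patent), such an instruction could be derived from the same relative bearing and rendered as text, speech, or a vibration pattern:

```python
def camera_instruction(rel_deg: float) -> str:
    """Turn the vehicle's angle relative to the camera axis (the `rel`
    term of the earlier sketch) into a coarse prompt for the user."""
    if abs(rel_deg) <= 15:
        return "The vehicle is ahead. Hold the camera steady."
    side = "right" if rel_deg > 0 else "left"
    return f"Turn the camera about {abs(round(rel_deg))} degrees to the {side}."
```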
  • The vehicle location calculation unit 110 then recognizes the vehicle displayed in the user image data 310 and calculates the location at which the called vehicle is captured. (Step S103) Next, the image presentation unit 120 performs an image presentation process.
  • the image presentation unit 120 presents the calculated vehicle location on the user image data 310 by AR.
  • For example, when the called vehicle is captured, the image presentation unit 120 first displays a message to the user, such as a notice that the driver has arrived, in the display column 610. The image presentation unit 120 then displays the user image data 310 on the app screen and AR-displays the vehicle location 700, which is the calculated location of the vehicle.
  • The image presentation unit 120 also displays the map data 340 acquired from the server 3, indicating the user's position with a pointer U and the vehicle's position with a pointer P on it. The vehicle's route to the meeting place, an arrow showing the vehicle's traveling direction, and the like may also be shown on this map data 340. This makes it easier for the user to recognize the called vehicle.
  • Conventionally, the vehicle could not be identified by such a hailing service alone, and the user had to take the trouble of separately contacting the driver of the called vehicle.
  • As described above, the user terminal 1a of the information processing system Xa of the present embodiment is an information processing apparatus including: the position acquisition unit 100 that acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300; the vehicle location calculation unit 110 that calculates, based on the vehicle position information 302 and user position information 300 acquired by the position acquisition unit 100, the location at which the vehicle appears in the user image data 310 acquired from the user's camera; and the image presentation unit 120 that presents the vehicle location calculated by the vehicle location calculation unit 110 on the user image data 310.
  • This configuration makes it possible to know the location of the called vehicle on the image in real time. The user can therefore easily tell which vehicle is the called one and, knowing where it has arrived, can board without getting lost.
  • The automobile terminal 2 may instead be a car navigation device to which an external camera 26 and an in-vehicle camera 27 are separately connected.
  • The external camera 26 may also be any combination of a front camera, a rear camera, a side camera, and the like.
  • The external camera 26 may also be a radar or LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) of the sensor group 25 that can create image data used during automated driving, a camera used for operation management or remote operation, or the like.
  • the external camera 26 and / or the in-vehicle camera 27 may be capable of capturing a wide range like a 360 ° camera.
  • Alternatively, an image from an external camera around the vehicle, such as a surveillance camera, may be acquired. This makes identification easier even in canyons between buildings, at places such as airports where many passengers get on and off, where roads are multi-level, and the like.
  • In the above description, the map data 340 is used to detect that the vehicle has come close.
  • Instead, the vehicle location calculation unit 110 may detect that the user terminal 1 and the automobile terminal 2 have come close to each other by using bidirectional communication or the like.
  • In this case, the direction in which to point the camera 16 can be presented to the user based on the radio signal strength of the bidirectional communication.
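  • The patent does not say how signal strength maps to proximity; a common approach, shown here as an assumption-laden sketch, is the log-distance path-loss model with typical constants:

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance from received signal strength via the log-distance
    path-loss model; tx_power_dbm is the RSSI expected at 1 m, and the
    exponent is about 2 in free space (higher in cluttered streets)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```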
  • The vehicle location calculation unit 110 may instead calculate the location of the vehicle only from the position information 300 and the position information 302, without performing image recognition. Conversely, instead of the vehicle itself, the vehicle location calculation unit 110 may recognize from the user image data 310 a two-dimensional barcode or the name of the taxi company written on the vehicle, by character recognition.
  • the server 3 may be able to mediate a call between the user and the driver by SIP (Session Initiation Protocol) or the like.
  • The server 3 can also have functions for chat, messenger transmission/reception, and e-mail transmission/reception (hereinafter "chat etc."). Such chat may be entered by voice input from the user terminal 1, and automatic translation may be applied.
  • a plurality of user terminals 1, automobile terminals 2, and servers 3 may be provided.
  • the user terminal 1, the automobile terminal 2, and the server 3 may include other components in addition to the above-described units, and each unit may include a plurality of control units.
  • any one of the respective units and any combination thereof may be integrally configured.
  • the input unit 12 and the display unit 13 may be integrally formed.
  • each of the control units 10, 20, and 30 and the storage units 11, 21, and 31 may be integrally formed.
  • the information processing system X of the present embodiment is used for a service for calling a vehicle such as a taxi or a hire.
  • the vehicle terminal 2b also presents the user location so that the vehicle driver can also confirm the user.
  • As in the user terminal 1a of the first embodiment described above, the location of the called vehicle is presented by AR.
  • In addition, appearance image data 350 of the vehicle is acquired from the server 3 and used for image recognition.
  • an image of the in-vehicle camera 27 of the automobile terminal 2b is also displayed.
  • the imaging directions of the external camera 26 and the in-vehicle camera 27 can be operated from the user terminal 1b.
  • the control unit 10b of the user terminal 1b includes a position acquisition unit 100, a vehicle location calculation unit 110, an image presentation unit 120, and a camera control unit 130.
  • the storage unit 11b of the user terminal 1b stores position information 300, user image data 310, vehicle exterior image data 320, and vehicle interior image data 330.
  • the control unit 10b of the automobile terminal 2b includes a position acquisition unit 200, a user location calculation unit 210, and an image presentation unit 220.
  • The storage unit 21b of the automobile terminal 2b stores position information 302, user image data 310, vehicle exterior image data 320, and vehicle interior image data 330.
  • the server 3 stores map data 340 and appearance image data 350.
  • In addition to the processing of the first embodiment, the vehicle location calculation unit 110 acquires the appearance image data 350 of the vehicle from the server 3. After collating with the map data 340, the vehicle location calculation unit 110 recognizes the image of the called vehicle by DNN or the like using the appearance image data 350, so that the location of the vehicle is calculated accurately.
  • the image presentation unit 120 also obtains and presents in-vehicle image data 330 captured by the in-vehicle camera 27 of the vehicle from the automobile terminal 2b.
  • the image presentation unit 120 displays the position of the vehicle on the map data 340 with a pointer, and also displays the appearance image data 350 in the vicinity of the pointer.
  • As the appearance image data 350 to be displayed, a small image such as thumbnail data may be used.
  • The image presentation unit 120 can also acquire the vehicle exterior image data 320 captured by the external camera 26 of the automobile terminal 2b from the automobile terminal 2b and present it.
  • This vehicle exterior image data 320 may indicate the location at which the user's own figure is captured, as calculated by the user location calculation unit 210 of the automobile terminal 2b.
  • the camera control unit 130 controls the external camera 26 and / or the in-vehicle camera 27 of the vehicle.
  • For example, according to instructions given on the user's app, the camera control unit 130 can perform control to move the orientation of the smartphone holder H illustrated in FIG. 1(b) or to change the zoom magnification of the zoom lens. This control can be performed via the app installed in the automobile terminal 2b.
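  • The control protocol is not described in the patent; a minimal sketch of the kind of pan/zoom command the user's app might send to the automobile terminal over the network 4 (all field names are hypothetical) is:

```python
import json

def make_camera_command(camera: str, pan_deg: float = 0.0,
                        tilt_deg: float = 0.0, zoom: float = 1.0) -> str:
    """Serialize one pan/tilt/zoom request for the automobile terminal."""
    assert camera in ("external", "in_vehicle")  # camera 26 or camera 27
    return json.dumps({
        "type": "camera_control",
        "camera": camera,
        "pan_deg": pan_deg,    # drives holder H's motorized pan head
        "tilt_deg": tilt_deg,
        "zoom": zoom,          # zoom magnification of the lens
    })

# e.g. sent when the user presses an arrow of button 720 (see FIG. 8)
cmd = make_camera_command("external", pan_deg=15.0)
```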
  • Like the position acquisition unit 100 of the user terminal 1b, the position acquisition unit 200 acquires the position information 302 of the host vehicle and the user's position information 300.
  • The user location calculation unit 210 calculates, from the vehicle position information acquired by the position acquisition unit 200 and the user position information 300, the location at which the user appears in the vehicle exterior image data 320 captured by the external camera 26 of the vehicle. For example, similarly to the vehicle location calculation unit 110, the user is recognized by collating with the map data 340 through perspective projection transformation, affine transformation, or the like, and by using a DNN or the like. Here, when many persons appear in the vehicle exterior image data 320, the user location calculation unit 210 may recognize the person closest to the user's position as the user. At this time, the user location calculation unit 210 can acquire user information from the user terminal 1b and select the person most likely to be the user based on gender, age, and the like.
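  • That closest-person heuristic could look like the following minimal sketch (the detection step and all names are hypothetical; the patent only states that the closest person may be taken as the user):

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectedPerson:
    box: tuple        # (x, y, w, h) bounding box in the exterior image
    est_lat: float    # position estimated from the detection geometry
    est_lon: float

def pick_user(detections: List[DetectedPerson],
              user_lat: float, user_lon: float) -> Optional[DetectedPerson]:
    """Choose the detected person whose estimated position is closest
    to the user's reported GNSS position."""
    if not detections:
        return None
    cos_lat = math.cos(math.radians(user_lat))  # correct east-west scale
    return min(detections,
               key=lambda p: (p.est_lat - user_lat) ** 2
                             + ((p.est_lon - user_lon) * cos_lat) ** 2)
```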
  • the image presentation unit 220 is a functional unit similar to the image presentation unit 120.
  • the image presentation unit 220 presents, for example, the user location calculated by the user location calculation unit 210 on the outside image data 320 in AR.
  • Conversely, the image presentation unit 220 can also acquire the user image data 310 from the user terminal 1b, together with the vehicle location calculated by the vehicle location calculation unit 110 of the user terminal 1b, and present them.
  • the vehicle outside image data 320 is still image data or moving image data captured by the external camera 26 of the automobile terminal 2b.
  • the in-vehicle image data 330 is still image or moving image data captured by the in-vehicle camera 27 of the automobile terminal 2b.
  • the appearance image data 350 is image data obtained by capturing the appearance of the vehicle.
  • the image data may include data having different formats and sizes, such as recognition data used when the vehicle location calculation unit 110 recognizes an image, thumbnail data to be displayed on the user's map, and the like.
  • In the vehicle-and-user presentation process of the present embodiment, the user's location is presented in the vehicle exterior image data 320 captured by the automobile terminal 2b.
  • The in-vehicle image data 330 is also presented.
  • In addition, the orientation of the camera of the automobile terminal 2b and the like are controlled according to user instructions. This makes it easier for the driver of the called vehicle and the user to recognize each other.
  • In the following, the control unit 10b of the user terminal 1b and the control unit 20b of the automobile terminal 2b mainly execute programs such as apps stored in the storage unit 11b and the storage unit 21b, respectively, using hardware resources in cooperation with each unit.
  • Step S111 First, the position acquisition unit 100 of the user terminal 1a performs a position acquisition process.
  • This process is performed in the same manner as step S101 in FIG.
  • the position acquisition unit 100 transmits the position information 300 acquired from the sensor group 15 to the automobile terminal 2b.
  • the position acquisition unit 200 of the automobile terminal 2b performs a position acquisition process.
  • the position acquisition unit 200 acquires the position information 302 from the sensor group 25 during traveling and stores it in the storage unit 21b.
  • the position acquisition unit 200 transmits the position information 302 to the user terminal 1b in a state where the application is activated and communication is possible between the automobile terminal 2 and the user terminal 1.
  • the position acquisition unit 200 also acquires the position information 300 of the user terminal 1a and stores it in the storage unit 21b.
  • the camera control unit 130 of the user terminal 1a controls the external camera 26 and / or the in-vehicle camera 27 of the vehicle.
  • a screen example 510 in FIG. 8A shows an example of image data 320 outside the vehicle of the external camera 26 displayed on the display unit 13 of the user terminal 1b and / or the display unit 23 of the automobile terminal 2b.
  • The user can press an arrow of the button 720 displayed superimposed on the vehicle exterior image data 320 to move the motorized pan head of the holder in that arrow's direction.
  • the zoom magnification of the external camera 26 or the in-vehicle camera 27 can be changed by the zoom mark of the button 730.
  • the vehicle location calculation unit 110 performs a vehicle location calculation process.
  • This process is also performed in the same manner as step S102 in FIG.
  • the vehicle location calculation unit 110 acquires the vehicle appearance image data 350 from the server 3. Then, the vehicle location calculation unit 110 uses the appearance image data 350 to recognize the vehicle by image recognition using DNN or the like, and calculates the location of the vehicle. (Step S213)
  • the user location calculation unit 210 of the automobile terminal 2b performs a user location calculation process.
  • The user location calculation unit 210 recognizes the user displayed in the vehicle exterior image data 320 captured by the external camera 26 and calculates the location at which the user appears. (Steps S114 and S214)
  • the image presentation unit 120 of the user terminal 1a and the image presentation unit 220 of the automobile terminal 2b perform image transmission / reception processing.
  • Specifically, the image presentation unit 120 transmits the user image data 310, on which the location of the vehicle has been presented, to the automobile terminal 2b. Likewise, the image presentation unit 220 transmits the vehicle exterior image data 320 and the in-vehicle image data 330, on which the user's location has been presented, to the user terminal 1b. These image data may be transmitted and received PtP (Peer to Peer) or via the server 3. Each unit receives the image data and stores it in the storage unit 11b or 21b, respectively. (Steps S115 and S215) The image presentation unit 120 of the user terminal 1b and the image presentation unit 220 of the automobile terminal 2b each perform an image presentation process.
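  • For the PtP transmission just mentioned, the framing could be as simple as the following sketch (plain TCP with a length prefix; the patent does not specify a protocol, and relaying via the server 3 would use the same framing toward the server):

```python
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def send_frame(sock: socket.socket, jpeg: bytes) -> None:
    """Send one length-prefixed image frame (e.g. an annotated JPEG)."""
    sock.sendall(struct.pack(">I", len(jpeg)) + jpeg)

def recv_frame(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```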
  • For example, the image presentation unit 120 can switch between or overlay the images of the camera 16, the external camera 26, and the in-vehicle camera 27 and display them on the display unit 13.
  • the image presentation unit 220 of the automobile terminal 2b can switch these images and display them on the display unit 23.
  • FIG. 8A shows an example in which the user location 710, which is the calculated user location, is AR-displayed in the image data 320 outside the vehicle.
  • FIG. 8B shows an example in which in-vehicle image data 330 is displayed in addition to the images in the screen example 500 in FIG. 5 of the first embodiment. Further, on the map data 340, a thumbnail image of the appearance image data 350 is displayed in a balloon format in the vicinity of the pointer P indicating the position of the vehicle.
  • These screen examples are displayed on the display unit 13 and/or the display unit 23 in real time.
  • As described above, the automobile terminal 2b of the information processing system Xb is an information processing apparatus further including the user location calculation unit 210 that calculates, from the vehicle position information 302 acquired by the position acquisition unit 200 and the user position information 300, the location at which the user appears in the vehicle exterior image data 320 acquired from the external camera 26 of the vehicle, and in which the image presentation unit 120 and/or the image presentation unit 220 present the user's location calculated by the user location calculation unit 210 on the vehicle exterior image data 320.
  • With this configuration, the driver can see the user's location in the surrounding images while driving. This makes it easy for the driver to spot the user and to reach the meeting place.
  • The user terminal 1b of the present embodiment may also display the vehicle exterior image data 320 via the image presentation unit 120. The user can thus also see in real time which way the vehicle is traveling; that is, by knowing the vehicle's current state, the user can easily grasp where the called vehicle has got to.
  • In addition, the user terminal 1b of the present embodiment is characterized in that the vehicle location calculation unit 110 acquires the vehicle's appearance image data 350 and calculates the location of the vehicle by recognizing the vehicle from the appearance image data 350.
  • With this configuration, the recognition rate can be raised remarkably, since image recognition uses data of the actual vehicle rather than a DNN that merely categorizes automobiles in general. The user can therefore reliably recognize the called vehicle, and trouble can be prevented.
  • In addition, the user terminal 1b of the present embodiment is characterized in that the image presentation unit 120 displays the position of the vehicle with a pointer on the acquired map data 340 and also displays the appearance image data 350 in the vicinity of the pointer.
  • With this configuration, the user can easily pick out the called vehicle on the map and more easily find it nearby.
  • If the called vehicle has a distinctive appearance, such as its vehicle type or paintwork, it can be picked out especially easily.
  • the user terminal 1b of the present embodiment is characterized in that the image presentation unit 120 is an information processing apparatus that also presents in-vehicle image data 330 acquired from the in-vehicle camera 27 of the vehicle.
  • This configuration makes it possible to check the status of the called vehicle on the app in real time, covering not only the situation outside the vehicle but also the situation inside it. As a result, the user can be given a greater sense of security than pre-registered driver and vehicle images, reviews, ratings, and the like can provide.
  • the user terminal 1b of the present embodiment is an information processing apparatus that further includes a camera control unit 130 that controls the external camera 26 and / or the in-vehicle camera 27 of the vehicle.
  • With this configuration, the user can control from the app of the user terminal 1b where the automobile terminal 2b captures images. That is, the user can remotely operate the camera of the automobile terminal 2b and freely view the vehicle's images while doing so.
  • the user image data 310 captured by the user terminal 1b can be transmitted to the automobile terminal 2b.
  • In the present embodiment, the external camera 26 and the in-vehicle camera 27 are controlled based on user instructions.
  • Alternatively, the driver of the vehicle may drive the holder H via the app of the automobile terminal 2b to change the orientation and zoom magnification of the external camera 26 and the in-vehicle camera 27.
  • The vehicle exterior image data 320, the in-vehicle image data 330, and the user image data 310 are not only transmitted to each terminal in real time but may also be stored at each terminal for a specific period. This makes it possible to browse images from several seconds to several minutes earlier.
  • the in-vehicle image data 330 may include not only an image of the in-vehicle camera 27 but also an image of an external camera that allows an appearance image of the car to be seen.
  • Next, the information processing system Xc according to the third embodiment of the present disclosure will be described with reference to FIG. 9. In FIG. 9 as well, the same reference numerals are given to the same configurations as in the embodiments described above.
  • In the embodiments described above, the location of the vehicle is presented in the user image data 310.
  • In the present embodiment, the location of the user is presented in the vehicle exterior image data 320.
  • the control unit 20 c includes a position acquisition unit 200, a user location calculation unit 210, and an image presentation unit 220.
  • the storage unit 21c stores position information 300 and image data 320 outside the vehicle.
  • The user terminal 1c stores position information 302.
  • As described above, the automobile terminal 2c of the present embodiment is an information processing apparatus including: the position acquisition unit 200 that acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300; the user location calculation unit 210 that calculates, from the vehicle and user position information acquired by the position acquisition unit 200, the location at which the user appears in the vehicle exterior image data acquired from the external camera 26 of the vehicle; and the image presentation unit 220 that presents the user's location calculated by the user location calculation unit 210 on the vehicle exterior image data.
  • In the embodiments described above, the server 3 stores the map data 340.
  • In the present embodiment, the same processing as in the second embodiment is performed only by the user terminal 1d and the automobile terminal 2d, connected as in PtP.
  • The control unit 10d and storage unit 11d of the user terminal 1d and the control unit 20d of the automobile terminal 2d have the same configurations as the control unit 10b, the storage unit 11b, and the control unit 20b of the second embodiment.
  • the storage unit 21d of the automobile terminal 2d stores map data 340 in addition to the data of the storage unit 21b of the second embodiment.
  • The automobile terminal 2d stores the map data 340 for its dedicated navigation and collates based on that map data 340; this prevents a mismatch of map data 340 between the user terminal 1d and the automobile terminal 2d and makes it easy to calculate the locations of the user and the vehicle.
  • In the present embodiment, the map data 340 has been described as stored in the automobile terminal 2d, but it may be stored in the user terminal 1d instead.
  • Alternatively, a navigation device connectable to the automobile terminal 2b may be provided separately and the map data 340 acquired from it. Furthermore, the location of the vehicle or the user may be recognized directly, without using the map data 340.
  • <Fifth embodiment> Next, the information processing system Xe according to the fifth embodiment of the present disclosure will be described with reference to FIG. 11. In FIG. 11 as well, the same reference numerals are given to the same configurations as in the first to fourth embodiments described above.
  • In the embodiments described above, the vehicle or user location is calculated from the image data at each terminal.
  • In the present embodiment, the server 3e may calculate these locations instead.
  • the server 3e includes a user location calculation unit 210 and a vehicle location calculation unit 110 in the control unit 30e.
  • The server 3e acquires the position information 300 and the user image data 310 from the user terminal 1e and stores them in the storage unit 31e. It also acquires the position information 302, the vehicle exterior image data 320, and the in-vehicle image data 330 from the automobile terminal 2e and stores them in the storage unit 31e. Each image data is then distributed from the server 3e to each terminal by streaming or the like.
  • control unit 10e of the user terminal 1e includes a position acquisition unit 100, an image presentation unit 120, and a camera control unit 130.
  • the control unit 20e of the automobile terminal 2e includes a position acquisition unit 200 and an image presentation unit 220.
  • It is also possible to configure the storage unit 31e of the server 3e to store the vehicle exterior image data 320 and the like for a specific period.
  • In this case, the user can view the vehicle exterior image data 320 collated with the map data 340.
  • The user can also use this to set a destination.
  • For example, when the user sets a destination with a pin on the map, the corresponding vehicle exterior image data 320 can be displayed.
  • That is, a destination can be set using images of locations through which the vehicle has actually traveled.
  • This configuration makes it possible to easily set the destination even when the user does not know the arrival point. For this reason, it is possible to cope with a case where it is difficult to explain an address at a travel destination or a place visited for the first time. That is, it is possible to easily search for a destination that the user remembers vaguely as “that neighborhood”. In addition, the user can reliably tell the driver where to go by showing the image.
  • Furthermore, 360° images may be created from the stored vehicle exterior image data 320, and VR (Virtual Reality) moving images may be distributed in real time.
  • In the embodiments described above, the user image data 310 has been described as presenting only the location of the vehicle.
  • the user terminal 1 may further include a functional unit that is not described in the first to fifth embodiments.
  • The control unit and the method thereof described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by computer programs.
  • Alternatively, the control unit and the method thereof described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits.
  • Alternatively, the control unit and the method thereof described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured by one or more hardware logic circuits.
  • The computer program may be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by the computer.
  • Each section is expressed, for example, as S101.
  • Each section can be divided into several subsections, while several sections can also be combined into a single section.
  • Each section configured in this manner can be referred to as a device, module, or means.

Abstract

This information processing device is provided with: a location acquisition unit (100) that acquires location information (302) on a vehicle being called to a meeting place and location information (300) on a user; a vehicle location calculation unit (110) that calculates, from the vehicle location information and the user location information acquired by the location acquisition unit, the location of the vehicle to be displayed in user image data (310) captured by the user's camera (16); and an image presentation unit (120) that presents the vehicle location calculated by the vehicle location calculation unit on the user image data.

Description

Information processing apparatus, program, and information processing method
Cross-reference of related applications
This application is based on Japanese Patent Application No. 2018-83762 filed on April 25, 2018, the contents of which are incorporated herein by reference.
The present disclosure relates to an information processing device, a program, and an information processing method, and particularly to an information processing device, a program, and an information processing method used in a service for hailing vehicles.
Conventionally, services in which a user calls a vehicle such as a taxi through a mobile terminal have been widespread.
For such services, Patent Document 1 discloses a navigation device with which a user can easily identify a vehicle coming to a meeting place. The navigation device generates voice guidance information indicating from which direction the vehicle will enter the meeting place, based on the position information and route information of the set meeting place. This device transmits the generated guidance information to the user's terminal. By listening to the voice guidance, the user can easily identify the vehicle.
However, with the device of Patent Document 1, when the meeting place is crowded with many taxis, it may be difficult for the user to tell which vehicle is the one that was called.
JP 2006-25874 A
The present disclosure aims to provide an information processing apparatus that makes it easy for a user to recognize the called vehicle, thereby solving the above problem.
In one aspect of the present disclosure, the information processing apparatus includes: a position acquisition unit that acquires position information of a vehicle being called to a meeting place and position information of a user; a vehicle location calculation unit that calculates, from the vehicle position information and the user position information acquired by the position acquisition unit, the location at which the vehicle appears in user image data captured by the user's camera; and an image presentation unit that presents the vehicle location calculated by the vehicle location calculation unit on the user image data.
In another aspect of the present disclosure, the information processing apparatus includes: a position acquisition unit that acquires position information of a vehicle being called to a meeting place and position information of a user; a user location calculation unit that calculates, from the vehicle and user position information acquired by the position acquisition unit, the location at which the user appears in vehicle exterior image data acquired from an external camera of the vehicle; and an image presentation unit that presents the user location calculated by the user location calculation unit on the vehicle exterior image data.
In another aspect of the present disclosure, an information processing program executed by the information processing apparatus causes a position acquisition unit to acquire position information of a vehicle being called to a meeting place and position information of a user, causes a vehicle location calculation unit to calculate, from the vehicle position information and the user position information acquired by the position acquisition unit, the location at which the vehicle appears in user image data captured by the user's camera, and causes an image presentation unit to present the vehicle location calculated by the vehicle location calculation unit on the user image data.
In another aspect of the present disclosure, an information processing method executed by the information processing apparatus acquires position information of a vehicle being called to a meeting place and position information of a user, calculates, from the acquired vehicle and user position information, the location at which the vehicle appears in user image data captured by the user's camera, and presents the calculated vehicle location on the user image data.
According to the above information processing apparatus, information processing program, and information processing method, first, the position information of the vehicle being called to the meeting place and the position information of the user are acquired. The location at which the vehicle appears in the image data from the user's camera is then calculated from these pieces of position information, and the calculated vehicle location is presented on the image data. This makes it possible to provide an information processing apparatus that allows the user to easily identify the called vehicle.
The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings. The drawing
It is a system configuration diagram of an information processing system according to the first embodiment of the present disclosure, It is a block diagram which shows the control structure of the user terminal shown in FIG. 1, a motor vehicle terminal, and a server, It is a block diagram showing a functional configuration of the information processing system according to the first embodiment of the present disclosure, It is a flowchart of a vehicle presentation process according to the first embodiment of the present disclosure, 5 is a screen example of the vehicle presentation process shown in FIG. It is a block diagram which shows the function structure of the information processing system which concerns on 2nd embodiment of this indication, It is a flowchart of the vehicle user presentation processing according to the second embodiment of the present disclosure, FIG. 8 is a screen example of the vehicle user presentation process shown in FIG. It is a block diagram which shows the function structure of the information processing system which concerns on 3rd embodiment of this indication, It is a block diagram showing a functional configuration of an information processing system according to a fourth embodiment of the present disclosure, It is a block diagram showing a functional configuration of an information processing system according to a fifth embodiment of the present disclosure, It is an example of a screen of an information processing system concerning a 5th embodiment of this indication.
<First embodiment>
[System configuration of the information processing system X]
First, the system configuration of the information processing system X according to the first embodiment of the present disclosure will be described with reference to FIG. 1.

The information processing system X of the present embodiment allows a user to visually confirm a vehicle such as a taxi or hired car when calling it.
As shown in FIG. 1(a), the information processing system X includes a user terminal 1, an automobile terminal 2, and a server 3, which are connected via a network 4.

The user terminal 1 is a device that can use web services provided by the server 3, such as a camera-equipped smartphone, a mobile phone, a PDA (Personal Data Assistant), a PC (Personal Computer), a personal navigation device, or a home appliance. In the present embodiment, an example in which the user terminal 1 is a smartphone is described. This smartphone functions as the information processing apparatus of the present embodiment by installing and executing application software (hereinafter simply referred to as an "app"), described later.

The automobile terminal 2 is an in-vehicle terminal of an automobile that uses the web services provided by the server 3. This in-vehicle terminal is a smartphone, a mobile phone, a PDA, a car navigation device, or the like.

In the present embodiment, as shown in FIG. 1(b), an example in which the automobile terminal 2 is also a smartphone is described. In this example, the smartphone is placed in a holder H on the dashboard D of the automobile (vehicle) and used for car navigation by the driver (operator). With this arrangement, the smartphone's out-camera and in-camera can image the outside and the inside of the vehicle, respectively. In addition, the holder H can be driven like a motorised pan head under control from the automobile terminal 2, so the imaging direction of the smartphone's camera can also be changed.

The server 3 manages the vehicle call-out service. Specifically, upon receiving a vehicle call-out request from the user terminal 1, the server 3 searches for the automobile terminal 2 of a vehicle that can be dispatched and mediates between the two terminals. For example, the server 3 may estimate the user's characteristics from information such as the user's purchase history, browsing history, and travel history, and match fares, drivers, vehicles, and the like accordingly.

At this time, the server 3 can also deliver to the user an introduction of the driver of a vehicle registered with the service, the vehicle model, image data of the vehicle's exterior, streaming data, markup-language data, audio data, and so on (hereinafter simply abbreviated as "image data etc."). This image data etc. may be still images or moving images. Furthermore, the server 3 can present reviews, ratings, and the like for each vehicle.

In addition, the server 3 may configure the app installed on the user terminal 1, acquire various information, manage the operation of the app, and distribute the app itself.

The network 4 is a mobile phone network, a WAN (Wide Area Network) such as the Internet, or an IP network such as a LAN (Local Area Network), for example Wi-Fi (registered trademark) or a wireless LAN. These networks may, for example, be always connectable from the user terminal 1 and the automobile terminal 2; in the case of a mobile phone network, it may be a low-latency, high-speed communication network conforming to a standard such as 4G (4th Generation) or 5G (5th Generation).

The system configuration may be the same in the second to fifth embodiments described later.
Next, the device configurations of the user terminal 1, the automobile terminal 2, and the server 3 will be described with reference to FIG. 2. FIG. 2(a) shows the configuration of the user terminal 1, FIG. 2(b) that of the automobile terminal 2, and FIG. 2(c) that of the server 3.

The user terminal 1 includes a control unit 10, a storage unit 11, an input unit 12, a display unit 13, a connection unit 14, a sensor group 15, a camera 16, an audio input/output unit 18, and the like. Each unit is connected to the control unit 10, and its operation is controlled by the control unit 10.

The automobile terminal 2 includes a control unit 20, a storage unit 21, an input unit 22, a display unit 23, a connection unit 24, an external camera 26, an in-vehicle camera 27, an audio input/output unit 28, and the like. Each unit is connected to the control unit 20, and its operation is controlled by the control unit 20.

The server 3 includes a control unit 30, a storage unit 31, a connection unit 34, and the like. Each unit is connected to the control unit 30, and its operation is controlled by the control unit 30.
The control units 10, 20, and 30 are information processing units including a CPU (Central Processing Unit), an MPU (Micro Processing Unit), a GPU, a TPU (Tensor Processing Unit), a DFP (Data Flow Processor), a DSP (Digital Signal Processor), other application-specific processors (ASICs), and the like.

Each of the control units 10, 20, and 30 reads a program stored in the auxiliary storage of the corresponding storage unit 11, 21, or 31, loads it into the main storage, and executes it, thereby operating as the functional blocks (functional units) described later. The control units 10, 20, and 30 also control the user terminal 1, the automobile terminal 2, and the server 3 as a whole, respectively.

The storage units 11, 21, and 31 are non-transitory tangible storage media, which may include a main storage such as RAM (Random Access Memory); auxiliary storage such as ROM (Read Only Memory), an SSD (Solid State Disk), or an HDD (Hard Disk Drive); flash memory cards; optical recording media; and the like.

The auxiliary storage of the storage units 11, 21, and 31 stores programs for controlling the operation of the user terminal 1, the automobile terminal 2, and the server 3, respectively. These programs include an OS (Operating System), apps, and the like. In addition, the storage units 11, 21, and 31 also store various data.
The input units 12 and 22 are a touch panel, a pointing device such as a mouse, buttons, a keyboard, an acceleration sensor, a line-of-sight sensor, a biometric authentication sensor, or the like. The input unit 12 acquires the user's input into web page forms, various instructions, and so on.

The display units 13 and 23 are an LCD (Liquid Crystal Display), an organic EL display, LEDs (Light Emitting Diodes), or the like.

The connection units 14, 24, and 34 are connection means including a wireless transceiver, a LAN interface, or the like for connecting to the network 4. The connection units 14, 24, and 34 may also include interfaces for USB (Universal Serial Bus), RS-232C, various flash memory cards, SIM cards, Bluetooth (registered trademark), and the like.

The sensor groups 15 and 25 include a GNSS (Global Navigation Satellite System) position sensor, a gyro sensor, a magnetic sensor, an altitude sensor, and the like. In the present embodiment, the sensor group 15 detects the device's own map position (coordinates), orientation, speed, acceleration, and so on (hereinafter referred to as "position etc.").

The camera 16 is provided on the user terminal 1 to image the user and the user's surroundings. The camera 16 combines an image sensor, such as a CCD (Charge-Coupled Device) or CMOS (Complementary MOS) image sensor, with optical elements such as lenses. The camera 16 mainly captures still images or moving images in the direction the user points it and stores the image data in the storage unit 11.

The audio input/output units 18 and 28 include a microphone and an A/D (Analog to Digital) converter for audio input, and a D/A (Digital to Analog) converter, an amplifier, a speaker, a vibration motor, and the like for audio output. With these components, the audio input/output units 18 and 28 handle voice calls between the user and the driver, voice command input, notifications during a call-out, navigation announcements, and the like.

The external camera 26 is a camera for imaging the outside of the vehicle. In the present embodiment, the smartphone's out-camera is used for external imaging. Image data captured by this out-camera is stored in the storage unit 21.

The in-vehicle camera 27 is a camera for imaging the inside of the vehicle. In the present embodiment, the smartphone's in-camera is used for in-vehicle imaging. Image data captured by this in-camera is also stored in the storage unit 21.
For the camera 16, the external camera 26, and the in-vehicle camera 27, whether still images or moving images are captured, the image resolution, the zoom magnification, and so on can be changed via the app installed on the user terminal 1. These settings may also be changeable on the automobile terminal 2.
[Functional configuration of the information processing system Xa]
Here, the functional configuration of the information processing system Xa according to the first embodiment of the present disclosure will be described with reference to FIG. 3. In the following embodiments as well, information processing systems X, user terminals 1, servers 3, and so on whose functional configurations differ are distinguished for the purposes of description by appending lowercase letters a, b, c, and so on.
The control unit 10a of the user terminal 1a includes a position acquisition unit 100, a vehicle location calculation unit 110, and an image presentation unit 120.

The storage unit 11a of the user terminal 1a stores position information 300 and user image data 310.

The automobile terminal 2a holds position information 302.

The server 3 stores map data 340.
The position acquisition unit 100 acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300. In the present embodiment, the meeting place is a position designated by the user through the app and communicated to the driver of the automobile via the server 3. This meeting place can be indicated on the map data 340. The position information 302 is acquired from the automobile terminal 2 via the server 3. The user's position information 300 is acquired by the sensor group 15 of the device itself.
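For illustration only, since the disclosure does not fix a concrete transport or API for this exchange, the acquisition step might be sketched as follows; the server endpoint and the `gnss_sensor` object are hypothetical:

```python
import requests  # assumed HTTP client; the actual transport is not specified

SERVER_URL = "https://example.invalid/api"  # hypothetical endpoint, for illustration

def acquire_positions(ride_id: str, gnss_sensor):
    """Sketch of the position acquisition unit 100: the vehicle position
    (position information 302) is fetched from the server 3, the user
    position (position information 300) from the device's own GNSS sensor."""
    resp = requests.get(f"{SERVER_URL}/rides/{ride_id}/vehicle_position", timeout=5)
    vehicle_pos = resp.json()      # e.g. {"lat": ..., "lon": ..., "heading": ...}
    user_pos = gnss_sensor.read()  # assumed wrapper around the sensor group 15
    return vehicle_pos, user_pos
```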
The vehicle location calculation unit 110 calculates, from the vehicle position information 302 and the user position information 300 acquired by the position acquisition unit 100, the location of the vehicle displayed in the user image data 310 in which the user's surroundings are imaged by the user's camera 16. In the present embodiment, for example, the vehicle location calculation unit 110 calculates the position of the called vehicle on the map data 340 from the position information 302, and calculates the user's position on the map data 340 and the imaging direction of the camera 16 from the user's position information 300. The vehicle location calculation unit 110 then applies perspective projection transformation, affine transformation, or the like to the user image data 310 and matches it against the map data 340 using a DNN (deep neural network) or the like. This allows the vehicle location calculation unit 110 to determine the coordinates of buildings and roads in the user image data 310. Furthermore, the vehicle location calculation unit 110 recognizes images of automobiles on the road by image recognition using a DNN or the like, and searches for one whose coordinates match the position of the called vehicle. If there is a match, the vehicle location calculation unit 110 takes those coordinates as the location of the vehicle.
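As a minimal sketch of the geometric part of this calculation, assuming both positions have already been converted into a shared local ENU (east-north-up) frame in metres and that the camera intrinsics (fx, fy, cx, cy) are known, a simple pinhole projection can map the vehicle's position to pixel coordinates; the map matching and DNN recognition described above are outside this sketch:

```python
import numpy as np

def project_vehicle_to_image(vehicle_enu, cam_enu, cam_yaw_deg, fx, fy, cx, cy):
    """Pinhole projection of the vehicle's ENU position into the pixel
    coordinates of the user's camera (camera assumed level; yaw measured
    clockwise from north)."""
    d = np.asarray(vehicle_enu, float) - np.asarray(cam_enu, float)
    yaw = np.radians(cam_yaw_deg)
    x_c = np.cos(yaw) * d[0] - np.sin(yaw) * d[1]   # offset to the right of the camera
    z_c = np.sin(yaw) * d[0] + np.cos(yaw) * d[1]   # offset forward from the camera
    y_c = -d[2]                                     # image y grows downward
    if z_c <= 0:
        return None                                 # vehicle is behind the camera
    return fx * x_c / z_c + cx, fy * y_c / z_c + cy
```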
The image presentation unit 120 presents the vehicle location calculated by the vehicle location calculation unit 110 on the user image data 310. In the present embodiment, for example, AR (Augmented Reality) can be used to annotate the captured image with a circle or the like so that the image of the vehicle is emphasized.
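A minimal sketch of such an annotation, assuming OpenCV is used for drawing (the disclosure does not name a library) and that the pixel location (u, v) has been calculated as above:

```python
import cv2

def annotate_vehicle(frame, u, v, label="Called vehicle"):
    """Emphasize the calculated vehicle location on the user image data
    with a circle and a caption, in the manner of the AR annotation."""
    cv2.circle(frame, (int(u), int(v)), 40, (0, 0, 255), thickness=3)  # red ring (BGR)
    cv2.putText(frame, label, (int(u) - 40, int(v) - 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return frame
```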
At this time, the image presentation unit 120 can also show buildings, road names, and the like recognized by image recognition in the user image data 310. Furthermore, the image presentation unit 120 can acquire the map data 340 from the server 3, place the vehicle position information 302 and the user position information 300 on the map, and show them on the display unit 13.

In addition, the image presentation unit 120 can show the user the direction from which the called vehicle will arrive, and so on.

The position information 300 is information such as the position of the user terminal 1a itself (the user's position), acquired by the sensor group 15 of the device. This position may be away from the meeting place.

The position information 302 is information such as the position of the automobile terminal 2a (the vehicle's position), acquired by the sensor group 25 of the automobile terminal 2a. In the present embodiment, for example, the position acquisition unit 100 of the user terminal 1 acquires this position information 302 from the automobile terminal 2a of the called vehicle via the server 3.

The user image data 310 is still-image or moving-image data captured by the camera 16. In the present embodiment, the user images scenery for AR display around the user, that is, scenery including the surrounding roads and buildings where the called automobile is expected to arrive.

Here, the control unit 10a of the user terminal 1 executes programs such as the app on the OS (Operating System) stored in the storage unit 11a, thereby functioning as the position acquisition unit 100, the vehicle location calculation unit 110, the image presentation unit 120, and so on. Some or all of the functions executed by the control units 10, 20, and 30 may instead be implemented in hardware by one or more ICs, DSPs, programmable logic circuits, and the like.
The above-described units of the user terminal 1 serve as hardware resources that execute the method of the present disclosure.
[Vehicle presentation process by the information processing system Xa]
Next, vehicle presentation by the information processing system Xa according to the first embodiment of the present disclosure will be described with reference to FIGS. 4 and 5.
The vehicle presentation process of the present embodiment acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300. From the acquired position information 302 and position information 300, it calculates the location of the vehicle displayed in the user image data 310 obtained from the user's camera. It then presents the calculated vehicle location on the user image data 310, making it easy for the user to recognize the called vehicle.

In the vehicle presentation process of the present embodiment, the control unit 10a of the user terminal 1 mainly executes programs such as the app stored in the storage unit, in cooperation with the other units and using the hardware resources.
The details of the vehicle presentation process of the present embodiment are described below step by step, with reference to the flowchart of FIG. 4.
(Step S101)
First, the position acquisition unit 100 performs a position acquisition process.

Here, the user launches the app on the user terminal 1, connects to the server 3, and requests a vehicle call-out. The server 3 assigns a vehicle to be dispatched, notifies the automobile terminal 2 of that vehicle, and enables communication with the user terminal 1.

In this state, the position acquisition unit 100 acquires the position information 302 from the automobile terminal 2a of the vehicle being called to the meeting place and stores it in the storage unit 11a.
The position acquisition unit 100 also acquires the position information 300 of the user terminal 1a from the sensor group 15 and stores it in the storage unit 11a.
(Step S102)
Next, the vehicle location calculation unit 110 performs a vehicle location calculation process.
The user, especially while waiting for the called vehicle, wants to know where the vehicle is, and is therefore likely to capture user image data 310 of various spots and have it displayed in AR.

For this reason, when the vehicle arrives within a short distance of the user terminal 1a and can be captured in the user image data 310, the image presentation unit 120 may notify the user to that effect. This notification is made by displaying a pop-up or the like on the app screen.
When the called vehicle comes close, the vehicle location calculation unit 110 calculates, from the vehicle position information 302 and the user position information 300, the direction from which the vehicle is approaching, and instructs the user to point the camera 16 of the user terminal in that direction. This instruction may be given on the app screen.
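The disclosure does not spell out how this direction is computed; one standard possibility is the initial great-circle bearing between the two position fixes, sketched below:

```python
import math

def bearing_deg(user_lat, user_lon, veh_lat, veh_lon):
    """Initial bearing from the user to the vehicle, in degrees clockwise
    from true north, so the app can tell the user where to point the camera 16."""
    phi1, phi2 = math.radians(user_lat), math.radians(veh_lat)
    dlon = math.radians(veh_lon - user_lon)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
```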
The user points the camera 16 in the indicated direction, checking it against landmarks the user knows.

When the camera 16 is pointed in the direction from which the vehicle is coming, the vehicle location calculation unit 110 may signal this to the user by sound or vibration via the audio input/output unit 18.

When the user captures the user image data 310 with the camera 16, the vehicle location calculation unit 110 recognizes the vehicle displayed in it and calculates the spot where the called vehicle is imaged.
(Step S103)
Next, the image presentation unit 120 performs an image presentation process.
The image presentation unit 120 presents the calculated vehicle location on the user image data 310 using AR.

In the app screen example 500 of FIG. 5, when the called vehicle has been captured, the image presentation unit 120 first displays a message such as "Dear user, your driver has arrived" in the display field 610. The image presentation unit 120 then displays the user image data 310 on the app screen and shows the calculated vehicle location 700 in AR.

The image presentation unit 120 also displays the map data 340 acquired from the server 3, on which it shows the user's position with a pointer U and the vehicle's position with a pointer P. The map data 340 may also show the vehicle's route to the meeting place, an arrow indicating the vehicle's direction of travel, and so on. This makes it easier for the user to recognize the called vehicle.

This completes the vehicle presentation process according to the first embodiment of the present disclosure.
The configuration described above provides the following effects.

Services for calling vehicles have been provided in the past. However, when the meeting place was congested or the arrival point was off, the user sometimes could not recognize the called vehicle. Moreover, because the user does not know the appearance of the called vehicle, the user may be unable to identify it even when given information such as the vehicle model by the server.

In such cases, the vehicle could not be identified with the above service alone, and the user had to go to the trouble of contacting the driver of the called vehicle separately.

In contrast, the user terminal 1a of the information processing system Xa of the present embodiment is an information processing apparatus including: the position acquisition unit 100 that acquires the position information 302 of the vehicle being called to the meeting place and the user's position information 300; the vehicle location calculation unit 110 that calculates, from the vehicle position information 302 and the user position information 300 acquired by the position acquisition unit 100, the location of the vehicle displayed in the user image data 310 obtained from the user's camera; and the image presentation unit 120 that presents the vehicle location calculated by the vehicle location calculation unit 110 on the user image data 310.

With this configuration, the location of the called vehicle can be known on the image in real time. The user can therefore easily find out which vehicle is the one that was called. Since the user can see where the vehicle has arrived, the user can board it without getting lost.

In addition, the positional relationships of buildings can be checked against landmarks the user knows and then matched and recognized selectively on the image. This makes it easier for the user to identify the called vehicle.
The above embodiment described an example in which the automobile terminal 2 is a smartphone or the like fixed in the holder H.

However, the automobile terminal 2 may instead be a car navigation device with an external camera 26 and an in-vehicle camera 27 connected to it separately. The external camera 26 may be any one, or any combination, of a front camera, a rear camera, side cameras, and the like. Furthermore, the external camera 26 may be a radar or LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) capable of producing image data, as used in automated driving, or a camera of the sensor group 25 used for fleet management or remote operation. The external camera 26 and/or the in-vehicle camera 27 may also be able to image a wide field of view, like a 360° camera. Furthermore, images from external cameras around the vehicle, such as surveillance cameras, may be obtainable as the external camera 26. This makes it easy to distinguish canyons between buildings, busy pick-up and drop-off spots at an airport, places where roads are stacked in layers, and so on.

The above embodiment described detecting that the vehicle has come close by using the map data 340. However, the vehicle location calculation unit 110 may detect the approach using two-way communication or the like between the user terminal 1 and the automobile terminal 2. The direction in which to point the camera 16 can also be presented to the user based on, for example, the signal strength of the two-way communication.
In addition, the vehicle location calculation unit 110 may calculate the vehicle location from the position information 300 and the position information 302 alone, without performing image recognition. Furthermore, rather than recognizing the vehicle itself, the vehicle location calculation unit 110 may recognize the called vehicle in the user image data 310 by reading a two-dimensional barcode or by character recognition of the taxi company name or the like written on the vehicle.
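As one concrete reading of the two-dimensional barcode variant, assuming OpenCV's QR detector and a hypothetical convention that the code carried by the vehicle encodes the ride ID:

```python
import cv2

def find_vehicle_by_qr(frame, expected_payload):
    """Locate the called vehicle by the two-dimensional barcode it carries,
    rather than by recognizing the vehicle itself."""
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)
    if payload == expected_payload and points is not None:
        return points.reshape(-1, 2).mean(axis=0)  # code centre in pixel coordinates
    return None
```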
The server 3 may be able to mediate calls between the user and the driver using SIP (Session Initiation Protocol) or the like. In addition, the server 3 can be provided with functions for chat, messenger transmission and reception, e-mail transmission and reception, and the like (hereinafter "chat etc."). This chat etc. may be entered by voice input from the user terminal 1 and automatically translated.

In addition, multiple user terminals 1, automobile terminals 2, and servers 3 may be provided.

The user terminal 1, the automobile terminal 2, and the server 3 may include other components in addition to the units described above, and each unit may include multiple control units.

Any of the units, in any combination, may be configured integrally; for example, the input unit 12 and the display unit 13 may be formed integrally. The control units 10, 20, and 30 and the storage units 11, 21, and 31 may likewise each be formed integrally.

Furthermore, an example has been described in which the information processing system X of the present embodiment is used for a call-out service for vehicles such as taxis and hired cars.
It can also be used for self-driving community cars, shared cars, meeting a home-delivery vehicle, and the like.
<Second embodiment>
Next, the information processing system Xb according to the second embodiment of the present disclosure is described. The automobile terminal 2b also presents the user's location, so that the driver of the automobile can likewise confirm the user. On the user side, the location of the called vehicle is presented in AR, as with the user terminal 1a of the first embodiment. In doing so, the appearance image data 350 is acquired from the server 3 and used for image recognition. In addition, the image from the in-vehicle camera 27 of the automobile terminal 2b is also displayed. Furthermore, the imaging directions of the external camera 26 and the in-vehicle camera 27 can be operated from the user terminal 1b.
[Functional configuration of the information processing system Xb]
Here, the functional configuration of the information processing system Xb according to the second embodiment of the present disclosure will be described with reference to FIG. 6.
The control unit 10b of the user terminal 1b includes a position acquisition unit 100, a vehicle location calculation unit 110, an image presentation unit 120, and a camera control unit 130.

The storage unit 11b of the user terminal 1b stores position information 300, user image data 310, vehicle-exterior image data 320, and in-vehicle image data 330.

The control unit 20b of the automobile terminal 2b includes a position acquisition unit 200, a user location calculation unit 210, and an image presentation unit 220.

The storage unit 21b of the automobile terminal 2b stores position information 302, user image data 310, vehicle-exterior image data 320, and in-vehicle image data 330.

The server 3 stores map data 340 and appearance image data 350.

In FIG. 6, the same reference numerals as in FIG. 3 of the first embodiment denote the same components.
In the present embodiment, the vehicle location calculation unit 110 acquires the vehicle's appearance image data 350 from the server 3, in addition to performing the processing of the first embodiment. After matching against the map data 340, the vehicle location calculation unit 110 uses the appearance image data 350 to recognize the called vehicle in the image using a DNN or the like. This allows the vehicle location to be calculated accurately.
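A minimal sketch of this matching step, assuming a DNN feature extractor `embed` is available and that the appearance image data 350 has already been embedded once; the similarity threshold is an arbitrary illustrative value:

```python
import numpy as np

def match_called_vehicle(candidate_crops, reference_embedding, embed, threshold=0.8):
    """Among vehicle crops detected in the user image data 310, return the
    index of the crop most similar to the registered appearance image data
    350 by cosine similarity, or None if nothing clears the threshold."""
    best_idx, best_sim = None, threshold
    for i, crop in enumerate(candidate_crops):
        e = embed(crop)
        sim = float(np.dot(e, reference_embedding)
                    / (np.linalg.norm(e) * np.linalg.norm(reference_embedding)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```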
In addition to the processing of the first embodiment, the image presentation unit 120 also acquires from the automobile terminal 2b, and presents, the in-vehicle image data 330 captured by the vehicle's in-vehicle camera 27.

At this time, the image presentation unit 120 displays the vehicle's position on the map data 340 with a pointer and also displays the appearance image data 350 near that pointer. The displayed appearance image data 350 may be a small image such as a thumbnail.

In addition, the image presentation unit 120 can, for example, also acquire from the automobile terminal 2b and present the vehicle-exterior image data 320 captured by the external camera 26 of the automobile terminal 2b. In this case, the vehicle-exterior image data 320 may show the spot where the user's own figure is imaged, as calculated by the user location calculation unit 210 of the automobile terminal 2b.
The camera control unit 130 controls the vehicle's external camera 26 and/or in-vehicle camera 27. In the present embodiment, the camera control unit 130 can, for example, in response to the user's instructions on the app, pivot the orientation of the smartphone holder H shown in FIG. 1(b) or change the zoom magnification of the zoom lens. This control can be performed via the app installed on the automobile terminal 2b.
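The disclosure leaves the control message between the terminals unspecified; one hypothetical JSON encoding of such a command could look like this sketch:

```python
import json

def make_camera_command(pan_deg=0.0, tilt_deg=0.0, zoom=1.0, target="external"):
    """Build a hypothetical control message from the camera control unit 130
    to the automobile terminal 2b, driving the holder H and the zoom."""
    return json.dumps({
        "target": target,     # "external" (camera 26) or "in_vehicle" (camera 27)
        "pan_deg": pan_deg,   # positive values rotate the holder to the right
        "tilt_deg": tilt_deg, # positive values tilt the camera upward
        "zoom": zoom,         # zoom magnification
    })
```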
Like the position acquisition unit 100 of the user terminal 1b, the position acquisition unit 200 acquires the position information 302 of its own vehicle and the user's position information 300.
The user location calculation unit 210 calculates, from the vehicle position information acquired by the position acquisition unit 200 and the user's position information 300, the location of the user displayed in the vehicle-exterior image data 320 captured by the vehicle's external camera 26. This recognition of the user is performed, for example, in the same way as in the vehicle location calculation unit 110: matching against the map data 340 via perspective projection transformation, affine transformation, or the like, and using a DNN or the like. When there are many people in the vehicle-exterior image data 320, the user location calculation unit 210 may recognize the person whose position etc. is closest as the user. At this time, the user location calculation unit 210 can also acquire the user's information from the user terminal 1b and select the person most likely to be the user based on sex, age, and the like.
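A minimal sketch of the nearest-person rule described above, assuming a hypothetical helper `position_of` that maps a detected person to estimated map coordinates:

```python
def pick_likely_user(detected_people, user_pos, position_of):
    """Among people detected in the vehicle-exterior image data 320, choose
    the one whose estimated map position is closest to the user's reported
    position information 300."""
    def squared_offset(person):
        lat, lon = position_of(person)
        return (lat - user_pos["lat"]) ** 2 + (lon - user_pos["lon"]) ** 2
    return min(detected_people, key=squared_offset, default=None)
```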
The image presentation unit 220 is a functional unit similar to the image presentation unit 120.

The image presentation unit 220 presents, for example, the user location calculated by the user location calculation unit 210 on the vehicle-exterior image data 320 in AR. In addition, the image presentation unit 220 can acquire from the user terminal 1b the user image data 310, together with the vehicle location calculated by the vehicle location calculation unit 110 of the user terminal 1b, and present them.

The vehicle-exterior image data 320 is still-image or moving-image data captured by the external camera 26 of the automobile terminal 2b.

The in-vehicle image data 330 is still-image or moving-image data captured by the in-vehicle camera 27 of the automobile terminal 2b.
The appearance image data 350 is image data in which the exterior of the vehicle is imaged. This image data may include data of differing formats and sizes, such as recognition data used when the vehicle location calculation unit 110 performs image recognition and thumbnail data for display on the user's map.
[Vehicle user presentation process by the information processing system Xb]
Next, the vehicle user presentation process by the information processing system Xb according to the second embodiment of the present disclosure will be described with reference to FIGS. 7 and 8.
In the vehicle user presentation process of the present embodiment, in addition to processing similar to the vehicle presentation process of the first embodiment, the user's location is presented in the vehicle-exterior image data 320 captured by the automobile terminal 2b. The in-vehicle image data 330 is also presented. At this time, the orientation of the camera of the automobile terminal 2b and the like are controlled according to the user's instructions. This makes it easier for both the driver of the called vehicle and the user to recognize each other.

In the vehicle user presentation process of the present embodiment, the control unit 10b of the user terminal 1b and the control unit 20b of the automobile terminal 2b mainly execute programs such as the apps stored in the storage unit 11b and the storage unit 21b, respectively, in cooperation with the other units and using the hardware resources.
The details of the vehicle user presentation process of the present embodiment are described below step by step, with reference to the flowchart of FIG. 7.
(Step S111)
First, the position acquisition unit 100 of the user terminal 1b performs a position acquisition process.

This process is performed in the same way as step S101 in FIG. 4.
In addition, in the present embodiment, the position acquisition unit 100 transmits the position information 300 acquired from the sensor group 15 to the automobile terminal 2b.
(Step S211)
Here, the position acquisition unit 200 of the automobile terminal 2b performs a position acquisition process.

While the vehicle is travelling, the position acquisition unit 200 acquires the position information 302 from the sensor group 25 and stores it in the storage unit 21b.

With the app running and communication established between the automobile terminal 2 and the user terminal 1, the position acquisition unit 200 transmits the position information 302 to the user terminal 1b.
In addition, in the present embodiment, the position acquisition unit 200 also acquires the position information 300 of the user terminal 1b and stores it in the storage unit 21b.
(Step S112)
Next, the camera control unit 130 of the user terminal 1b controls the vehicle's external camera 26 and/or in-vehicle camera 27.
The screen example 510 of FIG. 8(a) shows an example of the vehicle-exterior image data 320 from the external camera 26 displayed on the display unit 13 of the user terminal 1b and/or the display unit 23 of the automobile terminal 2b. In this example, the user can press the arrows of the button 720 displayed over the vehicle-exterior image data 320 to move the motorised pan head of the holder in the direction of the pressed arrow. The user can also change the zoom magnification of the external camera 26 or the in-vehicle camera 27 with the zoom mark of the button 730.
(Step S113)
Next, the vehicle location calculation unit 110 performs a vehicle location calculation process.
This process is also performed in the same way as step S102 in FIG. 4.

In addition, in the present embodiment, the vehicle location calculation unit 110 acquires the vehicle's appearance image data 350 from the server 3. Using the appearance image data 350, the vehicle location calculation unit 110 then recognizes the vehicle by image recognition using a DNN or the like and calculates the vehicle location.
(Step S213)
Here, the user location calculation unit 210 of the automobile terminal 2b performs a user location calculation process.
The user location calculation unit 210 recognizes the user displayed in the vehicle-exterior image data 320 captured by the external camera 26 and calculates the spot where the user is displayed.
(Steps S114 and S214)
The image presentation unit 120 of the user terminal 1b and the image presentation unit 220 of the automobile terminal 2b perform an image transmission and reception process.
Here, the image presentation unit 120 transmits to the automobile terminal 2b the user image data 310 on which the vehicle location has been presented. The image presentation unit 220 in turn transmits to the user terminal 1b the vehicle-exterior image data 320 on which the user's location has been presented, together with the in-vehicle image data 330. These image data may be exchanged either PtP (Peer to Peer) or via the server 3; the server-relay case is sketched after this paragraph. Each unit receives the image data and stores it in the storage unit 11b or 21b, respectively.
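For the server-relay case, a sketch of the upload side with a hypothetical endpoint (the PtP case and the download side would be analogous):

```python
import requests  # assumed HTTP client; the disclosure allows PtP or a server relay

SERVER_URL = "https://example.invalid/api"  # hypothetical endpoint, for illustration

def send_annotated_image(ride_id, kind, jpeg_bytes):
    """Upload an annotated frame (user image data 310, vehicle-exterior image
    data 320, or in-vehicle image data 330) so the other terminal can fetch it."""
    requests.post(f"{SERVER_URL}/rides/{ride_id}/images/{kind}",
                  data=jpeg_bytes,
                  headers={"Content-Type": "image/jpeg"},
                  timeout=5)
```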
(Steps S115 and S215)
The image presentation unit 120 of the user terminal 1b and the image presentation unit 220 of the automobile terminal 2b each perform an image presentation process.
In addition to processing similar to the image presentation process of step S103 in FIG. 4, the image presentation unit 120 can switch between, or overlay, the images from the camera 16, the external camera 26, and the in-vehicle camera 27 on the display unit 13.

The image presentation unit 220 of the automobile terminal 2b can likewise switch between these images and display them on the display unit 23.

The screen example 510 of FIG. 8(a) shows an example in which the user location 710, the calculated location of the user, is displayed in AR in the vehicle-exterior image data 320.

The screen example 512 of FIG. 8(b) shows an example in which the in-vehicle image data 330 is displayed in addition to the images of the screen example 500 of FIG. 5 of the first embodiment. Furthermore, on the map data 340, a thumbnail of the appearance image data 350 is displayed in a speech-balloon format near the pointer P indicating the vehicle's position.

These screens are each displayed on the display unit 13 and/or the display unit 23 in real time.

This completes the vehicle user presentation process according to the second embodiment of the present disclosure.
The configuration described above provides the following effects.

With conventional call-out services, the driver of the vehicle sometimes could not tell who the customer (user) was, for example in crowded places. In that case, even when the vehicle arrived at the meeting place, the driver could not recognize the user, which could lead to trouble.

In contrast, the automobile terminal 2b of the information processing system Xb of the present embodiment is an information processing apparatus further including the user location calculation unit 210 that calculates, from the vehicle position information 302 acquired by the position acquisition unit 200 and the user's position information 300, the location of the user displayed in the vehicle-exterior image data 320 obtained from the vehicle's external camera 26, and in which the image presentation unit 120 and/or the image presentation unit 220 presents the user location calculated by the user location calculation unit 210 on the vehicle-exterior image data 320.

With this configuration, the driver can recognize the user's location from images of the surroundings while driving. This makes it easier for the driver to spot the user and to reach the meeting place smoothly.

In addition, the user terminal 1b of the present embodiment may also be able to display the vehicle-exterior image data 320 via the image presentation unit 120. The user can therefore also see, in real time, roughly where the vehicle is travelling. That is, by knowing the current state of the vehicle, the user can easily grasp where the called vehicle has arrived.

In this way, having the user and the driver recognize each other reduces the likelihood of trouble at the meeting place.
The user terminal 1b of the present embodiment is an information processing apparatus characterized in that the vehicle location calculation unit 110 acquires the vehicle's appearance image data 350 and calculates the vehicle location by recognizing the vehicle in the image using that appearance image data 350.

With this configuration, the called vehicle can be reliably recognized in the user image data 310 and its location calculated. That is, image recognition using data of the actual vehicle can raise the recognition rate markedly compared with a DNN that merely categorizes automobiles in general. The user can therefore reliably recognize the called vehicle, and trouble can be prevented.

The user terminal 1b of the present embodiment is also an information processing apparatus characterized in that the image presentation unit 120 displays the vehicle's position with a pointer on the acquired map data 340 and also displays the appearance image data 350 near that pointer.

With this configuration, the user can easily pick out the called vehicle on the map and, further, more easily find the vehicle once it comes near. In particular, when the called vehicle has a distinctive appearance in terms of model, paintwork, or the like, it can be identified easily.
 従来、車両の呼び寄せのサービスでは、呼び寄せた車両の状況をユーザーが知ることはできなかった。ユーザーは、実際に呼び寄せた車が車両の画像と違う雰囲気だったらと考えて心配になることがあった。 Conventionally, in the vehicle calling service, the user cannot know the status of the calling vehicle. Users sometimes worried that the car they actually called was in a different atmosphere from the vehicle image.
 これに対して、本実施形態のユーザー端末1bは、画像提示部120は、車両の車内用カメラ27から取得した車内画像データ330も提示する情報処理装置であることを特徴とする。 On the other hand, the user terminal 1b of the present embodiment is characterized in that the image presentation unit 120 is an information processing apparatus that also presents in-vehicle image data 330 acquired from the in-vehicle camera 27 of the vehicle.
With this configuration, the state of the called vehicle can be checked on the app in real time: not only the situation outside the vehicle but also the situation inside it. As a result, the user can be given a greater sense of security than pre-registered driver and vehicle photographs, reviews, ratings, and the like can provide.
Furthermore, for self-driving community cars, shared cars, and the like, knowing the situation inside the car has been important, because when the car already carries many passengers, the user needs to request a separate ride.
Some users also want to share (carpool) a taxi or hired car among several people to keep costs down. However, heavy carpooling can feel bothersome, and users were sometimes unsure whether to request a separate ride instead.
In contrast, since the situation inside the vehicle can be seen, the user can judge how many people are aboard and how many more are likely to board. The user can therefore easily decide whether to request a separate ride, improving usability.
The user terminal 1b of the present embodiment is characterized as an information processing device further comprising a camera control unit 130 that controls the vehicle's external camera 26 and/or in-vehicle camera 27.
With this configuration, the user can choose, from the app on the user terminal 1b, what the automobile terminal 2b captures. In other words, the user can remotely operate the cameras of the automobile terminal 2b and freely view images from the vehicle while doing so.
This makes it possible to view what the user wants to show the driver or to see for themselves. The user can thus reliably show the driver where they are waiting, for example at a parking area or a busy pick-up point. In addition, the user can, for example, frame a person in easily recognizable clothing, such as their spouse or child, and capture that image so that the party is easier to identify from the vehicle.
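No wire protocol is given for the camera control unit 130. The following sketch assumes a hypothetical HTTP endpoint on the automobile terminal and an illustrative JSON command schema; none of these names come from the disclosure:

import json
import urllib.request

def send_camera_command(car_terminal_url, camera="external",
                        pan_deg=0.0, tilt_deg=0.0, zoom=1.0):
    command = {
        "camera": camera,      # "external" (26) or "interior" (27); assumed values
        "pan_deg": pan_deg,    # rotate the holder H horizontally
        "tilt_deg": tilt_deg,  # rotate the holder H vertically
        "zoom": zoom,          # zoom magnification
    }
    req = urllib.request.Request(
        car_terminal_url + "/camera/command",  # hypothetical endpoint
        data=json.dumps(command).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200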
In addition, in the information processing system Xb of the present embodiment, the user image data 310 captured by the user terminal 1b can be transmitted to the automobile terminal 2b.
This makes it possible to send the driver an image of where the user is, the current situation, and so on, sparing the driver the effort of searching for the user and improving the service.
In the above-described embodiment, the external camera 26 and the in-vehicle camera 27 were described as being controlled according to the user's instructions.
Alternatively, while the vehicle is stopped, the driver may also drive the holder H via the app on the automobile terminal 2b to change the orientation and zoom magnification of the external camera 26 and the in-vehicle camera 27.
Furthermore, the vehicle exterior image data 320, the in-vehicle image data 330, and the user image data 310 need not only be transmitted to each terminal in real time; they may also be stored at each terminal for a specific period, allowing images from several seconds to several minutes earlier to be browsed.
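One way to realize this short-term retention, as a sketch with an assumed retention period, is a time-bounded ring buffer on each terminal:

import collections
import time

class FrameBuffer:
    def __init__(self, retention_s=180):  # retention period is illustrative
        self.retention_s = retention_s
        self.frames = collections.deque()  # (timestamp, frame_bytes)

    def push(self, frame_bytes):
        now = time.time()
        self.frames.append((now, frame_bytes))
        while self.frames and now - self.frames[0][0] > self.retention_s:
            self.frames.popleft()  # drop frames older than the retention period

    def at(self, seconds_ago):
        """Return the stored frame closest to `seconds_ago` in the past."""
        target = time.time() - seconds_ago
        if not self.frames:
            return None
        return min(self.frames, key=lambda tf: abs(tf[0] - target))[1]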
In addition, the in-vehicle image data 330 may include not only images from the in-vehicle camera 27 but also images from an external camera through which, for example, the vehicle's appearance can be seen.
<Third embodiment>
Next, the information processing system Xc according to the third embodiment of the present disclosure will be described with reference to FIG. 9. In FIG. 9 as well, configurations similar to those of the first and second embodiments described above are given the same reference numerals.
In the first embodiment described above, the location of the vehicle was presented in the user image data 310. In the second embodiment, in addition to the location of the vehicle, the location of the user was presented in the vehicle exterior image data 320.
In contrast, as in the information processing system Xc, the system may be configured so that only the automobile terminal 2c presents the user's location. In the example of FIG. 9, the control unit 20c includes a position acquisition unit 200, a user location calculation unit 210, and an image presentation unit 220. The storage unit 21c stores position information 300 and vehicle exterior image data 320. The user terminal 1c holds position information 302.
That is, the automobile terminal 2c of the present embodiment is characterized as an information processing device comprising: a position acquisition unit 200 that acquires the position information 302 of the vehicle being called to the meeting place and the position information 300 of the user; a user location calculation unit 210 that calculates, from the vehicle and user position information acquired by the position acquisition unit 200, the location of the user displayed in the vehicle exterior image data acquired from the vehicle's external camera 26; and an image presentation unit 220 that presents the user location calculated by the user location calculation unit 210 on the vehicle exterior image data.
With this configuration, as in the second embodiment described above, the driver of the automobile can more easily recognize the user. In this case, the user does not need to look for the called vehicle, which reduces the burden on the user.
<Fourth embodiment>
Next, the information processing system Xd according to the fourth embodiment of the present disclosure will be described with reference to FIG. 10. In FIG. 10 as well, configurations similar to those of the first through third embodiments described above are given the same reference numerals.
The first through third embodiments described above were described with the map data 340 held on the server 3. In the present embodiment, however, processing equivalent to, for example, that of the second embodiment is performed only between the user terminal 1d and the automobile terminal 2d, in a peer-to-peer (PtP) fashion.
In the example of FIG. 10, the control unit 10d and storage unit 11d of the user terminal 1d and the control unit 20d of the automobile terminal 2d have the same configurations as the control unit 10b, storage unit 11b, and control unit 20b of the second embodiment, respectively. The storage unit 21d of the automobile terminal 2d, on the other hand, stores the map data 340 in addition to the data held in the storage unit 21b of the second embodiment.
With this configuration, the user and the driver can confirm each other without going through the server 3. Moreover, since the automobile terminal 2d stores the dedicated navigation map data 340 and matching is performed against it, discrepancies in the map data 340 between the user terminal 1d and the automobile terminal 2d are prevented, making the user and vehicle locations easier to calculate.
In this example, the map data 340 was described as stored in the automobile terminal 2d, but it may instead be held on the user terminal 1d. A separate navigation device connectable to the automobile terminal 2d may also be provided, with the map data 340 acquired from it. Furthermore, the vehicle and user locations may be recognized directly, without using the map data 340.
<Fifth embodiment>
Next, the information processing system Xe according to the fifth embodiment of the present disclosure will be described with reference to FIG. 11. In FIG. 11 as well, configurations similar to those of the first through fourth embodiments described above are given the same reference numerals.
In the first through fourth embodiments described above, the vehicle or user location was calculated from the image data at each terminal. However, as in the information processing system Xe, the server 3e may perform these calculations. In this case, for example, the server 3e includes the user location calculation unit 210 and the vehicle location calculation unit 110 in its control unit 30e. In addition to the map data 340, the storage unit 31e acquires and stores the position information 300 and the user image data 310 from the user terminal 1e, and further acquires the position information 302, the vehicle exterior image data 320, and the in-vehicle image data 330 from the automobile terminal 2e and stores them in the storage unit 31e. The server 3e then distributes each set of image data to each terminal by streaming or the like.
In this example, the control unit 10e of the user terminal 1e includes a position acquisition unit 100, an image presentation unit 120, and a camera control unit 130. The control unit 20e of the automobile terminal 2e includes a position acquisition unit 200 and an image presentation unit 220.
With this configuration, the processing load on the user terminal 1e and the automobile terminal 2e can be reduced. Moreover, by accumulating the image data and recognizing the locations on the server 3e, learning can be performed efficiently, improving the accuracy of matching against the map data 340 and of each recognition.
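A minimal sketch of this server-side split follows, assuming Flask is available; the endpoint names and message fields are hypothetical, and compute_pixel is a placeholder standing in for a location calculation unit (110/210) such as the projection sketched earlier:

from flask import Flask, jsonify, request

app = Flask(__name__)
latest = {}  # terminal_id -> last reported position dict

def compute_pixel(cam, target, params):
    # Placeholder: a real server would run the vehicle/user location
    # calculation here. Returns the image centre as a stand-in.
    return params["width"] // 2, params["height"] // 2

@app.route("/position/<terminal_id>", methods=["POST"])
def report_position(terminal_id):
    # Each terminal periodically uploads {"lat": ..., "lon": ...}.
    latest[terminal_id] = request.get_json()
    return jsonify(ok=True)

@app.route("/locate", methods=["POST"])
def locate():
    # Query: which terminal's camera, and which terminal to locate in its image.
    query = request.get_json()
    cam = latest.get(query["camera"])
    target = latest.get(query["target"])
    if cam is None or target is None:
        return jsonify(ok=False), 404
    x, y = compute_pixel(cam, target, query["camera_params"])
    return jsonify(ok=True, x=x, y=y)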
Furthermore, the storage unit 31e of the server 3e may be configured to accumulate the vehicle exterior image data 320 and the like for a specific period.
This allows the user terminal 1e to search, along the time series, where the vehicle has been running, making the vehicle easier to find.
In addition, as shown in FIG. 12, by accumulating the vehicle exterior image data 320 of each vehicle using the vehicle-calling service, the user can browse exterior image data 320 that has been matched against the map data 340 and set a destination from it.
For example, as in screen example 520 of FIG. 12(a), the user sets a destination with a pin on the map. In this case, as in screen example 530 of FIG. 12(b), the corresponding vehicle exterior image data 320 can be displayed. The user terminal 1e can thereby set a destination using images of locations through which vehicles have actually travelled.
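As a sketch of the lookup behind screen examples 520 and 530, stored exterior images could be filtered by distance from the dropped pin. The in-memory record layout is an assumption; a deployed server would query its accumulated store:

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def images_near_pin(image_index, pin_lat, pin_lon, radius_m=50):
    """image_index: list of {"lat", "lon", "taken_at", "path"} records."""
    hits = [rec for rec in image_index
            if haversine_m(rec["lat"], rec["lon"], pin_lat, pin_lon) <= radius_m]
    return sorted(hits, key=lambda rec: rec["taken_at"], reverse=True)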
Furthermore, on the user terminal 1e, a destination can also be set in the sense of "roughly, I want to go around there", based on the result of matching landmarks in the user image data 310 against the map data 340.
With this configuration, the destination can be set easily even when the user does not know the exact arrival point. This covers cases where it is difficult to explain an address, such as at a travel destination or a place being visited for the first time. That is, it becomes easy to search for a destination the user only vaguely remembers as "around there". In addition, by showing an image, the user can convey the destination to the driver reliably.
By accumulating on the server 3e the exterior image data 320 matched against the map data 340, the user can also browse image data of the destination after actually boarding the vehicle. Even when the user knows only an address and not what the arrival point looks like, browsing the destination images lets the user explain to the driver that they want to go "around there".
In addition, a 360° image can be created from the accumulated exterior image data 320, enabling, for example, real-time distribution of VR (Virtual Reality) video.
Furthermore, the storage unit 31e of the server 3e may be configured to accumulate the in-vehicle image data 330 and the like for a specific period.
This allows the user to check past in-vehicle conditions from the user terminal 1e and easily judge whether to use the calling service, based on how many people are aboard, how many more are likely to board, and so on.
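As a rough sketch of estimating occupancy from the stored in-vehicle image data 330 (OpenCV assumed; the stock HOG person detector is used only to keep the sketch self-contained, and a deployed system would likely need a detector better suited to seated passengers):

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def count_occupants(frame_path="in_vehicle_330.jpg"):
    frame = cv2.imread(frame_path)
    if frame is None:
        return 0
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return len(rects)  # rough count of detected people in the cabin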
<Other embodiments>
In the above-described embodiments, the vehicle exterior image data 320 was described as displaying only the location of the user calling the vehicle.
However, the vehicle exterior image data 320 may display the locations of all users who have launched the app. In addition, data on users who have used the service in the past, and the flow and volume of people inferred from mobile signals, may be shown on the map data 340.
With this configuration, the user and the driver can judge whether a spot is suitable as a meeting place and, if necessary, change the meeting place, which can prevent trouble.
In the above-described embodiments, only the location of the vehicle was described as being presented in the user image data 310.
However, even after the user has boarded the vehicle, information such as introductions to historical buildings, town history, news, and weather can be matched against the map data 340 and shown as an AR display while travelling. At this time, automatic translation may be performed, or the information for the AR display may be selected by estimating the user's emotion.
Matching knowledge with the experience on the spot in this way can create a lasting impression. A user may also have idle time while travelling and prefer not to talk with the driver; even then, watching the AR display gives the user something of value.
In the first through fifth embodiments of the present disclosure described above, the features of the present disclosure were explained with the focus on the main configurations. However, the functional units of these embodiments may be combined arbitrarily: any one of the configurations of the first through fifth embodiments, any combination of them, or a configuration including all functional units may be used.
The user terminal 1 may also include functional units not described in the first through fifth embodiments.
The control units and methods described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program. Alternatively, the control units and methods described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Or, the control units and methods described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and memory programmed to execute one or more functions and a processor configured with one or more hardware logic circuits. The computer program may also be stored in a computer-readable non-transitory tangible recording medium as instructions to be executed by a computer.
Here, the flowcharts described in this application, and the processing of those flowcharts, consist of a plurality of sections (also referred to as steps), each of which is denoted, for example, S101. Each section can further be divided into a plurality of subsections, and a plurality of sections can be combined into a single section. Each section configured in this way can also be referred to as a device, module, or means.
Although the present disclosure has been described with reference to the embodiments, it is to be understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent scope. In addition, various combinations and forms, as well as other combinations and forms including only one element, more, or less, fall within the scope and spirit of the present disclosure.

Claims (9)

  1.  An information processing device comprising:
      a position acquisition unit (100) that acquires position information (302) of a vehicle being called to a meeting place and position information (300) of a user;
      a vehicle location calculation unit (110) that calculates, from the position information of the vehicle and the position information of the user acquired by the position acquisition unit, the location of the vehicle displayed in user image data (310) captured by the user's camera (16); and
      an image presentation unit (120) that presents the location of the vehicle calculated by the vehicle location calculation unit on the user image data.
  2.  The information processing device according to claim 1, further comprising:
      a user location calculation unit (210) that calculates, from the position information of the vehicle and the position information of the user acquired by the position acquisition unit, the location of the user displayed in vehicle exterior image data (320) captured by the external camera (26) of the vehicle,
      wherein the image presentation unit presents the location of the user calculated by the user location calculation unit on the vehicle exterior image data.
  3.  The information processing device according to claim 1 or 2, wherein the vehicle location calculation unit acquires appearance image data (350) of the vehicle and calculates the location of the vehicle by image-recognizing the vehicle using the appearance image data.
  4.  The information processing device according to claim 3, wherein the image presentation unit displays the position of the vehicle with a pointer on acquired map data (340) and also displays the appearance image data near the pointer.
  5.  The information processing device according to any one of claims 1 to 4, wherein the image presentation unit also presents in-vehicle image data (330) captured by the in-vehicle camera (27) of the vehicle.
  6.  The information processing device according to any one of claims 2 to 5, further comprising a camera control unit (130) that controls at least one of the external camera and the in-vehicle camera of the vehicle.
  7.  An information processing device comprising:
      a position acquisition unit (100) that acquires position information (302) of a vehicle being called to a meeting place and position information (300) of a user;
      a user location calculation unit (210) that calculates, from the position information of the vehicle and of the user acquired by the position acquisition unit, the location of the user displayed in vehicle exterior image data (320) acquired from the external camera (26) of the vehicle; and
      an image presentation unit (120) that presents the location of the user calculated by the user location calculation unit on the vehicle exterior image data.
  8.  A program executed by an information processing device (1), the program causing:
      a position acquisition unit (100) to acquire position information (302) of a vehicle being called to a meeting place and position information (300) of a user;
      a vehicle location calculation unit (110) to calculate, from the position information of the vehicle and the position information of the user acquired by the position acquisition unit, the location of the vehicle displayed in user image data (310) captured by the user's camera (16); and
      an image presentation unit (120) to present the location of the vehicle calculated by the vehicle location calculation unit on the user image data.
  9.  An information processing method executed by an information processing device (1), the method comprising:
      acquiring position information (302) of a vehicle being called to a meeting place and position information (300) of a user;
      calculating, from the acquired position information of the vehicle and position information of the user, the location of the vehicle displayed in user image data (310) captured by the user's camera (16); and
      presenting the calculated location of the vehicle on the user image data.



PCT/JP2019/007190 2018-04-25 2019-02-26 Information processing device, program and information processing method WO2019207944A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-083762 2018-04-25
JP2018083762A JP2019191914A (en) 2018-04-25 2018-04-25 Information processor, program, and information processing method

Publications (1)

Publication Number Publication Date
WO2019207944A1 true WO2019207944A1 (en) 2019-10-31

Family

ID=68294945

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/007190 WO2019207944A1 (en) 2018-04-25 2019-02-26 Information processing device, program and information processing method

Country Status (2)

Country Link
JP (1) JP2019191914A (en)
WO (1) WO2019207944A1 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020077177A (en) * 2018-11-07 2020-05-21 矢崎総業株式会社 Reserved vehicle confirmation system
JP7468411B2 (en) 2021-03-05 2024-04-16 トヨタ自動車株式会社 Autonomous vehicles, vehicle dispatch management devices, and terminal equipment


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003121195A (en) * 2001-08-07 2003-04-23 Casio Comput Co Ltd Target position searching apparatus, target position searching method and program thereof
JP2009243885A (en) * 2008-03-28 2009-10-22 Denso Corp Car navigation device
JP2012225782A (en) * 2011-04-20 2012-11-15 Aisin Aw Co Ltd Information service system and information service method
JP2014048079A (en) * 2012-08-30 2014-03-17 Mitsubishi Electric Corp Navigation device
JP2015230690A (en) * 2014-06-06 2015-12-21 パナソニックIpマネジメント株式会社 Vehicle confirmation system, on-vehicle device, terminal device, server, and vehicle confirmation control method
JP2016138854A (en) * 2015-01-29 2016-08-04 株式会社ゼンリンデータコム Navigation system, navigation device, flying object, navigation cooperation control method, cooperation control program for navigation device, and cooperation control program for flying object

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021086597A (en) * 2020-03-09 2021-06-03 ニューラルポケット株式会社 Information processing system, information processing device, terminal device, server device, program, and method
JP6997471B2 (en) 2020-03-09 2022-01-17 ニューラルポケット株式会社 Information processing system, information processing device, terminal device, server device, program, or method
JP2021086608A (en) * 2020-08-24 2021-06-03 ニューラルポケット株式会社 Information processing system, information processing device, terminal device, server device, program, or method

Also Published As

Publication number Publication date
JP2019191914A (en) 2019-10-31

Similar Documents

Publication Publication Date Title
WO2019207944A1 (en) Information processing device, program and information processing method
JP7182341B2 (en) Information processing equipment
CN106062514B (en) Interaction between a portable device and a vehicle head unit
JP6819086B2 (en) Display control device for vehicles
JP5943222B2 (en) POSITION INFORMATION PROVIDING DEVICE AND POSITION INFORMATION PROVIDING SYSTEM
WO2016035281A1 (en) Vehicle-mounted system, information processing method, and computer program
EP3660458A1 (en) Information providing system, server, onboard device, and information providing method
EP2860494A1 (en) Position information transmission apparatus, position information transmission system, and vehicle
CN104034343A (en) Navigation method, navigation system, vehicle-mounted terminal and acquisition method for navigation information of vehicle-mounted terminal
CN114096996A (en) Method and apparatus for using augmented reality in traffic
CN111721315A (en) Information processing method and device, vehicle and display equipment
CN114333404A (en) Vehicle searching method and device for parking lot, vehicle and storage medium
CN113223316A (en) Method for quickly finding unmanned vehicle, control equipment and unmanned vehicle
JP7372144B2 (en) In-vehicle processing equipment and in-vehicle processing systems
JP5612925B2 (en) Traffic information processing apparatus, traffic information processing system, program, and traffic information processing method
CN110580272A (en) Information processing apparatus, system, method, and non-transitory storage medium
CN111758115A (en) Vehicle co-taking auxiliary system
JP2019125167A (en) Onboard equipment, server, navigation system, map display program, and map display method
CN115034416A (en) Autonomous vehicle, vehicle distribution management device, and terminal device
JP6584285B2 (en) Electronic device and recommendation information presentation system
US20230260335A1 (en) Information processing system, information terminal, information processing method, and recording medium
JP7124823B2 (en) Information processing device, information processing method, and information processing system
JP2015170098A (en) Message exchange program, method, and electronic device
JP5831936B2 (en) In-vehicle device system and in-vehicle device
CN113401071B (en) Display control device, display control method, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19794027

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19794027

Country of ref document: EP

Kind code of ref document: A1