WO2023084810A1 - Information processing apparatus, information processing method, and information processing program - Google Patents


Info

Publication number
WO2023084810A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
image data
information processing
information
time
Prior art date
Application number
PCT/JP2022/015291
Other languages
French (fr)
Japanese (ja)
Inventor
淳 磯村
宣宏 沖
一兵衛 内藤
磯生 上野
直子 重松
シュムール アール
ブラッド ダビジャ
デビッド アッシュ
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Publication of WO2023084810A1 publication Critical patent/WO2023084810A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory with means for controlling the display position

Definitions

  • the present invention relates to an information processing device, an information processing method, and an information processing program.
  • AR: Augmented Reality.
  • Anastasia Morozova, "Augmented Reality in Travel: How AR Can Enrich Tourists' Experiences While on Vacation", [online], [searched August 6, 2021], Internet <URL: https://medium.com/@info_35021/augmented-reality-in-travel-how-ar-can-enrich-touristsexperiences-while-on-vacation-41782b69b21d>; Hanadate and 8 others, "High-speed spatio-temporal data management technology "Axispot" and spatio-temporal data high-speed search technology," NTT Technical Journal, November 2019, pp. 18-22.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide a technique capable of providing attractive information to passengers of mobile objects.
  • An information processing apparatus according to the present invention is communicably connected to a spatio-temporal database that processes spatio-temporal data in real time, and provides information to a passenger of a mobile object. The apparatus includes: a collection unit that collects, from a plurality of mobile objects, image data captured by the mobile objects at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each mobile object at the time of capture, and that transmits the plurality of image data, time data, and position data to the spatio-temporal database; a reception unit that receives position data of a predetermined mobile object from that mobile object; a processing unit that retrieves, from the image data group in the spatio-temporal database, an object in image data corresponding to the position data of the predetermined mobile object; and a transmission unit that transmits, to the augmented reality display device of the predetermined mobile object, object display information for displaying the object in augmented reality superimposed on the scene seen from the predetermined mobile object.
  • An information processing method according to the present invention is executed by a device that is communicably connected to a spatio-temporal database that processes spatio-temporal data in real time, and provides information to a passenger of a mobile object. The method includes: a step of collecting, from a plurality of mobile objects, image data captured by the mobile objects at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each mobile object at the time of capture; a step of transmitting the plurality of image data, time data, and position data to the spatio-temporal database; a step of receiving position data of a predetermined mobile object from that mobile object; a step of retrieving, from the image data group in the spatio-temporal database, an object in image data corresponding to the position data of the predetermined mobile object; and a step of transmitting, to the augmented reality display device of the predetermined mobile object, object display information for displaying the object in augmented reality superimposed on the scene seen from the predetermined mobile object.
  • An information processing program causes a computer in a communication network to function as the information processing device.
  • An information processing program causes a computer in a mobile object to function as the information processing device.
  • FIG. 1 is a diagram illustrating a configuration example of an information processing system.
  • FIG. 2 is a diagram showing a processing flow of data collection operation.
  • FIG. 3 is a diagram showing a processing flow of operations such as AR display of an object.
  • FIG. 4 is a diagram illustrating a hardware configuration example of an information processing apparatus.
  • An object of the present invention is to provide attractive information to a passenger of a mobile object.
  • the present invention performs an AR display in which the past real world is superimposed on the current real world, instead of the conventional AR display in which the virtual world is superimposed on the real world.
  • the present invention gives the feeling that real value has been added to the scene seen from the moving object, by AR-displaying not a virtual object but a real object that actually existed in the past.
  • the present invention uses a spatio-temporal database that can store a large amount of spatio-temporal data at once, and that can search and analyze, in real time, past spatio-temporal data and the objects present at a given place on a given date and time (see Patent Document 2 and Non-Patent Document 2).
  • the present invention uses the real-time spatio-temporal database for processing such as searches, to retrieve information about actual objects that existed at a predetermined location in the past and actual events that occurred there in the past.
  • an AR display is then performed, superimposing this information on the current environment of a moving object located at that same place.
  • by performing the AR display of the present invention, it is possible to provide interesting, timeless, and attractive information. For example, a passenger of a mobile object can be given an enhanced travel experience. Specifically, by AR-displaying bronze statues of historical figures that were once installed along the road on which the car travels, social education can be provided to the passengers' children.
  • the AR display of past firework displays at the place where the vehicle is running can make the driver feel like he or she is enjoying a trip.
  • the present invention can also improve driving safety, for example by reducing driver distraction and relieving the boredom experienced while driving on lightly trafficked roads.
  • FIG. 1 is a diagram showing a configuration example of an information processing system 1 according to this embodiment.
  • This information processing system 1 includes an information processing device 10, a spatio-temporal database 20, and a plurality of automobiles 30, which are interconnected via a communication network so as to be able to communicate with each other.
  • the information processing device 10 provides attractive information to passengers of the automobile 30 (driver, fellow passenger, operator, passenger, etc.).
  • the information processing device 10 may be configured by a computer (server device, etc.) in the communication network, or may be configured by a computer (in-vehicle equipment, etc.) in the vehicle 30 .
  • the information processing device 10 includes a collection unit 11, a reception unit 12, a processing unit 13, and a transmission unit 14, as shown in FIG.
  • the collection unit 11 has a function of collecting, from the plurality of automobiles 30, image data photographed by the automobiles 30 at a plurality of locations on a plurality of dates and times, recorded audio data, various measured data, and the position data and time data of each automobile 30 at the time of photographing, recording, and measurement.
  • the collection unit 11 transmits the plurality of image data, audio data, various measurement data, position data, and time data to the spatio-temporal database 20, where the image data, audio data, various measurement data, position data, and time data are associated with each other and stored.
  • the collection unit 11 has a function of adding an image data tag indicating the image content of the image data to the image data. Similarly, the collection unit 11 has a function of adding an audio data tag indicating the audio content of the audio data to the audio data.
  • the receiving unit 12 has a function of receiving position data and time data of a predetermined automobile 30' from the predetermined automobile 30'.
  • the receiving unit 12 has a function of receiving profile information of passengers of the predetermined automobile 30' from the predetermined automobile 30'.
  • the processing unit 13 has a function of searching the image data group and audio data group of the spatio-temporal database 20 for an object in image data, and for audio data, corresponding to the position data of the predetermined automobile 30'. Similarly, the processing unit 13 has a function of searching the image data group and audio data group of the spatio-temporal database 20 for an object in image data, and for audio data, corresponding to both the position data and the time data of the predetermined automobile 30'.
  • the processing unit 13 has a function of searching for an object in image data and for audio data corresponding to the position data of the predetermined automobile 30' based on the image data tags and the audio data tags. Similarly, the processing unit 13 has a function of retrieving an object in image data and audio data corresponding to both the position data and the time data of the predetermined automobile 30' based on the image data tags and the audio data tags.
  • the processing unit 13 has a function of retrieving objects and audio data in image data related to profile information of passengers of a predetermined automobile 30'.
  • the processing unit 13 has a function of searching for objects in image data and audio data that affect driving of a predetermined automobile 30'.
  • the transmission unit 14 has a function of transmitting, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying (augmented reality display) the object retrieved by the processing unit 13, superimposed on the scene of the predetermined automobile 30' at the same display position at which the object was displayed in the past image data. Similarly, the transmission unit 14 has a function of transmitting, to the output device 35 of the predetermined automobile 30', audio output information for outputting the audio data retrieved by the processing unit 13.
  • the spatio-temporal database 20 is based on an ultra-high-speed spatio-temporal data management technology that accumulates spatio-temporal data such as image data transmitted simultaneously by the plurality of automobiles 30, and that includes a high-speed spatio-temporal data search technology for searching and analyzing, in real time, the dynamic and static objects located at a predetermined place at a predetermined time. It is a database technology that enables real-time search and classification analysis of spatio-temporal data.
  • This spatio-temporal database 20 is disclosed in Patent Document 2 and Non-Patent Document 2.
  • the spatio-temporal database 20 uses a spatio-temporal code as the key of a distributed key-value store to improve the efficiency of multidimensional information retrieval, enabling fast retrieval of spatio-temporal data and objects.
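The spatio-temporal code described above can be illustrated with a toy encoding. The sketch below is an assumption-laden stand-in for the actual Axispot key scheme, which is not disclosed here: it interleaves quantized time, latitude, and longitude bits so that records close in space and time share a long common key prefix in a key-value store, making prefix scans efficient.

```python
# Toy spatio-temporal code as a distributed key-value-store key.
# All parameters (16-bit quantization, bit order) are illustrative
# assumptions, not the real Axispot encoding.
import os.path

def spatiotemporal_key(lat: float, lon: float, epoch_s: int, bits: int = 16) -> str:
    qlat = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))   # quantize latitude
    qlon = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))  # quantize longitude
    qt = epoch_s & ((1 << bits) - 1)                       # coarse time bucket
    chunks = []
    for i in range(bits - 1, -1, -1):                      # MSB-first interleave
        chunks += [str((qt >> i) & 1), str((qlat >> i) & 1), str((qlon >> i) & 1)]
    return "".join(chunks)

# A plain dict stands in for the distributed key-value store.
kvs = {spatiotemporal_key(35.68, 139.77, 1_636_588_800): {"tag": "bird"}}

tokyo = spatiotemporal_key(35.68, 139.77, 1_636_588_800)
near = spatiotemporal_key(35.681, 139.771, 1_636_588_800)  # ~100 m away
far = spatiotemporal_key(51.50, -0.10, 1_636_588_800)      # another continent
shared_near = len(os.path.commonprefix([tokyo, near]))
shared_far = len(os.path.commonprefix([tokyo, far]))
```

Because nearby records share a much longer key prefix than distant ones, a single prefix scan over the key space retrieves everything that happened around a given place and time.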
  • the plurality of automobiles 30 are connected cars linked to the communication network.
  • a plurality of automobiles 30 each include a camera 31 , a position sensor 32 , a clock 33 , an input device 34 and an output device 35 .
  • the automobile 30 may further include a speed sensor, an acceleration sensor, a steering wheel angle sensor, a line-of-sight position measurement sensor, and the like.
  • the camera 31 is an imaging device with a sound pickup function installed in the automobile 30 .
  • the camera 31 has a function of photographing the front, rear, left, right, bottom, top, and oblique directions of the automobile 30 and picking up sounds generated around the automobile 30 .
  • the position sensor 32 has a function of acquiring position data indicating the geographical two-dimensional coordinate position of the automobile 30 from the global positioning system.
  • the position sensor 32 is, for example, a positioning function built in a navigation system.
  • the clock 33 has a function of measuring the time data of the automobile 30.
  • the clock 33 may measure the time itself, or may refer to the time via a communication network using a network time protocol.
  • the clock 33 may be omitted if the camera 31 or the position sensor 32 has a built-in time measurement management function.
  • the input device 34 is a computer installed in the automobile 30.
  • the input device 34 is, for example, a tablet terminal, an in-vehicle device, or the like.
  • the output device 35 is an AR display device capable of AR-displaying objects and an audio output device capable of outputting audio data.
  • the output device 35 is, for example, a computer that outputs video and audio from the camera 31, a navigation system, a tablet terminal, or the like.
  • the output device 35 may be, for example, a projector that outputs an image to the front window of the automobile 30 and outputs sound, a head-mounted display, a head-up display, or the like. That is, the output device 35 is a conventionally known general screen or display, VR goggles or spectacles type display device, a window installed in a moving object, a projector, or the like.
  • FIG. 2 is a diagram showing a processing flow of data collection operation.
  • the collection unit 11 constantly collects image data and the like regarding actual objects existing around the plurality of automobiles 30 that are running or paused, and actual events occurring around the plurality of automobiles 30 .
  • the collection unit 11 constantly collects, from the plurality of automobiles 30 that are running or temporarily stopped, image data captured by the cameras 31 of the automobiles 30 at a plurality of locations on a plurality of dates and times, recorded audio data, position data of each automobile 30 obtained by the position sensor 32, and time data of each automobile 30 measured by the clock 33.
  • the collection unit 11 constantly collects image data about cars, roads, signs, people, animals, trees, rivers, lawns, houses, weather, the sun, clouds, rainbows, lightning, and other potentially interesting human and natural objects and events.
  • when the automobile 30 is also equipped with a speed sensor, an acceleration sensor, a steering wheel angle sensor, a line-of-sight position measurement sensor, and the like, the collection unit 11 also constantly collects the various measurement data measured by each sensor at the same time.
  • Step S102: the collection unit 11 uses the image data, audio data, various measurement data, position data, and time data collected from the plurality of automobiles 30 to determine the category to which each image data and audio data belongs, adds an image data tag indicating the category to the image data, and adds an audio data tag indicating the category to the audio data.
  • a machine learning model for classification tasks that has been built in advance to determine the categories to which image data and audio data belong is used.
  • the collection unit 11 inputs image data or audio data to the machine learning model and adopts the category inferred by the model as the category to which the data belongs. It should be noted that the category classification accuracy of the machine learning model can be improved by repeatedly training it on many types of image data and audio data.
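As a minimal sketch of this tagging step, the code below uses a stub `classify` function in place of the pre-built machine learning model; the function name, the record fields, and the category strings are all illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of step S102's tagging. `classify` stands in for a
# pre-trained classification model; a real system would run an image or
# audio classifier on the raw payload.

def classify(data: bytes) -> str:
    # Stub: pretend the payload carries a recognizable signature.
    if b"bird" in data:
        return "animal:bird"
    if b"fireworks" in data:
        return "event:fireworks"
    return "unknown"

def tag_record(record: dict) -> dict:
    """Attach image/audio data tags indicating the inferred categories."""
    tagged = dict(record)  # copy; the original record is left untouched
    if "image" in record:
        tagged["image_tag"] = classify(record["image"])
    if "audio" in record:
        tagged["audio_tag"] = classify(record["audio"])
    return tagged

rec = {"image": b"...bird...", "audio": b"...fireworks...",
       "position": (35.68, 139.77), "time": 1_636_588_800}
tagged = tag_record(rec)
```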
  • Step S103: finally, the collection unit 11 transmits the plurality of image data, audio data, various measurement data, position data, time data, and tag data to the spatio-temporal database 20, where the data are associated with each other and stored.
  • the information processing apparatus 10 executes steps S101 to S103 continuously, hour after hour, day after day, month after month, and year after year.
  • the spatio-temporal database 20 accumulates a huge amount of image data and the like regarding things that existed at arbitrary dates and times in various locations around the world and events that occurred at that time.
  • the spatio-temporal database 20 uses position data, time data, image data tags, and audio data tags as search keys, and stores the image data, audio data, various measurement data, and tag data in association with each other so that they can be searched with those keys. Thanks to the real-time search and analysis of spatio-temporal data and objects provided by the spatio-temporal database 20, the information processing apparatus 10 can retrieve desired image data and audio data in real time at a later date.
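The association of records with search keys might be sketched as follows; the record fields (`image_tag`, `position`, etc.) and the simple in-memory tag index are illustrative assumptions, not the actual schema of the spatio-temporal database 20.

```python
# Sketch of storing collected records so they can be searched by tag and
# position keys. A real deployment would use the distributed
# spatio-temporal store; a list plus a dict index stand in here.
from collections import defaultdict

records = []                       # primary store
index_by_tag = defaultdict(list)   # secondary index: tag -> record ids

def store(record: dict) -> int:
    rid = len(records)
    records.append(record)
    for key in ("image_tag", "audio_tag"):
        if key in record:
            index_by_tag[record[key]].append(rid)
    return rid

def search(tag: str, near: tuple, max_deg: float = 0.01) -> list:
    """Return records carrying `tag` whose position is close to `near`."""
    hits = []
    for rid in index_by_tag.get(tag, []):
        lat, lon = records[rid]["position"]
        if abs(lat - near[0]) <= max_deg and abs(lon - near[1]) <= max_deg:
            hits.append(records[rid])
    return hits

store({"position": (35.68, 139.77), "time": 1_636_588_800, "image_tag": "bird"})
store({"position": (34.70, 135.50), "time": 1_636_588_900, "image_tag": "bird"})
hits = search("bird", (35.68, 139.77))
```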
  • the collection unit 11 assigns category tags related to "seasons".
  • so that the season at the driving location of the automobile 30 can be determined at a later date, the collection unit 11 determines the season of the image data and audio data based on the colors and types of flowers, the colors of mountains, and the presence of snow in the image data, the chirping of cicadas in the audio data, the time data, and so on, and adds tags such as spring, summer, autumn, winter, and early winter.
  • the collection unit 11 assigns category tags related to "time".
  • the collection unit 11 determines the time of the image data based on how the sunset and sunrise appear in the image data, how the full moon appears in the case of nighttime, and the like, and attaches a tag for that time.
  • the collection unit 11 may add the time data collected at the same time as the image data as it is.
  • the collection unit 11 assigns category tags related to "weather".
  • the collection unit 11 determines the weather in the image data or audio data based on whether the image data includes the sun, rain, or many clouds, the color of the clouds, and whether the audio data includes the sound of rain or thunder, and assigns tags such as sunny, cloudy, rainy, and snowy.
  • the collection unit 11 assigns category tags related to "landscape".
  • the collection unit 11 determines the scenery in the image data based on whether there is a rainbow, a bridge, a paddy field, a mountain, the sea, a tower, or the like in the image data, and assigns a tag for that scenery.
  • the collection unit 11 assigns category tags related to "animals".
  • the collection unit 11 determines the animals in the image data and audio data based on whether the image data includes cows or birds, whether the audio data includes the bleating of sheep or the neighing of horses, and so on, and assigns a tag for each animal.
  • the name of the tag may be a generic name such as dog or bird, or an individual name such as pug dog or beagle dog.
  • An existing object recognition function can be used to determine whether an object such as a cow exists in the image data.
  • the collection unit 11 assigns category tags related to "vehicles".
  • the collection unit 11 determines the vehicles in the image data and audio data based on whether the image data contains a car or a train, and whether the audio data contains the running sound of a train or of a personal watercraft, and tags them accordingly.
  • An existing image recognition function can be used to determine whether or not there is an image of an automobile or the like in the image data.
  • the collection unit 11 attaches category tags related to "people" and "human activities”.
  • the collection unit 11 attaches a rice harvesting tag when there is a state of rice harvesting in the image data.
  • the collection unit 11 attaches a tag of a fireworks display when the image data includes the scenery of the fireworks display or fireworks in the sky.
  • An existing image recognition function can also be used to determine whether or not an event such as rice harvesting is included in the image data.
  • the collection unit 11 assigns category tags related to "safe operation of automobiles". Based on the measurement data of the speed sensor, the acceleration sensor, and the steering wheel angle sensor, the collection unit 11 attaches a safe-operation tag to the image data when the automobile 30 is driven smoothly. Conversely, when the acceleration or the steering wheel angle changes sharply in a short time, a dangerous-operation tag is attached to the image data.
  • the collection unit 11 assigns category tags related to "driver's interest". Based on the data from the line-of-sight position measurement sensor, the collection unit 11 attaches a driver's-interest tag to the image data when the driver's line of sight is directed somewhere other than the area in the traveling direction of the vehicle.
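A rule of the kind described for the "safe operation" tags could be sketched as below; the threshold values, units, and function layout are assumptions for illustration only, not values taken from the patent.

```python
# Illustrative rule-based tagging from sensor time series. Thresholds
# (3.0 m/s^2 acceleration jump, 30-degree steering jump per sample)
# are made-up example values.

def driving_tag(accel_series, steering_series,
                accel_jump=3.0, steer_jump=30.0) -> str:
    """Tag image data as safe/dangerous operation from sensor data.

    A sharp change in acceleration or steering-wheel angle between two
    consecutive samples yields a dangerous-operation tag; otherwise the
    driving is considered smooth and gets a safe-operation tag.
    """
    for a0, a1 in zip(accel_series, accel_series[1:]):
        if abs(a1 - a0) > accel_jump:
            return "dangerous_operation"
    for s0, s1 in zip(steering_series, steering_series[1:]):
        if abs(s1 - s0) > steer_jump:
            return "dangerous_operation"
    return "safe_operation"

smooth = driving_tag([0.0, 0.2, 0.1], [0.0, 2.0, 4.0])
abrupt = driving_tag([0.0, 5.0, 0.1], [0.0, 2.0, 4.0])
```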
  • FIG. 3 is a diagram showing a processing flow of operations such as AR display of an object.
  • the predetermined automobile 30' may be an automobile 30 that transmits image data and the like to the information processing device 10, or a vehicle that does not transmit image data and the like to the information processing device 10.
  • Step S201: the receiving unit 12 receives the position data acquired by the position sensor 32 and the time data measured by the clock 33 of the predetermined automobile 30' that is currently running or stopped.
  • the receiving section 12 also receives the profile information.
  • the profile information includes, for example, the age, sex, height, weight, hobby, job, personality, clothes, goals, etc. of the input person.
  • Step S202: when the processing unit 13 receives the passenger's profile information, it inputs the profile information into a machine learning model built in advance to determine a user's preferences from a profile, and adopts the inference result output from the machine learning model as the passenger's preference information.
  • the processing unit 13 inputs the position data and time data of the predetermined automobile 30' into a machine learning model constructed in advance for safe driving, and also inputs the passenger's profile information into the machine learning model. Then, the inference result output from the machine learning model is determined as the safe driving information regarding the safe driving of the passenger.
  • the accuracy of the preference information and the safe driving information produced by the above two types of machine learning models can be improved by repeatedly training the models on passengers' profile information and on vehicle position data and time data.
  • the above two types of machine learning models may be customized for each passenger so as to learn each passenger's interests.
  • Step S203: the processing unit 13 accesses the spatio-temporal database 20, searches for the image data group that matches the position data of the predetermined automobile 30', and, from that group, searches for image data to which an image data tag matching the passenger's preference information has been attached. The processing unit 13 searches for audio data in the same way. For example, if the passenger likes birds, the processing unit 13 searches for images and sounds of birds that flew at the travel location of the automobile 30' in a time zone different from the current date and time.
  • similarly, the processing unit 13 accesses the spatio-temporal database 20, searches for the image data group that matches both the position data and the time data of the predetermined automobile 30', and, from that group, searches for image data to which an image data tag matching the passenger's preference information has been attached. The processing unit 13 searches for audio data in the same way. For example, if the passenger likes birds, the processing unit 13 searches for images and sounds of birds that flew at the travel location of the automobile 30' in the same time zone as the current date and time.
  • the processing unit 13 also searches for image data and audio data that match the safe driving information of the passenger. For example, the processing unit 13 searches for an image of a vehicle that was in a traffic jam during a different time zone or the same time zone as the current date and time at a predetermined travel location of the vehicle 30', and for sounds during construction.
  • note that the processing unit 13 may use all of the search methods described in step S203, or only some of them.
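The search variants of step S203 can be sketched as a single filter that combines position matching, optional time-of-day matching, and tag matching; all names, the 0.01-degree tolerance, and the hour-granularity time match below are illustrative assumptions.

```python
# Sketch of step S203: filter candidate records by position, optionally by
# the same hour of day as the current time, then by tags matching the
# passenger's preference (or safe-driving) information.

def matches(rec: dict, pos: tuple, hour=None, wanted_tags=()) -> bool:
    lat, lon = rec["position"]
    if abs(lat - pos[0]) > 0.01 or abs(lon - pos[1]) > 0.01:
        return False                       # not the same travel location
    if hour is not None and (rec["time"] % 86400) // 3600 != hour:
        return False                       # not the same time of day
    return any(t in rec.get("tags", ()) for t in wanted_tags)

db = [
    {"position": (35.68, 139.77), "time": 9 * 3600, "tags": ["bird"]},
    {"position": (35.68, 139.77), "time": 21 * 3600, "tags": ["fireworks"]},
]
# Passenger likes birds; the car is at (35.68, 139.77) at 9 o'clock.
same_hour_birds = [r for r in db
                   if matches(r, (35.68, 139.77), hour=9, wanted_tags=["bird"])]
```

Dropping the `hour` argument gives the position-only variant; swapping `wanted_tags` for safe-driving tags gives the safety-oriented search, so the same filter covers all the methods the step describes.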
  • Step S204: the processing unit 13 selects one or more image data and audio data suitable for the passenger of the predetermined automobile 30' from among the plurality of retrieved image data and audio data. The processing unit 13 then extracts the objects contained in the image data, and the audio data, from the spatio-temporal database 20. In the above example, the processing unit 13 extracts, for instance, the bird object and the bird's song from among the bird images, the bird's song, the images of cars in a traffic jam, and the construction sounds.
  • based on the age of the passenger of the predetermined automobile 30', demographics obtained from the Internet, and the like, the processing unit 13 infers which image data would interest the passenger, and extracts educational or entertaining objects appropriate to that age (a walking horse, a rainbow, etc.).
  • the processing unit 13 extracts dynamic objects (automobiles, road construction scenes, etc.) recorded in the past under conditions very similar to the current weather and time at the travel location of the predetermined automobile 30'.
  • the processing unit 13 may also take into account the state of traffic, including vehicles seen ahead traveling at approximately the same speed, and in particular how well the traveling speeds match, and extract scenes in which heavy traffic persisted for a long period of time.
  • when the processing unit 13 receives the driver's line-of-sight position data measured by the line-of-sight position measurement sensor of the predetermined automobile 30' and detects, from an unsettled line-of-sight position and movement, that the driver is tired of or bored with driving, it selects image data that matches a tag of the "driver's interest" category.
  • in selecting data, the processing unit 13 gives priority to the preferences of the passengers of the predetermined automobile 30', but exceptionally gives priority to the safe driving information.
  • Step S205: the processing unit 13 determines the timing of outputting the object and audio data extracted from the spatio-temporal database 20.
  • Step S206: finally, the transmission unit 14 outputs, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying the object extracted from the spatio-temporal database 20 at the above timing, and audio output information for outputting the audio data extracted from the spatio-temporal database 20 at the above timing.
  • the output device 35 displays the object output from the information processing device 10 in an AR manner to the passenger of the predetermined automobile 30', and outputs the audio data as audio.
  • the output device 35 may display the image of the landscape taken by the camera 31 while traveling in the past at that location.
  • the bird object that was flying is AR-displayed at the same position as the past flight position, and the bird's cry is output as voice.
  • alternatively, with respect to the actual scenery seen through the front window of the predetermined automobile 30', the output device 35 AR-displays on the front window a bird object that flew at that travel location in the past, at the same position as its past flight position, and outputs the bird's song from a speaker.
  • the objects displayed in AR are not virtual objects like conventional ones, but actual things that existed in the past or actual events that happened in the past. Therefore, it is possible to give the feeling of adding genuine value to the passengers of the predetermined automobile 30'. Passengers see realistic objects, so they can enjoy a more enjoyable trip, and they can drive safely without distracting their attention from the road while maintaining their awareness. As a result, attractive information can be provided to the passengers of the predetermined automobile 30'.
  • the passenger of the predetermined automobile 30' may select one of the displayed objects by a touch operation on the screen, a swipe operation, voice input, or another gesture.
  • the selected object information is fed back to the user's preference machine learning model described in step S202, and learned as the passenger's preference.
  • the object display information that the information processing device 10 transmits to the predetermined automobile 30' is display information for AR-displaying the object, superimposed on the scene seen from the predetermined automobile 30', at the same display position at which the object was displayed in the past image data.
  • in other words, the information processing apparatus 10 generates the object display information using, as is, the attribute information (position, size, color, shape, vector, etc.) of the object extracted from the past image data, without changing the display position at which the object appeared in that image data.
  • the output device 35 outputs the object as is at the display position it had in the image data in which it was photographed in the past. Since the predetermined automobile 30' is driving in the same place where the object was photographed, the object is superimposed on the surrounding background without any sense of incongruity and without processing to change its position or size.
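As a non-limiting illustrative sketch — the function name and record fields below are assumptions for illustration, not part of the present disclosure — the "use the past attributes as is" step amounts to a pass-through that copies the stored attributes into the display command, with no re-projection or rendering mathematics:

```python
# Illustrative sketch: the object's past attributes (display position, size,
# color, ...) are reused unchanged, so no re-rendering is needed.

def make_object_display_info(past_object: dict) -> dict:
    """Build AR display information by copying, unchanged, the attributes
    the object had in the past image data."""
    return {
        "object_id": past_object["object_id"],
        "bbox": past_object["bbox"],    # same display position as in the past frame
        "size": past_object["size"],    # same size
        "color": past_object.get("color"),
        "command": "AR_DISPLAY",        # superimpose on the current scene
    }

past_bird = {"object_id": "bird-42", "bbox": (120, 40, 60, 30),
             "size": (60, 30), "color": "white"}
info = make_object_display_info(past_bird)
```

Because the display position is copied verbatim, the computational cost is a dictionary copy rather than a 3D re-projection.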
  • conventional AR display is computationally inefficient because of its extremely high rendering cost.
  • because automobiles tend to travel in the same lane, the actual scene seen from the automobile and the past image have the same angle of view. The computing power required for the synchronization process that aligns them is therefore much less than before, and since no complex calculations need to be performed, attractive information can be provided at high speed.
  • the photographed image data is image data photographed in the past by a vehicle traveling at a predetermined speed, at a predetermined position, and during a predetermined time period.
  • the information processing apparatus 10 displays the bird object included in that image data on the output device 35 of an automobile currently traveling in the same place.
  • the AR-displayed bird object appears just as it was seen from the car's camera. To AR-display such a bird object, it is therefore only necessary to place it, at the appropriate timing, in a frame such as a tablet terminal or the front window. Since the bird object appears at the same angle and position, there is no need to calculate how it would look from different angles.
  • the information processing apparatus 10 outputs, to the vehicle, object display information for AR-displaying an object and audio output information for outputting the object's audio data; the vehicle then AR-displays the object on a landscape image or the real scene and outputs the object's audio data.
  • the information processing apparatus 10 not only superimposes and outputs the object itself, but can also simultaneously output to the automobile commentary information explaining the object, that is, an actual thing that existed at a predetermined place in the past or an actual event that occurred there in the past, AR-display that commentary information, and further output audio data of the commentary information.
  • the processing unit 13 of the information processing apparatus 10 generates commentary information about the object of the image data from the search key used when searching the spatio-temporal database 20 for the image data and audio data in step S203. Specifically, the processing unit 13 generates the commentary information for the object based on the position data, time data, image data tag, audio data tag, and the like used to search for the object in the image data group of the spatio-temporal database.
  • the transmission unit 14 transmits, to the predetermined automobile 30', object commentary information output information for AR-displaying and audio-outputting the commentary information of the object.
  • the object commentary information output information includes the commentary information of the object, the voice data of the commentary information, an AR display command for the commentary information, an output command for the voice data, and the like.
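As a non-limiting illustrative sketch — the field names, message layout, and wording template below are assumptions for illustration, not part of the present disclosure — generating commentary text from the step-S203 search key and packaging it with the display and audio commands could look like this:

```python
# Illustrative sketch: commentary information is derived from the search key
# (position, time, tags) used to find the object, then bundled with an AR
# display command and an audio output command.

def make_commentary(search_key: dict) -> str:
    """Render a human-readable commentary string from the search key."""
    return (f"A {search_key['image_tag']} occurred at this place "
            f"on {search_key['time']} (position {search_key['position']}).")

def make_commentary_output(search_key: dict, audio_data: bytes) -> dict:
    """Package commentary text, its audio, and the output commands."""
    return {
        "commentary": make_commentary(search_key),
        "commentary_audio": audio_data,  # e.g. synthesized speech of the text
        "commands": ["AR_DISPLAY_COMMENTARY", "OUTPUT_AUDIO"],
    }

key = {"image_tag": "car accident", "time": "2021-11-12 09:30",
       "position": (35.68, 139.77)}
out = make_commentary_output(key, b"<speech-audio>")
```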
  • the output device 35 of the predetermined automobile 30' AR-displays, to the passengers of the predetermined automobile 30', the commentary information of the object included in the object commentary information output information from the information processing device 10, and outputs the voice data of the commentary information as audio.
  • the commentary information of the object is also AR-displayed and audio-output at the same time.
  • the state of the car that caused a traffic accident is AR-displayed and output as audio.
  • the commentary information "A car accident occurred at this place on ⁇ (date and time)" is AR-displayed and output as audio.
  • an automobile has been described as an example of a moving body.
  • the mobile body of the present invention can be applied to various vehicles such as trains, monorails, aircraft, airships, boats, submarines, motorcycles, bicycles, and ropeways.
  • it can also be applied to a maritime safety patrol boat, and scenes of past water accident rescues can be AR-displayed while the boat moves on the water. Since there are no fixed travel sections such as roads on the water, the position may not be synchronized even if the object is output as is at the display position it had in the past image data; this application is therefore effective when it is sufficient to grasp the object at an approximate position.
  • the information processing device 10, which is communicably connected to the spatio-temporal database 20 that processes spatio-temporal data in real time and which provides information to the passengers of a predetermined automobile 30', includes: a collection unit 11 that collects, from a plurality of automobiles 30, image data photographed by the plurality of automobiles 30 at a plurality of locations at a plurality of dates and times, the time data at the time of photographing, and the position data of each automobile at the time of photographing, and transmits the collected data to the spatio-temporal database 20; a reception unit 12 that receives the position data of the predetermined automobile 30' from the predetermined automobile 30'; a processing unit 13 that searches the image data group of the spatio-temporal database 20 for an object in the image data corresponding to the position data of the predetermined automobile 30'; and a transmission unit 14 that transmits, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying the object superimposed on the scene of the predetermined automobile 30' at the same display position the object had in the past.
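As a non-limiting illustrative sketch of the receive-search-transmit flow just described — the in-memory dictionary stands in for the spatio-temporal database 20, and all names and the grid quantization are assumptions for illustration, not part of the present disclosure:

```python
# Illustrative sketch: the position received from the automobile is used to
# look up objects photographed at (roughly) the same place in the past, and
# the matches are returned as AR display messages.

def position_key(lat: float, lon: float, grid: float = 0.001) -> tuple:
    """Quantize a position so nearby photographs share a search key."""
    return (round(lat / grid), round(lon / grid))

# Past image-data objects, indexed by where they were photographed
# (stand-in for the image data group of the spatio-temporal database).
image_data_group = {
    position_key(35.6812, 139.7671): [
        {"object_id": "bird-42", "bbox": (120, 40, 60, 30), "tag": "bird"},
    ],
}

def search_objects(lat: float, lon: float) -> list:
    """Processing-unit role: find objects photographed at the same place."""
    return image_data_group.get(position_key(lat, lon), [])

def handle_position_report(lat: float, lon: float) -> list:
    """Reception/transmission role: build the display messages to send back."""
    return [{"command": "AR_DISPLAY", **obj} for obj in search_objects(lat, lon)]

msgs = handle_position_report(35.6812, 139.7671)
```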
  • the information processing apparatus 10 can reproduce an actual object that existed in a predetermined place in the past or an actual event that occurred in the past in a predetermined place.
  • The AR display is superimposed on the current environment of the moving object. For passengers who want to see real-world things and events rather than virtual objects, this gives the sense that genuine value is being added, so attractive information can be provided to the passengers of the mobile object.
  • the information processing apparatus 10 transmits object display information for AR-displaying the retrieved object superimposed on the scene from the predetermined automobile 30' at the same display position the object had before. Since there is no processing to render objects according to the passenger's viewpoint, much less computing power is required than for general AR display, and attractive information can be provided to passengers of mobile objects at high speed.
  • the information processing apparatus 10 retrieves objects related to the profile of the passengers of the predetermined automobile 30', so it can present objects and events that match the passengers' tastes and remain in their memories.
  • the information processing apparatus 10 of the embodiment described above can be realized using a general-purpose computer system including, for example, a CPU 901, a memory 902, a storage 903, a communication device 904, an input device 905, and an output device 906, as shown in FIG. 4.
  • Memory 902 and storage 903 are storage devices.
  • each function of the information processing apparatus 10 is realized by the CPU 901 executing a predetermined program loaded into the memory 902.
  • the information processing device 10 may be implemented by one computer.
  • the information processing device 10 may be implemented by multiple computers.
  • the information processing device 10 may be a virtual machine implemented in a computer.
  • a program for the information processing device 10 can be stored in a computer-readable recording medium such as an HDD, SSD, USB memory, CD, or DVD.
  • the program for the information processing device 10 may be installed and executed on a computer (server device, etc.) within the communication network in order to perform sensing and control from a remote location via the communication network.
  • the program can also be distributed on a recording medium or delivered over a communication network.
  • the program, distributed on a recording medium or delivered over a communication network, may also be installed and executed on a computer in the automobile 30 (an in-vehicle device or the like).
  • An information processing device that is communicatively connected to a spatio-temporal database processing spatio-temporal data in real time and that provides information to passengers of a mobile object, wherein the device: collects, from a plurality of mobile bodies, image data photographed by the plurality of mobile bodies at a plurality of locations at a plurality of dates and times, time data at the time of photographing, and position data of each mobile body at the time of photographing, and transmits the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database; receives, from a predetermined mobile body, position data of the predetermined mobile body; retrieves, from the image data group of the spatio-temporal database, an object in the image data corresponding to the position data of the predetermined mobile body; and transmits, to the augmented reality display device of the predetermined mobile body, object display information for displaying the object in augmented reality by superimposing it on the scene of the predetermined mobile body at the same display position where the object was displayed in the past.
  • A non-transitory storage medium storing a program executable by a computer to perform information processing that is communicatively connected to a spatio-temporal database processing spatio-temporal data in real time and that provides information to passengers of a mobile object, wherein the information processing includes: collecting, from a plurality of mobile bodies, image data photographed by the plurality of mobile bodies at a plurality of locations at a plurality of dates and times, time data at the time of photographing, and position data of each mobile body at the time of photographing, and transmitting the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database; receiving, from a predetermined mobile body, position data of the predetermined mobile body; retrieving, from the image data group of the spatio-temporal database, an object in the image data corresponding to the position data of the predetermined mobile body; and transmitting, to the augmented reality display device of the predetermined mobile body, object display information for displaying the object in augmented reality by superimposing it on the scene of the predetermined mobile body at the same display position where the object was displayed in the past.
  • 1: information processing system, 10: information processing device, 11: collection unit, 12: reception unit, 13: processing unit, 14: transmission unit, 20: spatio-temporal database, 30: automobile, 31: camera, 32: position sensor, 33: clock, 34: input device, 35: output device, 901: CPU, 902: memory, 903: storage, 904: communication device, 905: input device, 906: output device

Abstract

An information processing apparatus 10 that is communicably connected to a time-and-space database 20 for processing data on time and space in real time and provides information to a passenger of a moving body includes: a collection unit 11 that collects, from a plurality of moving bodies, pieces of image data obtained by imaging performed by the plurality of moving bodies at a plurality of locations at a plurality of dates and times, pieces of time data at the imaging, and pieces of position data of the moving bodies during the imaging, and transmits the collected plurality of pieces of image data, the plurality of pieces of time data, and the plurality of pieces of position data to the time-and-space database; a reception unit 12 that receives, from a specific moving body, the position data of the specific moving body; a processing unit 13 that searches an image data group of the time-and-space database for an object in the image data corresponding to the position data of the specific moving body; and a transmission unit 14 that transmits, to an augmented reality display apparatus of the specific moving body, object display information to allow the object to be displayed in augmented reality by being superimposed on a scene of the specific moving body at the same display position as a display position where the object has been displayed in the past.

Description

Information processing apparatus, information processing method, and information processing program
 The present invention relates to an information processing apparatus, an information processing method, and an information processing program.
 Conventionally, an augmented reality display technology called AR (Augmented Reality) display, which superimposes a virtual world on the real world, is known (see Patent Document 1 and Non-Patent Document 1).
U.S. Pat. No. 8,687,021; JP 2020-13539 A
 However, conventional AR display merely superimposes virtual objects on the real environment, and therefore has the problem of being uninteresting.
 The present invention has been made in view of the above circumstances, and an object of the present invention is to provide a technique capable of providing attractive information to passengers of a mobile object.
 An information processing apparatus according to one aspect of the present invention is communicably connected to a spatio-temporal database that processes spatio-temporal data in real time and provides information to a passenger of a mobile object. The apparatus includes: a collection unit that collects, from a plurality of mobile objects, image data photographed by the plurality of mobile objects at a plurality of locations at a plurality of dates and times, time data at the time of photographing, and position data of each mobile object at the time of photographing, and transmits the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database; a reception unit that receives, from a predetermined mobile object, position data of the predetermined mobile object; a processing unit that searches the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined mobile object; and a transmission unit that transmits, to the augmented reality display device of the predetermined mobile object, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined mobile object at the same display position where the object was displayed in the past.
 An information processing method according to one aspect of the present invention is performed by an information processing apparatus that is communicably connected to a spatio-temporal database processing spatio-temporal data in real time and that provides information to a passenger of a mobile object. The method includes the steps of: collecting, from a plurality of mobile objects, image data photographed by the plurality of mobile objects at a plurality of locations at a plurality of dates and times, time data at the time of photographing, and position data of each mobile object at the time of photographing, and transmitting the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database; receiving, from a predetermined mobile object, position data of the predetermined mobile object; searching the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined mobile object; and transmitting, to the augmented reality display device of the predetermined mobile object, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined mobile object at the same display position where the object was displayed in the past.
 An information processing program according to one aspect of the present invention causes a computer in a communication network to function as the above information processing apparatus.
 An information processing program according to one aspect of the present invention causes a computer in a mobile object to function as the above information processing apparatus.
 According to the present invention, it is possible to provide a technique capable of providing attractive information to passengers of a mobile object.
FIG. 1 is a diagram showing a configuration example of an information processing system. FIG. 2 is a diagram showing the processing flow of the data collection operation. FIG. 3 is a diagram showing the processing flow of operations such as AR display of an object. FIG. 4 is a diagram showing a hardware configuration example of the information processing apparatus.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the description of the drawings, the same parts are denoted by the same reference numerals, and redundant description is omitted.
 [Summary of Invention]
 An object of the present invention is to provide attractive information to passengers of a mobile object. To achieve this object, the present invention performs AR display that superimposes the past real world on the current real world, instead of the conventional AR display that superimposes a virtual world on the real world. In other words, by AR-displaying not virtual objects but real objects that actually existed in the past, the present invention gives the feeling that genuine value is added to the scene viewed from the mobile object.
 However, to perform the AR display of the present invention, an enormous amount of past real-world spatio-temporal data, generated at multiple locations at multiple dates and times, must be processed at ultra-high speed. The present invention therefore uses a spatio-temporal database that can store enormous amounts of spatio-temporal data all at once and can search and analyze, in real time, past spatio-temporal data and objects that occurred at a given place at a given date and time (see Patent Document 2 and Non-Patent Document 2).
 In other words, the present invention uses a spatio-temporal database with real-time search performance to AR-display actual things that existed at a predetermined place in the past, or actual events that occurred there in the past, superimposed on the current environment of a mobile object located at that same place.
 The AR display of the present invention makes it possible to provide interesting, timeless, and attractive information. For example, passengers of a mobile object can enjoy an enhanced travel experience. Specifically, AR-displaying a bronze statue of a historical figure that was once installed along the road on which a car is traveling can provide social education to children riding in the car. AR-displaying a fireworks show held in the past at the place being traveled can give the driver an enjoyable travel experience. The present invention can also improve driving safety, for example by keeping the driver from being distracted and by relieving the boredom felt while driving on roads with little traffic.
 [Configuration example of information processing system]
 FIG. 1 is a diagram showing a configuration example of an information processing system 1 according to this embodiment. The information processing system 1 includes an information processing apparatus 10, a spatio-temporal database 20, and a plurality of automobiles 30, which are connected via a communication network so as to be able to communicate with one another.
 The information processing apparatus 10 provides attractive information to the passengers of an automobile 30 (driver, fellow passenger, operator, passenger, etc.). The information processing apparatus 10 may be configured as a computer in the communication network (a server apparatus or the like) or as a computer in the automobile 30 (an in-vehicle device or the like).
 As shown in FIG. 1, the information processing apparatus 10 includes a collection unit 11, a reception unit 12, a processing unit 13, and a transmission unit 14.
 The collection unit 11 has a function of collecting, from the plurality of automobiles 30, image data photographed by those automobiles at a plurality of locations at a plurality of dates and times, recorded audio data, various measured sensor data, and the position data and time data of each automobile 30 at the time of photographing, recording, or measurement.
 The collection unit 11 has a function of transmitting the plurality of image data, audio data, various measurement data, position data, and time data to the spatio-temporal database 20, and storing them in the spatio-temporal database 20 in association with one another.
 The collection unit 11 has a function of attaching to each image data an image data tag indicating its image content. Similarly, the collection unit 11 has a function of attaching to each audio data an audio data tag indicating its audio content.
 The reception unit 12 has a function of receiving, from a predetermined automobile 30', the position data and time data of the predetermined automobile 30'.
 The reception unit 12 has a function of receiving, from the predetermined automobile 30', profile information of the passengers of the predetermined automobile 30'.
 処理部13は、所定の自動車30’の位置データに対応する画像データ内のオブジェクト、音声データを時空間データベース20の画像データ群の中、音声データ群の中から検索する機能を備える。同様に、処理部13は、所定の自動車30’の位置データと時刻データとの両方に対応する画像データ内のオブジェクト、音声データを時空間データベース20の画像データ群の中、音声データ群の中から検索する機能を備える。 The processing unit 13 has a function of searching the image data group and the audio data group of the spatio-temporal database 20 for an object and audio data in the image data corresponding to the position data of the predetermined automobile 30'. Similarly, the processing unit 13 selects an object in the image data corresponding to both the position data and the time data of the predetermined vehicle 30' and the audio data in the image data group and the audio data group in the spatio-temporal database 20. Equipped with a function to search from
 The processing unit 13 has a function of searching, based on the image data tags and audio data tags, for an object in image data and for audio data corresponding to the position data of the predetermined automobile 30'. Similarly, the processing unit 13 has a function of searching, based on these tags, for an object in image data and for audio data corresponding to both the position data and the time data of the predetermined automobile 30'.
 処理部13は、所定の自動車30’の搭乗者のプロフィール情報に関連する画像データ内のオブジェクト、音声データを検索する機能を備える。 The processing unit 13 has a function of retrieving objects and audio data in image data related to profile information of passengers of a predetermined automobile 30'.
 処理部13は、所定の自動車30’の運転に影響する画像データ内のオブジェクト、音声データを検索する機能を備える。 The processing unit 13 has a function of searching for objects in image data and audio data that affect driving of a predetermined automobile 30'.
 The transmission unit 14 has a function of transmitting, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying (augmented reality display) the object in the image data retrieved by the processing unit 13, superimposed on the scene of the predetermined automobile 30' at the same display position where the object was displayed in the past in its image data. Similarly, the transmission unit 14 has a function of transmitting, to the output device 35 of the predetermined automobile 30', audio output information for outputting the audio data retrieved by the processing unit 13.
 The spatio-temporal database 20 is a database technology based on ultra-high-speed spatio-temporal data management, including high-speed spatio-temporal data search, which accumulates spatio-temporal data such as image data transmitted simultaneously by the plurality of automobiles 30 while searching and analyzing, in real time, spatio-temporal data corresponding to a predetermined place and time as well as dynamic and static objects located at a predetermined place at a predetermined time, and which enables ultra-high-speed search and classification analysis of the position data and time data of things.
 This spatio-temporal database 20 is disclosed in Patent Document 2 and Non-Patent Document 2. For example, as disclosed in Non-Patent Document 2, the spatio-temporal database 20 achieves high-speed search of spatio-temporal data and objects by, among other things, using a spatio-temporal code as the key of a distributed key-value store to make multidimensional information retrieval more efficient.
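The exact spatio-temporal code of Non-Patent Document 2 is not reproduced here, but the general idea of such keys can be sketched as follows — everything below (the bit-interleaved Morton code, the quantization widths, and the hour bucketing) is an assumption for illustration, not the actual encoding of the referenced database:

```python
# Illustrative sketch of a spatio-temporal key: a Morton (bit-interleaved)
# code of quantized latitude/longitude, followed by an hour bucket. Records
# close in space then share a key prefix, so a sorted key-value store can
# answer "same place, any past time" queries with a cheap prefix range scan.

def morton(x: int, y: int, bits: int = 16) -> int:
    """Interleave the low `bits` bits of x and y (Z-order curve)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def spatio_temporal_key(lat: float, lon: float, epoch_s: int) -> str:
    xi = int((lon + 180.0) / 360.0 * (1 << 16))  # quantize lon to 16 bits
    yi = int((lat + 90.0) / 180.0 * (1 << 16))   # quantize lat to 16 bits
    hour_bucket = epoch_s // 3600
    return f"{morton(xi, yi):010d}:{hour_bucket:08d}"

k1 = spatio_temporal_key(35.6812, 139.7671, 1_636_700_000)
k2 = spatio_temporal_key(35.6812, 139.7671, 1_636_703_600)  # one hour later
```

Because both keys share the spatial prefix before the colon, all past records for the same place can be retrieved with one prefix scan, which is what makes the position-based object search fast.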
 The plurality of automobiles 30 are connected cars connected to the communication network. Each automobile 30 includes a camera 31, a position sensor 32, a clock 33, an input device 34, and an output device 35. The automobile 30 may further include a speed sensor, an acceleration sensor, a steering wheel angle sensor, a gaze position measurement sensor, and the like.
 The camera 31 is an imaging device with a sound pickup function installed in the automobile 30. The camera 31 has functions of photographing the front, rear, left, right, downward, upward, and oblique directions of the automobile 30, and of picking up sounds generated around the automobile 30.
 The position sensor 32 has a function of acquiring, from the global positioning system, position data indicating the geographical two-dimensional coordinate position of the automobile 30. The position sensor 32 is, for example, a positioning function built into a navigation system.
 The clock 33 has a function of measuring the time data of the automobile 30. The clock 33 may measure the time itself, or may obtain the time via the communication network using the Network Time Protocol. The clock 33 may be omitted if the camera 31 or the position sensor 32 has a built-in time measurement function.
 The input device 34 is a computer installed in the automobile 30, for example, a tablet terminal or an in-vehicle device.
 The output device 35 is an AR display device capable of AR-displaying objects and an audio output device capable of outputting audio data. The output device 35 is, for example, a computer that outputs the video and audio from the camera 31, a navigation system, or a tablet terminal. The output device 35 may also be, for example, a projector that projects images onto the front window of the automobile 30 and outputs audio, a head-mounted display, or a head-up display. That is, the output device 35 may be a conventionally known general screen or display, VR goggles or a glasses-type display device, a window installed in the mobile object, a projector, or the like.
 [Example of operation of the information processing apparatus]
 Next, an operation example of the information processing apparatus 10 will be described.
  [データ収集動作]
 まず、情報処理装置10が複数の自動車30から画像データ等を収集する動作を説明する。図2は、データ収集動作の処理フローを示す図である。
[Data collection operation]
First, the operation of the information processing device 10 collecting image data and the like from a plurality of automobiles 30 will be described. FIG. 2 is a diagram showing a processing flow of data collection operation.
 ステップS101;
 収集部11は、走行中又は一時停止中の複数の自動車30の周囲に存在する実際のモノや複数の自動車30の周囲で起こっている実際の出来事に関する画像データ等を常時収集する。具体的には、収集部11は、走行中又は一時停止中である複数の自動車30から、その複数の自動車30により複数の場所で複数の日時に各自動車30のカメラ31でそれぞれ撮影された画像データ、録音された音声データ、位置センサ32で取得された各自動車30の位置データ、クロック33で測定された各自動車30の時刻データを常時収集する。
Step S101;
The collection unit 11 constantly collects image data and the like regarding actual things existing around the plurality of automobiles 30 that are running or temporarily stopped and actual events occurring around them. Specifically, the collection unit 11 constantly collects, from the plurality of automobiles 30 that are running or temporarily stopped, the image data captured by the camera 31 of each automobile 30 at a plurality of locations on a plurality of dates and times, the recorded audio data, the position data of each automobile 30 acquired by the position sensor 32, and the time data of each automobile 30 measured by the clock 33.
 例えば、収集部11は、自動車、道路、標識、人、動物、木、川、芝生、家、天気、太陽、雲、虹、雷、その他の潜在的に興味深い人間や自然の出来事に関する画像データ等を常時収集する。自動車30に速度センサ、加速度センサ、ハンドル角度センサ、視線位置測定センサ等も搭載されている場合には、収集部11は、各センサでそれぞれ測定された各種の測定データも同時に常時収集する。 For example, the collection unit 11 constantly collects image data and the like regarding automobiles, roads, signs, people, animals, trees, rivers, lawns, houses, weather, the sun, clouds, rainbows, lightning, and other potentially interesting human and natural events. When the automobile 30 is also equipped with a speed sensor, an acceleration sensor, a steering-wheel angle sensor, a gaze position measurement sensor, and the like, the collection unit 11 also constantly collects, at the same time, the various measurement data measured by each of these sensors.
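The per-capture data bundle described in step S101 can be sketched as a simple record type. This is only an illustration of the kind of structure the collection unit 11 might assemble; the field names (`vehicle_id`, `sensors`, etc.) are hypothetical and do not appear in the specification.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureRecord:
    """One capture from a vehicle: image/audio plus position, time, and optional sensor readings."""
    vehicle_id: str
    image_data: bytes
    audio_data: bytes
    latitude: float       # two-dimensional geographic position from the positioning system
    longitude: float
    timestamp: float      # seconds since epoch, from the vehicle clock or a network time protocol
    sensors: dict = field(default_factory=dict)  # e.g. speed, acceleration, steering-wheel angle

def collect(vehicle_id, image, audio, lat, lon, ts, **sensors):
    # The collection unit would call something like this continuously for every vehicle.
    return CaptureRecord(vehicle_id, image, audio, lat, lon, ts, dict(sensors))

rec = collect("car-30", b"<jpeg>", b"<pcm>", 35.68, 139.76, 1700000000.0, speed=42.0)
```

Extra sensor measurements (speed, acceleration, and so on) ride along in the `sensors` dictionary so vehicles with different equipment can share one record layout.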
 ステップS102;
 次に、収集部11は、複数の自動車30から収集した画像データ、音声データ、各種の測定データ、位置データ、時刻データを用いて、その画像データ及び音声データが所属するカテゴリをそれぞれ決定し、そのカテゴリを示す画像データ用タグを画像データに付与し、そのカテゴリを示す音声データ用タグを音声データに付与する。
Step S102;
Next, the collection unit 11 uses the image data, audio data, various measurement data, position data, and time data collected from the plurality of automobiles 30 to determine the category to which each piece of image data and audio data belongs, attaches an image data tag indicating that category to the image data, and attaches an audio data tag indicating that category to the audio data.
 カテゴリの決定手段については、例えば、画像データ及び音声データがそれぞれ所属するカテゴリを決定するために予め構築した分類タスク用の機械学習モデルを用いる。収集部11は、その機械学習モデルに画像データ又は音声データを入力し、その機械学習モデルから出力されたカテゴリの推論結果を所属先のカテゴリとして決定する。なお、複数の種類の画像データや音声データを機械学習モデルに入力して繰り返し学習させることで、その機械学習モデルのカテゴリ分類精度を改善することができる。 As the means for determining the category, for example, a machine learning model for classification tasks built in advance to determine the category to which image data or audio data belongs is used. The collection unit 11 inputs the image data or audio data into the machine learning model and adopts the category inferred by the model as the category to which the data belongs. Note that the category classification accuracy of the machine learning model can be improved by inputting many types of image data and audio data into it and training it repeatedly.
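The tagging flow of step S102 — run a classifier over the data, then record the inferred category as a tag — can be sketched as follows. The specification assumes a trained machine learning model; `DummyModel` here is a deterministic stand-in whose class list is purely illustrative.

```python
# Sketch of step S102: tag image/audio data with the category inferred by a
# classification model. DummyModel stands in for the pre-built machine
# learning model described in the text (its classes are illustrative).
class DummyModel:
    CLASSES = ["bird", "cow", "rainbow", "fireworks"]

    def predict(self, data: bytes) -> str:
        # A real model would run inference; here we fake a deterministic result.
        return self.CLASSES[len(data) % len(self.CLASSES)]

def attach_tag(model, data: bytes) -> dict:
    category = model.predict(data)   # the inference result becomes the category
    return {"data": data, "tag": category}

tagged = attach_tag(DummyModel(), b"image-bytes")
```

In a real system `predict` would be an inference call on the trained classifier, and repeated training on new captures would refine `CLASSES` and the decision boundaries, as the text notes.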
 ステップS103;
 最後に、収集部11は、複数の画像データ、複数の音声データ、複数の各種の測定データ、複数の位置データ、複数の時刻データ、複数のタグデータを時空間データベース20へ送信し、それらのデータ同士を関連付けて時空間データベース20に蓄積させる。
Step S103;
Finally, the collection unit 11 transmits the plurality of image data, audio data, various measurement data, position data, time data, and tag data to the spatio-temporal database 20, where these data are associated with one another and accumulated.
 情報処理装置10は、ステップS101~ステップS103を毎時刻、毎日、毎月、毎年実行する。その結果、時空間データベース20には、世界各地の各場所で任意の日時に存在していたモノやその時に起こっていた出来事に関する膨大な画像データ等が蓄積される。 The information processing apparatus 10 executes steps S101 to S103 every hour, every day, every month, and every year. As a result, the spatio-temporal database 20 accumulates a huge amount of image data and the like regarding things that existed at arbitrary dates and times in various locations around the world and events that occurred at that time.
 時空間データベース20は、位置データ、時刻データ、画像データ用タグ、音声データ用タグを検索キーとし、その検索キーで検索可能となるように、画像データ、音声データ、各種の測定データ、タグデータを関連付けて記憶する。時空間データベース20の持つ時空間上のデータやオブジェクトの検索、分析のリアルタイム性により、情報処理装置10は、後日、所望の画像データ及び音声データをリアルタイムに検索することができる。 The spatio-temporal database 20 uses position data, time data, image data tags, and audio data tags as search keys, and stores the image data, audio data, various measurement data, and tag data in association with one another so that they can be searched with those keys. Because the spatio-temporal database 20 can search and analyze its spatio-temporal data and objects in real time, the information processing apparatus 10 can later retrieve desired image data and audio data in real time.
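One way to picture the storage layout just described — records keyed by position, time, and tag so they can be searched by those keys — is a small relational schema. The actual database product and schema are not specified in the text; this SQLite sketch only illustrates the indexing idea.

```python
# Illustrative schema for the spatio-temporal database: captures are stored
# with position, time, and tag columns, indexed so they can serve as search
# keys later. (Schema and column names are assumptions, not from the text.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE captures (
        lat REAL, lon REAL, ts REAL,
        image_tag TEXT, audio_tag TEXT,
        image BLOB, audio BLOB
    )
""")
db.execute("CREATE INDEX idx_pos_time ON captures (lat, lon, ts)")
db.execute("CREATE INDEX idx_tags ON captures (image_tag, audio_tag)")

db.execute(
    "INSERT INTO captures VALUES (35.0, 139.0, 1700000000.0, 'bird', 'birdsong', x'00', x'00')"
)
rows = db.execute(
    "SELECT image_tag FROM captures WHERE lat=? AND lon=? AND image_tag=?",
    (35.0, 139.0, "bird"),
).fetchall()
```

The two indexes correspond to the two kinds of lookup the document performs later: position(-and-time) matching, and tag matching against a passenger's preferences.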
  [カテゴリのタグの例]
 ステップS102で説明したカテゴリの画像データ用タグ及び音声データ用タグの例を説明する。
[Category tag example]
An example of the image data tag and the audio data tag of the category described in step S102 will be described.
 収集部11は、「季節」に関するカテゴリのタグを付与する。収集部11は、自動車30の走行場所の季節を後日判定できるように、画像データ内の花の色、花の種類、山の色、雪があるか、音声データ内のホトトギスの鳴き声、セミの鳴き声、時刻データ、等を基に、画像データや音声データの季節を決定し、春、夏、秋、冬、初冬、等のタグを付与する。 The collection unit 11 attaches category tags related to "season". So that the season at the place where the automobile 30 was traveling can be determined later, the collection unit 11 determines the season of the image data and audio data based on the color and type of flowers in the image data, the color of the mountains, whether there is snow, the call of a little cuckoo or the chirping of cicadas in the audio data, the time data, and so on, and attaches tags such as spring, summer, autumn, winter, and early winter.
 収集部11は、「時刻」に関するカテゴリのタグを付与する。収集部11は、画像データ内の日没や日の出の見え方、夜の場合は満月の下の見え方、等を基に、画像データの時刻を決定し、その時刻のタグを付与する。収集部11は、画像データと同時に収集した時刻データをそのまま付与してもよい。 The collection unit 11 attaches category tags related to "time". The collection unit 11 determines the time of the image data based on how the sunset or sunrise appears in the image data, how things look under a full moon at night, and so on, and attaches a tag for that time. Alternatively, the collection unit 11 may attach, as is, the time data collected at the same time as the image data.
 収集部11は、「天気」に関するカテゴリのタグを付与する。収集部11は、画像データ内に太陽があるか、雨があるか、雲が多いか、雲の色、音声データ内に雨の音があるか、雷の音があるか、等を基に、画像データ内や音声データ内の天気を決定し、晴れ、曇り、雨、雪、等のタグを付与する。 The collection unit 11 attaches category tags related to "weather". Based on whether the image data shows the sun, rain, or many clouds, the color of the clouds, whether the audio data contains the sound of rain or thunder, and so on, the collection unit 11 determines the weather in the image data or audio data and attaches tags such as sunny, cloudy, rainy, and snowy.
 収集部11は、「景色」に関するカテゴリのタグを付与する。収集部11は、画像データ内に虹があるか、橋があるか、水田があるか、山があるか、海があるか、タワーがあるか、等を基に、画像データ内の風景を決定し、その風景のタグを付与する。 The collection unit 11 attaches category tags related to "scenery". Based on whether the image data contains a rainbow, a bridge, a paddy field, a mountain, the sea, a tower, and so on, the collection unit 11 determines the scenery in the image data and attaches a tag for that scenery.
 収集部11は、「動物」に関するカテゴリのタグを付与する。収集部11は、画像データ内に牛がいるか、鳥がいるか、音声データ内に羊の鳴き声があるか、馬の鳴き声があるか、等を基に、画像データ内や音声データ内の動物を決定し、その動物のタグを付与する。タグの名称は、犬、鳥等の包括的な名称でもよいし、パグ犬、ビーグル犬等の個別的な名称でもよい。なお、画像データ内に牛等のオブジェクトがいるか否かについては、既存のオブジェクト認識機能を利用可能である。 The collection unit 11 attaches category tags related to "animal". Based on whether there is a cow or a bird in the image data, whether there is the bleating of sheep or the neighing of a horse in the audio data, and so on, the collection unit 11 determines the animal in the image data or audio data and attaches a tag for that animal. The tag name may be a generic name such as dog or bird, or an individual name such as pug or beagle. Note that an existing object recognition function can be used to determine whether an object such as a cow appears in the image data.
 収集部11は、「乗り物」に関するカテゴリのタグを付与する。収集部11は、画像データ内に自動車があるか、電車があるか、音声データ内に電車の走行音があるか、水上バイクの走行音があるか、等を基に、画像データ内や音声データ内の乗り物を決定し、その乗り物のタグを付与する。なお、画像データ内に自動車等の像があるか否かについては、既存の画像認識機能を利用可能である。 The collection unit 11 attaches category tags related to "vehicle". Based on whether there is an automobile or a train in the image data, whether the audio data contains the running sound of a train or of a personal watercraft, and so on, the collection unit 11 determines the vehicle in the image data or audio data and attaches a tag for that vehicle. Note that an existing image recognition function can be used to determine whether an image of an automobile or the like appears in the image data.
 収集部11は、「人」や「人の活動」に関するカテゴリのタグを付与する。収集部11は、画像データ内に稲刈りを行っている様子がある場合には、稲刈りのタグを付与する。収集部11は、画像データ内に花火大会の風景や上空に花火が含まれている場合には、花火大会のタグを付与する。なお、画像データ内に稲刈り等の出来事が含まれているか否かについても、既存の画像認識機能を利用可能である。 The collection unit 11 attaches category tags related to "people" and "human activities". When the image data shows rice being harvested, the collection unit 11 attaches a rice-harvesting tag. When the image data includes the scenery of a fireworks display or fireworks in the sky, the collection unit 11 attaches a fireworks-display tag. An existing image recognition function can also be used to determine whether an event such as rice harvesting appears in the image data.
 収集部11は、「自動車の安全操作」に関するカテゴリのタグを付与する。収集部11は、速度センサ、加速度センサ、ハンドル角度センサの各測定データを基に、自動車30の運転が人手の操作のように滑らかである場合には、安全操作のタグを画像データに付与する。一方、加速度やハンドル角度が短時間で急峻に変化する場合には、危険操作のタグを画像データに付与する。 The collection unit 11 attaches category tags related to "safe operation of the automobile". Based on the measurement data of the speed sensor, acceleration sensor, and steering-wheel angle sensor, the collection unit 11 attaches a safe-operation tag to the image data when the automobile 30 is being driven smoothly, like careful manual operation. Conversely, when the acceleration or steering-wheel angle changes sharply in a short time, it attaches a dangerous-operation tag to the image data.
 収集部11は、「運転者の興味」に関するカテゴリのタグを付与する。収集部11は、視線位置測定センサを基に、運転者の視線が自動車の進行方向のエリア以外の位置にある場合には、運転者の興味ありのタグを画像データに付与する。 The collection unit 11 assigns category tags related to "driver's interest". Based on the line-of-sight position measurement sensor, the collection unit 11 attaches a driver's interest tag to the image data when the line-of-sight of the driver is at a position other than the area in the traveling direction of the vehicle.
 上述したカテゴリのタグは、例にすぎない。将来、撮影等が行われた場所を通過する自動車の搭乗者に対して画像表示や音声出力するために、どの画像や音声を選択すべきかを決定するのに役立つ他のカテゴリも使用可能である。 The category tags described above are merely examples. Other categories can also be used to help decide which images and sounds should be selected for future image display and audio output to occupants of an automobile passing through the place where the capture was made.
  [オブジェクトのAR表示動作]
 次に、情報処理装置10が、時空間データベース20に蓄積されている過去の画像データ等を用いて、所定の自動車30’の搭乗者に対して、所定のオブジェクトをAR表示し、所定の音声データを出力する動作を説明する。図3は、オブジェクトのAR表示等動作の処理フローを示す図である。なお、所定の自動車30’は、画像データ等を情報処理装置10へ送信可能な自動車30でもよいし、画像データ等を情報処理装置10へ送信しない自動車でもよい。
[AR Display Operation of Object]
Next, a description will be given of an operation in which the information processing device 10 uses the past image data and the like accumulated in the spatio-temporal database 20 to AR-display a predetermined object and output predetermined audio data to a passenger of a predetermined automobile 30'. FIG. 3 is a diagram showing the processing flow of operations such as AR display of an object. The predetermined automobile 30' may be an automobile 30 capable of transmitting image data and the like to the information processing device 10, or an automobile that does not transmit image data and the like to the information processing device 10.
 ステップS201;
 受信部12は、現在走行中又は一時停止中である所定の自動車30’の位置センサ32で取得された位置データ、クロック33で測定された時刻データを受信する。所定の自動車30’の搭乗者が自動車30’の入力機器34に自己のプロフィール情報を入力した場合には、受信部12は、そのプロフィール情報も受信する。プロフィール情報には、例えば、入力者の年齢、性別、身長、体重、趣味、仕事、性格、服装、目標等が含まれる。
Step S201;
The receiving unit 12 receives the position data acquired by the position sensor 32 and the time data measured by the clock 33 of the predetermined automobile 30' that is currently running or temporarily stopped. When a passenger of the predetermined automobile 30' has entered his or her profile information into the input device 34 of the automobile 30', the receiving unit 12 also receives that profile information. The profile information includes, for example, the age, sex, height, weight, hobbies, occupation, personality, clothing, goals, and the like of the person entering the information.
 ステップS202;
 次に、処理部13は、搭乗者のプロフィール情報を受信した場合には、そのプロフィール情報を、プロフィールからユーザの嗜好を決定するために予め構築した機械学習モデルに入力し、その機械学習モデルから出力された推論結果を搭乗者の嗜好情報として決定する。
Step S202;
Next, when the processing unit 13 has received the passenger's profile information, it inputs the profile information into a machine learning model built in advance to determine a user's preferences from a profile, and adopts the inference result output from the machine learning model as the passenger's preference information.
 また、処理部13は、所定の自動車30’の位置データ及び時刻データを、安全運転を行うために予め構築した機械学習モデルに入力し、更には搭乗者のプロフィール情報も当該機械学習モデルに入力し、その機械学習モデルから出力された推論結果を搭乗者の安全運転に関する安全運転情報として決定する。 In addition, the processing unit 13 inputs the position data and time data of the predetermined automobile 30' into a machine learning model constructed in advance for safe driving, together with the passenger's profile information, and adopts the inference result output from that machine learning model as safe driving information regarding the safe driving of the passenger.
 上記2種類の機械学習モデルは、搭乗者のプロフィール情報、自動車の位置データ及び時刻データをその機械学習モデルにそれぞれ入力して繰り返し学習することで、搭乗者の嗜好情報及び安全運転情報をそれぞれ改善することができる。また、搭乗者の嗜好情報及び安全運転情報は人により好みが異なるので、搭乗者毎に興味のあることを学習するように、上記2種類の機械学習モデルを搭乗者毎にカスタマイズしてもよい。 The preference information and safe driving information produced by the above two machine learning models can each be improved by repeatedly inputting passenger profile information and the automobile's position and time data into them and training them. In addition, since preference information and safe driving information differ from person to person, the two machine learning models may be customized for each passenger so that each learns what interests that passenger.
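The input/output shape of the preference model in step S202 — a profile goes in, preference labels come out — can be illustrated with a trivial rule-based stub. The specification uses a trained machine learning model here; the rules, field names, and labels below are assumptions made purely for illustration.

```python
# Hedged sketch of step S202: derive preference information from a passenger
# profile. A real system would use the pre-built machine learning model; this
# stub only shows the interface (profile dict in, preference labels out).
def infer_preferences(profile: dict) -> list:
    prefs = []
    hobbies = profile.get("hobbies", [])
    if "birdwatching" in hobbies:
        prefs.append("bird")
    if profile.get("age", 99) < 12:
        prefs.append("animal")        # young passengers: favor animal objects
    return prefs or ["landscape"]     # fall back to generic scenery

p = infer_preferences({"age": 8, "hobbies": ["birdwatching"]})
```

Per-passenger customization, as described above, would amount to maintaining one such model per passenger and feeding selected objects back into its training data.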
 ステップS203;
 次に、処理部13は、時空間データベース20へアクセスし、所定の自動車30’の位置データに一致する画像データ群を検索し、その画像データ群の中から搭乗者の嗜好情報に一致する画像データ用タグが付与されている画像データを検索する。同様に、処理部13は、音声データも検索する。例えば、処理部13は、搭乗者が鳥を好む場合には、所定の自動車30’の走行場所で現在日時と異なる時間帯に飛行していた鳥の画像や鳥の鳴き声を検索する。
Step S203;
Next, the processing unit 13 accesses the spatio-temporal database 20, searches for the group of image data matching the position data of the predetermined automobile 30', and then searches that group for image data to which an image data tag matching the passenger's preference information is attached. Similarly, the processing unit 13 also searches for audio data. For example, if the passenger likes birds, the processing unit 13 searches for images and calls of birds that were flying at the travel location of the predetermined automobile 30' in a time zone different from the current date and time.
 また、処理部13は、時空間データベース20へアクセスし、所定の自動車30’の位置データと時刻データとの両方に一致する画像データ群を検索し、その画像データ群の中から搭乗者の嗜好情報に一致する画像データ用タグが付与されている画像データを検索する。同様に、処理部13は、音声データも検索する。例えば、処理部13は、搭乗者が鳥を好む場合には、所定の自動車30’の走行場所で現在日時と同じ時間帯に飛行していた鳥の画像や鳥の鳴き声を検索する。 The processing unit 13 also accesses the spatio-temporal database 20, searches for the group of image data matching both the position data and the time data of the predetermined automobile 30', and then searches that group for image data to which an image data tag matching the passenger's preference information is attached. Similarly, the processing unit 13 also searches for audio data. For example, if the passenger likes birds, the processing unit 13 searches for images and calls of birds that were flying at the travel location of the predetermined automobile 30' in the same time zone as the current date and time.
 同様に、処理部13は、搭乗者の安全運転情報に一致する画像データ及び音声データも検索する。例えば、処理部13は、所定の自動車30’の走行場所で現在日時と異なる時間帯又は同じ時間帯に渋滞中であった自動車の画像や工事中の音を検索する。 Similarly, the processing unit 13 also searches for image data and audio data that match the passenger's safe driving information. For example, the processing unit 13 searches for images of automobiles that were caught in congestion, or sounds of construction work, at the travel location of the predetermined automobile 30' in a time zone either different from or the same as the current date and time.
 なお、処理部13は、ステップS203で説明した複数の検索方法のうち全ての検索方法を用いてもよいし、一部の検索方法を用いてもよい。 Note that the processing unit 13 may use all of the plurality of search methods described in step S203, or may use some of the search methods.
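The searches of step S203 share one pattern: filter by position, optionally also by time of day, then match tags against preference (or safe driving) information. The sketch below makes that pattern concrete; the record layout and the position-matching threshold are illustrative assumptions, not taken from the specification.

```python
# Sketch of the step S203 searches: filter stored captures by position,
# optionally by time-of-day, then keep those whose tag matches the
# passenger's preference (or safe-driving) information.
def search(records, lat, lon, pref_tags, hour=None, pos_eps=0.001):
    hits = []
    for r in records:
        if abs(r["lat"] - lat) > pos_eps or abs(r["lon"] - lon) > pos_eps:
            continue                              # position must match
        if hour is not None and r["hour"] != hour:
            continue                              # optionally require same time zone
        if r["tag"] in pref_tags:
            hits.append(r)
    return hits

records = [
    {"lat": 35.0, "lon": 139.0, "hour": 9,  "tag": "bird"},
    {"lat": 35.0, "lon": 139.0, "hour": 21, "tag": "bird"},
    {"lat": 36.0, "lon": 139.0, "hour": 9,  "tag": "bird"},
]
same_time = search(records, 35.0, 139.0, {"bird"}, hour=9)  # position + time match
any_time = search(records, 35.0, 139.0, {"bird"})           # position match only
```

Running both variants corresponds to the text's two searches: position-only (objects from a different time zone) versus position-and-time (objects from the same time zone).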
 ステップS204;
 次に、処理部13は、検索した複数の画像データ及び複数の音声データの中から、所定の自動車30’の搭乗者に好適な1つ以上の画像データ及び音声データを選択する。その後、処理部13は、その画像データに含まれるオブジェクト及び音声データを時空間データベース20から抽出する。上記例の場合、処理部13は、鳥の画像、鳥の鳴き声、渋滞中の自動車の画像、工事中の音の中から、例えば、鳥のオブジェクト及び鳥の鳴き声を抽出する。
Step S204;
Next, the processing unit 13 selects one or more image data and audio data suitable for the passenger of the predetermined automobile 30' from among the searched plurality of image data and the plurality of audio data. After that, the processing unit 13 extracts the object and audio data included in the image data from the spatio-temporal database 20 . In the case of the above example, the processing unit 13 extracts, for example, a bird object and a bird's song from among the bird image, the bird's song, the image of the car in the traffic jam, and the sound during construction.
 好適な画像データ及び音声データの選択は、様々である。 Suitable image data and audio data can be selected in various ways.
 例えば、処理部13は、所定の自動車30’の搭乗者の年齢やインターネットより取得した人口統計等を基に搭乗者に興味深い複数の画像データを推測し、その年齢に応じた教育やエンターテインメントに関する動的オブジェクト(歩行中の馬、虹、等)を抽出する。 For example, the processing unit 13 infers, based on the age of the passenger of the predetermined automobile 30', demographic statistics obtained from the Internet, and the like, which image data would interest the passenger, and extracts dynamic objects related to education or entertainment appropriate to that age (a walking horse, a rainbow, etc.).
 例えば、処理部13は、所定の自動車30’の走行場所で現在の天候及び時刻と非常に類似する条件下で過去に走行していた動的オブジェクト(自動車、道路の工事風景、等)を抽出する。 For example, the processing unit 13 extracts dynamic objects (automobiles, road construction scenes, etc.) that were present in the past at the travel location of the predetermined automobile 30' under conditions very similar to the current weather and time.
 例えば、処理部13は、所定の自動車30’が夜間や霧の多い状況で走行中の場合には、ほぼ同じ走行速度で前方に見えている自動車を含む交通トラフィックの様子、特に走行速度の一致する可能性が長時間で高いトラフィックの様子を抽出する。 For example, when the predetermined automobile 30' is traveling at night or in foggy conditions, the processing unit 13 extracts traffic conditions including vehicles visible ahead at approximately the same traveling speed, in particular traffic whose traveling speed is likely to remain matched over a long period of time.
 例えば、処理部13は、所定の自動車30’の視線位置測定センサにより測定された運転者の視線位置データを受信し、運転者の視線位置や視線移動が定まらず、疲れていたり、運転に退屈したりしている場合には、「運転者の興味」に関するカテゴリのタグに一致する画像データを選択する。 For example, the processing unit 13 receives the driver's gaze position data measured by the gaze position measurement sensor of the predetermined automobile 30', and when the driver's gaze position and gaze movement are unsettled, indicating that the driver is tired or bored with driving, it selects image data matching the tag of the "driver's interest" category.
 例えば、処理部13は、原則的には、所定の自動車30’の搭乗者の嗜好を優先し、所定の自動車30’の走行場所で過去に事故が発生していた場合には、例外的に、安全運転情報を優先する。 For example, the processing unit 13 in principle gives priority to the preferences of the passengers of the predetermined automobile 30', but exceptionally gives priority to the safe driving information when an accident has occurred in the past at the travel location of the predetermined automobile 30'.
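The last selection rule above — preference first, safety first as an exception — is essentially a small priority policy. The sketch below illustrates it; the candidate fields (`kind`, `tag`) and the accident flag are hypothetical names introduced only for this example.

```python
# Sketch of the step S204 selection policy: prefer passenger-preference
# candidates in principle, but give safety-related candidates priority when
# a past accident is known at the travel location.
def select(candidates, accident_at_location):
    safety = [c for c in candidates if c["kind"] == "safety"]
    prefer = [c for c in candidates if c["kind"] == "preference"]
    if accident_at_location and safety:
        return safety        # exceptional case: safe driving information wins
    return prefer or safety  # normal case: preferences, falling back to safety

cands = [
    {"kind": "preference", "tag": "bird"},
    {"kind": "safety", "tag": "congestion"},
]
normal = select(cands, accident_at_location=False)
danger = select(cands, accident_at_location=True)
```

The other selection heuristics (age, similar weather, driver fatigue) could be expressed as additional filters or scores feeding the same candidate list.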
 ステップS205;
 次に、処理部13は、時空間データベース20から抽出したオブジェクト及び音声データを出力するタイミングを決定する。
Step S205;
Next, the processing unit 13 determines the timing of outputting the object and audio data extracted from the spatio-temporal database 20 .
 ステップS206;
 最後に、送信部14は、時空間データベース20から抽出したオブジェクトを上記タイミングでAR表示するためのオブジェクト表示情報と、時空間データベース20から抽出した音声データを上記タイミングで出力するための音声出力情報とを、所定の自動車30’の出力機器35へ出力する。
Step S206;
Finally, the transmission unit 14 outputs, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying the object extracted from the spatio-temporal database 20 at the above timing, and audio output information for outputting the audio data extracted from the spatio-temporal database 20 at the above timing.
 その後、出力機器35は、情報処理装置10から出力されたオブジェクトを所定の自動車30’の搭乗者に対してAR表示し、音声データを音声出力する。 After that, the output device 35 displays the object output from the information processing device 10 in an AR manner to the passenger of the predetermined automobile 30', and outputs the audio data as audio.
 例えば、出力機器35がカメラ31からの映像及び音声を出力するタブレット端末である場合、出力機器35は、そのカメラ31で撮影された走行中の風景画像に対して、その走行場所で過去に飛行していた鳥オブジェクトを過去の飛行位置と同じ位置にAR表示し、その鳥の鳴き声を音声出力する。 For example, when the output device 35 is a tablet terminal that outputs the video and audio from the camera 31, the output device 35 AR-displays, on the running-scenery image captured by the camera 31, a bird object that flew at that location in the past, at the same position as its past flight position, and outputs the bird's call as audio.
 例えば、出力機器35がプロジェクタである場合、出力機器35は、所定の自動車30’のフロントウィンドウから見える走行中の実風景に対して、その走行場所で過去に飛行していた鳥オブジェクトを過去の飛行位置と同じ位置でフロントウィンドウにAR表示し、その鳥の鳴き声をスピーカから音声出力する。 For example, when the output device 35 is a projector, the output device 35 AR-displays on the front window, over the actual scenery seen through the front window of the predetermined automobile 30', a bird object that flew at that location in the past, at the same position as its past flight position, and outputs the bird's call from a speaker.
 このように、AR表示されるオブジェクトは、従来のような仮想オブジェクトではなく、過去に存在していた実際のモノや過去に起こっていた実際の出来事である。そのため、所定の自動車30’の搭乗者に対して本物の価値を付加しているような感覚を与えることができる。搭乗者はリアルなオブジェクトを見るので、より楽しい旅行気分を味わうことができ、運転者が意識を維持しながら道路から注意を逸らさない安全運転を行うことができる。その結果、所定の自動車30’の搭乗者に対して魅力的な情報を提供することができる。 In this way, the objects displayed in AR are not conventional virtual objects but actual things that existed in the past and actual events that occurred in the past. This can give the passengers of the predetermined automobile 30' the sense that genuine value is being added. Because the passengers see real objects, they can enjoy the trip more, and the driver can drive safely, staying aware without having attention drawn away from the road. As a result, attractive information can be provided to the passengers of the predetermined automobile 30'.
 なお、最適な画像データ及び音声データの選択結果が複数であったことにより、出力機器35に複数のオブジェクトが表示されている場合には、所定の自動車30’の搭乗者は、画面のタッチ操作、スワイプ操作、音声入力等のジェスチャーにより、いずれかを選択してもよい。選択されたオブジェクト情報は、ステップS202で説明したユーザの嗜好用の機械学習モデルにフィードバックされ、その搭乗者の嗜好として学習される。 Note that when multiple objects are displayed on the output device 35 because the selection of optimum image data and audio data yielded multiple results, the passenger of the predetermined automobile 30' may select one of them by a gesture such as a touch operation on the screen, a swipe operation, or voice input. The selected object information is fed back to the machine learning model for user preferences described in step S202 and learned as that passenger's preference.
  [オブジェクト表示情報について]
 情報処理装置10が所定の自動車30’へ送信するオブジェクト表示情報は、AR表示対象であるオブジェクトを、そのオブジェクトが当該オブジェクトの画像データ内で過去に表示されていた表示位置と同じ表示位置で、所定の自動車30’から見える光景に重畳するAR表示するための表示情報である。
[About object display information]
The object display information that the information processing device 10 transmits to the predetermined automobile 30' is display information for AR-displaying the AR display target object, superimposed on the scene seen from the predetermined automobile 30', at the same display position at which the object was displayed in the past within its image data.
 つまり、情報処理装置10は、オブジェクトの表示位置に変更を加えることなく、そのオブジェクトが過去の画像データ内で表示されていた表示位置のまま、すなわち、過去の画像データから抽出したオブジェクトの属性情報(位置、大きさ、色、形、ベクトル等)をそのまま使用してオブジェクト表示情報を生成する。 In other words, the information processing apparatus 10 generates the object display information without changing the display position of the object, keeping the display position at which the object appeared in the past image data; that is, it uses, as is, the attribute information of the object extracted from the past image data (position, size, color, shape, vector, etc.).
 出力機器35は、表示対象のオブジェクトを、そのオブジェクトが過去に撮影されていた画像データ内の表示位置にそのまま出力する。所定の自動車30’は、そのオブジェクトが撮影されていた場所と同じ場所を走行中であるため、そのオブジェクトの位置や大きさを変更する処理を行わなくても、そのオブジェクトは周囲の背景に対して違和感なく重畳されることになる。 The output device 35 outputs the object to be displayed as is, at its display position within the image data in which the object was photographed in the past. Since the predetermined automobile 30' is traveling in the same place where the object was photographed, the object is superimposed on the surrounding background without any sense of incongruity, even without any processing to change its position or size.
 一般に、AR表示は、計算コストが非常に高いため、計算効率が非常に悪い。一方、自動車は同じ車線を走行する傾向にあるため、自動車から見える走行中の実際の光景と過去の画像とは同じ画角である。そのため、互いを位置合わせするための同期処理に必要な計算能力は、従来よりもはるかに小さく、複雑な計算を行う必要がないので、魅力的な情報を高速に提供することができる。 In general, AR display is computationally very expensive and therefore inefficient. On the other hand, since automobiles tend to travel in the same lane, the actual scene seen from a moving automobile and the past image share the same angle of view. The computing power required for the synchronization process that aligns the two is therefore far smaller than before, and since no complex calculations need to be performed, attractive information can be provided at high speed.
 例えば、特定の木の枝から自動車の搭乗者の視界の外まで飛ぶ鳥を考える。撮影した画像データは、所定の速度、所定の位置、所定の時間帯で走行中の自動車によって過去に撮影された画像データである。情報処理装置10は、その場所と同じ場所を現在走行中の自動車の出力機器35に、その画像データに含まれていた鳥オブジェクトをAR表示する。AR表示される鳥オブジェクトは、自動車のカメラから見たように表示される。そのため、このような鳥オブジェクトをAR表示するためには、適切なタイミングで鳥オブジェクトをタブレット端末やフロントウィンドウ等のフレーム内に配置するだけでよい。鳥オブジェクトは同じ角度、同じ位置で表示されるため、異なる角度からどのように見えるかを計算する必要はない。 For example, consider a bird that flies from the branch of a particular tree out of sight of the car occupant. The photographed image data is image data photographed in the past by a vehicle running at a predetermined speed, at a predetermined position, and at a predetermined time period. The information processing apparatus 10 displays the bird object included in the image data on the output device 35 of the automobile currently traveling in the same place as the place. The AR-displayed bird object is displayed as seen from the car's camera. Therefore, in order to AR-display such a bird object, it is only necessary to place the bird object in a frame such as a tablet terminal or a front window at an appropriate timing. Since the bird object appears at the same angle and position, there is no need to calculate how it will look from different angles.
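The bird example above can be reduced to a few lines of code: because the stored attributes (frame position, size, etc.) are reused as is, "rendering" amounts to copying the saved placement into the frame at the right moment, with no re-projection. The attribute names below are illustrative; the text lists position, size, color, shape, and vector as the stored attributes.

```python
# Sketch of why this AR step is cheap: the object's stored placement is
# reused unchanged, so no view transformation needs to be computed.
def make_overlay(stored_object, current_frame_time, appear_time):
    if current_frame_time < appear_time:
        return None                          # not yet the timing decided in step S205
    return {
        "sprite": stored_object["sprite"],   # pixels extracted from the past image
        "x": stored_object["x"],             # same display position as in the past -
        "y": stored_object["y"],             # no re-projection is performed
        "scale": 1.0,                        # size is left unchanged
    }

bird = {"sprite": b"<png>", "x": 120, "y": 40}
early = make_overlay(bird, current_frame_time=1.0, appear_time=2.5)  # too early: nothing shown
shown = make_overlay(bird, current_frame_time=3.0, appear_time=2.5)  # placed at the stored position
```

A conventional AR pipeline would instead re-project the object for the viewer's current pose each frame; the document's point is that matching the capture location makes that step unnecessary.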
  [オブジェクトの解説情報]
 ここまで、情報処理装置10が、オブジェクトをAR表示するためのオブジェクト表示情報、そのオブジェクトの音声データを出力するための音声出力情報を自動車へ出力し、そして、その自動車が、風景画像や実光景に対して、そのオブジェクトをAR表示し、そのオブジェクトの音声データを出力する場合について説明した。
[Object description information]
Up to this point, the case has been described in which the information processing apparatus 10 outputs, to the automobile, object display information for AR-displaying an object and audio output information for outputting the audio data of that object, and the automobile AR-displays the object on a landscape image or real scene and outputs the audio data of the object.
 一方、情報処理装置10は、オブジェクトそのものを重畳表示・重畳出力するだけでなく、オブジェクトを解説するための解説情報、つまり、所定の場所に過去に存在していた実際のモノや所定の場所で過去に起こっていた実際の出来事に関する解説情報も同時に自動車へ出力し、自動車は、オブジェクトの解説情報を更にAR表示し、その解説情報の音声データを更に出力することも可能である。 On the other hand, the information processing apparatus 10 can not only superimpose and output the object itself, but can also simultaneously output to the automobile commentary information explaining the object, that is, commentary information about an actual thing that existed at the predetermined place in the past or an actual event that occurred at the predetermined place in the past; the automobile can then additionally AR-display the commentary information of the object and additionally output the audio data of that commentary information.
 例えば、情報処理装置10の処理部13は、ステップS203で画像データや音声データを時空間データベース20から検索するときに用いた検索キーを、その画像データのオブジェクトに関する解説情報として、オブジェクトの解説情報を生成する。具体的には、処理部13は、オブジェクトを時空間データベースの画像データ群の中から検索するために用いた、位置データ、時刻データ、画像データ用タグ、音声データ用タグ等を基に、オブジェクトの解説情報を生成する。 For example, the processing unit 13 of the information processing apparatus 10 generates the commentary information of the object by using, as commentary information about the object in the image data, the search keys used when searching the spatio-temporal database 20 for the image data and audio data in step S203. Specifically, the processing unit 13 generates the commentary information of the object based on the position data, time data, image data tags, audio data tags, and the like that were used to search for the object in the image data group of the spatio-temporal database.
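Generating commentary from the search keys can be sketched as simple string formatting over the tag, time, and place that located the object. The wording and date format below are illustrative choices; the specification only says the search keys are used as the basis of the commentary.

```python
# Sketch: build commentary text from the search keys (tag, time, place) that
# located the object, in the spirit of the example given in the text.
import datetime

def make_commentary(tag, ts, place):
    when = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
    return f"A {tag} occurred at {place} on {when:%Y-%m-%d %H:%M} UTC."

text = make_commentary("traffic accident", 0.0, "this location")
```

A corresponding text-to-speech step would then produce the audio data of the commentary included in the object commentary information output information.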
 そして、送信部14は、そのオブジェクトの解説情報をAR表示及び音声出力するためのオブジェクト解説情報出力情報を所定の自動車30’へ送信する。オブジェクト解説情報出力情報には、そのオブジェクトの解説情報、その解説情報の音声データ、その解説情報のAR表示命令、その音声データの出力命令等が含まれている。 Then, the transmitting unit 14 transmits object commentary information output information for AR display and audio output of the commentary information of the object to a predetermined automobile 30'. The object commentary information output information includes commentary information of the object, voice data of the commentary information, an AR display command of the commentary information, an output command of the voice data, and the like.
 その後、所定の自動車30’の出力機器35は、情報処理装置10から出力されたオブジェクト解説情報出力情報に含まれるオブジェクトの解説情報を、所定の自動車30’の搭乗者に対してAR表示し、その解説情報の音声データを音声出力する。 After that, the output device 35 of the predetermined automobile 30' AR-displays the commentary information of the object included in the object commentary information output information output from the information processing device 10 to the passenger of the predetermined automobile 30', and outputs the audio data of the commentary information as sound.
 これにより、所定の自動車30’では、オブジェクトのAR表示・音声出力に加えて、そのオブジェクトの解説情報も同時にAR表示・音声出力される。例えば、交通事故を起こした自動車の様子がAR表示・音声出力されるとともに、「〇年△月◇日□時▽分にこの場所で自動車事故が発生しました」という解説情報がAR表示・音声出力される。 As a result, in the predetermined automobile 30', in addition to the AR display and audio output of the object, the commentary information of the object is simultaneously AR-displayed and output as audio. For example, the state of an automobile that caused a traffic accident is AR-displayed and output as audio, together with commentary information such as "An automobile accident occurred at this place at ▽ minutes past □ o'clock on ◇ day of △ month, 〇 year."
 その結果、オブジェクトの解説情報をAR表示等することで、自動車の搭乗者に対してより魅力的な情報を提供することができる。 As a result, it is possible to provide more attractive information to car passengers by displaying the commentary information of the object in AR.
 [適用例]
 本実施形態では、自動車を移動体の例に用いて説明した。本発明の移動体は、電車、モノレール、航空機、飛行船、船艇、潜水艦、オートバイ、自転車、ロープウェイ等、各種の乗り物に適用可能である。例えば、海上安全パトロール艇に適用し、水上を移動中の場所で過去に行われた水難事故の救助の様子をAR表示する。水上には道路等の走行区画がないため、オブジェクトを当該オブジェクトが過去に撮影されていた画像データ内の表示位置にそのまま出力しても位置を同期できない可能性があるが、過去に行った出来事を凡その位置で把握できれば十分である場合に、有効である。
[Application example]
In the present embodiment, an automobile has been described as an example of a moving body. The moving body of the present invention can be applied to various vehicles such as trains, monorails, aircraft, airships, boats, submarines, motorcycles, bicycles, and ropeways. For example, it can be applied to a maritime safety patrol boat, which AR-displays the rescue scenes of past water accidents at the places it passes while moving over the water. Since there are no fixed travel lanes such as roads on the water, the position may not be synchronized even if an object is output as is at the display position within the image data in which it was photographed in the past; this approach is nevertheless effective when it is sufficient to grasp a past event at an approximate position.
 [Effects of the Embodiment]
 According to the present embodiment, the information processing apparatus 10, which is communicably connected to the spatio-temporal database 20 that processes spatio-temporal data in real time and provides information to the passengers of a predetermined automobile 30', comprises: a collection unit 11 that collects, from a plurality of automobiles 30, image data captured by the plurality of automobiles 30 at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each moving body at the time of capture, and transmits the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database 20; a reception unit 12 that receives the position data of the predetermined automobile 30' from the predetermined automobile 30'; a processing unit 13 that searches the image data group of the spatio-temporal database 20 for an object in the image data corresponding to the position data of the predetermined automobile 30'; and a transmission unit 14 that transmits, to the output device 35 of the predetermined automobile 30', object display information for AR-displaying the object superimposed on the scene of the predetermined automobile 30' at the same display position where the object was displayed in the past.
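The collect-then-search flow performed by the collection unit 11 and the processing unit 13 can be sketched as follows. The class and field names are illustrative stand-ins; a real spatio-temporal database would use streaming ingestion and a spatial index rather than the linear scan shown here:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Record:
    # one captured frame: image reference, capture time, capture position
    image_id: str
    time: float  # epoch seconds
    lat: float
    lon: float

class SpatioTemporalDB:
    """Toy stand-in for the spatio-temporal database (20)."""

    def __init__(self) -> None:
        self.records: list[Record] = []

    def collect(self, record: Record) -> None:
        # collection unit 11: store image, time, and position together
        self.records.append(record)

    def search_near(self, lat: float, lon: float, radius: float) -> list[Record]:
        # processing unit 13: find past records near the current position
        return [r for r in self.records
                if hypot(r.lat - lat, r.lon - lon) <= radius]

db = SpatioTemporalDB()
db.collect(Record("img-001", 1000.0, 35.6812, 139.7671))
db.collect(Record("img-002", 2000.0, 34.7025, 135.4959))
hits = db.search_near(35.6812, 139.7671, 0.01)
print([r.image_id for r in hits])  # → ['img-001']
```

The matching record's image data would then be mined for objects and packaged as object display information by the transmission unit 14.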
 That is, the information processing apparatus 10 according to the present embodiment performs an AR display in which an actual object that existed at a predetermined place in the past, or an actual event that occurred at that place in the past, is superimposed on the current environment of a moving body located at that same place. For people who want to see real-world objects and events rather than virtual objects, this gives a sense of genuine added value, so attractive information can be provided to the passengers of the moving body.
 In particular, the information processing apparatus 10 according to the present embodiment transmits object display information for an AR display in which the retrieved object is superimposed on the scene from the predetermined automobile 30' at the same display position as its original display position. Since no processing is performed to render the object according to the passenger's viewpoint, far less computing power is required than for a general AR display, and attractive information can be provided to the passengers of the moving body at high speed.
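The point that no viewpoint-dependent rendering is required can be illustrated with a minimal sketch: the client simply composites the stored object at its original on-screen coordinates. The dictionary keys are hypothetical; the actual format of the object display information is not specified here:

```python
def build_display_info(obj: dict) -> dict:
    """Package an object for AR display at its original on-screen position.

    Because the object is re-shown at the same display position where it
    appeared in the past image, no 3D rendering from the passenger's
    viewpoint is needed; only a 2D composite at (x, y) is required.
    """
    return {
        "sprite": obj["crop"],         # pre-cropped image region of the object
        "x": obj["x"], "y": obj["y"],  # stored display position (pixels)
    }

past_object = {"crop": "accident_car.png", "x": 320, "y": 180}
info = build_display_info(past_object)
print(info["x"], info["y"])  # → 320 180
```

This is what keeps the computational cost low compared with a general AR pipeline that must re-project virtual objects for every viewer position.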
 Further, according to the present embodiment, the information processing apparatus 10 retrieves objects related to the profile of a passenger of the predetermined automobile 30', and can therefore present objects and events that match the passenger's tastes or remain in the passenger's memory.
 [Others]
 The present invention is not limited to the above embodiment. Numerous modifications are possible within the scope of the gist of the present invention.
 The information processing apparatus 10 of the embodiment described above can be realized using a general-purpose computer system comprising, for example, as shown in FIG. 4, a CPU 901, a memory 902, a storage 903, a communication device 904, an input device 905, and an output device 906. The memory 902 and the storage 903 are storage devices. In this computer system, each function of the information processing apparatus 10 is realized by the CPU 901 executing a predetermined program loaded into the memory 902.
 The information processing apparatus 10 may be implemented on one computer or on a plurality of computers, and may be a virtual machine running on a computer. The program for the information processing apparatus 10 can be stored on a computer-readable recording medium such as an HDD, SSD, USB memory, CD, or DVD.
 The program for the information processing apparatus 10 may be installed and executed on a computer (such as a server device) within a communication network so that sensing and control can be performed from a remote location via the network. The program can also be distributed on a recording medium or delivered via a communication network, and the distributed or delivered program may be installed and executed on a computer (such as an in-vehicle device) in the automobile 30.
 [Appendix]
 Regarding the above embodiment, the following supplementary notes are further disclosed.
 (Appendix 1)
 An information processing apparatus comprising a memory and at least one processor connected to the memory, the apparatus being communicably connected to a spatio-temporal database that processes spatio-temporal data in real time and providing information to a passenger of a moving body, wherein the processor:
 collects, from a plurality of moving bodies, image data captured by the plurality of moving bodies at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each moving body at the time of capture, and transmits the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database;
 receives, from a predetermined moving body, the position data of the predetermined moving body;
 searches the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined moving body; and
 transmits, to the augmented reality display device of the predetermined moving body, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined moving body, at the same display position where the object was displayed in the past.
 (Appendix 2)
 A non-transitory storage medium storing a program executable by a computer to perform information processing, the computer being communicably connected to a spatio-temporal database that processes spatio-temporal data in real time and providing information to a passenger of a moving body, the information processing comprising:
 collecting, from a plurality of moving bodies, image data captured by the plurality of moving bodies at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each moving body at the time of capture, and transmitting the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database;
 receiving, from a predetermined moving body, the position data of the predetermined moving body;
 searching the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined moving body; and
 transmitting, to the augmented reality display device of the predetermined moving body, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined moving body, at the same display position where the object was displayed in the past.
 1: Information processing system
 10: Information processing apparatus
 11: Collection unit
 12: Reception unit
 13: Processing unit
 14: Transmission unit
 20: Spatio-temporal database
 30: Automobile
 31: Camera
 32: Position sensor
 33: Clock
 34: Input device
 35: Output device
 901: CPU
 902: Memory
 903: Storage
 904: Communication device
 905: Input device
 906: Output device

Claims (10)

  1.  An information processing apparatus that is communicably connected to a spatio-temporal database that processes spatio-temporal data in real time and that provides information to a passenger of a moving body, the apparatus comprising:
     a collection unit that collects, from a plurality of moving bodies, image data captured by the plurality of moving bodies at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each moving body at the time of capture, and transmits the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database;
     a reception unit that receives, from a predetermined moving body, the position data of the predetermined moving body;
     a processing unit that searches the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined moving body; and
     a transmission unit that transmits, to the augmented reality display device of the predetermined moving body, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined moving body, at the same display position where the object was displayed in the past.
  2.  The information processing apparatus according to claim 1, wherein the processing unit searches for an object in the image data corresponding to both the position data and the time data of the predetermined moving body.
  3.  The information processing apparatus according to claim 1, wherein the processing unit searches for the object based on tags, attached to the plurality of image data, that indicate the image content of each image data.
  4.  The information processing apparatus according to claim 1, wherein the processing unit searches for the object related to the profile of a passenger of the predetermined moving body.
  5.  The information processing apparatus according to claim 1, wherein the processing unit searches for the object that affects the driving of the predetermined moving body.
  6.  The information processing apparatus according to claim 1, wherein the transmission unit transmits, to the augmented reality display device, object commentary information output information for displaying the commentary information of the object in augmented reality or outputting it as audio.
  7.  The information processing apparatus according to claim 6, wherein the commentary information is generated based on a search key used to search for the object in the image data group of the spatio-temporal database.
  8.  An information processing method for providing information to a passenger of a moving body while communicably connected to a spatio-temporal database that processes spatio-temporal data in real time, wherein an information processing apparatus performs the steps of:
     collecting, from a plurality of moving bodies, image data captured by the plurality of moving bodies at a plurality of locations on a plurality of dates and times, together with the time data at the time of capture and the position data of each moving body at the time of capture, and transmitting the collected plurality of image data, plurality of time data, and plurality of position data to the spatio-temporal database;
     receiving, from a predetermined moving body, the position data of the predetermined moving body;
     searching the image data group of the spatio-temporal database for an object in the image data corresponding to the position data of the predetermined moving body; and
     transmitting, to the augmented reality display device of the predetermined moving body, object display information for displaying the object in augmented reality, superimposed on the scene of the predetermined moving body, at the same display position where the object was displayed in the past.
  9.  An information processing program that causes a computer in a communication network to function as the information processing apparatus according to claim 1.
  10.  An information processing program that causes a computer in a moving body to function as the information processing apparatus according to claim 1.
PCT/JP2022/015291 2021-11-10 2022-03-29 Information processing apparatus, information processing method, and information processing program WO2023084810A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163277794P 2021-11-10 2021-11-10
US63/277,794 2021-11-10

Publications (1)

Publication Number Publication Date
WO2023084810A1 true WO2023084810A1 (en) 2023-05-19

Family

ID=86335538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/015291 WO2023084810A1 (en) 2021-11-10 2022-03-29 Information processing apparatus, information processing method, and information processing program

Country Status (1)

Country Link
WO (1) WO2023084810A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008518325A (en) * 2004-10-25 2008-05-29 エー9.・コム・インコーポレーテッド System and method for displaying location specific images on a mobile device
JP2008147864A (en) * 2006-12-07 2008-06-26 Sony Corp Image display system, display device and display method
US20100007671A1 (en) * 2007-04-19 2010-01-14 Qisda Corporation Display system and a display method
JP2014052915A (en) * 2012-09-07 2014-03-20 Toshiba Corp Electronic apparatus, display control method, and program
JP2020144574A (en) * 2019-03-06 2020-09-10 Kddi株式会社 Program, device, and method, for mixing sound objects in accordance with images


Similar Documents

Publication Publication Date Title
CN113474825B (en) Method and apparatus for providing immersive augmented reality experience on a mobile platform
US11417057B2 (en) Realistic 3D virtual world creation and simulation for training automated driving systems
EP3244591B1 (en) System and method for providing augmented virtual reality content in autonomous vehicles
US10901416B2 (en) Scene creation system for autonomous vehicles and methods thereof
US10957110B2 (en) Systems, devices, and methods for tracing paths in augmented reality
US10708704B2 (en) Spatial audio for three-dimensional data sets
US9956876B2 (en) System and method for providing content in autonomous vehicles based on real-time traffic information
US20170352185A1 (en) System and method for facilitating a vehicle-related virtual reality and/or augmented reality presentation
US20170186240A1 (en) System and method for dynamic in-vehicle virtual reality
US20100220037A1 (en) Image display system, display apparatus, and display method
US20140063061A1 (en) Determining a position of an item in a virtual augmented space
WO2014140915A2 (en) Systems and methods for virtualized advertising
CN107407572A (en) Along route search
US20220005283A1 (en) R-snap for production of augmented realities
US20210366193A1 (en) Efficient capture and delivery of walkable and interactive virtual reality or 360 degree video
CN110100153B (en) Information providing system
WO2013024364A2 (en) Systems and methods for virtual viewing of physical events
Nedevschi Semantic segmentation learning for autonomous uavs using simulators and real data
WO2023084810A1 (en) Information processing apparatus, information processing method, and information processing program
CN110634190A (en) Remote camera VR experience system
JP6857776B1 (en) Programs, information processing methods, information processing devices, and systems
JP2013232238A5 (en) Live video providing system
US11302080B1 (en) Planner for an objective-effectuator
EP3724821A1 (en) Objective-effectuators in synthesized reality settings
KR102592675B1 (en) Method, system, and non-transitory computer-readable recording medium for providing contents

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22892314

Country of ref document: EP

Kind code of ref document: A1