CN112188077A - Information providing device, information providing method, and storage medium

Information providing device, information providing method, and storage medium

Info

Publication number
CN112188077A
Authority
CN
China
Prior art keywords
image
unit
information
information providing
vehicle
Prior art date
Legal status
Pending
Application number
CN202010583659.XA
Other languages
Chinese (zh)
Inventor
横村光
礒部新
小野佑树
村上真一
田中健
盐贝彬
玉那霸隆介
山崎惠子
Current Assignee
Honda Motor Co Ltd
Original Assignee
Honda Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Honda Motor Co Ltd
Publication of CN112188077A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules

Abstract

The invention provides an information providing device, an information providing method, and a storage medium capable of providing more appropriate image information to a user. An information providing device according to an embodiment includes: an imaging control unit that, based on imaging content received from a terminal device used by a user and on position information of an imaging target, issues a predetermined instruction to a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit installed at a predetermined position, and a third imaging unit used by a person other than the user; an image acquisition unit that acquires images including the imaging target captured by the first, second, and third imaging units in accordance with the predetermined instruction from the imaging control unit; and an information providing unit that provides the terminal device with information including an image that satisfies the imaging content among the images acquired by the image acquisition unit.

Description

Information providing device, information providing method, and storage medium
Technical Field
The invention relates to an information providing apparatus, an information providing method and a storage medium.
Background
Techniques are known for recognizing the surroundings of a vehicle using images captured by a camera mounted on the vehicle, and for monitoring a surrounding area or detecting suspicious persons using images captured by a fixed camera installed at a predetermined position (see, for example, Patent Documents 1 to 3).
Prior art documents
Patent document
Patent document 1: Japanese Patent No. 3991314
Patent document 2: Japanese Patent Laid-Open No. 2004-254087
Patent document 3: Japanese Patent Laid-Open No. 2000-032436
Summary of the invention
Problems to be solved by the invention
However, the related art does not consider providing the user with images captured for a variety of uses.
Disclosure of Invention
An aspect of the present invention has been made in view of the above circumstances, and an object thereof is to provide an information providing apparatus, an information providing method, and a storage medium that can provide more appropriate image information to a user.
Means for solving the problems
The information providing apparatus, the information providing method, and the storage medium of the present invention adopt the following configurations.
(1): An information providing device according to an aspect of the present invention includes: an imaging control unit that, based on imaging content received from a terminal device used by a user and on position information of an imaging target, issues a predetermined instruction to a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit installed at a predetermined position, and a third imaging unit used by a person other than the user; an image acquisition unit that acquires images including the imaging target captured by the first, second, and third imaging units in accordance with the predetermined instruction from the imaging control unit; and an information providing unit that provides the terminal device with information including an image that satisfies the imaging content among the images acquired by the image acquisition unit.
(2): In the aspect (1) above, the imaging content includes control content specifying that the images captured by the plurality of imaging units are not to be uploaded to the information providing apparatus.
(3): In the aspect (1) above, the imaging content includes control content for uploading images captured outside a predetermined area to the information providing apparatus.
(4): In the aspect (1) above, the imaging content includes control content that permits uploading of images captured by at least one of the first imaging unit, the second imaging unit, and the third imaging unit.
(5): In the aspect (1) above, the imaging content includes attribute information of an owner of the third imaging unit, and includes control content that permits uploading from a third imaging unit matching the attribute information.
(6): An information providing method according to an aspect of the present invention causes an information providing apparatus to perform: a step of issuing a predetermined instruction, based on imaging content received from a terminal device used by a user and on position information of an imaging target, to a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit installed at a predetermined position, and a third imaging unit used by a person other than the user; a step of acquiring images including the imaging target captured by the first, second, and third imaging units in accordance with the predetermined instruction; and a step of providing the terminal device with information including an image that satisfies the imaging content among the acquired images.
(7): A storage medium according to an aspect of the present invention stores a program that causes an information providing apparatus to perform: a step of issuing a predetermined instruction, based on imaging content received from a terminal device used by a user and on position information of an imaging target, to a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit installed at a predetermined position, and a third imaging unit used by a person other than the user; a step of acquiring images including the imaging target captured by the first, second, and third imaging units in accordance with the predetermined instruction; and a step of providing the terminal device with information including an image that satisfies the imaging content among the acquired images.
Effects of the invention
According to the aspects (1) to (7), more appropriate image information can be provided to the user.
Drawings
Fig. 1 is a configuration diagram of an information providing system including an information providing apparatus according to an embodiment.
Fig. 2 is a configuration diagram of a vehicle system of the embodiment.
Fig. 3 is a functional configuration diagram of the first control unit and the second control unit.
Fig. 4 is a diagram showing an example of a functional configuration of the travel driving force output device.
Fig. 5 is a configuration diagram of an information providing server according to an embodiment.
Fig. 6 is a diagram showing an example of the contents of the camera DB.
Fig. 7 is a diagram showing an example of the contents of the image DB.
Fig. 8 is a diagram showing an example of the contents of the user DB.
Fig. 9 is a diagram showing an example of the contents of the provided image DB.
Fig. 10 is a diagram showing an example of contents of the provider DB.
Fig. 11 is a configuration diagram of a terminal device according to the embodiment.
Fig. 12 is a configuration diagram of the stationary camera device.
Fig. 13 is a configuration diagram of the drone.
Fig. 14 is a diagram for explaining a specific example of the information providing system according to the embodiment.
Fig. 15 is a diagram showing an example of an image including an image request.
Fig. 16 is a diagram for explaining the contents of the shooting level information.
Fig. 17 is a diagram for explaining a situation in which the own vehicle is photographed.
Fig. 18 is a diagram showing an example of an image for inquiring the terminal device whether or not to perform shooting.
Fig. 19 is a diagram showing an example of an image including feature information.
Fig. 20 is a diagram showing an example of an image extracted by the extraction unit.
Fig. 21 is a diagram showing an example of an image including a user captured by the vehicle.
Fig. 22 is a diagram showing an example of an image including a user sitting in the driver seat DS of the host vehicle.
Fig. 23 is a diagram showing an example of an image album edited by the editing unit.
Fig. 24 is a diagram showing an example of an image album edited by the editing unit.
Fig. 25 is a sequence diagram showing an example of a flow of processing of the information providing system according to the embodiment.
Fig. 26 is a diagram for explaining guidance of a photographic subject.
Fig. 27 is a diagram showing an example of the hardware configuration of the information providing server according to the embodiment.
Detailed Description
Embodiments of an information providing apparatus, an information providing method, and a storage medium according to the present invention will be described below with reference to the accompanying drawings.
[ Overall Configuration ]
Fig. 1 is a configuration diagram of an information providing system 1 including an information providing apparatus according to an embodiment. The information providing system 1 includes, for example, one or more vehicles 2, an information providing server 300, one or more terminal devices 400, one or more stationary camera devices 500, and one or more drones 600. In the example of fig. 1, a vehicle system 3 is mounted on the vehicle 2. The vehicle system 3, the information providing server 300, the terminal device 400, the stationary camera device 500, and the drone 600 can communicate with each other via a network NW, for example. The network NW includes, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), the Internet, a WAN (Wide Area Network), a LAN (Local Area Network), a public line, a provider device, a private line, a wireless base station, and the like. The information providing server 300 is an example of an "information providing apparatus". The drone 600 is an example of a "flying object". The flying object includes, for example, an unmanned flying object operated by remote control and a manned flying object.
The vehicle system 3, the terminal device 400, the stationary camera device 500, and the drone 600 each include a camera (an example of an imaging unit). The images captured by the respective cameras are transmitted to the information providing server 300 via the network NW. The information providing server 300 receives the images captured by the respective cameras, selects or edits images to be provided to the user from the received captured images, and provides information including the selected or edited images to the user. Note that the images provided to the user may include a movie (moving image).
The functions of the vehicle system 3, the information providing server 300, the terminal device 400, the stationary camera device 500, and the drone 600 will be described below. Hereinafter, the terminal devices used by the user U1 and the user U2 will be referred to as a terminal device 400-1 and a terminal device 400-2, respectively. The vehicle owned or occupied by the user U1 is referred to as a host vehicle M, and vehicles other than the host vehicle M are referred to as other vehicles M. The vehicle system 3 mounted on the host vehicle M and the other vehicles M is applied to, for example, an autonomous vehicle. Automated driving means, for example, executing driving control by controlling one or both of the steering and the acceleration/deceleration of the vehicle. The driving control includes, for example, ACC (Adaptive Cruise Control), TJP (Traffic Jam Pilot), ALC (Auto Lane Changing), CMBS (Collision Mitigation Brake System), and LKAS (Lane Keeping Assist System). The autonomous vehicle may also execute driving control based on manual driving by an occupant (driver). The autonomous vehicle is, for example, a two-wheel, three-wheel, or four-wheel vehicle, and its driving source is an internal combustion engine such as a diesel engine or a gasoline engine, an electric motor, or a combination thereof. The electric motor operates using power generated by a generator connected to the internal combustion engine or discharge power of a secondary battery or a fuel cell. Hereinafter, a case where the left-hand traffic rule applies will be described; when the right-hand traffic rule applies, the left and right may be read reversed. Hereinafter, one of the horizontal directions is denoted by X, the other by Y, and the vertical direction orthogonal to the X-Y horizontal plane by Z.
[ vehicle System ]
Fig. 2 is a configuration diagram of the vehicle system 3 of the embodiment. The following describes the vehicle system 3 mounted on the host vehicle M. The vehicle system 3 includes, for example, a camera 10, a radar device 12, a detector 14, an object recognition device 16, a communication device 20, an HMI (Human Machine Interface) 30, a vehicle sensor 40, a navigation device 50, an MPU (Map Positioning Unit) 60, a driving operation element 80, a vehicle interior camera 90, an automatic driving control device 100, a travel driving force output device 200, a brake device 210, and a steering device 220. These apparatuses and devices are connected to each other by a multiplex communication line such as a CAN (Controller Area Network) communication line, a serial communication line, a wireless communication network, or the like. The configuration shown in fig. 2 is merely an example; a part of the configuration may be omitted, and other components may be added.
The camera 10 is a digital camera using a solid-state imaging device such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor). The camera 10 is attached to an arbitrary portion of the host vehicle M. For example, when photographing the front of the host vehicle M, the camera 10 is attached to the upper portion of the front windshield, the rear surface of the interior mirror, or the like. When photographing the rear of the host vehicle M, the camera 10 is attached, for example, to an upper portion of the rear windshield. When photographing the right side or the left side of the host vehicle M, the camera 10 is attached to the right side or the left side of the vehicle body or to the door mirror. The camera 10 repeatedly captures the periphery of the host vehicle M periodically, for example. The camera 10 may also be a stereo camera.
The radar device 12 radiates radio waves such as millimeter waves to the periphery of the host vehicle M, detects radio waves (reflected waves) reflected by an object, and detects at least the position (distance and direction) of the object. The radar device 12 is mounted on an arbitrary portion of the host vehicle M. The radar device 12 may detect the position and velocity of the object by an FM-CW (Frequency Modulated Continuous Wave) method.
The detector 14 is a LIDAR (Light Detection and Ranging) sensor. The detector 14 irradiates light to the periphery of the host vehicle M and measures the scattered light. The detector 14 detects the distance to an object based on the time from light emission to light reception. The irradiated light is, for example, pulsed laser light. The detector 14 is attached to an arbitrary portion of the host vehicle M.
The object recognition device 16 performs sensor fusion processing on the detection results of some or all of the camera 10, the radar device 12, and the detector 14, and recognizes the position, type, speed, and the like of an object. The object recognition device 16 outputs the recognition result to the automatic driving control device 100. The object recognition device 16 may output the detection results of the camera 10, the radar device 12, and the detector 14 directly to the automatic driving control device 100. In this case, the object recognition device 16 may be omitted from the vehicle system 3.
The communication device 20 communicates with other vehicles present in the vicinity of the host vehicle M by using, for example, a cellular network, a Wi-Fi network, Bluetooth, or DSRC (Dedicated Short Range Communication), or communicates with various information providing servers via a wireless base station.
The HMI30 presents various information to an occupant (including the driver) of the host vehicle M, and accepts input operations by the occupant. The HMI30 includes, for example, various display devices, speakers, buzzers, touch panels, switches, keys, and the like. The HMI30 may also include a microphone that collects sounds inside or outside the vehicle.
The vehicle sensors 40 include a vehicle speed sensor that detects the speed of the own vehicle M, an acceleration sensor that detects acceleration, a yaw rate sensor that detects an angular velocity about a vertical axis, an orientation sensor that detects the orientation of the own vehicle M, and the like. The vehicle sensor 40 may include a position sensor for detecting a position (latitude and longitude) of the vehicle M, a vibration sensor for detecting a sway or vibration of the vehicle M, a rainfall sensor for measuring rainfall outside the vehicle, a wind speed sensor for measuring a wind speed outside the vehicle, and the like.
The navigation device 50 includes, for example, a GNSS (Global Navigation Satellite System) receiver 51, a navigation HMI 52, and a route determination unit 53. The navigation device 50 holds first map information 54 in a storage device such as an HDD (Hard Disk Drive) or a flash memory. The GNSS receiver 51 determines the position of the host vehicle M based on signals received from GNSS satellites. The position of the host vehicle M may also be determined or supplemented by an INS (Inertial Navigation System) that uses the output of the vehicle sensors 40. The navigation HMI 52 includes a display device, a speaker, a touch panel, keys, and the like. The navigation HMI 52 may be shared in part or in whole with the aforementioned HMI 30. The route determination unit 53 determines, for example, a route from the position of the host vehicle M (or an arbitrary input position) specified by the GNSS receiver 51 to a destination input by the occupant using the navigation HMI 52 (hereinafter referred to as an on-map route) with reference to the first map information 54. The first map information 54 is, for example, information in which a road shape is expressed by links representing roads and nodes connected by the links. The first map information 54 may also include road curvature, POI (Point Of Interest) information, and the like. The on-map route is output to the MPU 60. The navigation device 50 may also perform route guidance using the navigation HMI 52 based on the on-map route. The navigation device 50 may be realized by, for example, the function of the terminal device 400-1 held by the occupant. The navigation device 50 may transmit the current position and the destination to a navigation server via the communication device 20 and acquire a route equivalent to the on-map route from the navigation server.
The MPU 60 includes, for example, a recommended lane determining unit 61, and holds second map information 62 in a storage device such as an HDD or a flash memory. The recommended lane determining unit 61 divides the on-map route provided from the navigation device 50 into a plurality of sections (for example, every 100 [m] in the vehicle traveling direction), and determines a recommended lane for each section with reference to the second map information 62. The recommended lane determining unit 61 determines, for example, in which lane numbered from the left to travel. When a branch point exists on the on-map route, the recommended lane determining unit 61 determines the recommended lane so that the host vehicle M can travel on a reasonable route for proceeding to the branch destination.
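The division of the on-map route into fixed-length sections with a recommended lane per section can be pictured with the following minimal sketch. It is an illustration only, not the patented implementation; the section length, data fields, and default lane rule are assumptions.

```python
# Illustrative sketch: divide a route into fixed-length sections and record a
# recommended lane (counted from the left) for each section.
from dataclasses import dataclass
from typing import List


@dataclass
class RouteSection:
    start_m: float         # distance from the start of the on-map route [m]
    end_m: float
    recommended_lane: int  # lane index counted from the left (0 = leftmost), assumed


def divide_route(route_length_m: float, section_m: float = 100.0,
                 default_lane: int = 0) -> List[RouteSection]:
    sections = []
    start = 0.0
    while start < route_length_m:
        end = min(start + section_m, route_length_m)
        sections.append(RouteSection(start, end, default_lane))
        start = end
    return sections
```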
The second map information 62 is map information with higher accuracy than the first map information 54. The combination of the first map information 54 and the second map information 62 is an example of "map information". The second map information 62 includes, for example, information on the center of a lane, information on the boundary of a lane, and the like. The second map information 62 may include road information, traffic regulation information, address information (address, zip code), facility information, telephone number information, and the like. Further, one or both of the first map information 54 and the second map information 62 may include road information more detailed than the above-described road information. The detailed road information is, for example, information on the type of traffic objects installed on a road, their installation time, the ground objects around the road, the power supply source to a traffic object, the durability of the road, the altitude above sea level, and the like. The traffic objects include, for example, traffic signals, traffic signs, stationary cameras for photographing roads, and other objects disposed around a road. The ground objects include, for example, buildings, bridges, towers and other structures, various trees such as forests and windbreak forests, and various objects including plants. The information related to the ground objects includes, for example, information on the type and size of the object and the time when the object was built or planted. One or both of the first map information 54 and the second map information 62 can be updated as needed by the communication device 20 communicating with an external device such as the information providing server 300.
The driving operation element 80 includes, for example, operation members such as an accelerator pedal, a brake pedal, a shift lever, a steering wheel, and a joystick. A sensor for detecting the operation amount or the presence or absence of operation is attached to the driving operation element 80, and the detection result of the sensor is output to the automatic driving control device 100 or to some or all of the travel driving force output device 200, the brake device 210, and the steering device 220.
The vehicle interior camera 90 is a digital camera using a solid-state imaging device such as a CCD or a CMOS. The vehicle interior camera 90 may also be a stereo camera. The vehicle interior camera 90 is attached to an arbitrary portion in the vehicle interior. The vehicle interior camera 90 photographs an area including each seat in the vehicle interior. This makes it possible to acquire images of the occupants seated in the seats from the images captured by the vehicle interior camera 90. The vehicle interior camera 90 may periodically repeat shooting of the above-described area, or may shoot at a predetermined timing. The predetermined timing is, for example, a timing at which the host vehicle M arrives at a predetermined point or a timing at which an imaging instruction is received from the information providing server 300. The image captured by the vehicle interior camera 90 is output to the automatic driving control device 100.
The automatic driving control device 100 includes, for example, a first control unit 120, a second control unit 160, and an image acquisition control unit 180. The first control unit 120, the second control unit 160, and the image acquisition control unit 180 are each realized by a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (including circuit units) such as an LSI (Large Scale Integration circuit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit), or may be realized by cooperation between software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the automatic driving control device 100, or may be stored in a removable storage medium such as a DVD or a CD-ROM and installed in the HDD or flash memory of the automatic driving control device 100 by mounting the storage medium (non-transitory storage medium) in a drive device.
Fig. 3 is a functional configuration diagram of the first control unit 120 and the second control unit 160. The first control unit 120 includes, for example, a recognition unit 130 and an action plan generation unit 140. The combination of the action plan generating unit 140 and the second control unit 160 is an example of a "driving control unit". The first control unit 120 implements, for example, a function implemented by an AI (Artificial Intelligence) and a function implemented by a model provided in advance in parallel. For example, the function of "recognizing an intersection" can be realized by executing, in parallel, recognition of an intersection by deep learning or the like and recognition based on a condition (presence of a signal, a road sign, or the like that can be pattern-matched) provided in advance, and adding scores to both of them to comprehensively evaluate them. This ensures the reliability of automatic driving.
The recognition unit 130 recognizes the surrounding environment of the host vehicle M. For example, the recognition unit 130 recognizes the position, speed, acceleration, traveling direction, and other states of objects (for example, nearby vehicles and object targets) present in the periphery of the host vehicle M based on information input from the camera 10, the radar device 12, and the detector 14 via the object recognition device 16. The position of an object is recognized, for example, as a position on absolute coordinates with the origin at a representative point (center of gravity, center of the drive axis, or the like) of the host vehicle M, and is used for control. The position of an object may be represented by a representative point such as the center of gravity, the center, or a corner of the object, or may be represented by a region. When the object is a vehicle, the "state" of the object may include the acceleration or jerk of the object, or its "behavior state" (for example, whether a course change is being made or is about to be made).
The recognition unit 130 recognizes, for example, a lane (traveling lane) on which the host vehicle M travels. For example, the recognition unit 130 recognizes the traveling lane by comparing the pattern of road dividing lines (e.g., the arrangement of solid lines and broken lines) obtained from the second map information 62 with the pattern of road dividing lines around the host vehicle M recognized from the image captured by the camera 10. The recognition unit 130 is not limited to the road dividing line, and may recognize the lane by recognizing a boundary of the traveling path (road boundary) including a road dividing line, a shoulder, a curb, a center barrier, a guardrail, and the like. In this recognition, the position of the own vehicle M acquired from the navigation device 50 and the processing result by the INS process may be taken into account. The recognition unit 130 recognizes a stop line, an obstacle, a red light, a toll booth, and other road items.
The recognition unit 130 recognizes the position and posture of the host vehicle M with respect to the travel lane when recognizing the travel lane. The recognition unit 130 may recognize, for example, a deviation of a reference point of the host vehicle M from the center of the lane and an angle formed by the traveling direction (Y direction) of the host vehicle M with respect to a line connecting the centers of the lanes as the relative position and posture of the host vehicle M with respect to the traveling lane. Instead, the recognition unit 130 may recognize the position of the reference point of the host vehicle M with respect to any one side end portion (road dividing line or road boundary) of the traveling lane, as the relative position of the host vehicle M with respect to the traveling lane.
The recognition unit 130 recognizes information relating to the position of the nearby vehicle based on the nearby vehicle of the host vehicle M recognized from the image captured by the camera 10, the congestion information of the periphery of the host vehicle M acquired by the navigation device 50, or the position information obtained from the second map information 62.
The recognition unit 130 may acquire various information received from vehicles or the like traveling around the host vehicle M via the communication device 20 by vehicle-to-vehicle communication, and recognize the periphery of the host vehicle M based on the information. The recognition unit 130 may recognize an object having high similarity to the feature information based on the feature information of the imaging target included in the image acquisition instruction information acquired from the information providing server 300. The imaging target includes, for example, a person, a vehicle, an object such as a feature, a landscape, and the like. Additionally, the person may include the user himself, the user's family, acquaintances, famous persons, and the like.
The action plan generating unit 140 and the second control unit 160 perform driving control by controlling one or both of the speed and the steering of the host vehicle M based on the recognition result of the recognition unit 130. When automated driving is performed, the action plan generating unit 140 generates a target trajectory on which the host vehicle M will automatically (independently of the driver's operation) travel in the future, so that the host vehicle M travels, in principle, in the recommended lane determined by the recommended lane determining unit 61 and can cope with the surrounding situation of the host vehicle M. The target trajectory includes, for example, a speed element. For example, the target trajectory is expressed as a sequence of points (track points) to be reached by the host vehicle M. A track point is a point that the host vehicle M should reach every predetermined travel distance (for example, about every several [m]) in terms of distance along the route; separately from this, a target speed and a target acceleration for every predetermined sampling time (for example, about every several tenths of a second) are generated as part of the target trajectory. A track point may also be a position that the host vehicle M should reach at each sampling time. In this case, the information on the target speed and the target acceleration is expressed by the interval between the track points.
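As a reading aid, the target trajectory described above can be pictured as the following minimal data-structure sketch. It is an assumption-laden illustration (field names, units, and the sampling time are not taken from the patent), not the actual representation used by the action plan generating unit 140.

```python
# Illustrative sketch of a target trajectory: an ordered list of track points,
# with speed elements (target speed / acceleration) attached per sampling time.
from dataclasses import dataclass
from typing import List


@dataclass
class TrackPoint:
    x_m: float                # longitudinal position of the point to reach [m]
    y_m: float                # lateral position of the point to reach [m]
    target_speed_mps: float   # target speed at this point [m/s]
    target_accel_mps2: float  # target acceleration at this point [m/s^2]


@dataclass
class TargetTrajectory:
    sampling_time_s: float    # e.g. a few tenths of a second (assumed value)
    points: List[TrackPoint]  # points the host vehicle M should reach in order
```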
The action plan generating unit 140 may set an event of the autonomous driving when the target trajectory is generated. Examples of the event of the automated driving include a constant speed driving event, a low speed follow-up driving event, a lane change event, a branch event, a merge event, and a take-over event for ending the automated driving and switching to the manual driving. The action plan generating unit 140 generates a target trajectory corresponding to the started event.
Further, based on the control information from the image acquisition control unit 180, the action plan generating unit 140 generates, when the camera 10 shoots the photographic subject or when the host vehicle M itself is shot, a target trajectory for controlling the speed of the host vehicle M (for example, decelerating during shooting), for traveling at a position or on a route where shooting is easy, or for performing steering control so that the host vehicle M faces the camera. The photographic subject includes, for example, a person to be photographed and an object to be photographed (for example, a vehicle). The action plan generating unit 140 may also perform control to switch the output of the travel driving force by the travel driving force output device 200 (for example, the drive modes described later) based on an instruction operation by an occupant using the HMI 30, the traveling condition of the host vehicle M, and the control information from the image acquisition control unit 180.
The second control unit 160 controls the travel driving force output device 200, the brake device 210, and the steering device 220 so that the host vehicle M passes through the target trajectory generated by the action plan generating unit 140 at a predetermined timing.
The second control unit 160 includes, for example, a target trajectory acquisition unit 162, a speed control unit 164, and a steering control unit 166. The target trajectory acquisition unit 162 acquires information of the target trajectory (trajectory point) generated by the action plan generation unit 140 and stores the information in a memory (not shown). The speed control unit 164 controls the running driving force output device 200 or the brake device 210 based on the speed element associated with the target track stored in the memory. The steering control unit 166 controls the steering device 220 according to the degree of curvature of the target track stored in the memory. The processing of the speed control unit 164 and the steering control unit 166 is realized by, for example, a combination of feedforward control and feedback control. For example, the steering control unit 166 performs a combination of feedforward control according to the curvature of the road ahead of the host vehicle M and feedback control based on deviation from the target trajectory.
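To make the combination of feedforward and feedback control concrete, the following is a minimal sketch, not the controller actually used by the steering control unit 166; the gains, function signature, and the purely proportional feedback term are assumptions.

```python
# Illustrative sketch: a steering command combining a feedforward term based on
# the curvature of the road ahead with a feedback term based on the lateral
# deviation from the target trajectory.
def steering_command(road_curvature_1pm: float, lateral_error_m: float,
                     k_ff: float = 1.0, k_fb: float = 0.5) -> float:
    """Return a steering command (arbitrary units) for the steering device 220."""
    feedforward = k_ff * road_curvature_1pm  # follows the road shape ahead of the host vehicle M
    feedback = -k_fb * lateral_error_m       # corrects deviation from the target trajectory
    return feedforward + feedback
```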
Returning to fig. 2, the image acquisition control unit 180 causes the recognition unit 130 to recognize the imaging target or causes the action plan generation unit 140 to generate the target trajectory in which the speed and the turn of the host vehicle M are adjusted so that the camera 10 captures the imaging target, based on the image acquisition instruction information from the information providing server 300. In addition, when the vehicle M is photographed by an external camera, the image acquisition control unit 180 may cause the action plan generating unit 140 to generate a target trajectory with the speed and the turn adjusted so that the vehicle M can be easily photographed by the external camera. Further, the image acquisition control unit 180 may cause the travel driving force output device 200 to switch the driving force.
Further, the image acquisition control unit 180 extracts an image satisfying a predetermined condition from the images of the camera 10 and the vehicle interior camera 90 captured at a predetermined cycle based on the image acquisition instruction information. The predetermined condition is, for example, a case where the distance between the subject vehicle M and the imaging target is within a predetermined distance, a case where the subject vehicle M has reached a predetermined position, or a case where an object having a high degree of similarity to the feature information included in the image acquisition instruction information is included in the captured image. The predetermined location is, for example, a road, an intersection, a parking lot, a house, a company, a station, a shop, a public facility, a leisure facility, a sightseeing spot, or the like. The predetermined position may be, for example, one or more positions set in advance by the user. The image acquisition control unit 180 may acquire the sound outside or inside the vehicle from a microphone included in the HMI30 simultaneously with the acquisition of the image. The acquired image or audio is transmitted (uploaded) to the information providing server 300 via the communication device 20 together with the camera ID based on the image acquisition instruction information.
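The extraction condition described above can be summarized in a short sketch. The threshold values and function signature below are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch: decide whether a periodically captured image should be
# extracted and uploaded to the information providing server 300.
def should_upload(distance_to_target_m: float,
                  at_predetermined_point: bool,
                  similarity_to_feature_info: float,
                  distance_threshold_m: float = 30.0,
                  similarity_threshold: float = 0.8) -> bool:
    """True if any of the predetermined conditions from the image acquisition
    instruction information is satisfied (all numeric thresholds are assumed)."""
    return (distance_to_target_m <= distance_threshold_m
            or at_predetermined_point
            or similarity_to_feature_info >= similarity_threshold)
```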
The travel driving force output device 200 outputs a travel driving force (torque) for driving the vehicle to the drive wheels. Fig. 4 is a diagram showing an example of a functional configuration of the travel driving force output device 200. The travel driving force output device 200 includes, for example, a driving force switching unit 202, a first driving force output unit 204, and a second driving force output unit 206. The driving force switching unit 202 switches between the output of the travel driving force by the first driving force output unit 204 and the output of the travel driving force by the second driving force output unit 206 based on the control information generated by the action plan generating unit 140. Hereinafter, the output of the travel driving force by the first driving force output unit 204 will be referred to as the "first drive mode", and the output of the travel driving force by the second driving force output unit 206 will be referred to as the "second drive mode".
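A minimal sketch of this switching structure follows; the class and function names are assumptions chosen for illustration and do not come from the patent.

```python
# Illustrative sketch: route a torque request to the engine-based or
# motor-based output unit according to the selected drive mode.
from enum import Enum


class DriveMode(Enum):
    FIRST = 1   # first drive mode: travel driving force from the engine
    SECOND = 2  # second drive mode: travel driving force from the travel motor (quieter)


class EngineOutputUnit:
    def drive(self, torque_nm: float) -> str:
        return f"engine outputs {torque_nm} Nm"


class MotorOutputUnit:
    def drive(self, torque_nm: float) -> str:
        return f"travel motor outputs {torque_nm} Nm"


def switch_driving_force(mode: DriveMode, torque_nm: float) -> str:
    """Forward the torque request to the output unit selected by the control information."""
    unit = EngineOutputUnit() if mode is DriveMode.FIRST else MotorOutputUnit()
    return unit.drive(torque_nm)
```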
The first driving force output unit 204 causes the host vehicle M to travel using the internal combustion engine as a power source. The first driving force output unit 204 includes, for example, an engine ECU (Electronic Control Unit) 204A and an engine 204B. The engine ECU 204A controls the engine 204B when it receives an instruction to switch to the first drive mode from the driving force switching unit 202. The engine ECU 204A adjusts the throttle opening, the gear stage, and the like of the engine 204B in accordance with the information input from the second control unit 160 or from the driving operation element 80, and outputs a travel driving force for causing the host vehicle M to travel.
The second driving force output unit 206 includes, for example, a motor ECU 206A, a vehicle battery 206B, and a travel motor 206C. The motor ECU 206A controls the driving of the travel motor 206C using electric power supplied from the vehicle battery 206B. The motor ECU 206A adjusts the duty ratio of the PWM signal applied to the travel motor 206C in accordance with the information input from the second control unit 160 or from the driving operation element 80, and causes the travel motor 206C to output a travel driving force (torque) for driving the host vehicle M. The motor ECU 206A may also charge the vehicle battery 206B by, for example, returning to the vehicle battery 206B the electricity generated when the travel motor 206C is forcibly rotated by the wheels after the accelerator is released. The vehicle battery 206B is a secondary battery such as a lithium ion battery. The vehicle battery 206B is charged and discharged under the control of the motor ECU 206A. In the second drive mode described above, the driving sound caused by vibration and the like during traveling is smaller than in the first drive mode in which the engine is driven.
The brake device 210 includes, for example, a caliper, a hydraulic cylinder that transmits hydraulic pressure to the caliper, an electric motor that generates hydraulic pressure in the hydraulic cylinder, and a brake ECU. The brake ECU controls the electric motor in accordance with information input from the second control unit 160 or information input from the driving operation element 80, and outputs a braking torque corresponding to a braking operation to each wheel. The brake device 210 may be provided with a mechanism for transmitting the hydraulic pressure generated by the operation of the brake pedal included in the driving operation element 80 to the hydraulic cylinder via the master cylinder as a backup. The brake device 210 is not limited to the above-described configuration, and may be an electronically controlled hydraulic brake device that transmits the hydraulic pressure of the master cylinder to the hydraulic cylinder by controlling the actuator in accordance with information input from the second control unit 160.
The steering device 220 includes, for example, a steering ECU and an electric motor. The electric motor changes the orientation of the steering wheel by applying a force to a rack-and-pinion mechanism, for example. The steering ECU drives the electric motor in accordance with information input from the second control unit 160 or information input from the driving operation element 80 to change the direction of the steered wheels.
[ information providing Server ]
Fig. 5 is a configuration diagram of information providing server 300 according to the embodiment. The information providing server 300 includes a server-side communication unit 310, an input unit 320, an output unit 330, a server-side control unit 340, and a server-side storage unit 360. The information providing server 300 can function as a cloud server that communicates with the vehicle 2 (the host vehicle M, another vehicle M) mounted with the vehicle system 3 via the network NW and transmits and receives various data.
The server-side communication unit 310 includes a communication interface such as a NIC (Network Interface Card). The server-side communication unit 310 communicates with the vehicle or other external devices via the network NW using, for example, a cellular network, a Wi-Fi network, or Bluetooth.
The input unit 320 is a user interface such as a button, a keyboard, and a mouse. The input unit 320 receives an operation by a server administrator or the like. The input unit 320 may be a touch panel integrally configured with the display unit of the output unit 330.
The output unit 330 outputs information to the server administrator or the like. The output unit 330 includes, for example, a display unit for displaying images and an audio output unit for outputting audio. The display unit includes, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL display. The display unit displays an image of the information output by the server-side control unit 340. The audio output unit is, for example, a speaker. The audio output unit outputs audio of the information output by the server-side control unit 340.
The server-side control unit 340 includes, for example, an imaging control unit 342, an image acquisition unit 344, an extraction unit 346, an editing unit 348, an information providing unit 350, and an accounting unit 352. Each component of the server-side control unit 340 is realized by a processor such as a CPU executing a program stored in the server-side storage unit 360. Part or all of the components of the server-side control unit 340 may be implemented by hardware (circuit unit) such as an LSI, ASIC, FPGA, or GPU, or may be implemented by cooperation of software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the information providing server 300, or may be stored in a removable storage medium such as a DVD, a CD-ROM, or a memory card, and the storage medium (the non-transitory storage medium) may be attached to the storage device of the information providing server 300 by being attached to a drive device, a card slot, or the like.
The server-side storage unit 360 is implemented by a storage device such as an HDD, a flash memory, an EEPROM, a ROM, and a RAM. The server-side storage unit 360 stores, for example, a camera DB362, an image DB364, a user DB366, a provided image DB368, a provider DB370, a program read out and executed by a processor, and other various information.
The imaging control unit 342 instructs cameras, such as the camera 10 mounted on the vehicle 2, the camera of the terminal device 400, the camera provided in the drone 600, and the stationary camera device 500, to capture images, and requests acquisition of the captured images. For example, the imaging control unit 342 acquires the position information of each camera by referring to the camera DB 362 stored in the server-side storage unit 360.
Fig. 6 is a diagram showing an example of the contents of the camera DB 362. In the camera DB 362, the position information of the camera and the attribute information of its owner are associated with a camera ID, which is identification information for identifying the camera. The position information is, for example, latitude and longitude information. The position information may include angle-of-view information of the camera. The imaging control unit 342 acquires position information from each camera other than the stationary camera device 500 at a predetermined cycle, associates the acquired position information with the camera ID, and registers it in the camera DB 362. This makes it possible to grasp more accurately the current position of a camera mounted on a moving object such as the vehicle 2, the terminal device 400, or the drone 600. The owner attribute information includes, for example, information on the sex and age of the owner of the camera identified by the camera ID and on applications the owner uses frequently (for example, photography applications).
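A camera DB record of this kind might be sketched as follows; the field names and types are assumptions for illustration and are not taken from Fig. 6.

```python
# Illustrative sketch of one camera DB 362 record: position information and
# owner attribute information keyed by a camera ID.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class CameraRecord:
    camera_id: str
    latitude: float
    longitude: float
    view_angle_deg: Optional[float] = None  # angle-of-view information, if available
    owner_attributes: Dict[str, str] = field(default_factory=dict)  # e.g. {"sex": "...", "age": "...", "frequent_app": "photo"}
```

For cameras mounted on moving objects (the vehicle 2, the terminal device 400, the drone 600), the latitude and longitude fields would be overwritten at the predetermined cycle described above.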
For example, when there is an image request from a user, the imaging control unit 342 refers to the camera DB 362 using the position information of the imaging target acquired together with the image request, and extracts the camera IDs of cameras present in the vicinity of the imaging target (for example, within a predetermined distance around the position of the imaging target). In addition to the position information, the imaging control unit 342 may extract camera IDs satisfying an attribute condition by referring to the owner attribute information in the camera DB 362 based on the attribute information included in the image request. The imaging control unit 342 then outputs a predetermined instruction to the cameras with the extracted camera IDs. The predetermined instruction is, for example, image acquisition instruction information. The image acquisition instruction information includes, for example, position information and feature information of the imaging target. The image acquisition instruction information may include, for example, an instruction to upload images captured by a plurality of cameras to the information providing server 300. The image acquisition instruction information may also include an instruction to extract a captured image and upload it to the information providing apparatus when the distance to the imaging target is within a predetermined distance.
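The camera selection just described might look like the following sketch. The great-circle distance calculation, the 500 m radius, and the dictionary field names are assumptions, not details from the patent.

```python
# Illustrative sketch: pick camera IDs near the imaging target whose owner
# attributes satisfy the attribute condition in the image request.
import math
from typing import Dict, Iterable, List


def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (haversine) distance between two lat/lon points in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def select_camera_ids(camera_db: Iterable[Dict], target_lat: float, target_lon: float,
                      required_attrs: Dict[str, str], radius_m: float = 500.0) -> List[str]:
    selected = []
    for cam in camera_db:
        # condition 1: the camera is within the predetermined distance of the imaging target
        if distance_m(cam["lat"], cam["lon"], target_lat, target_lon) > radius_m:
            continue
        # condition 2: the owner attribute information matches the image request
        if any(cam.get("owner_attrs", {}).get(k) != v for k, v in required_attrs.items()):
            continue
        selected.append(cam["camera_id"])
    return selected
```

Image acquisition instruction information would then be sent to the cameras with the selected camera IDs.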
The image acquisition unit 344 acquires the images captured by the vehicle system 3, the terminal device 400, the stationary camera device 500, and the drone 600 that are connected via the network NW and to which the image acquisition instruction information has been transmitted. The image acquisition unit 344 stores the acquired captured images in the image DB 364. Fig. 7 is a diagram showing an example of the contents of the image DB 364. In the image DB 364, a shooting date and time indicating when an image was captured, a captured image ID as identification information for identifying the captured image, and the captured image data are associated with a camera ID.
The extraction unit 346 extracts images suitable for the user from the image DB 364. For example, the extraction unit 346 acquires the feature information of the person or object registered in the user DB 366 based on the identification information of the person or object included in the image request from the user, and extracts, as temporary images, images that include an object having a high similarity to the feature information (a similarity equal to or greater than a threshold value) from among the images stored in the image DB 364. A temporary image is, for example, an image that has been extracted by the extraction unit 346 but not yet edited by the editing unit 348.
Fig. 8 is a diagram showing an example of the contents of the user DB 366. In the user DB 366, for example, a photographic subject ID, which is identification information for identifying a photographic subject, and feature information are associated with a user ID, which is identification information for identifying a user. When the photographic subject is a person, the feature information is, for example, the contour of the face and the shape, arrangement, and color information of parts such as the eyes, nose, mouth, and ears. When the photographic subject is an object such as the vehicle 2, the feature information is the shape, size, color, and other identification information of the object. For example, when the photographic subject is the vehicle 2, the other identification information may include license plate information, vehicle type information, and the like. The extraction unit 346 may generate or update the feature information by, for example, performing machine learning based on image data (correct answer data) obtained by imaging the photographic subject.
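The similarity-based extraction performed by the extraction unit 346 might be sketched as below. Representing feature information as vectors and comparing them with cosine similarity is an assumption for illustration; the patent only states that images with a similarity at or above a threshold are extracted, not which similarity measure is used.

```python
# Illustrative sketch: extract captured-image IDs containing an object whose
# feature vector is sufficiently similar to the registered feature information.
from typing import Dict, List, Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def extract_temporary_image_ids(image_db: List[Dict], target_feature: Sequence[float],
                                threshold: float = 0.8) -> List[str]:
    """Return IDs of images whose detected objects reach the similarity threshold."""
    hits = []
    for record in image_db:
        for obj_feature in record.get("detected_features", []):
            if cosine_similarity(obj_feature, target_feature) >= threshold:
                hits.append(record["captured_image_id"])
                break
    return hits
```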
The editing unit 348 edits the temporary images and the like extracted by the extraction unit 346 to generate an edited image. The edited image is, for example, an image obtained by superimposing the date, the shooting location, information on the camera that captured the image, or decorations on the captured image. The edited image may include an image group in which a plurality of images are arranged in a predetermined order such as a time series. The edited image may also include an image in which persons, objects, or scenery other than the photographic subject included in the image are hidden.
The temporary image extracted by the extraction unit 346 and the edited image edited by the editing unit 348 are stored in the provided image DB 368. Fig. 9 is a diagram showing an example of contents of the provided image DB 368. In the provided image DB368, a temporary image ID as identification information for identifying a temporary image, an edited image ID as identification information for identifying an edited image, a date and time of provision, and a usage charge are associated with a user ID. The usage charge is the usage charge of each user calculated by the accounting unit 352.
The information providing unit 350 provides the images extracted by the extraction unit 346 to the user who made the image request. For example, when there is an image request from the terminal device 400 of the user, the information providing unit 350 transmits the temporary image extracted by the extraction unit 346 to the terminal device 400 within a predetermined time from the reception of the image request. After the predetermined time has elapsed, the information providing unit 350 transmits the edited image edited by the editing unit 348 to the terminal device 400 of the user. For example, when the vehicle 2 does not have a function of immediately transmitting an image captured by the camera 10, or when an image is captured in a place with a poor communication environment such as a parking lot, the captured image cannot be transmitted to the information providing server 300 immediately. Therefore, within the predetermined time after the image request is received, the information providing unit 350 provides the user with the temporary image extracted by the extraction unit 346 from the images acquired by the image acquisition unit 344, and after the predetermined time has elapsed, provides the user with the edited image edited by the editing unit 348. This makes it possible to provide a temporary image without making the user wait long for an image, and, since the number of available images increases over time, to then provide a higher-quality edited image generated from them. The predetermined time may be a fixed time such as half a day or a day, or may be a time designated by the user. For example, when the user is traveling, the predetermined time may be the travel period.
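The two-stage provision policy can be condensed into the following sketch; the helper name, the 12-hour default, and the use of wall-clock timestamps are assumptions for illustration.

```python
# Illustrative sketch: provide the quickly extracted temporary image within the
# predetermined time after the image request, and the edited image afterwards.
from datetime import datetime, timedelta


def image_to_provide(request_time: datetime, now: datetime,
                     temporary_image_id: str, edited_image_id: str,
                     predetermined_time: timedelta = timedelta(hours=12)) -> str:
    if now - request_time <= predetermined_time:
        return temporary_image_id  # fast response from the images already uploaded
    return edited_image_id         # richer result built as more images arrive
```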
The information providing unit 350 may transmit only an image in which a request instruction from the user is present, out of the provisional image and the edited image. When acquiring the audio and the image together, the information providing unit 350 may provide the image and the audio to the user in association with each other.
The accounting unit 352 calculates the usage fee for the user based on the image provided to the user, and stores the calculated usage fee in the provided image DB368 in association with the user ID of the user who provided the image. The accounting unit 352 may charge a fee according to the number of images, or may charge a fee according to the type of image to be provided (for example, a temporary image or an edited image). The accounting unit 352 derives a reward for the image provider who provides the captured image, and stores the derived reward information in the provider DB 370. The image provider is, for example, an owner of the vehicle 2, a vehicle manufacturer, a user of the terminal device 400, a manager who manages the stationary camera device 500, an operation company that operates the unmanned aerial vehicle 600, and the like, which have captured the captured image. The reward information is information on money, points, commodities, services, and the like, for example.
Fig. 10 is a diagram showing an example of contents of provider DB 370. In the provider DB370, the captured image ID, the adopted flag, and the reward information are associated with a provider ID that is identification information for identifying a provider. The adoption flag refers to information indicating whether an image provided by a provider is adopted as a provided image (provisional image or edited image). In the example of fig. 10, the adopted flag in the case where the image is adopted as the provided image is set to "1", and the adopted flag in the case where the image is not adopted as the provided image is set to "0", but the identification of the flag is not limited to this. The accounting unit 352 sets the reward a in the case where the captured image is adopted as the provided image to be larger than the reward B in the case where the captured image is not adopted. This makes it possible to increase the awareness of the image provider that the image provider wants to capture an image with a high reward. Even when the captured image is not used as the provided image, the accounting unit 352 provides some reward for the provision of the image. This enables the image provider to increase the desire to provide a captured image.
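The reward rule described above (reward A for adopted images larger than reward B for non-adopted images, with both nonzero) might be expressed as the following sketch; the point values are purely illustrative assumptions.

```python
# Illustrative sketch of the provider reward rule keyed on the adoption flag.
REWARD_A_POINTS = 100  # captured image adopted as a provided image (adoption flag = 1)
REWARD_B_POINTS = 10   # captured image not adopted (adoption flag = 0), still rewarded


def provider_reward(adoption_flag: int) -> int:
    return REWARD_A_POINTS if adoption_flag == 1 else REWARD_B_POINTS
```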
[ terminal device ]
Fig. 11 is a configuration diagram of the terminal device 400 according to the embodiment. The terminal device 400 is a terminal device that can be carried by a user, such as a smartphone, a tablet terminal, or a personal computer. The terminal device 400 includes, for example, a terminal-side communication unit 410, an input unit 420, a display 430, a speaker 440, a position acquisition unit 450, a terminal-side camera 460, an application execution unit 470, an output control unit 480, and a terminal-side storage unit 490. The position acquisition unit 450, the application execution unit 470, and the output control unit 480 are realized by executing a program (software) by a hardware processor such as a CPU, for example. Some or all of these components may be realized by hardware (including circuit units) such as LSIs, ASICs, FPGAs, GPUs, or the like, or may be realized by cooperation of software and hardware. The program may be stored in advance in a storage device (a storage device including a non-transitory storage medium) such as an HDD or a flash memory of the terminal device 400, or may be stored in a removable storage medium such as a DVD, a CD-ROM, or a memory card, and the storage medium (the non-transitory storage medium) may be attached to the storage device of the terminal device 400 by being attached to a drive device, a card slot, or the like.
The terminal-side storage unit 490 is realized by the various storage devices described above, or by an EEPROM, a ROM, a RAM, or the like. The terminal-side storage unit 490 stores information such as the image providing application 492, a temporary image 494, and an edited image 496.
The terminal-side communication unit 410 communicates with external devices such as the vehicle 2 and the information providing server 300 via a network such as a cellular network, a Wi-Fi network, Bluetooth, DSRC, LAN, WAN, or the internet.
The input unit 420 receives input from the user U through operation of various keys, buttons, and the like. The display 430 is, for example, an LCD or an organic EL display. The input unit 420 may be integrated with the display 430 as a touch panel. The display 430 displays various information used in the image providing process of the embodiment under the control of the output control unit 480. The speaker 440 outputs a predetermined sound under the control of the output control unit 480, for example.
The position acquisition unit 450 acquires the position information of the terminal device 400 by a GNSS receiver (not shown) built in the terminal device 400, and transmits the acquired position information to the information providing server 300.
The terminal-side camera 460 is, for example, a digital camera using a solid-state imaging device such as a CCD or a CMOS. The terminal-side camera 460 captures images in the direction the terminal device 400 is facing in response to an operation by the user of the terminal device 400. Images captured by the terminal-side camera 460 are transmitted (uploaded) to the information providing server 300 in accordance with instructions from the application execution unit 470.
The application execution unit 470 is realized by executing the image providing application 492 stored in the terminal-side storage unit 490. The image providing application 492 is, for example, an application program that communicates with the information providing server 300 via the network NW to acquire image data stored in the information providing server 300. The image providing application 492 causes the display 430 to display an image request screen for setting the image information the user wishes to receive, and accepts the setting information (image request) from the user. The image providing application 492 transmits the received image request to the information providing server 300 and acquires an image (which may include sound) corresponding to the image request.
The application execution unit 470 may switch execution of the image providing application 492 on and off, download images (the temporary image 494 and the edited image 496) from the information providing server 300 and store them in the terminal-side storage unit 490, or read out the stored images. The application execution unit 470 may also control transmission of images captured by the terminal-side camera 460 to the information providing server 300.
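A minimal sketch of how the image providing application 492 might transmit an image request to the information providing server 300 is shown below; the endpoint path and the JSON field names are assumptions introduced for illustration, not an API defined by the embodiment.

```python
# Hypothetical sketch of the image request transmission by the image
# providing application 492.  Endpoint path and field names are assumptions.
import json
import urllib.request

def send_image_request(server_url: str, user_id: str, settings: dict) -> dict:
    """Transmit the settings entered on the image request screen."""
    payload = {
        "user_id": user_id,
        "shooting_level": settings["shooting_level"],    # e.g. 1..5 (fig. 16)
        "subject": settings["subject"],                   # e.g. "host vehicle M"
        "want_temporary_image": settings["temporary"],
        "want_edited_image": settings["edited"],
        "want_sound": settings["sound"],
        "owner_attribute": settings.get("attribute"),     # e.g. "woman in her 20s"
    }
    req = urllib.request.Request(
        server_url + "/image_request",                    # assumed endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```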
The output control unit 480 controls the content and display mode of images shown on the display 430 and the content and output mode of sound output from the speaker 440, in accordance with instructions from the application execution unit 470. The output control unit 480 may also generate an inquiry image asking the user whether or not to photograph the imaging target, based on control information from the information providing server 300, and display the inquiry image on the display 430.
[Stationary camera device]
Fig. 12 is a configuration diagram of the stationary camera device 500. The stationary camera device 500 is a camera, such as a surveillance camera or a security camera, that captures its surroundings at predetermined intervals or when an object is detected. The stationary camera device 500 includes, for example, a camera-side communication unit 510, a camera 520, a microphone 530, an imaging control unit 540, and a storage unit 550.
The camera-side communication unit 510 communicates with external devices such as the vehicle 2 and the information providing server 300 via a network such as a cellular network, a Wi-Fi network, Bluetooth, DSRC, LAN, WAN, or the internet.
The camera 520 is a digital camera using a solid-state imaging device such as a CCD or a CMOS. The camera 520 captures the scenery in a fixed direction under the control of the imaging control unit 540. The microphone 530 collects sound in the vicinity of the place where the stationary camera device 500 is installed. The microphone 530 may collect the surrounding sound at the timing when the camera 520 captures an image, under the control of the imaging control unit 540.
The imaging control unit 540 controls the timing at which the camera 520 starts and ends imaging. The imaging control unit 540 may start imaging by the camera 520 at a predetermined timing (for example, during a predetermined time period or when a vehicle passes), or may start imaging by the camera 520 based on instruction information from the information providing server 300. The imaging control unit 540 stores the captured image 552 captured by the camera 520 in the storage unit 550, or transmits it together with the camera ID to the information providing server 300 via the camera-side communication unit 510. The imaging control unit 540 may store the sound collected by the microphone 530 in the storage unit 550 together with the captured image 552, or may transmit it to the information providing server 300.
The storage unit 550 is realized by, for example, an HDD, a flash memory, an EEPROM, a ROM, a RAM, or the like. The storage unit 550 stores information such as a captured image 552 captured by the camera 520.
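The behavior of the imaging control unit 540 described above may be pictured with the following sketch; the class structure and the interfaces of the camera, microphone, storage, and communication objects are assumptions introduced for the example and do not appear in the embodiment.

```python
# Hypothetical sketch of the imaging control unit 540 of the stationary
# camera device 500.  All interfaces are assumptions for illustration only.
import time

class StationaryCameraController:
    def __init__(self, camera_id, camera, microphone, storage, uplink):
        self.camera_id = camera_id
        self.camera = camera          # assumed object with capture() -> image
        self.microphone = microphone  # assumed object with record(sec) -> audio
        self.storage = storage        # local store for the captured image 552
        self.uplink = uplink          # wrapper around the communication unit 510

    def capture_and_upload(self, record_sound: bool = False):
        """Capture an image (and optionally sound), store it, and upload it."""
        image = self.camera.capture()
        sound = self.microphone.record(5) if record_sound else None
        self.storage.save(image, sound)        # keep a local copy
        self.uplink.upload({                   # send to the information providing server 300
            "camera_id": self.camera_id,
            "timestamp": time.time(),
            "image": image,
            "sound": sound,
        })
```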
[Drone]
Fig. 13 is a structural diagram of the drone 600. The drone 600 includes, for example, a drone-side communication unit 610, a drone-side camera 620, a position acquisition unit 630, a sensor 640, a motor 650, a charging unit 660, a control unit 670, and a drone-side storage unit 680.
The drone-side communication unit 610 is, for example, a communication interface for connecting to the network NW or for communicating directly with the information providing server 300. The drone-side communication unit 610 may include, for example, a NIC, and may perform wireless communication based on Wi-Fi, DSRC, Bluetooth, or other communication standards. A plurality of communication units corresponding to different purposes may be prepared as the drone-side communication unit 610.
The drone-side camera 620 is, for example, a digital camera using a solid-state imaging device such as a CCD or a CMOS. One or more drone-side cameras 620 are installed at arbitrary positions on the drone 600. The drone-side camera 620 captures an image of the space in the direction in which it is installed, at a predetermined timing, under the control of the control unit 670.
The position acquisition unit 630 acquires the position of the drone 600 by a GNSS receiver (not shown). The position acquisition unit 630 outputs the acquired position information to the control unit 670 or transmits the position information to the information providing server 300 via the drone-side communication unit 610. The sensor 640 is, for example, a magnetic sensor, a three-axis gyro sensor, a three-axis acceleration sensor, or the like. The content detected by the sensor 640 is output to the control unit 670 and the like.
The motor 650 includes a plurality of motors, and each motor is provided with a propulsion mechanism such as a propeller. The charging unit 660 includes a rechargeable battery and supplies electric power to each part of the drone 600. Further, the charging unit 660 may include a connection unit for charging the rechargeable battery.
The control unit 670 is realized by a processor such as a CPU executing a program (software) stored in the storage unit 290. Some or all of its functions may be realized by hardware such as an LSI, an ASIC, an FPGA, or a GPU, or may be realized by cooperation of software and hardware. The control unit 670 controls the motor 650 based on, for example, the position information acquired by the position acquisition unit 630 and the detection results of the sensor 640, so that the drone 600 flies in accordance with a flight plan or the like acquired from the information providing server 300 or another source. The flight plan includes, for example, a flight path, altitude, speed, and flight mode. Flight modes include ascent, straight flight, descent, hovering, parallel flight, and the like. The parallel flight mode is a mode in which the drone follows the movement of the photographic subject.
Further, the control unit 670 controls the drone-side camera 620 so as to capture an image of a subject that is within a predetermined distance or within the angle of view of the drone-side camera 620, based on the image acquisition instruction information received from the information providing server 300. For example, the control unit 670 analyzes the image captured by the drone-side camera 620 to acquire feature information of objects in the image, and keeps the captured image when the similarity between that feature information and the feature information of the subject included in the imaging instruction information from the information providing server 300 is high. The control unit 670 stores the captured image of the drone-side camera 620 in the drone-side storage unit 680, or transmits (uploads) it together with the camera ID to the information providing server 300 via the drone-side communication unit 610. Further, the control unit 670 may identify the imaging target by communicating with it (for example, the vehicle 2 or a terminal device) using the drone-side communication unit 610, and photograph the identified target.
The drone-side storage unit 680 is realized by, for example, an HDD, a flash memory, an EEPROM, a ROM, a RAM, or the like. The drone-side storage unit 680 stores information such as a captured image 682 captured by the drone-side camera 620.
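A minimal sketch of the similarity-based capture decision of the control unit 670 is given below; the feature representation, the cosine-similarity measure, and the threshold value are assumptions, since the embodiment does not specify a particular analysis method.

```python
# Hypothetical sketch of the capture decision in the drone control unit 670.
# The feature vectors and the 0.8 threshold are assumptions for illustration.
SIMILARITY_THRESHOLD = 0.8  # assumed cutoff for "high" similarity

def cosine_similarity(a, b):
    """Similarity between two feature vectors in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def should_keep_frame(frame_features, subject_features):
    """Keep the captured frame when it likely contains the requested subject."""
    return cosine_similarity(frame_features, subject_features) >= SIMILARITY_THRESHOLD
```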
[Specific example of the information providing system]
A specific example of the information providing system 1 according to the embodiment will be described below. Fig. 14 is a diagram for explaining a specific example of the information providing system 1 according to the embodiment. The example of fig. 14 shows an intersection CR1 and roads R1 to R4 connected to the intersection CR1. The host vehicle M travels at a speed VM on the road R1 toward the intersection CR1 (the Y direction in the figure), and another vehicle M1 travels at a speed VM1 on the road R1 away from the intersection CR1 (the Y direction in the figure). In the example of fig. 14, stationary camera devices 500-1 and 500-2 are installed near the road. A stationary camera device 500-3 is also installed in a store near the intersection CR1. In addition, in the example of fig. 14, the drone 600 is flying near the intersection CR1. The information providing server 300 acquires images captured by the cameras of the host vehicle M, the other vehicle M1, the stationary camera devices 500-1 to 500-3, and the drone 600. The camera 10 and the vehicle interior camera 90 mounted on the host vehicle M are examples of the "first imaging unit". The cameras 520 of the stationary camera devices 500-1 to 500-3 are examples of the "second imaging unit". The "second imaging unit" may also include the camera 10 and the vehicle interior camera of the other vehicle M1, and the drone-side camera 620 of the drone 600. In the following description, the terminal-side camera 460 of a terminal device 400 used by a person other than the user U1 (for example, the user U2) is an example of the "third imaging unit". An image captured by the first imaging unit is an example of a "first image", an image captured by the second imaging unit is an example of a "second image", and an image captured by the third imaging unit is an example of a "third image".
An example of a scenario in which the user U1 using the terminal device 400-1 uses the information providing service according to the present embodiment will be described below. First, the user U1 starts the image providing application 492 installed in the terminal device 400-1. When started, the image providing application 492 causes an image IM1 containing an image request to be displayed on the display 430. Fig. 15 is a diagram showing an example of the image IM1 including the image request. The image IM1 includes, for example, a title display area A11, an image setting area A12, and a GUI (Graphical User Interface) switch display area A13. Information urging the user to enter settings related to the image request is displayed in the title display area A11. In the example of fig. 15, text information such as "<Image request screen> Please enter your request for the provided image information." is displayed in the title display area A11.
In the image setting area A12, combo boxes CB11 to CB16 and the like are displayed for setting items such as information on the shooting level, information on the subject, whether image provision is desired, whether sound is desired, and the attribute of the camera owner. The shooting level is level information set according to, for example, whether captured images need to be transmitted (uploaded) to the information providing server 300, whether image provision is required, the shooting conditions, and the like.
Fig. 16 is a diagram for explaining the contents of the shooting level information. In the shooting level information, shooting contents are associated with shooting levels. The shooting contents include control content for not uploading images captured by the plurality of cameras to the information providing server 300, and control content for uploading to the information providing server 300 only images captured outside a predetermined area. The shooting contents may include control content that permits uploading of images captured by at least one of the "first imaging unit", the "second imaging unit", and the "third imaging unit". The shooting contents may also include attribute information of the owner of the "third imaging unit" and control content that permits uploading only from a third imaging unit that matches the attribute.
In the example of fig. 16, "level 1" indicates that neither uploading of captured images nor provision of images to the user is performed. "Level 2" is level information indicating that uploading of captured images and provision of information are permitted only outside a predetermined area (in other words, uploading and provision inside the predetermined area are not permitted). "Level 3" is level information indicating that uploading of captured images and image provision are permitted. "Level 4" is level information permitting uploading of captured images and image provision only from cameras (the first imaging unit and the second imaging unit) other than terminal devices of persons other than the user. "Level 5" is level information indicating that uploading of captured images and image provision are permitted from terminal devices of other persons who satisfy the specified camera owner attribute. The shooting level information shown in fig. 16 may be stored in the terminal-side storage unit 490 or in the server-side storage unit 360, for example. By selecting "level 1" in the combo box CB11, the user U1 can, for example, prevent shooting and image provision in situations or environments the user U1 does not intend. By selecting "level 2", the user U1 can, for example, suppress shooting and image provision in a specific area such as the user's home while receiving image provision only at sightseeing spots. By selecting "level 3", the user U1 can, for example, obtain images of the user's own activities, events, and travel over the course of a day, captured by one or more external cameras. By selecting "level 4", the user U1 can obtain images captured by, for example, vehicle-mounted cameras or fixed cameras installed along streets. By selecting "level 5", the user U1 can obtain images closer to what the user desires, for example, according to the sex and age of the camera owner.
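As an illustration only, the level table of fig. 16 could be interpreted by the server roughly as in the following sketch; the policy dictionary and the helper function are assumptions that restate the table in code form and are not part of the embodiment.

```python
# Hypothetical sketch of how the server-side control unit 340 might apply
# the shooting levels of fig. 16.  The structure below is an assumption.
SHOOTING_LEVEL_POLICY = {
    1: {"upload": False, "provide": False},
    2: {"upload": True,  "provide": True,  "outside_predetermined_area_only": True},
    3: {"upload": True,  "provide": True},
    4: {"upload": True,  "provide": True,  "allowed_units": ["first", "second"]},
    5: {"upload": True,  "provide": True,  "require_owner_attribute": True},
}

def is_upload_permitted(level: int, unit: str, in_restricted_area: bool,
                        owner_attribute_ok: bool = True) -> bool:
    """Decide whether an image from the given imaging unit may be uploaded."""
    policy = SHOOTING_LEVEL_POLICY[level]
    if not policy["upload"]:
        return False
    if policy.get("outside_predetermined_area_only") and in_restricted_area:
        return False
    if "allowed_units" in policy and unit not in policy["allowed_units"]:
        return False
    if policy.get("require_owner_attribute") and unit == "third" and not owner_attribute_ok:
        return False
    return True
```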
In addition, when any of "level 2" to "level 5" is set as the shooting level, a combo box CB12 for setting the imaging target, combo boxes CB13 and CB14 for selecting whether provision of a temporary image and of an edited image is desired, a combo box CB15 for selecting whether sound is desired, a combo box CB16 for selecting the attribute, and the like are displayed. The imaging targets selectable in the combo box CB12 are, for example, targets registered in advance in the user DB 366. The sound is sound recorded before and after the captured image, for example, sound inside or outside the vehicle collected by a microphone or the like included in the HMI30, or ambient sound collected by the stationary camera device 500. By entering setting conditions in these boxes, an image providing service that matches the intention of the user U1 can be executed. In the example of fig. 15, "level 3" is selected as the shooting level, "vehicle (host vehicle M)" is selected as the target, "desired" is set for the temporary image, the edited image, and the sound, and "woman in her twenties" is selected as the attribute. For example, since a woman in her twenties is presumed to be more accustomed to taking pictures with her terminal device 400 than people of other ages and sexes, setting the attribute of the camera owner to "woman in her twenties" makes it possible to obtain higher-quality images.
Icons for receiving instructions from the user U1 are displayed in the GUI switch display area A13, for example. The icons include, for example, an icon IC11 labeled "OK" and an icon IC12 labeled "CANCEL". When the user U1 selects the icon IC11, the image providing application 492 transmits the information (image request) entered in the image setting area A12 to the information providing server 300 via the terminal-side communication unit 410 and ends the display of the image IM1 of the shooting setting screen. When the user U1 selects the icon IC12, the contents set in the image setting area A12 are canceled and the display of the image IM1 is ended.
The information providing server 300 acquires the information set in the image setting area A12 from the terminal device 400 and performs various controls related to image provision based on the acquired information. For example, when the shooting level is "level 1", the server-side control unit 340 suspends the acquisition of images from the cameras or suppresses the provision of images to the terminal device 400. When the shooting level is "level 2" to "level 5", the server-side control unit 340 refers to the camera DB362 under the conditions corresponding to each level to acquire camera IDs, and transmits image acquisition instruction information to the devices corresponding to the acquired camera IDs. The image acquisition instruction information may include, in addition to the position and feature information of the subject, the expected time until the subject comes within a predetermined distance (a distance at which shooting is predicted to be possible), and the like. Each device that receives the imaging instruction photographs the subject when the subject approaches within the predetermined distance.
Fig. 17 is a diagram for explaining a situation in which the host vehicle M is photographed. In the example of fig. 17, the host vehicle M, which is set as the imaging target, is photographed by the terminal-side camera 460 of the terminal device 400-2, the camera 520 of the stationary camera device 500-1, and the drone-side camera 620. The terminal device 400-2, the stationary camera device 500-1, and the drone 600 acquire the position information of the host vehicle M based on the image acquisition instruction information from the information providing server 300, determine whether the host vehicle M is approaching based on the acquired position information, and photograph the host vehicle M when it approaches.
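A minimal sketch of the approach check performed by each device that receives the image acquisition instruction information is shown below; the 50 m threshold and the haversine distance calculation are assumptions, since the embodiment only refers to a "predetermined distance".

```python
# Hypothetical sketch of the approach check on each receiving device.
# The threshold value and distance formula are assumptions for illustration.
import math

CAPTURE_DISTANCE_M = 50.0  # assumed "predetermined distance"

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two GNSS positions in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def subject_is_approaching(own_pos, subject_pos):
    """True when the subject is close enough for the device to start shooting."""
    return distance_m(*own_pos, *subject_pos) <= CAPTURE_DISTANCE_M
```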
When receiving a shooting instruction from the information providing server 300, the output control unit 480 of the terminal device 400-2 may cause the display 430 to display an image asking the user U2 whether or not to shoot. Fig. 18 is a diagram showing an example of an image IM2 for asking the user of the terminal device 400-2 whether or not to shoot. The image IM2 includes, for example, a title display area A21, an inquiry content display area A22, and a GUI switch display area A23. Information indicating that this is a shooting request from the information providing server 300 (for example, "Inquiry") is displayed in the title display area A21.
Information asking the user U2 of the terminal device 400-2 whether or not to photograph the imaging target is displayed in the inquiry content display area A22. In the example of fig. 18, text information such as "A shooting request has been received. The subject will approach in 3 minutes. Do you want to shoot?" is displayed in the inquiry content display area A22.
Icons for receiving instructions from the user U2 are displayed in the GUI switch display area A23, for example. The icons include, for example, an icon IC21 labeled "Yes" and an icon IC22 labeled "No". When the user U2 selects the icon IC21, the output control unit 480 generates an image IM3 that presents the feature information of the subject acquired from the information providing server 300, and displays the generated image on the display 430. When the user U2 selects the icon IC22, the output control unit 480 ends the display of the image IM2.
Fig. 19 is a diagram showing an example of the image IM3 including the feature information. The image IM3 includes, for example, a title display area A31 and a photographic subject display area A32. Information indicating that the subject is being announced (for example, "Notification") is displayed in the title display area A31. Information on the photographic subject is displayed in the photographic subject display area A32. The image providing application 492 may cause the image IM3 to be displayed on the display 430 when the distance between the terminal device 400-2 and the subject (the host vehicle M) falls within a predetermined distance, or when the subject is predicted to pass by within a predetermined time. In the example of fig. 19, text information such as "The subject is about to pass. Please photograph a vehicle with the following features.", "<Feature information> License plate number: 55-55", "Vehicle type: ○○", "Color: red" is displayed in the photographic subject display area A32. The image providing application 492 may start the terminal-side camera 460 to prepare for shooting while the image IM3 is displayed, or may start the terminal-side camera 460 to prepare for shooting after the image IM3 has been displayed for a predetermined time. This makes it easier for the user U2 to photograph the subject.
The terminal device 400-2 used by the user U2, the stationary camera device 500-1, and the drone 600 may each communicate with the host vehicle M, which is the imaging target, to acquire the distance to the host vehicle M.
Here, the image acquisition control unit 180 of the host vehicle M may perform control to reduce the speed of the host vehicle M when it approaches a camera that is photographing it. Information on the positions of the cameras photographing the host vehicle M may be acquired from the information providing server 300, or by direct communication with each device. In the example of fig. 17, under the control of the image acquisition control unit 180, the automatic driving control device 100 of the host vehicle M executes driving control to decelerate from the speed VM to VM# (VM > VM#) when the distances to the terminal device 400-2, the stationary camera device 500-1, and the drone 600, which are photographing the host vehicle M, become equal to or less than a predetermined distance. The amount of deceleration may be a fixed value, or may be set based on the current speed of nearby vehicles or the road environment. Under the control of the image acquisition control unit 180, the automatic driving control device 100 may also control the steering (steering angle) so that the host vehicle M travels at a position or along a route that is easy for the stationary camera device 500-1 or the terminal device 400-2 to capture.
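The deceleration decision described above can be illustrated with the following sketch; the distance threshold and the deceleration factor are assumptions, since the embodiment allows the deceleration amount to be fixed or derived from the surrounding conditions.

```python
# Hypothetical sketch of the deceleration decision by the image acquisition
# control unit 180.  The threshold and factor below are assumed values.
SLOWDOWN_DISTANCE_M = 30.0
SLOWDOWN_FACTOR = 0.7       # in this sketch, VM# = 0.7 * VM

def target_speed(current_speed_mps, distance_to_nearest_camera_m):
    """Return the speed the vehicle should adopt while being photographed."""
    if distance_to_nearest_camera_m <= SLOWDOWN_DISTANCE_M:
        return current_speed_mps * SLOWDOWN_FACTOR
    return current_speed_mps
```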
The images captured by the terminal-side camera 460 of the terminal device 400-2, the camera 520 of the stationary camera device 500-1, and the drone-side camera 620 are transmitted to the information providing server 300. The image acquisition unit 344 of the information providing server 300 acquires the captured images of the terminal-side camera 460 of the terminal device 400-2, the camera 520 of the stationary camera device 500-1, and the drone-side camera 620, and stores the acquired images in the image DB 364. The extraction unit 346 extracts images including the host vehicle M from the images registered in the image DB 364. Fig. 20 is a diagram showing an example of the images extracted by the extraction unit 346. The example of fig. 20 shows images IM41, IM42, and IM43 … of the host vehicle M captured by the terminal-side camera 460 of the terminal device 400-2, the camera 520 of the stationary camera device 500-1, and the drone-side camera 620, respectively.
When the image request from the terminal device 400-1 includes information indicating that a temporary image is desired, the information providing unit 350 transmits the images extracted by the extraction unit 346 to the terminal device 400-1 as temporary images. This enables the captured images to be provided to the user U1 more quickly.
In the above example, the case where cameras in the surroundings photograph the host vehicle M has been described, but the camera 10 of the host vehicle M may also photograph the imaging target. Fig. 21 is a diagram showing an example of an image IM5 including the user U2 captured by the host vehicle M. In the example of fig. 21, the imaging target is the user U2. When photographing the user U2 based on the image acquisition instruction information from the information providing server 300, the image acquisition control unit 180 of the host vehicle M communicates with the terminal device 400-2 of the user U2, acquires, from the images continuously captured by the camera 10 for recognizing the surroundings of the host vehicle M, an image IM5 captured at the time point when the distance between the host vehicle M and the terminal device 400-2 falls within a predetermined distance, and transmits (uploads) the acquired image IM5 to the information providing server 300.
When the user U2 desires sound, the image acquisition control unit 180 collects sound with a microphone included in the HMI30 while within a predetermined distance of the terminal device 400-2, and transmits (uploads) the collected sound information together with the captured image to the information providing server 300.
When sound near the user U2 is collected with the microphone mounted on the host vehicle M, the sound may not be collected accurately because of the engine sound of the host vehicle M. Therefore, when the running driving force output device 200 is operating in the first drive mode, the image acquisition control unit 180 performs control to switch it to operation in the second drive mode. In this way, when the microphone mounted on the host vehicle M collects sound near the position of the user U2, noise from the drive system of the host vehicle M can be suppressed, and the sound around the user U2 (in the example of fig. 21, "Take a picture!") can be collected more clearly.
The image acquisition control unit 180 may also photograph the user U1 with the vehicle interior camera 90 and transmit (upload) the captured image together with the camera ID to the information providing server 300. Fig. 22 is a diagram showing an example of an image IM6 including the user U1 seated in the driver seat DS of the host vehicle M. For example, the image acquisition control unit 180 analyzes images of the user U1 captured by the vehicle interior camera 90, extracts images in which the user U1 has a specific expression (for example, a smile or a look of surprise), and transmits the extracted images to the information providing server 300. The image acquisition control unit 180 may also acquire the position information of the host vehicle M and transmit to the information providing server 300 the image of the user U1 captured by the vehicle interior camera 90 at the time the host vehicle M reaches a specific position (for example, a sightseeing spot or a scenic spot). In this way, the information providing server 300 can provide the user U1 with images captured by various cameras.
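A minimal sketch of the expression-based frame selection is shown below; the detect_expression function stands in for whatever image analysis method is actually used and is an assumption introduced for the example.

```python
# Hypothetical sketch of the in-vehicle image selection by the image
# acquisition control unit 180.  detect_expression() is an assumed callback.
TARGET_EXPRESSIONS = {"smile", "surprise"}

def select_frames_to_upload(frames, detect_expression):
    """Keep only frames in which the occupant shows a target expression."""
    selected = []
    for frame in frames:
        expression = detect_expression(frame)   # e.g. "smile", "neutral", ...
        if expression in TARGET_EXPRESSIONS:
            selected.append(frame)
    return selected
```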
Within a predetermined time after the user's shooting request, the information providing server 300 transmits the temporary images captured by each camera to the terminal device 400-1 of the user U1; after the predetermined time has elapsed, the information providing server 300 transmits the edited image edited by the editing unit 348 to the terminal device 400-1. For example, when the image request from the terminal device 400-1 includes information indicating that an edited image is desired, the editing unit 348 edits at least one image satisfying a predetermined condition from among the temporary images, and the information providing unit 350 transmits the edited image to the terminal device 400-1.
For example, the editing unit 348 extracts images satisfying a predetermined condition from the plurality of temporary images IM41, IM42, IM43 … shown in fig. 20 and the images captured by the cameras of the host vehicle M shown in fig. 21 and 22. The predetermined condition is, for example, that the subject is photographed from the front, that a famous scene, a well-known person, or a landmark at a specific place is included in the image, or that the subject's expression is a smile. The editing unit 348 then generates an edited image IM41# in which comment information CM1 is added to the extracted image IM41, and transmits the edited image IM41# to the terminal device 400-1. The comment information CM1 includes, for example, the shooting date and time, the shooting location, and information on the camera used for shooting. In this way, the user U1 can obtain not only images captured by the user but also images captured by various cameras present in the surroundings.
The editing unit 348 may also create an image album by arranging images captured by a plurality of cameras in a predetermined order. Fig. 23 is a diagram showing an example of an image album AL1 edited by the editing unit 348. In the example of fig. 23, the image album AL1 representing one day of the user U1 is created using images captured by a plurality of cameras. The images IM71 and IM75 are images captured by a camera installed in the user's home. The image IM72 is an image captured by a stationary camera device 500 or a vehicle on the commuting route. The images IM73 and IM74 are images captured by cameras installed at the company of the user U1. The image album AL1 may include sound information acquired together with the images. As shown in fig. 23, by creating the image album AL1 with images captured by a plurality of cameras arranged in time series, the editing unit 348 enables, for example, the user U1 or a manager of the user U1 to easily grasp the user U1's activities during the day. The editing unit 348 can also evoke memories for the user viewing the album by creating an image album in which, for example, images captured at points along a route to a destination set for a family trip, a drive, or the like are arranged in time series.
The editing unit 348 may also create an image album based on, for example, a long-term history of the user's captured images. Fig. 24 is a diagram showing an example of an image album AL2 edited by the editing unit 348. The example of fig. 24 shows an image album AL2 in which images IM81 to IM85 of the user U2 taken at the ages of 10, 20, 40, 50, and 70 are arranged in time series. The predetermined order is not limited to time series; the images may also be arranged based on a predetermined reference such as the size, color, or height of the subject. By providing the image albums shown in fig. 23 and fig. 24 to the users, the editing unit 348 can provide an image service with higher added value.
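The album editing described above may be pictured with the following sketch; the metadata keys and the alternative ordering criteria are assumptions introduced for the example and do not define how the editing unit 348 is actually implemented.

```python
# Hypothetical sketch of the album editing in the editing unit 348.
# Metadata keys ("timestamp", "camera_id", "subject_size") are assumptions.
def build_album(images, order="time"):
    """Arrange captured images into an album in a predetermined order."""
    if order == "time":
        key = lambda img: img["timestamp"]        # chronological album (AL1, AL2)
    elif order == "subject_size":
        key = lambda img: img["subject_size"]     # example of an alternative order
    else:
        raise ValueError(f"unknown order: {order}")
    album = sorted(images, key=key)
    for page, img in enumerate(album, start=1):
        # attach simple comment information to each page
        img["comment"] = (f"page {page}: taken {img['timestamp']} "
                          f"by camera {img['camera_id']}")
    return album
```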
[Processing sequence]
Fig. 25 is a sequence diagram showing an example of the flow of processing in the information providing system 1 according to the embodiment. The example of fig. 25 describes a case in which the information providing server 300 provides images in response to an image provision request from the terminal device 400-1 used by the user U1. In the example of fig. 25, images are acquired from the stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600.
The terminal device 400-1 receives the shooting setting information entered on the shooting setting screen IM1 by the operation of the user U1 (step S100), and transmits the received shooting setting information to the information providing server 300 (step S102). The information providing server 300 extracts cameras to be used for shooting from the plurality of cameras registered in advance in the camera DB362, based on the shooting setting information received from the terminal device 400-1 (step S104). In the example of fig. 25, the stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600 are extracted. The information providing server 300 transmits an image acquisition instruction to each of the stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600 (steps S106 to S112).
The stationary camera device 500 acquires a captured image when the imaging target set in the shooting setting information comes within a predetermined distance, based on the image acquisition instruction from the information providing server 300 (step S114). The other vehicle m acquires a captured image when the imaging target approaches within the predetermined distance, based on the image acquisition instruction from the information providing server 300 (step S116). The terminal device 400-2 acquires an image captured by the user U2 when the user U2 permits the shooting and the imaging target approaches within the predetermined distance, based on the image acquisition instruction from the information providing server 300 (step S118). The drone 600 performs shooting when the imaging target approaches within the predetermined distance, based on the image acquisition instruction from the information providing server 300 (step S120). The images captured by the stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600 are transmitted (uploaded) to the information providing server 300 (steps S122 to S128). The stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600 each shoot at their respective shooting timings, and transmit (upload) the captured images to the information providing server 300 once communication with the information providing server 300 becomes possible after shooting, or at a predetermined timing.
The information providing server 300 stores the captured images obtained from the stationary camera device 500, the other vehicle m, the terminal device 400-2, and the drone 600 in the server-side storage unit 360 (step S130). Next, the information providing server 300 extracts images in which the imaging target is captured from the server-side storage unit 360 as temporary images (step S132), and transmits the extracted images to the terminal device 400-1 (step S134).
The terminal device 400-1 displays the temporary images on the display 430 (step S136). After a predetermined time has elapsed since the image request was received from the terminal device 400-1, the information providing server 300 edits the images to be provided to the user U1 (step S138) and transmits the edited image to the terminal device 400-1 (step S140). The terminal device 400-1 displays the edited image on the display 430 (step S142). This completes the processing of this sequence.
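As an overview only, the server-side flow of fig. 25 could be organized roughly as in the following sketch; all method names are assumptions standing in for the units described in the embodiment, not an actual interface.

```python
# Hypothetical sketch of the overall server-side flow of fig. 25.
# The server object and its methods are assumptions for illustration only.
def handle_image_request(server, request):
    cameras = server.extract_cameras(request)             # step S104
    for cam in cameras:                                    # steps S106-S112
        server.send_acquisition_instruction(cam, request)
    images = server.collect_uploaded_images(request)       # steps S122-S130
    temporary = server.extract_subject_images(images)      # step S132
    server.send_to_terminal(request.user, temporary)       # steps S134-S136
    edited = server.edit_images(temporary, request)        # step S138
    server.send_to_terminal(request.user, edited)          # steps S140-S142
```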
According to the embodiment described above, the information providing server 300 can provide more appropriate image information to the user. Specifically, according to the embodiment, by using various cameras such as cameras mounted on vehicles and cameras installed at predetermined positions, the user can obtain not only images captured by the user but also images captured from angles and positions at which the user cannot shoot. In addition, according to the embodiment, by providing the user with a temporary version of the images at the time the images requested by the user are extracted, the images can be provided without making the user wait long. Further, since an edited image is later presented to the user as the full version after a predetermined time has elapsed, even if some captured images have not yet been acquired at the extraction stage of the temporary version, an edited image that better matches the user's intention can be provided after the predetermined time has elapsed.
In addition, according to the embodiment, when the imaging target is photographed by a vehicle, a better captured image can be obtained by adjusting the speed or steering of the vehicle. Furthermore, when collecting sound near the imaging target, the sound can be collected more clearly by switching to a drive mode in which noise caused by the running driving force is reduced.
Further, according to the embodiment, when an imaging request is made in the information providing system, setting the shooting level allows the subject to receive image provision only at intended timings. Therefore, for example, it is possible to suppress image provision in daily life and to receive image provision only on special occasions such as a family trip. Moreover, by changing the shooting level, imaging can also be controlled when the user wishes to record an action history or the like.
[Modified examples]
In the information providing system 1 described above, when the imaging target approaches, it is photographed by cameras present in its surroundings; instead of this (or in addition to it), the photographing side may guide the imaging target to a position where photographing is easy. Fig. 26 is a diagram for explaining guidance of the imaging target. Fig. 26 shows a scene in which the host vehicle M, which is the imaging target, parks in the parking lot PA, and the other vehicle M2 performs the photographing.
In this scene, the other vehicle M2 performs inter-vehicle communication with the host vehicle M and transmits a request signal asking the host vehicle M to stop at a position P2 in front of the stop position P1 of the other vehicle M2. When determining that it can stop at the position P2, the automatic driving control device 100 of the host vehicle M executes driving control to stop the host vehicle M at the position P2. In this way, the other vehicle M2 can photograph the stopped host vehicle M with its mounted camera 10.
In the above-described embodiment, the case where the host vehicle M is driven automatically has been described as an example, but other information may be provided to the vehicle when it is driven manually. For example, in the above example, when the host vehicle M passes near a camera that is photographing it, the host vehicle M performs automatic driving involving speed control or steering control; when the vehicle is driven manually, however, the information providing server 300 may transmit to the vehicle an instruction to display a message urging speed control (for example, deceleration) or steering control on the display device of the HMI30, or to output such a message by audio. In addition, when manual driving is being performed and sound is collected by the microphone of the HMI30, the information providing server 300 may transmit to the vehicle an instruction to cause the display device of the HMI30 to display a notification prompting the driver to switch the driving mode of the vehicle from the first drive mode to the second drive mode. Further, when the vehicle is to be parked at the position P2 as shown in fig. 26, the display device may be controlled to display information on the parking position. In this way, when the vehicle 2 is traveling under manual driving, the driver can be asked to drive in a manner that makes it easy to capture images. As a result, more appropriate images can be obtained.
[Hardware configuration]
Fig. 27 is a diagram showing an example of the hardware configuration of the information providing server 300 according to the embodiment. As shown in the figure, the information providing server 300 is configured such that a communication controller 300-1, a CPU 300-2, a RAM 300-3 used as a working memory, a ROM 300-4 storing a boot program and the like, a storage device 300-5 such as a flash memory or an HDD, a drive device 300-6, and the like are connected to one another via an internal bus or a dedicated communication line. The communication controller 300-1 communicates with components other than the information providing server 300. The storage device 300-5 stores a program 300-5a executed by the CPU 300-2. This program is loaded into the RAM 300-3 by a DMA (Direct Memory Access) controller (not shown) or the like and executed by the CPU 300-2. In this way, some or all of the functional components of the information providing server 300 are realized.
The above-described embodiments can be expressed as follows.
An information providing device is configured to include:
a storage device in which a program is stored; and
a hardware processor for executing a program of a program,
executing, by the hardware processor, a program stored in the storage device to perform:
a step of instructing, based on imaging content received from a terminal device used by a user, a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit provided at a predetermined position, and a third imaging unit used by a person other than the user, to perform predetermined imaging based on position information of an imaging target;
acquiring an image including the subject captured by the first, second, and third imaging units, in accordance with the predetermined instruction; and
and providing information including the acquired image satisfying the shooting content to the terminal device.
While the present invention has been described with reference to the embodiments, the present invention is not limited to the embodiments, and various modifications and substitutions can be made without departing from the scope of the present invention.

Claims (7)

1. An information providing apparatus, wherein,
the information providing device is provided with:
an imaging control unit that instructs, based on imaging content received from a terminal device used by a user, a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit disposed at a predetermined position, and a third imaging unit used by a person other than the user, to perform predetermined imaging based on position information of an imaging target;
an image acquisition unit that acquires an image including the subject captured by the first, second, and third imaging units, in accordance with a predetermined instruction by the imaging control unit; and
and an information providing unit that provides information including the image satisfying the shooting content acquired by the image acquiring unit to the terminal device.
2. The information providing apparatus according to claim 1,
the imaging content includes control content for not uploading images captured by the plurality of imaging units to the information providing device.
3. The information providing apparatus according to claim 1,
the imaging content includes control content for uploading images captured outside a predetermined area to the information providing device.
4. The information providing apparatus according to claim 1,
the imaging content includes control content permitting uploading of images captured by at least one of the first imaging unit, the second imaging unit, and the third imaging unit.
5. The information providing apparatus according to claim 1,
the imaging content includes attribute information of an owner of the third imaging unit and control content permitting uploading from a third imaging unit that matches the attribute.
6. An information providing method, wherein,
the information providing method causes an information providing apparatus to perform the following processing:
a step of instructing, based on imaging content received from a terminal device used by a user, a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit provided at a predetermined position, and a third imaging unit used by a person other than the user, to perform predetermined imaging based on position information of an imaging target;
acquiring an image including the subject captured by the first, second, and third imaging units, in accordance with the predetermined instruction; and
and providing information including the acquired image satisfying the shooting content to the terminal device.
7. A storage medium storing a program, wherein,
the program causes the information providing apparatus to perform the following processing:
a step of instructing, based on imaging content received from a terminal device used by a user, a plurality of imaging units including a first imaging unit mounted on a vehicle, a second imaging unit provided at a predetermined position, and a third imaging unit used by a person other than the user, to perform predetermined imaging based on position information of an imaging target;
acquiring an image including the subject captured by the first, second, and third imaging units, in accordance with the predetermined instruction; and
and providing information including the acquired image satisfying the shooting content to the terminal device.
CN202010583659.XA 2019-07-04 2020-06-23 Information providing device, information providing method, and storage medium Pending CN112188077A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-125246 2019-07-04
JP2019125246A JP7210394B2 (en) 2019-07-04 2019-07-04 INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD AND PROGRAM

Publications (1)

Publication Number Publication Date
CN112188077A true CN112188077A (en) 2021-01-05

Family

ID=73919716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010583659.XA Pending CN112188077A (en) 2019-07-04 2020-06-23 Information providing device, information providing method, and storage medium

Country Status (2)

Country Link
JP (1) JP7210394B2 (en)
CN (1) CN112188077A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112885087A (en) * 2021-01-22 2021-06-01 北京嘀嘀无限科技发展有限公司 Method, apparatus, device and medium for determining road condition information and program product

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116887040B (en) * 2023-09-07 2023-12-01 宁波舜宇精工股份有限公司 In-vehicle camera control method, system, storage medium and intelligent terminal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013247377A (en) * 2012-05-23 2013-12-09 Pioneer Electronic Corp Terminal device, control method, program, and storage medium
CN105270261A (en) * 2014-05-30 2016-01-27 Lg电子株式会社 Around view provision apparatus and vehicle including the same
CN105654577A (en) * 2016-03-03 2016-06-08 百度在线网络技术(北京)有限公司 Driving navigation method and driving navigation device
CN106530781A (en) * 2016-09-29 2017-03-22 奇瑞汽车股份有限公司 Method and system for sharing road condition information based on Internet-of-vehicles
CN109739221A (en) * 2018-12-10 2019-05-10 北京百度网讯科技有限公司 Automatic driving vehicle monitoring method, device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5633494B2 (en) * 2011-09-20 2014-12-03 トヨタ自動車株式会社 Information processing system, information processing apparatus, and center server
US10721378B2 (en) * 2016-07-29 2020-07-21 Sony Interactive Entertainment Inc. Image management system and unmanned flying body
JP6748292B2 (en) * 2017-03-31 2020-08-26 本田技研工業株式会社 Album generating apparatus, album generating system, and album generating method


Also Published As

Publication number Publication date
JP7210394B2 (en) 2023-01-23
JP2021013068A (en) 2021-02-04

Similar Documents

Publication Publication Date Title
US10726360B2 (en) Vehicle control system, vehicle control method, and vehicle control program
JP7058233B2 (en) Vehicle control devices, vehicle control methods, and programs
JP7176974B2 (en) Pick-up management device, pick-up control method, and program
JP6561357B2 (en) Vehicle control system, vehicle control method, and vehicle control program
JP2018077649A (en) Remote operation control device, vehicle control system, remote operation control method and remote operation control program
US20200283022A1 (en) Vehicle control system, vehicle control method, and storage medium
CN111376853B (en) Vehicle control system, vehicle control method, and storage medium
US11302194B2 (en) Management device, management method, and storage medium
CN111619569B (en) Vehicle control system, vehicle control method, and storage medium
US20200311623A1 (en) Parking management apparatus, method for controlling parking management apparatus, and storage medium
JP6696006B2 (en) Vehicle control system, vehicle control method, and vehicle control program
JP2020052559A (en) Vehicle control device, vehicle control method, and program
JP2019156133A (en) Vehicle controller, vehicle control method and program
CN112188077A (en) Information providing device, information providing method, and storage medium
JP2020138611A (en) Vehicle control device, vehicle control system, vehicle control method, and program
JP6956132B2 (en) Shooting system, server, control method and program
WO2023102911A1 (en) Data collection method, data presentation method, data processing method, aircraft landing method, data presentation system and storage medium
JP6663343B2 (en) Vehicle control system, vehicle control method, and vehicle control program
JP7155047B2 (en) VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND PROGRAM
CN110301133B (en) Information processing apparatus, information processing method, and computer-readable recording medium
JP7208114B2 (en) INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD AND PROGRAM
JP7311331B2 (en) INFORMATION PROVIDING DEVICE, INFORMATION PROVIDING METHOD AND PROGRAM
US20200282978A1 (en) Vehicle control system, vehicle control method, and storage medium
WO2019163813A1 (en) Vehicle control system, vehicle control method, and program
US20200311621A1 (en) Management device, management method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210105