WO2017057055A1 - Information processing device, information terminal and information processing method - Google Patents

Information processing device, information terminal and information processing method

Info

Publication number
WO2017057055A1
WO2017057055A1 (PCT/JP2016/077428)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
information
moving body
accident
map
Prior art date
Application number
PCT/JP2016/077428
Other languages
French (fr)
Japanese (ja)
Inventor
Akihiko Kaino
Takaaki Kato
Masashi Eshima
Shingo Tsurumi
Masaki Fukuchi
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Publication of WO2017057055A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00 Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095 Predicting travel path or likelihood of collision
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems

Definitions

  • the present technology relates to an information processing device, an information terminal, and an information processing method, and particularly relates to an information processing device, an information terminal, and an information processing method that can prevent an accident of a moving body.
  • For example, it has been proposed that a navigation device mounted on a host vehicle receive information about oncoming vehicles from a server, and search for and display a point where the host vehicle can pass an oncoming vehicle.
  • the navigation device calculates the current position, speed, and direction of the host vehicle by map matching with a GPS (Global Positioning System) receiver, a vehicle speed sensor, an angular velocity gyroscope, and a map database. Then, when the host vehicle is traveling on a mountain road or a narrow road with poor visibility, the navigation device transmits the current position, speed and direction of the host vehicle, error information, and destination point information to the server.
  • the server determines the passing possibility of each vehicle based on the information received from each vehicle, and transmits approach information including current position information, direction information, and speed information of other vehicles to the navigation device of each vehicle.
  • the navigation device that has received the approach information searches for a point where it can pass another vehicle and displays it on a map (for example, see Patent Document 1).
  • the present technology has been made in view of such a situation, and is intended to reliably prevent an accident of a moving body such as a vehicle.
  • An information processing apparatus according to one aspect of the present technology includes: a receiving unit that receives, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on an image captured from that moving body; and a risk prediction unit that predicts an accident between the moving bodies based on their movements, which are predicted based on the estimated positions and speeds of the moving bodies.
  • The receiving unit receives from each information terminal a local map indicating the positions, in a three-dimensional space, of feature points in the image photographed from that moving body; a global map update unit that updates, based on the received local maps, a global map indicating the positions of feature points in the three-dimensional space of a predetermined region can be further provided; and the motion prediction unit can further predict the motion of each moving body based on the global map.
  • The motion prediction unit can predict the motion of each moving body such that the moving body avoids stationary objects on the global map.
  • The position information can include the movement of each moving body as predicted by its information terminal, and the risk prediction unit can predict accidents between the moving bodies based on the movements predicted by the information terminals.
  • The danger notification unit can transmit, to the information terminal, control information used in processing for the moving body to avoid an accident.
  • The receiving unit receives, from each information terminal, a detection result of a dangerous area where there is a risk of an accident; and a dangerous area map update unit that updates, based on the detection results from the information terminals, a dangerous area map indicating the dangerous areas, and a dangerous area notification unit that notifies each information terminal of the dangerous areas based on the dangerous area map, may be further provided.
  • The receiving unit receives, from each information terminal, a detection result of a dangerous area where there is a risk of an accident; and a dangerous area map update unit that updates, based on the detection results from the information terminals, a dangerous area map indicating the dangerous areas, and a dangerous area determination unit that determines, based on the dangerous area map, whether each moving body is in a dangerous area and notifies the information terminal of the determination result, can be further provided.
  • a storage unit for storing the position information related to the accident can be further provided.
  • An information terminal according to another aspect of the present technology is an information terminal provided in a moving body, and includes: an estimation unit that estimates the position and speed of the moving body based on an image captured from the moving body; a transmission unit that transmits position information including the estimated position and speed to an information processing device; and a risk avoidance processing unit that performs processing for avoiding an accident when the information processing device notifies the terminal of a risk that the moving body will have an accident.
  • The estimation unit can estimate the position and speed of the moving body based on the relative positions between the moving body and feature points in the image photographed from the moving body.
  • a local map generating unit that generates a local map indicating the position of the feature point in the three-dimensional space may be further provided, and the transmitting unit may further cause the information processing apparatus to transmit the local map.
  • An object detection unit that detects objects around the moving body based on the feature points, and a risk prediction unit that predicts an accident of the moving body based on the estimated position and speed of the moving body and the object detection results, may be further provided; the risk avoidance processing unit can then further perform processing for avoiding the accident when the risk prediction unit predicts a risk that the moving body will have an accident.
  • The estimation unit can further predict the movement of the moving body, and the transmission unit can transmit, to the information processing device, position information that includes the prediction result.
  • The transmission unit can transmit the position information to the information processing apparatus when it is determined, based on information from the information processing apparatus, that the moving body is in a dangerous area where there is a risk of an accident.
  • the risk prediction unit may further detect the dangerous region, and the transmission unit may transmit the detection result of the dangerous region to the information processing apparatus.
  • An information processing method according to another aspect of the present technology includes: an estimation step in which an information terminal provided in a moving body estimates the position and speed of the moving body based on an image taken from the moving body; a transmission step of transmitting position information including the estimated position and speed to an information processing apparatus; and a risk avoidance processing step of performing processing for avoiding an accident when the information processing apparatus notifies the terminal of a risk that the moving body will have an accident.
  • In one aspect of the present technology, position information indicating the position and speed of each moving body, estimated based on an image captured from that moving body, is received from an information terminal provided in each moving body, and an accident between the moving bodies is predicted based on their movements, which are predicted based on the estimated positions and speeds.
  • In another aspect of the present technology, the position and speed of the moving body are estimated based on an image captured from the moving body, and position information including the estimated position and speed is transmitted to the information processing apparatus. When the information processing apparatus notifies the terminal of a risk that the moving body will have an accident, processing for avoiding the accident is performed.
  • According to one aspect of the present technology, the accuracy of predicting accidents between moving bodies is improved. As a result, accidents of moving bodies can be prevented more reliably.
  • FIG. 1 is a block diagram illustrating an embodiment of an information processing system to which the present technology is applied. FIG. 2 is a block diagram showing a first embodiment of an information terminal. FIG. 3 is a block diagram showing a first embodiment of a SLAM processing unit. FIG. 4 is a block diagram showing a first embodiment of a server. FIG. 5 is a diagram showing an example of a dangerous area map. FIG. 6 is a flowchart for explaining a first embodiment of the processing of an information terminal. FIGS. 7 and 8 are flowcharts for explaining a first embodiment of the processing of a server.
  • FIG. 1 shows an embodiment of an information processing system 1 to which the present technology is applied.
  • the information processing system 1 is configured to include information terminals 11-1 to 11-n and a server 12.
  • the information terminals 11-1 to 11-n are simply referred to as information terminals 11 when it is not necessary to distinguish them individually.
  • the information terminals 11 and the server 12 are connected to each other via a base station (not shown), the network 13, and the like, and communicate with each other.
  • any wireless communication method can be adopted as the communication method of the information terminal 11.
  • As the communication method of the server 12, any wired or wireless communication method can be adopted.
  • the information terminal 11 and the server 12 can directly communicate with each other.
  • Although each information terminal 11 and the server 12 actually communicate via the network 13 and the like, the phrase "via the network 13 or the like" is hereinafter omitted for ease of understanding, and expressions such as "each information terminal 11 and the server 12 communicate with each other" and "each information terminal 11 and the server 12 transmit and receive information and data" are used.
  • Each information terminal 11 includes, for example, an in-vehicle information terminal used for a car navigation system, an automatic driving system, or the like, or a mobile information terminal such as a smartphone, a mobile phone, a tablet, a wearable device, or a notebook personal computer.
  • each information terminal 11 is provided in a mobile body.
  • the mobile body provided with each information terminal 11 includes, for example, a mobile body that moves on land or in the air.
  • Such moving bodies include, for example, vehicles, people, airplanes, helicopters, drones, robots, and the like.
  • Each information terminal 11 may be permanently installed on the moving body, or may be temporarily installed on, mounted on, or carried by the moving body.
  • Hereinafter, when the moving body provided with a given information terminal 11 is to be distinguished from other moving bodies, it is referred to as the own moving body.
  • The information processing system 1 performs risk prediction for the moving bodies provided with the information terminals 11, and performs processing for avoiding the accidents predicted for each moving body.
  • the server 12 integrates information, data, and the like from each information terminal 11, performs risk prediction between the moving objects, and notifies each information terminal 11 of the prediction result.
  • each information terminal 11 performs risk prediction of the own moving body based on an image taken from the own moving body.
  • Each information terminal 11 performs processing for avoiding the accident predicted for its own moving body, based on the risk prediction results of the information terminal 11 itself and the server 12.
  • the server 12 performs a simulation of the state of occurrence of the accident of the moving body based on, for example, information and data from each information terminal 11.
  • FIG. 2 shows a configuration example of the function of the information terminal 11a which is the first embodiment of the information terminal 11 of FIG.
  • the information terminal 11a is configured to include cameras 101L and 101R, a reception unit 102, an information processing unit 103, and a transmission unit 104.
  • the information processing unit 103 is configured to include a SLAM (Simultaneous Localization and Mapping) processing unit 111, a dangerous area determination unit 112, a risk prediction unit 113, and a risk avoidance processing unit 114.
  • The camera 101L captures images in the moving direction of the own moving body from a position on the left side.
  • the camera 101L supplies an image (hereinafter referred to as a left image) obtained as a result of shooting to the SLAM processing unit 111.
  • The camera 101R captures images in the moving direction of the own moving body from a position on the right side.
  • the camera 101R supplies an image (hereinafter referred to as a right image) obtained as a result of shooting to the SLAM processing unit 111.
  • the receiving unit 102 receives various types of information, data, and the like from other information terminals 11a, the server 12, and other servers (not shown), and supplies them to each unit of the information terminal 11a.
  • the receiving unit 102 receives a global map from the server 12 and supplies the global map to the SLAM processing unit 111.
  • the receiving unit 102 receives the dangerous area information from the server 12 and supplies the dangerous area information to the dangerous area determination unit 112.
  • the receiving unit 102 receives the danger notification information from the server 12 and supplies it to the danger avoidance processing unit 114.
  • the global map is a map showing the position in a three-dimensional space of a stationary object in a predetermined wide area.
  • the global map includes information indicating the position and feature amount of a feature point of a stationary object in a predetermined region on a three-dimensional spatial coordinate system.
  • the spatial coordinate system is represented by, for example, latitude, longitude, and height from the ground.
  • the dangerous area information is information indicating the position of a dangerous area where an accident may occur.
  • The danger notification information is information for notifying the danger to the information terminal 11a provided in a mobile body at risk of an accident, when the server 12 predicts that risk.
  • The SLAM processing unit 111 uses SLAM technology to estimate the speed, position, and orientation of the own moving body based on the left image, the right image, and the global map, and to detect objects around the own moving body and generate a local map.
  • the SLAM processing unit 111 supplies position / orientation information indicating the estimated position and orientation of the moving body to the dangerous area determination unit 112 and the risk prediction unit 113.
  • the SLAM processing unit 111 supplies speed information indicating the estimated speed of the moving body to the danger prediction unit 113.
  • the SLAM processing unit 111 supplies position information including the estimated position and speed of the moving body to the transmission unit 104.
  • the SLAM processing unit 111 notifies the danger prediction unit 113 of the detection result of objects around the moving body.
  • the SLAM processing unit 111 supplies the generated local map to the transmission unit 104.
  • the SLAM processing unit 111 generates global map transmission request information for requesting transmission of the global map, and supplies it to the transmission unit 104.
  • the local map is a map indicating the position in the three-dimensional space of a stationary object around each moving object, and is generated by each information terminal 11.
  • the local map includes information indicating the position and the feature amount on the three-dimensional spatial coordinate system of the feature point of the stationary object around each moving object, as in the global map.
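  • For illustration, the following is a minimal sketch of how a feature-point entry and a local map could be represented in code; the Python types and field names are assumptions made for this sketch, not the patent's data format.

```python
# A minimal sketch of a feature-point entry shared by the local and global
# maps described above; field names are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class FeaturePoint:
    # Position on the three-dimensional spatial coordinate system,
    # e.g. (latitude, longitude, height from the ground).
    position: np.ndarray
    # Feature amount (descriptor) used to match this point across images.
    descriptor: np.ndarray

# A local map covers the surroundings of one moving body; the global map is
# the same kind of collection over a predetermined wide area.
local_map = [
    FeaturePoint(position=np.array([35.6581, 139.7414, 12.3]),
                 descriptor=np.zeros(64)),
]
```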
  • the dangerous area determination unit 112 determines whether or not the own moving body is in the dangerous area based on the dangerous area information and the position and orientation information of the own moving body, and notifies the SLAM processing unit 111 of the determination result.
  • the danger prediction unit 113 performs the danger prediction and the dangerous area detection process of the moving object based on the speed information and the position and orientation information of the moving object and the detection result of the objects around the moving object.
  • the risk prediction unit 113 notifies the risk avoidance processing unit 114 of the result of the risk prediction. Further, the danger prediction unit 113 generates danger area notification information for notifying the detected danger area, and supplies the danger area notification information to the transmission unit 104.
  • the danger avoidance processing unit 114 performs a process for avoiding the danger of the own mobile body based on the risk prediction result by the danger prediction unit 113 and the danger notification information from the server 12.
  • the transmission unit 104 transmits various information, data, and the like to the other information terminals 11a, the server 12, and other servers (not shown). For example, the transmission unit 104 transmits position information, a local map, dangerous area notification information, and global map transmission request information to the server 12.
  • FIG. 3 shows a functional configuration example of the SLAM processing unit 111a which is the first embodiment of the SLAM processing unit 111 of FIG.
  • the SLAM processing unit 111a is configured to include an estimation unit 201, a position information generation unit 202, an object detection unit 203, and a local map generation unit 204.
  • The estimation unit 201 estimates the movement amount, position, orientation, and speed of the own moving body based on the relative positions between the moving body and the feature points in the left and right images captured by the cameras 101L and 101R.
  • The estimation unit 201 includes image correction units 211L and 211R, a feature point detection unit 212, a parallax matching unit 213, a distance estimation unit 214, a feature amount calculation unit 215, a map information storage unit 216, a motion matching unit 217, a movement amount estimation unit 218, an object dictionary storage unit 219, an object recognition unit 220, a position/orientation information storage unit 221, a position/orientation estimation unit 222, and a speed estimation unit 223.
  • The image correction unit 211L and the image correction unit 211R correct the left image supplied from the camera 101L and the right image supplied from the camera 101R, respectively, so that the two images face the same direction.
  • the image correction unit 211L supplies the corrected left image to the feature point detection unit 212 and the motion matching unit 217.
  • the image correction unit 211R supplies the corrected right image to the parallax matching unit 213.
  • the feature point detection unit 212 detects a feature point of the left image.
  • the feature point detection unit 212 supplies two-dimensional position information indicating the position of each detected feature point on the two-dimensional image coordinate system to the parallax matching unit 213 and the feature amount calculation unit 215.
  • The image coordinate system is represented by, for example, an x coordinate and a y coordinate in the image.
  • the parallax matching unit 213 detects the feature point of the right image corresponding to the feature point detected in the left image. Thereby, the parallax which is the difference between the position on the left image of each feature point and the position on the right image is obtained.
  • the parallax matching unit 213 supplies the distance estimation unit 214 with two-dimensional position information indicating the position of each feature point on the image coordinate system in the left image and the right image.
  • The distance estimation unit 214 estimates the distance to each feature point based on the parallax of each feature point between the left image and the right image, and further calculates the position of each feature point on the three-dimensional spatial coordinate system.
  • the distance estimation unit 214 supplies the feature amount calculation unit 215 with three-dimensional position information indicating the position of each feature point on the spatial coordinate system.
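  • For illustration, the following sketch shows the standard stereo relation behind this distance estimation (depth = focal length x baseline / parallax) for an ideal rectified camera pair; the focal length, baseline, and principal point values are assumptions, not taken from the patent.

```python
# Distance estimation from stereo parallax, assuming an ideal rectified pair.
# f (focal length in pixels), b (baseline in meters) and the principal point
# (cx, cy) are illustrative values.
import numpy as np

def triangulate(pt_left, pt_right, f=700.0, b=0.12, cx=640.0, cy=360.0):
    """Back-project a feature point matched between the left and right images."""
    disparity = pt_left[0] - pt_right[0]  # parallax in pixels (x_left - x_right)
    z = f * b / disparity                 # distance along the optical axis
    x = (pt_left[0] - cx) * z / f
    y = (pt_left[1] - cy) * z / f
    return np.array([x, y, z])            # 3-D position in camera coordinates

print(triangulate((700.0, 400.0), (680.0, 400.0)))  # 20 px parallax -> z = 4.2 m
```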
  • the feature amount calculation unit 215 calculates the feature amount of each feature point of the left image.
  • the feature amount calculation unit 215 causes the map information storage unit 216 to store the feature point information including the three-dimensional position information of each feature point and the feature amount.
  • The map information storage unit 216 stores the global map supplied from the server 12 in addition to the feature point information used for the local map.
  • the motion matching unit 217 acquires the three-dimensional position information of each feature point detected in the previous frame from the map information storage unit 216. Next, the motion matching unit 217 detects feature points corresponding to each feature point detected in the previous frame in the left image of the current frame. Then, the motion matching unit 217 supplies the movement amount estimation unit 218 with the three-dimensional position information in the previous frame of each feature point and the two-dimensional position information indicating the position on the image coordinate system in the current frame. .
  • Based on this information, the movement amount estimation unit 218 estimates the amount of movement of the position and orientation of the own moving body (more precisely, of the camera 101L) between frames.
  • the movement amount estimation unit 218 supplies movement amount information indicating the estimated movement amount of the position and posture of the moving body to the object detection unit 203, the position / orientation estimation unit 222, and the speed estimation unit 223.
  • The object recognition unit 220 recognizes objects in the left image based on the object dictionary stored in the object dictionary storage unit 219. Based on the recognition results, the object recognition unit 220 sets initial values of the position and orientation (hereinafter referred to as the initial position and initial orientation) of the own moving body (more precisely, of the camera 101L) on the spatial coordinate system.
  • the object recognizing unit 220 causes the position and orientation information storage unit 221 to store initial position and orientation information indicating the set initial position and initial orientation.
  • The position/orientation estimation unit 222 estimates the position and posture of the own moving body based on the initial position/orientation information or the position/orientation information of the previous frame stored in the position/orientation information storage unit 221, together with the estimated movement amount of the own moving body. In addition, the position/orientation estimation unit 222 corrects the estimated position and orientation based on the global map stored in the map information storage unit 216 as necessary. The position/orientation estimation unit 222 supplies position/orientation information indicating the estimated position and orientation to the dangerous area determination unit 112, the danger prediction unit 113, the position information generation unit 202, and the speed estimation unit 223, and stores it in the position/orientation information storage unit 221.
  • the speed estimation unit 223 estimates the speed of the mobile body by dividing the estimated travel amount of the mobile body by the elapsed time.
  • the speed estimation unit 223 supplies speed information indicating the estimated speed to the danger prediction unit 113 and the position information generation unit 202.
  • the position information generation unit 202 generates position information including the position and speed of the moving body when notified from the dangerous area determination unit 112 that the moving body is in the dangerous area.
  • the position information generation unit 202 supplies the generated position information to the transmission unit 104.
  • Based on the feature points and the estimated movement amount, the object detection unit 203 detects stationary objects and moving bodies around the own moving body.
  • the object detection unit 203 notifies the danger prediction unit 113 and the local map generation unit 204 of the detection results of the stationary object and the moving body around the moving body.
  • When notified from the dangerous area determination unit 112 that the own moving body is in a dangerous area, the local map generation unit 204 generates a local map based on the detection results of stationary objects and moving bodies around the own moving body and the feature point information of the current frame stored in the map information storage unit 216. The local map generation unit 204 supplies the generated local map to the transmission unit 104. The local map generation unit 204 also generates global map transmission request information that includes the generated local map and requests the server 12 to transmit the global map, and supplies it to the transmission unit 104.
  • FIG. 4 shows an example of the functional configuration of the server 12a, which is the first embodiment of the server 12 of FIG.
  • the server 12a includes a reception unit 301, a position information storage unit 302, a map information storage unit 303, a global map update unit 304, a dangerous area map update unit 305, a motion prediction unit 306, a risk prediction unit 307, a risk notification unit 308, and a simulation unit. 309, a global map search unit 310, a dangerous area notification unit 311, and a transmission unit 312.
  • the receiving unit 301 receives various types of information, data, and the like from each information terminal 11a and other servers (not shown), and supplies them to each unit of the server 12a.
  • the receiving unit 301 receives position information from each information terminal 11 a and stores it in the position information storage unit 302.
  • the receiving unit 301 receives the local map and the dangerous area information from each information terminal 11 a and stores them in the map information storage unit 303.
  • the reception unit 301 receives global map transmission request information from each information terminal 11 a and supplies the global map transmission request information to the global map search unit 310.
  • the receiving unit 301 receives accident information indicating the occurrence of a moving object accident from another server and an instruction to execute an accident simulation, and supplies the instruction to the simulation unit 309.
  • the global map update unit 304 updates the global map stored in the map information storage unit 303 based on the local map generated by each information terminal 11a stored in the map information storage unit 303.
  • the dangerous area map update unit 305 updates the dangerous area map stored in the map information storage unit 303 based on the dangerous area notification information from each information terminal 11a stored in the map information storage unit 303.
  • the dangerous area map is a map showing the position of the dangerous area where there is a risk of an accident occurring.
  • For example, in the dangerous area map, the area covered by the global map is divided into a grid, and the map indicates whether or not each grid cell is a dangerous area.
  • the grid of the dangerous area map may have a hierarchical structure.
  • FIG. 5 shows an example of a dangerous area map in which the grid has a two-layer structure. First, it is determined whether or not each grid is a dangerous area in the first hierarchy. Then, in the grid determined to be a dangerous area in the first hierarchy, it is determined whether or not each grid in the second hierarchy obtained by further dividing the grid is a dangerous area.
  • In this example, the first-layer grid G1, which is a dangerous area indicated by diagonal lines, is further divided into a plurality of second-layer grids.
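  • The following is a minimal sketch of such a two-layer grid lookup under assumed cell sizes; the representation (a dictionary keyed by coarse cells) is an illustrative choice, not the patent's data format.

```python
# Two-layer dangerous-area grid in the spirit of FIG. 5: a fine second layer
# exists only inside first-layer cells that contain a dangerous area.
COARSE = 1000.0  # first-layer cell size in meters (assumed)
FINE = 100.0     # second-layer cell size in meters (assumed)

# coarse cell -> set of dangerous second-layer cells inside it (like grid G1)
dangerous_map = {
    (3, 7): {(32, 71), (33, 71)},
}

def is_dangerous(x, y):
    coarse = (int(x // COARSE), int(y // COARSE))
    if coarse not in dangerous_map:           # first-layer check
        return False
    fine = (int(x // FINE), int(y // FINE))   # second-layer check
    return fine in dangerous_map[coarse]

print(is_dangerous(3250.0, 7150.0))  # True: inside fine cell (32, 71)
print(is_dangerous(500.0, 500.0))    # False: coarse cell (0, 0) is safe
```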
  • the motion prediction unit 306 predicts the motion of each mobile object based on the position information from each information terminal 11 a stored in the position information storage unit 302 and the global map stored in the map information storage unit 303. To do.
  • The motion prediction unit 306 notifies the danger prediction unit 307 of the motion prediction result for each moving body.
  • the danger prediction unit 307 performs danger prediction of each moving body based on the prediction result of the movement of each moving body, and supplies the prediction result to the danger notification unit 308.
  • The danger notification unit 308 generates danger notification information for notifying the danger to the information terminal 11a provided in a mobile body for which a dangerous state has been detected, that is, a mobile body predicted to have an accident.
  • the danger notification unit 308 supplies the generated danger notification information to the transmission unit 312.
  • For example, when accident information is received, the simulation unit 309 sets the position information storage unit 302 so as to save the position information related to the accident indicated in the accident information.
  • When an accident simulation execution command is input from an input unit (not shown) or received from another server or the like via the reception unit 301, the simulation unit 309 performs a simulation of the occurrence of the specified accident based on the position information stored in the position information storage unit 302.
  • the simulation unit 309 generates simulation data for simulating an accident occurrence state and supplies the simulation data to the transmission unit 312.
  • The global map search unit 310 searches the global map stored in the map information storage unit 303 for the area corresponding to the local map included in the global map transmission request information.
  • the global map search unit 310 extracts a map of the searched area and its surrounding area from the global map, and supplies the extracted map to the transmission unit 312.
  • the dangerous area notification unit 311 generates dangerous area information for each dangerous area shown in the dangerous area map stored in the map information storage unit 303 and supplies the dangerous area information to the transmission unit 312.
  • the transmission unit 312 transmits various information, data, and the like to each information terminal 11a and other servers. For example, the transmission unit 312 transmits danger notification information, simulation data, a global map, and dangerous area information to each information terminal 11a.
  • The processing of the information terminal 11a will now be described with reference to the flowchart of FIG. 6. In step S1, the estimation unit 201 estimates the movement amount, position, and orientation of the own moving body.
  • Specifically, the image correction unit 211L and the image correction unit 211R correct the left image supplied from the camera 101L and the right image supplied from the camera 101R, respectively, so that the two images face the same direction.
  • the image correction unit 211L supplies the corrected left image to the feature point detection unit 212 and the motion matching unit 217.
  • the image correction unit 211R supplies the corrected right image to the parallax matching unit 213.
  • the feature point detection unit 212 detects a feature point of the left image.
  • As the feature point detection method, any method such as Harris corner detection can be used, for example.
  • the feature point detection unit 212 supplies two-dimensional position information indicating the position of each detected feature point on the image coordinate system to the parallax matching unit 213.
  • the parallax matching unit 213 detects the feature point of the right image corresponding to the feature point detected in the left image.
  • the parallax matching unit 213 supplies the distance estimation unit 214 with two-dimensional position information indicating the position of each feature point on the image coordinate system in the left image and the right image.
  • The distance estimation unit 214 estimates the distance to each feature point based on the parallax of each feature point between the left image and the right image, and further calculates the position of each feature point on the three-dimensional spatial coordinate system.
  • the distance estimation unit 214 supplies the feature amount calculation unit 215 with three-dimensional position information indicating the position of each feature point on the spatial coordinate system.
  • the feature amount calculation unit 215 calculates the feature amount of each feature point of the left image.
  • As the feature amount, any feature amount such as SURF (Speeded Up Robust Features) can be used, for example.
  • the feature amount calculation unit 215 causes the map information storage unit 216 to store the feature point information including the three-dimensional position information of each feature point and the feature amount.
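  • As an illustration of the detection and feature-amount steps above, the following sketch uses OpenCV's Harris-based corner detector and, since SURF is given only as one example of an arbitrary feature amount, ORB descriptors as a freely available stand-in; the image path and all parameter values are assumptions.

```python
# Feature-point detection (Harris corners) and feature-amount computation,
# using ORB descriptors as a stand-in for SURF.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)  # illustrative image path

# Detect corners with the Harris response (cf. the Harris corner example above)
corners = cv2.goodFeaturesToTrack(left, maxCorners=500, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True)

# Compute a descriptor (the "feature amount") at each detected corner
keypoints = [cv2.KeyPoint(float(x), float(y), 7.0)
             for x, y in corners.reshape(-1, 2)]
orb = cv2.ORB_create()
keypoints, descriptors = orb.compute(left, keypoints)
print(len(keypoints), descriptors.shape)  # e.g. N keypoints, 32-byte descriptors
```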
  • the motion matching unit 217 acquires the three-dimensional position information of each feature point detected in the previous frame from the map information storage unit 216. Next, the motion matching unit 217 detects feature points corresponding to each feature point detected in the previous frame in the left image of the current frame. Then, the motion matching unit 217 supplies the movement amount estimation unit 218 with the three-dimensional position information in the previous frame of each feature point and the two-dimensional position information indicating the position on the image coordinate system in the current frame. .
  • Specifically, the movement amount estimation unit 218 estimates the movement amount of the own moving body (more precisely, of the camera 101L) between the previous frame and the current frame. For example, the movement amount estimation unit 218 calculates the movement amount dX that minimizes the value of the cost function f in the following expression (1):

    f(dX) = Σ ( Z_t − proj(dX, M_t-1) )²   ... (1)

    where the sum is taken over the corresponding feature points.
  • The movement amount dX indicates the amount of movement of the position and orientation of the own moving body (more precisely, of the camera 101L) from the previous frame to the current frame.
  • the movement amount dX indicates the movement amount of the position in the three-axis direction (three degrees of freedom) and the posture around each axis (three degrees of freedom) in the spatial coordinate system.
  • M_t-1 and Z_t indicate the positions of the corresponding feature point in the previous frame and the current frame, respectively. More specifically, M_t-1 indicates the position of the feature point on the spatial coordinate system in the previous frame, and Z_t indicates the position of the feature point on the image coordinate system in the current frame.
  • proj(dX, M_t-1) indicates the position obtained by projecting the position M_t-1 of the feature point in the previous frame on the spatial coordinate system onto the image coordinate system of the left image of the current frame, using the movement amount dX. That is, proj(dX, M_t-1) is an estimate of the position of the feature point on the left image of the current frame, based on the position M_t-1 in the previous frame and the movement amount dX.
  • The movement amount estimation unit 218 obtains the movement amount dX that minimizes the sum of squares of Z_t − proj(dX, M_t-1) over the feature points, as shown in expression (1), for example by the least squares method. That is, the movement amount estimation unit 218 obtains the movement amount dX that minimizes the error between the observed positions of the feature points on the image coordinate system of the current frame's left image and the positions estimated from the previous-frame positions M_t-1 and the movement amount dX.
  • the movement amount estimation unit 218 supplies movement amount information indicating the obtained movement amount dX to the object detection unit 203, the position / orientation estimation unit 222, and the speed estimation unit 223.
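  • For illustration, the following sketch minimizes the reprojection cost of expression (1) with an off-the-shelf least-squares solver. To stay short it estimates a translation-only dX (the patent estimates position and posture, six degrees of freedom in total), and the pinhole intrinsics and synthetic data are assumptions.

```python
# Least-squares estimation of the movement amount dX from expression (1),
# simplified to pure translation. proj() plays the role of proj(dX, M_{t-1}).
import numpy as np
from scipy.optimize import least_squares

FX = FY = 700.0          # focal lengths in pixels (assumed)
CX, CY = 640.0, 360.0    # principal point (assumed)

def proj(dX, M_prev):
    """Project previous-frame 3-D points into the current left image after
    the camera has moved by dX (points shift by -dX in camera coordinates)."""
    P = M_prev - dX
    return np.column_stack([FX * P[:, 0] / P[:, 2] + CX,
                            FY * P[:, 1] / P[:, 2] + CY])

def residuals(dX, M_prev, Z_cur):
    # Z_t - proj(dX, M_{t-1}) for every feature point, flattened
    return (Z_cur - proj(dX, M_prev)).ravel()

# Synthetic check: 50 points 5-10 m ahead, camera moves 0.5 m forward.
rng = np.random.default_rng(0)
M_prev = rng.uniform([-2.0, -1.0, 5.0], [2.0, 1.0, 10.0], size=(50, 3))
true_dX = np.array([0.0, 0.0, 0.5])
Z_cur = proj(true_dX, M_prev)

fit = least_squares(residuals, x0=np.zeros(3), args=(M_prev, Z_cur))
print(fit.x)  # recovers approximately [0, 0, 0.5]
```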
  • The position/orientation estimation unit 222 acquires the position/orientation information of the previous frame from the position/orientation information storage unit 221. Then, the position/orientation estimation unit 222 estimates the current position and posture of the own moving body by adding the movement amount dX estimated by the movement amount estimation unit 218 to the position and posture of the own moving body in the previous frame.
  • the position / orientation estimation unit 222 acquires initial position / orientation information from the position / orientation information storage unit 221 when estimating the position and orientation of the moving body in the first frame. Then, the position / orientation estimation unit 222 estimates the position and orientation of the own moving body by adding the movement amount dX estimated by the movement amount estimation unit 218 to the initial position and initial posture of the own moving body.
  • the position / orientation estimation unit 222 corrects the estimated position and orientation of the moving object based on the global map stored in the map information storage unit 216 as necessary.
  • the position / orientation estimation unit 222 supplies position / orientation information indicating the estimated position and orientation of the moving object to the dangerous region determination unit 112, the risk prediction unit 113, the position information generation unit 202, and the speed estimation unit 223. And stored in the position and orientation information storage unit 221.
  • In step S2, the speed estimation unit 223 estimates the speed of the own moving body. Specifically, the speed estimation unit 223 estimates the speed of the own moving body by dividing the movement amount dX estimated by the movement amount estimation unit 218 by the elapsed time. The speed estimation unit 223 supplies speed information indicating the estimated speed to the danger prediction unit 113 and the position information generation unit 202.
  • In step S3, the object detection unit 203 detects surrounding objects. Specifically, the object detection unit 203 acquires the feature point information of the previous frame and the current frame from the map information storage unit 216. Next, the object detection unit 203 matches the feature points of the previous frame with those of the current frame and detects the movement of each feature point between frames. Then, based on the movement amount dX estimated by the movement amount estimation unit 218, the object detection unit 203 distinguishes feature points whose movement corresponds to the movement of the own moving body from feature points whose movement does not.
  • the object detection unit 203 detects a stationary object around the moving body based on a feature point that moves corresponding to the movement of the moving body. In addition, the object detection unit 203 detects a moving body around the moving body based on feature points that do not move corresponding to the movement of the moving body. The object detection unit 203 notifies the danger prediction unit 113 and the local map generation unit 204 of the detection results of the stationary object and the moving body around the moving body.
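  • The following sketch illustrates this split: a feature point whose current-frame position is consistent with the estimated ego-motion dX is treated as belonging to a stationary object, otherwise to a moving body. The projection model, pixel threshold, and translation-only dX follow the assumptions of the previous sketch.

```python
# Classify feature points as stationary-object points (their apparent motion
# is explained by the ego-motion dX) or moving-body points (it is not).
import numpy as np

FX = FY = 700.0
CX, CY = 640.0, 360.0

def split_points(M_prev, Z_cur, dX, tol_px=2.0):
    P = M_prev - dX                                   # apply ego-motion
    predicted = np.column_stack([FX * P[:, 0] / P[:, 2] + CX,
                                 FY * P[:, 1] / P[:, 2] + CY])
    err = np.linalg.norm(Z_cur - predicted, axis=1)   # reprojection error (px)
    static = err <= tol_px
    return M_prev[static], M_prev[~static]            # stationary vs. moving
```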
  • In step S4, the dangerous area determination unit 112 determines whether or not the own moving body is in a dangerous area. Specifically, when the own moving body is within the reception area of the dangerous area information transmitted from the server 12a in step S102 of FIG. 7 described later, the receiving unit 102 receives the dangerous area information and supplies it to the dangerous area determination unit 112.
  • The dangerous area determination unit 112 then determines whether the position of the own moving body estimated by the position/orientation estimation unit 222 is within a dangerous area indicated in the received dangerous area information. If it is determined that the own moving body is in a dangerous area, the process proceeds to step S5.
  • In step S5, the information terminal 11a transmits the position information and the local map to the server 12a. Specifically, the dangerous area determination unit 112 notifies the position information generation unit 202 and the local map generation unit 204 that the own moving body is in a dangerous area.
  • the position information generation unit 202 generates position information including the speed of the moving body estimated by the speed estimation unit 223 and the position of the moving body included in the position / orientation information from the position / orientation estimation unit 222.
  • the position information generation unit 202 supplies the generated position information to the transmission unit 104.
  • the local map generation unit 204 acquires feature point information of the current frame from the map information storage unit 216. Next, the local map generation unit 204 deletes information on the feature points of the surrounding moving objects detected by the object detection unit 203 from the acquired feature point information. Then, the local map generation unit 204 generates a local map based on the remaining feature point information, and supplies the generated local map to the transmission unit 104.
  • the transmission unit 104 transmits the acquired position information and local map to the server 12a.
  • On the other hand, in step S4, when the position of the own moving body is not within any dangerous area indicated in the received dangerous area information, or when no dangerous area information has been received, the dangerous area determination unit 112 determines that the own moving body is not in a dangerous area. The process of step S5 is then skipped, and the process proceeds to step S6.
  • In step S6, the danger prediction unit 113 performs risk prediction. Specifically, based on the estimated speed and position of the own moving body and the detection results of surrounding objects, the danger prediction unit 113 predicts the risk of accidents such as collision, rear-end collision, or contact with surrounding objects. Note that the danger prediction unit 113 may further use the estimated posture of the own moving body for the risk prediction as necessary. The danger prediction unit 113 notifies the danger avoidance processing unit 114 of the result of the risk prediction.
  • The danger prediction unit 113 may also calculate a degree of risk indicating the likelihood and severity of an accident, based on the speeds and traveling directions of the own moving body and surrounding moving bodies, the distances between the own moving body and surrounding objects, and the like.
  • In step S7, the danger avoidance processing unit 114 determines whether or not there is a risk that the own moving body will have an accident.
  • For example, when the danger avoidance processing unit 114 determines, based on the result of the danger prediction by the danger prediction unit 113, that there is a risk that the own moving body will have an accident, the process proceeds to step S8.
  • Also, in step S109 of FIG. 7 described later, the server 12a transmits danger notification information to a mobile body predicted to be at risk of an accident. When the danger avoidance processing unit 114 receives this danger notification information via the reception unit 102, it likewise determines that there is a risk that the own moving body will have an accident, and the process proceeds to step S8.
  • In step S8, the danger avoidance processing unit 114 performs processing for avoiding the accident.
  • For example, when the own moving body is a vehicle, the danger avoidance processing unit 114 warns the person driving or maneuvering it. Also, for example, when the own moving body is a person, the danger avoidance processing unit 114 warns that person.
  • the warning content can be arbitrarily set. For example, only danger notification may be performed, or a method for avoiding an accident may be notified.
  • As the warning method, any method such as an image, a sound, a blinking light, or vibration of an operation unit can be adopted.
  • The danger avoidance processing unit 114 can also control the operation of the own moving body so as to avoid an accident. For example, in order to avoid an accident, the danger avoidance processing unit 114 can stop or decelerate the own moving body, or change its traveling direction, when it is a vehicle.
  • the processing content may be changed according to the risk level.
  • the danger avoidance processing unit 114 may select and execute a process from stop, deceleration, direction change, warning, or the like of the moving body according to the degree of danger.
  • When the danger notification information from the server 12a includes control information, the danger avoidance processing unit 114 performs processing according to that control information.
  • In step S9, the danger prediction unit 113 determines whether or not the predicted accident partner is a moving body. If it is determined that the predicted accident partner is a moving body, the process proceeds to step S10.
  • In step S10, the danger prediction unit 113 gives notification of the dangerous area. Specifically, the danger prediction unit 113 regards the current position of the own moving body as a dangerous area where there is a risk of an accident with another moving body, and generates dangerous area notification information for reporting that dangerous area.
  • The dangerous area notification information includes, for example, the position of the own moving body, the time at which the dangerous area was detected, the position of the other moving body that is the predicted accident partner, and environmental conditions such as the weather.
  • the danger prediction unit 113 transmits the generated dangerous area notification information to the server 12a via the transmission unit 104.
  • the server 12a receives the dangerous area notification information in step S101 of FIG. 7 described later.
  • On the other hand, if the danger prediction unit 113 determines in step S9 that the predicted accident partner is not a moving body but a stationary object, the process of step S10 is skipped, and the process proceeds to step S11.
  • If it is determined in step S7 that there is no risk that the own moving body will have an accident, the processes in steps S8 to S10 are skipped, and the process proceeds to step S11.
  • In step S11, the local map generation unit 204 determines whether or not to request transmission of the global map. If it is determined to request transmission of the global map, the process proceeds to step S12.
  • The conditions for requesting transmission of the global map can be set arbitrarily. For example, transmission of the global map may be requested when no global map is stored in the map information storage unit 216, or under other predetermined conditions.
  • In step S12, the local map generation unit 204 requests transmission of the global map. Specifically, the local map generation unit 204 generates a local map by the same processing as in step S5 described above. Then, the local map generation unit 204 generates global map transmission request information including the generated local map, and transmits it to the server 12a via the transmission unit 104.
  • the server 12a receives the global map transmission request information in step S103 of FIG. 7 to be described later, and transmits the global map around the requesting mobile body in step S105.
  • In step S13, the receiving unit 102 receives the global map transmitted from the server 12a.
  • the receiving unit 102 stores the received global map in the map information storage unit 216.
  • Thereafter, the process returns to step S1, and the processes from step S1 onward are executed.
  • On the other hand, if it is determined in step S11 that transmission of the global map is not requested, the process returns to step S1, and the processes from step S1 onward are executed.
  • The processing of the server 12a will now be described with reference to the flowchart of FIG. 7. In step S101, the receiving unit 301 starts receiving the position information, local maps, and dangerous area notification information transmitted from each information terminal 11a.
  • the receiving unit 301 stores the received position information in the position information storage unit 302.
  • the receiving unit 301 also stores the received local map and dangerous area notification information in the map information storage unit 303.
  • In step S102, the server 12a starts transmitting the dangerous area information.
  • the dangerous area notification unit 311 generates dangerous area information for each dangerous area shown in the dangerous area map stored in the map information storage unit 303.
  • Each dangerous area information includes information indicating the position of the dangerous area.
  • the dangerous area notification unit 311 supplies the generated dangerous area information to the transmission unit 312.
  • The transmission unit 312 transmits, for example, each piece of dangerous area information to the corresponding dangerous area and an area including its vicinity.
  • In step S103, the global map search unit 310 determines whether or not transmission of the global map has been requested.
  • When the global map search unit 310 receives, via the reception unit 301, the global map transmission request information transmitted from the information terminal 11a in step S12 of FIG. 6, it determines that transmission of the global map has been requested, and the process proceeds to step S104.
  • In step S104, the global map search unit 310 searches for the global map around the requesting moving body. Specifically, the global map search unit 310 searches the global map stored in the map information storage unit 303 for the area corresponding to the feature points and feature amounts of the local map included in the global map transmission request information.
  • In step S105, the global map search unit 310 transmits the global map. Specifically, the global map search unit 310 extracts, from the global map, a map of the area found in the process of step S104 and its surrounding area. The global map search unit 310 then transmits the extracted map to the requesting information terminal 11a via the transmission unit 312.
  • On the other hand, if it is determined in step S103 that transmission of the global map is not requested, the processes in steps S104 and S105 are skipped, and the process proceeds to step S106.
  • In step S106, the motion prediction unit 306 predicts the motion of each moving body. Specifically, the motion prediction unit 306 predicts the motion of each moving body based on the position information from each information terminal 11a accumulated in the position information storage unit 302 from a predetermined time ago up to the present. The motion prediction unit 306 notifies the danger prediction unit 307 of the motion prediction result for each moving body.
  • The motion prediction unit 306 may further use the global map to predict the motion of each moving body. For example, when the global map shows a stationary object in the predicted traveling direction of a moving body, the motion prediction unit 306 may predict that the moving body will move so as to avoid the stationary object. That is, the motion prediction unit 306 may predict motion in which the moving body avoids stationary objects on the global map. This allows the movement of each moving body to be predicted more accurately.
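  • A minimal sketch of the basic prediction in step S106, assuming constant velocity over a short horizon; the report format (timestamp, position, velocity) and the horizon/step values are illustrative, and the global-map avoidance refinement above is omitted.

```python
# Constant-velocity motion prediction from the latest accumulated report.
import numpy as np

def predict_track(reports, horizon=3.0, step=0.5):
    """reports: list of (t, position, velocity) tuples; returns (t, position)
    samples of the predicted future track."""
    t0, p0, v0 = reports[-1]  # most recent position report
    return [(t0 + dt, p0 + v0 * dt)
            for dt in np.arange(step, horizon + step, step)]

track = [(0.0, np.array([0.0, 0.0]), np.array([10.0, 0.0]))]  # 10 m/s along x
for t, p in predict_track(track):
    print(f"t={t:.1f}s position={p}")
```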
  • In step S107, the danger prediction unit 307 performs risk prediction. Specifically, based on the predicted motion of each moving body, the danger prediction unit 307 predicts the risk that each moving body will have an accident such as a collision, rear-end collision, or contact with another moving body.
  • For example, when the danger prediction unit 307 predicts, based on the motion prediction results, that the distance between moving bodies will fall within a predetermined distance, it predicts that there is a risk of an accident between those moving bodies.
  • For example, suppose that a vehicle 401 and a vehicle 402, which are moving bodies, travel one behind the other in one lane, and a vehicle 403 and a vehicle 404, which are moving bodies, travel one behind the other in the opposite lane.
  • Suppose, for example, that the vehicles 401 to 404 are traveling at speeds v1a to v4a, respectively. If speed v4a > speed v3a and the distance between the vehicle 403 and the vehicle 404 is predicted to fall within the predetermined distance, it is predicted that there is a risk of an accident between the vehicle 403 and the vehicle 404 (a dangerous state).
  • Suppose next that the vehicles 401 to 404 are traveling at speeds v1b to v4b, respectively. If the vehicle 402 is heading into the oncoming lane and the distance between the vehicle 402 and the vehicle 403 is predicted to fall within the predetermined distance, it is predicted that there is a risk of an accident between the vehicle 402 and the vehicle 403 (a dangerous state).
  • Note that the danger prediction unit 307 may calculate a degree of risk indicating the likelihood that an accident will occur and its severity, based on the speeds and traveling directions of the moving bodies, the distances between them, and the like.
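  • The following sketch illustrates the pairwise check described above: under constant-velocity motion, two moving bodies are flagged when their predicted separation falls within a threshold during the prediction horizon. The threshold and horizon values are assumptions; the example numbers mirror the vehicle 403/404 case (v4a > v3a).

```python
# Flag a dangerous state when the predicted distance between two moving
# bodies falls within a predetermined distance.
import numpy as np

def dangerous_pair(p1, v1, p2, v2, threshold=5.0, horizon=5.0):
    dp, dv = p2 - p1, v2 - v1
    # Time of closest approach under constant velocity, clamped to [0, horizon]
    t = 0.0 if np.dot(dv, dv) == 0.0 else -np.dot(dp, dv) / np.dot(dv, dv)
    t = min(max(t, 0.0), horizon)
    return np.linalg.norm(dp + dv * t) <= threshold

# Vehicle 403 at 10 m/s, vehicle 404 following 20 m behind at 16 m/s:
p403, v403 = np.array([20.0, 0.0]), np.array([10.0, 0.0])
p404, v404 = np.array([0.0, 0.0]), np.array([16.0, 0.0])
print(dangerous_pair(p403, v403, p404, v404))  # True: rear-end risk predicted
```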
  • In step S108, the danger prediction unit 307 determines whether or not a dangerous state has been detected. For example, if the risk of an accident between moving bodies has been detected as a result of the process in step S107, the risk prediction unit 307 determines that a dangerous state has been detected, and the process proceeds to step S109.
  • In step S109, the danger notification unit 308 notifies the moving bodies at risk of an accident of the danger.
  • The danger prediction unit 307 supplies the danger notification unit 308 with information indicating the position, speed, traveling direction, and the like of each moving body at risk of an accident.
  • The danger notification unit 308 generates, for each moving body at risk of an accident, danger notification information including the position, speed, traveling direction, and the like of the other moving body that is the potential accident partner.
  • The danger notification information may also include information indicating a method for avoiding the accident, control information used in processing by which the moving body avoids the accident, and the like.
  • The danger notification unit 308 may change the content of the danger notification information according to the degree of risk.
  • the danger notification unit 308 may include control information for the moving object in the danger notification information when the risk level is high, and may not include the control information when the risk level is low.
  • the danger notification unit 308 may change the content of the control information included in the danger notification information according to the degree of danger.
  • For example, the danger notification unit 308 may include control information instructing the moving body to stop when the degree of risk is high, and control information instructing the moving body to decelerate when the degree of risk is low.
  • The danger notification unit 308 transmits the generated danger notification information to the information terminals 11a provided in the moving bodies at risk of an accident via the transmission unit 313.
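  • A minimal sketch of how such risk-dependent danger notification information could be assembled is shown below; the field names, thresholds, and 'stop'/'decelerate' commands are assumptions used only to mirror the behaviour described above:

```python
def build_danger_notification(partner, degree, high_risk=0.7, mid_risk=0.4):
    """Build danger notification information for one moving body at risk.

    partner: dict with 'position', 'speed' and 'direction' of the other
             moving body that is the potential accident partner.
    degree:  risk degree in 0..1 from the risk prediction.
    The thresholds and the 'stop'/'decelerate' commands mirror the
    behaviour described in the text but are otherwise assumptions.
    """
    info = {
        'partner_position': partner['position'],
        'partner_speed': partner['speed'],
        'partner_direction': partner['direction'],
        'avoidance_hint': 'keep distance from the reported partner',
    }
    # Control information is attached only when the degree of risk is high,
    # and its content changes with the degree of risk.
    if degree >= high_risk:
        info['control'] = {'command': 'stop'}
    elif degree >= mid_risk:
        info['control'] = {'command': 'decelerate'}
    # Below mid_risk: notification only, no control information.
    return info

print(build_danger_notification(
    {'position': (12.0, 3.0), 'speed': 14.0, 'direction': 90.0}, degree=0.8))
```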
  • On the other hand, if it is determined in step S108 that a dangerous state has not been detected, the process of step S109 is skipped, and the process proceeds to step S110.
  • In step S110, the global map update unit 304 determines whether or not to update the global map. If it is determined that the global map is to be updated, the process proceeds to step S111.
  • the conditions for updating the global map can be set arbitrarily.
  • the global map may be updated on a predetermined date, day of the week, or time zone.
  • the global map may be updated at predetermined time intervals.
  • the global map may be updated when the accumulated amount of the local map from each information terminal 11a after the previous update becomes a predetermined amount or more.
  • In step S111, the global map update unit 304 updates the global map. For example, the global map update unit 304 compares the local maps accumulated in the map information storage unit 303 since the previous update with the global map stored in the map information storage unit 303, and extracts areas whose information differs. For areas where the reliability of the local maps is high, the global map update unit 304 replaces the extracted areas of the global map with the local map information.
  • An area where the reliability of the local maps is high is, for example, an area where the local map information from a predetermined number or more of the information terminals 11a matches.
  • In this way, for example, information on a moving body that remains at rest, such as a vehicle parked for a long time, can be reflected in the global map. Changes in the environment, such as fallen trees and fallen rocks, can also be reflected in the global map.
  • the global map update unit 304 stores the updated global map in the map information storage unit 303.
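  • The following sketch illustrates one way the described update could work, assuming the maps are keyed by quantised cells; the voting scheme and the agreement threshold are assumptions standing in for the 'high reliability' condition:

```python
from collections import Counter, defaultdict

def update_global_map(global_map, local_maps, min_agreeing_terminals=3):
    """Replace global-map cells with locally observed content.

    global_map: dict mapping a cell key (e.g. quantised (x, y)) to content.
    local_maps: list of dicts with the same keying, one per terminal,
                accumulated since the previous update.
    A cell is rewritten only when at least min_agreeing_terminals local
    maps agree on the same content and it differs from the global map.
    """
    votes = defaultdict(Counter)
    for lm in local_maps:
        for cell, content in lm.items():
            votes[cell][content] += 1
    for cell, counter in votes.items():
        content, count = counter.most_common(1)[0]
        if count >= min_agreeing_terminals and global_map.get(cell) != content:
            global_map[cell] = content  # e.g. a fallen tree now blocks the cell
    return global_map

gmap = {(10, 4): 'free', (10, 5): 'free'}
locals_ = [{(10, 4): 'obstacle'}, {(10, 4): 'obstacle'}, {(10, 4): 'obstacle'}]
print(update_global_map(gmap, locals_))  # cell (10, 4) becomes 'obstacle'
```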
  • On the other hand, if it is determined in step S110 that the global map is not to be updated, the process of step S111 is skipped, and the process proceeds to step S112.
  • In step S112, the dangerous area map update unit 305 determines whether or not to update the dangerous area map. If it is determined that the dangerous area map is to be updated, the process proceeds to step S113.
  • the conditions for updating the dangerous area map can be arbitrarily set.
  • the dangerous area map may be updated on a predetermined date, day of the week, or time zone.
  • the dangerous area map may be updated at predetermined time intervals.
  • the dangerous area map may be updated in accordance with the update of the global map.
  • The dangerous area map may be updated when the accumulated amount of dangerous area information from the information terminals 11a since the previous update becomes a predetermined amount or more.
  • In step S113, the dangerous area map update unit 305 updates the dangerous area map. For example, among the areas not currently set as dangerous areas in the dangerous area map, the dangerous area map update unit 305 newly sets as a dangerous area any area for which the frequency of being notified as a dangerous area by the information terminals 11a exceeds a predetermined threshold. Conversely, for an area currently set as a dangerous area, the dangerous area map update unit 305 may remove the dangerous area setting when the frequency of being notified as a dangerous area by the information terminals 11a is very low.
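  • A minimal sketch of this frequency-based update, with illustrative thresholds standing in for the 'predetermined threshold' and 'very low frequency' conditions, might look like this:

```python
def update_dangerous_area_map(danger_map, reports, add_threshold=10,
                              remove_threshold=1):
    """Add/remove dangerous areas based on notification frequency.

    danger_map: set of area keys currently marked as dangerous.
    reports:    dict mapping area key -> number of times terminals
                reported it as dangerous since the previous update.
    """
    for area, freq in reports.items():
        if area not in danger_map and freq > add_threshold:
            danger_map.add(area)        # newly set as a dangerous area
    for area in list(danger_map):
        if reports.get(area, 0) < remove_threshold:
            danger_map.discard(area)    # rarely reported: remove the setting
    return danger_map

print(update_dangerous_area_map({'junction_7'},
                                {'curve_3': 15, 'junction_7': 0}))
```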
  • On the other hand, if it is determined in step S112 that the dangerous area map is not to be updated, the process of step S113 is skipped, and the process proceeds to step S114.
  • In step S114, the simulation unit 309 determines whether an accident has occurred. For example, the simulation unit 309 determines that an accident has occurred when accident information indicating the occurrence of an accident of a moving body is received from an external server, an information terminal 11a, or the like via the receiving unit 301, and the process proceeds to step S115.
  • an eCall system server is assumed as an external server that transmits accident information.
  • In step S115, the simulation unit 309 performs settings so that the position information related to the accident is saved. Specifically, among the position information stored in the position information storage unit 302, the simulation unit 309 sets as position information related to the accident any position information whose time falls within a predetermined period before and after the accident occurrence time and whose indicated position falls within a predetermined range including the accident location. As a result, for example, even when old position information is deleted from the position information storage unit 302, the position information related to the accident is retained without being deleted.
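  • As an illustration, the retention rule described above could be sketched as follows; the time window and radius are placeholders, since the patent gives no concrete values:

```python
import math

def select_accident_records(records, accident_time, accident_pos,
                            window_s=60.0, radius_m=200.0):
    """Mark position records related to an accident so they are kept.

    records: list of dicts with 'time', 'pos' ((x, y)) and a 'keep' flag.
    A record is retained when its time lies within a predetermined period
    before/after the accident and its position lies within a predetermined
    range of the accident location.
    """
    for r in records:
        dt = abs(r['time'] - accident_time)
        dist = math.dist(r['pos'], accident_pos)
        if dt <= window_s and dist <= radius_m:
            r['keep'] = True  # exempt from normal deletion of old records
    return records

recs = [{'time': 100.0, 'pos': (0.0, 0.0), 'keep': False},
        {'time': 500.0, 'pos': (5.0, 5.0), 'keep': False}]
print(select_accident_records(recs, accident_time=120.0,
                              accident_pos=(3.0, 4.0)))
```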
  • On the other hand, if it is determined in step S114 that no accident has occurred, the process of step S115 is skipped, and the process proceeds to step S116.
  • In step S116, the simulation unit 309 determines whether execution of an accident simulation has been commanded. For example, when the simulation unit 309 receives an accident simulation execution command via an input unit (not shown), or receives an accident simulation execution command from another server or the like via the receiving unit 301, it determines that execution of an accident simulation has been commanded, and the process proceeds to step S117.
  • In step S117, the simulation unit 309 executes the accident simulation.
  • the simulation unit 309 acquires position information related to the accident to be reproduced by the simulation from the position information storage unit 302.
  • The simulation unit 309 generates and displays a simulation image showing how the accident occurred, based on the acquired position information. For example, the simulation image reproduces the movements of the moving bodies near the accident site before and after the occurrence of the accident.
  • Alternatively, the simulation unit 309 generates simulation data for displaying the above-described simulation image based on the acquired position information, and then transmits the generated simulation data.
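  • One possible shape for such simulation data is sketched below; the fixed-rate frame sampling and record layout are assumptions, and rendering is outside the scope of the sketch:

```python
def build_simulation_frames(records, fps=10):
    """Turn saved accident-related position records into display frames.

    records: list of dicts with 'time', 'body_id' and 'pos', already
             filtered to the period and range around the accident.
    Returns a list of frames, each mapping body_id -> last known position,
    sampled at a fixed rate so a client can animate the movements of the
    moving bodies before and after the accident.
    """
    if not records:
        return []
    records = sorted(records, key=lambda r: r['time'])
    t0, t1 = records[0]['time'], records[-1]['time']
    frames, state, i = [], {}, 0
    steps = int((t1 - t0) * fps) + 1
    for k in range(steps):
        t = t0 + k / fps
        # Advance through records up to time t, updating each body's position.
        while i < len(records) and records[i]['time'] <= t:
            state[records[i]['body_id']] = records[i]['pos']
            i += 1
        frames.append(dict(state))
    return frames

recs = [{'time': 0.0, 'body_id': 'A', 'pos': (0, 0)},
        {'time': 0.5, 'body_id': 'B', 'pos': (9, 1)},
        {'time': 1.0, 'body_id': 'A', 'pos': (3, 0)}]
print(len(build_simulation_frames(recs)))  # 11 frames over one second
```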
  • Thereafter, the process returns to step S103, and the processes after step S103 are executed.
  • On the other hand, if it is determined in step S116 that execution of the accident simulation has not been instructed, the process returns to step S103, and the processes after step S103 are executed.
  • As described above, the accuracy of estimating the position and speed of each moving body improves and, with it, the accuracy of the danger prediction for each moving body, so that accidents of the moving bodies can be prevented more reliably.
  • For example, when the position of a moving body is measured using GPS, the position cannot be measured accurately in urban areas where it is difficult to receive radio waves from GPS satellites. Moreover, the measurement accuracy of GPS positioning for private use is only about 10 m. Therefore, when GPS is used, the accuracy of the danger prediction for the moving body may decrease.
  • In contrast, each information terminal 11a can stably estimate the position, posture, and speed of its own moving body with high accuracy regardless of the surrounding conditions and environment.
  • For example, the position of a moving body can be estimated with an accuracy of several centimetres. As a result, the accuracy of estimating the movement of each moving body improves, and the accuracy of the danger prediction for the moving bodies improves.
  • the accuracy of the risk prediction is improved as compared with the case where the risk prediction is performed by each mobile body alone.
  • Note that SLAM is basically a technique for estimating the position and orientation of a moving body relative to stationary objects; recognizing surrounding moving objects is not its main purpose. In addition, SLAM cannot recognize other moving bodies that are in the camera's blind spots and do not appear in the image.
  • For example, suppose the field of view in front of a vehicle 431 is blocked by a parked vehicle 433. In that case, it is difficult for the vehicle 431 to recognize the movement of a vehicle 432 approaching in the oncoming lane.
  • In contrast, the server 12a collects the position information of the vehicles 421 and 422 and of the vehicles 431 and 432, predicts the movement of each vehicle, and then performs the danger prediction for each vehicle. As a result, the server 12a can predict and prevent an accident between the vehicles 421 and 422 and an accident between the vehicles 431 and 432.
  • Furthermore, the server 12a collects local maps from the information terminals 11a and updates the global map based on the collected local maps. Since changes in the situation at each location can thus be reflected in the global map quickly and accurately, the accuracy of the danger prediction for each moving body improves, and accidents of the moving bodies can be prevented more reliably.
  • In addition, since the server 12a stores position information related to an accident and can later simulate how the accident occurred, it can support on-site verification, analysis of the cause of the accident, planning of accident prevention measures, and the like.
  • FIG. 13 shows an example of the functional configuration of the information terminal 11b, which is the second embodiment of the information terminal 11. In the figure, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and descriptions thereof are omitted as appropriate.
  • the information terminal 11b is different from the information terminal 11a in FIG. 2 in that an information processing unit 501 is provided instead of the information processing unit 103.
  • The information processing unit 501 differs from the information processing unit 103 in that a dangerous area determination unit 511 is provided instead of the dangerous area determination unit 112.
  • The dangerous area determination unit 511 receives, from the server 12b (FIG. 14) via the receiving unit 102, the determination result as to whether or not its own moving body is in a dangerous area.
  • the dangerous area determination unit 511 determines whether or not the mobile body is in the dangerous area based on the received determination result, and notifies the SLAM processing unit 111 of the determination result.
  • FIG. 14 shows an example of the functional configuration of the server 12b, which is the second embodiment of the server 12. In the figure, parts corresponding to those in FIG. 4 are denoted by the same reference numerals, and descriptions thereof are omitted as appropriate.
  • The server 12b differs from the server 12a of FIG. 4 in that a global map search unit 601 and a dangerous area determination unit 602 are provided instead of the global map search unit 310 and the dangerous area notification unit 311.
  • the global map search unit 601 receives global map transmission request information from each information terminal 11b via the receiving unit 301. Then, the global map search unit 601 searches for an image of the local map included in the global map transmission request information in the global map stored in the map information storage unit 303. The global map search unit 601 extracts a map of the searched area and its surrounding area from the global map, and transmits the extracted map to the requesting information terminal 11b via the transmission unit 312. In addition, the global map search unit 601 detects the current position of the requesting mobile body based on the search result, and supplies the detection result to the dangerous area determination unit 602.
  • The dangerous area determination unit 602 determines whether or not the current position of the requesting moving body detected by the global map search unit 601 is within a dangerous area indicated in the dangerous area map stored in the map information storage unit 303.
  • the dangerous area determination unit 602 transmits the determination result to the information terminal 11b that has requested the global map via the transmission unit 312.
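  • The request-handling flow described for the server 12b could be sketched as follows; every name here (the index object, the area keying) is an assumption used to mirror the flow of searching the global map, detecting the current position, and returning the dangerous area determination result:

```python
def handle_global_map_request(request, global_map_index, danger_areas, area_of):
    """Sketch of the server 12b flow for one global map transmission request.

    request:          dict with 'local_map_features' and 'terminal_id'.
    global_map_index: object with search(features) -> (submap, position);
                      stands in for the global map search unit 601.
    danger_areas:     set of dangerous area keys from the dangerous area map.
    area_of:          function mapping a position to its area key.
    """
    # Search the global map and detect the requester's current position.
    submap, position = global_map_index.search(request['local_map_features'])
    # Determine whether that position lies within a dangerous area.
    in_danger = area_of(position) in danger_areas
    # Return both the surrounding map and the determination result.
    return {'to': request['terminal_id'],
            'global_map': submap,
            'in_dangerous_area': in_danger}

class DummyIndex:
    def search(self, feats):
        return ({'cells': '...'}, (10.0, 4.0))

print(handle_global_map_request(
    {'local_map_features': [], 'terminal_id': '11b-1'},
    DummyIndex(), {'grid_10_4'},
    lambda p: f'grid_{int(p[0])}_{int(p[1])}'))
```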
  • In steps S201 to S203, processing similar to that in steps S1 to S3 in FIG. 6 is executed.
  • In step S204, the dangerous area determination unit 511 determines whether or not its own moving body is in a dangerous area. Specifically, if the determination result that the moving body is in a dangerous area has been received from the server 12b, the dangerous area determination unit 511 determines that the moving body is in a dangerous area, and the process proceeds to step S205.
  • In step S205, the position information and the local map are transmitted to the server 12b in the same manner as in the process of step S5 of FIG. 6.
  • On the other hand, in step S204, if the dangerous area determination unit 511 has received from the server 12b the determination result that its own moving body is outside the dangerous area, or has received no determination result from the server 12b, it determines that the moving body is not in a dangerous area. The process of step S205 is then skipped, and the process proceeds to step S206.
  • In steps S206 to S210, processing similar to that in steps S6 to S10 in FIG. 6 is executed.
  • In step S211, it is determined whether or not to request transmission of the global map, as in the process of step S11 of FIG. 6. If it is determined that transmission of the global map is to be requested, the process proceeds to step S212.
  • In step S212, global map transmission request information is transmitted to the server 12b in the same manner as in step S12 of FIG. 6.
  • The server 12b receives the global map transmission request information in step S302 of FIG. 16, described later, and in step S305 transmits the global map around the requesting moving body together with the dangerous area determination result.
  • In step S213, the receiving unit 102 receives the global map and the dangerous area determination result transmitted from the server 12b.
  • The receiving unit 102 stores the received global map in the map information storage unit 216 and supplies the dangerous area determination result to the dangerous area determination unit 511.
  • Thereafter, the process returns to step S201, and the processes after step S201 are executed.
  • On the other hand, if it is determined in step S211 that transmission of the global map is not requested, the process returns to step S201, and the processes after step S201 are executed.
  • In step S301, reception of the position information, local maps, and dangerous area notification information transmitted from the information terminals 11b is started in the same manner as in step S101 of FIG. 7.
  • In step S302, it is determined whether or not transmission of the global map has been requested, in the same manner as in step S103 of FIG. 7. If it is determined that transmission of the global map has been requested, the process proceeds to step S303.
  • In step S303, the global map around the requesting moving body is searched in the same manner as in step S104 of FIG. 7.
  • At this time, the global map search unit 601 detects the current position of the requesting moving body based on the search result, and supplies the detection result to the dangerous area determination unit 602.
  • In step S304, the dangerous area determination unit 602 determines whether or not the requesting moving body is in a dangerous area. Specifically, the dangerous area determination unit 602 determines whether or not the current position of the requesting moving body detected by the global map search unit 601 is within a dangerous area indicated in the dangerous area map stored in the map information storage unit 303.
  • In step S305, the server 12b transmits the global map and the dangerous area determination result.
  • Specifically, the global map search unit 601 extracts a map of the area found in the process of step S303 and its surrounding area from the global map.
  • The global map search unit 601 supplies the extracted map to the transmission unit 312.
  • The dangerous area determination unit 602 notifies the transmission unit 312 of the determination result of the process in step S304.
  • The transmission unit 312 then transmits the map supplied from the global map search unit 601 and the determination result notified from the dangerous area determination unit 602 to the requesting information terminal 11b.
  • On the other hand, if it is determined in step S302 that transmission of the global map is not requested, the processes of steps S303 to S305 are skipped, and the process proceeds to step S306.
  • In steps S306 to S317, processing similar to that in steps S106 to S117 in FIG. 7 is executed. The process then returns to step S302, and the processes after step S302 are executed.
  • As described above, since the server 12b determines whether or not each moving body is in a dangerous area, the processing load on each information terminal 11b can be reduced.
  • The third embodiment differs from the first embodiment in the configuration of the SLAM processing unit 111 of the information terminal 11.
  • FIG. 18 shows an example of the functional configuration of the SLAM processing unit 111b, which is the second embodiment of the SLAM processing unit 111. In the figure, parts corresponding to those in FIG. 3 are denoted by the same reference numerals, and descriptions thereof are omitted as appropriate.
  • the SLAM processing unit 111b is different from the SLAM processing unit 111a in FIG. 3 in that an estimation unit 701 is provided instead of the estimation unit 201.
  • The estimation unit 701 differs from the estimation unit 201 in that a feature amount calculation unit 711, an image search unit 712, a feature point matching unit 713, a position/orientation estimation unit 714, and a movement amount estimation unit 715 are provided instead of the feature amount calculation unit 215, the motion matching unit 217, the movement amount estimation unit 218, the object dictionary storage unit 219, the object recognition unit 220, and the position and orientation estimation unit 222.
  • the feature amount calculation unit 711 calculates the feature amount of each feature point of the left image in the same manner as the feature amount calculation unit 215 of the SLAM processing unit 111a.
  • The feature amount calculation unit 711 supplies feature point information regarding each feature point to the image search unit 712 and the feature point matching unit 713, and stores the feature point information in the map information storage unit 216.
  • the image search unit 712 searches for an image similar to the left image from the global map stored in the map information storage unit 216 based on the feature amount information supplied from the feature amount calculation unit 711. That is, the image search unit 712 searches the global map for an image around the moving object.
  • the image search unit 712 supplies the detected image (hereinafter referred to as a similar image) to the feature point matching unit 713.
  • the feature point matching unit 713 detects a feature point of a similar image corresponding to the feature point detected by the feature point detection unit 212 based on the feature amount information supplied from the feature amount calculation unit 711.
  • the feature point matching unit 713 supplies the similar image and the detection result of the feature point to the position / orientation estimation unit 714.
  • The position/orientation estimation unit 714 estimates the position and orientation of its own moving body (more precisely, of the camera 101L) in the global map (that is, in the spatial coordinate system) based on the positions of the feature points in the similar image.
  • The position/orientation estimation unit 714 supplies position/orientation information indicating the position and orientation of its own moving body to the dangerous area determination unit 112, the risk prediction unit 113, the position information generation unit 202, the speed estimation unit 223, and the movement amount estimation unit 715, and stores it in the position/orientation information storage unit 221.
  • Based on the position/orientation information of the previous frame and the current frame stored in the position/orientation information storage unit 221, the movement amount estimation unit 715 estimates the amount of movement of the position and posture of its own moving body between the previous frame and the current frame. The movement amount estimation unit 715 supplies movement amount information indicating the estimated amount of movement to the object detection unit 203 and the speed estimation unit 223.
  • In this way, in the SLAM processing unit 111b, the position and orientation of the moving body are estimated directly using the global map.
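  • As a rough illustration of this global-map-based estimation, the sketch below performs brute-force descriptor matching between the current image and stored map features; the descriptor format and threshold are assumptions, and the final pose solve is only indicated in a comment (for example via OpenCV's solvePnPRansac):

```python
import numpy as np

def match_descriptors(query_desc, map_desc, max_dist=0.7):
    """Brute-force nearest-neighbour matching of feature descriptors.

    query_desc: (N, D) descriptors from the current left image.
    map_desc:   (M, D) descriptors stored with the global map.
    Returns index pairs (i_query, j_map) whose distance is below max_dist.
    """
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(map_desc - q, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist:
            matches.append((i, j))
    return matches

# With 2D-3D correspondences established from these matches, the pose in
# the global map's spatial coordinate system could then be solved with a
# PnP routine, e.g. cv2.solvePnPRansac(object_pts, image_pts, K, None).
q = np.random.rand(5, 32).astype(np.float32)
m = np.vstack([q + 0.01, np.random.rand(20, 32)]).astype(np.float32)
print(match_descriptors(q, m))  # the first five map entries should match
```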
  • Note that the SLAM processing unit 111a performs motion matching, movement amount estimation, position and orientation estimation, and object detection using images of adjacent frames (the image of the previous frame and the image of the current frame), but these processes may be performed using images separated by two or more frames (for example, the image of N frames before, where N is two or more, and the image of the current frame). The same applies to the SLAM processing unit 111b.
  • A risk level may be set for each dangerous area of the dangerous area map, and the information terminal 11 may perform processing according to the risk level.
  • The dangerous areas and their degrees of risk differ depending on the time of day, season, weather, and so on. Therefore, a plurality of dangerous area maps may be created according to the time of day, season, weather, etc., and used selectively.
  • In the above description, the information terminal 11 notifies the server 12 of a location where a risk of an accident with another moving body has been detected as a dangerous area; however, a dangerous area may also be detected by a different method. For example, a place where, in the left image or the right image, the area blocking the field of view in the traveling direction occupies a predetermined ratio or more may be detected as a dangerous area.
  • Each information terminal 11 may also predict the movement of its own moving body by the same process as the motion prediction unit 306 of the server 12a, include the prediction result in the position information, and transmit it to the server 12.
  • In this case, the server 12 may perform the risk prediction for each moving body based on the prediction results of the movements of the moving bodies received from the information terminals 11, as sketched below.
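  • A hedged sketch of what such terminal-side position information, with the prediction result included, might look like (all field names are assumptions):

```python
import json, time

def build_position_info(terminal_id, pos, speed, heading_deg, predicted_path):
    """Package the terminal's estimate plus its own motion prediction.

    predicted_path: list of (t, x, y) points produced locally by the same
    kind of process as the server's motion prediction unit 306.
    """
    return json.dumps({
        'terminal_id': terminal_id,
        'timestamp': time.time(),
        'position': pos,           # estimated in the global map's coordinates
        'speed': speed,            # m/s
        'heading_deg': heading_deg,
        'predicted_path': predicted_path,
    })

print(build_position_info('11-42', (105.2, 33.8), 12.5, 87.0,
                          [(1.0, 117.7, 34.3), (2.0, 130.2, 34.8)]))
```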
  • the cameras 101L and 101R may be provided outside the information terminal 11.
  • The present technology can also be applied when the moving body is something other than a vehicle, such as a person, an airplane, a helicopter, or a drone.
  • the present technology can be applied not only to a vehicle that moves by a prime mover, but also to a vehicle that is driven by a rail or an overhead line, a vehicle that moves by human power, and the like.
  • the present technology can be applied regardless of differences in vehicle driving methods (for example, automatic driving, manual driving, remote control, etc.).
  • SLAM technology can be applied by photographing the ground from the moving body.
  • This technology can also be applied to prevent accidents between two or more types of moving objects.
  • For example, the present technology can be applied to prevent accidents among cars, bicycles, and people, accidents between cars and trains, and the like.
  • FIG. 19 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
  • In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another via a bus 904.
  • An input / output interface 905 is further connected to the bus 904.
  • An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910 are connected to the input / output interface 905.
  • the input unit 906 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 907 includes a display, a speaker, and the like.
  • the storage unit 908 includes a hard disk, a nonvolatile memory, and the like.
  • the communication unit 909 includes a network interface or the like.
  • the drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 901 loads the program stored in the storage unit 908 into the RAM 903 via the input/output interface 905 and the bus 904 and executes it, whereby the above-described series of processing is performed.
  • the program executed by the computer (CPU 901) can be provided by being recorded on a removable medium 911 as a package medium, for example.
  • the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • the program can be installed in the storage unit 908 via the input / output interface 905 by attaching the removable medium 911 to the drive 910.
  • the program can be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908.
  • the program can be installed in the ROM 902 or the storage unit 908 in advance.
  • Note that the program executed by the computer may be a program that is processed in time series in the order described in this specification, or a program that is processed in parallel or at necessary timing, such as when a call is made.
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
  • the present technology can take a cloud computing configuration in which one function is shared by a plurality of devices via a network and is jointly processed.
  • Each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
  • Furthermore, when one step includes a plurality of processes, those processes can be executed by one device or shared among a plurality of devices.
  • the information terminal 11 can be realized as a device mounted on any type of vehicle such as an automobile, an electric car, a hybrid electric car, and a motorcycle.
  • FIG. 20 is a block diagram illustrating an example of a schematic configuration of a vehicle control system 1000 to which the present technology can be applied.
  • the vehicle control system 1000 includes a plurality of electronic control units connected via a communication network 1001.
  • the vehicle control system 1000 includes a drive system control unit 1002, a body system control unit 1004, a battery control unit 1005, a vehicle exterior information detection device 1007, a vehicle interior information detection device 1010, and an integrated control unit 1012.
  • The communication network 1001 connecting the plurality of control units may be an in-vehicle communication network compliant with an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
  • Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer and the parameters used for various calculations, and a drive circuit that drives the devices to be controlled.
  • Each control unit also includes a network I/F for communicating with other control units via the communication network 1001, and a communication I/F for wired or wireless communication with devices or sensors inside and outside the vehicle.
  • As the functional configuration of the integrated control unit 1012, FIG. 20 illustrates a microcomputer 1051, a general-purpose communication I/F 1052, a dedicated communication I/F 1053, a positioning unit 1054, a beacon receiving unit 1055, an in-vehicle device I/F 1056, an audio/image output unit 1057, an in-vehicle network I/F 1058, and a storage unit 1059.
  • other control units include a microcomputer, a communication I / F, a storage unit, and the like.
  • the drive system control unit 1002 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 1002 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
  • The drive system control unit 1002 may also have a function as a control device such as an ABS (Antilock Brake System) or ESC (Electronic Stability Control).
  • a vehicle state detection unit 1003 is connected to the drive system control unit 1002.
  • The vehicle state detection unit 1003 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the accelerator pedal operation amount, the brake pedal operation amount, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like.
  • the drive system control unit 1002 performs arithmetic processing using a signal input from the vehicle state detection unit 1003, and controls an internal combustion engine, a drive motor, an electric power steering device, a brake device, or the like.
  • the body system control unit 1004 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 1004 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a blinker, or a fog lamp.
  • Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 1004.
  • the body system control unit 1004 receives the input of these radio waves or signals, and controls the vehicle door lock device, power window device, lamp, and the like.
  • the battery control unit 1005 controls the secondary battery 1006 that is a power supply source of the drive motor according to various programs. For example, information such as a battery temperature, a battery output voltage, or a remaining battery capacity is input to the battery control unit 1005 from a battery device including the secondary battery 1006. The battery control unit 1005 performs arithmetic processing using these signals, and controls the temperature adjustment control of the secondary battery 1006 or the cooling device provided in the battery device.
  • the vehicle outside information detection device 1007 detects information outside the vehicle on which the vehicle control system 1000 is mounted.
  • An imaging unit 1008 and a vehicle exterior information detection unit 1009 are connected to the vehicle exterior information detection device 1007.
  • the imaging unit 1008 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
  • The vehicle exterior information detection unit 1009 includes, for example, an environmental sensor for detecting the current weather or meteorological conditions, or a surrounding information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle on which the vehicle control system 1000 is mounted.
  • the environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunshine sensor that detects sunlight intensity, and a snow sensor that detects snowfall.
  • the ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device.
  • the imaging unit 1008 and the vehicle exterior information detection unit 1009 may be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices are integrated.
  • FIG. 21 shows an example of installation positions of the imaging unit 1008 and the vehicle exterior information detection unit 1009.
  • The imaging units 1101F, 1101L, 1101R, 1101B, and 1101C are provided, for example, at least at one position among the front nose, the side mirrors, the rear bumper, the back door, and the upper portion of the windshield in the vehicle interior of the vehicle 1100.
  • the imaging unit 1101F provided in the front nose and the imaging unit 1101C provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 1100.
  • the imaging units 1101L and 1101R provided in the side mirror mainly acquire an image of the side of the vehicle 1100.
  • the imaging unit 1101B provided in the rear bumper or the back door mainly acquires an image behind the vehicle 1100.
  • the imaging unit 1101C provided on the upper part of the windshield in the passenger compartment is mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 21 shows an example of shooting ranges of the respective imaging units 1101F, 1101L, 1101R, and 1101B.
  • the imaging range a indicates the imaging range of the imaging unit 1101F provided on the front nose,
  • the imaging ranges b and c indicate the imaging ranges of the imaging units 1101L and 1101R provided on the side mirrors, respectively,
  • and the imaging range d indicates the imaging range of the imaging unit 1101B provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 1101F, 1101L, 1101R, and 1101B, an overhead image of the vehicle 1100 as viewed from above is obtained.
  • The vehicle exterior information detection units 1102F, 1102FL, 1102FR, 1102ML, 1102MR, 1102C, 1102BL, 1102BR, and 1102B provided on the front, rear, sides, corners, and upper portion of the windshield in the vehicle interior of the vehicle 1100 may be, for example, ultrasonic sensors or radar devices.
  • The vehicle exterior information detection units 1102F, 1102C, and 1102B provided on the front nose, the rear bumper, the back door, and the upper portion of the windshield in the vehicle interior of the vehicle 1100 may be, for example, LIDAR devices.
  • These outside information detection units 1102F to 1102B are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, and the like.
  • The vehicle exterior information detection device 1007 causes the imaging unit 1008 to capture an image of the outside of the vehicle and receives the captured image data. The vehicle exterior information detection device 1007 also receives detection information from the connected vehicle exterior information detection unit 1009. When the vehicle exterior information detection unit 1009 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection device 1007 causes it to transmit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves.
  • Based on the received information, the vehicle exterior information detection device 1007 may perform object detection processing or distance detection processing for persons, cars, obstacles, signs, characters on the road surface, and the like.
  • the vehicle outside information detection device 1007 may perform an environment recognition process for recognizing rainfall, fog, road surface conditions, or the like based on the received information.
  • the vehicle outside information detection device 1007 may calculate a distance to an object outside the vehicle based on the received information.
  • the vehicle outside information detection device 1007 may perform an image recognition process or a distance detection process for recognizing a person, a car, an obstacle, a sign, a character on a road surface, or the like based on the received image data.
  • The vehicle exterior information detection device 1007 may perform processing such as distortion correction or alignment on the received image data, and may combine image data captured by different imaging units 1008 to generate an overhead image or a panoramic image.
  • the vehicle exterior information detection device 1007 may perform viewpoint conversion processing using image data captured by different imaging units 1008.
  • the in-vehicle information detection device 1010 detects in-vehicle information.
  • a driver state detection unit 1011 that detects the driver's state is connected to the in-vehicle information detection apparatus 1010.
  • the driver state detection unit 1011 may include a camera that captures an image of the driver, a biological sensor that detects biological information of the driver, a microphone that collects sound in the passenger compartment, and the like.
  • the biometric sensor is provided, for example, on a seat surface or a steering wheel, and detects biometric information of an occupant sitting on the seat or a driver holding the steering wheel.
  • The in-vehicle information detection device 1010 may calculate the degree of fatigue or the degree of concentration of the driver based on the detection information input from the driver state detection unit 1011, and may determine whether the driver is dozing off.
  • the vehicle interior information detection apparatus 1010 may perform a process such as a noise canceling process on the collected audio signal.
  • the integrated control unit 1012 controls the overall operation in the vehicle control system 1000 according to various programs.
  • An input unit 1018 is connected to the integrated control unit 1012.
  • the input unit 1018 is realized by a device that can be input by a passenger, such as a touch panel, a button, a microphone, a switch, or a lever.
  • The input unit 1018 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) that supports the operation of the vehicle control system 1000.
  • the input unit 1018 may be, for example, a camera. In that case, the passenger can input information by gesture.
  • the input unit 1018 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the input unit 1018 and outputs the input signal to the integrated control unit 1012.
  • a passenger or the like operates the input unit 1018 to input various data to the vehicle control system 1000 or instruct a processing operation.
  • the storage unit 1059 may include a RAM (Random Access Memory) that stores various programs executed by the microcomputer and a ROM (Read Only Memory) that stores various parameters, calculation results, sensor values, and the like.
  • the storage unit 1059 may be realized by a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
  • General-purpose communication I / F 1052 is a general-purpose communication I / F that mediates communication with various devices existing in the external environment 1016.
  • The general-purpose communication I/F 1052 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX, LTE (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)).
  • The general-purpose communication I/F 1052 may connect, for example via a base station or an access point, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network). The general-purpose communication I/F 1052 may also connect, for example using P2P (Peer To Peer) technology, to a terminal existing in the vicinity of the vehicle (for example, a pedestrian's or store's terminal, or an MTC (Machine Type Communication) terminal).
  • the dedicated communication I / F 1053 is a communication I / F that supports a communication protocol formulated for use in a vehicle.
  • The dedicated communication I/F 1053 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, or DSRC (Dedicated Short Range Communications).
  • The dedicated communication I/F 1053 typically performs V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, and vehicle-to-pedestrian communication.
  • The positioning unit 1054 receives a GNSS signal from a GNSS (Global Navigation Satellite System) satellite (for example, a GPS signal from a GPS (Global Positioning System) satellite), performs positioning, and generates position information including the latitude, longitude, and altitude of the vehicle.
  • the positioning unit 1054 may specify the current position by exchanging signals with the wireless access point, or may acquire position information from a terminal such as a mobile phone, PHS, or smartphone having a positioning function.
  • The beacon receiving unit 1055 receives, for example, radio waves or electromagnetic waves transmitted from radio stations installed on roads, and acquires information such as the current position, traffic congestion, road closures, or required travel time. Note that the function of the beacon receiving unit 1055 may be included in the dedicated communication I/F 1053 described above.
  • the in-vehicle device I / F 1056 is a communication interface that mediates connections between the microcomputer 1051 and various devices existing in the vehicle.
  • the in-vehicle device I / F 1056 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB).
  • the in-vehicle device I / F 1056 may establish a wired connection via a connection terminal (and a cable if necessary).
  • the in-vehicle device I / F 1056 exchanges a control signal or a data signal with, for example, a mobile device or wearable device that a passenger has, or an information device that is carried in or attached to the vehicle.
  • the in-vehicle network I / F 1058 is an interface that mediates communication between the microcomputer 1051 and the communication network 1001.
  • the in-vehicle network I / F 1058 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 1001.
  • The microcomputer 1051 of the integrated control unit 1012 controls the vehicle control system 1000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 1052, the dedicated communication I/F 1053, the positioning unit 1054, the beacon receiving unit 1055, the in-vehicle device I/F 1056, and the in-vehicle network I/F 1058.
  • For example, the microcomputer 1051 may calculate a control target value of the driving force generation device, the steering mechanism, or the braking device based on the acquired information about the inside and outside of the vehicle, and output a control command to the drive system control unit 1002.
  • the microcomputer 1051 may perform cooperative control for the purpose of vehicle collision avoidance or impact mitigation, follow-up travel based on inter-vehicle distance, vehicle speed maintenance travel, automatic driving, and the like.
  • The microcomputer 1051 may create local map information including peripheral information on the current position of the vehicle based on information acquired via at least one of the general-purpose communication I/F 1052, the dedicated communication I/F 1053, the positioning unit 1054, the beacon receiving unit 1055, the in-vehicle device I/F 1056, and the in-vehicle network I/F 1058. The microcomputer 1051 may also predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
  • The audio/image output unit 1057 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying an occupant of the vehicle or the outside of the vehicle of information.
  • an audio speaker 1013, a display unit 1014, and an instrument panel 1015 are illustrated as output devices.
  • the display unit 1014 may include at least one of an on-board display and a head-up display, for example.
  • the display unit 1014 may have an AR (Augmented Reality) display function.
  • the output device may be another device such as a headphone, a projector, or a lamp other than these devices.
  • When the output device is a display device, the display device visually displays results obtained by the various processes performed by the microcomputer 1051, or information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, it converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
  • At least two control units connected via the communication network 1001 may be integrated as one control unit.
  • each control unit may be configured by a plurality of control units.
  • the vehicle control system 1000 may include another control unit not shown.
  • some or all of the functions of any of the control units may be given to other control units. That is, as long as information is transmitted and received via the communication network 1001, the predetermined arithmetic processing may be performed by any of the control units.
  • A sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 1001.
  • The information processing unit 103 of the information terminal 11a in FIG. 2 and the information processing unit 501 of the information terminal 11b in FIG. 13 can be applied to the integrated control unit 1012 in FIG. 20.
  • The receiving unit 102 and the transmission unit 104 of the information terminal 11a in FIG. 2 and of the information terminal 11b in FIG. 13 can be applied to the general-purpose communication I/F 1052 in FIG. 20.
  • The cameras 101L and 101R of the information terminal 11a in FIG. 2 and of the information terminal 11b in FIG. 13 can be applied to the imaging unit 1008 in FIG. 20. In this case, the cameras 101L and 101R are assumed to be provided, for example, at the positions of the vehicle exterior information detection units 1102FL and 1102FR in FIG. 21 in order to increase the baseline length.
  • At least some of the components of the information processing unit 103 and the information processing unit 501 may be realized in a module (for example, an integrated circuit module configured on one die) for the integrated control unit 1012 illustrated in FIG. 20.
  • Alternatively, the information processing unit 103 and the information processing unit 501 may be realized by a plurality of control units of the vehicle control system 1000 illustrated in FIG. 20.
  • a computer program for realizing the functions of the information processing unit 103 and the information processing unit 501 can be installed in any control unit or the like. It is also possible to provide a computer-readable recording medium in which such a computer program is stored.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. Further, the above computer program may be distributed via a network, for example, without using a recording medium.
  • the present technology can take the following configurations.
  • An information processing apparatus including: a receiving unit that receives, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on an image photographed from the moving body; and a risk prediction unit that predicts an accident between the moving bodies based on the movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
  • the information processing apparatus according to (1) further including a motion prediction unit that predicts a motion of each moving body based on the estimated position and speed of each moving body.
  • The information processing apparatus according to (2), in which the receiving unit receives, from each information terminal, a local map indicating the positions in a three-dimensional space of feature points in an image taken from the moving body; the apparatus further includes a global map update unit that updates, based on the received local maps, a global map indicating the positions of feature points in a predetermined area in the three-dimensional space; and the motion prediction unit further predicts the motion of each moving body based on the global map.
  • the motion prediction unit predicts a motion in which each moving body avoids a stationary object on the global map.
  • The information processing apparatus according to (1), in which the position information includes the movement of each moving body predicted by the corresponding information terminal, and the risk prediction unit predicts an accident between the moving bodies based on the movements of the moving bodies predicted by the information terminals.
  • The information processing apparatus according to any one of (1) to (5), further including a danger notification unit that notifies the danger to the information terminal provided in a moving body predicted by the risk prediction unit to be at risk of an accident.
  • The information processing apparatus according to any one of (1) to (7), in which the receiving unit receives, from each information terminal, a detection result of a dangerous area where an accident may occur, further including: a dangerous area map update unit that updates a dangerous area map indicating the dangerous areas based on the detection results of the dangerous areas by the information terminals; and a dangerous area notification unit that notifies each information terminal of the dangerous areas based on the dangerous area map.
  • The information processing apparatus according to any one of (1) to (7), in which the receiving unit receives, from each information terminal, a detection result of a dangerous area where an accident may occur, further including: a dangerous area map update unit that updates a dangerous area map indicating the dangerous areas based on the detection results of the dangerous areas by the information terminals; and a dangerous area determination unit that determines whether or not each moving body is in a dangerous area based on the dangerous area map and notifies the information terminal of the determination result.
  • The information processing apparatus according to any one of (1) to (9), further including a storage unit that stores position information related to an accident when accident information indicating the occurrence of an accident of a moving body is received.
  • the information processing apparatus further including a simulation unit that simulates an accident occurrence state of the moving body based on the position information stored in the storage unit.
  • An information processing method including: a receiving step in which an information processing apparatus receives, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on an image captured from the moving body; and a risk prediction step of predicting an accident between the moving bodies based on the movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
  • An information terminal including: an estimation unit that estimates the position and speed of a moving body based on an image taken from the moving body; a transmission unit that transmits position information including the estimated position and speed of the moving body to an information processing apparatus; and a risk avoidance processing unit that performs processing for avoiding an accident when a risk that the moving body will have an accident is notified from the information processing apparatus.
  • the estimation unit estimates a position and a speed of the moving body based on a relative position between a feature point in an image captured from the moving body and the moving body.
  • The information terminal according to (14), further including a local map generation unit that generates a local map indicating the positions of the feature points in a three-dimensional space, in which the transmission unit further transmits the local map to the information processing apparatus.
  • The information terminal according to (14) or (15), further including: an object detection unit that detects objects around the moving body based on the feature points; and a risk prediction unit that predicts an accident of the moving body based on the estimated position and speed of the moving body and the detection results of the objects, in which the risk avoidance processing unit further performs processing for avoiding an accident when the risk prediction unit predicts a risk that the moving body will have an accident.
  • The information terminal according to any one of (13) to (16), in which the estimation unit predicts the movement of the moving body based on the estimated position and speed of the moving body, and the transmission unit transmits the position information including the prediction result of the movement of the moving body to the information processing apparatus.
  • the information terminal according to any one of (13) to (17), wherein the transmission unit transmits the position information to the information processing device when it is determined, based on information from the information processing device, that the moving body is in a dangerous area where there is a risk of an accident.
  • the information terminal according to (18), wherein the danger prediction unit further detects the dangerous area, and the transmission unit transmits a detection result of the dangerous area to the information processing apparatus.
  • An information processing method in which an information terminal provided on a moving body performs: an estimation step of estimating the position and speed of the moving body based on an image taken from the moving body; a transmission step of transmitting position information including the estimated position and speed of the moving body to an information processing apparatus; and a risk avoidance processing step of performing a process for avoiding an accident when the information processing apparatus notifies the terminal of a risk that the moving body will have an accident.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present technique relates to an information processing device, an information terminal and an information processing method which make it possible to reliably prevent accidents between mobile bodies. The information processing device is provided with: a receiving unit which, from information terminals equipped on each mobile body, receives position information indicating the position and speed of the mobile bodies estimated on the basis of images captured from said mobile bodies; and a danger prediction unit which predicts accidents between said mobile bodies on the basis of the movement of the mobile bodies predicted on the basis of the estimated position and speed of each mobile body. The present technique may be applied, for example, to systems for predicting and preventing accidents between vehicles.

Description

Information processing apparatus, information terminal, and information processing method
 The present technology relates to an information processing device, an information terminal, and an information processing method, and particularly relates to an information processing device, an information terminal, and an information processing method that can prevent an accident of a moving body.
 Conventionally, it has been proposed that, when the host vehicle is traveling on a mountain road or a narrow road with poor visibility, a navigation device mounted on the host vehicle receives information about an approaching oncoming vehicle from a server, and searches for and displays a point where the host vehicle can pass the oncoming vehicle.
 Specifically, the navigation device calculates the current position, speed, and heading of the host vehicle by map matching using a GPS (Global Positioning System) receiver, a vehicle speed sensor, an angular velocity gyroscope, and a map database. When the host vehicle is traveling on a mountain road or a narrow road with poor visibility, the navigation device transmits the current position, speed, and heading of the host vehicle, error information, and destination information to the server.
 The server determines the possibility of vehicles passing each other based on the information received from each vehicle, and transmits approach information including the current position, heading, and speed of other vehicles to the navigation device of each vehicle. The navigation device that has received the approach information searches for a point where it can pass the other vehicle and displays it on a map (see, for example, Patent Document 1).
Patent Document 1: International Publication No. 2004/064007
 However, in the invention described in Patent Document 1, when the navigation device cannot receive radio waves from GPS satellites, the position of the host vehicle cannot be detected, so the server cannot provide the approach information. As a result, the navigation device cannot guide the vehicle to a point where it can pass another vehicle. Moreover, since the measurement accuracy of GPS positioning in civilian use is about 10 m, passing with another vehicle may be erroneously detected. As a result, an accident when vehicles pass each other may not be prevented.
 The present technology has been made in view of such circumstances, and is intended to reliably prevent an accident of a moving body such as a vehicle.
 The information processing apparatus according to the first aspect of the present technology includes: a receiving unit that receives, from an information terminal provided in each moving body, position information indicating the position and speed of each moving body estimated based on an image captured from the moving body; and a risk prediction unit that predicts an accident between the moving bodies based on the movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
 A motion prediction unit that predicts the motion of each moving body based on the estimated position and speed of each moving body can further be provided.
 The receiving unit can receive, from each information terminal, a local map indicating the positions in a three-dimensional space of feature points in an image captured from each moving body; a global map update unit that updates a global map indicating the positions in a three-dimensional space of feature points within a predetermined area based on the received local maps can further be provided; and the motion prediction unit can further predict the motion of each moving body based on the global map.
 The motion prediction unit can predict a motion in which each moving body avoids a stationary object on the global map.
 The position information can include the movement of each moving body predicted by each information terminal, and the risk prediction unit can predict an accident between the moving bodies based on the movement of each moving body predicted by each information terminal.
 A danger notification unit that notifies the information terminal provided in a moving body predicted by the risk prediction unit to be at risk of an accident can further be provided.
 The danger notification unit can cause the information terminal to transmit control information used in processing for the moving body to avoid an accident.
 The receiving unit can receive, from each information terminal, a detection result of a dangerous area where there is a risk of an accident; and a dangerous area map update unit that updates a dangerous area map indicating the dangerous areas based on the detection results by the information terminals, and a dangerous area notification unit that notifies each information terminal of the dangerous areas based on the dangerous area map, can further be provided.
 The receiving unit can receive, from each information terminal, a detection result of a dangerous area where there is a risk of an accident; and a dangerous area map update unit that updates a dangerous area map indicating the dangerous areas based on the detection results by the information terminals, and a dangerous area determination unit that determines, based on the dangerous area map, whether or not each moving body is in a dangerous area and notifies each information terminal of the determination result, can further be provided.
 A storage unit that stores position information related to an accident when accident information indicating the occurrence of an accident of a moving body is received can further be provided.
 A simulation unit that simulates the occurrence of an accident of a moving body based on the position information stored in the storage unit can further be provided.
 The information processing method according to the first aspect of the present technology includes: a receiving step in which an information processing apparatus receives, from an information terminal provided in each moving body, position information indicating the position and speed of each moving body estimated based on an image captured from the moving body; and a risk prediction step of predicting an accident between the moving bodies based on the movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
 The information terminal according to the second aspect of the present technology is provided in a moving body and includes: an estimation unit that estimates the position and speed of the moving body based on an image captured from the moving body; a transmission unit that transmits position information including the estimated position and speed of the moving body to an information processing apparatus; and a risk avoidance processing unit that performs processing for avoiding an accident when the information processing apparatus notifies the terminal of a risk that the moving body will have an accident.
 The estimation unit can estimate the position and speed of the moving body based on the relative positions between the moving body and feature points in an image captured from the moving body.
 A local map generation unit that generates a local map indicating the positions of the feature points in a three-dimensional space can further be provided, and the transmission unit can further transmit the local map to the information processing apparatus.
 An object detection unit that detects objects around the moving body based on the feature points, and a risk prediction unit that predicts an accident of the moving body based on the estimated position and speed of the moving body and the object detection results, can further be provided; and the risk avoidance processing unit can further perform processing for avoiding an accident when the risk prediction unit predicts a risk that the moving body will have an accident.
 The estimation unit can predict the movement of the moving body based on the estimated position and speed of the moving body, and the transmission unit can transmit the position information including the prediction result of the movement of the moving body to the information processing apparatus.
 The transmission unit can transmit the position information to the information processing apparatus when it is determined, based on information from the information processing apparatus, that the moving body is in a dangerous area where there is a risk of an accident.
 The risk prediction unit can further detect the dangerous area, and the transmission unit can transmit the detection result of the dangerous area to the information processing apparatus.
 The information processing method according to the second aspect of the present technology includes: an estimation step in which an information terminal provided in a moving body estimates the position and speed of the moving body based on an image captured from the moving body; a transmission step of transmitting position information including the estimated position and speed of the moving body to an information processing apparatus; and a risk avoidance processing step of performing processing for avoiding an accident when the information processing apparatus notifies the terminal of a risk that the moving body will have an accident.
 In the first aspect of the present technology, position information indicating the position and speed of each moving body estimated based on an image captured from the moving body is received from an information terminal provided in each moving body, and an accident between the moving bodies is predicted based on the movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
 In the second aspect of the present technology, the position and speed of a moving body are estimated based on an image captured from the moving body, position information including the estimated position and speed of the moving body is transmitted to an information processing apparatus, and processing for avoiding an accident is performed when the information processing apparatus notifies the terminal of a risk that the moving body will have an accident.
 According to the first aspect of the present technology, the accuracy of predicting accidents between moving bodies is improved. As a result, accidents of moving bodies can be reliably prevented.
 According to the second aspect of the present technology, accidents of moving bodies can be reliably prevented.
 Note that the effects described here are not necessarily limited, and may be any of the effects described in the present disclosure.
Brief Description of Drawings
  • Fig. 1 is a block diagram showing an embodiment of an information processing system to which the present technology is applied.
  • Fig. 2 is a block diagram showing a first embodiment of an information terminal.
  • Fig. 3 is a block diagram showing a first embodiment of a SLAM processing unit.
  • Fig. 4 is a block diagram showing a first embodiment of a server.
  • Fig. 5 is a diagram showing an example of a dangerous area map.
  • Fig. 6 is a flowchart for explaining a first embodiment of the processing of an information terminal.
  • Figs. 7 and 8 are flowcharts for explaining a first embodiment of the processing of a server.
  • Fig. 9 is a diagram showing a first example of danger prediction processing.
  • Fig. 10 is a diagram showing a second example of danger prediction processing.
  • Figs. 11 and 12 are diagrams for explaining the effects of the present technology.
  • Fig. 13 is a block diagram showing a second embodiment of an information terminal.
  • Fig. 14 is a block diagram showing a second embodiment of a server.
  • Fig. 15 is a flowchart for explaining a second embodiment of the processing of an information terminal.
  • Figs. 16 and 17 are flowcharts for explaining a second embodiment of the processing of a server.
  • Fig. 18 is a block diagram showing a second embodiment of a SLAM processing unit.
  • Fig. 19 is a block diagram showing a configuration example of a computer.
  • Fig. 20 is a block diagram showing an example of a schematic configuration of a vehicle control system.
  • Fig. 21 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection unit and an imaging unit.
 Hereinafter, modes for carrying out the present technology (hereinafter referred to as embodiments) will be described. The description will be given in the following order.
1. First embodiment
2. Second embodiment (an example in which the dangerous area determination is performed by the server)
3. Third embodiment (an example of directly estimating the position and orientation of a moving body using a global map)
4. Modifications
5. Application examples
<1. First Embodiment>
 First, a first embodiment of the present technology will be described with reference to FIGS. 1 to 12.
{Configuration example of information processing system 1}
 FIG. 1 shows an embodiment of an information processing system 1 to which the present technology is applied.
 The information processing system 1 is configured to include information terminals 11-1 to 11-n and a server 12. Hereinafter, when it is not necessary to distinguish the information terminals 11-1 to 11-n individually, they are simply referred to as information terminals 11.
 Each information terminal 11 and the server 12 are connected to each other via a base station (not shown), a network 13, and the like, and communicate with each other. Any wireless communication method can be adopted for the information terminals 11, and any wired or wireless communication method can be adopted for the server 12. The information terminals 11 and the server 12 may also communicate directly.
 In the following, when each information terminal 11 and the server 12 communicate via the network 13 and the like, the phrase "via the network 13 and the like" is omitted for readability. For example, expressions such as "each information terminal 11 and the server 12 communicate with each other" and "each information terminal 11 and the server 12 transmit and receive information and data" are used.
 Each information terminal 11 is configured as, for example, an in-vehicle information terminal used in a car navigation system or an automatic driving system, or a portable information terminal such as a smartphone, a mobile phone, a tablet, a wearable device, or a notebook personal computer.
 Each information terminal 11 is provided in a moving body. The moving bodies provided with the information terminals 11 include, for example, bodies that move on land or in the air, such as vehicles, people, airplanes, helicopters, drones, and robots. Each information terminal 11 may be permanently installed in its moving body, or may be temporarily installed in, attached to, or carried by it.
 Hereinafter, for a given information terminal 11, the moving body provided with that information terminal 11 is referred to as the own moving body when it is distinguished from other moving bodies.
 The information processing system 1 performs risk prediction for each moving body provided with an information terminal 11, and performs processing for avoiding a predicted accident of each moving body. For example, the server 12 integrates information and data from the information terminals 11, performs risk prediction between the moving bodies, and notifies each information terminal 11 of the prediction result. In addition, each information terminal 11 performs risk prediction for its own moving body based on images captured from the own moving body. Each information terminal 11 then performs processing for avoiding a predicted accident of the own moving body based on the risk prediction results of the information terminal 11 itself and of the server 12.
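 The server-side risk prediction between moving bodies can be made concrete with a small sketch. The following Python snippet is only an illustration under simplifying assumptions, not the method of the present technology: it extrapolates two moving bodies linearly from their reported positions and velocities, computes their closest approach within a short time horizon, and flags a risk when that distance falls below a threshold. All function names, the horizon, and the safety threshold are hypothetical.

```python
import numpy as np

def closest_approach(p1, v1, p2, v2, horizon=5.0):
    """Time and distance of closest approach of two constant-velocity bodies.

    p1, p2: positions (m) and v1, v2: velocities (m/s), as 3-vectors.
    Returns (t_star, distance) with t_star clamped to [0, horizon] seconds.
    """
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = dv @ dv
    # If relative velocity is ~0, the distance is constant; evaluate at t = 0.
    t_star = 0.0 if denom < 1e-9 else float(np.clip(-(dp @ dv) / denom, 0.0, horizon))
    return t_star, float(np.linalg.norm(dp + dv * t_star))

def predict_danger(p1, v1, p2, v2, safe_distance=3.0):
    """True if the bodies are predicted to come closer than safe_distance."""
    _, d = closest_approach(p1, v1, p2, v2)
    return d < safe_distance

# Two vehicles heading toward the same point reach it simultaneously.
print(predict_danger([0, 0, 0], [10, 0, 0], [50, -50, 0], [0, 10, 0]))  # True
```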
 The server 12 also performs, for example, a simulation of how an accident of a moving body occurred, based on information and data from the information terminals 11.
{Example of functional configuration of information terminal 11a}
 FIG. 2 shows a functional configuration example of the information terminal 11a, which is the first embodiment of the information terminal 11 of FIG. 1. The information terminal 11a is configured to include cameras 101L and 101R, a reception unit 102, an information processing unit 103, and a transmission unit 104. The information processing unit 103 is configured to include a SLAM (Simultaneous Localization and Mapping) processing unit 111, a dangerous area determination unit 112, a risk prediction unit 113, and a risk avoidance processing unit 114.
 The camera 101L photographs, for example, the traveling direction of the moving body from the left side, and supplies the resulting image (hereinafter referred to as the left image) to the SLAM processing unit 111.
 The camera 101R photographs, for example, the traveling direction of the moving body from the right side, and supplies the resulting image (hereinafter referred to as the right image) to the SLAM processing unit 111.
 The reception unit 102 receives various kinds of information and data from other information terminals 11a, the server 12, other servers (not shown), and the like, and supplies them to the respective units of the information terminal 11a. For example, the reception unit 102 receives a global map from the server 12 and supplies it to the SLAM processing unit 111. The reception unit 102 also receives, for example, dangerous area information from the server 12 and supplies it to the dangerous area determination unit 112. Furthermore, the reception unit 102 receives, for example, danger notification information from the server 12 and supplies it to the risk avoidance processing unit 114.
 Here, the global map is a map indicating the positions in a three-dimensional space of stationary objects within a predetermined wide area. For example, the global map includes information indicating the positions on a three-dimensional spatial coordinate system, and the feature amounts, of feature points of stationary objects within the predetermined area. The spatial coordinate system is represented by, for example, latitude, longitude, and height above the ground.
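 As a concrete reading of the above, a global map can be pictured as a collection of feature-point records, each pairing a position in the shared spatial coordinate system with a feature amount (descriptor). The structure below is a hypothetical sketch, not a format defined by the present technology; the 64-dimensional descriptor is only an example.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MapFeature:
    """One stationary feature point in the global map."""
    latitude: float         # degrees
    longitude: float        # degrees
    height: float           # meters above the ground
    descriptor: np.ndarray  # feature amount, e.g. a 64-dim SURF-like vector

# A global map is then simply a (spatially indexable) set of such records.
global_map: list[MapFeature] = []
```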
 The dangerous area information is information indicating the position of a dangerous area where there is a risk of an accident occurring.
 The danger notification information is information with which the server 12, when it predicts a risk that a moving body will have an accident, notifies the information terminal 11a provided in the moving body at risk.
 The SLAM processing unit 111 uses SLAM technology to estimate the speed, position, and orientation of the own moving body, detect objects around the own moving body, and generate a local map, based on the left image, the right image, and the global map. The SLAM processing unit 111 supplies position and orientation information indicating the estimated position and orientation of the own moving body to the dangerous area determination unit 112 and the risk prediction unit 113, and supplies speed information indicating the estimated speed of the own moving body to the risk prediction unit 113. Furthermore, the SLAM processing unit 111 supplies position information including the estimated position and speed of the own moving body to the transmission unit 104, notifies the risk prediction unit 113 of the detection results for objects around the own moving body, and supplies the generated local map to the transmission unit 104. The SLAM processing unit 111 also generates global map transmission request information for requesting transmission of the global map, and supplies it to the transmission unit 104.
 Here, a local map is a map indicating the positions in a three-dimensional space of stationary objects around a moving body, and is generated by each information terminal 11. For example, like the global map, a local map includes information indicating the positions on the three-dimensional spatial coordinate system, and the feature amounts, of feature points of stationary objects around the moving body.
 The dangerous area determination unit 112 determines whether or not the own moving body is in a dangerous area based on the dangerous area information and the position and orientation information of the own moving body, and notifies the SLAM processing unit 111 of the determination result.
 The risk prediction unit 113 performs risk prediction for the own moving body and detection of dangerous areas based on the speed information and the position and orientation information of the own moving body, as well as the detection results for objects around the own moving body. The risk prediction unit 113 notifies the risk avoidance processing unit 114 of the result of the risk prediction, and also generates dangerous area notification information for reporting a detected dangerous area and supplies it to the transmission unit 104.
 The risk avoidance processing unit 114 performs processing for avoiding danger to the own moving body based on the risk prediction result from the risk prediction unit 113 and the danger notification information from the server 12.
 The transmission unit 104 transmits various kinds of information and data to the other information terminals 11a, the server 12, other servers (not shown), and the like. For example, the transmission unit 104 transmits the position information, the local map, the dangerous area notification information, and the global map transmission request information to the server 12.
{Configuration example of SLAM processing unit 111a}
 FIG. 3 shows a functional configuration example of the SLAM processing unit 111a, which is the first embodiment of the SLAM processing unit 111 of FIG. 2. The SLAM processing unit 111a is configured to include an estimation unit 201, a position information generation unit 202, an object detection unit 203, and a local map generation unit 204.
 The estimation unit 201 estimates the movement amount, position, orientation, and speed of the own moving body based on the relative positions between the moving body and feature points in the left and right images captured by the cameras 101L and 101R. Specifically, the estimation unit 201 is configured to include image correction units 211L and 211R, a feature point detection unit 212, a parallax matching unit 213, a distance estimation unit 214, a feature amount calculation unit 215, a map information storage unit 216, a motion matching unit 217, a movement amount estimation unit 218, an object dictionary storage unit 219, an object recognition unit 220, a position and orientation information storage unit 221, a position and orientation estimation unit 222, and a speed estimation unit 223.
 The image correction unit 211L and the image correction unit 211R correct the left image supplied from the camera 101L and the right image supplied from the camera 101R, respectively, so that the two images face the same direction. The image correction unit 211L supplies the corrected left image to the feature point detection unit 212 and the motion matching unit 217. The image correction unit 211R supplies the corrected right image to the parallax matching unit 213.
 The feature point detection unit 212 detects feature points in the left image, and supplies two-dimensional position information indicating the position of each detected feature point on the two-dimensional image coordinate system to the parallax matching unit 213 and the feature amount calculation unit 215. The image coordinate system is represented by, for example, x and y coordinates within the image.
 The parallax matching unit 213 detects the feature points in the right image that correspond to the feature points detected in the left image. This yields the parallax, which is the difference between each feature point's position in the left image and its position in the right image. The parallax matching unit 213 supplies two-dimensional position information indicating the position of each feature point on the image coordinate system in the left image and the right image to the distance estimation unit 214.
 The distance estimation unit 214 estimates the distance to each feature point based on the parallax of the feature point between the left image and the right image, and further calculates the position of each feature point on the three-dimensional spatial coordinate system. The distance estimation unit 214 supplies three-dimensional position information indicating the position of each feature point on the spatial coordinate system to the feature amount calculation unit 215.
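 The distance estimation step corresponds to standard stereo triangulation, where depth follows from Z = f*B/d for focal length f, baseline B, and disparity d. The sketch below is a generic back-projection with hypothetical camera parameters, given only to make the computation concrete.

```python
import numpy as np

def triangulate(u_left, v_left, disparity, f=700.0, baseline=0.12, cx=640.0, cy=360.0):
    """Back-project a matched feature into left-camera coordinates.

    u_left, v_left: pixel position in the (rectified) left image.
    disparity: u_left - u_right in pixels (must be > 0).
    f: focal length in pixels; baseline: camera separation in meters.
    Returns (X, Y, Z) in meters in the left-camera frame.
    """
    z = f * baseline / disparity  # depth from disparity
    x = (u_left - cx) * z / f     # lateral offset
    y = (v_left - cy) * z / f     # vertical offset
    return np.array([x, y, z])
```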
 The feature amount calculation unit 215 calculates the feature amount of each feature point of the left image, and causes the map information storage unit 216 to store feature point information including the three-dimensional position information and feature amount of each feature point.
 The map information storage unit 216 stores, in addition to the feature point information used for the local map, the global map supplied from the server 12.
 The motion matching unit 217 acquires the three-dimensional position information of each feature point detected in the previous frame from the map information storage unit 216. Next, the motion matching unit 217 detects, in the left image of the current frame, the feature points corresponding to the feature points detected in the previous frame. The motion matching unit 217 then supplies the three-dimensional position information of each feature point in the previous frame, and the two-dimensional position information indicating its position on the image coordinate system in the current frame, to the movement amount estimation unit 218.
 The movement amount estimation unit 218 estimates the movement amount of the position and orientation of the own moving body (more precisely, of the camera 101L) between frames, based on the three-dimensional position information of each feature point in the previous frame and its two-dimensional position information in the current frame. The movement amount estimation unit 218 supplies movement amount information indicating the estimated movement amount of the position and orientation of the own moving body to the object detection unit 203, the position and orientation estimation unit 222, and the speed estimation unit 223.
 The object recognition unit 220 recognizes objects in the left image based on the object dictionary stored in the object dictionary storage unit 219. Based on the object recognition result, the object recognition unit 220 sets initial values of the position and orientation of the own moving body (more precisely, of the camera 101L) in the spatial coordinate system (hereinafter referred to as the initial position and initial orientation). The object recognition unit 220 causes the position and orientation information storage unit 221 to store initial position and orientation information indicating the set initial position and initial orientation.
 The position and orientation estimation unit 222 estimates the position and orientation of the own moving body based on the initial position and orientation information, or the position and orientation information of the previous frame, stored in the position and orientation information storage unit 221, together with the estimation result of the movement amount of the own moving body. The position and orientation estimation unit 222 also corrects the estimated position and orientation of the own moving body based on the global map stored in the map information storage unit 216 as necessary. The position and orientation estimation unit 222 supplies position and orientation information indicating the estimated position and orientation of the own moving body to the dangerous area determination unit 112, the risk prediction unit 113, the position information generation unit 202, and the speed estimation unit 223, and also stores it in the position and orientation information storage unit 221.
 The speed estimation unit 223 estimates the speed of the own moving body by dividing the estimated movement amount of the own moving body by the elapsed time, and supplies speed information indicating the estimated speed to the risk prediction unit 113 and the position information generation unit 202.
 The position information generation unit 202 generates position information including the position and speed of the own moving body when notified by the dangerous area determination unit 112 that the own moving body is in a dangerous area, and supplies the generated position information to the transmission unit 104.
 The object detection unit 203 detects stationary objects and moving bodies around the own moving body based on the movement amount information and the feature point information of the previous frame and the current frame stored in the map information storage unit 216. The object detection unit 203 notifies the risk prediction unit 113 and the local map generation unit 204 of the detection results.
 When notified by the dangerous area determination unit 112 that the own moving body is in a dangerous area, the local map generation unit 204 generates a local map based on the detection results for the stationary objects and moving bodies around the own moving body and the feature point information of the current frame stored in the map information storage unit 216. The local map generation unit 204 supplies the generated local map to the transmission unit 104. The local map generation unit 204 also generates global map transmission request information, which includes the generated local map and requests the server 12 to transmit the global map, and supplies it to the transmission unit 104.
{First embodiment of server 12}
 FIG. 4 shows a functional configuration example of the server 12a, which is the first embodiment of the server 12 of FIG. 1.
 The server 12a is configured to include a reception unit 301, a position information storage unit 302, a map information storage unit 303, a global map update unit 304, a dangerous area map update unit 305, a motion prediction unit 306, a risk prediction unit 307, a danger notification unit 308, a simulation unit 309, a global map search unit 310, a dangerous area notification unit 311, and a transmission unit 312.
 The reception unit 301 receives various kinds of information and data from each information terminal 11a, other servers (not shown), and the like, and supplies them to the respective units of the server 12a. For example, the reception unit 301 receives position information from each information terminal 11a and stores it in the position information storage unit 302. The reception unit 301 also receives, for example, local maps and dangerous area information from the information terminals 11a and stores them in the map information storage unit 303. Furthermore, the reception unit 301 receives, for example, global map transmission request information from the information terminals 11a and supplies it to the global map search unit 310. The reception unit 301 also receives, for example, accident information indicating the occurrence of an accident of a moving body and commands to execute an accident simulation from other servers, and supplies them to the simulation unit 309.
 The global map update unit 304 updates the global map stored in the map information storage unit 303 based on the local maps generated by the information terminals 11a and stored in the map information storage unit 303.
 The dangerous area map update unit 305 updates the dangerous area map stored in the map information storage unit 303 based on the dangerous area notification information from the information terminals 11a stored in the map information storage unit 303.
 Here, the dangerous area map is a map indicating the positions of dangerous areas where there is a risk of an accident occurring. For example, in the dangerous area map, the area covered by the global map is divided into a grid, and each grid cell indicates whether or not it is a dangerous area.
 As shown in FIG. 5, the grid of the dangerous area map may have a hierarchical structure. Specifically, FIG. 5 shows an example of a dangerous area map in which the grid has a two-layer structure. First, it is determined for each grid cell of the first layer whether or not it is a dangerous area. Then, each grid cell determined to be a dangerous area in the first layer is further divided into second-layer grid cells, and it is determined for each of them whether or not it is a dangerous area.
 For example, in the example of FIG. 5, the first-layer grid cell G1, a dangerous area indicated by hatching, is divided into a plurality of second-layer grid cells, and it is then determined for each second-layer cell whether or not it is a dangerous area. In this example, the second-layer grid cells G2a to G2d, indicated by hatching, are determined to be dangerous areas.
 This reduces the number of grid cells for which the dangerous area determination must be made in the second layer, and thereby reduces the processing load.
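 One plausible realization of this two-layer structure, sketched below with hypothetical cell sizes, stores a fine-grained sub-grid only for coarse cells that contain a dangerous area, so that lookups touch the second layer only inside dangerous first-layer cells.

```python
COARSE = 1000.0  # coarse (first-layer) cell size in meters, hypothetical
FINE = 100.0     # fine (second-layer) cell size in meters, hypothetical

class DangerAreaMap:
    def __init__(self):
        # dangerous coarse cell -> set of dangerous fine cells within it
        self.cells: dict[tuple[int, int], set[tuple[int, int]]] = {}

    def mark(self, x: float, y: float) -> None:
        """Mark the point (x, y) as lying in a dangerous area."""
        coarse = (int(x // COARSE), int(y // COARSE))
        fine = (int(x // FINE), int(y // FINE))
        self.cells.setdefault(coarse, set()).add(fine)

    def is_dangerous(self, x: float, y: float) -> bool:
        coarse = (int(x // COARSE), int(y // COARSE))
        sub = self.cells.get(coarse)
        # The fine layer is consulted only inside dangerous coarse cells.
        return sub is not None and (int(x // FINE), int(y // FINE)) in sub

m = DangerAreaMap()
m.mark(1234.0, 567.0)
print(m.is_dangerous(1250.0, 560.0))  # True: same 100 m fine cell
```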
 The motion prediction unit 306 predicts the motion of each moving body based on the position information from each information terminal 11a stored in the position information storage unit 302 and the global map stored in the map information storage unit 303, and notifies the risk prediction unit 307 of the motion prediction results for the moving bodies.
 The risk prediction unit 307 performs risk prediction for each moving body based on the motion prediction results for the moving bodies, and supplies the prediction results to the danger notification unit 308.
 The danger notification unit 308 generates danger notification information for notifying the information terminal 11a provided in a moving body for which a dangerous state has been detected, that is, a moving body predicted to be at risk of an accident, of the danger. The danger notification unit 308 supplies the generated danger notification information to the transmission unit 312.
 The simulation unit 309 makes a setting such that the position information related to an accident indicated by accident information is saved in the position information storage unit 302. When the simulation unit 309 receives a command to execute an accident simulation from an input unit (not shown) or from another server or the like via the reception unit 301, it simulates how the specified accident occurred based on the position information stored in the position information storage unit 302. Alternatively, the simulation unit 309 generates simulation data for simulating how the accident occurred, and supplies it to the transmission unit 312.
 The global map search unit 310 performs an image search, within the global map stored in the map information storage unit 303, for the local map included in the global map transmission request information. The global map search unit 310 extracts the found area and its surrounding area from the global map, and supplies the extracted map to the transmission unit 312.
 The dangerous area notification unit 311 generates dangerous area information for each dangerous area indicated in the dangerous area map stored in the map information storage unit 303, and supplies it to the transmission unit 312.
 The transmission unit 312 transmits various kinds of information and data to the information terminals 11a, other servers, and the like. For example, the transmission unit 312 transmits the danger notification information, the simulation data, the global map, and the dangerous area information to the information terminals 11a.
{First embodiment of processing of information processing system 1}
 Next, a first embodiment of the processing of the information processing system 1 will be described with reference to FIGS. 6 to 12.
(Processing of information terminal 11a)
 First, the processing of the information terminal 11a will be described with reference to the flowchart of FIG. 6.
 In step S1, the estimation unit 201 estimates the movement amount, position, and orientation of the own moving body.
 Specifically, the image correction unit 211L and the image correction unit 211R correct the left image supplied from the camera 101L and the right image supplied from the camera 101R, respectively, so that the two images face the same direction. The image correction unit 211L supplies the corrected left image to the feature point detection unit 212 and the motion matching unit 217. The image correction unit 211R supplies the corrected right image to the parallax matching unit 213.
 The feature point detection unit 212 detects feature points in the left image. Any feature point detection method, such as Harris corner detection, can be used. The feature point detection unit 212 supplies two-dimensional position information indicating the position of each detected feature point on the image coordinate system to the parallax matching unit 213.
 The parallax matching unit 213 detects the feature points in the right image that correspond to the feature points detected in the left image, and supplies two-dimensional position information indicating the position of each feature point on the image coordinate system in the left image and the right image to the distance estimation unit 214.
 The distance estimation unit 214 estimates the distance to each feature point based on the parallax of the feature point between the left image and the right image, and further calculates the position of each feature point on the three-dimensional spatial coordinate system. The distance estimation unit 214 supplies three-dimensional position information indicating the position of each feature point on the spatial coordinate system to the feature amount calculation unit 215.
 The feature amount calculation unit 215 calculates the feature amount of each feature point of the left image. Any feature amount, such as SURF (Speeded Up Robust Features), can be used. The feature amount calculation unit 215 causes the map information storage unit 216 to store feature point information including the three-dimensional position information and feature amount of each feature point.
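 As an illustration of these detection and description steps, an OpenCV-based sketch might look like the following. Harris corners and SURF are named in the text only as examples; SURF itself lives in the opencv-contrib package (cv2.xfeatures2d), so the freely available ORB descriptor is substituted here, and all parameter values are illustrative.

```python
import cv2

def detect_and_describe(gray_left):
    """Detect corner-like feature points and compute descriptors.

    gray_left: 8-bit grayscale left image.
    Returns (keypoints, descriptors), or ([], None) if no corners are found.
    """
    # Harris-based corner detection, one of the methods named in the text.
    corners = cv2.goodFeaturesToTrack(
        gray_left, maxCorners=500, qualityLevel=0.01,
        minDistance=8, useHarrisDetector=True)
    if corners is None:
        return [], None
    keypoints = [cv2.KeyPoint(float(x), float(y), 8) for [[x, y]] in corners]
    # Any descriptor works here; ORB is a freely available stand-in for SURF.
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.compute(gray_left, keypoints)
    return keypoints, descriptors
```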
 The motion matching unit 217 acquires the three-dimensional position information of each feature point detected in the previous frame from the map information storage unit 216. Next, the motion matching unit 217 detects, in the left image of the current frame, the feature points corresponding to the feature points detected in the previous frame. The motion matching unit 217 then supplies the three-dimensional position information of each feature point in the previous frame, and the two-dimensional position information indicating its position on the image coordinate system in the current frame, to the movement amount estimation unit 218.
 The movement amount estimation unit 218 estimates the movement amount of the own moving body (more precisely, of the camera 101L) between the previous frame and the current frame. For example, the movement amount estimation unit 218 calculates the movement amount dX that minimizes the value of the cost function f in the following equation (1).
 f = Σ ||Z_t - proj(dX, M_{t-1})||^2   (1)
 Here, the movement amount dX indicates the movement amount of the position and orientation of the own moving body (more precisely, of the camera 101L) from the previous frame to the current frame. For example, the movement amount dX indicates the movement of the position along the three axes (three degrees of freedom) of the spatial coordinate system and of the orientation around each axis (three degrees of freedom).
 M_{t-1} and Z_t indicate the positions of corresponding feature points in the previous frame and the current frame. More specifically, M_{t-1} indicates the position of a feature point on the spatial coordinate system in the previous frame, and Z_t indicates the position of the feature point on the image coordinate system in the current frame.
 Furthermore, proj(dX, M_{t-1}) denotes the position obtained by projecting the position M_{t-1} of a feature point in the previous frame on the spatial coordinate system onto the image coordinate system of the left image of the current frame, using the movement amount dX. That is, proj(dX, M_{t-1}) is an estimate of the position of the feature point in the left image of the current frame based on the feature point's position M_{t-1} in the previous frame and the movement amount dX.
 移動量推定部218は、式(1)に示される各特徴点のZt-proj(dX,Mt-1)の二乗和が最小となる移動量dXを、例えば最小二乗法等により求める。すなわち、移動量推定部218は、1つ前のフレームにおける特徴点の空間座標系上の位置Mt-1及び移動量dXに基づいて、現在のフレームの左画像の特徴点の画像座標系上の位置を推定した場合の誤差が最小となる移動量dXを求める。移動量推定部218は、求めた移動量dXを示す移動量情報を、物体検出部203、位置姿勢推定部222、及び、速度推定部223に供給する。 The movement amount estimation unit 218 obtains a movement amount dX that minimizes the sum of squares of Z t -proj (dX, M t-1 ) of each feature point shown in Expression (1) by, for example, the least square method. That is, the movement amount estimation unit 218 determines the feature point of the left image of the current frame on the image coordinate system based on the position M t-1 of the feature point on the spatial coordinate system and the movement amount dX of the previous frame. The amount of movement dX that minimizes the error when the position of is estimated is obtained. The movement amount estimation unit 218 supplies movement amount information indicating the obtained movement amount dX to the object detection unit 203, the position / orientation estimation unit 222, and the speed estimation unit 223.
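As a concrete illustration, the following is a minimal sketch of how the minimization of equation (1) could be implemented, assuming a pinhole camera with intrinsics K and a 6-vector parameterization of dX (translation plus a rotation vector); the camera model, the intrinsic values, and all function names here are assumptions for illustration, not details fixed by the present description.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed pinhole intrinsics of the left camera (focal length, principal point).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def proj(dX, M_prev):
    """Project 3-D points M_prev (N x 3, previous-frame coordinates) into the
    current left image after applying the motion dX = [tx, ty, tz, rx, ry, rz]."""
    t, rvec = dX[:3], dX[3:]
    R = Rotation.from_rotvec(rvec).as_matrix()
    P = (R @ M_prev.T).T + t               # points expressed in the current frame
    uv = (K @ P.T).T
    return uv[:, :2] / uv[:, 2:3]          # perspective division -> pixel coordinates

def residuals(dX, Z_t, M_prev):
    # Z_t - proj(dX, M_{t-1}) for every tracked feature point, flattened
    return (Z_t - proj(dX, M_prev)).ravel()

def estimate_motion(Z_t, M_prev):
    # Least-squares fit of equation (1), starting from zero motion
    return least_squares(residuals, np.zeros(6), args=(Z_t, M_prev)).x
```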
The position/orientation estimation unit 222 acquires the position/orientation information of the previous frame from the position/orientation information storage unit 221. The position/orientation estimation unit 222 then estimates the current position and orientation of the own moving body by adding the movement amount dX estimated by the movement amount estimation unit 218 to the position and orientation of the own moving body in the previous frame.
When estimating the position and orientation of the own moving body in the first frame, the position/orientation estimation unit 222 acquires initial position/orientation information from the position/orientation information storage unit 221. It then estimates the position and orientation of the own moving body by adding the movement amount dX estimated by the movement amount estimation unit 218 to the initial position and initial orientation of the own moving body.
In addition, the position/orientation estimation unit 222 corrects the estimated position and orientation of the own moving body based on the global map stored in the map information storage unit 216 as necessary.
The position/orientation estimation unit 222 supplies position/orientation information indicating the estimated position and orientation of the own moving body to the dangerous area determination unit 112, the danger prediction unit 113, the position information generation unit 202, and the speed estimation unit 223, and also stores it in the position/orientation information storage unit 221.
In step S2, the speed estimation unit 223 estimates the speed of the own moving body. Specifically, the speed estimation unit 223 estimates the speed by dividing the movement amount dX estimated by the movement amount estimation unit 218 by the elapsed time. The speed estimation unit 223 supplies speed information indicating the estimated speed to the danger prediction unit 113 and the position information generation unit 202.
In step S3, the object detection unit 203 detects surrounding objects. Specifically, the object detection unit 203 acquires the feature point information of the previous frame and the current frame from the map information storage unit 216. Next, the object detection unit 203 matches the feature points of the previous frame against those of the current frame and detects the motion of each feature point between frames. The object detection unit 203 then separates the feature points whose motion is consistent with the motion of the own moving body, based on the movement amount dX estimated by the movement amount estimation unit 218, from the feature points whose motion is not. Next, the object detection unit 203 detects stationary objects around the own moving body from the feature points that move consistently with the own moving body, and detects surrounding moving bodies from the feature points that do not. The object detection unit 203 notifies the danger prediction unit 113 and the local map generation unit 204 of the detection results for the stationary objects and moving bodies around the own moving body.
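The ego-motion-consistency test described above can be sketched as follows, assuming that the predicted positions proj(dX, M_{t-1}) of the previous-frame feature points have already been computed as in the earlier sketch; the pixel tolerance is an illustrative assumption.

```python
import numpy as np

def split_static_moving(Z_t, predicted, pix_tol=2.0):
    """Z_t: observed feature positions (N x 2) in the current left image.
    predicted: where those features would appear if they were static,
    i.e. proj(dX, M_{t-1}) from the earlier sketch."""
    err = np.linalg.norm(np.asarray(Z_t) - np.asarray(predicted), axis=1)
    static_mask = err <= pix_tol           # motion explained by the ego-motion
    return static_mask, ~static_mask       # static points vs. moving-body points
```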
In step S4, the dangerous area determination unit 112 determines whether the own moving body is within a dangerous area. Specifically, when the own moving body is within the reception area of the dangerous area information transmitted from the server 12a in step S102 of FIG. 7 described later, the receiving unit 102 receives the dangerous area information. The receiving unit 102 supplies the received dangerous area information to the dangerous area determination unit 112.
The dangerous area determination unit 112 determines whether the position of the own moving body estimated by the position/orientation estimation unit 222 is within a dangerous area indicated in the received dangerous area information. If it is determined that the own moving body is within a dangerous area, the processing proceeds to step S5.
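A minimal sketch of this membership test follows, under the assumption that each dangerous area is delivered as an axis-aligned rectangle; the present description does not fix the region shape.

```python
def in_danger_area(position, danger_areas):
    """position: (x, y); danger_areas: iterable of (x0, y0, x1, y1) rectangles."""
    x, y = position
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in danger_areas)
```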
In step S5, the information terminal 11a transmits the position information and the local map to the server 12a. Specifically, the dangerous area determination unit 112 notifies the position information generation unit 202 and the local map generation unit 204 that the own moving body is within a dangerous area.
The position information generation unit 202 generates position information that includes the speed of the own moving body estimated by the speed estimation unit 223 and the position of the own moving body contained in the position/orientation information from the position/orientation estimation unit 222. The position information generation unit 202 supplies the generated position information to the transmission unit 104.
The local map generation unit 204 acquires the feature point information of the current frame from the map information storage unit 216. Next, the local map generation unit 204 deletes, from the acquired feature point information, the information on the feature points of the surrounding moving bodies detected by the object detection unit 203. The local map generation unit 204 then generates a local map based on the remaining feature point information and supplies the generated local map to the transmission unit 104.
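The filtering step can be sketched as follows, assuming each feature point record carries an identifier and that the identifiers of points belonging to detected moving bodies are known; the field names are assumptions.

```python
def build_local_map(feature_points, moving_ids):
    """feature_points: list of dicts with an "id" key plus 3-D position and
    feature amount; moving_ids: ids of points on detected moving bodies."""
    return [fp for fp in feature_points if fp["id"] not in moving_ids]
```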
The transmission unit 104 transmits the acquired position information and local map to the server 12a.
Thereafter, the processing proceeds to step S6.
On the other hand, in step S4, the dangerous area determination unit 112 determines that the own moving body is not within a dangerous area when the position estimated by the position/orientation estimation unit 222 is not within a dangerous area indicated in the received dangerous area information, or when no dangerous area information has been received. In that case, step S5 is skipped and the processing proceeds to step S6.
In step S6, the danger prediction unit 113 performs danger prediction. Specifically, the danger prediction unit 113 predicts the risk of an accident such as a collision, rear-end collision, or contact with a surrounding object, based on the estimated speed and position of the own moving body and the detection results for the surrounding objects. The danger prediction unit 113 may additionally use the estimated orientation of the own moving body as necessary. The danger prediction unit 113 notifies the danger avoidance processing unit 114 of the result of the danger prediction.
For example, the danger prediction unit 113 may also calculate a risk degree indicating the likelihood and severity of an accident, based on the speeds and traveling directions of the own moving body and the surrounding moving bodies, the distances between the own moving body and the surrounding objects, and the like.
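One plausible way to compute such a risk degree is sketched below, using the closing speed and a time-to-collision proxy; the thresholds and the mapping to [0, 1] are illustrative assumptions, not values given in the present description.

```python
import numpy as np

def risk_degree(own_pos, own_vel, obj_pos, obj_vel):
    rel_pos = np.asarray(obj_pos) - np.asarray(own_pos)
    rel_vel = np.asarray(obj_vel) - np.asarray(own_vel)
    dist = np.linalg.norm(rel_pos)
    closing = -np.dot(rel_pos, rel_vel) / max(dist, 1e-6)
    if closing <= 0.0:
        return 0.0                           # separating: no collision predicted
    ttc = dist / closing                     # time to collision in seconds
    return min(1.0, 2.0 / max(ttc, 1e-6))    # shorter TTC -> higher risk in [0, 1]
```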
In step S7, the danger avoidance processing unit 114 determines whether there is a risk that the own moving body will be involved in an accident.
Specifically, when the danger avoidance processing unit 114 determines, based on the result of the danger prediction by the danger prediction unit 113, that there is a risk of an accident, the processing proceeds to step S8.
In addition, when the server 12a detects a dangerous state in step S108 of FIG. 7 described later, it transmits danger notification information in step S109 to the moving bodies predicted to be at risk of an accident. When the danger avoidance processing unit 114 receives this danger notification information via the receiving unit 102, it determines that there is a risk that the own moving body will be involved in an accident, and the processing proceeds to step S8.
In step S8, the danger avoidance processing unit 114 performs processing for avoiding the accident.
For example, when the own moving body is a vehicle, the danger avoidance processing unit 114 warns the person driving or steering the vehicle. When the own moving body is a person, the danger avoidance processing unit 114 warns that person. The content of the warning can be set arbitrarily: it may simply notify the danger, or it may also indicate how to avoid the accident. Any warning method can be adopted, such as images, sound, flashing lights, or vibration of an operation unit.
Also, for example, when the own moving body is a vehicle, the danger avoidance processing unit 114 can control the operation of the own moving body so as to avoid the accident. For example, the danger avoidance processing unit 114 can apply the brakes of the vehicle, decelerate it, or change its traveling direction in order to avoid the accident.
Furthermore, for example, when the risk degree has been calculated by the danger prediction unit 113, the processing content may be changed according to the risk degree. For example, the danger avoidance processing unit 114 may select and execute one of stopping, decelerating, changing the direction of, or warning the own moving body according to the risk degree, as in the sketch below.
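A sketch of such risk-dependent selection; the cut-off values are illustrative assumptions.

```python
def choose_avoidance_action(risk):
    if risk >= 0.8:
        return "stop"           # apply the brakes
    if risk >= 0.5:
        return "decelerate"
    if risk >= 0.3:
        return "warn"           # image / sound / light / vibration warning
    return "none"
```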
In addition, for example, when the danger notification information received from the server 12a contains control information to be used in the processing for avoiding the accident, the danger avoidance processing unit 114 performs the processing according to that control information.
In step S9, the danger prediction unit 113 determines whether the other party of the predicted accident is a moving body. If it is determined that the other party is a moving body, the processing proceeds to step S10.
In step S10, the danger prediction unit 113 reports the dangerous area. Specifically, the danger prediction unit 113 regards the current position of the own moving body as a dangerous area where an accident with another moving body may occur, and generates dangerous area notification information for reporting the dangerous area. The dangerous area notification information includes, for example, the position of the own moving body, the time at which the dangerous area was detected, the position of the other moving body that is the predicted accident party, and environmental conditions such as the weather. The danger prediction unit 113 transmits the generated dangerous area notification information to the server 12a via the transmission unit 104.
Thereafter, the processing proceeds to step S11.
The server 12a receives the dangerous area notification information in step S101 of FIG. 7 described later.
On the other hand, when the danger prediction unit 113 determines in step S9 that the other party of the predicted accident is a stationary object rather than a moving body, step S10 is skipped and the processing proceeds to step S11.
Also, when it is determined in step S7 that there is no risk of the own moving body being involved in an accident, steps S8 to S10 are skipped and the processing proceeds to step S11.
In step S11, the local map generation unit 204 determines whether to request transmission of the global map. If it is determined that transmission of the global map should be requested, the processing proceeds to step S12.
The conditions for requesting transmission of the global map can be set arbitrarily. For example, transmission of the global map may be requested when no global map is stored in the map information storage unit 216. Alternatively, transmission may be requested on a predetermined date, day of the week, or time period. Alternatively, for example, transmission may be requested when the current position of the own moving body is about to leave, or has already left, the global map stored in the map information storage unit 216.
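These example conditions could be combined as in the following sketch; which conditions are enabled, and the map-bounds representation, are assumptions.

```python
def should_request_global_map(map_stored, own_pos, map_bounds, now, schedule):
    if not map_stored:
        return True                          # no global map cached yet
    x0, y0, x1, y1 = map_bounds
    x, y = own_pos
    if not (x0 <= x <= x1 and y0 <= y <= y1):
        return True                          # left (or about to leave) the map
    return now in schedule                   # e.g. a predetermined time slot
```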
In step S12, the local map generation unit 204 requests transmission of the global map. Specifically, the local map generation unit 204 generates a local map by the same processing as in step S5 described above. The local map generation unit 204 then generates global map transmission request information that includes the generated local map, and transmits it to the server 12a via the transmission unit 104.
The server 12a receives the global map transmission request information in step S103 of FIG. 7 described later, and transmits the global map around the requesting moving body in step S105.
In step S13, the receiving unit 102 receives the global map transmitted from the server 12a. The receiving unit 102 stores the received global map in the map information storage unit 216.
Thereafter, the processing returns to step S1, and the processing from step S1 onward is executed.
On the other hand, when it is determined in step S11 that transmission of the global map is not to be requested, the processing returns to step S1, and the processing from step S1 onward is executed.
(Processing of server 12a)
Next, the processing of the server 12a, executed in correspondence with the processing of the information terminal 11a of FIG. 6, will be described with reference to the flowcharts of FIGS. 7 and 8.
In step S101, the receiving unit 301 starts receiving the position information, local maps, and dangerous area notification information transmitted from the information terminals 11a. The receiving unit 301 stores the received position information in the position information storage unit 302, and stores the received local maps and dangerous area notification information in the map information storage unit 303.
In step S102, the server 12a starts transmitting the dangerous area information. For example, the dangerous area notification unit 311 generates dangerous area information for each dangerous area indicated in the dangerous area map stored in the map information storage unit 303. Each piece of dangerous area information includes information indicating the position of the dangerous area. The dangerous area notification unit 311 supplies the generated dangerous area information to the transmission unit 312.
The transmission unit 312 transmits each piece of dangerous area information, for example, to an area that includes the corresponding dangerous area and its surroundings.
In step S103, the global map search unit 310 determines whether transmission of the global map has been requested. When the global map search unit 310 receives, via the receiving unit 301, the global map transmission request information transmitted from an information terminal 11a in step S12 of FIG. 6, it determines that transmission of the global map has been requested, and the processing proceeds to step S104.
In step S104, the global map search unit 310 searches for the global map around the requesting moving body. Specifically, the global map search unit 310 searches the global map stored in the map information storage unit 303 for the image and feature amounts of the local map included in the global map transmission request information.
In step S105, the global map search unit 310 transmits the global map. Specifically, the global map search unit 310 extracts from the global map a map of the area found in the processing of step S104 and of its surrounding area. The global map search unit 310 transmits the extracted map to the requesting information terminal 11a via the transmission unit 312.
Thereafter, the processing proceeds to step S106.
On the other hand, when it is determined in step S103 that transmission of the global map has not been requested, steps S104 and S105 are skipped and the processing proceeds to step S106.
In step S106, the motion prediction unit 306 predicts the motion of each moving body. Specifically, the motion prediction unit 306 predicts the motion of each moving body based on the position information from each information terminal 11a accumulated in the position information storage unit 302 from a predetermined time ago up to the present. The motion prediction unit 306 notifies the danger prediction unit 307 of the motion prediction results for the moving bodies.
At this time, the motion prediction unit 306 may additionally use the global map to predict the motion of the moving bodies. For example, when a stationary object exists in the traveling direction of a moving body predicted from the position information, the motion prediction unit 306 may predict that the moving body will move so as to avoid that stationary object. That is, the motion prediction unit 306 may predict motion in which a moving body avoids the stationary objects on the global map. This makes it possible to predict the motion of the moving bodies more accurately.
In step S107, the danger prediction unit 307 performs danger prediction. Specifically, the danger prediction unit 307 predicts, based on the motion prediction results for the moving bodies, the risk that each moving body will be involved in an accident such as a collision, rear-end collision, or contact with another moving body.
For example, when the motion prediction results indicate that the distance between multiple moving bodies will fall within a predetermined distance, the danger prediction unit 307 predicts that those moving bodies are at risk of an accident.
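A minimal sketch of this pairwise check follows, assuming constant-velocity extrapolation of each moving body; the prediction horizon, time step, and distance threshold are illustrative assumptions.

```python
import itertools
import numpy as np

def predict_collisions(bodies, horizon=5.0, step=0.5, min_dist=3.0):
    """bodies: list of (id, position, velocity) with 2-D numpy arrays."""
    risky = set()
    for t in np.arange(step, horizon + step, step):
        pos = {bid: p + v * t for bid, p, v in bodies}   # constant-velocity model
        for a, b in itertools.combinations(pos, 2):
            if np.linalg.norm(pos[a] - pos[b]) < min_dist:
                risky.add((a, b))            # pair predicted to come too close
    return risky
```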
For example, as shown in FIGS. 9 and 10, consider an example in which the vehicles 401 and 402, which are moving bodies, travel one behind the other in one lane, and the vehicles 403 and 404, which are also moving bodies, travel one behind the other in the opposite lane.
In the example of FIG. 9, the vehicles 401 to 404 are traveling at speeds v1a to v4a, respectively. When speed v4a > speed v3a and the distance between the vehicle 403 and the vehicle 404 is predicted to fall within the predetermined distance, it is predicted that there is a risk of an accident between the vehicle 403 and the vehicle 404 (a dangerous state).
On the other hand, although the vehicle 401 and the vehicle 403 are predicted to pass each other in the opposing lanes, a certain distance is maintained between them, so it is determined that there is no risk of an accident between the vehicle 401 and the vehicle 403 (not a dangerous state). Also, when the speed v1a and the speed v2a are approximately equal, a certain distance is maintained between the vehicle 401 and the vehicle 402, so it is determined that there is no risk of an accident between them (not a dangerous state).
In the example of FIG. 10, the vehicles 401 to 404 are traveling at speeds v1b to v4b, respectively. When the vehicle 402 is heading toward the opposite lane and the distance between the vehicle 402 and the vehicle 403 is predicted to fall within the predetermined distance, it is predicted that there is a risk of an accident between the vehicle 402 and the vehicle 403 (a dangerous state).
On the other hand, when the speed v3b and the speed v4b are approximately equal, a certain distance is maintained between the vehicle 403 and the vehicle 404, so it is determined that there is no risk of an accident between the vehicle 403 and the vehicle 404 (not a dangerous state).
Also, for example, the danger prediction unit 307 may calculate a risk degree indicating the likelihood of an accident and its severity, based on the speeds and traveling directions of the moving bodies, the distances between them, and the like.
In step S108, the danger prediction unit 307 determines whether a dangerous state has been detected. For example, when the danger prediction unit 307 has detected, as a result of the processing in step S107, a risk of an accident between moving bodies, it determines that a dangerous state has been detected, and the processing proceeds to step S109.
In step S109, the danger notification unit 308 notifies the moving bodies at risk of an accident of the danger. Specifically, the danger prediction unit 307 supplies the danger notification unit 308 with information indicating the position, speed, traveling direction, and so on of the moving bodies at risk of an accident.
The danger notification unit 308 generates, for each moving body at risk of an accident, danger notification information that includes the position, speed, traveling direction, and so on of the other moving body that is the accident party. The danger notification information may also include information indicating a method for avoiding the accident, control information to be used by the moving body in processing for avoiding the accident, and the like.
Also, for example, when the risk degree has been calculated by the danger prediction unit 307, the danger notification unit 308 may change the content of the danger notification information according to the risk degree. For example, the danger notification unit 308 may include control information for the moving body in the danger notification information when the risk degree is high, and omit it when the risk degree is low.
Also, for example, the danger notification unit 308 may change the content of the control information included in the danger notification information according to the risk degree. For example, the danger notification unit 308 may include control information instructing the moving body to stop when the risk degree is high, and control information instructing it to decelerate when the risk degree is low.
The danger notification unit 308 transmits the generated danger notification information, via the transmission unit 313, to the information terminal 11a provided in each moving body at risk of an accident.
Thereafter, the processing proceeds to step S110.
On the other hand, when it is determined in step S108 that no dangerous state has been detected, step S109 is skipped and the processing proceeds to step S110.
In step S110, the global map update unit 304 determines whether to update the global map. If it is determined that the global map should be updated, the processing proceeds to step S111.
The conditions for updating the global map can be set arbitrarily. For example, the global map may be updated on a predetermined date, day of the week, or time period. Alternatively, the global map may be updated at predetermined time intervals. Alternatively, for example, the global map may be updated when the amount of local maps accumulated in the map information storage unit 303 from the information terminals 11a since the previous update reaches a predetermined amount or more.
In step S111, the global map update unit 304 updates the global map. For example, the global map update unit 304 compares the local maps accumulated in the map information storage unit 303 since the previous update with the global map stored in the map information storage unit 303, and extracts the areas where the information differs. Among the extracted areas of the global map, the global map update unit 304 replaces those in which the local maps have high reliability with the information of the local maps. Here, an area in which the local maps have high reliability is, for example, an area for which the local map information from a predetermined number or more of information terminals 11a matches.
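The agreement-based replacement rule can be sketched as follows, assuming the maps are keyed by region and that a region's content is comparable (hashable) across local maps; the data layout and the agreement threshold are assumptions.

```python
from collections import Counter

def update_global_map(global_map, local_observations, min_agreement=3):
    """global_map: dict region -> content; local_observations: dict region ->
    list of contents reported by different terminals since the last update."""
    for region, observations in local_observations.items():
        content, count = Counter(observations).most_common(1)[0]
        if count >= min_agreement and global_map.get(region) != content:
            global_map[region] = content     # enough terminals agree: reflect it
    return global_map
```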
In this way, by updating the global map based on the local maps transmitted from the information terminals 11a, changes in the situation at each location are reflected in the global map in real time.
For example, when a moving body stays at the same position for a predetermined period or longer, the information on that moving body can be reflected in the global map. For example, a vehicle parked for a long time can be reflected in the global map.
Also, for example, when a stationary object newly appears, changes its position, or disappears, that change can be reflected in the global map. For example, fallen trees, fallen rocks, and the like can be reflected in the global map.
Then, since changes in the situation at each location are reflected in the global map in real time, the accuracy of the danger prediction for each moving body is improved, making it possible to prevent accidents more reliably.
The global map update unit 304 stores the updated global map in the map information storage unit 303.
Thereafter, the processing proceeds to step S112.
On the other hand, when it is determined in step S110 that the global map is not to be updated, step S111 is skipped and the processing proceeds to step S112.
In step S112, the dangerous area map update unit 305 determines whether to update the dangerous area map. If it is determined that the dangerous area map should be updated, the processing proceeds to step S113.
The conditions for updating the dangerous area map can be set arbitrarily. For example, the dangerous area map may be updated on a predetermined date, day of the week, or time period. Alternatively, the dangerous area map may be updated at predetermined time intervals, or in step with updates to the global map. Alternatively, for example, the dangerous area map may be updated when the amount of dangerous area information accumulated in the map information storage unit 303 from the information terminals 11a since the previous update reaches a predetermined amount or more.
In step S113, the dangerous area map update unit 305 updates the dangerous area map. For example, among the areas not currently set as dangerous areas in the dangerous area map, the dangerous area map update unit 305 newly sets as dangerous areas those for which the frequency of being reported as dangerous by the information terminals 11a exceeds a predetermined threshold. Also, for example, the dangerous area map update unit 305 may remove the dangerous area setting from an area currently set as a dangerous area when the frequency of it being reported as dangerous by the information terminals 11a is very low.
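A sketch of this frequency-based rule; the counting window and both thresholds are illustrative assumptions.

```python
def update_danger_map(danger_areas, report_counts, add_thr=10, drop_thr=1):
    """danger_areas: set of area ids; report_counts: area id -> number of
    danger notifications received since the previous update."""
    for area, count in report_counts.items():
        if area not in danger_areas and count >= add_thr:
            danger_areas.add(area)           # reported often: set as dangerous
        elif area in danger_areas and count <= drop_thr:
            danger_areas.discard(area)       # rarely reported: clear the setting
    return danger_areas
```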
Thereafter, the processing proceeds to step S114.
On the other hand, when it is determined in step S112 that the dangerous area map is not to be updated, step S113 is skipped and the processing proceeds to step S114.
In step S114, the simulation unit 309 determines whether an accident has occurred. For example, when the simulation unit 309 receives, via the receiving unit 301, accident information indicating the occurrence of an accident involving a moving body from an external server, an information terminal 11a, or the like, it determines that an accident has occurred, and the processing proceeds to step S115.
As an external server that transmits accident information, for example, a server of an eCall system is envisaged.
In step S115, the simulation unit 309 sets the position information related to the accident to be preserved. For example, among the position information stored in the position information storage unit 302, the simulation unit 309 designates as position information related to the accident any position information whose transmission time falls within a predetermined period around the time of the accident and whose indicated position falls within a predetermined range including the accident site. The position information related to the accident is then preserved as-is, for example, even when old position information is otherwise deleted from the position information storage unit 302.
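The retention rule can be sketched as follows, assuming each position record carries a timestamp and planar coordinates; the time window and radius are illustrative assumptions.

```python
import math

def mark_accident_records(records, t_acc, p_acc, dt=60.0, radius=100.0):
    """records: list of dicts with "time", "x", "y"; t_acc / p_acc: time and
    (x, y) position of the accident."""
    for r in records:
        near_in_time = abs(r["time"] - t_acc) <= dt
        near_in_space = math.hypot(r["x"] - p_acc[0], r["y"] - p_acc[1]) <= radius
        r["preserve"] = near_in_time and near_in_space   # keep even when purging
    return records
```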
Thereafter, the processing proceeds to step S116.
On the other hand, when it is determined in step S114 that no accident has occurred, step S115 is skipped and the processing proceeds to step S116.
In step S116, the simulation unit 309 determines whether execution of an accident simulation has been instructed. For example, when an instruction to execute an accident simulation is input via an input unit (not shown), or when such an instruction is received from another server or the like via the receiving unit 301, the simulation unit 309 determines that execution of an accident simulation has been instructed, and the processing proceeds to step S117.
In step S117, the simulation unit 309 executes the accident simulation. For example, the simulation unit 309 acquires from the position information storage unit 302 the position information related to the accident to be reproduced by the simulation. Then, for example, the simulation unit 309 generates and displays a simulation image showing how the accident occurred, based on the acquired position information. For example, the simulation image reproduces the movements of the moving bodies near the accident site before and after the accident.
Alternatively, for example, the simulation unit 309 generates, based on the acquired position information, simulation data for displaying the simulation image described above. The simulation unit 309 then transmits the generated simulation data, via the transmission unit 312, to the other server that requested it.
Thereafter, the processing returns to step S103, and the processing from step S103 onward is executed.
On the other hand, when it is determined in step S116 that execution of an accident simulation has not been instructed, the processing returns to step S103, and the processing from step S103 onward is executed.
As described above, accidents involving the moving bodies provided with the information terminals 11 can be reliably prevented.
For example, by using SLAM, the accuracy with which the position and speed of a moving body are estimated is improved, and with it the accuracy of the danger prediction, so accidents involving the moving body can be reliably prevented.
Specifically, when the position of a moving body is measured using GPS, for example, the position cannot be measured accurately in urban areas and other places where it is difficult to receive radio waves from the GPS satellites. In addition, the measurement accuracy of GPS positions in civilian use is on the order of 10 m. Therefore, when GPS is used, the accuracy of the danger prediction for a moving body may deteriorate.
In contrast, by using SLAM, each information terminal 11a can stably estimate the position, orientation, and speed of its moving body with high accuracy, regardless of the surrounding conditions and environment. For example, SLAM can estimate the position of a moving body with an accuracy of a few centimeters. As a result, the accuracy with which the motion of each moving body is estimated is improved, and so is the accuracy of the danger prediction.
Also, by performing the danger prediction between the moving bodies on the server 12a, the accuracy of the danger prediction is improved compared with each moving body performing the danger prediction alone.
Specifically, SLAM is fundamentally a technique for estimating the position and orientation of a moving body relative to stationary objects; it is not a technique whose main purpose is recognizing surrounding moving bodies. Moreover, SLAM cannot recognize other moving bodies that are in the camera's blind spots and do not appear in the images.
For example, in the example of FIG. 11, the vehicle 421 and the vehicle 422, traveling on different roads, are approaching the same intersection. However, visibility at this intersection is poor because of trees along the roadside, and it is difficult for the vehicles 421 and 422 to recognize each other's movements.
Also, for example, in the example of FIG. 12, the view ahead of the vehicle 431 is blocked by the parked vehicle 433. It is therefore difficult for the vehicle 431 to recognize the movement of the vehicle 432 approaching in the oncoming lane.
In contrast, the server 12a collects the position information of the vehicles 421 and 422 and of the vehicles 431 and 432, predicts the motion of each vehicle, and then performs the danger prediction for each vehicle. The server 12a can thereby predict, and prevent in advance, an accident between the vehicles 421 and 422 and an accident between the vehicles 431 and 432.
Furthermore, the server 12a collects the local maps from the information terminals 11a and updates the global map based on the collected local maps. This allows changes in the situation at each location to be reflected in the global map quickly and accurately, which improves the accuracy of the danger prediction for each moving body and makes it possible to reliably prevent accidents involving the moving bodies.
Also, since there is no need to install surveillance cameras or the like in each dangerous area in order to detect the positions of the moving bodies, construction work and costs can be reduced.
Furthermore, since the server 12a preserves the position information related to an accident and can later simulate how the accident occurred, it can support on-site investigation, analysis of the cause of the accident, measures for accident prevention, and the like.
<2. Second Embodiment>
Next, a second embodiment of the present technology will be described with reference to FIGS. 13 to 17.
{Second Embodiment of Information Terminal 11}
FIG. 13 shows an example of the functional configuration of the information terminal 11b, which is the second embodiment of the information terminal 11 of FIG. 1. In the figure, parts corresponding to those in FIG. 2 are denoted by the same reference numerals.
The information terminal 11b differs from the information terminal 11a of FIG. 2 in that an information processing unit 501 is provided instead of the information processing unit 103. The information processing unit 501 differs from the information processing unit 103 in that a dangerous area determination unit 511 is provided instead of the dangerous area determination unit 112.
The dangerous area determination unit 511 receives, from the server 12b (FIG. 14) via the receiving unit 102, the result of the determination of whether the own moving body is within a dangerous area. Based on the received determination result, the dangerous area determination unit 511 determines whether the own moving body is within a dangerous area and notifies the SLAM processing unit 111 of the determination result.
{Second Embodiment of Server 12}
FIG. 14 shows an example of the functional configuration of the server 12b, which is the second embodiment of the server 12 of FIG. 1. In the figure, parts corresponding to those in FIG. 4 are denoted by the same reference numerals.
The server 12b differs from the server 12a of FIG. 4 in that a global map search unit 601 and a dangerous area determination unit 602 are provided instead of the global map search unit 310 and the dangerous area notification unit 311.
The global map search unit 601 receives the global map transmission request information from each information terminal 11b via the receiving unit 301. The global map search unit 601 then searches the global map stored in the map information storage unit 303 for the image of the local map included in the global map transmission request information. The global map search unit 601 extracts from the global map a map of the found area and its surrounding area, and transmits the extracted map to the requesting information terminal 11b via the transmission unit 312. In addition, the global map search unit 601 detects the current position of the requesting moving body based on the search result, and supplies the detection result to the dangerous area determination unit 602.
The dangerous area determination unit 602 determines whether the current position of the requesting moving body detected by the global map search unit 601 is within a dangerous area indicated in the dangerous area map stored in the map information storage unit 303. The dangerous area determination unit 602 transmits the determination result, via the transmission unit 312, to the information terminal 11b that requested the global map.
{Second Embodiment of Processing of Information Processing System 1}
Next, a second embodiment of the processing of the information processing system 1 will be described with reference to FIGS. 15 to 17.
(Processing of information terminal 11b)
First, the processing of the information terminal 11b will be described with reference to the flowchart of FIG. 15.
In steps S201 to S203, processing similar to that of steps S1 to S3 of FIG. 6 is executed.
In step S204, the dangerous area determination unit 511 determines whether the own moving body is within a dangerous area. Specifically, when the dangerous area determination unit 511 has received from the server 12b a determination result indicating that the own moving body is within a dangerous area, it determines that the own moving body is within a dangerous area, and the processing proceeds to step S205.
In step S205, the position information and the local map are transmitted to the server 12b, as in the processing of step S5 of FIG. 6.
Thereafter, the processing proceeds to step S206.
On the other hand, in step S204, when the dangerous area determination unit 511 has received from the server 12b a determination result indicating that the own moving body is outside the dangerous areas, or has not received any determination result from the server 12b, it determines that the own moving body is not within a dangerous area. Step S205 is then skipped and the processing proceeds to step S206.
In steps S206 to S210, processing similar to that of steps S6 to S10 of FIG. 6 is executed.
In step S211, whether to request transmission of the global map is determined, as in the processing of step S11 of FIG. 6. If it is determined that transmission of the global map should be requested, the processing proceeds to step S212.
In step S212, the global map transmission request information is transmitted to the server 12b, as in the processing of step S12 of FIG. 6.
The server 12b receives the global map transmission request information in step S302 of FIG. 16 described later, and in step S305 transmits the global map around the requesting moving body together with the dangerous area determination result.
In step S213, the receiving unit 102 receives the global map and the dangerous area determination result transmitted from the server 12b. The receiving unit 102 stores the received global map in the map information storage unit 216 and supplies the dangerous area determination result to the dangerous area determination unit 511.
Thereafter, the processing returns to step S201, and the processing from step S201 onward is executed.
On the other hand, when it is determined in step S211 that transmission of the global map is not to be requested, the processing returns to step S201, and the processing from step S201 onward is executed.
(Processing of server 12b)
Next, the processing of the server 12b, executed in correspondence with the processing of the information terminal 11b of FIG. 15, will be described with reference to the flowcharts of FIGS. 16 and 17.
In step S301, reception of the position information, local maps, and dangerous area notification information transmitted from the information terminals 11b is started, as in the processing of step S101 of FIG. 7.
In step S302, whether transmission of the global map has been requested is determined, as in the processing of step S103 of FIG. 7. If it is determined that transmission of the global map has been requested, the processing proceeds to step S303.
 ステップS303において、図7のステップS104の処理と同様に、要求元の移動体周辺のグローバルマップが検索される。また、グローバルマップ検索部310は、検索結果に基づいて、要求元の移動体の現在位置を検出し、検出結果を危険領域判定部602に供給する。 In step S303, the global map around the requesting mobile unit is searched in the same manner as in step S104 of FIG. In addition, the global map search unit 310 detects the current position of the requesting mobile body based on the search result, and supplies the detection result to the dangerous area determination unit 602.
 ステップS304において、危険領域判定部602は、要求元の移動体が危険領域内にあるか否かを判定する。具体的には、危険領域判定部602は、グローバルマップ検索部310により検出された要求元の移動体の現在位置が、マップ情報記憶部303に記憶されている危険領域マップに示される危険領域内にあるか否かを判定する。 In step S304, the dangerous area determination unit 602 determines whether or not the requesting moving body is in the dangerous area. Specifically, the dangerous area determination unit 602 determines that the current position of the requesting mobile body detected by the global map search unit 310 is within the dangerous area indicated in the dangerous area map stored in the map information storage unit 303. It is determined whether or not.
 ステップS305において、サーバ12bは、グローバルマップ及び危険領域の判定結果を送信する。具体的には、グローバルマップ検索部310は、ステップS305の処理で検索された領域及びその周辺の領域のマップをグローバルマップから抽出する。グローバルマップ検索部310は、抽出したマップを送信部312に供給する。 In step S305, the server 12b transmits the determination result of the global map and the dangerous area. Specifically, the global map search unit 310 extracts a map of the area searched in the process of step S305 and its surrounding area from the global map. The global map search unit 310 supplies the extracted map to the transmission unit 312.
The dangerous area determination unit 602 notifies the transmission unit 312 of the determination result of the processing in step S304.
The transmission unit 312 transmits the map supplied from the global map search unit 310 and the determination result notified from the dangerous area determination unit 602 to the requesting information terminal 11b.
Thereafter, the process proceeds to step S306.
On the other hand, if it is determined in step S302 that transmission of the global map has not been requested, the processing of steps S303 to S305 is skipped, and the process proceeds to step S306.
Thereafter, in steps S306 to S317, processing similar to that of steps S106 to S117 of FIG. 7 is executed. The process then returns to step S302, and the processing from step S302 onward is executed.
In this second embodiment, since the server 12b determines whether each moving body is in a dangerous area, the processing load on each information terminal 11b can be reduced.
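As an illustration of this server-side flow (steps S302 to S305), the following is a hedged sketch; search_around, contains, and send are hypothetical stand-ins for the global map search unit 310, the dangerous area map held in the map information storage unit 303, and the transmission unit 312.

```python
# Hedged sketch of the server-side handling of a global map request.

def handle_global_map_request(request, global_map, danger_map, send):
    # Step S303: search the global map around the requesting moving body and
    # detect its current position from the search result.
    area, position = global_map.search_around(request.rough_position)

    # Step S304: judge whether that position falls inside a dangerous area.
    in_danger_area = danger_map.contains(position)

    # Step S305: return the extracted map together with the verdict.
    send(request.terminal_id, {"global_map": area, "in_danger_area": in_danger_area})
```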
<3. Third Embodiment>
Next, a third embodiment of the present technology will be described with reference to FIG. 18.
The third embodiment differs from the first embodiment in the configuration of the SLAM processing unit 111 of the information terminal 11.
{Configuration example of the SLAM processing unit 111b}
FIG. 18 shows an example of the functional configuration of the SLAM processing unit 111b, which is the second embodiment of the SLAM processing unit 111 of FIG. 2. In the figure, parts corresponding to those in FIG. 3 are denoted by the same reference numerals.
The SLAM processing unit 111b differs from the SLAM processing unit 111a of FIG. 3 in that an estimation unit 701 is provided instead of the estimation unit 201. The estimation unit 701 differs from the estimation unit 201 in that a feature amount calculation unit 711, an image search unit 712, a feature point matching unit 713, a position/orientation estimation unit 714, and a movement amount estimation unit 715 are provided instead of the feature amount calculation unit 215, the motion matching unit 217, the movement amount estimation unit 218, the object dictionary storage unit 219, the object recognition unit 220, and the position/orientation estimation unit 222.
Like the feature amount calculation unit 215 of the SLAM processing unit 111a, the feature amount calculation unit 711 calculates the feature amount of each feature point of the left image. The feature amount calculation unit 711 supplies feature point information on each feature point to the image search unit 712 and the feature point matching unit 713, and stores it in the map information storage unit 216.
The image search unit 712 searches the global map stored in the map information storage unit 216 for an image similar to the left image, based on the feature amount information supplied from the feature amount calculation unit 711. That is, the image search unit 712 searches the global map for an image of the surroundings of the own moving body. The image search unit 712 supplies the retrieved image (hereinafter referred to as a similar image) to the feature point matching unit 713.
The feature point matching unit 713 detects, based on the feature amount information supplied from the feature amount calculation unit 711, the feature points of the similar image that correspond to the feature points detected by the feature point detection unit 212. The feature point matching unit 713 supplies the similar image and the feature point detection results to the position/orientation estimation unit 714.
The position/orientation estimation unit 714 estimates the position and orientation of the own moving body (more precisely, of the camera 101L) on the global map (that is, in the spatial coordinate system) based on the positions of the feature points in the similar image. The position/orientation estimation unit 714 supplies position/orientation information indicating the position and orientation of the own moving body to the dangerous area determination unit 112, the danger prediction unit 113, the position information generation unit 202, the speed estimation unit 223, and the movement amount estimation unit 715, and stores it in the position/orientation information storage unit 221.
The movement amount estimation unit 715 estimates the amount of movement of the position and orientation of the own moving body between the previous frame and the current frame, based on the position/orientation information of the previous frame and the current frame stored in the position/orientation information storage unit 221. The movement amount estimation unit 715 supplies movement amount information indicating the estimated amount of movement of the position and orientation of the own moving body to the object detection unit 203 and the speed estimation unit 223.
In this way, in the third embodiment, the position and orientation of the moving body are estimated directly using the global map.
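One way to realize this direct estimation is to match the current frame's feature descriptors against those of a stored keyframe of the global map and then solve a perspective-n-point problem. The sketch below uses OpenCV under that assumption; the keyframe structure (3D map points plus descriptors) and the function names are illustrative and are not the patent's API.

```python
# Hedged sketch of localization against the global map: descriptor matching
# followed by PnP pose estimation in the map's spatial coordinate system.

import cv2
import numpy as np

def localize(descriptors, keypoints_2d, keyframe, camera_matrix):
    # Feature point matching (cf. the feature point matching unit 713).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, keyframe["descriptors"])
    if len(matches) < 4:
        return None  # PnP needs at least four correspondences

    # Pair each matched 3D map point with its 2D observation in the current image.
    pts3d = np.float32([keyframe["points_3d"][m.trainIdx] for m in matches])
    pts2d = np.float32([keypoints_2d[m.queryIdx] for m in matches])

    # Position/orientation estimation (cf. the position/orientation estimation unit 714).
    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, pts2d, camera_matrix, None)
    return (rvec, tvec) if ok else None
```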
<4. Modifications>
Hereinafter, modifications of the above-described embodiments of the present technology will be described.
{Modifications regarding the method of estimating the position, orientation, speed, etc. of a moving body}
In the above description, an example using stereo-camera SLAM with two cameras has been shown; however, the position, orientation, speed, and the like of a moving body may instead be estimated using SLAM with one camera or with three or more cameras, for example.
Also, in the above description, the SLAM processing unit 111a performs motion matching, movement amount estimation, position/orientation estimation, object detection, and other processing using images of adjacent frames (the image of the previous frame and the image of the current frame). However, the processing may instead be performed using images that are two or more frames apart (for example, an image N frames earlier, with N of 2 or more, and the image of the current frame). The same applies to the SLAM processing unit 111b.
Furthermore, the position, orientation, speed, and the like of a moving body can also be estimated from images taken from the moving body by a method other than SLAM.
{Modifications regarding the dangerous area map}
For example, a degree of risk may be set for each dangerous area in the dangerous area map, and the information terminal 11 may perform processing according to the degree of risk.
In addition, which areas are dangerous and how dangerous they are differ depending on, for example, the time of day, the season, and the weather. Therefore, a plurality of dangerous area maps may be created for different times of day, seasons, weather conditions, and the like, and used selectively.
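As one possible illustration of such selective use, the sketch below keeps dangerous area maps in a dictionary keyed by coarse conditions and falls back to a default map; the keying scheme is an assumption, not something the text specifies.

```python
# Minimal sketch: choose a dangerous area map by time of day, season, and weather.

def hour_band(hour):
    return "night" if hour < 6 or hour >= 18 else "day"

def select_danger_map(maps, hour, season, weather):
    # Try the most specific key first, then fall back to a default map.
    for key in [(hour_band(hour), season, weather),
                (hour_band(hour), season, None),
                (None, None, None)]:
        if key in maps:
            return maps[key]
    raise KeyError("no dangerous area map registered")
```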
Also, in the above description, the information terminal 11 notifies the server 12 of a place where a risk of an accident with another moving body was detected as a dangerous area, but a dangerous area may be detected by a different method. For example, a place where the region blocking the view in the traveling direction occupies a predetermined ratio or more of the left image or the right image may be detected as a dangerous area.
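The view-blocking criterion just mentioned reduces to a simple ratio test. A minimal sketch follows, assuming an upstream step has already produced a boolean mask of view-blocking pixels; the mask source and the threshold value are assumptions.

```python
# Flag the current location as a dangerous area when the region blocking the
# forward view occupies at least a threshold fraction of the image.

import numpy as np

def is_danger_by_occlusion(blocking_mask: np.ndarray, threshold: float = 0.4) -> bool:
    """blocking_mask: boolean HxW array, True where the forward view is blocked."""
    ratio = float(blocking_mask.sum()) / blocking_mask.size
    return ratio >= threshold
```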
{Modifications regarding the device configuration and the division of processing}
In the above description, an example in which the server 12 predicts the movement of each moving body has been shown. However, for example, each information terminal 11 may predict the movement of its own moving body by processing similar to that of the motion prediction unit 306 of the server 12a, include the prediction result in the position information, and transmit it to the server 12, as in the sketch below. The server 12 may then, for example, perform the danger prediction for each moving body based on the movement prediction results received from the information terminals 11.
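As a concrete stand-in for such terminal-side motion prediction, the following sketch extrapolates the estimated position with a constant-velocity model; the model choice and the parameter values are assumptions for illustration.

```python
# Minimal sketch: predict the own moving body's future positions before
# including them in the position information sent to the server.

import numpy as np

def predict_positions(position: np.ndarray, velocity: np.ndarray,
                      horizon_s: float = 2.0, step_s: float = 0.1):
    """Return predicted positions at step_s intervals up to horizon_s."""
    steps = int(horizon_s / step_s)
    return [position + velocity * step_s * (k + 1) for k in range(steps)]
```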
In addition, for example, the cameras 101L and 101R may be provided outside the information terminal 11.
{Scope of application of the present technology}
The above description has mainly taken the case where the moving body is a vehicle as an example. As described above, however, the present technology can also be applied when the moving body is something other than a vehicle, such as a person, an airplane, a helicopter, or a drone. The present technology is also applicable not only to vehicles driven by a prime mover but also to vehicles that run on rails or under overhead lines, vehicles moved by human power, and the like. Furthermore, the present technology can be applied regardless of how the vehicle is driven (for example, automatic driving, manual driving, or remote operation).
The present technology is particularly effective when applied to situations where many moving bodies come and go or where blind spots from a moving body are likely to occur. For example, when people wearing head-mounted displays use AR (augmented reality) or VR (virtual reality) applications, accidents between people, such as collisions, rear-end collisions, and contact, are likely to occur on streets or at event venues where many people come and go. It is therefore conceivable, for example, to realize the above-described information terminal 11 as a wearable device and guide people so as to avoid accidents between them.
Furthermore, in the case of a moving body that moves through the air, such as an airplane, a drone, or a helicopter, the SLAM technology can be applied by photographing the ground from the moving body.
The present technology can also be applied to preventing accidents between two or more types of moving bodies, for example, accidents among automobiles, bicycles, and people, or accidents between automobiles and trains.
<5. Application Examples>
{Configuration example of a computer}
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
FIG. 19 is a block diagram showing a configuration example of the hardware of a computer that executes the above-described series of processes by a program.
In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another by a bus 904.
An input/output interface 905 is further connected to the bus 904. An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910 are connected to the input/output interface 905.
The input unit 906 includes a keyboard, a mouse, a microphone, and the like. The output unit 907 includes a display, a speaker, and the like. The storage unit 908 includes a hard disk, a nonvolatile memory, and the like. The communication unit 909 includes a network interface and the like. The drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 901 performs the above-described series of processes by, for example, loading a program stored in the storage unit 908 into the RAM 903 via the input/output interface 905 and the bus 904 and executing it.
The program executed by the computer (CPU 901) can be provided by being recorded on the removable medium 911 as a package medium or the like, for example. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer, the program can be installed in the storage unit 908 via the input/output interface 905 by mounting the removable medium 911 on the drive 910. The program can also be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908. Alternatively, the program can be installed in advance in the ROM 902 or the storage unit 908.
Note that the program executed by the computer may be a program whose processing is performed in time series in the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when it is called.
In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
Furthermore, embodiments of the present technology are not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
For example, the present technology can take a cloud computing configuration in which one function is shared and processed jointly by a plurality of devices via a network.
Each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
Furthermore, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one device or shared among a plurality of devices.
{Example of application to a vehicle control system}
As described above, the information terminal 11 can be realized as a device mounted on any type of vehicle, such as an automobile, an electric vehicle, a hybrid electric vehicle, or a motorcycle.
FIG. 20 is a block diagram showing an example of a schematic configuration of a vehicle control system 1000 to which the present technology can be applied. The vehicle control system 1000 includes a plurality of electronic control units connected via a communication network 1001. In the example shown in FIG. 20, the vehicle control system 1000 includes a drive system control unit 1002, a body system control unit 1004, a battery control unit 1005, a vehicle exterior information detection device 1007, a vehicle interior information detection device 1010, and an integrated control unit 1012. The communication network 1001 connecting these control units may be an in-vehicle communication network compliant with an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer, parameters used for various calculations, and the like, and a drive circuit that drives various devices to be controlled. Each control unit includes a network I/F for communicating with other control units via the communication network 1001, and a communication I/F for communicating with devices, sensors, and the like inside and outside the vehicle by wired or wireless communication. In FIG. 20, a microcomputer 1051, a general-purpose communication I/F 1052, a dedicated communication I/F 1053, a positioning unit 1054, a beacon receiving unit 1055, an in-vehicle device I/F 1056, an audio/image output unit 1057, an in-vehicle network I/F 1058, and a storage unit 1059 are shown as the functional configuration of the integrated control unit 1012. The other control units similarly include a microcomputer, a communication I/F, a storage unit, and the like.
The drive system control unit 1002 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 1002 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. The drive system control unit 1002 may also function as a control device such as an ABS (Antilock Brake System) or ESC (Electronic Stability Control).
A vehicle state detection unit 1003 is connected to the drive system control unit 1002. The vehicle state detection unit 1003 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like. The drive system control unit 1002 performs arithmetic processing using signals input from the vehicle state detection unit 1003, and controls the internal combustion engine, the drive motor, an electric power steering device, the brake device, and the like.
The body system control unit 1004 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 1004 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, blinkers, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 1004. The body system control unit 1004 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
The battery control unit 1005 controls a secondary battery 1006, which is the power supply source of the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, and the remaining battery capacity is input to the battery control unit 1005 from a battery device including the secondary battery 1006. The battery control unit 1005 performs arithmetic processing using these signals and controls the temperature regulation of the secondary battery 1006, a cooling device provided in the battery device, and the like.
The vehicle exterior information detection device 1007 detects information outside the vehicle on which the vehicle control system 1000 is mounted. For example, at least one of an imaging unit 1008 and a vehicle exterior information detection unit 1009 is connected to the vehicle exterior information detection device 1007. The imaging unit 1008 includes at least one of a ToF (Time Of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The vehicle exterior information detection unit 1009 includes, for example, an environmental sensor for detecting the current weather or meteorological conditions, or an ambient information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle on which the vehicle control system 1000 is mounted.
The environmental sensor may be, for example, at least one of a raindrop sensor that detects rain, a fog sensor that detects fog, a sunshine sensor that detects the degree of sunlight, and a snow sensor that detects snowfall. The ambient information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device. The imaging unit 1008 and the vehicle exterior information detection unit 1009 may each be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated.
Here, FIG. 21 shows an example of the installation positions of the imaging unit 1008 and the vehicle exterior information detection unit 1009. Imaging units 1101F, 1101L, 1101R, 1101B, and 1101C are provided at, for example, at least one of the following positions of a vehicle 1100: the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior. The imaging unit 1101F provided on the front nose and the imaging unit 1101C provided at the upper part of the windshield in the vehicle interior mainly acquire images ahead of the vehicle 1100. The imaging units 1101L and 1101R provided on the side mirrors mainly acquire images of the sides of the vehicle 1100. The imaging unit 1101B provided on the rear bumper or the back door mainly acquires images behind the vehicle 1100. The imaging unit 1101C provided at the upper part of the windshield in the vehicle interior is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
FIG. 21 also shows an example of the imaging ranges of the imaging units 1101F, 1101L, 1101R, and 1101B. The imaging range a indicates the imaging range of the imaging unit 1101F provided on the front nose, the imaging ranges b and c indicate the imaging ranges of the imaging units 1101L and 1101R provided on the side mirrors, respectively, and the imaging range d indicates the imaging range of the imaging unit 1101B provided on the rear bumper or the back door. For example, by overlaying the image data captured by the imaging units 1101F, 1101L, 1101R, and 1101B, an overhead image of the vehicle 1100 viewed from above is obtained.
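A common way to obtain such an overhead image is to warp each camera image onto the ground plane with a precomputed homography and overlay the results. The sketch below assumes the homographies come from an offline calibration; it is an illustration, not the system's actual compositing method.

```python
# Hedged sketch: compose a bird's-eye view from several camera images.

import cv2
import numpy as np

def compose_overhead(images, homographies, out_size=(800, 800)):
    canvas = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, out_size)  # project onto the ground plane
        canvas = np.maximum(canvas, warped)             # simple overlay of the views
    return canvas
```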
Vehicle exterior information detection units 1102F, 1102FL, 1102FR, 1102ML, 1102MR, 1102C, 1102BL, 1102BR, and 1102B provided at the front, rear, sides, and corners of the vehicle 1100 and at the upper part of the windshield in the vehicle interior may be, for example, ultrasonic sensors or radar devices. The vehicle exterior information detection units 1102F, 1102C, and 1102B provided on the front nose, the rear bumper, and the back door of the vehicle 1100 and at the upper part of the windshield in the vehicle interior may be, for example, LIDAR devices. These vehicle exterior information detection units 1102F to 1102B are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.
Returning to FIG. 20, the description continues. The vehicle exterior information detection device 1007 causes the imaging unit 1008 to capture images outside the vehicle and receives the captured image data. The vehicle exterior information detection device 1007 also receives detection information from the connected vehicle exterior information detection unit 1009. When the vehicle exterior information detection unit 1009 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection device 1007 causes it to emit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves. Based on the received information, the vehicle exterior information detection device 1007 may perform object detection processing or distance detection processing for people, cars, obstacles, signs, characters on the road surface, and the like. Based on the received information, the vehicle exterior information detection device 1007 may perform environment recognition processing for recognizing rainfall, fog, road surface conditions, and the like. The vehicle exterior information detection device 1007 may also calculate the distance to an object outside the vehicle based on the received information.
Furthermore, based on the received image data, the vehicle exterior information detection device 1007 may perform image recognition processing or distance detection processing for recognizing people, cars, obstacles, signs, characters on the road surface, and the like. The vehicle exterior information detection device 1007 may perform processing such as distortion correction or alignment on the received image data, and may combine image data captured by different imaging units 1008 to generate an overhead image or a panoramic image. The vehicle exterior information detection device 1007 may also perform viewpoint conversion processing using image data captured by different imaging units 1008.
The vehicle interior information detection device 1010 detects information inside the vehicle. For example, a driver state detection unit 1011 that detects the state of the driver is connected to the vehicle interior information detection device 1010. The driver state detection unit 1011 may include a camera that images the driver, a biometric sensor that detects biometric information of the driver, a microphone that collects sound in the vehicle interior, and the like. The biometric sensor is provided, for example, on the seat surface or the steering wheel, and detects biometric information of an occupant sitting on the seat or of the driver gripping the steering wheel. Based on the detection information input from the driver state detection unit 1011, the vehicle interior information detection device 1010 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing off. The vehicle interior information detection device 1010 may also perform processing such as noise canceling on the collected audio signal.
The integrated control unit 1012 controls overall operation within the vehicle control system 1000 according to various programs. An input unit 1018 is connected to the integrated control unit 1012. The input unit 1018 is realized by a device that can be operated by an occupant, such as a touch panel, buttons, a microphone, switches, or levers. The input unit 1018 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) compatible with the operation of the vehicle control system 1000. The input unit 1018 may also be, for example, a camera, in which case the occupant can input information by gesture. Furthermore, the input unit 1018 may include, for example, an input control circuit that generates an input signal based on the information input by the occupant or the like using the input unit 1018 and outputs it to the integrated control unit 1012. By operating the input unit 1018, the occupant or the like inputs various data to the vehicle control system 1000 and instructs it to perform processing operations.
The storage unit 1059 may include a RAM (Random Access Memory) that stores various programs executed by the microcomputer, and a ROM (Read Only Memory) that stores various parameters, calculation results, sensor values, and the like. The storage unit 1059 may be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 1052 is a general-purpose communication I/F that mediates communication with various devices existing in an external environment 1016. The general-purpose communication I/F 1052 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX, LTE (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)). The general-purpose communication I/F 1052 may connect to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network) via, for example, a base station or an access point. The general-purpose communication I/F 1052 may also connect to a terminal existing in the vicinity of the vehicle (for example, a terminal of a pedestrian or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
The dedicated communication I/F 1053 is a communication I/F that supports a communication protocol formulated for use in vehicles. The dedicated communication I/F 1053 may implement, for example, a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, or DSRC (Dedicated Short Range Communications). The dedicated communication I/F 1053 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, and vehicle-to-pedestrian communication.
The positioning unit 1054 receives, for example, GNSS signals from GNSS (Global Navigation Satellite System) satellites (for example, GPS signals from GPS (Global Positioning System) satellites), executes positioning, and generates position information including the latitude, longitude, and altitude of the vehicle. Note that the positioning unit 1054 may identify the current position by exchanging signals with a wireless access point, or may acquire position information from a terminal with a positioning function, such as a mobile phone, a PHS, or a smartphone.
The beacon receiving unit 1055 receives, for example, radio waves or electromagnetic waves emitted from radio stations or the like installed on the road, and acquires information such as the current position, traffic congestion, road closures, and required time. Note that the function of the beacon receiving unit 1055 may be included in the dedicated communication I/F 1053 described above.
The in-vehicle device I/F 1056 is a communication interface that mediates connections between the microcomputer 1051 and various devices existing in the vehicle. The in-vehicle device I/F 1056 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB). The in-vehicle device I/F 1056 may also establish a wired connection via a connection terminal (and, if necessary, a cable), not shown. The in-vehicle device I/F 1056 exchanges control signals or data signals with, for example, a mobile device or wearable device of an occupant, or an information device carried into or attached to the vehicle.
The in-vehicle network I/F 1058 is an interface that mediates communication between the microcomputer 1051 and the communication network 1001. The in-vehicle network I/F 1058 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 1001.
The microcomputer 1051 of the integrated control unit 1012 controls the vehicle control system 1000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 1052, the dedicated communication I/F 1053, the positioning unit 1054, the beacon receiving unit 1055, the in-vehicle device I/F 1056, and the in-vehicle network I/F 1058. For example, the microcomputer 1051 may calculate control target values for the driving force generation device, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and output control commands to the drive system control unit 1002. For example, the microcomputer 1051 may perform cooperative control for the purposes of vehicle collision avoidance or impact mitigation, follow-up driving based on the inter-vehicle distance, vehicle-speed-maintaining driving, automatic driving, and the like.
The microcomputer 1051 may create local map information including information on the surroundings of the current position of the vehicle based on information acquired via at least one of the general-purpose communication I/F 1052, the dedicated communication I/F 1053, the positioning unit 1054, the beacon receiving unit 1055, the in-vehicle device I/F 1056, and the in-vehicle network I/F 1058. The microcomputer 1051 may also predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry onto a closed road based on the acquired information, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
The audio/image output unit 1057 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to the occupants of the vehicle or to the outside of the vehicle. In the example of FIG. 20, an audio speaker 1013, a display unit 1014, and an instrument panel 1015 are illustrated as output devices. The display unit 1014 may include, for example, at least one of an on-board display and a head-up display. The display unit 1014 may have an AR (Augmented Reality) display function. The output device may be a device other than these, such as headphones, a projector, or a lamp. When the output device is a display device, the display device visually displays the results obtained by the various processes performed by the microcomputer 1051, or the information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal consisting of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
Note that in the example shown in FIG. 20, at least two control units connected via the communication network 1001 may be integrated into one control unit. Alternatively, each control unit may be configured from a plurality of control units. Furthermore, the vehicle control system 1000 may include another control unit not shown. In the above description, some or all of the functions performed by any control unit may be given to another control unit. That is, as long as information is transmitted and received via the communication network 1001, predetermined arithmetic processing may be performed by any control unit. Similarly, a sensor or device connected to one control unit may be connected to another control unit, and a plurality of control units may transmit and receive detection information to and from each other via the communication network 1001.
In the vehicle control system 1000 described above, for example, the information processing unit 103 of the information terminal 11a in FIG. 2 and the information processing unit 501 of the information terminal 11b in FIG. 13 can be applied to the integrated control unit 1012 in FIG. 20. In addition, for example, the receiving unit 102 and the transmission unit 104 of the information terminal 11a in FIG. 2 and of the information terminal 11b in FIG. 13 can be applied to the general-purpose communication I/F 1052 in FIG. 20. Furthermore, for example, the cameras 101L and 101R of the information terminal 11a in FIG. 2 and of the information terminal 11b in FIG. 13 can be applied to the imaging unit 1008 in FIG. 20. Note that the cameras 101L and 101R are assumed to be provided at, for example, the positions of the vehicle exterior information detection units 1102FL and 1102FR in FIG. 21, in order to secure a long baseline length.
At least some of the components of the information processing unit 103 and the information processing unit 501 may be realized in a module for the integrated control unit 1012 shown in FIG. 20 (for example, an integrated circuit module configured on one die). Alternatively, the information processing unit 103 and the information processing unit 501 may be realized by a plurality of control units of the vehicle control system 1000 shown in FIG. 20.
Note that a computer program for realizing the functions of the information processing unit 103 and the information processing unit 501 can be implemented in any of the control units or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. The above computer program may also be distributed, for example, via a network without using a recording medium.
The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
Furthermore, for example, the present technology can also have the following configurations.
(1)
An information processing apparatus comprising:
a receiving unit that receives, from an information terminal provided in each moving body, position information indicating the position and speed of each moving body estimated based on images photographed from each moving body; and
a risk prediction unit that predicts accidents between the moving bodies based on the motions of the moving bodies predicted from the estimated positions and speeds of the moving bodies.
(2)
The information processing apparatus according to (1), further including a motion prediction unit that predicts a motion of each moving body based on the estimated position and speed of each moving body.
(3)
The receiving unit receives a local map indicating a position in a three-dimensional space of a feature point in an image taken from each moving body from each information terminal,
further comprising a global map update unit that updates, based on the received local map, a global map indicating positions in a three-dimensional space of feature points within a predetermined area,
The information processing apparatus according to (2), wherein the motion prediction unit further predicts the motion of each moving object based on the global map.
(4)
The information processing apparatus according to (3), wherein the motion prediction unit predicts a motion in which each moving body avoids a stationary object on the global map.
(5)
The position information includes the movement of each mobile object predicted by each information terminal,
The information processing apparatus according to (1), wherein the risk prediction unit predicts an accident between the moving bodies based on movements of the moving bodies predicted by the information terminals.
(6)
The information processing apparatus according to any one of (1) to (5), further comprising a danger notification unit that notifies the information terminal provided in a moving body that the risk prediction unit has predicted to be at risk of an accident.
(7)
The information processing apparatus according to (6), wherein the danger notification unit transmits, to the information terminal, control information used for processing for the mobile body to avoid an accident.
(8)
The receiving unit receives a detection result of a dangerous area where an accident may occur from each information terminal,
A dangerous area map updating unit for updating a dangerous area map indicating the dangerous area based on the detection result of the dangerous area by each of the information terminals;
The information processing apparatus according to any one of (1) to (7), further comprising: a dangerous area notification unit that notifies each of the information terminals of the dangerous area based on the dangerous area map.
(9)
The receiving unit receives a detection result of a dangerous area where an accident may occur from each information terminal,
A dangerous area map updating unit for updating a dangerous area map indicating the dangerous area based on the detection result of the dangerous area by each of the information terminals;
and a dangerous area determination unit that determines, based on the dangerous area map, whether each moving body is in the dangerous area and notifies each information terminal of the determination result. The information processing apparatus according to any one of (1) to (7).
(10)
The information processing apparatus according to any one of (1) to (9), further including a storage unit that stores location information related to the accident when the accident information indicating the occurrence of the accident of the mobile object is received.
(11)
The information processing apparatus according to (10), further including a simulation unit that simulates an accident occurrence state of the moving body based on the position information stored in the storage unit.
(12)
An information processing method comprising:
a receiving step, performed by an information processing apparatus, of receiving, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on images captured from the moving body; and
a risk prediction step of predicting an accident between the moving bodies based on movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
(13)
An information terminal provided in a moving body, including:
an estimation unit that estimates the position and speed of the moving body based on images captured from the moving body;
a transmission unit that transmits position information including the estimated position and speed of the moving body to an information processing apparatus; and
a risk avoidance processing unit that performs processing for avoiding an accident when the information processing apparatus notifies the information terminal of a risk that the moving body will have an accident.
(14)
The information terminal according to (13), wherein the estimation unit estimates the position and speed of the moving body based on relative positions between the moving body and feature points in images captured from the moving body.
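(14) ties the ego-estimate to feature points observed from the moving body. Under the simplifying assumption of pure translation among static feature points (a real SLAM front end would also estimate rotation), the body's displacement is the negated mean shift of the points in the body frame, as sketched below with invented names.

```python
import numpy as np

# Minimal sketch, assuming static feature points and translation-only
# motion; this is a simplification of what the described SLAM unit does.

def estimate_motion(rel_prev, rel_curr, dt):
    # rel_prev, rel_curr: (N, 3) positions of the same static feature points
    # relative to the vehicle at two instants. Returns (displacement, speed).
    rel_prev = np.asarray(rel_prev)
    rel_curr = np.asarray(rel_curr)
    displacement = -(rel_curr - rel_prev).mean(axis=0)  # vehicle motion in its own frame
    speed = np.linalg.norm(displacement) / dt
    return displacement, speed

prev = [[10.0, 2.0, 0.0], [12.0, -1.0, 0.0]]
curr = [[9.0, 2.0, 0.0], [11.0, -1.0, 0.0]]  # points slid 1 m backwards
print(estimate_motion(prev, curr, dt=0.1))   # vehicle moved ~1 m forward, 10 m/s
```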
(15)
A local map generation unit that generates a local map indicating positions of the feature points in a three-dimensional space, and
The information terminal according to (14), wherein the transmission unit further transmits the local map to the information processing apparatus.
(16)
An object detection unit that detects objects around the moving body based on the feature points, and
a risk prediction unit that predicts an accident of the moving body based on the estimated position and speed of the moving body and the detection results of the objects,
The information terminal according to (14) or (15), wherein the risk avoidance processing unit further performs processing for avoiding an accident when the risk prediction unit predicts a risk that the moving body will have an accident.
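(16) has the terminal itself predict an accident from its own estimated state plus detected objects. One assumed minimal criterion is a time-to-collision test against each object on the path; the threshold below is illustrative only and is not taken from the publication.

```python
# Minimal sketch, assuming a 1-D time-to-collision check the terminal
# could run against each detected object ahead on its path.

def time_to_collision(gap_m, own_speed_mps, obj_speed_mps):
    # Closing along one lane: returns TTC in seconds, or None if not closing.
    closing = own_speed_mps - obj_speed_mps
    return gap_m / closing if closing > 0 else None

def should_warn(gap_m, own_speed_mps, obj_speed_mps, threshold_s=2.0):
    ttc = time_to_collision(gap_m, own_speed_mps, obj_speed_mps)
    return ttc is not None and ttc < threshold_s

print(should_warn(gap_m=25.0, own_speed_mps=16.7, obj_speed_mps=2.8))  # True (~1.8 s)
```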
(17)
The estimation unit predicts the movement of the moving body based on the estimated position and speed of the moving body,
The information terminal according to any one of (13) to (16), wherein the transmission unit transmits the position information including a prediction result of the movement of the moving body to the information processing apparatus.
(18)
The information terminal according to any one of (13) to (17), wherein the transmission unit transmits the position information to the information processing apparatus when it determines, based on information from the information processing apparatus, that the moving body is in a dangerous area where there is a risk that an accident will occur.
(19)
The danger prediction unit further detects the dangerous area,
The information terminal according to (18), wherein the transmission unit transmits a detection result of the dangerous area to the information processing apparatus.
(20)
An information processing method comprising:
an estimation step, performed by an information terminal provided in a moving body, of estimating the position and speed of the moving body based on images captured from the moving body;
a transmission step of transmitting position information including the estimated position and speed of the moving body to an information processing apparatus; and
a risk avoidance processing step of performing processing for avoiding an accident when the information processing apparatus notifies the information terminal of a risk that the moving body will have an accident.
1 Information processing system, 11-1 to 11-n, 11a, 11b Information terminal, 12, 12a, 12b Server, 101R, 101L Camera, 102 Reception unit, 103 Information processing unit, 104 Transmission unit, 111, 111a, 111b SLAM processing unit, 112 Dangerous area determination unit, 113 Danger prediction unit, 114 Danger avoidance processing unit, 201 Estimation unit, 202 Position information generation unit, 203 Object detection unit, 204 Local map generation unit, 211L, 211R Image correction unit, 212 Feature point detection unit, 213 Parallax matching unit, 214 Distance estimation unit, 215 Feature amount calculation unit, 216 Map information storage unit, 217 Motion matching unit, 218 Movement amount estimation unit, 219 Object dictionary storage unit, 220 Object recognition unit, 221 Position and orientation information storage unit, 222 Position and orientation estimation unit, 301 Reception unit, 302 Position information storage unit, 303 Map information storage unit, 304 Global map update unit, 305 Dangerous area map update unit, 306 Motion prediction unit, 307 Danger prediction unit, 308 Danger notification unit, 309 Simulation unit, 310 Global map search unit, 311 Dangerous area notification unit, 312 Transmission unit, 501 Information processing unit, 511 Dangerous area determination unit, 601 Global map search unit, 602 Dangerous area determination unit, 711 Feature amount calculation unit, 712 Image search unit, 713 Feature point matching unit, 714 Position and orientation estimation unit, 715 Movement amount estimation unit

Claims (20)

1. An information processing apparatus comprising:
a receiving unit that receives, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on images captured from the moving body; and
a risk prediction unit that predicts an accident between the moving bodies based on movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
2. The information processing apparatus according to claim 1, further comprising a motion prediction unit that predicts the motion of each moving body based on the estimated position and speed of the moving body.
3. The information processing apparatus according to claim 2, wherein
the receiving unit receives, from each information terminal, a local map indicating positions in a three-dimensional space of feature points in images captured from the moving body,
the information processing apparatus further comprises a global map updating unit that updates, based on the received local maps, a global map indicating positions in the three-dimensional space of feature points within a predetermined area, and
the motion prediction unit further predicts the motion of each moving body based on the global map.
4. The information processing apparatus according to claim 3, wherein the motion prediction unit predicts a motion in which each moving body avoids a stationary object on the global map.
5. The information processing apparatus according to claim 1, wherein the position information includes the movement of each moving body predicted by the corresponding information terminal, and the risk prediction unit predicts an accident between the moving bodies based on the movements of the moving bodies predicted by the information terminals.
6. The information processing apparatus according to claim 1, further comprising a danger notification unit that notifies the information terminal provided in a moving body of danger when the risk prediction unit predicts that the moving body is at risk of an accident.
7. The information processing apparatus according to claim 6, wherein the danger notification unit transmits, to the information terminal, control information used in processing by which the moving body avoids an accident.
8. The information processing apparatus according to claim 1, wherein the receiving unit receives, from each information terminal, a detection result of a dangerous area where an accident may occur, the information processing apparatus further comprising:
a dangerous area map updating unit that updates a dangerous area map indicating the dangerous areas based on the detection results from the information terminals; and
a dangerous area notification unit that notifies each information terminal of the dangerous areas based on the dangerous area map.
9. The information processing apparatus according to claim 1, wherein the receiving unit receives, from each information terminal, a detection result of a dangerous area where an accident may occur, the information processing apparatus further comprising:
a dangerous area map updating unit that updates a dangerous area map indicating the dangerous areas based on the detection results from the information terminals; and
a dangerous area determination unit that determines, based on the dangerous area map, whether each moving body is in the dangerous area and notifies the corresponding information terminal of the determination result.
10. The information processing apparatus according to claim 1, further comprising a storage unit that stores position information related to an accident when accident information indicating the occurrence of the accident of a moving body is received.
11. The information processing apparatus according to claim 10, further comprising a simulation unit that simulates, based on the position information stored in the storage unit, the circumstances in which the accident of the moving body occurred.
12. An information processing method comprising:
a receiving step, performed by an information processing apparatus, of receiving, from an information terminal provided in each moving body, position information indicating the position and speed of the moving body estimated based on images captured from the moving body; and
a risk prediction step of predicting an accident between the moving bodies based on movements of the moving bodies predicted based on the estimated positions and speeds of the moving bodies.
13. An information terminal provided in a moving body, comprising:
an estimation unit that estimates the position and speed of the moving body based on images captured from the moving body;
a transmission unit that transmits position information including the estimated position and speed of the moving body to an information processing apparatus; and
a risk avoidance processing unit that performs processing for avoiding an accident when the information processing apparatus notifies the information terminal of a risk that the moving body will have an accident.
14. The information terminal according to claim 13, wherein the estimation unit estimates the position and speed of the moving body based on relative positions between the moving body and feature points in images captured from the moving body.
15. The information terminal according to claim 14, further comprising a local map generation unit that generates a local map indicating positions of the feature points in a three-dimensional space, wherein the transmission unit further transmits the local map to the information processing apparatus.
16. The information terminal according to claim 14, further comprising:
an object detection unit that detects objects around the moving body based on the feature points; and
a risk prediction unit that predicts an accident of the moving body based on the estimated position and speed of the moving body and the detection results of the objects,
wherein the risk avoidance processing unit further performs processing for avoiding an accident when the risk prediction unit predicts a risk that the moving body will have an accident.
17. The information terminal according to claim 13, wherein the estimation unit predicts the movement of the moving body based on the estimated position and speed of the moving body, and the transmission unit transmits the position information including a prediction result of the movement of the moving body to the information processing apparatus.
18. The information terminal according to claim 13, wherein the transmission unit transmits the position information to the information processing apparatus when it determines, based on information from the information processing apparatus, that the moving body is in a dangerous area where there is a risk that an accident will occur.
19. The information terminal according to claim 18, wherein the risk prediction unit further detects the dangerous area, and the transmission unit transmits a detection result of the dangerous area to the information processing apparatus.
20. An information processing method comprising:
an estimation step, performed by an information terminal provided in a moving body, of estimating the position and speed of the moving body based on images captured from the moving body;
a transmission step of transmitting position information including the estimated position and speed of the moving body to an information processing apparatus; and
a risk avoidance processing step of performing processing for avoiding an accident when the information processing apparatus notifies the information terminal of a risk that the moving body will have an accident.
PCT/JP2016/077428 2015-09-30 2016-09-16 Information processing device, information terminal and information processing method WO2017057055A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-193360 2015-09-30
JP2015193360A JP2017068589A (en) 2015-09-30 2015-09-30 Information processing apparatus, information terminal, and information processing method

Publications (1)

Publication Number Publication Date
WO2017057055A1 true WO2017057055A1 (en) 2017-04-06

Family

ID=58423777

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/077428 WO2017057055A1 (en) 2015-09-30 2016-09-16 Information processing device, information terminal and information processing method

Country Status (2)

Country Link
JP (1) JP2017068589A (en)
WO (1) WO2017057055A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6979782B2 * 2017-04-17 2021-12-15 Zenrin Co., Ltd. 3D map data and control device
WO2019008755A1 * 2017-07-07 2019-01-10 Maxell, Ltd. Information processing system, and information processing system infrastructure and information processing method used in same
WO2019065546A1 * 2017-09-29 2019-04-04 Panasonic Intellectual Property Corporation of America Three-dimensional data creation method, client device and server
JP6676025B2 2017-10-23 2020-04-08 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and program
US10706308B2 * 2018-08-07 2020-07-07 Accenture Global Solutions Limited Image processing for automated object identification
WO2020095541A1 * 2018-11-06 2020-05-14 Sony Corporation Information processing device, information processing method, and program
JP7134252B2 * 2018-12-03 2022-09-09 NTT Docomo, Inc. Map data generator
KR102651410B1 2018-12-28 2024-03-28 Hyundai Motor Company Automated Valet Parking System, and infrastructure and vehicle thereof
JP2022516849A * 2019-01-18 2022-03-03 Vestel Elektronik Sanayi ve Ticaret A.S. Heads-up display systems, methods, data carriers, processing systems and vehicles
JP7243524B2 * 2019-08-23 2023-03-22 Toyota Motor Corporation Autonomous driving system
WO2021131797A1 * 2019-12-25 2021-07-01 Sony Group Corporation Information processing device, information processing system, and information processing method
JP7422177B2 2022-03-31 2024-01-25 Honda Motor Co., Ltd. Traffic safety support system


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000306194A (en) * 1999-04-21 2000-11-02 Toshiba Corp Automatic running support system
JP2007248321A (en) * 2006-03-17 2007-09-27 Sumitomo Electric Ind Ltd System and method for estimating traveling position of vehicle
JP2012155654A (en) * 2011-01-28 2012-08-16 Sony Corp Information processing device, notification method, and program
JP2013149191A (en) * 2012-01-23 2013-08-01 Masahiro Watanabe Traffic safety support system
JP2014035639A (en) * 2012-08-08 2014-02-24 Toshiba Corp Traffic accident occurrence prediction device, method and program

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110936953A (en) * 2018-09-21 2020-03-31 大众汽车有限公司 Method and device for providing an image of the surroundings and motor vehicle having such a device
WO2020188895A1 * 2019-03-18 2020-09-24 NEC Corporation Edge computing server, control method, and non-transitory computer-readable medium
JPWO2020188895A1 * 2019-03-18 2021-12-23 NEC Corporation Edge computing servers, control methods, and control programs
JP7173287B2 2019-03-18 2022-11-16 NEC Corporation Edge computing server, control method, and control program
US20220355823A1 (en) * 2019-06-18 2022-11-10 Ihi Corporation Travel route generation device and control device
US11999384B2 (en) * 2019-06-18 2024-06-04 Ihi Corporation Travel route generation device and control device
US20220414983A1 (en) * 2019-11-19 2022-12-29 Sony Group Corporation Information processing device, information processing method, and program
CN114830205B (en) * 2020-01-17 2023-12-08 日立安斯泰莫株式会社 Electronic control device and vehicle control system
CN114830205A (en) * 2020-01-17 2022-07-29 日立安斯泰莫株式会社 Electronic control device and vehicle control system
CN113177428A (en) * 2020-01-27 2021-07-27 通用汽车环球科技运作有限责任公司 Real-time active object fusion for object tracking
CN113362646A (en) * 2020-03-05 2021-09-07 本田技研工业株式会社 Information processing apparatus, vehicle, computer-readable storage medium, and information processing method
CN113362646B (en) * 2020-03-05 2023-04-07 本田技研工业株式会社 Information processing apparatus, vehicle, computer-readable storage medium, and information processing method
US11622228B2 (en) 2020-03-05 2023-04-04 Honda Motor Co., Ltd. Information processing apparatus, vehicle, computer-readable storage medium, and information processing method
WO2023179988A1 (en) * 2022-03-24 2023-09-28 Zf Friedrichshafen Ag Heuristics-based communication with road users
WO2023243558A1 * 2022-06-15 2023-12-21 Sony Group Corporation Information processing device, program, and information processing system

Also Published As

Publication number Publication date
JP2017068589A (en) 2017-04-06

Similar Documents

Publication Publication Date Title
WO2017057055A1 (en) Information processing device, information terminal and information processing method
US10753757B2 (en) Information processing apparatus and information processing method
US10970877B2 (en) Image processing apparatus, image processing method, and program
US11450026B2 (en) Information processing apparatus, information processing method, and mobile object
US11363235B2 (en) Imaging apparatus, image processing apparatus, and image processing method
US11501461B2 (en) Controller, control method, and program
EP3835823B1 (en) Information processing device, information processing method, computer program, information processing system, and moving body device
CN112119282A (en) Information processing apparatus, mobile apparatus, method, and program
US20210033712A1 (en) Calibration apparatus, calibration method, and program
JP2023126642A (en) Information processing device, information processing method, and information processing system
JPWO2018180579A1 (en) Imaging control device, control method of imaging control device, and moving object
US20200191975A1 (en) Information processing apparatus, self-position estimation method, and program
CN112368598A (en) Information processing apparatus, information processing method, computer program, and mobile apparatus
US20230230368A1 (en) Information processing apparatus, information processing method, and program
US20200230820A1 (en) Information processing apparatus, self-localization method, program, and mobile body
US20220276655A1 (en) Information processing device, information processing method, and program
JP2022113054A (en) Information processing device, information processing method, program, and moving device
JP7363890B2 (en) Information processing device, information processing method and program
US20220012552A1 (en) Information processing device and information processing method
US20210042886A1 (en) Image processing apparatus, image processing method, and program
WO2022059489A1 (en) Information processing device, information processing method, and program
WO2020195969A1 (en) Information processing device, information processing method, and program
WO2020116204A1 (en) Information processing device, information processing method, program, moving body control device, and moving body
JP2024003806A (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16851229

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16851229

Country of ref document: EP

Kind code of ref document: A1