WO2019239471A1 - Driving assistance device, driving assistance system, and driving assistance method - Google Patents

Driving assistance device, driving assistance system, and driving assistance method

Info

Publication number
WO2019239471A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
sensor
unit
driving support
time
Prior art date
Application number
PCT/JP2018/022282
Other languages
French (fr)
Japanese (ja)
Inventor
健一 名倉
雄 末廣
響子 細井
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to JP2018559902A
Priority to PCT/JP2018/022282
Publication of WO2019239471A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes

Definitions

  • the present invention relates to a driving support device, a driving support system, and a driving support method that perform driving support for an autonomous vehicle.
  • the autonomous driving vehicle controls the driving of the own vehicle by grasping the surrounding situation from the information acquired from the sensor provided in the own vehicle.
  • the autonomous driving vehicle can acquire information on surrounding vehicles from sensors installed on the side of the road or the like, and can control the driving of the vehicle by grasping the surrounding situation.
  • Whether it acquires information from its own on-board sensors or from sensors installed on the roadside, an autonomous driving vehicle can reduce false detection of obstacles and avoid unnecessary sudden stops by using the detection results of a plurality of sensors.
  • On the other hand, even when an autonomous driving vehicle acquires the detection results of a plurality of sensors individually, a significant processing load is required to accurately grasp the surrounding situation by combining the detection results of the sensors.
  • To address this problem, the object detection device described in Patent Document 1 uses information acquired from a millimeter wave radar and a stereo camera to determine whether a radar-detected object and an image-detected object are the same object, and uses the information on objects determined to be the same object for vehicle travel control.
  • When determining whether the radar-detected object and the image-detected object are the same object, the object detection device described in Patent Document 1 estimates the type of the object corresponding to the radar-detected object based on the received intensity of the reflected wave of the millimeter wave radar, and sets the range for searching for the image-detected object corresponding to the radar-detected object according to the type of the object, thereby detecting the object with high accuracy while reducing the processing load.
  • However, the object detection device described in Patent Document 1 defines a search range in advance for each object type so as to include that type of object, and selects the search range corresponding to the object type estimated from the reception intensity of the reflected wave of the millimeter wave radar. Therefore, when the search range differs significantly from the actual size of the object, the search area may not be set properly and the detection accuracy may deteriorate. This problem applies both when the object detection device is mounted on an autonomous vehicle and when it is installed on the roadside.
  • the present invention has been made in view of the above, and an object of the present invention is to obtain a driving support device capable of improving the detection accuracy of an object while suppressing a processing load.
  • To solve this problem, the driving support device of the present invention includes: a receiving unit that receives first sensor information from a first sensor using a radar and second sensor information from a second sensor that acquires image information; a movement information extraction unit that uses the first sensor information to calculate a first position, which is the position of an object detected by the first sensor; and a time-series determination unit that manages position information by treating a plurality of objects derived from the same object, among the objects whose first positions are calculated by the movement information extraction unit, as one object.
  • The driving support device further includes: an object attribute detection unit that uses the second sensor information to determine the size of an object detected by the second sensor, a second position, which is the position of the object, and the type of the object, and generates attribute information including the determination results; an information integration unit that integrates and manages, in units of objects, the position information of the objects managed by the time-series determination unit and the attribute information; and an object position estimation unit that uses the integrated object-unit information to estimate the reachable range of the object at the reception time at which the first sensor information is next received.
  • The time-series determination unit manages position information for objects within the reachable range among the objects whose first positions are calculated by the movement information extraction unit.
  • the driving support device has an effect of improving the object detection accuracy while suppressing the processing load.
  • FIG. 1: a diagram showing a configuration example of a driving support system
  • FIG. 2: a block diagram showing a configuration example of a driving support device
  • FIG. 3: a flowchart showing the operation in which the driving support device distributes position information
  • FIG. 4: a diagram showing an example of the position of an object estimated by the object position estimation unit
  • FIG. 5: a diagram showing a first example of the reachable range of an object estimated by the object position estimation unit
  • FIG. 6: a diagram showing a second example of the reachable range of an object estimated by the object position estimation unit
  • FIG. 7: a diagram showing a third example of the reachable range of an object estimated by the object position estimation unit
  • FIG. 1 is a diagram illustrating a configuration example of a driving support system 400 according to Embodiment 1 of the present invention.
  • the driving support system 400 includes a driving support device 100, a sensor 101, a sensor 102, and a wireless communication device 103.
  • a case where the driving support system 400 is applied to a traffic system installed around an intersection will be described.
  • vehicles 201, 202, 203, bicycle 204, and pedestrians 205, 206, 207 exist around the intersection.
  • Of the vehicles 201 to 203, at least the vehicle 201 is an autonomous driving vehicle.
  • For the vehicle 201, the vehicle 203 turning right at the intersection, the bicycle 204 crossing the intersection, and the pedestrian 205 can be obstacles.
  • the bicycle 204 includes a person riding the bicycle 204.
  • the driving support device 100 acquires sensor information from the sensor 101 and the sensor 102. Based on the sensor information acquired from the sensors 101 and 102, the driving support device 100 generates position information indicating the position of an obstacle around the intersection, that is, an object, and transmits the position information to the wireless communication device 103.
  • the wireless communication device 103 distributes the position information received from the driving support device 100 to the vehicles 201 to 203.
  • the sensor 101 is, for example, a sensor using a radar.
  • the sensor 101 has a function of periodically detecting a distance from the sensor 101 to the vehicles 201 to 203 traveling on the main line, an angle indicating the direction of the vehicles 201 to 203 with respect to the direction of the sensor 101, and the like.
  • the sensor 101 may further detect at least one of the speed of the vehicles 201 to 203 and the traveling direction of the vehicles 201 to 203.
  • the sensor 101 transmits information on the detected object to the driving support apparatus 100 as sensor information.
  • the sensor 102 is, for example, a camera.
  • the sensor 102 has a function of periodically acquiring image information of an object.
  • the image information is, for example, a still image or a moving image.
  • the sensor 102 may acquire image information of an object only when it detects an object that moves within a shootable range.
  • the sensor 102 transmits image information of the detected object to the driving support apparatus 100 as sensor information.
  • the sensor 101 may be referred to as a first sensor and the sensor 102 may be referred to as a second sensor.
  • the sensor information transmitted from the sensor 101 may be referred to as first sensor information
  • the sensor information transmitted from the sensor 102 may be referred to as second sensor information.
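The patent does not prescribe a concrete message format for the sensor information. As a purely illustrative aid, the following Python sketch shows one possible shape for the first and second sensor information described above; the class and field names (FirstSensorInfo, distance_m, and so on) are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstSensorInfo:
    """One detection report from the first sensor (sensor 101, radar)."""
    reception_time: float                 # time the report was received, in seconds
    distance_m: float                     # distance from the sensor to the object
    angle_deg: float                      # direction of the object relative to the sensor's pointing direction
    speed_mps: Optional[float] = None     # optional measured speed
    heading_deg: Optional[float] = None   # optional measured traveling direction

@dataclass
class SecondSensorInfo:
    """One report from the second sensor (sensor 102, e.g. a camera)."""
    reception_time: float                 # time the report was received, in seconds
    image: bytes                          # still image or video frame as delivered by the sensor
```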
  • FIG. 2 is a block diagram illustrating a configuration example of the driving support apparatus 100 according to the present embodiment.
  • the driving support apparatus 100 includes a reception unit 110, a movement information extraction unit 111, an object attribute detection unit 112, an information integration unit 113, an object position estimation unit 114, a time series determination unit 115, and an information distribution unit 116. .
  • the receiving unit 110 receives sensor information, which is information about an object detected by the sensor 101, from the sensor 101.
  • the sensor information that the receiving unit 110 acquires from the sensor 101 includes, for example, a distance to an object, an angle, a speed, a traveling direction, and the like.
  • the receiving unit 110 receives sensor information that is information related to an object detected by the sensor 102 from the sensor 102.
  • the sensor information that the receiving unit 110 acquires from the sensor 102 is, for example, image information of the vehicle 202, the bicycle 204, and the pedestrian 205 that are present in the imageable range of the sensor 102.
  • In the present embodiment, it is assumed that the receiving unit 110 receives the first sensor information from the sensor 101 at a period of several tens of milliseconds, and receives the second sensor information from the sensor 102 at a longer period or at longer time intervals than the period at which the first sensor information is received.
  • However, the frequency with which the receiving unit 110 receives the sensor information is not limited to this.
  • the movement information extraction unit 111 extracts the first sensor information acquired from the sensor 101 from the sensor information received by the reception unit 110. Each time the movement information extraction unit 111 extracts the first sensor information, the movement information extraction unit 111 calculates the position of the object detected by the sensor 101 using the distance, the angle, and the like to the object included in the first sensor information. The position information of the object is generated.
  • the position information may be expressed by latitude and longitude, may be expressed by the distance from the starting point of the road, and the expression method is not limited as long as the position of the object can be uniquely identified.
  • the position of the object calculated by the movement information extraction unit 111 may be referred to as a first position.
  • The movement information extraction unit 111 associates the position information with the reception time of the first sensor information, and assigns a reflection point ID (IDentification) for identifying the position information to manage it.
  • When the first sensor information includes speed information, the movement information extraction unit 111 includes the speed of the object in the position information it manages.
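As a rough illustration of the processing attributed to the movement information extraction unit 111, the sketch below converts one radar report (distance and angle) into a position in a local east/north plane around the sensor and tags it with a reflection point ID. The sensor pose constants and the choice of a local Cartesian frame are assumptions made for the example; the patent allows any representation in which the position of the object is uniquely identified.

```python
import itertools
import math
from dataclasses import dataclass
from typing import Optional

# Assumed installation of sensor 101: its position in the local plane (metres)
# and its pointing direction measured clockwise from north.
SENSOR_X, SENSOR_Y = 0.0, 0.0
SENSOR_HEADING_DEG = 90.0

_reflection_ids = itertools.count(1)

@dataclass
class ReflectionPoint:
    reflection_id: int          # reflection point ID assigned to this piece of position information
    reception_time: float       # reception time of the first sensor information
    x: float                    # east coordinate in the local plane [m]
    y: float                    # north coordinate in the local plane [m]
    speed_mps: Optional[float]  # carried over when the radar reports a speed

def extract_position(reception_time: float, distance_m: float, angle_deg: float,
                     speed_mps: Optional[float] = None) -> ReflectionPoint:
    """Turn one radar measurement (distance, angle) into position information
    and assign a reflection point ID."""
    bearing = math.radians(SENSOR_HEADING_DEG + angle_deg)   # absolute bearing of the object
    x = SENSOR_X + distance_m * math.sin(bearing)
    y = SENSOR_Y + distance_m * math.cos(bearing)
    return ReflectionPoint(next(_reflection_ids), reception_time, x, y, speed_mps)
```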
  • the time series determination unit 115 acquires the position information to which the reflection point ID is assigned from the movement information extraction unit 111, and arranges the position information in order of reception time and manages them in time series.
  • the time-series determination unit 115 integrates and manages position information of objects that can be determined as the same object based on the reception time, the position of the object, the speed of the object, and the like. That is, the time-series determination unit 115 manages position information with a plurality of objects derived from the same object among the objects whose positions are calculated by the movement information extraction unit 111 as one object.
  • For example, the time-series determination unit 115 treats pieces of position information with reflection point IDs that have the same reception time, are close in position, and are moving at the same speed as position information derived from the same object.
  • Alternatively, the time-series determination unit 115 treats pieces of position information with reflection point IDs that have different reception times and are moving at a constant speed in a constant direction as position information derived from the same object.
  • The time-series determination unit 115 assigns a trajectory ID to objects regarded as the same object, and manages the position of the object to which the trajectory ID is assigned by linking it to the position information to which the reflection point IDs related to that object are assigned.
  • When the position information with reflection point IDs does not include speed information, the time-series determination unit 115 calculates the speed of the object from the difference between the positions indicated by position information at different times and the difference between those times, among the position information of the object to which the trajectory ID is assigned.
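The sketch below illustrates, under assumed tolerances, the kind of grouping described for the time-series determination unit 115: reflection points from the same scan that lie close together and move at about the same speed are treated as one object under one trajectory ID. The tolerance values and the simple nearest-member test are assumptions introduced for the example.

```python
import itertools
from typing import Dict, List, Optional, Tuple

POSITION_TOL_M = 1.5   # assumed: reflection points closer than this may belong to one object
SPEED_TOL_MPS = 0.5    # assumed: allowed speed difference within one object

_trajectory_ids = itertools.count(1)

# A reflection point is (reception_time, x, y, speed or None).
Point = Tuple[float, float, float, Optional[float]]

def group_same_scan(points: List[Point]) -> Dict[int, List[Point]]:
    """Group reflection points with the same reception time that are close in position
    and moving at (almost) the same speed, and assign one trajectory ID per group."""
    groups: Dict[int, List[Point]] = {}
    for p in points:
        t, x, y, v = p
        assigned = False
        for tid, members in groups.items():
            mt, mx, my, mv = members[-1]
            close = abs(x - mx) <= POSITION_TOL_M and abs(y - my) <= POSITION_TOL_M
            same_speed = v is None or mv is None or abs(v - mv) <= SPEED_TOL_MPS
            if t == mt and close and same_speed:
                members.append(p)
                assigned = True
                break
        if not assigned:
            groups[next(_trajectory_ids)] = [p]
    return groups
```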
  • the object attribute detection unit 112 extracts second sensor information acquired from the sensor 102 from the sensor information received by the reception unit 110.
  • the object attribute detection unit 112 generates attribute information indicating the attribute of the object using the second sensor information, that is, image information.
  • Specifically, the object attribute detection unit 112 performs image analysis on the image information to recognize an object, determines the time at which the object was detected, the full width of the object, the full length of the object, the position of the object, and the type of the object, and generates attribute information including the determination results.
  • the position of the object determined by the object attribute detection unit 112 may be referred to as a second position. As the type of object, a vehicle, a bicycle, a pedestrian, or the like is assumed.
  • the total width of the object and the total length of the object may be combined to determine the size of the object.
  • The object attribute detection unit 112 generates the attribute information of an object by combining the time at which the object was detected, the size of the object, the position of the object, and the type of the object, and assigns an object attribute ID for identifying the attribute information to manage it.
  • The time at which the object was detected may be the time at which the object overlaps a virtual line whose position on the lane is known, obtained by image analysis, or it may be the reception time of the second sensor information.
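For illustration only, the attribute information generated by the object attribute detection unit 112 can be thought of as a record like the one below. The image analysis that fills it in is outside the scope of this sketch, and the class and field names are assumptions.

```python
import itertools
from dataclasses import dataclass

_attribute_ids = itertools.count(1)

@dataclass
class AttributeInfo:
    """Attribute information for one object detected in the image information."""
    object_attribute_id: int   # object attribute ID identifying this record
    detection_time: float      # time at which the object was detected
    full_width_m: float        # full width of the object
    full_length_m: float       # full length of the object
    x: float                   # second position of the object (local plane, metres)
    y: float
    object_type: str           # "vehicle", "bicycle", "pedestrian", ...

def make_attribute_info(detection_time: float, full_width_m: float, full_length_m: float,
                        x: float, y: float, object_type: str) -> AttributeInfo:
    """Combine the image-analysis results into attribute information and assign an object attribute ID."""
    return AttributeInfo(next(_attribute_ids), detection_time,
                         full_width_m, full_length_m, x, y, object_type)
```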
  • The information integration unit 113 integrates and manages, in units of objects, the position information of objects managed by the time-series determination unit 115 and the attribute information managed by the object attribute detection unit 112. Specifically, the information integration unit 113 determines whether the object of the attribute information indicated by an object attribute ID and the object indicated by a trajectory ID are the same object, based on the time and the object position included in the attribute information and the time-series position information of the object managed with the trajectory ID. The information integration unit 113 assigns an object ID to objects determined to be the same object, links the trajectory ID and the object attribute ID related to that object, and integrates and manages the information related to the object.
  • When the object attribute detection unit 112 detects a new object attribute ID, the information integration unit 113 compares, for example, the position of the object included in the attribute information with the position of the object managed by the time-series determination unit 115, and determines that the compared objects are the same object when the position difference is equal to or less than a prescribed threshold value. When the time indicated by the attribute information differs from the times held in time series under the trajectory ID, the information integration unit 113 calculates the position corresponding to the time of the object attribute ID, for example by averaging the positions of the object at different times managed under the trajectory ID.
  • The information integration unit 113 also generates the position information to be distributed from the information distribution unit 116 to the wireless communication device 103, using the position information of the objects managed by the time-series determination unit 115 or the integrated object-unit information. For example, based on the position information of an object managed by the time-series determination unit 115, the information integration unit 113 calculates the position of the object at the time when the information distribution unit 116 next outputs position information to the wireless communication device 103, sets the calculated position as the position information to be distributed to the vehicles 201 to 203, and outputs it to the information distribution unit 116.
  • The position information to be distributed may indicate the center of the object or the center of the leading edge of the object in its traveling direction.
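The following sketch illustrates the two calculations described for the information integration unit 113: matching an attribute record to a trajectory by comparing positions against a threshold (averaging neighbouring track samples when the times do not line up), and extrapolating the latest track position to the distribution time assuming constant-velocity linear motion. The threshold value and helper names are assumptions.

```python
from typing import Dict, List, Optional, Tuple

MATCH_THRESHOLD_M = 2.0               # assumed threshold below which two positions are "the same object"

Sample = Tuple[float, float, float]   # (time, x, y) sample of a trajectory

def track_position_at(samples: List[Sample], t: float) -> Tuple[float, float]:
    """Approximate a trajectory's position at time t by averaging the two samples
    whose times are closest to t."""
    nearest = sorted(samples, key=lambda s: abs(s[0] - t))[:2]
    return (sum(s[1] for s in nearest) / len(nearest),
            sum(s[2] for s in nearest) / len(nearest))

def match_attribute_to_track(attr_time: float, attr_xy: Tuple[float, float],
                             tracks: Dict[int, List[Sample]]) -> Optional[int]:
    """Return the trajectory ID judged to be the same object as the attribute record,
    or None when no trajectory is within the threshold."""
    best_tid, best_dist = None, MATCH_THRESHOLD_M
    for tid, samples in tracks.items():
        x, y = track_position_at(samples, attr_time)
        dist = ((x - attr_xy[0]) ** 2 + (y - attr_xy[1]) ** 2) ** 0.5
        if dist <= best_dist:
            best_tid, best_dist = tid, dist
    return best_tid

def extrapolate(latest: Sample, vx: float, vy: float, t: float) -> Tuple[float, float]:
    """Move the latest sample forward to time t assuming constant-velocity linear motion,
    e.g. to obtain the position distributed at the next transmission time."""
    t0, x0, y0 = latest
    return x0 + vx * (t - t0), y0 + vy * (t - t0)
```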
  • The object position estimation unit 114 uses the object-unit information integrated by the information integration unit 113 to estimate the position at which the object to which the object ID is assigned is expected to exist at the reception time when the first sensor information is next received from the sensor 101.
  • the position where the object is expected to exist is defined as the reachable range.
  • the reach range varies depending on the speed of the object, the attribute of the object, and the like.
  • the information of the object unit integrated by the information integration unit 113 is the position information of the object indicated by the trajectory ID linked to the object ID and the attribute information indicated by the object attribute ID.
  • the object position estimation unit 114 notifies the time series determination unit 115 of information on the reachable range.
  • the information distribution unit 116 outputs the position information of the object acquired from the information integration unit 113 to the wireless communication device 103.
  • the information distribution unit 116 periodically distributes position information to the vehicles 201 to 203 via the wireless communication device 103.
  • the transmission cycle for distributing the position information is, for example, adjusted to the operation cycle of the automatic driving system, and is typically 100 ms, but is not limited thereto.
  • the information distribution unit 116 includes a storage unit 117 that stores the position information acquired from the information integration unit 113, and a transmission unit 118 that transmits the position information to the vehicles 201 to 203.
  • FIG. 3 is a flowchart showing an operation in which the driving support apparatus 100 according to the present embodiment distributes position information.
  • the receiving unit 110 determines whether sensor information is received from the sensor 101 or the sensor 102 (step S101). If the sensor information is not received (step S101: No), the receiving unit 110 waits until sensor information is received. When the receiving unit 110 receives the sensor information (step S101: Yes) and the sensor information is the first sensor information (step S102: Yes), the movement information extraction unit 111 receives the first sensor information. Using this, the position of the object is calculated, and the position information of the object is generated (step S103).
  • the movement information extraction unit 111 manages the position information by adding a reflection point ID.
  • the time series determination unit 115 acquires position information from the movement information extraction unit 111 and integrates position information of a plurality of objects derived from the same object (step S104).
  • the time series determination unit 115 performs management by assigning a trajectory ID to an object obtained by integrating position information.
  • When the sensor information received by the receiving unit 110 is not the first sensor information (step S102: No), the driving support device 100 omits the operations of step S103 and step S104.
  • When the sensor information received by the receiving unit 110 is the second sensor information (step S105: Yes), the object attribute detection unit 112 generates attribute information of the object using the second sensor information (step S106). The object attribute detection unit 112 assigns an object attribute ID to the attribute information and manages it.
  • When the sensor information received by the receiving unit 110 is not the second sensor information (step S105: No), the driving support device 100 omits the operation of step S106.
  • When the position information of objects integrated by the time-series determination unit 115 is managed and attribute information has been generated by the object attribute detection unit 112 (step S107: Yes), the information integration unit 113 integrates the position information of the objects managed by the time-series determination unit 115 and the attribute information managed by the object attribute detection unit 112 (step S108).
  • The information integration unit 113 assigns an object ID to objects estimated to be the same object.
  • For this condition it is sufficient that attribute information has been generated by the object attribute detection unit 112; the attribute information used for the integration may be the same as the attribute information used in the previous integration.
  • The object position estimation unit 114 then estimates the reachable range, which is the position where the object to which the object ID is assigned is expected to exist at the reception time when the first sensor information is next received from the sensor 101 (step S109).
  • the driving support device 100 returns to the process of step S101 when there is a prescribed period until the information distribution unit 116 distributes the position information next time (step S110: Yes).
  • the prescribed period is longer than the processing period from step S101 to step S109 described above in the driving support device 100.
  • Otherwise (step S110: No), the information integration unit 113 calculates the position of the object at the distribution time at which the information distribution unit 116 next transmits the position information to the wireless communication device 103 (step S111). For example, the information integration unit 113 corrects, that is, extrapolates, the latest position information currently managed by the time-series determination unit 115 to the position of the object at the distribution time, using constant-velocity linear motion as a model.
  • the information integration unit 113 outputs the calculated position of the object to the information distribution unit 116 as position information.
  • the information distribution unit 116 distributes the position information acquired from the information integration unit 113 after the distribution time is reached (step S112). Then, the driving assistance device 100 returns to the process of step S101.
  • When at least one of the conditions, namely that position information of objects integrated by the time-series determination unit 115 is managed and that attribute information has been generated by the object attribute detection unit 112, is not satisfied (step S107: No), the information integration unit 113 determines whether position information of objects whose position information has been integrated is managed by the time-series determination unit 115 (step S113).
  • the driving support apparatus 100 performs the process of step S110 when the position information of the object in which the position information is integrated is managed by the time-series determination unit 115 (step S113: Yes). The subsequent processing is as described above. If the position information of the object in which the position information is integrated is not managed by the time series determination unit 115 (step S113: No), the driving support apparatus 100 returns to the process of step S101.
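The flow of FIG. 3 can be summarized as the event loop sketched below. The `handlers` object bundling the processing units, its method names, and the timing margin are all assumptions introduced for the example; the loop only mirrors the branching of steps S101 to S113 described above.

```python
import time

def run_driving_support(receive, is_first, is_second, handlers, distribution_period_s=0.1):
    """Event loop mirroring FIG. 3. `receive` blocks until sensor information arrives,
    `is_first`/`is_second` classify it, and `handlers` bundles the processing units."""
    next_distribution = time.monotonic() + distribution_period_s
    while True:
        info = receive()                                   # S101: wait for sensor information
        if is_first(info):                                 # S102: first sensor information?
            points = handlers.extract_positions(info)      # S103: movement information extraction
            handlers.integrate_same_object(points)         # S104: time-series determination
        if is_second(info):                                # S105: second sensor information?
            handlers.generate_attributes(info)             # S106: object attribute detection
        if handlers.has_tracks() and handlers.has_attributes():    # S107
            handlers.integrate_tracks_and_attributes()     # S108: information integration
            handlers.estimate_reach_ranges()               # S109: object position estimation
        elif not handlers.has_tracks():                    # S113: nothing to distribute yet
            continue                                       # back to S101
        if next_distribution - time.monotonic() > handlers.processing_margin_s:   # S110
            continue                                       # enough time remains; back to S101
        positions = handlers.positions_at(next_distribution)       # S111: position at the distribution time
        handlers.distribute(positions)                     # S112: hand over to the wireless communication device
        next_distribution += distribution_period_s
```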
  • FIG. 4 is a diagram illustrating an example of an object position estimated by the object position estimation unit 114 according to the present embodiment.
  • The object position estimation unit 114 can recognize the size of the target object whose reachable range is to be estimated from the information integrated by the information integration unit 113, specifically from the size of the object included in the attribute information managed by the object attribute detection unit 112. As shown in FIG. 4, the full length of the target object whose reachable range is estimated is recognized as L1, and the full width as W1. Further, the object position estimation unit 114 recognizes the object as the vehicle 300 from the type of the object included in the attribute information.
  • The object position estimation unit 114 uses the information integrated by the information integration unit 113, specifically the speed of the object managed by the time-series determination unit 115, to estimate the position of the object at the reception time when the first sensor information is next received from the sensor 101.
  • the position of the vehicle 300 may be the center of the vehicle 300 or the center at the beginning of the traveling direction of the vehicle 300. In FIG. 4, the center at the beginning of the traveling direction of the vehicle 300 is the current vehicle position 301, that is, the reference position.
  • the object position estimation unit 114 estimates, for example, the position 301a of the object at the reception time when the first sensor information is next received from the sensor 101 using a constant velocity linear motion as a model.
  • When the position of the object is represented by the center of the object, the object position estimation unit 114 uses the information on the size of the object, that is, both the full width and the full length of the object; when the position of the object is represented by the center of the leading edge in the traveling direction, it uses the information on the full width of the object and need not use the information on the full length.
  • The object position estimation unit 114 uses the estimated position 301a of the object and the type of the object to set a trapezoidal area around the position 301a based on the acceleration change range and the course change angle that are set according to the type of the object, and treats the set area as the aforementioned reachable range.
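A minimal sketch of this trapezoidal construction is given below, assuming concrete formulas that the patent leaves open: the length follows from the speed and an acceleration range over the time until the next radar reception, the near edge adds a lateral margin to the full width W1, and the far edge widens with the course change angle. All numeric margins and formulas are assumptions made for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class TrapezoidReach:
    """Trapezoidal reachable range ahead of the estimated position 301a."""
    near_width_m: float   # width at the estimated position (corresponds to W2 in FIG. 5)
    far_width_m: float    # width at the far edge (corresponds to W3 in FIG. 5)
    length_m: float       # extent along the traveling direction (corresponds to L2 in FIG. 5)

def trapezoid_reach(full_width_m: float, speed_mps: float, dt_s: float,
                    max_accel_mps2: float, course_change_deg: float,
                    lateral_margin_m: float = 0.5) -> TrapezoidReach:
    """Build a trapezoidal reachable range from the object size, its speed and the
    acceleration range / course change angle chosen for its type."""
    length = speed_mps * dt_s + 0.5 * max_accel_mps2 * dt_s ** 2        # farthest reachable distance
    near_width = full_width_m + 2.0 * lateral_margin_m                  # W1 plus a course-change margin
    far_width = near_width + 2.0 * length * math.tan(math.radians(course_change_deg))
    return TrapezoidReach(near_width, far_width, length)
```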
  • FIG. 5 is a diagram illustrating a first example of the object reachable range estimated by the object position estimation unit 114 according to the present embodiment.
  • The object position estimation unit 114 sets the reachable range width W2 by adding a margin for course changes to the full width W1 of the object, that is, the vehicle 300, and sets the reachable range width W3 by calculation from the maximum acceleration, the course change angle, and the like.
  • the object position estimation unit 114 calculates the distance L2 from, for example, the type of object, that is, the range of vehicle acceleration.
  • the object position estimation unit 114 may change the size of the reach range and the shape of the reach range according to the type of the object.
  • FIG. 6 is a diagram illustrating a second example of the object reachable range estimated by the object position estimation unit 114 according to the present embodiment.
  • the object type is a bicycle.
  • The object position estimation unit 114 estimates the position 302a of the object at the reception time when the first sensor information is next received from the sensor 101, and estimates a trapezoidal reachable range indicated by the width W21, the width W22, and the distance L20.
  • FIG. 7 is a diagram illustrating a third example of the object reachable range estimated by the object position estimation unit 114 according to the present embodiment.
  • the object type is a pedestrian.
  • For a pedestrian, the acceleration change range is smaller than those of a vehicle and a bicycle, but the course change angle is larger, so the object position estimation unit 114 sets a circular reachable range. Accordingly, based on the reference position of the object, that is, the current pedestrian position 303, the object position estimation unit 114 estimates the position 303a of the object at the reception time when the first sensor information is next received from the sensor 101, and estimates a circular reachable range indicated by the width W30 and the distance L30.
  • The object position estimation unit 114 may also set the shape of the reachable range in consideration of the current speed of the object in addition to its type. As described above, the object position estimation unit 114 can set the size of the reachable range based on the size, type, and speed of the object, and can set the shape of the reachable range based on the type of the object.
  • After the object position estimation unit 114 estimates the reachable range of the object (step S109), the time-series determination unit 115 uses the estimated reachable range in the processing for integrating the position information of a plurality of objects derived from the same object (step S104). That is, the time-series determination unit 115 determines, within the reachable range, whether an object whose position has been calculated by the movement information extraction unit 111 is the same as the object for which the reachable range was estimated.
  • Specifically, the time-series determination unit 115 uses the reachable range estimated by the object position estimation unit 114 as a threshold for the position information with reflection point IDs obtained from the movement information extraction unit 111, and filters the position information used for the determination.
  • The time-series determination unit 115 identifies the position information obtained as a result of the filtering as position information of the same object, and performs position update processing in order of increasing distance. In the position update processing, the time-series determination unit 115 may calculate an average of the positions, or may estimate the position using a Kalman filter or the like. In this way, the time-series determination unit 115 manages position information for objects within the reachable range among the objects whose positions are calculated by the movement information extraction unit 111.
  • By limiting the processing target region to the reachable range, the time-series determination unit 115 efficiently selects only the necessary object position information, so that the processing load can be reduced.
  • The time-series determination unit 115 can also follow changes in position caused by the movement of the object.
  • In addition, the time-series determination unit 115 may estimate the position of the object at the current reception time from the position, speed, and the like of the object at the previous reception time, for example using constant-velocity linear motion as a model, and may regard the objects as the same object when the difference between the estimated position and the actual position of the object is within a prescribed threshold value.
  • The time-series determination unit 115 may also estimate the position of the object at the current reception time using a Kalman filter or the like. When the position difference is larger than the prescribed threshold value, the time-series determination unit 115 determines that the objects whose positions are compared are different objects.
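The filtering performed by the time-series determination unit 115 can be pictured as in the sketch below: only reflection points that fall inside the reachable range are considered when updating the track of the corresponding object. The point-in-trapezoid test and the reuse of the TrapezoidReach record from the earlier sketch are assumptions made for illustration.

```python
import math
from typing import Iterable, List, Tuple

def inside_trapezoid(point: Tuple[float, float], apex: Tuple[float, float],
                     heading_rad: float, reach) -> bool:
    """Return True when `point` lies inside the trapezoidal reachable range that starts at
    the estimated position `apex` and opens along the traveling direction `heading_rad`.
    `reach` is expected to expose near_width_m, far_width_m and length_m attributes."""
    dx, dy = point[0] - apex[0], point[1] - apex[1]
    lon = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)    # along the traveling direction
    lat = -dx * math.sin(heading_rad) + dy * math.cos(heading_rad)   # across the traveling direction
    if not 0.0 <= lon <= reach.length_m:
        return False
    frac = lon / reach.length_m if reach.length_m > 0.0 else 0.0
    half_width = 0.5 * (reach.near_width_m + frac * (reach.far_width_m - reach.near_width_m))
    return abs(lat) <= half_width

def filter_by_reach(points: Iterable[Tuple[float, float]], apex: Tuple[float, float],
                    heading_rad: float, reach) -> List[Tuple[float, float]]:
    """Keep only the reflection points inside the reachable range; the track of the object
    is then updated only from these points (e.g. by averaging or with a Kalman filter)."""
    return [p for p in points if inside_trapezoid(p, apex, heading_rad, reach)]
```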
  • The information distribution unit 116 distributes the position information to the vehicles 201 to 203.
  • In addition to the position information, the reachable range estimated by the object position estimation unit 114 and the attribute information generated by the object attribute detection unit 112 may also be distributed.
  • The time-series determination unit 115 may perform its processing at the same processing cycle as the information integration unit 113, or may perform the processing each time position information is acquired from the movement information extraction unit 111.
  • In the present embodiment, the driving support system 400 is installed on the roadside, but the driving support device 100 and the sensors 101 and 102 of the driving support system 400 may instead be mounted on an autonomous driving vehicle.
  • In this case, the driving assistance device 100 generates position information for the autonomous driving vehicle in which it is mounted.
  • FIG. 8 is a diagram illustrating an example of a hardware configuration of the driving support apparatus 100 according to the present embodiment.
  • the driving support apparatus 100 includes a CPU (Central Processing Unit) 40, a ROM (Read Only Memory) 41, a RAM (Random Access Memory) 42, a sensor IF (Interface) 43, a communication IF 44, and a memory 45.
  • the CPU 40 controls the entire driving support device 100.
  • the ROM 41 stores programs such as a boot program, a communication program, and a data analysis program.
  • the RAM 42 is used as a work area for the CPU 40.
  • the sensor IF 43 is connected to various sensors such as a radar and a lidar, and functions as an interface between the various sensors and the CPU 40.
  • the communication IF 44 is connected to the autonomous driving vehicle via radio and functions as an interface between the autonomous driving vehicle and the CPU 40.
  • a communication network such as a LAN (Local Area Network) may exist between the sensor IF 43 and the various sensors and between the communication IF 44 and the autonomous driving vehicle.
  • One or both of the sensor IF 43 and the communication IF 44 are also connected to a communication network such as the Internet, and function as an interface between the communication network and the CPU 40.
  • The functions of the receiving unit 110, the movement information extraction unit 111, the object attribute detection unit 112, the information integration unit 113, the object position estimation unit 114, and the time-series determination unit 115 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as a program and stored in the memory 45. The CPU 40 implements the function of each unit by reading and executing the program stored in the memory 45.
  • As described above, according to the present embodiment, the driving support device 100 manages position information by treating a plurality of objects derived from the same object, among the objects whose positions are calculated using the sensor 101 that uses a radar, as one object, and integrates that position information with the attribute information of the objects detected by the sensor 102 that acquires image information.
  • The driving support device 100 uses the integrated information to estimate the reachable range of the object at the reception time when the first sensor information is next received from the sensor 101, and processes the objects detected by the sensor 101 within that reachable range.
  • As a result, the driving support device 100 can improve the detection accuracy of objects existing around the driving support device 100 while suppressing the processing load.
  • The configuration described in the above embodiment shows one example of the contents of the present invention; it can be combined with other known techniques, and a part of the configuration can be omitted or changed without departing from the gist of the present invention.
  • 100 driving support device, 101, 102 sensor, 103 wireless communication device, 110 receiving unit, 111 movement information extraction unit, 112 object attribute detection unit, 113 information integration unit, 114 object position estimation unit, 115 time-series determination unit, 116 information distribution unit, 117 storage unit, 118 transmission unit, 201 to 203, 300 vehicle, 204 bicycle, 205 to 207 pedestrian, 400 driving support system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present invention comprises: a receiving unit (110) that receives first sensor information from a first sensor, and second sensor information from a second sensor; a movement information extraction unit (111) that uses the first sensor information to calculate the position of an object detected by the first sensor; a time-series determination unit (115) that manages position information by deeming a plurality of objects derived from the same object to be one object; an object attribute detection unit (112) that uses the second sensor information to generate attribute information; an information integration unit (113) that integrates and manages, in object units, the position information and the attribute information of objects managed by the time-series determination unit (115); and an object position estimation unit (114) that uses the integrated information in object units to estimate an arrival range of the object at the time the next first sensor information is received. The time-series determination unit (115) manages position information for an object within the arrival range, from among objects for which a first position was calculated by the movement information extraction unit (111).

Description

Driving support device, driving support system, and driving support method
The present invention relates to a driving support device, a driving support system, and a driving support method that provide driving support for autonomous vehicles.
In the advanced driving support systems and automatic driving systems that have been attracting attention recently, an autonomous driving vehicle controls its own driving by grasping the surrounding situation from information acquired from sensors provided in the vehicle. An autonomous driving vehicle can also acquire information on surrounding vehicles from sensors installed on the roadside and control its own driving by grasping the surrounding situation. In either case, whether it acquires information from its own sensors or from sensors installed on the roadside, an autonomous driving vehicle can reduce false detection of obstacles and avoid unnecessary sudden stops by using the detection results of a plurality of sensors. On the other hand, even when an autonomous driving vehicle acquires the detection results of a plurality of sensors individually, a significant processing load is required to accurately grasp the surrounding situation by combining the detection results of the sensors.
To address this problem, the object detection device described in Patent Document 1 uses information acquired from a millimeter wave radar and a stereo camera to determine whether a radar-detected object and an image-detected object are the same object, and uses the information on objects determined to be the same object for vehicle travel control. When determining whether the radar-detected object and the image-detected object are the same object, the object detection device described in Patent Document 1 estimates the type of the object corresponding to the radar-detected object based on the received intensity of the reflected wave of the millimeter wave radar, and sets the range for searching for the image-detected object corresponding to the radar-detected object according to the type of the object, thereby detecting the object with high accuracy while reducing the processing load.
Patent Document 1: JP 2007-132748 A
However, the object detection device described in Patent Document 1 defines a search range in advance for each object type so as to include that type of object, and selects the search range corresponding to the object type estimated from the reception intensity of the reflected wave of the millimeter wave radar. Therefore, when the search range differs significantly from the actual size of the object, the search area may not be set properly and the detection accuracy may deteriorate. This problem applies both when the object detection device is mounted on an autonomous vehicle and when it is installed on the roadside.
The present invention has been made in view of the above, and an object of the present invention is to obtain a driving support device capable of improving the detection accuracy of an object while suppressing the processing load.
To solve the above problem and achieve the object, the driving support device of the present invention includes: a receiving unit that receives first sensor information from a first sensor using a radar and second sensor information from a second sensor that acquires image information; a movement information extraction unit that uses the first sensor information to calculate a first position, which is the position of an object detected by the first sensor; and a time-series determination unit that manages position information by treating a plurality of objects derived from the same object, among the objects whose first positions are calculated by the movement information extraction unit, as one object. The driving support device further includes: an object attribute detection unit that uses the second sensor information to determine the size of an object detected by the second sensor, a second position, which is the position of the object, and the type of the object, and generates attribute information including the determination results; an information integration unit that integrates and manages, in units of objects, the position information of the objects managed by the time-series determination unit and the attribute information; and an object position estimation unit that uses the integrated object-unit information to estimate the reachable range of the object at the reception time at which the first sensor information is next received. The time-series determination unit manages position information for objects within the reachable range among the objects whose first positions are calculated by the movement information extraction unit.
The driving support device according to the present invention has the effect of improving object detection accuracy while suppressing the processing load.
FIG. 1 is a diagram showing a configuration example of a driving support system. FIG. 2 is a block diagram showing a configuration example of a driving support device. FIG. 3 is a flowchart showing the operation in which the driving support device distributes position information. FIG. 4 is a diagram showing an example of the position of an object estimated by the object position estimation unit. FIG. 5 is a diagram showing a first example of the reachable range of an object estimated by the object position estimation unit. FIG. 6 is a diagram showing a second example of the reachable range of an object estimated by the object position estimation unit. FIG. 7 is a diagram showing a third example of the reachable range of an object estimated by the object position estimation unit. FIG. 8 is a diagram showing an example of the hardware configuration of the driving support device.
Hereinafter, a driving support device, a driving support system, and a driving support method according to an embodiment of the present invention will be described in detail with reference to the drawings. Note that the present invention is not limited to this embodiment.
Embodiment.
FIG. 1 is a diagram illustrating a configuration example of a driving support system 400 according to Embodiment 1 of the present invention. The driving support system 400 includes a driving support device 100, a sensor 101, a sensor 102, and a wireless communication device 103. In the present embodiment, a case where the driving support system 400 is applied to a traffic system installed around an intersection will be described; this is merely an example, and the system can also be applied to situations other than that shown in FIG. 1, such as a parking lot. It is assumed that vehicles 201, 202, and 203, a bicycle 204, and pedestrians 205, 206, and 207 exist around the intersection. Of the vehicles 201 to 203, at least the vehicle 201 is an autonomous driving vehicle. For the vehicle 201, the vehicle 203 turning right at the intersection, the bicycle 204 crossing the intersection, and the pedestrian 205 can be obstacles. Note that the bicycle 204 includes a person riding the bicycle 204.
 運転支援装置100は、センサ101およびセンサ102からセンサ情報を取得する。運転支援装置100は、センサ101およびセンサ102から取得したセンサ情報に基づいて、交差点周辺の障害物すなわち物体の位置を示す位置情報を生成し、無線通信装置103に送信する。無線通信装置103は、運転支援装置100から受信した位置情報を、車両201~203へ配信する。 The driving support device 100 acquires sensor information from the sensor 101 and the sensor 102. Based on the sensor information acquired from the sensors 101 and 102, the driving support device 100 generates position information indicating the position of an obstacle around the intersection, that is, an object, and transmits the position information to the wireless communication device 103. The wireless communication device 103 distributes the position information received from the driving support device 100 to the vehicles 201 to 203.
 センサ101は、例えば、レーダを用いたセンサである。センサ101は、センサ101から本線を走行する車両201~203までの距離、センサ101の指向方向に対する車両201~203の方向を示す角度などを周期的に検出する機能を有する。センサ101は、さらに、車両201~203の速度または車両201~203の進行方向のうち少なくとも1つを検出してもよい。センサ101は、検出した物体の情報をセンサ情報として運転支援装置100に送信する。センサ102は、例えば、カメラである。センサ102は、物体の画像情報を周期的に取得する機能を有する。画像情報は、例えば、静止画像または動画像である。センサ102は、撮影可能な範囲において移動する物体を検出した場合のみ、物体の画像情報を取得してもよい。センサ102は、検出した物体の画像情報をセンサ情報として運転支援装置100に送信する。以降の説明において、センサ101を第1のセンサと称し、センサ102を第2のセンサと称することがある。また、センサ101が送信するセンサ情報を第1のセンサ情報と称し、センサ102が送信するセンサ情報を第2のセンサ情報と称することがある。 The sensor 101 is, for example, a sensor using a radar. The sensor 101 has a function of periodically detecting a distance from the sensor 101 to the vehicles 201 to 203 traveling on the main line, an angle indicating the direction of the vehicles 201 to 203 with respect to the direction of the sensor 101, and the like. The sensor 101 may further detect at least one of the speed of the vehicles 201 to 203 and the traveling direction of the vehicles 201 to 203. The sensor 101 transmits information on the detected object to the driving support apparatus 100 as sensor information. The sensor 102 is, for example, a camera. The sensor 102 has a function of periodically acquiring image information of an object. The image information is, for example, a still image or a moving image. The sensor 102 may acquire image information of an object only when it detects an object that moves within a shootable range. The sensor 102 transmits image information of the detected object to the driving support apparatus 100 as sensor information. In the following description, the sensor 101 may be referred to as a first sensor and the sensor 102 may be referred to as a second sensor. In addition, the sensor information transmitted from the sensor 101 may be referred to as first sensor information, and the sensor information transmitted from the sensor 102 may be referred to as second sensor information.
 運転支援装置100の構成について説明する。図2は、本実施の形態に係る運転支援装置100の構成例を示すブロック図である。運転支援装置100は、受信部110と、移動情報抽出部111と、物体属性検知部112と、情報統合部113と、物体位置推定部114と、時系列判定部115と、情報配信部116と、を備える。 The configuration of the driving support device 100 will be described. FIG. 2 is a block diagram illustrating a configuration example of the driving support apparatus 100 according to the present embodiment. The driving support apparatus 100 includes a reception unit 110, a movement information extraction unit 111, an object attribute detection unit 112, an information integration unit 113, an object position estimation unit 114, a time series determination unit 115, and an information distribution unit 116. .
 受信部110は、センサ101から、センサ101で検出された物体に関する情報であるセンサ情報を受信する。受信部110がセンサ101から取得するセンサ情報には、例えば、物体までの距離、角度、速度、進行方向などが含まれる。また、受信部110は、センサ102から、センサ102で検出された物体に関する情報であるセンサ情報を受信する。受信部110がセンサ102から取得するセンサ情報は、例えば、センサ102の撮影可能な範囲に存在する車両202、自転車204、歩行者205の画像情報である。本実施の形態では、受信部110は、センサ101から数十msの周期で第1のセンサ情報を受信し、センサ102からは、第1のセンサ情報を受信する周期よりも長い周期または長い時間間隔で第2のセンサ情報を受信することを想定しているが、受信部110がセンサ情報を受信する頻度はこれに限定されない。 The receiving unit 110 receives sensor information, which is information about an object detected by the sensor 101, from the sensor 101. The sensor information that the receiving unit 110 acquires from the sensor 101 includes, for example, a distance to an object, an angle, a speed, a traveling direction, and the like. In addition, the receiving unit 110 receives sensor information that is information related to an object detected by the sensor 102 from the sensor 102. The sensor information that the receiving unit 110 acquires from the sensor 102 is, for example, image information of the vehicle 202, the bicycle 204, and the pedestrian 205 that are present in the imageable range of the sensor 102. In the present embodiment, the receiving unit 110 receives the first sensor information from the sensor 101 at a period of several tens of ms, and has a longer period or longer time than the period at which the sensor 102 receives the first sensor information. Although it is assumed that the second sensor information is received at intervals, the frequency with which the receiving unit 110 receives the sensor information is not limited to this.
 移動情報抽出部111は、受信部110で受信されたセンサ情報のうちセンサ101から取得した第1のセンサ情報を抽出する。移動情報抽出部111は、第1のセンサ情報を抽出するごとに、センサ101で検出された物体について、第1のセンサ情報に含まれる物体までの距離、角度などを用いて物体の位置を算出し、物体の位置情報を生成する。位置情報は、緯度経度で表現してもよいし、道路の起点からの距離で表現してもよいし、物体の位置が一意に特定できれば表現方法は限定されない。移動情報抽出部111で算出された物体の位置を第1の位置と称することがある。移動情報抽出部111は、位置情報と第1のセンサ情報の受信時刻とを関連付けて、位置情報を識別するための反射点ID(IDentification)を付与して管理する。移動情報抽出部111は、第1のセンサ情報に速度の情報が含まれている場合、位置情報に物体の速度の情報を含めて管理する。 The movement information extraction unit 111 extracts the first sensor information acquired from the sensor 101 from the sensor information received by the reception unit 110. Each time the movement information extraction unit 111 extracts the first sensor information, the movement information extraction unit 111 calculates the position of the object detected by the sensor 101 using the distance, the angle, and the like to the object included in the first sensor information. The position information of the object is generated. The position information may be expressed by latitude and longitude, may be expressed by the distance from the starting point of the road, and the expression method is not limited as long as the position of the object can be uniquely identified. The position of the object calculated by the movement information extraction unit 111 may be referred to as a first position. The movement information extraction unit 111 associates the position information with the reception time of the first sensor information and assigns and manages a reflection point ID (IDentification) for identifying the position information. When the first sensor information includes speed information, the movement information extraction unit 111 manages the position information by including the speed information of the object.
 時系列判定部115は、移動情報抽出部111から、反射点IDが付与された位置情報を取得し、位置情報を受信時刻順に並べて時系列で管理する。時系列判定部115は、受信時刻、物体の位置、物体の速度などに基づいて、同一の物体と判定できる物体の位置情報を統合して管理する。すなわち、時系列判定部115は、移動情報抽出部111で位置が算出された物体のうち、同一物体に由来する複数の物体を1つの物体として位置情報を管理する。例えば、時系列判定部115は、反射点IDが付与された位置情報のうち、同じ受信時刻のもので、位置が近く同じ速度で移動しているものを同一物体に由来する位置情報とする。または、時系列判定部115は、反射点IDが付与された位置情報のうち、異なる受信時刻のもので、一定の方向に等速度で移動しているものを同一物体に由来する位置情報とする。時系列判定部115は、同一物体とみなされた物体に軌道IDを付与し、当該物体と関連する反射点IDが付与された位置情報と紐付けて、軌道IDが付与された物体の位置を管理する。なお、時系列判定部115は、反射点IDに速度の情報が含まれていない場合、軌道IDが付与された物体の位置情報のうち、時刻の異なる位置情報で示される位置の差分と、時刻の差分とから物体の速度を算出する。 The time series determination unit 115 acquires the position information to which the reflection point ID is assigned from the movement information extraction unit 111, and arranges the position information in order of reception time and manages them in time series. The time-series determination unit 115 integrates and manages position information of objects that can be determined as the same object based on the reception time, the position of the object, the speed of the object, and the like. That is, the time-series determination unit 115 manages position information with a plurality of objects derived from the same object among the objects whose positions are calculated by the movement information extraction unit 111 as one object. For example, the time-series determination unit 115 uses the position information with the reflection point ID that has the same reception time and whose position is close and moving at the same speed as the position information derived from the same object. Alternatively, the time-series determination unit 115 uses, as position information derived from the same object, position information with different reflection times among the position information assigned with the reflection point ID and moving at a constant speed in a certain direction. . The time-series determination unit 115 assigns a trajectory ID to an object regarded as the same object, and associates the position of the object to which the trajectory ID is assigned with the position information to which the reflection point ID related to the object is assigned. to manage. In addition, when the reflection point ID does not include speed information, the time series determination unit 115 determines the difference between the position indicated by the position information at different times and the time among the position information of the object to which the trajectory ID is assigned. The speed of the object is calculated from the difference between the two.
 The object attribute detection unit 112 extracts, from the sensor information received by the reception unit 110, the second sensor information acquired from sensor 102. Using the second sensor information, that is, image information, the object attribute detection unit 112 generates attribute information indicating the attributes of the object. Specifically, the object attribute detection unit 112 performs image analysis on the image information to recognize the object, determines the time at which the object was detected, the full width of the object, the full length of the object, the position of the object, and the type of the object, and generates attribute information including the determination results. The position of the object determined by the object attribute detection unit 112 may be referred to as the second position. The type of object is assumed to be a vehicle, a bicycle, a pedestrian, or the like. The full width and the full length of the object may be combined and treated as the size of the object. The object attribute detection unit 112 combines the time at which the object was detected, the size of the object, the position of the object, and the type of the object to generate the attribute information of the object, assigns an object attribute ID for identifying the attribute information, and manages it. The time at which the object was detected may be the instant, obtained from image analysis, at which the object overlaps a virtual line whose position on the lane is known, or it may be the reception time of the second sensor information.
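 For illustration, the attribute information could be held in a record such as the following; the field names are hypothetical and only mirror the items listed above (detection time, size, position, type).

```python
from dataclasses import dataclass

@dataclass
class ObjectAttribute:
    attribute_id: int               # object attribute ID
    detected_time: float            # detection time (line-crossing time or reception time)
    width: float                    # full width of the object [m]
    length: float                   # full length of the object [m]
    position: tuple[float, float]   # second position, in the same frame as the radar data
    kind: str                       # "vehicle", "bicycle", "pedestrian", ...
```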
 The information integration unit 113 integrates and manages, on a per-object basis, the position information of the object managed by the time series determination unit 115 and the attribute information managed by the object attribute detection unit 112. Specifically, based on the time and the position of the object included in the attribute information and the time-series position information of the object managed under the trajectory ID, the information integration unit 113 determines whether the object in the attribute information indicated by the object attribute ID and the object indicated by the trajectory ID are the same object. The information integration unit 113 assigns an object ID to objects determined to be the same object, links the trajectory ID and the object attribute ID related to that object, and integrates and manages the information related to the object. When the object attribute detection unit 112 newly detects an object attribute ID, the information integration unit 113, for example, compares the position of the object included in the attribute information with the position of the object managed by the time series determination unit 115, and determines that the compared objects are the same object when the position difference is equal to or less than a prescribed threshold. When the time indicated by the attribute information differs from the times held in the time series under the trajectory ID, the information integration unit 113 calculates the position corresponding to the time of the object attribute ID, for example by averaging the positions of the object at different times managed under the trajectory ID. The information integration unit 113 also generates the position information to be distributed from the information distribution unit 116 to the wireless communication device 103, using the position information of the object managed by the time series determination unit 115 or the integrated per-object information. For example, based on the position information of the object managed by the time series determination unit 115, the information integration unit 113 calculates the position of the object at the time when the information distribution unit 116 next outputs position information to the wireless communication device 103. The information integration unit 113 treats the calculated position of the object as the position information to be distributed to the vehicles 201 to 203 and outputs it to the information distribution unit 116. The distributed position information may indicate the center of the object or the center of the leading edge of the object in its traveling direction.
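 A simplified sketch of this association step, assuming the track and attribute records from the earlier sketches, is shown below; the 2 m threshold is an assumed value, and the averaging branch mirrors the example of averaging positions managed at different times.

```python
import math
from itertools import count

_object_ids = count(1)

def integrate(tracks, attribute, position_threshold=2.0):
    """Link an attribute record to the trajectory whose position at the attribute's
    detection time differs from the second position by no more than the threshold."""
    best = None
    for track in tracks:
        pts = sorted(track["points"], key=lambda q: q.rx_time)
        before = [p for p in pts if p.rx_time <= attribute.detected_time]
        after = [p for p in pts if p.rx_time >= attribute.detected_time]
        if before and after:
            # Times differ: average the positions managed at the surrounding times.
            p0, p1 = before[-1], after[0]
            pos = ((p0.x + p1.x) / 2.0, (p0.y + p1.y) / 2.0)
        else:
            pos = (pts[-1].x, pts[-1].y)
        d = math.hypot(pos[0] - attribute.position[0], pos[1] - attribute.position[1])
        if d <= position_threshold and (best is None or d < best[0]):
            best = (d, track)
    if best is None:
        return None  # no same-object match; the records stay managed separately
    return {"object_id": next(_object_ids),
            "trajectory_id": best[1]["trajectory_id"],
            "attribute_id": attribute.attribute_id}
```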
 Using the per-object information integrated by the information integration unit 113, the object position estimation unit 114 estimates the position where the object assigned the object ID is expected to be at the reception time when the first sensor information is next received from sensor 101. The position where the object is expected to exist is defined as the reach range. The reach range varies with the speed of the object, the attributes of the object, and so on. The per-object information integrated by the information integration unit 113 consists of the position information of the object indicated by the trajectory ID linked to the object ID and the attribute information indicated by the object attribute ID. The object position estimation unit 114 notifies the time series determination unit 115 of the reach range.
 The information distribution unit 116 outputs the position information of the object acquired from the information integration unit 113 to the wireless communication device 103. Via the wireless communication device 103, the information distribution unit 116 periodically distributes the position information to the vehicles 201 to 203. The transmission cycle for distributing the position information is, for example, matched to the operation cycle of the automated driving system and is typically 100 ms, but it is not limited to this. The information distribution unit 116 includes a storage unit 117 that stores the position information acquired from the information integration unit 113 and a transmission unit 118 that transmits the position information to the vehicles 201 to 203.
 Next, the operation in which the driving support device 100 periodically distributes position information will be described. FIG. 3 is a flowchart showing the operation in which the driving support device 100 according to the present embodiment distributes position information. First, in the driving support device 100, the reception unit 110 determines whether sensor information has been received from sensor 101 or sensor 102 (step S101). If no sensor information has been received (step S101: No), the reception unit 110 waits until sensor information is received. If the reception unit 110 receives sensor information (step S101: Yes) and the sensor information is the first sensor information (step S102: Yes), the movement information extraction unit 111 calculates the position of the object using the first sensor information and generates the position information of the object (step S103). The movement information extraction unit 111 assigns a reflection point ID to the position information and manages it. The time series determination unit 115 acquires the position information from the movement information extraction unit 111 and integrates the position information of a plurality of objects derived from the same object (step S104). The time series determination unit 115 assigns a trajectory ID to the object whose position information has been integrated and manages it. If the sensor information received by the reception unit 110 is not the first sensor information (step S102: No), the driving support device 100 skips the operations of steps S103 and S104.
 If the sensor information received by the reception unit 110 is the second sensor information (step S105: Yes), the object attribute detection unit 112 generates the attribute information of the object using the second sensor information (step S106). The object attribute detection unit 112 assigns an object attribute ID to the attribute information and manages it. If the sensor information received by the reception unit 110 is not the second sensor information (step S105: No), the driving support device 100 skips the operation of step S106.
 If the position information of objects integrated by the time series determination unit 115 is being managed and attribute information has been generated by the object attribute detection unit 112 (step S107: Yes), the information integration unit 113 integrates the position information of the object managed by the time series determination unit 115 and the attribute information managed by the object attribute detection unit 112 (step S108). The information integration unit 113 assigns an object ID to objects estimated to be the same object. Note that it is sufficient that attribute information has been generated by the object attribute detection unit 112; the attribute information used for integration may be the same as that used in the previous integration.
 Based on the position information of the object indicated by the trajectory ID linked to the object ID by the information integration unit 113 and the attribute information indicated by the object attribute ID, the object position estimation unit 114 estimates the reach range, that is, the position where the object assigned the object ID is expected to be at the reception time when the first sensor information is next received from sensor 101 (step S109).
 If there is still a prescribed period before the information distribution unit 116 next distributes position information (step S110: Yes), the driving support device 100 returns to the processing of step S101. The prescribed period is set longer than the processing period of steps S101 through S109 in the driving support device 100 described above. If there is no longer a prescribed period before the information distribution unit 116 next distributes position information (step S110: No), the information integration unit 113 calculates the position of the object at the distribution time at which the information distribution unit 116 next transmits position information to the wireless communication device 103 (step S111). For example, using constant-velocity linear motion as a model, the information integration unit 113 corrects, that is, calculates, the latest position information currently managed by the time series determination unit 115 to the position of the object at the distribution time. The information integration unit 113 outputs the calculated position of the object to the information distribution unit 116 as position information. When the distribution time arrives, the information distribution unit 116 distributes the position information acquired from the information integration unit 113 (step S112). The driving support device 100 then returns to the processing of step S101.
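 The correction to the distribution time under a constant-velocity model might be sketched as follows, again assuming the track records introduced in the earlier sketches; this is an illustration, not the implementation.

```python
def position_at(track, t_out):
    """Correct the latest managed position to the distribution time t_out, using
    constant-velocity straight-line motion between the last two radar updates."""
    pts = sorted(track["points"], key=lambda q: q.rx_time)
    last = pts[-1]
    if len(pts) >= 2:
        prev = pts[-2]
        dt = last.rx_time - prev.rx_time
        vx = (last.x - prev.x) / dt if dt > 0 else 0.0
        vy = (last.y - prev.y) / dt if dt > 0 else 0.0
    else:
        vx = vy = 0.0
    dt_out = t_out - last.rx_time
    return (last.x + vx * dt_out, last.y + vy * dt_out)
```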
 If at least one of the conditions that the position information of objects integrated by the time series determination unit 115 is being managed and that attribute information has been generated by the object attribute detection unit 112 is not satisfied (step S107: No), the information integration unit 113 determines whether the position information of objects integrated by the time series determination unit 115 is being managed (step S113). If the position information of objects with integrated position information is being managed by the time series determination unit 115 (step S113: Yes), the driving support device 100 performs the processing of step S110. The subsequent processing is as described above. If the position information of objects with integrated position information is not being managed by the time series determination unit 115 (step S113: No), the driving support device 100 returns to the processing of step S101.
 Here, the reach range of the object estimated by the object position estimation unit 114 will be described. FIG. 4 is a diagram showing an example of the object position estimated by the object position estimation unit 114 according to the present embodiment. From the information integrated by the information integration unit 113, specifically the size of the object included in the attribute information managed by the object attribute detection unit 112, the object position estimation unit 114 can recognize the size of the object whose reach range is to be estimated. As shown in FIG. 4, the full length of the target object is recognized as L1 and its full width as W1. From the type of the object included in the attribute information, the object position estimation unit 114 also recognizes the object as vehicle 300. Furthermore, using the information integrated by the information integration unit 113, specifically the speed of the object managed by the time series determination unit 115, the object position estimation unit 114 estimates the position of the object at the reception time when the first sensor information is next received from sensor 101. The position of the vehicle 300 may be the center of the vehicle 300 or the center of its leading edge in the traveling direction. In FIG. 4, the center of the leading edge of the vehicle 300 in its traveling direction is taken as the current vehicle position 301, that is, the reference position. Using constant-velocity linear motion as a model, for example, the object position estimation unit 114 estimates the position 301a of the object at the reception time when the first sensor information is next received from sensor 101. When the position of the object is represented by the center of the object, the object position estimation unit 114 uses the size of the object, that is, both the full width and the full length; when the position is represented by the center of the leading edge in the traveling direction, it uses only the full width and need not use the full length.
 Using the position 301a of the object and the type of the object described above, the object position estimation unit 114 sets a trapezoidal area around the position 301a based on the acceleration change range and the course change angle θ set according to the type of the object, and takes the set area as the reach range described above. FIG. 5 is a diagram showing a first example of the reach range of an object estimated by the object position estimation unit 114 according to the present embodiment. For example, the object position estimation unit 114 sets the reach range width W2 by adding the course change range to the full width W1 of the object, that is, vehicle 300, and sets the reach range width W3 by calculating it from the maximum acceleration, the course change angle, and so on. The object position estimation unit 114 also calculates the distance L2 from, for example, the type of the object, that is, the acceleration range of a vehicle.
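 A rough sketch of how such trapezoid dimensions could be derived from the speed, an assumed acceleration range, and an assumed course-change angle is shown below; none of the limit values or formulas come from the specification, which only describes the resulting shape.

```python
import math

def trapezoid_reach(full_width_w1, speed, dt, max_accel, max_decel, course_angle_deg):
    """Return illustrative widths W2 and W3 and length L2 of a trapezoidal reach
    range around the predicted position, following the shape described for FIG. 5."""
    theta = math.radians(course_angle_deg)
    nominal = speed * dt                                   # travel with no acceleration
    farthest = speed * dt + 0.5 * max_accel * dt ** 2      # travel at maximum acceleration
    nearest = max(0.0, speed * dt - 0.5 * max_decel * dt ** 2)
    w2 = full_width_w1 + 2.0 * nominal * math.sin(theta)   # width at the near edge
    w3 = full_width_w1 + 2.0 * farthest * math.sin(theta)  # width at the far edge
    l2 = farthest - nearest                                # depth of the trapezoid
    return w2, w3, l2
```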
 The object position estimation unit 114 may change the size and shape of the reach range according to the type of the object. FIG. 6 is a diagram showing a second example of the reach range of an object estimated by the object position estimation unit 114 according to the present embodiment. FIG. 6 assumes that the type of the object is a bicycle. When the type of the object is a bicycle, the negative acceleration change range is assumed to be larger and the course change angle wider than for a vehicle. Therefore, for the reference position of the object, that is, the current bicycle position 302, the object position estimation unit 114 estimates the position 302a of the object at the reception time when the first sensor information is next received from sensor 101, and estimates a trapezoidal reach range indicated by width W21, width W22, and distance L20.
 FIG. 7 is a diagram showing a third example of the reach range of an object estimated by the object position estimation unit 114 according to the present embodiment. FIG. 7 assumes that the type of the object is a pedestrian. When the type of the object is a pedestrian, the acceleration change range is smaller than for a vehicle or a bicycle, but the course change angle is larger, so the object position estimation unit 114 sets a circular reach range. Therefore, for the reference position of the object, that is, the current pedestrian position 303, the object position estimation unit 114 estimates the position 303a of the object at the reception time when the first sensor information is next received from sensor 101, and estimates a circular reach range indicated by width W30 and distance L30. The object position estimation unit 114 may set the shape of the reach range in consideration of the current speed of the object as well as the type of the object. In this way, the object position estimation unit 114 can set the size of the reach range based on the size, type, and speed of the object, and can set the shape of the reach range based on the type of the object.
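 Putting the examples of FIGS. 5 to 7 together, the choice of reach-range shape by object type might be sketched as follows, reusing the trapezoid helper sketched above; the acceleration limits and angles are assumptions for illustration only.

```python
def reach_range(kind, full_width, speed, dt):
    """Pick the reach-range shape from the object type: trapezoids for vehicles and
    bicycles, a circle for pedestrians. All limit values here are assumptions."""
    if kind == "pedestrian":
        # Small acceleration range, but any heading is possible: circular range.
        radius = max(1.0, speed * dt + 0.5 * 1.0 * dt ** 2)
        return {"shape": "circle", "radius": radius}
    if kind == "bicycle":
        # Larger braking range and wider course-change angle than a vehicle.
        w2, w3, l2 = trapezoid_reach(full_width, speed, dt, 1.5, 4.0, 40.0)
    else:  # vehicle
        w2, w3, l2 = trapezoid_reach(full_width, speed, dt, 3.0, 8.0, 15.0)
    return {"shape": "trapezoid", "w2": w2, "w3": w3, "l2": l2}
```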
 While the processing of the flowchart shown in FIG. 3 is repeated, if the reach range of an object assigned an object ID has been estimated by the object position estimation unit 114 (step S109), the time series determination unit 115 uses the reach range estimated by the object position estimation unit 114 when integrating the position information of a plurality of objects derived from the same object (step S104). That is, within the reach range, the time series determination unit 115 determines whether an object whose position was calculated by the movement information extraction unit 111 is the same as the object for which the reach range was estimated. Specifically, the time series determination unit 115 filters the position information with reflection point IDs acquired from the movement information extraction unit 111, using the reach range estimated by the object position estimation unit 114 as a threshold, to select the position information used for the determination. The time series determination unit 115 identifies the position information obtained as a result of the filtering as position information of the same object, and performs position update processing in order from the closest position. In the position update processing, the time series determination unit 115 may calculate an average of the positions, or may estimate the position using a Kalman filter or the like. In this way, the time series determination unit 115 manages position information for objects within the reach range among the objects whose positions were calculated by the movement information extraction unit 111. As a result, when sensor 101 detects objects over a wide area, the time series determination unit 115 can efficiently select only the necessary object position information by limiting the processing target region to the reach range, and the processing load can be reduced. In addition, because the reach range is predicted, the time series determination unit 115 can follow changes in position caused by the movement of the object.
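 A minimal sketch of this filtering and nearest-first ordering, assuming the reach-range records from the previous sketches, is shown below; for brevity the trapezoid is approximated by a bounding circle, whereas an implementation would use a proper point-in-polygon test.

```python
import math

def points_in_reach(points, predicted_pos, reach):
    """Filter reflection points to those inside the estimated reach range and sort
    them nearest-first for the position update."""
    px, py = predicted_pos
    if reach["shape"] == "circle":
        limit = reach["radius"]
    else:
        # Simplification: bound the trapezoid by a circle of its largest extent.
        limit = max(reach["w3"], reach["l2"])
    inside = [p for p in points if math.hypot(p.x - px, p.y - py) <= limit]
    return sorted(inside, key=lambda p: math.hypot(p.x - px, p.y - py))
```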
 If the reach range has not been estimated by the object position estimation unit 114, the time series determination unit 115 may, for example, use constant-velocity linear motion as a model to estimate the position of the object at the current reception time from the position, speed, and other values of the object at the previous reception time, and may determine that the objects are the same object when the difference between the estimated position and the actual position of the object is within a prescribed threshold. The time series determination unit 115 may instead estimate the position of the object at the current reception time using a Kalman filter or the like. If the position difference is larger than the prescribed threshold, the time series determination unit 115 determines that the compared objects are different objects.
 In the present embodiment, the information distribution unit 116 distributes position information to the vehicles 201 to 203, but it may also distribute, together with the position information, the reach range estimated by the object position estimation unit 114, the attribute information generated by the object attribute detection unit 112, and the like.
 In the present embodiment, the time series determination unit 115 may perform its processing in the same processing cycle as the information integration unit 113, or may perform it each time position information is acquired from the movement information extraction unit 111.
 In the present embodiment, the driving support system 400 is assumed to be installed on the roadside, but the driving support device 100 and the sensors 101 and 102 of the driving support system 400 may instead be mounted on an autonomous vehicle. In that case, the driving support device 100 generates position information for the autonomous vehicle on which it is mounted.
 Next, the hardware configuration of the driving support device 100 will be described. FIG. 8 is a diagram showing an example of the hardware configuration of the driving support device 100 according to the present embodiment. The driving support device 100 is composed of a CPU (Central Processing Unit) 40, a ROM (Read Only Memory) 41, a RAM (Random Access Memory) 42, a sensor IF (Interface) 43, a communication IF 44, and a memory 45, which are connected to one another by a bus 46. The CPU 40 controls the entire driving support device 100. The ROM 41 stores programs such as a boot program, a communication program, and a data analysis program. The RAM 42 is used as a work area for the CPU 40. The sensor IF 43 is connected to various sensors such as radar and lidar, and functions as an interface between those sensors and the CPU 40. The communication IF 44 is connected to autonomous vehicles via radio and functions as an interface between the autonomous vehicles and the CPU 40. A communication network such as a LAN (Local Area Network) may exist between the sensor IF 43 and the various sensors, and between the communication IF 44 and the autonomous vehicles. Either or both of the sensor IF 43 and the communication IF 44 may also be connected to a communication network such as the Internet and function as an interface between that network and the CPU 40.
 The functions of the reception unit 110, the movement information extraction unit 111, the object attribute detection unit 112, the information integration unit 113, the object position estimation unit 114, and the time series determination unit 115 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as programs and stored in the memory 45. The CPU 40 realizes the function of each unit by reading and executing the programs stored in the memory 45.
 As described above, according to the present embodiment, the driving support device 100 manages, as a single object, the position information of a plurality of objects derived from the same object among the objects detected by the radar-based sensor 101 and whose positions have been calculated, and integrates that position information with the attribute information of the object detected by sensor 102, which acquires image information. Using the integrated information, the driving support device 100 estimates the reach range of the object at the reception time when the first sensor information is next received from sensor 101, and treats the objects detected by sensor 101 within that reach range as the processing targets. As a result, the driving support device 100 can improve the detection accuracy for objects existing around the driving support device 100 while suppressing the processing load.
 The configuration described in the above embodiment shows one example of the contents of the present invention; it can be combined with other known techniques, and part of the configuration can be omitted or changed without departing from the gist of the present invention.
 100 driving support device, 101, 102 sensor, 103 wireless communication device, 110 reception unit, 111 movement information extraction unit, 112 object attribute detection unit, 113 information integration unit, 114 object position estimation unit, 115 time series determination unit, 116 information distribution unit, 117 storage unit, 118 transmission unit, 201 to 203, 300 vehicle, 204 bicycle, 205 to 207 pedestrian, 400 driving support system.

Claims (10)

  1.  A driving support device comprising:
     a reception unit that receives first sensor information from a first sensor using radar and receives second sensor information from a second sensor that acquires image information;
     a movement information extraction unit that uses the first sensor information to calculate a first position, which is the position of an object detected by the first sensor;
     a time series determination unit that manages position information by treating, as one object, a plurality of objects derived from the same object among the objects whose first positions were calculated by the movement information extraction unit;
     an object attribute detection unit that uses the second sensor information to determine the size of the object detected by the second sensor, a second position, which is the position of the object, and the type of the object, and generates attribute information including the determination results;
     an information integration unit that integrates and manages, on a per-object basis, the position information of the object managed by the time series determination unit and the attribute information; and
     an object position estimation unit that uses the per-object information integrated by the information integration unit to estimate the reach range of the object at the reception time when first sensor information is next received,
     wherein the time series determination unit manages position information for objects within the reach range among the objects whose first positions were calculated by the movement information extraction unit.
  2.  The driving support device according to claim 1, wherein the first sensor information includes a distance from the first sensor to the object and an angle indicating the direction of the object relative to the pointing direction of the first sensor, and further includes at least one of the speed of the object and the traveling direction of the object.
  3.  The driving support device according to claim 1 or 2, wherein the image information included in the second sensor information is a still image or a moving image.
  4.  The driving support device according to claim 3, wherein the object attribute detection unit uses the still image or the moving image to determine the time at which the object was detected, the second position, and the type of the object, further determines the full width of the object and the full length of the object as the size of the object, and generates the attribute information.
  5.  The driving support device according to any one of claims 1 to 4, wherein the object position estimation unit sets the size of the reach range based on the size of the object, the type of the object, and the speed of the object.
  6.  The driving support device according to any one of claims 1 to 5, wherein the object position estimation unit sets the shape of the reach range based on the type of the object.
  7.  The driving support device according to any one of claims 1 to 6, wherein the information integration unit compares the second position included in the attribute information with the position of the object managed by the time series determination unit, and determines that the compared objects are the same object when the position difference is equal to or less than a prescribed threshold.
  8.  The driving support device according to any one of claims 1 to 7, wherein the information integration unit generates position information to be distributed using the position information of the object managed by the time series determination unit or the integrated per-object information, and
     the driving support device further comprises an information distribution unit including a storage unit that stores the position information acquired from the information integration unit and a transmission unit that transmits the position information to an autonomous vehicle.
  9.  A driving support system comprising:
     the driving support device according to any one of claims 1 to 8;
     a first sensor using radar; and
     a second sensor that acquires image information.
  10.  A driving support method comprising:
     a first step in which a reception unit receives first sensor information from a first sensor using radar and receives second sensor information from a second sensor that acquires image information;
     a second step in which a movement information extraction unit uses the first sensor information to calculate a first position, which is the position of an object detected by the first sensor;
     a third step in which a time series determination unit manages position information by treating, as one object, a plurality of objects derived from the same object among the objects whose first positions were calculated by the movement information extraction unit;
     a fourth step in which an object attribute detection unit uses the second sensor information to determine the size of the object detected by the second sensor, a second position, which is the position of the object, and the type of the object, and generates attribute information including the determination results;
     a fifth step in which an information integration unit integrates and manages, on a per-object basis, the position information of the object managed by the time series determination unit and the attribute information; and
     a sixth step in which an object position estimation unit uses the per-object information integrated by the information integration unit to estimate the reach range of the object at the reception time when first sensor information is next received,
     wherein, when the first step through the sixth step are repeatedly performed, in the third step the time series determination unit manages position information for objects within the reach range among the objects whose first positions were calculated by the movement information extraction unit.
PCT/JP2018/022282 2018-06-11 2018-06-11 Driving assistance device, driving assistance system, and driving assistance method WO2019239471A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018559902A JPWO2019239471A1 (en) 2018-06-11 2018-06-11 Driving support device, driving support system, and driving support method
PCT/JP2018/022282 WO2019239471A1 (en) 2018-06-11 2018-06-11 Driving assistance device, driving assistance system, and driving assistance method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/022282 WO2019239471A1 (en) 2018-06-11 2018-06-11 Driving assistance device, driving assistance system, and driving assistance method

Publications (1)

Publication Number Publication Date
WO2019239471A1 true WO2019239471A1 (en) 2019-12-19

Family

ID=68841950

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/022282 WO2019239471A1 (en) 2018-06-11 2018-06-11 Driving assistance device, driving assistance system, and driving assistance method

Country Status (2)

Country Link
JP (1) JPWO2019239471A1 (en)
WO (1) WO2019239471A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013002927A (en) * 2011-06-15 2013-01-07 Honda Elesys Co Ltd Obstacle detection apparatus and computer program
JP5905846B2 (en) * 2013-03-29 2016-04-20 株式会社日本自動車部品総合研究所 Crossing determination device and program
JP6453695B2 (en) * 2015-03-31 2019-01-16 株式会社デンソー Driving support device and driving support method
JP6972797B2 (en) * 2016-11-24 2021-11-24 株式会社リコー Information processing device, image pickup device, device control system, mobile body, information processing method, and program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006048568A (en) * 2004-08-09 2006-02-16 Daihatsu Motor Co Ltd Object recognition method and object recognizing device
WO2015005001A1 (en) * 2013-07-08 2015-01-15 本田技研工業株式会社 Object recognition device
JP2015195018A (en) * 2014-03-18 2015-11-05 株式会社リコー Image processor, image processing method, operation support system, and program
WO2017111135A1 (en) * 2015-12-25 2017-06-29 株式会社デンソー Travel assistance device and travel assistance method
JP2018025957A (en) * 2016-08-10 2018-02-15 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Communication method and server

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022089123A (en) * 2020-12-03 2022-06-15 三菱電機株式会社 Device and method for providing location
JP7438928B2 (en) 2020-12-03 2024-02-27 三菱電機株式会社 Apparatus and method for providing location

Also Published As

Publication number Publication date
JPWO2019239471A1 (en) 2020-07-02

Similar Documents

Publication Publication Date Title
US11703876B2 (en) Autonomous driving system
US10831205B2 (en) Route determination device, vehicle control device, route determination method, and storage medium
US10520949B2 (en) Method and device for localizing a vehicle in its surroundings
US10464604B2 (en) Autonomous driving system
CN107727106B (en) Dynamic map construction method, dynamic map construction system and mobile terminal
EP3644294B1 (en) Vehicle information storage method, vehicle travel control method, and vehicle information storage device
JP5565385B2 (en) VEHICLE WIRELESS COMMUNICATION DEVICE AND COMMUNICATION SYSTEM
US20180061236A1 (en) Driving control device, driving control method, and vehicle-to-vehicle communication system
US10162357B2 (en) Distributed computing among vehicles
CN110441790B (en) Method and apparatus in a lidar system for cross-talk and multipath noise reduction
CN109307869B (en) Device and lighting arrangement for increasing the field of view of a lidar detector
US10816972B2 (en) Collective determination among autonomous vehicles
JP7229052B2 (en) vehicle control device, vehicle control system
JP7077967B2 (en) Driving lane estimation device, driving lane estimation method, and control program
EP3517380B1 (en) Travel control method and travel control apparatus for a vehicle
CN112106003A (en) Controller, control method, and program
JP7043765B2 (en) Vehicle driving control method and equipment
CN112486161A (en) Vehicle control device, vehicle control method, and storage medium
CN113002562A (en) Vehicle control device and storage medium
WO2019239471A1 (en) Driving assistance device, driving assistance system, and driving assistance method
JP2007153098A (en) Device for detecting position and method for predicting position of peripheral vehicle
JP2022160281A (en) Vehicle, server, system, method, storage medium and program
WO2023145494A1 (en) Information processing device and information processing method
US20230400316A1 (en) Server selection device and server selection system
JP7094418B1 (en) Traffic control equipment and traffic control system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018559902

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18922775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18922775

Country of ref document: EP

Kind code of ref document: A1