WO2017056525A1 - On-Vehicle Device and Inter-Vehicle Head Distance Calculation Method - Google Patents
On-vehicle device and inter-vehicle head distance calculation method (車載器および車頭間距離算出方法)
- Publication number
- WO2017056525A1 (PCT/JP2016/057031)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- distance
- dictionary
- feature amount
- unit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Definitions
- Embodiments of the present invention relate to an on-vehicle device and an inter-vehicle head distance calculation method.
- Pattern recognition technology is widely used in an object detection apparatus that detects an object such as a vehicle included in a camera image obtained by imaging the front of the vehicle with a camera.
- with the object detection device, it is possible to detect an object from the camera video using the pattern recognition technique and to calculate the distance from the camera that captured the image to the detected object.
- however, the distance from the camera to the front end of the detection target object cannot be calculated; only the distance from the camera to the rear end of the detection target object can be calculated.
- the vehicle-mounted device of the embodiment includes a reception unit, a storage unit, an extraction unit, a determination unit, and a calculation unit.
- the receiving unit receives a first image obtained by imaging the front of the first vehicle by the imaging unit.
- the storage unit stores a dictionary that is provided for each vehicle type and includes the feature amounts of vehicles belonging to the vehicle type.
- the extraction unit extracts the feature amount of the second vehicle in the first image.
- the determination unit reads the dictionary of the vehicle type to be determined from the storage unit, and determines the vehicle type of the second vehicle based on the similarity between the feature amounts included in the read dictionary and the feature amount of the second vehicle.
- the calculation unit obtains a first distance from the front end of the first vehicle to the rear end of the second vehicle based on the first image, and calculates the sum of the first distance and the vehicle length of a vehicle belonging to the vehicle type determined by the determination unit as a second distance from the front end of the first vehicle to the front end of the second vehicle.
- FIG. 1 is a diagram illustrating an example of a configuration of a traffic information detection system according to the first embodiment.
- FIG. 2 is a diagram illustrating an example of a configuration of a probe car included in the traffic information detection system according to the first embodiment.
- FIG. 3 is a block diagram illustrating an example of a functional configuration of an information processing unit included in the probe car according to the first embodiment.
- FIG. 4 is a diagram for explaining an example of the calculation process of the inter-head distance in the probe car according to the first embodiment.
- FIG. 5 is a flowchart illustrating an example of a schematic flow of a process for calculating the inter-vehicle distance by the probe car according to the first embodiment.
- FIG. 6 is a flowchart illustrating an example of a flow of calculation processing of the inter-vehicle head distance by the probe car according to the first embodiment.
- FIG. 7 is a diagram for explaining an example of a lane detection process in the probe car according to the first embodiment.
- FIG. 8 is a diagram for explaining an example of attention area setting processing in the probe car according to the first embodiment.
- FIG. 9 is a diagram for explaining an example of detection region correction processing by the probe car according to the first embodiment.
- FIG. 10 is a diagram for explaining an example of vehicle type determination processing by the probe car according to the first embodiment.
- FIG. 11 is a diagram for explaining an example of vehicle type determination processing by the probe car according to the first embodiment.
- FIG. 12 is a diagram for explaining an example of vehicle type determination processing by the probe car according to the first embodiment.
- FIG. 13A is a diagram for explaining an example of a calculation process of the inter-vehicle distance by the probe car according to the first embodiment.
- FIG. 13B is a diagram for explaining an example of a calculation process of the inter-vehicle distance by the probe car according to the first embodiment.
- FIG. 14 is a diagram illustrating an example of an inter-vehicle distance calculated by the probe car according to the first embodiment.
- FIG. 15 is a diagram illustrating an example of a standard vehicle length used to calculate the inter-head distance in the probe car according to the first embodiment.
- FIG. 16 is a diagram for explaining an example of a vehicle position estimation process in the probe car according to the first embodiment.
- FIG. 17 is a flowchart illustrating an example of a flow of processing for extracting a feature amount of a region of interest in the probe car according to the first embodiment.
- FIG. 18 is a diagram for explaining an example of a feature amount extraction process for a region of interest in the probe car according to the first embodiment.
- FIG. 19 is a flowchart illustrating an example of the flow of calculation processing of the inter-head distance by the probe car according to the second embodiment.
- FIG. 20 is a flowchart illustrating an example of a flow of calculation processing of the inter-head distance by the probe car according to the third embodiment.
- FIG. 21 is a diagram for explaining an example of the calculation process of the inter-head distance by the probe car according to the third embodiment.
- FIG. 1 is a diagram illustrating an example of a configuration of a traffic information detection system according to the first embodiment.
- the traffic information detection system according to the present embodiment includes a probe car V1, a detection target vehicle V2, a GPS (Global Positioning System) satellite ST, a base station B, and a server S.
- the probe car V1 (an example of the first vehicle) obtains an inter-vehicle head distance (an example of the second distance) that is a distance from the tip of the probe car V1 to the tip of the detection target vehicle V2 (an example of the second vehicle).
- the detection target vehicle V2 is a vehicle that travels in front of the probe car V1.
- the GPS satellite ST transmits a GPS signal including time and the like to the ground.
- the base station B can wirelessly communicate with the probe car V1, and receives data relating to the inter-head distance obtained by the probe car V1 (hereinafter referred to as distance data).
- the server S generates road traffic information such as traffic jams on the basis of the distance data received from the probe car V1 by the base station B and environmental information (for example, weather information) received from a terminal of an environmental information provider.
- FIG. 2 is a diagram illustrating an example of a configuration of a probe car included in the traffic information detection system according to the first embodiment.
- the probe car V1 includes a camera 11, an information processing unit 12, a display unit 13, and a speaker 14.
- the camera 11 (an example of an imaging unit) is provided so that an area including the front of the probe car V1 (hereinafter referred to as a monitoring target area) can be imaged.
- the camera 11 transmits moving image data obtained by imaging the monitoring target area to the information processing unit 12.
- the camera 11 includes a monocular camera and a stereo camera, and transmits moving image data obtained by imaging of each camera to the information processing unit 12.
- the camera 11 is provided according to preset imaging conditions (for example, a predetermined height, depression angle, and rotation angle) in order to image the monitoring target area.
- the camera 11 is provided in the vehicle of the probe car V1, but the present invention is not limited to this as long as the camera 11 is provided so as to be able to image the front of the probe car V1.
- the camera 11 may be provided on the road side of the road on which the probe car V1 travels.
- the information processing unit 12 (an example of the vehicle-mounted device) is mounted inside the probe car V1 together with the camera 11, and is connected to the camera 11 via a wireless communication unit or a cable. The information processing unit 12 receives moving image data from the camera 11 via the wireless communication unit or cable, and calculates the inter-vehicle head distance based on the received moving image data.
- the information processing unit 12 may generate distance data and transmit it to the base station B each time the inter-vehicle head distance is obtained, or may generate distance data based on the inter-vehicle head distances obtained a predetermined number of times and then transmit it to the base station B.
- the display unit 13 is configured by an LCD (Liquid Crystal Display) or the like, and can display various information, such as the distance data generated by the information processing unit 12 and warning information.
- the speaker 14 outputs sound of various information such as warning information.
- FIG. 3 is a block diagram illustrating an example of a functional configuration of an information processing unit included in the probe car according to the first embodiment.
- FIG. 4 is a diagram for explaining an example of the calculation process of the inter-head distance in the probe car according to the first embodiment.
- the information processing unit 12 includes a control unit 121, a communication I / F unit 122, a storage unit 123, and an external storage device 124.
- the control unit 121 controls the entire information processing unit 12.
- the control unit 121 is configured by a microcomputer including an MPU (Micro Processing Unit) and the like, and, by executing a control program stored in the storage unit 123 described later, controls the entire information processing unit 12 and performs processing such as the calculation of the inter-vehicle head distance.
- the control unit 121 uses the moving image data received from the camera 11 to calculate the distance from the front end of the probe car V1 to the rear end of the detection target vehicle V2 (hereinafter referred to as the inter-vehicle distance; an example of the first distance). Further, the control unit 121 uses the moving image data received from the camera 11 to determine the vehicle type of the detection target vehicle V2. Then, the control unit 121 calculates the length obtained by adding the calculated inter-vehicle distance and the vehicle length of the vehicle type of the detection target vehicle V2 as the inter-vehicle head distance.
- the vehicle length of a vehicle belonging to each vehicle type (hereinafter referred to as the standard vehicle length) is set in advance for each vehicle type of the detection target vehicle V2, and the control unit 121 uses the standard vehicle length of the determined vehicle type as the vehicle length of the detection target vehicle V2 for the calculation of the inter-vehicle head distance. Alternatively, the control unit 121 may calculate the inter-vehicle distance to the detection target vehicle V2, determine the vehicle type, and transmit the calculated inter-vehicle distance and the determined vehicle type to the server S via the base station B. The server S may then obtain the standard vehicle length of the detection target vehicle V2 based on the received vehicle type, and calculate the inter-vehicle head distance by adding the standard vehicle length to the received inter-vehicle distance.
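As a minimal sketch of this calculation (the vehicle types, lengths, and function name below are illustrative assumptions, not values from the patent), the second distance is simply the first distance plus the standard vehicle length looked up for the determined vehicle type:

```python
# Minimal sketch of the inter-vehicle head distance calculation.
# The standard vehicle lengths below are illustrative assumptions.
STANDARD_VEHICLE_LENGTH_M = {
    "ordinary": 4.5,   # assumed standard length of an ordinary vehicle
    "large": 12.0,     # assumed standard length of a large vehicle
}

def head_to_head_distance(inter_vehicle_distance_m: float, vehicle_type: str) -> float:
    """Second distance = first distance + standard vehicle length of the
    determined vehicle type (front end of V1 to front end of V2)."""
    return inter_vehicle_distance_m + STANDARD_VEHICLE_LENGTH_M[vehicle_type]

# Example: a 30 m gap to the rear end of a large vehicle gives a 42 m head distance.
print(head_to_head_distance(30.0, "large"))  # 42.0
```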
- the communication I / F unit 122 can communicate with external devices such as the camera 11, the display unit 13, and the speaker 14. Further, the communication I / F unit 122 transmits and receives various information such as distance data to and from the base station B by wireless communication.
- the storage unit 123 includes a ROM (Read Only Memory), which is a non-volatile storage unit that stores various types of information such as the control program executed by the control unit 121; a RAM (Random Access Memory), which is used as a work area of the control unit 121 and temporarily stores various types of information; a flash memory, which is a non-volatile storage unit that stores setting information set in the information processing unit 12; and a VRAM (Video Random Access Memory).
- the external storage device 124 is a large-capacity storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive). Specifically, the external storage device 124 (an example of a storage unit) stores a dictionary that is provided for each vehicle type of the detection target vehicle V2 and includes the feature amount of the vehicle belonging to the vehicle type.
- the external storage device 124 stores a plurality of dictionaries, including an ordinary vehicle dictionary that includes the feature amounts of vehicles belonging to the ordinary vehicle type and a large vehicle dictionary that includes the feature amounts of vehicles belonging to the large vehicle type.
- the dictionary for ordinary vehicles includes higher-order feature amounts (higher-order local autocorrelation features) obtained by expanding the feature amounts of vehicles belonging to ordinary vehicles to higher orders.
- the large vehicle dictionary includes higher-order feature amounts obtained by expanding the feature amounts of vehicles belonging to large vehicles to higher orders.
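To make the dictionary structure concrete, here is a minimal sketch of how such per-vehicle-type dictionaries might be represented; the class, field names, and file paths are assumptions for illustration, not part of the patent:

```python
# Illustrative sketch of the per-vehicle-type dictionaries held in the
# external storage device 124. All names and file paths are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class VehicleDictionary:
    vehicle_type: str     # e.g. "ordinary" or "large"
    features: np.ndarray  # higher-order feature vectors, one per row

# One dictionary per determination target vehicle type, built offline from
# many images of vehicles of that type captured in different states.
ordinary_dict = VehicleDictionary("ordinary", np.load("ordinary_features.npy"))
large_dict = VehicleDictionary("large", np.load("large_features.npy"))
```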
- FIG. 5 is a flowchart illustrating an example of a schematic flow of a process for calculating the inter-vehicle distance by the probe car according to the first embodiment.
- the control unit 121 receives the moving image data obtained by imaging the monitoring target area with the camera 11 from the camera 11 via the communication I / F unit 122 (step S501). Next, the control unit 121 detects a vehicle in a frame constituting the received moving image data (step S502).
- the control unit 121 calculates the inter-vehicle distance based on the position of the vehicle in the frame (step S503). Moreover, the control unit 121 determines the vehicle type of the vehicle based on the feature amount of the vehicle in the frame (step S504).
- the control unit 121 estimates the standard vehicle length of the determined vehicle type as the vehicle length of the vehicle in the frame (step S505). Then, the control unit 121 calculates the length obtained by adding the calculated inter-vehicle distance and the estimated vehicle length as the inter-vehicle head distance (step S506).
- FIG. 6 is a flowchart illustrating an example of a flow of calculation processing of the inter-vehicle head distance by the probe car according to the first embodiment.
- the control unit 121 receives moving image data (an example of a first image) obtained by imaging the monitoring target area with the camera 11, from the camera 11 via the communication I/F unit 122 (step S601).
- the control unit 121 receives both the moving image data obtained by imaging the monitoring target area with the monocular camera and the moving image data obtained by imaging the monitoring target area with the stereo camera (an example of a second image).
- control unit 121 sets a region (hereinafter referred to as a region of interest) from which a feature amount is extracted in a frame (an example of a first frame) constituting the received moving image data (step S602).
- the control unit 121 sets a rectangular area having a size smaller than the frame as the attention area.
- the control unit 121 extracts the feature amount of the set attention area (step S603). Specifically, based on the luminance information of the attention area, the control unit 121 extracts higher-order feature amounts, such as HOG (Histograms of Oriented Gradients) features and CoHOG (Co-occurrence Histograms of Oriented Gradients) features, from low-dimensional edge information.
- the control unit 121 moves the attention area in the frame and extracts the feature amounts of the plurality of attention areas.
- in this manner, the control unit 121 (an example of an extraction unit) extracts the feature amount of the vehicle (detection target vehicle V2) in the frame.
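As a rough sketch of this sliding-window extraction (window size, stride, and HOG parameters are assumptions for illustration; the patent does not specify them), using scikit-image's HOG implementation:

```python
# Sketch of steps S602-S603: slide a fixed-size attention area over the
# frame and extract a HOG feature amount per area.
import numpy as np
from skimage.feature import hog

def extract_roi_features(frame_gray: np.ndarray, win=(64, 64), stride=16):
    """Yield ((x, y), feature_vector) for each attention area in the frame."""
    h, w = frame_gray.shape
    for y in range(0, h - win[1] + 1, stride):
        for x in range(0, w - win[0] + 1, stride):
            roi = frame_gray[y:y + win[1], x:x + win[0]]
            # HOG is computed from the luminance information of the area
            feat = hog(roi, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
            yield (x, y), feat
```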
- FIG. 7 is a diagram for explaining an example of a lane detection process in the probe car according to the first embodiment.
- FIG. 8 is a diagram for explaining an example of attention area setting processing in the probe car according to the first embodiment.
- the upper left corner of the frame F is the origin O, the vertical direction of the frame F is the Y axis, and the horizontal direction of the frame F is the X axis.
- FIG. 9 is a diagram for explaining an example of detection region correction processing by the probe car according to the first embodiment.
- the control unit 121 detects the boundary lines L1, L2, and L3 (for example, white lines) between the lanes A1 and A2 included in the frame and the regions other than the lanes A1 and A2. Next, the control unit 121 detects the areas surrounded by the detected boundary lines L1, L2, and L3 as the lanes A1 and A2, and sets the lanes A1 and A2 as the setting target areas (an example of a predetermined area in the image).
- the control unit 121 sets the attention area in the setting target area and does not set the attention area for the areas other than the setting target area. In other words, the control unit 121 prohibits the setting of the attention area for the area other than the setting target area.
- in the present embodiment, the control unit 121 sets all the lanes A1 and A2 surrounded by the boundary lines L1, L2, and L3 as the setting target area, but the setting target area may be only the lane A1 on which the probe car V1 travels.
- the control unit 121 detects the lanes A1 and A2 included in the frame, extracts the feature amount of the vehicle in the lanes A1 and A2 in the frame, and does not extract feature amounts in the regions other than the lanes A1 and A2 in the frame. As a result, since no feature amount is extracted for regions other than the lanes A1 and A2, the processing load of feature amount extraction from the frame can be reduced.
- the control unit 121 may set the entire frame as a setting target area. In this case, as illustrated in FIG. 8, the control unit 121 extracts the feature amounts of the plurality of attention areas TA in the frame while moving the attention area TA in the entire frame starting from the origin O. Then, the control unit 121 sets the attention area TA from which the feature amount of the vehicle is extracted among the plurality of attention areas TA as a detection area where the vehicle is detected.
- the control unit 121 detects the end of the vehicle in a search area (an example of a second area) that is larger than and includes the detection area (an example of a first area) in which the vehicle was detected in the frame, based on the feature amount of the search area. The control unit 121 then corrects the detection area so that the end of the detection area coincides with the end of the vehicle.
- since the attention area from which the feature amount of the vehicle was extracted is used as the detection area, the position of the detection area in the frame may deviate from the actual position of the vehicle.
- when the horizontal end of the detection area 701 does not coincide with the horizontal end of the vehicle 703, the control unit 121 sets a search area 702 that is horizontally larger than the detection area 701 and includes the detection area 701.
- the control unit 121 detects the horizontal ends E1 and E2 of the vehicle 703 based on the depth-direction information (distance) distribution 704 of the area corresponding to the search area 702 in the frame F constituting the moving image data obtained by imaging with the stereo camera.
- the control unit 121 corrects the detection area 701 so that the horizontal ends E1 and E2 of the vehicle 703 coincide with the horizontal ends of the detection area 701.
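A minimal sketch of this end detection, assuming the depth distribution is reduced to one depth value per column of the search area (the tolerance value and function name are illustrative assumptions):

```python
# Sketch of the detection-area correction in FIG. 9: find the horizontal
# ends E1/E2 of the vehicle from the stereo depth distribution.
import numpy as np

def horizontal_ends_from_depth(depth_row: np.ndarray, tol_m: float = 1.0):
    """depth_row: per-column depth (m) across the search area. Columns whose
    depth is close to the nearest surface (the vehicle's rear face) are
    treated as vehicle; the first and last such columns are E1 and E2."""
    vehicle_depth = np.nanmin(depth_row)
    on_vehicle = np.abs(depth_row - vehicle_depth) < tol_m
    cols = np.flatnonzero(on_vehicle)
    return cols[0], cols[-1]  # column indices of E1 and E2
```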
- the control unit 121 (an example of a determination unit) reads a part of a dictionary that is different for each frame from the external storage device 124 (step S604).
- the control unit 121 reads out a different dictionary for each frame.
- in the present embodiment, the control unit 121 reads out the dictionary of a vehicle type to be determined (for example, an ordinary vehicle or a large vehicle) from the external storage device 124 for each frame, but the reading method is not limited to this.
- the control unit 121 may read all dictionaries of the determination target vehicle type for each frame.
- the dictionary stored in the external storage device 124 includes the feature quantities of objects included in a plurality of images in which objects (for example, vehicles) of the same category (for example, vehicle type) are captured in different states.
- the control unit 121 repeatedly reads out the dictionaries from the external storage device 124 in a predetermined order (for example, the ordinary vehicle dictionary and then the large vehicle dictionary). That is, the control unit 121 alternately reads the ordinary vehicle dictionary and the large vehicle dictionary from the external storage device 124, and repeats reading dictionaries from the external storage device 124 until the transmission of moving image data from the camera 11 stops. Next, the control unit 121 calculates the similarity between the feature amount of the detection area (the feature amount of the vehicle in the frame) and the feature amounts included in the read dictionary (step S605). In the present embodiment, the control unit 121 alternately reads the two dictionaries from the external storage device 124; however, if the speeds of the vehicle types to be determined differ, a pattern may be repeated in which the dictionary of the faster vehicle type is read continuously for N frames and then the dictionary of the slower vehicle type is read for only one frame.
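A minimal sketch of this per-frame dictionary schedule (the dictionary names and helper are assumptions for illustration); with `n_fast=1` it reproduces the alternating pattern, and with `n_fast=N` the N-frames-fast / one-frame-slow pattern:

```python
# Sketch of the per-frame dictionary reading schedule described above.
from itertools import cycle

def dictionary_schedule(fast_dict, slow_dict, n_fast: int = 1):
    """Read the faster vehicle type's dictionary for n_fast consecutive
    frames, then the slower vehicle type's dictionary for one frame."""
    return cycle([fast_dict] * n_fast + [slow_dict])

schedule = dictionary_schedule("large_vehicle_dict", "ordinary_vehicle_dict")
# for each incoming frame: current_dict = next(schedule)
```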
- control unit 121 determines the vehicle type of the vehicle in the frame based on the calculated similarity (step S606).
- the control unit 121 determines that the vehicle type corresponding to the read dictionary is the vehicle type of the vehicle in the frame when the calculated similarity is higher than a predetermined threshold.
- the predetermined threshold is the lower limit of the similarity at which a vehicle is determined to belong to the vehicle type corresponding to the dictionary.
- note that the control unit 121 may continue to hold the vehicle feature amounts extracted from a plurality of frames until the dictionaries of all the determination target vehicle types have been read, and then determine the vehicle type of each vehicle whose feature amount is held based on the similarity between the held feature amounts and the feature amounts included in the dictionaries.
- FIGS. 10 to 12 are diagrams for explaining an example of vehicle type determination processing by the probe car according to the first embodiment.
- for example, if the similarity between the feature amount of the vehicle extracted from the frame and the feature amounts included in the ordinary vehicle dictionary is 0.3, and the similarity between that feature amount and the feature amounts included in the large vehicle dictionary is 0.5, the control unit 121 determines the vehicle type of the dictionary that includes the feature amount with the higher similarity (here, "large vehicle", the vehicle type of the large vehicle dictionary) as the vehicle type of the vehicle extracted from the frame.
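A compact sketch of this comparison (cosine similarity and the 0.4 threshold are assumptions for illustration; the patent does not fix the similarity measure):

```python
# Sketch of steps S605-S606: compute a similarity per dictionary and pick
# the vehicle type with the highest score above a threshold.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def determine_vehicle_type(roi_feat, dictionaries, threshold=0.4):
    """dictionaries: mapping vehicle_type -> representative feature vector."""
    sims = {vt: cosine_sim(roi_feat, f) for vt, f in dictionaries.items()}
    best = max(sims, key=sims.get)  # e.g. ordinary 0.3 vs large 0.5 -> "large"
    return best if sims[best] >= threshold else None
```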
- alternatively, the control unit 121 may acquire the three-dimensional shape of the detection target vehicle V2 obtained by imaging the monitoring target area with the stereo camera and, as shown in FIG. 11, compare the width and height of the detection target vehicle V2 included in the acquired three-dimensional shape with a predetermined vehicle type classification standard to determine the vehicle type of the detection target vehicle V2.
- the predetermined vehicle type classification standard includes a width and a height for each vehicle type.
- alternatively, the control unit 121 may determine the vehicle type of the detection target vehicle V2 based on the vehicle type information described on the license plate of the vehicle included in the frame.
- the control unit 121 (an example of a calculation unit) obtains the inter-vehicle distance from the front end of the probe car V1 to the rear end of the detection target vehicle V2 based on the frames constituting the moving image data. Furthermore, the control unit 121 calculates the distance obtained by adding the inter-vehicle distance and the vehicle length of the determined vehicle type of the detection target vehicle V2 as the inter-vehicle head distance (step S607).
- FIG. 13A and FIG. 13B are diagrams for explaining an example of an inter-vehicle distance calculation process by the probe car according to the first embodiment.
- the upper left corner of the frame F is the origin O, the vertical direction of the frame F is the Y axis, and the horizontal direction of the frame F is the X axis.
- FIG. 14 is a diagram illustrating an example of an inter-vehicle distance calculated by the probe car according to the first embodiment.
- FIG. 15 is a diagram illustrating an example of a standard vehicle length used to calculate the inter-head distance in the probe car according to the first embodiment.
- for vehicles traveling on a road with ups and downs, the monocular inter-vehicle distance, which is the inter-vehicle distance calculated using moving image data obtained by monocular camera imaging, differs from the stereo inter-vehicle distance (an example of a third distance), which is the inter-vehicle distance based on moving image data obtained by stereo camera imaging. Since the monocular inter-vehicle distance is calculated on the assumption that the road is flat, calculating the distance to a vehicle traveling on a road with ups and downs causes an error from the actual inter-vehicle distance.
- since the stereo inter-vehicle distance is calculated based on the principle of triangulation, its error from the actual inter-vehicle distance is small.
- however, the stereo inter-vehicle distance cannot be calculated if there is no texture in the region (detection area) for which the inter-vehicle distance is calculated. Therefore, it is preferable to calculate the actual inter-vehicle distance using both the monocular inter-vehicle distance and the stereo inter-vehicle distance.
- for example, when the inter-vehicle distance between the probe car V1 and the detection target vehicle V2 at point 1 (or point 2) is calculated, the height difference between the probe car V1 and point 1 (or point 2) is small, so the divergence of the monocular inter-vehicle distance and the stereo inter-vehicle distance from the actual inter-vehicle distance is small.
- on the other hand, when the inter-vehicle distance between the probe car V1 and the detection target vehicle V2 at point 3 is calculated, the height difference between the probe car V1 and point 3 is large, so the divergence between the monocular inter-vehicle distance and the stereo inter-vehicle distance increases.
- therefore, the control unit 121 calculates both the monocular inter-vehicle distance and the stereo inter-vehicle distance, and when the difference between the monocular inter-vehicle distance and the stereo inter-vehicle distance is equal to or less than a predetermined value, obtains the inter-vehicle head distance using the monocular inter-vehicle distance.
- when the difference between the monocular inter-vehicle distance and the stereo inter-vehicle distance is larger than the predetermined value, the control unit 121 corrects the monocular inter-vehicle distance based on the stereo inter-vehicle distance, and obtains the inter-vehicle head distance using the corrected monocular inter-vehicle distance.
- for example, when calculating the inter-vehicle distance between the probe car V1 and the detection target vehicle V2 at point 3 (in other words, the detection target vehicle V2 located at the Y-axis coordinate 300 in the frame F), the control unit 121 regards the stereo inter-vehicle distance of 80 m as the monocular inter-vehicle distance, thereby correcting the monocular inter-vehicle distance.
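A minimal sketch of this selection/correction rule (the 5 m divergence threshold is an assumption for illustration; the patent only speaks of "a predetermined value"):

```python
# Sketch of the monocular/stereo correction described above.
def corrected_inter_vehicle_distance(mono_m: float, stereo_m: float,
                                     max_divergence_m: float = 5.0) -> float:
    if abs(mono_m - stereo_m) <= max_divergence_m:
        return mono_m    # near-flat road: the monocular distance is trusted
    return stereo_m      # large divergence: regard the stereo distance
                         # (e.g. 80 m at point 3) as the monocular distance
```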
- the control unit 121 obtains the vehicle length of the vehicle type in the frame F.
- the storage unit 123 stores a standard vehicle length table T that stores each vehicle type and the standard vehicle length of vehicles belonging to that vehicle type in association with each other. When the control unit 121 determines the vehicle type of the vehicle in the frame F, it reads out the standard vehicle length stored in association with the determined vehicle type from the standard vehicle length table T, and specifies that standard vehicle length as the vehicle length of the vehicle in the frame. After that, the control unit 121 calculates the distance obtained by adding the monocular inter-vehicle distance and the specified vehicle length as the inter-vehicle head distance. Thereby, the inter-vehicle head distance can be calculated even when the entire vehicle is not included in the frame.
- in the present embodiment, the control unit 121 uses the standard vehicle length stored in association with the determined vehicle type in the standard vehicle length table T as the vehicle length of the vehicle in the frame, but the method is not limited to this.
- specifically, the control unit 121 obtains the ratio of the number of vehicles determined to be large vehicles (hereinafter referred to as the large vehicle mixing rate) to the number of detection target vehicles V2 whose feature amounts were extracted from frames within a preset time (for example, one week).
- the control unit 121 estimates the vehicle length of the detection target vehicle V2 based on the large vehicle mixing rate, and calculates the sum of the estimated vehicle length and the inter-vehicle distance as the inter-vehicle head distance. For example, among the detection target vehicles V2 whose feature amounts were extracted from the frames, the control unit 121 estimates the vehicle length of the proportion corresponding to the large vehicle mixing rate as the vehicle length of a large vehicle.
- as a result, an inter-vehicle head distance closer to the actual value can be obtained as the driving time of the probe car V1 becomes longer.
- conventionally, the large vehicle mixing rate was measured by a fixed traffic counter (sensor) or, on roads without sensors, calculated manually.
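A minimal sketch of the mixing-rate-based length estimate (the standard lengths are assumptions for illustration): the expected vehicle length is a mixture of the two standard lengths weighted by the observed rate.

```python
# Sketch of the large vehicle mixing rate estimate over a preset window.
def expected_vehicle_length(n_detected: int, n_large: int,
                            large_len_m: float = 12.0,
                            ordinary_len_m: float = 4.5) -> float:
    mixing_rate = n_large / n_detected  # large vehicle mixing rate
    return mixing_rate * large_len_m + (1.0 - mixing_rate) * ordinary_len_m

# e.g. 20 large vehicles out of 100 detected in a week:
# 0.2 * 12.0 + 0.8 * 4.5 = 6.0 m estimated vehicle length
```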
- next, based on the positions of the detection areas of the vehicle in the most recently read frame (hereinafter referred to as the current frame) and in the frames up to a predetermined number (for example, one) before the current frame (hereinafter referred to as past frames), the control unit 121 estimates the position where the vehicle will be detected in the frame to be read next (hereinafter referred to as the next frame) (step S608).
- FIG. 16 is a diagram for explaining an example of a vehicle position estimation process in the probe car according to the first embodiment.
- when detecting a vehicle from the second frame, which is the current frame, the control unit 121 estimates the position of the detection area R2 of the vehicle in the second frame based on the apparent movement amount and movement direction of the detection area R1 of the vehicle detected from the past frame (the first frame). Then, when setting the attention area in the second frame, the control unit 121 sets the estimated detection area R2 or the vicinity of the position of the detection area R2 as the attention area, and does not set areas other than the estimated detection area R2 or its vicinity as the attention area.
- similarly, when detecting a vehicle from the third frame, which is the current frame, the control unit 121 estimates the position of the detection area R4 of the vehicle in the third frame based on the apparent movement amount and movement direction of the detection area R3 of the vehicle detected from the past frame (the second frame). Then, when setting the attention area in the third frame, the control unit 121 sets the estimated detection area R4 or the vicinity of the detection area R4 as the attention area, and does not set areas other than the estimated detection area R4 or its vicinity as the attention area. Thereby, the region to be set as the attention area in the next frame can be narrowed down, and the extraction efficiency of the vehicle feature amount can be increased by extracting the feature amount of the vehicle from the estimated detection area and its vicinity.
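A minimal sketch of this prediction under a constant-velocity assumption (the function and coordinate convention are illustrative; the patent describes the idea only in terms of apparent movement amount and direction):

```python
# Sketch of step S608: predict the next frame's detection-area position
# from the apparent movement between the past and current frames.
def predict_next_position(past_xy, current_xy):
    dx = current_xy[0] - past_xy[0]  # apparent movement amount along X
    dy = current_xy[1] - past_xy[1]  # apparent movement amount along Y
    return current_xy[0] + dx, current_xy[1] + dy

# Attention areas in the next frame are then set only at or near the
# predicted position, which narrows the search and reduces processing load.
```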
- the control unit 121 determines whether or not the feature amount of the vehicle has been extracted from all the frames constituting the moving image data received from the camera 11 (step S609).
- when the feature amount of the vehicle has not been extracted from all the frames, the control unit 121 returns to step S602 and sets an attention area in the next frame, from which the feature amount of the vehicle has not yet been extracted.
- when the feature amount of the vehicle has been extracted from all the frames, the control unit 121 ends the calculation process of the inter-vehicle head distance. Thereafter, the control unit 121 generates distance data using the calculated inter-vehicle head distance, and transmits the distance data to the server S via the base station B.
- in the present embodiment, the control unit 121 performs the processing from step S602 to step S609 shown in FIG. 6, but the present invention is not limited to this; the moving image data received from the camera may be transmitted to the server S, and the server S may be configured to execute part or all of steps S602 to S609 shown in FIG. 6.
- FIG. 17 is a flowchart illustrating an example of a flow of processing for extracting a feature amount of a region of interest in the probe car according to the first embodiment.
- FIG. 18 is a diagram for explaining an example of a feature amount extraction process for a region of interest in the probe car according to the first embodiment.
- when receiving the moving image data, the control unit 121 generates images (hereinafter referred to as multiscale images) in which each frame constituting the moving image data is reduced to different sizes (step S1701). As shown in FIG. 18, in this embodiment, the control unit 121 generates three multiscale images MF1, MF2, and MF3 obtained by reducing the frame F to three different sizes.
- control unit 121 sets a region of interest for each of the frame and the multiscale image (step S1702). At that time, the control unit 121 sets a region of the same size and the same shape (for example, a rectangular shape) as a region of interest for both the frame and the multiscale image.
- control unit 121 extracts the feature amount of the attention area set in the frame and the multiscale image (step S1703).
- the control unit 121 moves the attention area in the entire multiscale image or the setting target area in the multiscale image in the same manner as in the case where the attention area is set for the frame.
- the control unit 121 extracts the feature amount of each region of interest set in the multiscale image.
- using a predetermined dictionary that includes background feature amounts and vehicle feature amounts, the control unit 121 discriminates, from among the attention areas in the frame and the multiscale images, the attention areas from which a vehicle feature amount was extracted and the attention areas from which a background feature amount was extracted (step S1704). Then, the control unit 121 sets the attention areas from which the feature amount of the vehicle was extracted as detection areas.
- if a vehicle included in a multiscale image falls within the attention area, the feature amount extracted from that attention area can be extracted as the feature amount of the vehicle, so the detection accuracy of vehicles included in the frame can be improved.
- in other words, even when the detection target vehicle V2 is far away, the apparent size of the vehicle included in the frame is small, and the feature amount of the vehicle cannot be extracted from the frame as it is, the feature amount of the vehicle included in a multiscale image can be extracted, so the detection accuracy of vehicles included in the frame can be improved.
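A minimal sketch of the pyramid generation in step S1701 (the scale factors are assumptions; the patent fixes only that three reduced sizes are used):

```python
# Sketch of step S1701: reduce the frame to three different sizes so a
# fixed-size attention area can capture vehicles of different apparent sizes.
import cv2

def make_multiscale_images(frame, scales=(0.75, 0.5, 0.25)):
    """Return [MF1, MF2, MF3]: the frame reduced to three different sizes."""
    h, w = frame.shape[:2]
    return [cv2.resize(frame, (int(w * s), int(h * s))) for s in scales]

# The same window is then slid over the frame and over each multiscale image,
# so a distant (apparently small) vehicle fills the window at some scale.
```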
- according to the traffic information detection system of the first embodiment, the inter-vehicle head distance can be calculated even when the entire vehicle is not included in the frame.
- the second embodiment is an example in which a plurality of dictionaries are read from the external storage device for one frame, and the vehicle type of a vehicle is determined based on the similarity between the feature amount of the vehicle included in the frame and the feature amounts included in each of the read dictionaries. In the following description, description of the same parts as those in the first embodiment is omitted.
- FIG. 19 is a flowchart showing an example of the flow of calculation of the inter-head distance by the probe car according to the second embodiment.
- the control unit 121 reads out dictionaries for all vehicle types to be determined from the external storage device 124.
- control unit 121 reads out a plurality of dictionaries (hereinafter referred to as read target dictionaries) for all vehicle types to be determined from the external storage device 124 (step S1901). Next, the control unit 121 calculates the similarity between the feature value of the detection area and the feature value included in one read target dictionary among the plurality of read target dictionaries (step S1902). Then, the control unit 121 determines whether or not the similarity between the feature amount included in each of the plurality of read target dictionaries and the feature amount of the detection area has been calculated.
- when the similarity between the feature amounts included in all the read target dictionaries and the feature amount of the detection area has not yet been calculated, the control unit 121 returns to step S1901 and reads a read target dictionary for which the similarity with the feature amount of the detection area has not been calculated.
- when the similarities have been calculated for all the read target dictionaries, the control unit 121 compares the similarities between the feature amount of the detection area and the feature amounts included in each read target dictionary (step S1903). Then, the control unit 121 determines that the vehicle type corresponding to the read target dictionary including the feature amount with the highest similarity to the feature amount of the detection area is the vehicle type of the vehicle in the frame (step S1904).
- the third embodiment is an example in which an integrated dictionary including the higher-order feature amounts of vehicles belonging to the determination target vehicle types (for example, ordinary vehicles and large vehicles) is stored, and the vehicle type of a vehicle included in the frame is determined based on the similarity between the feature amounts included in the read target dictionaries and the feature amounts of vehicles whose similarity to the higher-order feature amounts included in the integrated dictionary is equal to or greater than a predetermined threshold.
- FIG. 20 is a flowchart showing an example of the flow of calculation of the inter-head distance by the probe car according to the third embodiment.
- FIG. 21 is a diagram for explaining an example of the calculation process of the inter-head distance by the probe car according to the third embodiment.
- the external storage device 124 stores an integrated dictionary including higher-order feature amounts of vehicles belonging to the determination target vehicle type.
- the control unit 121 reads the integrated dictionary from the external storage device 124 (step S2001). Then, the control unit 121 calculates the degree of similarity between the feature amount of the detection area and the higher-order feature amount included in the integrated dictionary (step S2002). Then, the control unit 121 selects a detection region having a feature amount whose similarity with a higher-order feature amount included in the integrated dictionary is equal to or greater than a predetermined threshold among the detection regions (step S2003). Thereafter, the process proceeds to step S1901 and subsequent steps, and the control unit 121 determines the vehicle type included in the frame based on the similarity between the feature amount of the selected detection area and the feature amount included in the read target dictionary.
- on the other hand, for a detection area whose similarity to the higher-order feature amounts included in the integrated dictionary is lower than the predetermined threshold (that is, a detection area that was not selected), the control unit 121 does not determine the vehicle type of the vehicle included in the frame based on the similarity between the feature amount of that detection area and the feature amounts included in the read target dictionaries.
- for example, as shown in FIG. 21, the control unit 121 selects the detection areas 2101 and 2102, which have feature amounts whose similarity to the higher-order feature amounts included in the integrated dictionary is equal to or greater than the predetermined threshold, from among the detection areas in the frame. Then, the control unit 121 determines that the vehicle type of the detection area 2102 (detection target vehicle V2), which has a feature amount whose similarity to the feature amounts included in the ordinary vehicle dictionary is equal to or greater than a predetermined threshold, is an ordinary vehicle.
- similarly, the control unit 121 determines that the vehicle type of the detection area 2101 (detection target vehicle V2), which has a feature amount whose similarity to the feature amounts included in the large vehicle dictionary is equal to or greater than a predetermined threshold, is a large vehicle.
- on the other hand, for the detection areas that were not selected, the control unit 121 prohibits the determination of the vehicle type based on the similarity between the feature amount of the detection area and the feature amounts included in the ordinary vehicle dictionary and the large vehicle dictionary; that is, the read target dictionaries are not read for those areas.
- the processing load due to the determination of the vehicle type can be further reduced, and the processing time required for the determination of the vehicle type can be further shortened.
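A minimal sketch of this two-stage flow (the similarity tables, names, and the 0.4 threshold are assumptions for illustration): a cheap pass against the integrated dictionary selects candidate detection areas, and only those are matched against the per-vehicle-type dictionaries.

```python
# Sketch of the third embodiment: pre-filter detection areas with the
# integrated dictionary, then classify only the selected ones.
def classify_detection_areas(areas, integrated_sim, per_type_sims, threshold=0.4):
    """areas: detection-area ids; integrated_sim: id -> similarity to the
    integrated dictionary; per_type_sims: id -> {vehicle_type: similarity}."""
    results = {}
    for area in areas:
        if integrated_sim[area] < threshold:
            continue  # not selected: vehicle-type determination is prohibited
        sims = per_type_sims[area]
        results[area] = max(sims, key=sims.get)
    return results
```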
- the program executed by the information processing unit 12 of the present embodiment is provided by being incorporated in advance in a ROM or the like.
- the program executed by the information processing unit 12 of the present embodiment may be provided as a file in an installable or executable format, recorded on a computer-readable recording medium such as a CD-ROM, a flexible disk (FD), a CD-R, or a DVD (Digital Versatile Disk).
- the program executed by the information processing unit 12 of the present embodiment may be stored on a computer connected to a network such as the Internet and provided by being downloaded via the network. Further, the program executed by the information processing unit 12 of the present embodiment may be configured to be provided or distributed via a network such as the Internet.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Description
The configuration of the traffic information detection system according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a diagram illustrating an example of the configuration of the traffic information detection system according to the first embodiment. As shown in FIG. 1, the traffic information detection system according to the present embodiment includes a probe car V1, a detection target vehicle V2, a GPS (Global Positioning System) satellite ST, a base station B, and a server S. The probe car V1 (an example of a first vehicle) obtains an inter-vehicle head distance (an example of a second distance), which is the distance from the front end of the probe car V1 to the front end of the detection target vehicle V2 (an example of a second vehicle). The detection target vehicle V2 is a vehicle traveling in front of the probe car V1.
The present embodiment is an example in which, for one frame, a plurality of dictionaries are read from the external storage device, and the vehicle type of a vehicle is determined based on the similarity between the feature amount of the vehicle included in that frame and the feature amounts included in each of the read dictionaries. In the following description, description of the same parts as those in the first embodiment is omitted.
The present embodiment is an example in which an integrated dictionary including the higher-order feature amounts of vehicles belonging to the determination target vehicle types (for example, ordinary vehicles and large vehicles) is stored, and the vehicle type of a vehicle included in the frame is determined based on the similarity between the feature amounts included in the read target dictionaries and the feature amounts of vehicles whose similarity to the higher-order feature amounts included in the integrated dictionary is equal to or greater than a predetermined threshold. In the following description, description of the same parts as those in the second embodiment is omitted.
Claims (12)
1. An on-vehicle device comprising:
a receiving unit that receives a first image obtained by imaging the front of a first vehicle with an imaging unit;
a storage unit that stores a dictionary provided for each vehicle type and including feature amounts of vehicles of that vehicle type;
an extraction unit that extracts a feature amount of a second vehicle in the first image;
a determination unit that reads the dictionary of a determination target vehicle type from the storage unit and determines the vehicle type of the second vehicle based on the similarity between the feature amounts included in the read dictionary and the feature amount of the second vehicle; and
a calculation unit that obtains, based on the first image, a first distance from the front end of the first vehicle to the rear end of the second vehicle, and calculates the sum of the first distance and the vehicle length of a vehicle belonging to the vehicle type determined by the determination unit as a second distance from the front end of the first vehicle to the front end of the second vehicle.
2. The on-vehicle device according to claim 1, wherein the determination target vehicle types include a large vehicle, and the calculation unit obtains the ratio of the number of vehicles determined to be large vehicles to the number of second vehicles whose feature amounts were extracted by the extraction unit within a preset time, estimates the vehicle length of the second vehicle based on the ratio, and calculates the sum of the first distance and the estimated vehicle length as the second distance.
3. The on-vehicle device according to claim 1, wherein the extraction unit detects an end of the second vehicle within a second area that is larger than, and includes, a first area from which the feature amount of the second vehicle was extracted in the first image, corrects the first area so that the end of the first area coincides with the end of the second vehicle, and uses the feature amount of the corrected first area as the feature amount of the second vehicle.
4. The on-vehicle device according to claim 1, wherein the imaging unit is a monocular camera, the receiving unit receives a second image obtained by imaging the front of the first vehicle with a stereo camera, and the calculation unit obtains, based on the second image, a third distance from the front end of the first vehicle to the rear end of the second vehicle, and corrects the first distance based on the third distance when the difference between the first distance and the third distance is larger than a predetermined value.
5. The on-vehicle device according to claim 1, wherein the determination unit reads, from the storage unit, an integrated dictionary including higher-order feature amounts of vehicles subject to detection of the second distance, determines the vehicle type of the second vehicle based on the similarity between the feature amounts included in the dictionary and a feature amount of the second vehicle whose similarity to the higher-order feature amounts included in the integrated dictionary is equal to or greater than a predetermined threshold, and prohibits determination of the vehicle type of the second vehicle based on the similarity between the feature amounts included in the dictionary and a feature amount of the second vehicle whose similarity to the higher-order feature amounts included in the integrated dictionary is lower than the predetermined threshold.
6. The on-vehicle device according to claim 1, wherein the first image is a moving image, and the determination unit reads a different dictionary from the storage unit for each frame constituting the moving image.
7. An inter-vehicle head distance calculation method comprising:
receiving a first image obtained by imaging the front of a first vehicle with an imaging unit;
extracting a feature amount of a second vehicle in the first image;
reading, from a storage unit, a dictionary provided for each vehicle type and including feature amounts of vehicles of a determination target vehicle type;
determining the vehicle type of the second vehicle based on the similarity between the feature amounts included in the read dictionary and the feature amount of the second vehicle;
obtaining, based on the first image, a first distance from the front end of the first vehicle to the rear end of the second vehicle; and
calculating the sum of the first distance and the vehicle length of a vehicle belonging to the determined vehicle type as a second distance from the front end of the first vehicle to the front end of the second vehicle.
8. The inter-vehicle head distance calculation method according to claim 7, wherein the determination target vehicle types include a large vehicle, the method further comprising: obtaining the ratio of the number of vehicles determined to be large vehicles to the number of second vehicles whose feature amounts were extracted within a preset time; and estimating the vehicle length of the second vehicle based on the ratio, and calculating the sum of the first distance and the estimated vehicle length as the second distance.
9. The inter-vehicle head distance calculation method according to claim 7, further comprising: detecting an end of the second vehicle within a second area that is larger than, and includes, a first area from which the feature amount of the second vehicle was extracted in the first image; and correcting the first area so that the end of the first area coincides with the end of the second vehicle, and using the feature amount of the corrected first area as the feature amount of the second vehicle.
10. The inter-vehicle head distance calculation method according to claim 7, wherein the imaging unit is a monocular camera, the method further comprising: receiving a second image obtained by imaging the front of the first vehicle with a stereo camera; and obtaining, based on the second image, a third distance from the front end of the first vehicle to the rear end of the second vehicle, and correcting the first distance based on the third distance when the difference between the first distance and the third distance is larger than a predetermined value.
11. The inter-vehicle head distance calculation method according to claim 7, further comprising reading, from the storage unit, an integrated dictionary including higher-order feature amounts of vehicles subject to detection of the second distance, wherein the vehicle type of the second vehicle is determined based on the similarity between the feature amounts included in the dictionary and a feature amount of the second vehicle whose similarity to the higher-order feature amounts included in the integrated dictionary is equal to or greater than a predetermined threshold, and determination of the vehicle type of the second vehicle based on the similarity between the feature amounts included in the dictionary and a feature amount of the second vehicle whose similarity to the higher-order feature amounts included in the integrated dictionary is lower than the predetermined threshold is prohibited.
12. The inter-vehicle head distance calculation method according to claim 9, wherein the first image is a moving image, and a different dictionary is read from the storage unit for each frame constituting the moving image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2016330733A AU2016330733A1 (en) | 2015-09-30 | 2016-03-07 | Vehicle-mounted device and headway distance calculation method |
AU2020200802A AU2020200802B2 (en) | 2015-09-30 | 2020-02-04 | Vehicle-mounted device and headway distance calculation method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015195319A JP6339058B2 (ja) | 2015-09-30 | 2015-09-30 | 車載器および車頭間距離算出方法 |
JP2015-195319 | 2015-09-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017056525A1 true WO2017056525A1 (ja) | 2017-04-06 |
Family
ID=58423025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/057031 WO2017056525A1 (ja) | 2015-09-30 | 2016-03-07 | 車載器および車頭間距離算出方法 |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP6339058B2 (ja) |
AU (2) | AU2016330733A1 (ja) |
WO (1) | WO2017056525A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489552A (zh) * | 2020-04-24 | 2020-08-04 | 科大讯飞股份有限公司 | 一种车头时距预测方法、装置、设备及存储介质 |
CN113936453A (zh) * | 2021-09-09 | 2022-01-14 | 上海宝康电子控制工程有限公司 | 一种基于车头时距的信息识别方法及系统 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6877636B2 (ja) * | 2018-04-23 | 2021-05-26 | 日立Astemo株式会社 | 車載カメラ装置 |
US11443619B2 (en) | 2018-05-15 | 2022-09-13 | Kabushiki Kaisha Toshiba | Vehicle recognition apparatus and vehicle recognition method |
JP2020086735A (ja) * | 2018-11-21 | 2020-06-04 | 株式会社東芝 | 交通情報取得システム及び交通情報取得方法 |
US11814080B2 (en) | 2020-02-28 | 2023-11-14 | International Business Machines Corporation | Autonomous driving evaluation using data analysis |
US11644331B2 (en) | 2020-02-28 | 2023-05-09 | International Business Machines Corporation | Probe data generating system for simulator |
US11702101B2 (en) | 2020-02-28 | 2023-07-18 | International Business Machines Corporation | Automatic scenario generator using a computer for autonomous driving |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011180934A (ja) * | 2010-03-03 | 2011-09-15 | Ricoh Co Ltd | 車種判別装置及び運転支援装置 |
JP2012021883A (ja) * | 2010-07-14 | 2012-02-02 | Toshiba Corp | 車両間隔検出システム、車両間隔検出方法及び車両間隔検出プログラム |
JP2014002534A (ja) * | 2012-06-18 | 2014-01-09 | Toshiba Corp | 車種判別装置及び車種判別方法 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004127104A (ja) * | 2002-10-04 | 2004-04-22 | Ntt Data Corp | 交通情報予測システム、及びプログラム |
JP4420011B2 (ja) * | 2006-11-16 | 2010-02-24 | 株式会社日立製作所 | 物体検知装置 |
-
2015
- 2015-09-30 JP JP2015195319A patent/JP6339058B2/ja active Active
-
2016
- 2016-03-07 AU AU2016330733A patent/AU2016330733A1/en not_active Abandoned
- 2016-03-07 WO PCT/JP2016/057031 patent/WO2017056525A1/ja active Application Filing
-
2020
- 2020-02-04 AU AU2020200802A patent/AU2020200802B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011180934A (ja) * | 2010-03-03 | 2011-09-15 | Ricoh Co Ltd | 車種判別装置及び運転支援装置 |
JP2012021883A (ja) * | 2010-07-14 | 2012-02-02 | Toshiba Corp | 車両間隔検出システム、車両間隔検出方法及び車両間隔検出プログラム |
JP2014002534A (ja) * | 2012-06-18 | 2014-01-09 | Toshiba Corp | 車種判別装置及び車種判別方法 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489552A (zh) * | 2020-04-24 | 2020-08-04 | 科大讯飞股份有限公司 | 一种车头时距预测方法、装置、设备及存储介质 |
CN113936453A (zh) * | 2021-09-09 | 2022-01-14 | 上海宝康电子控制工程有限公司 | 一种基于车头时距的信息识别方法及系统 |
CN113936453B (zh) * | 2021-09-09 | 2022-08-19 | 上海宝康电子控制工程有限公司 | 一种基于车头时距的信息识别方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
JP6339058B2 (ja) | 2018-06-06 |
AU2020200802B2 (en) | 2022-01-27 |
AU2016330733A1 (en) | 2018-03-15 |
JP2017068712A (ja) | 2017-04-06 |
AU2020200802A1 (en) | 2020-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6339058B2 (ja) | 車載器および車頭間距離算出方法 | |
US10558869B2 (en) | Location specifying apparatus and non-transitory computer-readable medium | |
JP6241422B2 (ja) | 運転支援装置、運転支援方法、および運転支援プログラムを記憶する記録媒体 | |
JP6595182B2 (ja) | マッピング、位置特定、及び姿勢補正のためのシステム及び方法 | |
JP5968064B2 (ja) | 走行レーン認識装置および走行レーン認識方法 | |
CN111507130B (zh) | 车道级定位方法及系统、计算机设备、车辆、存储介质 | |
US9798938B2 (en) | Detecting device, detecting method, and program | |
US20120281881A1 (en) | Method for Estimating the Roll Angle in a Travelling Vehicle | |
KR102441075B1 (ko) | 노면표시기반 차량의 위치추정 방법 및 장치 | |
JP6520740B2 (ja) | 物体検出方法、物体検出装置、およびプログラム | |
TWI504858B (zh) | A vehicle specification measuring and processing device, a vehicle specification measuring method, and a recording medium | |
US11703344B2 (en) | Landmark location estimation apparatus and method, and computer-readable recording medium storing computer program programmed to perform method | |
US20190152487A1 (en) | Road surface estimation device, vehicle control device, and road surface estimation method | |
US20210270625A1 (en) | Follow that car | |
JP6815963B2 (ja) | 車両用外界認識装置 | |
JP2020003463A (ja) | 自車位置推定装置 | |
US20210264170A1 (en) | Compensation for vertical road curvature in road geometry estimation | |
US11443619B2 (en) | Vehicle recognition apparatus and vehicle recognition method | |
JP2020008462A (ja) | 自車位置推定装置 | |
JP2023099851A (ja) | 測定装置、測定方法およびプログラム | |
CN110570680A (zh) | 利用地图信息确定对象位置的方法和系统 | |
AU2019203180B2 (en) | Vehicle recognition apparatus and vehicle recognition method | |
JP2022034051A (ja) | 測定装置、測定方法およびプログラム | |
JPWO2018212286A1 (ja) | 測定装置、測定方法およびプログラム | |
JP2014092935A (ja) | 先行車認識システム、先行車認識方法、及び先行車認識プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16850703 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2016330733 Country of ref document: AU Date of ref document: 20160307 Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16850703 Country of ref document: EP Kind code of ref document: A1 |