AU2020200802B2 - Vehicle-mounted device and headway distance calculation method - Google Patents


Info

Publication number
AU2020200802B2
Authority
AU
Australia
Prior art keywords
vehicle
feature value
image
area
dictionary
Legal status
Active
Application number
AU2020200802A
Other versions
AU2020200802A1
Inventor
Hiroshi Sakai
Toshio Sato
Yoshihiko Suzuki
Yusuke Takahashi
Hideki Ueno
Kentaro Yokoi
Current Assignee
Toshiba Corp
Toshiba Infrastructure Systems and Solutions Corp
Original Assignee
Toshiba Corp
Toshiba Infrastructure Systems and Solutions Corp
Application filed by Toshiba Corp, Toshiba Infrastructure Systems and Solutions Corp
Priority to AU2020200802A
Publication of AU2020200802A1
Application granted
Publication of AU2020200802B2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems

Abstract

A vehicle-mounted device according to one embodiment includes a receiver, a storage, an extracting unit, a determining unit, and a calculator. The receiver receives a first image acquired by capturing an image of a forward direction of a first vehicle with an image capturing unit. The storage stores therein a dictionary provided for each vehicle type, and containing a feature value of a vehicle belonging to the vehicle type. The extracting unit extracts a feature value of a second vehicle included in the first image. The determining unit reads the dictionary corresponding to a vehicle type to be determined from the storage, and determines a vehicle type of the second vehicle based on a similarity between the feature value contained in the read dictionary and the feature value of a second vehicle. The calculator obtains a first distance between a front end of the first vehicle and a rear end of the second vehicle based on the first image, and calculates a sum of the first distance and a length of a vehicle belonging to the vehicle type determined by the determining unit as a second distance between the front end of the first vehicle and a front end of the second vehicle.

Description

DESCRIPTION
VEHICLE-MOUNTED DEVICE AND HEADWAY DISTANCE CALCULATION METHOD
RELATED APPLICATION
This application is a divisional of Australian Patent
Application No. 2016330733, the entire contents of which
are hereby incorporated by reference.
FIELD
[0001] Embodiments of the present invention relate to a
vehicle-mounted device and a headway distance calculation
method.
BACKGROUND
[0002] A pattern recognition technology has been widely
used in object detecting apparatuses for detecting
objects, such as vehicles, included in a camera motion
image that is captured by a camera for capturing an image
of the forward direction of a vehicle. With the pattern
recognition technology, the object detecting apparatus can
detect an object included in a camera motion image, and
calculate the distance between the camera having captured
the image and the detected object.
CITATION LIST
Patent Literature
[0003] Patent Literature 1: Japanese Translation of PCT
International Application Publication No. 2011-529189
Patent Literature 2: Japanese Translation of PCT
International Application Publication No. 2003-532959
SUMMARY OF THE INVENTION
Problem to be Solved by the Invention
[0004] When the entire detected object is not included
in the camera motion image, however, the object detecting
apparatus can only calculate the distance between the
camera and the rear end of the detected object, and cannot
calculate the distance between the camera and the front end
of the detected object.
Means for Solving Problem
[0005] A vehicle-mounted device according to one
embodiment includes a receiver, a storage, an extracting
unit, a determining unit, and a calculator. The receiver
receives a first image acquired by capturing an image of a
forward direction of a first vehicle with an image
capturing unit. The storage stores therein a dictionary
provided for each vehicle type, and containing a feature
value of a vehicle belonging to the vehicle type. The
extracting unit extracts a feature value of a second
vehicle included in the first image. The determining unit
reads the dictionary corresponding to a vehicle type to be
determined from the storage, and determines a vehicle type
of the second vehicle based on a similarity between the
feature value contained in the read dictionary and the
feature value of the second vehicle. The calculator obtains
a first distance between a front end of the first vehicle
and a rear end of the second vehicle based on the first
image, and calculates a sum of the first distance and a
length of a vehicle belonging to the vehicle type
determined by the determining unit as a second distance
between the front end of the first vehicle and a front end
of the second vehicle.
[0005A] A vehicle-mounted device, according to one embodiment, comprises a communication unit, a storage, an extracting unit, a determining unit, and a calculator. The communication unit receives a first image acquired by capturing an image of a forward direction of a first vehicle with an image capturing unit, the first vehicle being a probe car running on a road to obtain street traffic information, generating distance data that is used in generating street traffic information based on a headway distance, and transmitting the distance data to a base station. The storage stores therein a dictionary provided for each vehicle type and containing a standard vehicle length and a feature value of a vehicle belonging to the vehicle type. The extracting unit extracts a feature value of a second vehicle included in the first image. The determining unit reads the dictionary corresponding to a vehicle type to be determined from the storage, and determines a vehicle type of the second vehicle based on a similarity between the feature value contained in the read dictionary and the feature value of the second vehicle. The calculator obtains a clearance between a front end of the first vehicle and a rear end of the second vehicle based on the first image, calculates a length of the second vehicle based on the vehicle type determined by the determining unit and the standard vehicle length belonging to the vehicle type contained in the dictionary read from the storage, and calculates a sum of the clearance and the length of the second vehicle as the headway distance between the front end of the first vehicle and a front end of the second vehicle.
[0005B] A headway distance calculation method, in accordance with an embodiment, comprises: receiving a first image acquired by capturing an image of a forward direction of a first vehicle with an image capturing unit, the first vehicle being a probe car running on a road to obtain street traffic information, generating distance data that is used in generating street traffic information based on a headway distance, and transmitting the distance data to a base station; extracting a feature value of a second vehicle included in the first image; reading a dictionary provided for each vehicle type and containing a standard vehicle length and a feature value of a vehicle belonging to the vehicle type to be determined from a storage; determining a vehicle type of the second vehicle based on a similarity between the feature value contained in the read dictionary and the feature value of the second vehicle; obtaining a clearance between a front end of the first vehicle and a rear end of the second vehicle based on the first image; calculating a length of the second vehicle based on the determined vehicle type and the standard vehicle length belonging to the vehicle type contained in the dictionary read from the storage; and calculating a sum of the clearance and the length of the second vehicle as the headway distance between the front end of the first vehicle and a front end of the second vehicle.
BRIEF DESCRIPTION OF DRAWINGS
[0006] FIG. 1 is a schematic illustrating an exemplary configuration of a traffic information detection system according to a first embodiment of the present invention.
FIG. 2 is a schematic illustrating an exemplary configuration of a probe car included in the traffic information detection system according to the first embodiment.
FIG. 3 is a block diagram illustrating an exemplary functional configuration of an information processing unit included in the probe car according to the first embodiment.
FIG. 4 is a schematic for explaining an exemplary headway distance calculation process performed in the probe car according to the first embodiment.
FIG. 5 is a flowchart illustrating an exemplary general sequence of the headway distance calculation process performed in the probe car according to the first embodiment.
FIG. 6 is a flowchart illustrating an exemplary sequence of the headway distance calculation process performed in the probe car according to the first embodiment.
FIG. 7 is a schematic for explaining an exemplary lane detecting process performed in the probe car according to the first embodiment.
FIG. 8 is a schematic for explaining an exemplary region-of-interest setting process performed in the probe car according to the first embodiment.
FIG. 9 is a schematic for explaining an exemplary detection area correcting process performed by the probe car according to the first embodiment.
FIGS. 10 to 12 are schematics for explaining an exemplary vehicle-type determining process performed by the probe car according to the first embodiment.
FIGS. 13A and 13B are schematics for explaining an exemplary clearance calculation process performed by the probe car according to the first embodiment.
FIG. 14 is a schematic illustrating an example of a clearance calculated by the probe car according to the first embodiment.
FIG. 15 is a schematic illustrating an example of standard vehicle lengths used in the headway distance calculation performed in the probe car according to the first embodiment.
FIG. 16 is a schematic for explaining an example of a vehicle position estimating process performed in the probe car according to the first embodiment.
FIG. 17 is a flowchart illustrating an exemplary sequence of a region-of-interest feature value extracting process performed in the probe car according to the first embodiment.
FIG. 18 is a schematic for explaining an exemplary region-of-interest feature value extracting process performed in the probe car according to the first embodiment.
FIG. 19 is a flowchart illustrating an exemplary sequence of a headway distance calculation process performed by the probe car according to a second embodiment of the present invention.
FIG. 20 is a flowchart illustrating an exemplary sequence of a headway distance calculation process performed by the probe car according to a third embodiment of the present invention.
FIG. 21 is a schematic for explaining an exemplary
headway distance calculation process performed by the probe
car according to the third embodiment.
DETAILED DESCRIPTION
[0007] A traffic information detection system using a
vehicle-mounted device and a headway distance calculation
method according to some embodiments of the present
invention will now be explained with reference to the
appended drawings.
[0008] First Embodiment
A configuration of a traffic information detection
system according to this embodiment of the present
invention will now be explained with reference to FIG. 1.
FIG. 1 is a schematic illustrating an exemplary
configuration of a traffic information detection system
according to a first embodiment of the present invention.
As illustrated in FIG. 1, the traffic information detection
system according to this embodiment includes a probe car V1,
a detection target vehicle V2, a global positioning system
(GPS) satellite ST, a base station B, and a server S. The
probe car V1 (an example of a first vehicle) acquires a
headway distance (an example of a second distance) that is
a distance between the front end of the probe car V1 and the
front end of the detection target vehicle V2 (an example of
a second vehicle). The detection target vehicle V2 is a
vehicle running in front of the probe car V1.
[0009] The GPS satellite ST transmits a GPS signal including time, for example, toward the ground. The base station B is capable of communicating with the probe car V1 wirelessly, and receives data related to the headway distance acquired by the probe car V1 (hereinafter, referred to as distance data). The server S generates street traffic information, such as congestion of vehicles, based on the distance data received from the probe car V1 via the base station B, or environment information (such as weather information) received from a terminal or the like of an environment information provider.
[0010] A configuration of the probe car V1 according to this embodiment will now be explained with reference to FIG. 2. FIG. 2 is a schematic illustrating an exemplary configuration of the probe car included in the traffic information detection system according to the first embodiment. As illustrated in FIG. 2, the probe car V1 according to this embodiment includes a camera 11, an information processing unit 12, a display 13, and a speaker 14. The camera 11 (an example of an image capturing unit) is installed in such a manner that the camera 11 is allowed to capture an image of the area including the front side of the probe car V1 (hereinafter, referred to as a monitored area). The camera 11 transmits motion image data acquired by capturing an image of the monitored area to the information processing unit 12. In this embodiment, the camera 11 includes a monocular camera and a stereo camera, and transmits motion image data acquired by capturing images using these cameras to the information processing unit 12.
[0011] In this embodiment, the camera 11 is installed based on preset image capturing conditions (e.g., at a predetermined height, angle of depression, and rotation angle) so that the image of the monitored area can be captured. In this embodiment, the camera 11 is installed onboard the probe car V1, but the position is not limited thereto, as long as the camera 11 is installed in a manner enabling it to capture an image of the front side of the probe
car V1. For example, the camera 11 may be installed on the roadside where the probe car V1 runs.
[0012] The information processing unit 12 (an example of a vehicle-mounted device) is mounted onboard the probe car V1, together with the camera 11, and is connected to the camera 11 via a wireless communication unit or a cable. The information processing unit 12 receives the motion image data from the camera 11 via the wireless communication unit or the cable. The information processing unit 12 then obtains the headway distance between the front end of the probe car V1 and the front end of the detection target vehicle V2 using the motion image data received from the camera 11. The information processing unit 12 generates distance data that is used in aiding safe driving and in generating street traffic information based on the obtained headway distance, and transmits the distance data to the base station B. In this embodiment, the information processing unit 12 may generate and transmit the distance data to the base station B every time the information processing unit 12 obtains the headway distance, or may also obtain the headway distance a predetermined number of times, and then generate the distance data based on the headway distance covering the predetermined number of times, and transmit the distance data to the base station B. The display 13 is provided as a liquid crystal display (LCD), for example, and is enabled to display various types of information such as distance data or the like generated by the information processing unit 12 (hereinafter, referred to as alert information). The speaker 14 outputs various types of sound information such as sound of the alert information.
[0013] A functional configuration of the information processing unit 12 included in the probe car V1 according
to this embodiment will now be explained with reference to
FIGS. 3 and 4. FIG. 3 is a block diagram illustrating an
exemplary functional configuration of the information
processing unit included in the probe car according to the
first embodiment. FIG. 4 is a schematic for explaining an
exemplary headway distance calculation process performed in
the probe car according to the first embodiment.
[0014] As illustrated in FIG. 3, in this embodiment, the
information processing unit 12 includes a controller 121, a
communication interface (I/F) unit 122, a storage 123, and
an external storage device 124. The controller 121
controls the overall information processing unit 12. In
this embodiment, the controller 121 is provided as a
microcomputer including a micro-processing unit (MPU), and performs processes such as controlling
the overall information processing unit 12 and calculating
a headway distance, by executing a control program stored
in the storage 123 to be described later.
[0015] As illustrated in FIG. 4, the controller 121 also
calculates the distance between the front end of the probe
car V1 and the rear end of the detection target vehicle V2
(hereinafter, referred to as a clearance; an example of a
first distance), using the motion image data received from
the camera 11. The controller 121 also determines the
vehicle type of the detection target vehicle V2 using the
motion image data received from the camera 11. The
controller 121 then calculates a sum of the calculated
clearance and the vehicle length of the vehicle type of the
detection target vehicle V2, as a headway distance. In
this embodiment, for each vehicle type of detection target
vehicles V2, the length of a vehicle belonging to a vehicle
type (hereinafter, referred to as a standard vehicle
length) is set in advance. The controller 121 then uses
the standard vehicle length of the determined vehicle type as the vehicle length of the detection target vehicle V2, in calculating the headway distance. Alternatively, the controller 121 may calculate the clearance to the detection target vehicle V2 and determine the vehicle type, and transmit the calculated clearance and the determined vehicle type to the server S via the base station B. The server S may then acquire the standard vehicle length of the detection target vehicle V2 based on the received vehicle type, and calculate the headway distance by adding the standard vehicle length and the received clearance.
[0016] The communication I/F unit 122 can communicate
with external devices such as the camera 11, the display 13,
and the speaker 14. The communication I/F unit 122 also
exchanges various types of information, such as distance data, with the base station B over wireless communication. The storage 123 includes a read-only memory
(ROM), a random access memory (RAM), a flash ROM, and a
video random access memory (VRAM). The ROM is a non-volatile storage unit for storing therein various types of
information such as a control program executed by the
controller 121. The RAM is used as a working area of the
controller 121 and temporarily stores therein various
types of information. The flash ROM is a non-volatile
storage unit storing therein information such as setting
information set to the information processing unit 12. The
VRAM stores therein data such as motion image data received
from the camera 11.
[0017] The external storage device 124 is a storage
device having a large capacity, examples of which include a
hard disk drive (HDD) and a solid state drive (SSD).
Specifically, the external storage device 124 (an example
of a storage) stores therein a dictionary provided for each
vehicle type of the detection target vehicles V2. The dictionary stores therein the feature value of a vehicle belonging to the vehicle type.
[0018] In this embodiment, the external storage device
124 stores therein a plurality of dictionaries including a
standard-sized vehicle dictionary and a large-sized vehicle
dictionary. The standard-sized vehicle dictionary contains
the feature value of a vehicle belonging to the category of
a standard-sized vehicle. The large-sized vehicle
dictionary contains the feature value of vehicles belonging
to the category of a large-sized vehicle. Specifically,
the standard-sized vehicle dictionary contains a high-order
feature value (high-order local autocorrelation feature)
that is a high-order extension of the feature values of the
vehicles belonging to the standard-sized vehicles. The
large-sized vehicle dictionary contains a high-order
feature value that is a high-order extension of the feature
values of the vehicles belonging to the large-sized
vehicles.
[0019] The headway distance calculation process
performed by the probe car V1 according to this embodiment
will now be generally explained with reference to FIG. 5.
FIG. 5 is a flowchart illustrating an exemplary general
sequence of the headway distance calculation process
performed in the probe car according to the first
embodiment.
[0020] The controller 121 receives motion image data
acquired by capturing an image of the monitored area with
the camera 11, from the camera 11 via the communication I/F
unit 122 (Step S501). The controller 121 then detects a
vehicle included in a frame of the received motion image
data (Step S502).
[0021] The controller 121 calculates a clearance based
on the position or the like of the vehicle included in a frame (Step S503). The controller 121 also determines the vehicle type of the vehicle included in the frame based on the feature value of the vehicle (Step S504).
[0022] The controller 121 then estimates the standard vehicle length of the determined vehicle type as the length of the vehicle included in the frame (Step S505). The controller 121 then calculates the sum of the calculated clearance and the estimated vehicle length, as the headway distance (Step S506).
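For illustration only, the general sequence of FIG. 5 can be sketched in Python as follows. Every name in the sketch (the placeholder helpers, the vehicle-type labels, and the lengths) is an assumption standing in for the processes of Steps S502 to S506, not something specified by the embodiment.

    # Skeleton of the general sequence of FIG. 5.  The helpers below are
    # trivial stand-ins so the sketch runs; the real processes are those
    # described in paragraphs [0020] to [0022].
    def detect_vehicle(frame):              # Step S502 (placeholder)
        return {"bbox": (100, 200, 80, 60)}

    def calculate_clearance(frame, veh):    # Step S503 (placeholder)
        return 25.0                         # meters

    def determine_vehicle_type(veh):        # Step S504 (placeholder)
        return "standard_sized"

    STANDARD_LENGTH_M = {"standard_sized": 4.5, "large_sized": 12.0}

    def headway_distance(frame):
        veh = detect_vehicle(frame)
        clearance = calculate_clearance(frame, veh)
        length = STANDARD_LENGTH_M[determine_vehicle_type(veh)]  # Step S505
        return clearance + length                                # Step S506

    print(headway_distance(frame=None))     # -> 29.5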
[0023] The headway distance calculation process performed by the probe car V1 according to this embodiment will now be explained in detail with reference to FIG. 6. FIG. 6 is a flowchart illustrating an exemplary sequence of the headway distance calculation process performed in the probe car according to the first embodiment.
[0024] The controller 121 (an example of a receiver) receives motion image data acquired by capturing an image of the monitored area with the camera 11 (an example of a first image) from the camera 11 via the communication I/F unit 122 (Step S601). In this embodiment, the controller 121 receives both of the motion image data acquired by capturing an image of the monitored area with the stereo camera (an example of a second image), and the motion image data acquired by capturing an image of the monitored area with the monocular camera.
[0025] The controller 121 then sets an area for extracting a feature value (hereinafter, referred to as an area of interest) to a frame (an example of a first frame) of the received motion image data (Step S602). In this embodiment, the controller 121 sets a rectangular area that is smaller in size than the frame, as the area of interest.
[0026] The controller 121 then extracts a feature value
from the set area of interest (Step S603). Specifically, the controller 121 extracts low-order edge information, or a high-order feature value, such as a histogram of oriented gradients (HOG) feature value, or a co-occurrence histogram of oriented gradients (CoHOG) feature value, based on the luminance information of the area of interest. In this embodiment, the controller 121 moves the area of interest throughout the frame, and extracts the feature values of a plurality of areas of interest. In this manner, the controller 121 (an example of an extracting unit) extracts the feature value of a vehicle (detection target vehicle
V2) included in the frame. The controller 121 then uses,
among a plurality of areas of interest, the area of
interest from which the feature value of a vehicle is
extracted as a vehicle detection area.
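As a concrete illustration of Steps S602 and S603, the following Python sketch slides a fixed-size area of interest over a grayscale frame and computes a HOG feature value for each position, using scikit-image as one common HOG implementation. The window size, stride, and HOG parameters are assumed values.

    import numpy as np
    from skimage.feature import hog  # HOG is one of the feature values named in [0026]

    def extract_window_features(gray, win=(64, 64), stride=16):
        """Slide a rectangular area of interest over the frame and
        compute a HOG feature value for each position."""
        feats = []
        H, W = gray.shape
        for y in range(0, H - win[0] + 1, stride):
            for x in range(0, W - win[1] + 1, stride):
                window = gray[y:y + win[0], x:x + win[1]]
                f = hog(window, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2))
                feats.append(((x, y), f))
        return feats

    # Example on a synthetic frame:
    frame = np.random.rand(120, 160)
    print(len(extract_window_features(frame)))   # number of areas of interest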
[0027] A vehicle detection process performed by the
probe car V1 according to this embodiment will now be
explained using FIGS. 7 to 9. FIG. 7 is a schematic for
explaining an exemplary lane detecting process performed in
the probe car according to the first embodiment. FIG. 8 is
a schematic for explaining an exemplary region-of-interest
setting process performed in the probe car according to the
first embodiment. In FIG. 8, the upper left corner of a
frame F is established as a point of origin O, and the
vertical direction of the frame F is established as a Y
axis. The horizontal direction of the frame F is
established as an X axis. FIG. 9 is a schematic for
explaining an exemplary detection area correcting process
performed by the probe car according to the first
embodiment.
[0028] For example, as illustrated in FIG. 7, the
controller 121 detects border lines L1, L2, and L3 (for example, white lines) between lanes A1 and A2 and the areas other than the lanes A1, A2 included in the frame. The controller 121 then detects the area surrounded by the detected border lines L1, L2, L3 as the lanes A1, A2, and establishes the lanes A1, A2 as the set target areas (an example of a predetermined image) that are areas to which an area of interest is set.
[0029] The controller 121 then sets the set target areas as the areas of interest, and does not set the area other than the set target areas as the areas of interest. In other words, the controller 121 prohibits any area other than the set target areas from being set as an area of interest. In this embodiment, the controller 121 sets the entire lanes A1, A2 surrounded by the border lines L1, L2, L3 as the set target areas, but the controller 121 may also set only the lane A1 of the driveway on which the probe car V1 runs as the set target area.
[0030] In other words, the controller 121 detects the lanes A1, A2 included in the frame, and extracts the feature value of a vehicle from the lanes A1, A2 included in the frame, but does not extract the feature value from the areas of the frame other than the lanes A1, A2. In this manner, because the feature value is not extracted from the areas of the frame other than the lanes A1, A2, the processing load required in extracting the feature value from the frame can be reduced.
[0031] The controller 121 may also establish the entire frame as the set target area, as illustrated in FIG. 8. In such a case, as illustrated in FIG. 8, the controller 121 extracts the feature value from a plurality of areas of interest TA inside of the frame, by moving the area of interest TA across the entire frame, starting from the point of origin O as a start point. Among the areas of interest TA, the controller 121 establishes the area of interest TA from which the feature value of a vehicle is extracted as a detection area where a vehicle is detected.
[0032] Based on the feature value of a search area (an example of a second area), the controller 121 detects ends of the vehicle in the search area. The search area is an area that is larger than and including the detection area in which a vehicle is detected (an example of a first area) in the frame. The controller 121 then corrects the detection area in such a manner that the ends of the detected vehicle are matched with respective ends of the detection area. And the controller 121 establishes the feature value of the corrected detection area as the feature value of the vehicle extracted from the frame. The position of the detection area in the frame may be offset from the actual position of the vehicle. In particular, when the lower end of the detection area is offset from the actual lower end of the vehicle, a large error will be introduced to the calculation result of the clearance. To address this issue, in this embodiment, because the detection area is corrected in such a manner that the ends of the detection area are matched with the respective ends of the vehicle, it is possible to prevent the position of the detection area from being offset from the position of the actual vehicle. Therefore, an error introduced to the clearance calculation result can be reduced.
[0033] For example, as illustrated in FIG. 9, when the ends of a detection area 701 in the horizontal direction are not matched with the respective ends of a vehicle 703 in the horizontal direction, the controller 121 sets a search area 702. The search area 702 is larger in size than the detection area 701 in the horizontal direction, and includes the detection area 701. The controller 121 then detects the ends E1, E2 of the vehicle 703 in the horizontal direction based on a distribution 704 of information in the depth direction (distance) of the area corresponding to the search area 702, in the frame F. The frame F is a frame of the motion image data acquired by capturing images using the stereo camera. The controller 121 then corrects the detection area 701 in such a manner that the ends E1, E2 of the vehicle 703 in the horizontal direction are matched with the respective ends of the detection area 701 in the horizontal direction.
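The end detection of [0033] can be illustrated with a short sketch that scans the depth profile of the search area. The column-median heuristic and the tolerance value are assumptions; the patent only says the ends are detected from the distribution of depth information.

    import numpy as np

    def vehicle_ends_from_depth(depth, x0, x1, y0, y1, tol=1.5):
        """Find the horizontal ends E1, E2 of the vehicle inside the
        search area from the depth distribution of the stereo frame.
        tol (meters) is an assumed tolerance for 'same surface'."""
        strip = depth[y0:y1, x0:x1]
        col = np.nanmedian(strip, axis=0)   # depth profile per column
        rear = np.nanmin(col)               # nearest surface = vehicle rear
        cols = np.flatnonzero(np.abs(col - rear) < tol)
        if cols.size == 0:
            return None
        return int(x0 + cols[0]), int(x0 + cols[-1])   # E1, E2

    # Synthetic example: road at 60 m, vehicle spanning columns 40-79 at 20 m.
    depth = np.full((100, 160), 60.0)
    depth[:, 40:80] = 20.0
    print(vehicle_ends_from_depth(depth, 20, 120, 30, 90))   # -> (40, 79)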
[0034] Referring back to FIG. 6, the controller 121 (an
example of a determining unit) reads a different dictionary
for each frame from the external storage device 124 (Step
S604). In this manner, the size of the memory used in
reading the dictionaries can be reduced, because the number
of dictionaries that are read in determining the vehicle
type of a vehicle included in the frame can be reduced
during the headway distance calculation process. Therefore,
the processing load required in determining the vehicle
type can also be reduced, and the processing time required
in determining the vehicle type can be reduced. In this
embodiment, the controller 121 reads a different dictionary
for each frame, but the embodiment is not limited thereto,
as long as some dictionaries corresponding to the vehicle
types to be determined (such as those of the standard-sized
vehicles and the large-sized vehicles) are read from the
external storage device 124. For example, the controller
121 may read all of the dictionaries corresponding to the
vehicle types to be determined, for each of the frames.
[0035] When persons are to be detected from a frame as
objects, the objects belonging to the same category, e.g.,
a standing person, a squatting person, a person with
his/her legs open, and a person with his/her legs closed,
may result in different feature values depending on the
condition of the objects. To address this issue, each of the dictionaries stored in the external storage device 124 contains the feature values of the corresponding objects that are included in a plurality of respective images that include the objects (e.g., vehicles) of the same category (e.g., vehicle type) in different conditions.
[0036] Furthermore, in this embodiment, the controller 121 reads the dictionaries repeatedly from the external storage device 124 in a predetermined order (in the order of the standard-sized vehicle dictionary first, and the large-sized vehicle dictionary second, for example). In other words, the controller 121 reads the standard-sized vehicle dictionary and the large-sized vehicle dictionary alternatingly from the external storage device 124. The controller 121 reads the dictionaries repeatedly from the external storage device 124 until the transmission of the motion image data from the camera 11 stops. The controller 121 then calculates the similarity between the feature value of the detection area (the feature value of the vehicle included in the frame) and the feature value contained in the read dictionary (Step S605). In this embodiment, the controller 121 reads the two dictionaries alternatingly from the external storage device 124, but when the vehicle types to be determined have different speeds, the controller 121 may repeat a pattern of reading the dictionary of the vehicle type having a higher speed for N frames successively, and then reading the dictionary of the vehicle type having a lower speed for one frame.
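The alternating read order, and the N-frames variant for vehicle types with different speeds, amount to a simple schedule. A minimal sketch follows, with assumed dictionary names and an assumed N:

    def dictionary_schedule(fast="large_sized", slow="standard_sized", n=1):
        """Yield which dictionary to read for each successive frame:
        the faster vehicle type's dictionary for n frames, then the
        slower one for a single frame, repeating.  n=1 reproduces the
        basic alternating scheme; the names and n are assumptions."""
        while True:
            for _ in range(n):
                yield fast
            yield slow

    sched = dictionary_schedule(n=2)
    print([next(sched) for _ in range(6)])
    # ['large_sized', 'large_sized', 'standard_sized',
    #  'large_sized', 'large_sized', 'standard_sized']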
[0037] The controller 121 (an example of the determining unit) then determines the vehicle type of the vehicle included in the frame based on the calculated similarity (Step S606). In this embodiment, if the calculated
similarity is equal to or higher than a predetermined threshold, the controller 121 determines the vehicle type corresponding to the read dictionary as the vehicle type of the vehicle included in the frame. The predetermined threshold is a lower bound of the similarity at which a vehicle is determined to be a vehicle belonging to the vehicle type corresponding to the dictionary. Furthermore, in this embodiment, the controller 121 retains the feature values of vehicles extracted from a plurality of frames until all of the dictionaries corresponding to the vehicle types to be determined are read, and determines the vehicle types of the vehicles whose feature values are retained based on the similarities between the retained feature values of the vehicles and the feature values contained in the respective dictionaries.
[0038] An exemplary vehicle-type determining process performed by the probe car V1 according to this embodiment will now be explained using FIGS. 10 to 12. FIGS. 10 to 12 are schematics for explaining an exemplary vehicle-type determining process performed by the probe car according to the first embodiment.
[0039] As illustrated in FIG. 10, if the similarity between the feature value of the vehicle extracted from the frame and the feature value contained in the standard-sized vehicle dictionary is "0.3", and if the similarity between the feature value of the vehicle extracted from the frame and the feature value contained in the large-sized vehicle dictionary is "0.5", the controller 121 determines the vehicle type of the vehicle extracted from the frame as the "large-sized vehicle". The "large-sized vehicle" is the vehicle type corresponding to the dictionary (in this example, the large-sized vehicle dictionary) containing a feature value that is more similar to the feature value of
the vehicle extracted from the frame.
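The threshold test of [0037] combined with the comparison of FIG. 10 can be illustrated as follows; the threshold of 0.4 is an assumed value, since the patent only requires a predetermined threshold.

    def determine_vehicle_type(similarities, threshold=0.4):
        """Return the vehicle type whose dictionary is most similar to
        the extracted feature value, or None when no similarity reaches
        the threshold.  The 0.4 threshold is an illustrative assumption."""
        vtype, best = max(similarities.items(), key=lambda kv: kv[1])
        return vtype if best >= threshold else None

    # Values from FIG. 10: the large-sized dictionary wins.
    print(determine_vehicle_type({"standard_sized": 0.3, "large_sized": 0.5}))
    # -> large_sized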
[0040] Alternatively, the controller 121 acquires a
three-dimensional shape of the detection target vehicle V2
that is acquired by capturing an image of the monitored
area with the stereo camera. The controller 121 may then
determine the vehicle type of the detection target vehicle
V2 by matching the width and the height of the detection
target vehicle V2 included in the acquired three-dimensional shape with predetermined criteria for
classifying vehicles into vehicle types, as illustrated in
FIG. 11. The predetermined criteria for classifying
vehicles into vehicle types include the width and the
height of each vehicle type. Alternatively, as illustrated
in FIG. 12, the controller 121 may also determine the
vehicle type of the detection target vehicle V2 based on
the vehicle type information specified in the license plate
of the vehicle included in the frame.
[0041] Referring back to FIG. 6, the controller 121 (an
example of a calculator) obtains the clearance between the
front end of the probe car V1 and the rear end of the
detection target vehicle V2, based on the frame included in
the motion image data. The controller 121 also calculates
the distance that is the sum of the clearance and the
vehicle length of the determined vehicle type of the
detection target vehicle V2 as the headway distance (Step
S607).
[0042] An exemplary headway distance calculation process
performed by the probe car V1 according to this embodiment
will now be explained using FIGS. 13A, 13B, 14, and 15.
FIGS. 13A and 13B are schematics for explaining an
exemplary clearance calculation process performed by the
probe car according to the first embodiment. In FIG. 13A,
the upper left corner of the frame F is used as the point
of origin O, and the vertical direction of the frame F is used as the Y axis. The horizontal direction of the frame
F is used as the X axis. FIG. 14 is a schematic
illustrating an example of the clearance calculated by the
probe car according to the first embodiment. FIG. 15 is a
schematic illustrating an example of the standard vehicle
lengths used in the headway distance calculation performed
in the probe car according to the first embodiment.
[0043] When the clearance between vehicles running on a road with an uphill or a downhill is calculated, a monocular clearance differs from a stereo clearance (an example
of a third distance). A monocular clearance is a clearance
calculated using the motion image data acquired by
capturing an image with the monocular camera. A stereo
clearance is a clearance that is based on the motion image
data acquired by capturing an image with the stereo camera.
Because a monocular clearance is calculated assuming that
the road is flat, an error is introduced to the calculation
of the clearance between vehicles running on a road with an
uphill or a downhill, with respect to the actual clearance.
By contrast, because a stereo clearance is calculated based
on the principle of triangulation, the error with respect
to the actual clearance is kept small. When calculated is
a stereo clearance, however, the clearance cannot be
calculated when the area in which the clearance is
calculated (detection area) has no texture. Therefore, it
is preferable to calculate the actual clearance using both
of the monocular clearance and the stereo clearance.
[0044] For example, as illustrated in FIGS. 13A and 13B,
when the clearance between the probe car V1 and the detection target vehicle V2 at point 1 (or point 2) is calculated, the elevation difference between the probe car V1 and point 1 (or point 2) is small.
Therefore, neither the monocular clearance nor the stereo
clearance deviates very much from the actual clearance. By
contrast, when the clearance between the probe car V1 and the detection target vehicle V2 at point 3 is calculated, because the elevation difference between the probe car V1 and point 3 is large, the monocular clearance deviates from the stereo clearance by a larger degree. To address this issue, the controller 121 obtains both the monocular clearance and the stereo clearance. If the difference between the monocular clearance and the stereo clearance is equal to or less than a predetermined value, the controller 121 obtains the headway distance using the monocular clearance. If the difference between the monocular clearance and the stereo clearance is greater than the predetermined value, the controller 121
corrects the monocular clearance based on the stereo
clearance. The controller 121 then acquires the headway
distance using the corrected monocular clearance.
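A minimal sketch of this selection logic follows. Adopting the stereo clearance outright when the two disagree is one assumed reading of "corrects the monocular clearance based on the stereo clearance"; it matches the FIG. 14 example, but the patent leaves the exact correction rule open.

    def fuse_clearance(monocular_m, stereo_m, max_diff_m=10.0):
        """[0044]: use the monocular clearance when it agrees with the
        stereo clearance to within a predetermined value, otherwise
        correct it with the stereo value.  max_diff_m=10.0 follows the
        FIG. 14 example in [0045]."""
        if abs(monocular_m - stereo_m) <= max_diff_m:
            return monocular_m
        return stereo_m   # corrected monocular clearance (assumed rule)

    print(fuse_clearance(60.0, 80.0))   # FIG. 14 case -> 80.0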
[0045] For example, as illustrated in FIG. 14, assume that the controller 121 calculates the clearance between the probe car V1 and the detection target vehicle V2 at point 3 (that is, the detection target vehicle V2 positioned at the coordinate 300 on the Y axis in the frame F). The difference of 20 m between the monocular clearance of 60 m and the stereo clearance of 80 m is greater than the predetermined value of 10 m. Therefore, the controller 121 corrects the monocular clearance by adopting the stereo clearance of 80 m as the monocular clearance.
[0046] The controller 121 then acquires the vehicle
length of the vehicle type to which the vehicle included in
the frame F belongs. As illustrated in FIG. 15, in this
embodiment, the storage 123 stores therein a standard
vehicle length table T. The standard vehicle length table
T stores therein vehicle types in a manner associated
with the standard vehicle length of a vehicle belonging to
the vehicle type. Once the controller 121 determines the
vehicle type of the vehicle included in the frame F, the
controller 121 reads the standard vehicle length stored in
a manner associated with the determined vehicle type from
the standard vehicle length table T. The controller 121 identifies the standard vehicle length as the vehicle
length of the vehicle included in the frame. The
controller 121 then calculates the distance that is a sum
of the monocular clearance and the identified vehicle
length as the headway distance. In this manner, a headway
distance can be calculated even when the entire vehicle is
not included in a frame.
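A sketch of the lookup and sum of [0046] follows, with assumed (illustrative) standard vehicle lengths for table T; the patent does not give numeric lengths.

    # Assumed contents of the standard vehicle length table T (FIG. 15).
    TABLE_T = {"standard_sized": 4.5, "large_sized": 12.0}   # meters

    def headway_from_clearance(clearance_m, vehicle_type):
        """[0046]: headway distance = clearance + standard vehicle
        length read from table T for the determined vehicle type."""
        return clearance_m + TABLE_T[vehicle_type]

    print(headway_from_clearance(80.0, "large_sized"))   # -> 92.0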
[0047] In this embodiment, the controller 121 determines
the standard vehicle length stored in the standard vehicle
length table T in a manner associated with the determined
vehicle type, as the vehicle length of the vehicle in the
frame, but the embodiment is not limited thereto. For
example, when the vehicle type to be determined includes a
large-sized vehicle, the controller 121 may acquire a ratio
of the number of vehicles determined to be large-sized
vehicles, to the number of detection target vehicles V2 whose feature values have been extracted from the frames,
over a preset time (e.g., 1 week) (hereinafter, referred to
as a large-sized vehicle ratio).
[0048] The controller 121 then estimates the vehicle
length of the detection target vehicle V2 based on the
large-sized vehicle ratio, and calculates the sum of the estimated vehicle length and the clearance as the headway distance. For example, among the detection target vehicles V2 whose feature values are extracted from the frames, the controller 121 estimates the vehicle length of the detection target vehicles V2 corresponding to the large-sized vehicle ratio as the vehicle length of a large-sized vehicle.
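The ratio-based estimate of [0047] and [0048] can be sketched as follows. Treating the estimated length as a ratio-weighted average of the standard lengths is one plausible reading of the embodiment, stated here as an assumption, and the counts and lengths are illustrative.

    def large_sized_vehicle_ratio(n_large, n_total):
        """[0047]: ratio of vehicles determined to be large-sized to all
        detection target vehicles observed over the preset time."""
        return n_large / n_total

    def expected_vehicle_length(ratio, table):
        """[0048]: estimate a vehicle length from the large-sized vehicle
        ratio (assumed here to be a ratio-weighted average)."""
        return (ratio * table["large_sized"]
                + (1.0 - ratio) * table["standard_sized"])

    table = {"standard_sized": 4.5, "large_sized": 12.0}
    r = large_sized_vehicle_ratio(250, 1000)     # e.g. one week of frames
    print(expected_vehicle_length(r, table))     # -> 6.375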
[0049] In this manner, when a ratio of probe cars V1
running on the road is low, a reliable headway distance
can be acquired. Furthermore, with the method for
calculating the headway distance using the large-sized
vehicle ratio, when the time over which the probe car V1 is
operated becomes longer, a headway distance that is closer
to the actual value can be obtained. Furthermore, the
large-sized vehicle ratio has conventionally been
calculated using the measurements of fixed traffic counters
(sensors), or manually calculated for the road without any
sensors, but the labor and the cost required in installing
sensors or the like can be reduced by determining the
large-sized vehicles using the feature values of the
vehicles included in the frames and the feature value
contained in the dictionary.
[0050] Referring back to FIG. 6, the controller 121
estimates the position at which the vehicle is to be
detected in the frame that is to be read next (hereinafter,
referred to as a next frame) based on the position of the
vehicle detection area (Step S608). The vehicle detection
area is an area in the frames between the last frame having
been read (hereinafter, referred to as a current frame) and
the frame at a predetermined number of frames (e.g., one)
previous to the current frame (hereinafter, referred to as
a past frame).
[0051] FIG. 16 is a schematic for explaining an example
of a vehicle position estimating process performed in the
probe car according to the first embodiment. As
illustrated in FIG. 16, when a vehicle is to be detected
from a second frame that is the current frame, the
controller 121 estimates the position of a detection area
R2 corresponding to the vehicle in the second frame, based
on the apparent amount and direction by which the position of the detection area R1 of the vehicle detected
in the past frame (first frame) moves. To set the area of
interest to the second frame, the controller 121 sets the area of interest to the estimated detection area R2 or a position near the detection area R2, but does not set the area of interest to any area other than the estimated detection area R2 or the area near it.
[0052] As illustrated in FIG. 16, when a vehicle is to
be detected from a third frame that is the current frame,
the controller 121 estimates the position of a detection
area R4 corresponding to a vehicle in the third frame based
on the apparent amount and direction by which a
detection area R3 corresponding to the vehicle detected in
the past frame (second frame) moves. To set the area of
interest to the third frame, the controller 121 sets the area of interest to the estimated detection area R4 or a position near the detection area R4, and does not set the area of interest to any area other than the estimated detection area R4 or the areas near it. In this manner, the areas to be set as the area
of interest in the next frame can be narrowed down, and the
extraction of the feature value of a vehicle can be focused
on the estimated detection area and the area near the
estimated detection area. Therefore, the efficiency of the
vehicle feature value extraction can be improved.
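The estimation of [0051] and [0052] can be sketched with a constant-velocity motion model; the model and the pixel margin are assumptions, since the patent only speaks of the apparent amount and direction of motion.

    def estimate_next_area(past_box, current_box):
        """Estimate the detection area in the next frame from its
        apparent motion between the past and current frames.  Boxes are
        (x, y, w, h); constant velocity is an assumed model."""
        dx = current_box[0] - past_box[0]
        dy = current_box[1] - past_box[1]
        return (current_box[0] + dx, current_box[1] + dy,
                current_box[2], current_box[3])

    def may_set_area_of_interest(pos, est_box, margin=20):
        """Allow the area of interest only inside or near the estimated
        detection area; margin (pixels) is an assumed value."""
        ex, ey, ew, eh = est_box
        x, y = pos
        return (ex - margin <= x <= ex + ew + margin
                and ey - margin <= y <= ey + eh + margin)

    r = estimate_next_area((100, 80, 60, 50), (110, 90, 60, 50))
    print(r, may_set_area_of_interest((125, 105), r))   # near the estimate -> True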
[0053] The controller 121 then determines whether the
feature values of vehicles have been extracted from all of
the frames included in the motion image data received from
the camera 11 (Step S609). If the feature values of
vehicles have not been extracted from all of the frames (No at Step S609), the controller 121 shifts the process back to Step S602, and sets the area of interest in the next frame from which the feature value of a vehicle has not been extracted yet, among the frames included in the motion image data. If the feature values of vehicles have been extracted from all of the frames (Yes at Step S609), the controller 121 ends the headway distance calculation process. The controller 121 then generates distance data using the calculated headway distance, and transmits the distance data to the server S via the base station B.
[0054] In this embodiment, the process from Step S602 to
Step S609 illustrated in FIG. 6 is performed by the
controller 121, but the embodiment is not limited thereto.
It is also possible for the controller 121 to transmit the
motion image data received from the camera to the server S,
and for the server S to perform a part of or the entire
process from Step S602 to Step S609 illustrated in FIG. 6.
[0055] The process of extracting a feature value from
the area of interest, illustrated as Step S602 and Step
S603 in FIG. 6, will now be explained in detail with
reference to FIGS. 17 and 18. FIG. 17 is a flowchart
illustrating an exemplary sequence of the region-of-interest feature value extracting process performed in the
probe car according to the first embodiment. FIG. 18 is a
schematic for explaining the exemplary region-of-interest
feature value extracting process performed in the probe car
according to the first embodiment.
[0056] When the controller 121 receives the motion image
data, the controller 121 generates images by reducing the
size of each frame included in the motion image data into
different sizes (hereinafter, referred to as multi-scale
images) (Step S1701). As illustrated in FIG. 18, in this
embodiment, the controller 121 generates three multi-scale images MF1, MF2, and MF3 that are reductions of the frame F in three different sizes.
[0057] The controller 121 then sets the area of interest
to each of the frame and the multi-scale images (Step
S1702). At this time, the controller 121 sets an area
having the same size and the same shape (e.g., a
rectangular area) to each of the frame and the multi-scale
images, as the area of interest.
[0058] The controller 121 then extracts the feature
value from the area of interest set to each of the frame
and the multi-scale images (Step S1703). In this
embodiment, the controller 121 moves the area of interest
across the entire multi-scale image, or across the set
target area in the multi-scale images, in the same manner
as when the area of interest is set to a frame. The
controller 121 then extracts the feature value of each of
such areas of interest set in the multi-scale images.
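Step S1701 can be illustrated with OpenCV's resize; the three scale factors are assumed, since the patent only states that the frame is reduced into three different sizes.

    import cv2
    import numpy as np

    def multi_scale_images(frame, scales=(0.75, 0.5, 0.25)):
        """Step S1701: generate reduced copies MF1, MF2, MF3 of the
        frame.  The same fixed-size area of interest is then slid over
        the frame and over each reduced image (Steps S1702-S1703)."""
        return [cv2.resize(frame, None, fx=s, fy=s,
                           interpolation=cv2.INTER_AREA) for s in scales]

    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print([m.shape[:2] for m in multi_scale_images(frame)])
    # -> [(360, 480), (240, 320), (120, 160)]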
[0059] The controller 121 then separates the areas of
interest from which the feature value of a vehicle is
extracted, from the areas of interest from which the
feature value of background is extracted, among the areas
of interest set to the frame and the multi-scale images,
using predetermined dictionaries (Step S1704). The
predetermined dictionaries contain the feature value of
background and the feature values of vehicles. The
controller 121 then establishes the area of interest from
which the feature value of a vehicle is extracted as the
detection area.
[0060] In this manner, for example, even when the apparent size of the vehicle included in the frame is so large that the vehicle does not fit inside of the area of interest, and the feature value extracted from the area of interest cannot be extracted as the feature value of a vehicle, the feature value can still be extracted as the feature value of a vehicle as long as the vehicle included in the multi-scale image fits inside of the area of interest. Therefore, the accuracy at which a vehicle included in a frame is detected can be improved.
Furthermore, even when the detection target vehicle V2 is
located far away, and the apparent size of the vehicle
included in the frame is small, thus preventing the feature
value of the vehicle from being extracted in the original
frame, the feature value of the vehicle included in the
multi-scale image can be extracted. Therefore, the
detection accuracy of a vehicle included in a frame can be
improved.
[0061] In the manner described above, with the traffic
information detection system according to the first
embodiment, a headway distance can be calculated even when
the entire vehicle does not fit inside of the frame.
[0062] Second Embodiment
This embodiment describes an example in which a plurality of dictionaries are read from the external storage device for
one frame, and the vehicle type of a vehicle is determined
based on the similarity between the feature value of the
vehicle included in the frame and the feature value
contained in each of the read dictionaries. In the
explanation hereunder, explanations of parts that are the
same as those according to the first embodiment will be
omitted.
[0063] FIG. 19 is a flowchart illustrating an exemplary
sequence of a headway distance calculation process
performed by the probe car according to a second embodiment
of the present invention. In the explanation hereunder,
steps that are the same as those illustrated in FIG. 6 are
given the same reference numerals in FIG. 19, and explanations thereof are omitted herein. In this embodiment, the controller 121 reads the dictionaries corresponding to all of the vehicle types to be determined from the external storage device 124.
[0064] Specifically, the controller 121 reads a
plurality of dictionaries (hereinafter, referred to as read
dictionaries) corresponding to all of the vehicle types to
be determined, from the external storage device 124 (Step
S1901). The controller 121 then calculates the similarity
between the feature value of the detection area and the feature value contained in one of the read dictionaries (Step S1902). The controller 121 then determines
whether the similarity between the feature value contained
in each of the read dictionaries and the feature value of
the detection area has been calculated. If the similarity
between the feature value contained in each of the read
dictionaries and the feature value of the detection area
has not been calculated, the controller 121 shifts the
process back to Step S1901, and reads another read
dictionary containing a feature value for which similarity
with the feature value of the detection area has not been
calculated.
[0065] If the similarity between the feature value
contained in every read dictionary and the feature value
of the detection area has been calculated, the controller
121 compares the similarities between the feature value of
the detection area and the feature values contained in the
respective read dictionaries (Step S1903). The controller
121 then determines the vehicle type corresponding to the
read dictionary containing the feature value that is most
similar to the feature value of the detection area as the
vehicle type of the vehicle in the frame (Step S1904).
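A minimal sketch of Steps S1901 to S1904 follows; cosine similarity is an assumed measure, as the patent does not name one, and the dictionary feature values are illustrative.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def determine_type_second_embodiment(area_feat, read_dicts, similarity=cosine):
        """Compare the detection area's feature value against every read
        dictionary, then return the vehicle type of the most similar one."""
        scores = {vt: similarity(area_feat, f) for vt, f in read_dicts.items()}
        return max(scores, key=scores.get)

    dicts = {"standard_sized": np.array([1.0, 0.1]),
             "large_sized": np.array([0.1, 1.0])}
    print(determine_type_second_embodiment(np.array([0.2, 0.9]), dicts))
    # -> large_sized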
[0066] In the manner described above, with the traffic information detection system according to the second embodiment, the same operations and effects achieved by the first embodiment can be achieved.
[0067] Third Embodiment
This embodiment describes an example in which an integrated dictionary containing a high-order feature value of a vehicle belonging to the vehicle types to be determined (e.g., the standard-sized vehicles and the large-sized vehicles) is stored, and the vehicle type of a vehicle included in a frame is determined based on the similarity between the feature value of the vehicle and the feature value contained in the read dictionary, if the similarity between the feature value of the vehicle and the high-order feature value contained in the integrated dictionary is equal to or higher than a predetermined threshold. In the explanation hereunder, explanations of parts that are the same as those according to the second embodiment will be omitted.
[0068] FIG. 20 is a flowchart illustrating an exemplary sequence of a headway distance calculation process performed by the probe car according to a third embodiment of the present invention. FIG. 21 is a schematic for explaining an exemplary headway distance calculation process performed by the probe car according to the third embodiment. In this embodiment, the external storage device 124 stores therein the integrated dictionary containing the high-order feature value of a vehicle belonging to the vehicle types to be determined.
[0069] Once the feature values of the areas of interest are extracted, the controller 121 reads the integrated dictionary from the external storage device 124 (Step S2001). The controller 121 then calculates the similarity
between the feature value of the detection area and the high-order feature value of the integrated dictionary (Step S2002). The controller 121 then selects, from the detection areas, a detection area whose feature value exhibits a similarity to the high-order feature value contained in the integrated dictionary that is equal to or higher than the predetermined threshold (Step S2003). The process then shifts to Step S1901 and thereafter, and the controller 121 determines the vehicle type of the vehicle included in the frame based on the similarity between the feature value of the selected detection area and the feature value contained in the read dictionary.
[0070] Among the plurality of detection areas, for a detection area whose feature value exhibits a similarity to the high-order feature value contained in the integrated dictionary that is lower than the predetermined threshold (that is, a detection area not selected), the controller 121 does not determine the vehicle type of the vehicle included in the frame based on the similarity between the feature value of the detection area and the feature value contained in the read dictionary. In this manner, when a vehicle included in a frame does not belong to a vehicle type for which the headway distance is to be calculated, reading of the read dictionary corresponding to that vehicle type is omitted. Therefore, the processing load required in determining the vehicle type can be reduced, and the processing time required in determining the vehicle type can also be reduced.
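A sketch of this pre-filter follows; the threshold, the cosine measure, and all data values are assumptions introduced for illustration.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def classify_with_prefilter(areas, integrated_feat, read_dicts,
                                threshold=0.4, similarity=cosine):
        """Steps S2002-S2003: pre-filter detection areas with the
        integrated dictionary; only the surviving areas go through the
        per-type dictionaries."""
        results = {}
        for area_id, feat in areas.items():
            if similarity(feat, integrated_feat) < threshold:
                continue                # skip: not a target vehicle type
            scores = {vt: similarity(feat, f) for vt, f in read_dicts.items()}
            results[area_id] = max(scores, key=scores.get)
        return results

    areas = {"area_2101": np.array([0.1, 1.0]), "area_x": np.array([-1.0, 0.2])}
    integrated = np.array([0.5, 0.5])
    dicts = {"standard_sized": np.array([1.0, 0.1]), "large_sized": np.array([0.1, 1.0])}
    print(classify_with_prefilter(areas, integrated, dicts))
    # -> {'area_2101': 'large_sized'}; area_x is filtered out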
[0071] For example, as illustrated in FIG. 21, the controller 121 selects the detection areas 2101 and 2102 from the plurality of detection areas included in the frame, because the similarities of the feature values of the detection areas 2101 and 2102 to the high-order feature value contained in the integrated dictionary are equal to or higher than the predetermined threshold. Of the detection areas 2101 and 2102, the controller 121 determines the vehicle type of the vehicle corresponding to the detection area 2102 (that is, the detection target vehicle V2) as the standard-sized vehicle, because the similarity of the feature value of the detection area 2102 to the feature value contained in the standard-sized vehicle dictionary is equal to or higher than a predetermined threshold. Likewise, the controller 121 determines the vehicle type of the vehicle corresponding to the detection area 2101 as the large-sized vehicle, because the similarity of the feature value of the detection area 2101 to the feature value contained in the large-sized vehicle dictionary is equal to or higher than a predetermined threshold. For the detection areas (not illustrated) other than the detection areas 2101 and 2102 included in the frame, the controller 121 prohibits determination of vehicle types based on the similarities between the feature values of those detection areas and the feature values contained in the standard-sized vehicle dictionary and the large-sized vehicle dictionary.
[0072] In the manner described above, with the traffic information detection system according to the third embodiment, when a vehicle included in the frame does not belong to a vehicle type for which the headway distance is to be calculated, the reading of the dictionary corresponding to that vehicle type is omitted. Therefore, the processing load required in determining the vehicle type can be reduced, and the processing time required in determining the vehicle type can also be reduced.
[0073] As explained above, according to the first to third embodiments, the headway distance can be calculated even when the entire vehicle is not included in the frame.
[0074] A computer program executed by the information processing unit 12 according to the embodiments is provided in a manner incorporated in a read-only memory (ROM) or the like in advance. The computer program executed by the information processing unit 12 according to the embodiments may be provided in a manner recorded in a computer-readable recording medium such as a compact disc read-only memory (CD-ROM), a flexible disk (FD), a compact disc recordable (CD-R), or a digital versatile disc (DVD), as a file in an installable or executable format.
[0075] Furthermore, the computer program executed by the information processing unit 12 according to the embodiments may be stored in a computer that is connected to a network such as the Internet, and made available for download over the network. The computer program executed by the information processing unit 12 according to the embodiments may also be provided or distributed over a network such as the Internet.
[0076] Some embodiments of the present invention are explained above, but these embodiments are provided by way of example only, and are not intended to limit the scope of the present invention in any way. These novel embodiments can also be implemented in different configurations, and various omissions, replacements, and modifications are possible within the scope not deviating from the essence of the present invention. These embodiments and modifications thereof fall within the scope and the essence of the present invention, and are included in the inventions defined by the appended claims and equivalents thereof.

Claims (12)

1. A vehicle-mounted device comprising:
a communication unit that receives a first image
acquired by capturing an image of a forward direction of a
first vehicle with an image capturing unit, the first
vehicle being a probe car running on a road to obtain
street traffic information, generating distance data that
is used in generating street traffic information based on a
headway distance, and transmitting the distance data to a
base station;
a storage that stores therein a dictionary provided
for each vehicle type, and containing a standard vehicle
length and a feature value of a vehicle belonging to the
vehicle type;
an extracting unit that extracts a feature value of a
second vehicle included in the first image;
a determining unit that reads the dictionary
corresponding to a vehicle type to be determined from the
storage, and determines a vehicle type of the second
vehicle based on a similarity between the feature value
contained in the read dictionary and the feature value of
the second vehicle; and
a calculator that obtains a clearance between a front
end of the first vehicle and a rear end of the second
vehicle based on the first image, calculates a length of
the second vehicle based on the vehicle type determined by
the determining unit and the standard vehicle length
belonging to the vehicle type contained in the dictionary
read from the storage, and calculates a sum of the
clearance and the length of the second vehicle as the
headway distance between the front end of the first vehicle
and a front end of the second vehicle.
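A minimal sketch of the calculation recited in claim 1, assuming hypothetical standard vehicle lengths (the specification does not give the values stored in the per-type dictionaries):

```python
# Hypothetical standard vehicle lengths in metres; the values actually
# stored in the per-type dictionaries are not given in the specification.
STANDARD_VEHICLE_LENGTH_M = {
    "standard-sized": 4.7,
    "large-sized": 12.0,
}

def headway_distance_m(clearance_m, vehicle_type):
    # headway distance = clearance (front end of the first vehicle to the
    # rear end of the second vehicle) + standard length of the determined type
    return clearance_m + STANDARD_VEHICLE_LENGTH_M[vehicle_type]

# e.g. an 18.0 m clearance to a standard-sized vehicle gives 22.7 m
print(headway_distance_m(18.0, "standard-sized"))
```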
2. The vehicle-mounted device according to claim 1, wherein the calculator acquires a ratio of the number of vehicles determined to be a large-sized vehicle to the number of second vehicles the feature values of which have been extracted by the extracting unit over a preset time, estimates a length of the second vehicle based on the ratio, and calculates a sum of the clearance and the estimated vehicle length as the headway distance.
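Claim 2 leaves the estimation method open, saying only that the length is estimated "based on the ratio". The sketch below assumes a ratio-weighted average of two hypothetical standard lengths, purely for illustration:

```python
def estimate_length_from_ratio(num_large, num_total,
                               large_len_m=12.0, standard_len_m=4.7):
    # Ratio of vehicles judged large-sized over the preset time
    ratio = num_large / num_total
    # Assumed estimator: expectation-style weighted average of two
    # hypothetical standard lengths
    return ratio * large_len_m + (1.0 - ratio) * standard_len_m

# e.g. 3 of 10 vehicles judged large-sized over the preset time:
estimated_len = estimate_length_from_ratio(3, 10)   # 6.89 m
headway_m = 15.0 + estimated_len                     # clearance + estimate
```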
3. The vehicle-mounted device according to claim 1, wherein the extracting unit sets a second area in the first image, the second area being an area that is larger than a first area and includes the first area, and the first area being an area from which the feature value of the second vehicle is extracted, detects an end of the second vehicle in the second area, corrects the first area in such a manner that an end of the first area is matched with the end of the second vehicle, and establishes a feature value of the corrected first area as the feature value of the second vehicle.
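Treating the two areas of claim 3 as pixel bounding boxes, a sketch of the correction might look as follows. The (left, top, right, bottom) representation, and correcting the right edge only, are simplifications for illustration; the claim does not fix which end is matched.

```python
def second_area(first_area, margin_px):
    # The second area is larger than, and contains, the first area
    left, top, right, bottom = first_area
    return (left - margin_px, top - margin_px,
            right + margin_px, bottom + margin_px)

def correct_first_area(first_area, vehicle_end_x):
    # Match an edge of the first area to the vehicle end detected
    # inside the second area, before the feature value is extracted
    left, top, _, bottom = first_area
    return (left, top, vehicle_end_x, bottom)
```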
4. The vehicle-mounted device according to any one of the preceding claims, wherein
the image capturing unit is a monocular camera,
the communication unit receives a second image acquired by capturing an image of a forward direction of the first vehicle with a stereo camera, and
the calculator obtains a clearance between the front end of the first vehicle and the rear end of the second vehicle based on the second image, and corrects the clearance obtained based on the first image using the clearance obtained based on the second image, when a difference between the clearance obtained based on the first image and the clearance obtained based on the second image is greater than a predetermined value.
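The monocular/stereo cross-check of claim 4 can be sketched as below. Adopting the stereo value outright is one assumed form of "correction", and the threshold value is hypothetical; the claim leaves both open.

```python
def corrected_clearance_m(mono_clearance_m, stereo_clearance_m,
                          max_diff_m=2.0):
    # When the monocular and stereo clearances disagree by more than the
    # predetermined value, correct the monocular value using the stereo one
    if abs(mono_clearance_m - stereo_clearance_m) > max_diff_m:
        return stereo_clearance_m
    return mono_clearance_m
```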
5. The vehicle-mounted device according to any one of the preceding claims, wherein the determining unit reads, from the storage, an integrated dictionary containing a high-order feature value of a vehicle for which the headway distance is to be detected, and when a similarity of the feature value of the second vehicle to the high-order feature value contained in the integrated dictionary is equal to or higher than a predetermined threshold, determines the vehicle type of the second vehicle based on a similarity between the feature value of the second vehicle and the feature value contained in the dictionary, and when the similarity of the feature value of the second vehicle to the high-order feature value contained in the integrated dictionary is lower than the predetermined threshold, prohibits the determination of the vehicle type of the second vehicle based on the similarity between the feature value of the second vehicle and the feature value contained in the dictionary.
6. The vehicle-mounted device according to any one of the preceding claims, wherein the first image is a motion image, and the determining unit reads a different dictionary from the storage for each frame of the motion image.
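Claim 6 reads a different dictionary for each frame of the motion image. Cycling through the vehicle types so that only one dictionary is resident per frame is one assumed reading policy, sketched below; the claim does not prescribe the selection order.

```python
from itertools import cycle

def dictionary_per_frame(frames, dictionary_names):
    # Pair each frame of the motion image with the next dictionary in
    # a round-robin over the vehicle types (an assumed policy)
    for frame, name in zip(frames, cycle(dictionary_names)):
        yield frame, name  # load the dictionary `name`, then classify
```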
7. A headway distance calculation method comprising:
receiving a first image acquired by capturing an image of a forward direction of a first vehicle with an image capturing unit, the first vehicle being a probe car running on a road to obtain street traffic information, generating distance data that is used in generating street traffic information based on a headway distance, and transmitting the distance data to a base station;
extracting a feature value of a second vehicle included in the first image;
reading, from a storage, a dictionary provided for each vehicle type and containing a standard vehicle length and a feature value of a vehicle belonging to the vehicle type to be determined;
determining a vehicle type of the second vehicle based on a similarity between the feature value contained in the read dictionary and the feature value of the second vehicle;
obtaining a clearance between the front end of the first vehicle and a rear end of the second vehicle based on the first image, and calculating a length of the second vehicle based on the determined vehicle type and the standard vehicle length belonging to the vehicle type contained in the dictionary read from the storage; and
calculating a sum of the clearance and the length of the second vehicle as the headway distance between the front end of the first vehicle and a front end of the second vehicle.
8. The method according to claim 7, further comprising:
obtaining a ratio of the number of vehicles determined to be a large-sized vehicle to the number of second vehicles the feature values of which have been extracted over a preset time; and
estimating a length of the second vehicle based on the ratio, and calculating a sum of the clearance and the estimated vehicle length as the headway distance.
9. The method according to claim 7 or claim 8, further comprising:
setting a second area in the first image, the second area being an area that is larger than a first area and includes the first area, and the first area being an area from which the feature value of the second vehicle is extracted;
detecting an end of the second vehicle in the second area;
correcting the first area in such a manner that an end of the first area is matched with the end of the second vehicle; and
establishing a feature value of the corrected first area as the feature value of the second vehicle.
10. The method according to any one of claims 7 to 9, wherein the image capturing unit is a monocular camera, the headway distance calculation method further comprising:
receiving a second image acquired by capturing an image of a forward direction of the first vehicle with a stereo camera;
obtaining a clearance between the front end of the first vehicle and the rear end of the second vehicle based on the second image; and
when a difference between the clearance obtained based on the first image and the clearance obtained based on the second image is greater than a predetermined value, correcting the clearance obtained based on the first image using the clearance obtained based on the second image.
11. The method according to any one of claims 7 to 10, further comprising:
reading, from the storage, an integrated dictionary containing a high-order feature value of a vehicle for which the headway distance is to be detected, wherein
the vehicle type of the second vehicle is determined based on a similarity between the feature value of the second vehicle and the feature value contained in the dictionary when a similarity of the feature value of the second vehicle to the high-order feature value contained in the integrated dictionary is equal to or higher than a predetermined threshold; and
the determination of the vehicle type of the second vehicle based on the similarity between the feature value of the second vehicle and the feature value contained in the dictionary is prohibited when the similarity of the feature value of the second vehicle to the high-order feature value contained in the integrated dictionary is lower than the predetermined threshold.
12. The method according to claim 9, wherein the first image is a motion image, and a different dictionary is read from the storage for each frame of the motion image.
AU2020200802A 2015-09-30 2020-02-04 Vehicle-mounted device and headway distance calculation method Active AU2020200802B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020200802A AU2020200802B2 (en) 2015-09-30 2020-02-04 Vehicle-mounted device and headway distance calculation method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2015-195319 2015-09-30
JP2015195319A JP6339058B2 (en) 2015-09-30 2015-09-30 On-vehicle equipment and head-to-head distance calculation method
PCT/JP2016/057031 WO2017056525A1 (en) 2015-09-30 2016-03-07 Vehicle-mounted device and headway distance calculation method
AU2016330733A AU2016330733A1 (en) 2015-09-30 2016-03-07 Vehicle-mounted device and headway distance calculation method
AU2020200802A AU2020200802B2 (en) 2015-09-30 2020-02-04 Vehicle-mounted device and headway distance calculation method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2016330733A Division AU2016330733A1 (en) 2015-09-30 2016-03-07 Vehicle-mounted device and headway distance calculation method

Publications (2)

Publication Number Publication Date
AU2020200802A1 AU2020200802A1 (en) 2020-02-20
AU2020200802B2 true AU2020200802B2 (en) 2022-01-27

Family

ID=58423025

Family Applications (2)

Application Number Title Priority Date Filing Date
AU2016330733A Abandoned AU2016330733A1 (en) 2015-09-30 2016-03-07 Vehicle-mounted device and headway distance calculation method
AU2020200802A Active AU2020200802B2 (en) 2015-09-30 2020-02-04 Vehicle-mounted device and headway distance calculation method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
AU2016330733A Abandoned AU2016330733A1 (en) 2015-09-30 2016-03-07 Vehicle-mounted device and headway distance calculation method

Country Status (3)

Country Link
JP (1) JP6339058B2 (en)
AU (2) AU2016330733A1 (en)
WO (1) WO2017056525A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6877636B2 (en) * 2018-04-23 2021-05-26 日立Astemo株式会社 In-vehicle camera device
US11443619B2 (en) 2018-05-15 2022-09-13 Kabushiki Kaisha Toshiba Vehicle recognition apparatus and vehicle recognition method
JP2020086735A (en) * 2018-11-21 2020-06-04 株式会社東芝 Traffic information acquisition system and traffic information acquisition method
US11702101B2 (en) 2020-02-28 2023-07-18 International Business Machines Corporation Automatic scenario generator using a computer for autonomous driving
US11814080B2 (en) 2020-02-28 2023-11-14 International Business Machines Corporation Autonomous driving evaluation using data analysis
US11644331B2 (en) 2020-02-28 2023-05-09 International Business Machines Corporation Probe data generating system for simulator
CN111489552B (en) * 2020-04-24 2022-04-22 科大讯飞股份有限公司 Method, device, equipment and storage medium for predicting headway
CN113936453B (en) * 2021-09-09 2022-08-19 上海宝康电子控制工程有限公司 Information identification method and system based on headway

Citations (2)

Publication number Priority date Publication date Assignee Title
US7782179B2 (en) * 2006-11-16 2010-08-24 Hitachi, Ltd. Obstacle detection apparatus
JP2011180934A (en) * 2010-03-03 2011-09-15 Ricoh Co Ltd Vehicle type discrimination device and drive support device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2004127104A (en) * 2002-10-04 2004-04-22 Ntt Data Corp Traffic information prediction system and program
JP5558238B2 (en) * 2010-07-14 2014-07-23 株式会社東芝 Vehicle interval detection system, vehicle interval detection method, and vehicle interval detection program
JP2014002534A (en) * 2012-06-18 2014-01-09 Toshiba Corp Vehicle type determination device and vehicle type determination method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US7782179B2 (en) * 2006-11-16 2010-08-24 Hitachi, Ltd. Obstacle detection apparatus
JP2011180934A (en) * 2010-03-03 2011-09-15 Ricoh Co Ltd Vehicle type discrimination device and drive support device

Also Published As

Publication number Publication date
AU2020200802A1 (en) 2020-02-20
JP6339058B2 (en) 2018-06-06
AU2016330733A1 (en) 2018-03-15
JP2017068712A (en) 2017-04-06
WO2017056525A1 (en) 2017-04-06

Similar Documents

Publication Publication Date Title
AU2020200802B2 (en) Vehicle-mounted device and headway distance calculation method
EP4109331A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
US10650271B2 (en) Image processing apparatus, imaging device, moving object device control system, and image processing method
US10331962B2 (en) Detecting device, detecting method, and program
US20170124725A1 (en) Image processing apparatus, imaging device, device control system, frequency distribution image generation method, and recording medium
EP3705371A1 (en) Obstacle detecting device
EP3147884B1 (en) Traffic-light recognition device and traffic-light recognition method
US20160253902A1 (en) Parked vehicle detection device, vehicle management system, and control method
CN102555940B (en) Driving supporting system, driving supporting program and driving supporting method
US10846546B2 (en) Traffic signal recognition device
EP2720193A2 (en) Method and system for detecting uneven road surface
US20180224296A1 (en) Image processing system and image processing method
EP3282389B1 (en) Image processing apparatus, image capturing apparatus, moving body apparatus control system, image processing method, and program
US20210003420A1 (en) Maintaining and Generating Digital Road Maps
US11514683B2 (en) Outside recognition apparatus for vehicle
JP2018048949A (en) Object recognition device
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN110929475B (en) Annotation of radar profiles of objects
KR20130128162A (en) Apparatus and method for detecting curve traffic lane using rio division
US11443619B2 (en) Vehicle recognition apparatus and vehicle recognition method
KR20170030936A (en) Distance measuring device for nearing vehicle
CN110570680A (en) Method and system for determining position of object using map information
KR101266623B1 (en) Method and apparatus for estimating of vehicle distance
JP7210157B2 (en) Fixed position stop control device
EP3287948B1 (en) Image processing apparatus, moving body apparatus control system, image processing method, and program

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)