CN113506264B - Road vehicle number identification method and device


Info

Publication number
CN113506264B
CN113506264B (application CN202110770253.7A)
Authority
CN
China
Prior art keywords
vehicles
area
road
image
far
Legal status
Active
Application number
CN202110770253.7A
Other languages
Chinese (zh)
Other versions
CN113506264A
Inventor
尚利宏 (Shang Lihong)
Current Assignee
Beihang University
Original Assignee
Beihang University
Application filed by Beihang University
Priority to CN202110770253.7A
Publication of CN113506264A
Application granted
Publication of CN113506264B


Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/90 Determination of colour characteristics
    • G06T2207/30242 Counting objects in image
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A road vehicle number identification method and apparatus are provided, including a method of estimating the number of vehicles in a road-specified area, wherein the road-specified area includes a near-end region and a far-end region. The method comprises: acquiring the number of vehicles in the near-end region and acquiring the average brightness of the image of the far-end region; calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the average brightness of the far-end region image; and estimating the number of vehicles in the road-specified area from the number of vehicles in the near-end region and the number of vehicles in the far-end region.

Description

Road vehicle number identification method and device
Technical Field
The application relates to intelligent transportation technology, and in particular to a method and device for accurately counting vehicles by fusing accurate near-end information with fuzzy far-end information.
Background
Real-time statistics of the number of vehicles on a road help reveal congestion conditions and can be used to optimize urban traffic scheduling and road planning.
Traditional methods for identifying the number of road vehicles mainly include embedding induction coils in the road area to be monitored, radar speed-measurement cameras, lidar coverage, and the like. Embedded coils and radar speed-measurement cameras can only count passing vehicles cumulatively. To count the total number of vehicles traveling on a specified road, passing vehicles must be counted separately at each entrance of the road and the counts subtracted. However, if the road is not closed (e.g., there are parking lots or residential compounds along the road), the data obtained in this way may fluctuate greatly and therefore be inaccurate. Covering a specified road with lidar can achieve relatively accurate vehicle counts, but at very high cost.
There are also prior-art techniques that capture road images with cameras and identify and count vehicles by machine vision. These encounter the following problems:
1. The length of the road between two traffic-light intersections in a city is typically over 400 meters. When an ordinary camera is used to capture road images, distant vehicles occupy too few pixels to be recognized. If a telephoto lens is used to photograph the distance, the field of view can hardly cover nearby vehicles, and the near part of the frame may be blurred. Chinese patent CN112364793A discloses a scheme for detecting vehicles over a wide range using a long-focus camera combined with a short-focus camera.
2. Cameras are generally mounted on traffic-light cantilevers or on portal frames above the lanes at intersections, at a height of no more than 7 meters. When shooting from this height, a distant rear vehicle is blocked by the vehicle in front of it, making it difficult to identify each vehicle in the captured image. Especially when the road is congested, the distance between front and rear vehicles is very short, and most of a rear vehicle in the captured image is blocked by the vehicle in front, making recognition difficult.
Disclosure of Invention
It is desirable to obtain the total number of vehicles over a road range that includes both the far-end and near-end regions, and to solve the problem of inaccurate road vehicle counting caused by the low recognition accuracy in the far-end region.
According to a first aspect of the present application, there is provided a first method of estimating the number of vehicles in a road-specified area, wherein the road-specified area includes a near-end region and a far-end region, the method comprising: acquiring the number of vehicles in the near-end region and acquiring the brightness of an image of the far-end region; calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the brightness of the far-end region image; and estimating the number of vehicles in the road-specified area from the number of vehicles in the near-end region and the number of vehicles in the far-end region.
According to the first method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a second method of estimating the number of vehicles in a road-specified area, wherein an image of the near-end region is acquired and the number of vehicles in the near-end region is identified from the acquired image; and/or the number of vehicles in the near-end region is acquired through radar, sensors arranged in the near-end region, and/or signals actively sent by vehicles in the near-end region.
According to one of the foregoing methods of the first aspect of the present application, there is provided a third method of estimating the number of vehicles in a road-specified area, wherein parameters f1' = (luminance of the vehicles in the near-end region image - luminance of the vehicle-free road in the near-end region image) * average vehicle length in the near-end region image / length of the near-end region image, and f2' = luminance of the vehicle-free road in the near-end region image, are obtained from the luminance of the vehicles in the near-end region image, the luminance of the vehicle-free road in the near-end region image, the average vehicle length in the near-end region image, and the length of the near-end region image; and the number of vehicles in the far-end region is obtained from the length of the far-end region image and (average luminance of the far-end region image - f2')/f1'.
According to one of the foregoing methods of the first aspect of the present application, there is provided a fourth method of estimating the number of vehicles in a road-specified area, wherein the parameter f1' is obtained from the number of vehicles in the images of one or more first regions obtained from the near-end region image and the luminance statistics of the images of the one or more first regions; and the number of vehicles in the far-end region is obtained from the luminance statistic of the far-end region image and the parameter f1'.
According to the fourth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a fifth method of estimating the number of vehicles in a road-specified area, wherein the parameters f1' and f2' are obtained from the number of vehicles in the near-end region image and the average luminance of the near-end region image according to: average luminance of the near-end region image = f1' * number of vehicles in the near-end region image + f2'; and the number of vehicles in the far-end region is obtained from the average luminance of the far-end region image and the parameters f1' and f2'.
According to the fourth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a sixth method of estimating the number of vehicles in a road-specified area, wherein the parameter f2' is a luminance statistic of the vehicle-free road based on the images of the one or more first regions; and the number of vehicles in the far-end region is obtained from the luminance statistic of the far-end region image and the parameters f1' and f2'.
According to the fourth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a seventh method of estimating the number of vehicles in a road-specified area, wherein the parameters f1' and f2' are obtained according to: luminance of the image of the first region = f1' * number of vehicles in the image of the first region + f2'; and the number of vehicles in the far-end region is obtained from the brightness of the far-end region image and the parameters f1' and f2'.
According to the fourth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided an eighth method of estimating the number of vehicles in a road-specified area, wherein the parameters f1' = (average luminance of the vehicles in the first-region image - average luminance of the vehicle-free road in the first-region image) * average vehicle length in the first-region image / length of the first-region image, and f2' = average luminance of the vehicle-free road in the first-region image; and the number of vehicles in the far-end region is obtained from the length of the far-end region image and (average luminance of the far-end region image - f2')/f1'.
According to the third method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a ninth method of estimating the number of vehicles in a road-specified area, wherein an image of the near-end region is acquired by a camera, and one or more of the luminance of the vehicles in the near-end region image, the luminance of the vehicle-free road in the near-end region image, the average vehicle length in the near-end region image, and the length of the near-end region image are obtained from the image of the near-end region; and/or one or more of these quantities are obtained in a laboratory or offline from an acquired image of the near-end region.
According to one of the fourth to eighth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a tenth method of estimating the number of vehicles in a road-specified area, wherein an image of the near-end region is acquired by a camera, and one or more of the luminance statistic of the vehicle images in the images of the one or more first regions, the luminance statistic of the vehicle-free road in the images of the one or more first regions, the average vehicle length in the images of the one or more first regions, and the length of the images of the one or more first regions are obtained from the image of the near-end region; and/or one or more of these quantities are obtained in a laboratory or offline.
According to one of the first to tenth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided an eleventh method of estimating the number of vehicles in a road-specified area, further comprising: calculating the brightness of the image of a specified area from the brightness or relative brightness of the pixels of that image; wherein the relative brightness of a pixel is the difference between the pixel brightness and the average brightness of the image of the road-specified area in the vehicle-free state, or the difference between the pixel brightness and the average brightness of an infrequently changing region in an image including the road-specified area; and wherein the pixel brightness is the brightness value of the pixel or a statistic of the brightness of all pixels within a specified window around the pixel.
According to one of the first to eleventh methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a twelfth method of estimating the number of vehicles in a road-specified area, further comprising: acquiring an image including the far-end region, and stretching the far-end region portion of that image according to a world coordinate system to obtain an image of the far-end region consistent with the shape of the far-end road region; and/or acquiring an image including the near-end region, and stretching the near-end region portion of that image according to a world coordinate system to obtain an image of the near-end region consistent with the shape of the near-end road region.
According to one of the first to twelfth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a thirteenth method of estimating the number of vehicles in a road-specified area, wherein calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the brightness of the far-end region image comprises: calculating the number of vehicles in the far-end region by a Kalman filtering method, wherein the number of vehicles in the near-end region and the brightness of the far-end region image are taken as the measurement Z for the Kalman filtering method, and the system state for Kalman filtering is X = [near-end vehicle count, far-end vehicle count, outflow flow rate, far-end to near-end flow rate, inflow flow rate], where a flow rate is a number of vehicles per unit time: the inflow flow rate is the number of vehicles entering the road-specified area per unit time, the outflow flow rate is the number of vehicles leaving the road-specified area per unit time, and the far-end to near-end flow rate is the number of vehicles entering the near-end region from the far-end region per unit time. The state transition matrix for Kalman filtering is

A = | 1 0 -dt  dt  0  |
    | 0 1  0  -dt  dt |
    | 0 0  1   0   0  |
    | 0 0  0   1   0  |
    | 0 0  0   0   1  |

where dt is the time interval between two successive iterations of the Kalman filtering; and wherein Z = H*X + v, with

H = | 1  0   0 0 0 |
    | 0  f1' 0 0 0 |

where v is the measurement error and f1' is a specified parameter (the second component of Z being the far-end image brightness with the specified parameter f2' subtracted, consistent with the linear model of formula (3) below).
According to the thirteenth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a fourteenth method of estimating the number of vehicles in a road-specified area, wherein the system state for Kalman filtering is X = [near-end vehicle count, far-end vehicle count, outflow flow rate, near-end to far-end flow rate, inflow flow rate], the near-end to far-end flow rate being the number of vehicles entering the far-end region from the near-end region per unit time; the state transition matrix for Kalman filtering is then

A = | 1 0  0  -dt  dt |
    | 0 1 -dt  dt  0  |
    | 0 0  1   0   0  |
    | 0 0  0   1   0  |
    | 0 0  0   0   1  |
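By way of illustration only, the matrices of the thirteenth method can be written out in code as follows. This is a minimal sketch: the dt and f1' values are assumed, and the second measurement component is taken to be the far-end image brightness minus f2', which is an assumption consistent with the linear model Lm = f1'*VC + f2' rather than a matrix reproduced from the application.

    import numpy as np

    # Illustrative sketch of the thirteenth method's matrices (dt and f1' assumed).
    # State order: X = [near-end count, far-end count, outflow rate,
    #                   far-to-near rate, inflow rate]
    dt, f1p = 1.0, 0.6
    A = np.array([
        [1.0, 0.0, -dt,  dt, 0.0],  # near-end count: gains far-to-near flow, loses outflow
        [0.0, 1.0, 0.0, -dt,  dt],  # far-end count: gains inflow, loses far-to-near flow
        [0.0, 0.0, 1.0, 0.0, 0.0],  # the three flow rates are modeled as constant
        [0.0, 0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, 1.0],
    ])
    H = np.array([
        [1.0, 0.0, 0.0, 0.0, 0.0],  # measurement 1: near-end vehicle count
        [0.0, f1p, 0.0, 0.0, 0.0],  # measurement 2: far-end brightness minus f2' (assumed)
    ])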
According to the twelfth or thirteenth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a fifteenth method of estimating the number of vehicles in a road-specified area, wherein the parameter f1' is a parameter acquired from one or more first regions of the near-end region.
According to the fifteenth method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a sixteenth method of estimating the number of vehicles in a road-specified area, wherein the parameter f1' is obtained from the number of vehicles in the images of the one or more first regions obtained from the near-end region image and the luminance statistics of those images; or the parameters are obtained from the number of vehicles in the near-end region image and the average luminance of the near-end region image according to: average luminance of the near-end region image = f1' * number of vehicles in the near-end region image + f2'; or according to: luminance of the first-region image = f1' * number of vehicles in the first-region image + f2'; or the parameters f1' = (average luminance of the vehicles in the first-region image - average luminance of the vehicle-free road in the first-region image) * average vehicle length in the first-region image / length of the first-region image, and f2' = average luminance of the vehicle-free road in the first-region image.
According to one of the twelfth to sixteenth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a seventeenth method of estimating the number of vehicles in a road-specified area, wherein the number of vehicles in the near-end region and the brightness difference of the far-end region image are taken as the measurement Z for the Kalman filtering method, with

H = | 1  0   0 0 0 |
    | 0  f1D 0 0 0 |

where f1D is a specified parameter.
According to one of the twelfth to sixteenth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided an eighteenth method of estimating the number of vehicles in a road-specified area, wherein the number of vehicles in the near-end region, the average luminance of the far-end region image, and the average luminance difference of the far-end region image are taken as the measurement Z for the Kalman filtering method, with

H = | 1  0   0 0 0 |
    | 0  f1' 0 0 0 |
    | 0  f1D 0 0 0 |

where f1' and f1D are specified parameters.
According to the tenth or eleventh method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a nineteenth method of estimating the number of vehicles in a road-specified area, wherein the luminance used in acquiring the parameter f1' is replaced with the relative luminance, and the parameter f1D is obtained in the same manner as the parameter f1'.
According to one of the twelfth to nineteenth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a twentieth method of estimating the number of vehicles in a road-specified area, wherein calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the average brightness of the far-end region image using the Kalman filtering method comprises: obtaining, from the previous-round system state X(k-1), the current-round prior estimate X_ of the system state X(k) and its covariance P_:
X_ = A*X(k-1);
P_ = A*P(k-1)*A' + Q, where P(k-1) is the covariance matrix of the round k-1 state and Q is the process-noise covariance matrix; obtaining the number of vehicles in the near-end region and the brightness of the far-end region image as the current-round measurement Z(k); calculating the Kalman gain Kg = P_*H'/(H*P_*H' + R), where H' denotes the transpose of the matrix H, "/" denotes matrix division, and R is the covariance matrix of the measurement error; and updating the current-round system state X(k) and its covariance P(k):
X(k) = X_ + Kg*(Z(k) - H*X_);
P(k) = (I - Kg*H)*P_, where I is the identity matrix.
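A minimal numpy sketch of one such iteration is given below; the choices of Q, R, and the initial covariance are left to the caller, and the matrix division of the twentieth method is realized with an explicit inverse.

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update round, as described in the twentieth method above."""
        # Predict: prior estimate X_ and its covariance P_
        x_ = A @ x
        P_ = A @ P @ A.T + Q
        # Update: Kalman gain Kg = P_*H'/(H*P_*H' + R), then corrected state/covariance
        Kg = P_ @ H.T @ np.linalg.inv(H @ P_ @ H.T + R)
        x = x_ + Kg @ (z - H @ x_)
        P = (np.eye(len(x)) - Kg @ H) @ P_
        return x, P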
According to one of the twelfth to twentieth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a twenty-first method of estimating the number of vehicles in a road-specified area, further comprising: calculating a parameter from the image of the near-end region, wherein the parameter f1' is updated with the parameter calculated from the near-end region image every N iterations of the Kalman filtering; or, after every N iterations of the Kalman filtering, if the difference between the parameter calculated from the near-end region image and the parameter f1' currently used in the Kalman filtering exceeds a threshold, the parameter f1' is updated with the calculated parameter.
According to the twenty-first method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a twenty-second method of estimating the number of vehicles in a road-specified area, further comprising: calculating a parameter f1D' from the image of the near-end region, wherein the parameter f1D is updated with the parameter f1D' calculated from the near-end region image every N iterations of the Kalman filtering; or, after every N iterations of the Kalman filtering, if the difference between the parameter f1D' calculated from the near-end region image and the parameter f1D currently used in the Kalman filtering exceeds a threshold, the parameter f1D is updated with the calculated parameter f1D'.
According to the first or second method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a twenty-third method of estimating the number of vehicles in a road-specified area, further comprising: processing the brightness of the far-end region image with a neural network comprising multi-layer convolutional neural networks and fully connected layers to obtain the number of vehicles in the far-end region; and processing the number of vehicles in the near-end region and the number of vehicles in the far-end region with a neural network comprising multi-layer stateful long short-term memory networks and fully connected layers to obtain the number of vehicles in the road-specified area.
According to the twenty-third method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a twenty-fourth method of estimating the number of vehicles in a road-specified area, further comprising: processing the brightness of the far-end region image and the image of the near-end region with a neural network comprising multiple convolutional neural networks and fully connected layers to calculate the number of vehicles in the far-end region.
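By way of illustration only, the following PyTorch sketch shows one possible shape for the two networks of the twenty-third method; the layer sizes, the pooling, and the count-regression heads are assumptions, not architectures specified by the application.

    import torch
    import torch.nn as nn

    class FarEndCounter(nn.Module):
        """CNN + fully connected layer: far-end luminance map -> vehicle count."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
            )
            self.head = nn.Linear(32 * 4 * 4, 1)

        def forward(self, luminance):               # (B, 1, H, W)
            return self.head(self.features(luminance).flatten(1))

    class RoadCounter(nn.Module):
        """Stateful LSTM + fully connected layer fusing the two counts over time."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, num_layers=2)
            self.head = nn.Linear(hidden, 1)
            self.state = None                        # carried across calls ("stateful")

        def forward(self, near_cnt, far_cnt):        # one time step, batch size 1
            x = torch.tensor([[[near_cnt, far_cnt]]], dtype=torch.float32)
            out, self.state = self.lstm(x, self.state)
            return self.head(out[-1])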
According to the twenty-second or twenty-third method of estimating the number of vehicles in a road-specified area of the first aspect of the present application, there is provided a twenty-fifth method of estimating the number of vehicles in a road-specified area, further comprising: preprocessing the far-end region image to annotate the road region, the near-end region, and/or the far-end region in the far-end region image.
According to one of the first to twenty-fifth methods of estimating the number of vehicles in the road-specified area of the first aspect of the present application, there is provided a twenty-sixth method of estimating the number of vehicles in a road-specified area, wherein: the near-end region and the far-end region each comprise a positive integer number of lanes, and the lanes comprised by the near-end region are identical to the lanes comprised by the far-end region; and each lane comprised by the near-end and far-end regions covers a full lane width.
According to a second aspect of the present application, there is provided a first method of estimating the number of vehicles in a road-specified area according to the second aspect of the present application, the method comprising: acquiring the brightness of an image of the road-specified area; and estimating the number of vehicles in the road-specified area from the brightness of the road-specified area.
According to a third aspect of the present application, there is provided a first method of estimating the number of vehicles in a road-specified area according to the third aspect of the present application, wherein the road-specified area includes a near-end region and a far-end region, the method comprising: acquiring the number of vehicles in the near-end region and acquiring an image of the far-end region; processing the image of the far-end region with a multi-layer convolutional neural network to obtain the number of vehicles in the far-end region; and obtaining the number of vehicles in the road-specified area by processing the number of vehicles in the near-end region and the number of vehicles in the far-end region with a neural network comprising multi-layer stateful long short-term memory networks and fully connected layers, or with a neural network comprising a self-attention network and fully connected layers.
According to the first method of estimating the number of vehicles in a road-specified area of the third aspect of the present application, there is provided a second method of estimating the number of vehicles in a road-specified area according to the third aspect, wherein the number of vehicles in the near-end region is acquired and an image of the far-end region is acquired; and the image of the far-end region and the image of the near-end region are processed with a neural network comprising multi-layer convolutional neural networks and fully connected layers to obtain the number of vehicles in the far-end region.
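For the self-attention alternative of the third aspect, a comparable sketch (hyperparameters again assumed, not taken from the application) embeds the per-step near-end and far-end counts over a time window, applies self-attention, and reduces the result to a single count:

    import torch
    import torch.nn as nn

    class AttnFusion(nn.Module):
        """Self-attention network + fully connected layer over count sequences."""
        def __init__(self, d=16):
            super().__init__()
            self.embed = nn.Linear(2, d)
            self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=2, batch_first=True)
            self.head = nn.Linear(d, 1)

        def forward(self, counts):            # (B, steps, 2): [nearCnt, farCnt] per step
            x = self.embed(counts)
            x, _ = self.attn(x, x, x)         # self-attention over the time steps
            return self.head(x.mean(dim=1))   # pooled -> road-area vehicle count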
According to a fourth aspect of the present application, there is provided an information processing apparatus comprising a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements one of the methods of estimating the number of vehicles in a road-specified area according to the first, second, and third aspects of the present application.
Drawings
The application, as well as a preferred mode of use and further objectives and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates a schematic diagram of a vehicle number identification system in accordance with an embodiment of the present application;
FIG. 2A illustrates a schematic view of a road with a vehicle in accordance with an embodiment of the application;
FIG. 2B illustrates an image of the road with vehicle of FIG. 2A acquired by a camera;
FIG. 2C illustrates a schematic view of a roadway without a vehicle according to an embodiment of the present application;
FIGS. 3A-3C illustrate system states for Kalman filtering according to an embodiment of the application;
FIG. 4 illustrates a flow chart of a method for calculating the total number of vehicles in a target road area by fusing, with Kalman filtering, the measured near-end vehicle count nearCnt and the luminance Lm of the far-end region, in accordance with an embodiment of the present application;
FIG. 5 illustrates a graph of estimated road area vehicle number over time according to an embodiment of the application; and
fig. 6 illustrates a block diagram of estimating the number of vehicles in a road area using a deep neural network, according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
Fig. 1 shows a schematic diagram of a vehicle number identification system according to an embodiment of the present application.
Fig. 1 shows a roadway 120 and a post 100 positioned beside it. The post 100 is, for example, a traffic-light pole at an intersection or a portal frame that spans or partially spans the roadway. A camera 102 is secured to the post 100, for example on a rail at its top. The area of the road 120 in which the number of vehicles is to be identified includes a proximal region 122 and a distal region 124, distinguished by their distance from the post 100 or camera 102: the distal region 124 is farther from the post 100 than the proximal region 122. By way of example, vehicles on the road 120 approach the post 100 from the distal region 124 toward the proximal region 122 and then leave the proximal region to exit the area of the road 120 in which the number of vehicles is to be identified. In the example of Fig. 1, the proximal region 122 and the distal region 124 partially overlap. Alternatively, according to embodiments of the present application, the proximal region 122 and the distal region 124 may be adjacent or non-adjacent (separated by some spacing).
The camera 102 includes, for example, a plurality of cameras that capture video or images of the proximal region 122 and the distal region 124. For example, the camera 102 includes a telephoto camera for capturing the distal region 124 and an ordinary camera for capturing the proximal region 122.
As yet another example, the camera 102 includes a plurality of identical or different cameras. One or more cameras capture images of the proximal region 122 and one or more cameras capture images of the distal region 124.
As yet another example, camera 102 includes one or more infrared or thermal imaging cameras (hereinafter collectively referred to as infrared cameras) for acquiring infrared, luminance or temperature maps (hereinafter collectively referred to as infrared images) of each of proximal region 122 and/or distal region 124. When the road color is close to the vehicle color, the infrared image collected by the infrared camera can more effectively distinguish the road from the vehicle on the road.
The vehicle number identification system according to an embodiment of the application further includes, for example, an information processing apparatus (not shown) and/or a communication device (not shown). The information processing apparatus acquires the images of the proximal region 122 and/or the distal region 124 captured by the camera 102, extracts image features, and implements the method of identifying the number of road vehicles provided according to embodiments of the present application (described in detail later) to obtain the number of vehicles in the area of the road 120 in which the number of vehicles is to be identified. The information processing apparatus is, for example, a computer, a server, or an embedded computing device. The communication device couples the information processing apparatus and/or the camera 102 to a network or the Internet. The method for identifying the number of road vehicles provided by embodiments of the application may also be implemented on a cloud computing platform.
For the proximal region 122, each vehicle can be accurately identified from the image acquired by the camera 102 using prior-art techniques, and the number of vehicles (denoted nearCnt) obtained.
For the distal region 124, even though a clear image can be obtained with a telephoto camera, and an image of the complete distal region 124 can be obtained by stitching images taken by, for example, several telephoto cameras, vehicles in the distal region 124 closer to the camera 102 will block the vehicles behind them (relatively far from the camera 102) because of the limited height of the camera 102. As a result, occluded vehicles occupy few pixels in the image of the distal region 124, and it is difficult for prior-art image recognition techniques to accurately identify the number of vehicles in that image.
According to an embodiment of the present application, the number of vehicles in a road-specified area is estimated using a method based on image brightness. A specified area of a road is, for example, the road area entering a certain traffic intersection; traffic lights at the intersection control the passage of vehicles within the road area, so the number of vehicles within it is affected by the control strategy of the traffic lights.
In the real world, objects including roads and vehicles emit radiation (through emitted light, reflected light, and/or their temperature); the radiant flux emitted per unit surface area and per unit solid angle perpendicular to the direction of propagation is called radiance. Illuminance and luminance (hereinafter collectively referred to as brightness, for simplicity) describe the amount of optical radiation as perceived by the human eye. A camera capturing an image effectively measures the optical radiation of the various areas of the real world. The brightness of an image, or of its pixels, can be expressed in various ways, for example as the gray values of a grayscale map, and the various color models each carry a brightness component. In the present application, the measurement of infrared light in an image acquired by an infrared camera is also referred to as brightness.
The inventors realized that vehicles on a road are in a driving state, so the heat generated by their engines makes the radiation of a vehicle significantly different from that of the road. The number or density of vehicles on the road directly affects the radiation of the portions of the road where vehicles are present, and this difference in radiation is useful for identifying the number of vehicles on the road. Accordingly, brightness measurements from images obtained by cameras or other imaging devices, reflecting the radiant energy of the road, can identify the number of vehicles on the road.
Fig. 2A shows a schematic diagram of a road with a vehicle according to an embodiment of the application.
Referring to FIG. 2A, there are a plurality of vehicles (240, 242) on the real world road 220. In fig. 2A, rectangular boxes (e.g., rectangular boxes 240, 242) represent vehicles. Vehicles (240, 242) occupy a portion of the area of roadway 220.
The average radiance of the areas on the real-world road 220 where vehicles are present is denoted LmV. It will be appreciated that the average vehicle radiance LmV also indicates the average radiance of vehicles within a specified area (e.g., the near-end and/or far-end region) of the roadway 220 in which the number of vehicles is to be measured.
The vehicles on the real-world road 220 have an average length (denoted LenV). There is a space (also referred to as a pitch) between vehicles on the road 220, and the average pitch between them is denoted Gap. It will be appreciated that the average vehicle length LenV and the average pitch Gap may likewise refer to the average length and average pitch of vehicles within a specified area (e.g., the near-end and/or far-end region) of the roadway 220 in which the number of vehicles is to be measured.
Fig. 2B shows an image of the road with vehicle of fig. 2A acquired by a camera. The image of fig. 2B is acquired by, for example, camera 102 of fig. 1.
Due to the presence of perspective and lens distortion, the shape of the image of the road (and the vehicle thereon) shown in fig. 2B is different from the road (and the vehicle thereon) shown in fig. 2A. Vehicles 240 and 242 of fig. 2A are also shown in fig. 2B.
The camera 102 is typically fixed, or captures road images at a limited or known set of angles. Given the pose and parameters of the camera 102, each real-world plane captured by the camera 102 corresponds to a transformation matrix that converts the world coordinates of points on that plane into uniquely determined coordinates in the captured image. The road 220 lies in a real-world plane with such a corresponding transformation matrix. The mapping between world coordinates in the real environment and pixel coordinates in the captured image is obtained by calibrating the camera.
According to an embodiment of the application, the region of the target road is extracted from the image acquired by the camera 102, and the road region in the image is stretched to obtain a road image consistent with the shape of the real-world road. The stretching approximately follows the transformation matrix of the real-world plane in which the road lies. Because vehicles on the road have a certain height, i.e., the vehicles and the road do not lie in the same real-world plane, the shapes of the vehicles in the stretched road image differ from those of the real-world vehicles, but this difference does not affect the implementation of the technical scheme of the application.
In the stretched road-area image captured by the camera 102, the brightness of each pixel serves as a measure of the radiance of the real-world area to which the pixel corresponds. Hereinafter, unless otherwise specified, the images acquired by the cameras refer to the stretched images; luminance information calculated from stretched images better reflects the measurement of real-world radiance.
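As a minimal illustration of this stretching step (the corner coordinates and output size below are assumed for the example, not taken from the application), OpenCV's perspective transform can map the four corners of the road region in the image onto a rectangle whose proportions match the real-world road:

    import cv2
    import numpy as np

    # Assumed corner correspondences: road corners in the camera image mapped
    # to a top-down rectangle (width 300 px, length 1000 px).
    src = np.float32([[420, 310], [860, 310], [1270, 720], [10, 720]])
    dst = np.float32([[0, 0], [300, 0], [300, 1000], [0, 1000]])

    M = cv2.getPerspectiveTransform(src, dst)

    def stretch_road(frame):
        """Warp the road region to a shape consistent with the real-world road."""
        return cv2.warpPerspective(frame, M, (300, 1000))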
In the image of the road 220 captured by the camera 102, for example, the image area corresponding to the vehicle 240 has brightness. It will be appreciated that averaging the brightness of all pixels in the image of the roadway 220 corresponding to, for example, the area of the vehicle 240 results in an average brightness of the area of the vehicle 240 of the image (reflecting the average radiance of the real world vehicle 240 area).
In some cases there is no vehicle on the road, or a partial area of the road is free of vehicles. Fig. 2C shows a schematic view of a road without vehicles according to an embodiment of the application. In Fig. 2C there is no vehicle on road 270. The vehicle-free road 270 has an average radiance, denoted LmL. The average luminance obtained from the image of the vehicle-free road 270 acquired by the camera 102 serves as a measure of the average radiance LmL of the vehicle-free road. It will be appreciated that in Fig. 2C the vehicle-free road average radiance LmL covers only the area of the road 270; the radiance of areas other than the road 270 (e.g., sidewalks, sky, roadside buildings) is not included.
Thus, if an area of the road has an average radiance Lm, then
Lm=(LmV*LenV+LmL*Gap)/(LenV+Gap) (1)
where LmV is the average radiance of the vehicles in the area, LenV is the average vehicle length in the area, LmL is the average radiance of the vehicle-free road in the area, and Gap is the average distance between vehicles in the area. The number of vehicles VC within a specified road length Dist of the area is
VC=Dist/(LenV+Gap) (2)
Thus, according to the formulas (1) and (2)
Lm=(LmV-LmL)*LenV/Dist*VC+LmL=f1*VC+f2 (3)
where f1=(LmV-LmL)*LenV/Dist and f2=LmL (4)
It can be seen that the average radiance Lm of a road area is linearly related to the number of vehicles VC in that road area. More generally, in a road area composed of multiple sub-areas, the total radiance is approximately linearly related to the total number of vehicles in the road area to be identified. It will be appreciated that the relationship between road average radiance and vehicle count expressed by formulas (1) to (4) applies to both single-lane and multi-lane roads.
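A short numeric check of formulas (1) to (4), with assumed radiance and geometry values (not from the application), shows the linear relationship and its inversion:

    # Toy check of formulas (1)-(4): Lm rises linearly with the vehicle count VC.
    LmV, LmL = 180.0, 120.0   # assumed radiance of vehicles / vehicle-free road
    LenV, Dist = 4.5, 450.0   # assumed average vehicle length and region length (m)

    f1 = (LmV - LmL) * LenV / Dist   # formula (4)
    f2 = LmL

    def vehicle_count(Lm):
        """Invert Lm = f1*VC + f2 to estimate the vehicle count."""
        return (Lm - f2) / f1

    for VC in (0, 20, 50):
        Gap = Dist / VC - LenV if VC else float("inf")  # from formula (2)
        Lm = f1 * VC + f2                               # formula (3)
        print(f"VC={VC:3d}  Gap={Gap:7.2f} m  Lm={Lm:6.2f}  recovered={vehicle_count(Lm):5.1f}")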
In one embodiment, the parameters f1 and f2 of the near-end and far-end regions of the road are considered the same or approximately the same, as are the respective vehicle densities, so the parameters obtained from the near-end region can be used to estimate the number of vehicles in the far-end region. Further, the near-end and far-end regions are selected in the same direction or the same lane of the road: within the same lane, traffic passes through the far-end region and then the near-end region in succession (or vice versa, depending on the direction of travel), so the respective parameters f1 and f2 of the two regions agree to a higher degree. Alternatively or additionally, the near-end and far-end regions are each selected to include at least one lane, at the granularity of whole lanes, to avoid processing lanes of incomplete width in the near-end and/or far-end region images.
According to an embodiment of the application, the brightness of the image of a road area captured by the camera is used as a measurement or estimate of the radiance of that road area, so that the number of vehicles in the area is estimated from the brightness of its image. Note that, because of distortion and perspective, the shape of the road area shown in the image differs from the real-world road area, so the image needs to be stretched into the same shape as the real-world road area. The stretching method has been described above; any prior-art stretching method, or one produced in the future, may be applied, and the embodiments of the present application are not limited in this respect. The luminance/average luminance of the stretched road-area image then replaces the variables LmV, LmL, and Lm in formulas (3) and (4). It will be appreciated that if the camera is positioned such that the image of the road area already has the same or substantially the same shape as the corresponding road area, stretching is unnecessary, and the brightness/average brightness for the variables LmV, LmL, and Lm in formulas (3) and (4) is obtained from the unstretched image.
From formulas (2) to (4), VC = (Lm - f2)/f1, which yields an estimate of the number of vehicles in the road-specified area including the near-end and far-end regions.
It will be appreciated that image quantities may stand in for the real-world radiances: the luminance LLmL of the vehicle-free road obtained from the image of the road area replaces the average radiance LmL, and LLm, the average luminance of the specified road area image (including the near-end and far-end regions), replaces the average radiance Lm. The vehicle areas in the image may be identified in a prior-art manner (e.g., pattern recognition, machine learning) to obtain the average vehicle length LLenV in the image, the average vehicle pitch LGap in the image, the length LDist of the specified road area (including the near-end and far-end regions) in the image, and the average luminance LLmV of the vehicle-area image, the latter replacing the average radiance LmV.
Thus LVC = LDist/(LLenV + LGap), where LVC is the number of vehicles within the image of the specified road area; and f1' = (LLmV - LLmL)*LLenV/LDist, f2' = LLmL, where LLmV is the average luminance of the vehicle images in the specified road area image with vehicles, LLmL is the average luminance of the specified road area image without vehicles, LLenV is the average vehicle length in the specified road area image, and LDist is the length of the specified road area image.
Then LVC = (LLm - f2')/f1', where LLm is the average luminance of the specified road area image. The number of vehicles LVC in the specified road area is thus obtained from the image of the specified road area (its average luminance LLm, and the parameters f1' and f2' obtained therefrom).
According to a further embodiment of the application, for the near-end region, the number of vehicles and the parameters f1' and f2' are obtained in a prior-art manner, while for the far-end region a measurement of its average radiance is obtained from its image, and the number of vehicles in the far-end region is obtained from LVC = (LLm - f2')/f1' using the parameters f1' and f2' obtained from the near-end region. For example, the far-end vehicle count LVC is obtained from LVC = (LLm - f2')/f1' based on the parameters f1' and f2' obtained from the near-end region image and the average luminance LLm of the far-end region image. The vehicle counts of the near-end and far-end regions are then added to obtain the number of vehicles on the road.
According to still another embodiment of the present application, f1' and f2' are found from values of LLm and LVC obtained from the image, according to LLm = f1'*LVC + f2'. For example, the image of the near-end region of a road offers higher definition: multiple regions are partitioned from the near-end region, and corresponding LLm(i) and LVC(i) are derived from the image of each region, where i indexes the regions partitioned from the near-end region. From the different values of i (the partitioned regions), f1' and f2' are solved for or fitted. The resulting parameters f1' and f2' are then applied to estimate the number of vehicles in the far-end region: LLm(far), the average luminance of the far-end region, is obtained from the far-end region image, and the far-end vehicle count is LVC(far) = (LLm(far) - f2')/f1'.
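A minimal sketch of this fitting step, with illustrative numbers (the names LLm_i and LVC_i are ad hoc), uses a least-squares line fit:

    import numpy as np

    # LLm_i: average luminance of each partitioned near-end sub-region image.
    # LVC_i: vehicle count identified in each sub-region by prior-art detection.
    LLm_i = np.array([121.0, 133.5, 146.8, 158.9])
    LVC_i = np.array([2, 4, 6, 8])

    f1p, f2p = np.polyfit(LVC_i, LLm_i, deg=1)   # fit LLm = f1'*LVC + f2'

    def far_end_count(LLm_far):
        """Estimate the far-end vehicle count from its average image luminance."""
        return (LLm_far - f2p) / f1p

    print(far_end_count(150.0))  # about 6.5 vehicles for this assumed fit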
In one or more of the above processes, LLm, LLmV, and LLmL each denote the average brightness of the image of the respective specified area. It will be appreciated that replacing the average luminance with the total luminance over all pixels of the area image also yields the desired parameters f1' and f2', and thus an estimate of the number of vehicles in the area.
According to embodiments of the application, the average brightness of the road-area image acquired by the camera is used to estimate the number of vehicles in the road area. Compared with vehicle recognition based on feature analysis, the average brightness information of an image is easier to acquire and is less affected by vehicle occlusion in the far-end region of the road, so the counting accuracy for the far-end region is better than that of feature-analysis-based recognition.
The brightness of an image correlates strongly with the colors of the objects in it, and vehicle colors on roads are diverse. For example, in an image, white (or light-colored) vehicle regions typically exhibit a greater brightness than road regions, while black (or dark-colored) vehicle regions typically exhibit a lower brightness than road regions. The average luminance of a road area in the image may therefore be affected by the colors of the vehicles on it; the average luminance over white and black vehicle regions together may even approach the luminance of the vehicle-free road, biasing the vehicle count estimated from luminance. According to a further embodiment of the application, the average luminance of the road-area image is therefore calculated from the relative luminance of the pixels, namely the (absolute) difference between the pixel luminance and the average luminance of the vehicle-free road area, rather than from the raw pixel luminance, to measure or estimate LmV, LmL, and/or Lm.
The image of the vehicle-free road area can be selected from the images acquired by the camera, or patches of vehicle-free road can be selected from several images containing vehicles and stitched into the required image of the complete vehicle-free road area. The brightness (average brightness) of the vehicle-free road image is then obtained. The relative brightness of a pixel is obtained by taking the absolute value or the square of the difference between the pixel brightness and the average brightness of the vehicle-free road image. The relative brightnesses of all pixels of the road-area image are then averaged to obtain the relative brightness of the road area, which is used to estimate LmV and/or Lm.
According to yet another embodiment of the application, noise that may be present in the image is further handled when calculating the relative brightness of the pixels. Foreign objects on a vehicle, people in a vehicle, or flaws in the camera may all introduce noise into the road-area image. To calculate the relative luminance of a pixel (relative to the average luminance of the vehicle-free road image), the average value or a statistic of the luminance within a small window around the pixel (several adjacent pixels) is used as the luminance of that pixel, and the relative luminance with respect to the average luminance of the vehicle-free road image is then calculated. For example, a 3x3, 4x4, or 8x8 window around the pixel is selected, and the average brightness of all pixels in the window is used as the brightness of the pixel.
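A minimal sketch of this computation (window size and inputs are illustrative) uses a box filter for the per-pixel window average:

    import cv2
    import numpy as np

    def relative_brightness(road, empty_road, win=3):
        """Average relative brightness of a road-area image, with window smoothing.
        `road` and `empty_road` are grayscale images of the stretched road region."""
        smoothed = cv2.blur(road.astype(np.float32), (win, win))  # per-pixel window average
        base = float(empty_road.mean())                           # vehicle-free road brightness
        rel = np.abs(smoothed - base)                             # relative brightness per pixel
        return rel.mean()                                         # average over the area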
When the average brightness of the road-area image is obtained by the camera under various environments, the absolute brightness of the captured image may change with the external illumination conditions. To avoid introducing large errors, according to one embodiment of the application, the average brightness of the corresponding vehicle-free road image is acquired separately at different times or under different external environments, for calculating the relative brightness of the pixels. According to a further embodiment of the application, infrequently changing regions are annotated in the image. Infrequently changing regions include, for example, areas other than the roadway, such as the sky or buildings on both sides of the road. Traffic does not normally pass through such regions, so their image content/brightness does not change frequently with the traffic flow. The average luminance of the infrequently changing region of the image is used instead of the average luminance of the vehicle-free road. In an image, pixel relative luminance = pixel luminance / average pixel luminance of the infrequently changing region, where the pixel luminance is the absolute luminance of the pixel or an average or statistic of the luminances of several pixels adjacent to it. The average of the relative luminances of the pixels of an image area is then used as the average relative luminance of that area.
As still another embodiment, since a moving vehicle is generally significantly warmer than the road surface, in an infrared image obtained with an infrared camera the average relative brightness of the vehicle regions is significantly higher than that of the road region, which improves the accuracy of identifying the number of vehicles on the road with the method above.
The brightness of the image of the road area can be obtained in real time.
Luminance information for estimating parameters such as LmV, LmL, and LenV is obtained from the road image, and parameters such as Dist can also be estimated from it. Alternatively, when the camera is fixedly mounted, the LenV and/or Dist parameters are known or pre-specified and need not be identified from the road image.
Thus, parameters f1 and f2 are estimated from the road image for calculating the number of road vehicles. They can be identified from each road image, and they reflect the relationship between the number of vehicles on the road and the luminance of the road image. Under similar external conditions, road images taken at different times may share the same or similar parameters f1 and f2. For example, parameters f1 and f2 are obtained in a laboratory or offline from road images of a morning rush hour and then applied to road images acquired in the field: the number of vehicles on the current road is obtained from the average luminance of the current morning-rush-hour road image (as the average radiance Lm of the road area). It will be appreciated that parameters such as LmV, LmL, and LenV then need not be acquired on site; only the average brightness of the road image (as Lm) together with the parameters f1 and f2 is needed to obtain the current vehicle count. For higher precision, the laboratory or offline parameters f1 and f2 may be prepared per weather condition (sunny, cloudy, rainy), time period, and so on, so that the applicable pair is selected on site according to the current time period and/or weather. Where precision requirements are low, a single pair (or a few pairs) of parameters f1 and f2 may serve for many or all external conditions.
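By way of a hedged illustration, fitting f1 and f2 offline and applying them in the field could look as follows; the linear relationship Lm = f1 × count + f2 follows the description above, while the sample values and the condition keys are purely illustrative assumptions.

import numpy as np

def fit_f1_f2(vehicle_counts, luminances):
    # Least-squares fit of Lm(i) = f1 * count(i) + f2 over labeled samples.
    f1, f2 = np.polyfit(vehicle_counts, luminances, deg=1)
    return f1, f2

def estimate_count(lm, f1, f2):
    # Invert the linear model to estimate the vehicle count from Lm.
    return (lm - f2) / f1

# Offline: one (f1, f2) pair per external condition, keyed here by
# (time period, weather) -- illustrative sample data.
params = {
    ("morning_rush", "sunny"): fit_f1_f2(np.array([0.0, 5.0, 12.0, 20.0]),
                                         np.array([40.0, 55.0, 76.0, 100.0])),
}

# In the field: select the applicable pair and apply the measured Lm.
f1, f2 = params[("morning_rush", "sunny")]
print(estimate_count(82.0, f1, f2))   # estimated vehicles on the current road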
Alternatively or in addition, the near-end region of the road as captured by the camera is sharp enough to yield sufficiently accurate brightness information for parameters such as LmV, LmL, and LenV. Therefore, current values of these parameters are obtained from the near-end portion of the road image acquired in the field, and parameters f1 and f2 are updated or corrected with those current values to obtain a more accurate road vehicle count. Parameters obtained from the near-end region in this way also apply to the far-end region and to the entire road area.
During field operation, if the road is busy, an image of a vehicle-free road (needed for the LmL parameter) cannot be acquired in real time. A statistical method may then be used: the real-time vehicle-free road brightness is estimated from the average brightness of the infrequently changing areas (e.g., sky, roadside buildings) and the vehicle-free road brightness in previously acquired images, together with the average brightness of the infrequently changing areas in the image acquired in real time. For example, the difference or ratio between the average brightness of the infrequently changing region and the vehicle-free road brightness is taken to remain substantially constant, so that the real-time vehicle-free road brightness is estimated from the previously obtained difference or ratio and the average brightness of the infrequently changing region in the real-time image.
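The ratio variant could be sketched as below; the calibration values are illustrative assumptions captured once, when a vehicle-free frame was available.

def estimate_no_vehicle_brightness(stable_now, stable_calib, no_vehicle_calib):
    # Assumes the ratio (no-vehicle road brightness) / (infrequently
    # changing region brightness) stays roughly constant across
    # illumination changes.
    return (no_vehicle_calib / stable_calib) * stable_now

# e.g. calibrated earlier: stable region 60.0, vehicle-free road 45.0;
# at run time the stable region reads 80.0:
print(estimate_no_vehicle_brightness(80.0, 60.0, 45.0))   # -> 60.0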
In addition to deriving the current vehicle count from the measured average luminance of the road image and the parameters f1 and f2, according to an embodiment of the present application there is also provided a method of fusing the number of vehicles in the near-end region (nearCnt) and the average luminance of the image of the far-end region into the total number of vehicles on the road using Kalman filtering.
Fig. 3A-3C illustrate system states for kalman filtering according to an embodiment of the application.
In fig. 3A, the road whose vehicle count is to be measured is divided into a near-end area and a far-end area according to distance from the camera. The system state comprises a plurality of parameters of the road area whose vehicle count is to be measured.
By way of example, let X be the system state, X = [number of near-end vehicles, number of far-end vehicles, outflow vehicle flow rate, far-end to near-end vehicle flow rate, inflow vehicle flow rate], where "vehicle flow rate" means the number of vehicles passing a specified road section per unit time, "inflow" means entering the far-end area from afar, toward the camera, and "outflow" means leaving the measured road area via the near-end area (away from the far-end area).
In the example of fig. 3A, "near-end vehicle number" refers to the vehicle count in the near-end region of the road, and "far-end vehicle number" to that in the far-end region. "Outflow vehicle flow rate" is the number of vehicles per unit time leaving the measured road area from the near-end region (away from the far-end region); "far-end to near-end vehicle flow rate" is the number of vehicles per unit time moving from the far-end region into the near-end region; and "inflow vehicle flow rate" is the number of vehicles per unit time entering the far-end region from outside the image range, toward the camera. In fig. 3A, the arrows indicate the traveling direction of vehicles in the measured road area.
It will be appreciated that, besides the system state X illustrated in fig. 3A, those skilled in the art will recognize other ways of describing the system state. For example, for the opposite lane of the road shown in fig. 3A, where vehicles travel against the arrow direction of fig. 3A, the system state is X' = [number of near-end vehicles, number of far-end vehicles, outflow vehicle flow rate, near-end to far-end vehicle flow rate, inflow vehicle flow rate], where "outflow vehicle flow rate" refers to the number of vehicles per unit time leaving the measured road area from the far-end area, "near-end to far-end vehicle flow rate" to the number of vehicles per unit time leaving the near-end area and entering the far-end area, and "inflow vehicle flow rate" to the number of vehicles per unit time entering the near-end area from outside the image range, toward the camera. The corresponding state transition matrix for Kalman filtering is

A' = [[1, 0, 0, -dt, dt],
      [0, 1, -dt, dt, 0],
      [0, 0, 1, 0, 0],
      [0, 0, 0, 1, 0],
      [0, 0, 0, 0, 1]]
According to embodiments of the present application, the division into near-end and far-end regions need not be strict. In the example of fig. 3A, the two regions are adjacent but do not overlap, and together they form the road area whose vehicle count is measured. In the example of fig. 3B, the regions are neither adjacent nor overlapping, and their union is smaller than the measured road area. In the example of fig. 3C, the regions overlap each other, the sum of their extents is larger than the measured road area, and the area they jointly cover equals the measured road area.
Thus, according to embodiments of the present application, the partitioning into near-end and far-end regions need not be strict, which also relaxes the precision requirements on quantities such as nearCnt and/or Lm. The Kalman filtering model corrects itself automatically as it runs.
The process of estimating the number of road vehicles using Kalman filtering according to an embodiment of the application will be described below by taking the system state X shown in FIGS. 3A-3C as an example.
Let Z be a measurement of the system, Z = [nearCnt, LLm], where nearCnt is the number of vehicles in the near-end region obtained with, e.g., prior art or future techniques, and the road-area average radiance Lm is represented by the average luminance LLm of the far-end region, likewise obtained with, e.g., prior art or future techniques, with Lm = f(LLm), f being a mapping from LLm to Lm. For simplicity, Lm may be taken to be in a linear relationship with LLm. Optionally, to obtain the near-end vehicle count nearCnt, the camera may acquire an image of the near-end region and apply image recognition; nearCnt may also be obtained by means of, for example, lidar, sensor networks laid in the road, or identifying signals actively emitted by vehicles in the near-end region.
The model for predicting the system state X(k) of the current round (round k) from the system state X(k-1) of the previous round (round k-1) is as follows:
X(k) = A*X(k-1) + B*U + W(k), where A is the state transition matrix of the system, B*U is the control quantity (for example, B*U = 0 is taken), and W(k) is noise.
Referring also to fig. 3A, the system state X obeys the following relationships:
near-end vehicle count of round k = near-end vehicle count of round k-1 + number of vehicles flowing from the far end to the near end during round k-1 − number of vehicles flowing out during round k-1;
far-end vehicle count of round k = far-end vehicle count of round k-1 + number of vehicles flowing in during round k-1 − number of vehicles flowing from the far end to the near end during round k-1;
the outflow vehicle flow rate, the far-end to near-end vehicle flow rate, and the inflow vehicle flow rate are approximated as remaining constant over a short period of time.
From these relationships the state transition matrix A is obtained:

A = [[1, 0, -dt, dt, 0],
     [0, 1, 0, -dt, dt],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1]]

where dt is the time interval between the two consecutive (round k-1 and round k) measurements, and k is a positive integer.
Let Z(k) be the measurement of the system at round k; the relation between the system state X(k) and Z(k) is
Z(k) = H*X(k) + V(k). The measured near-end vehicle count nearCnt corresponds to the near-end vehicle count in the system state X, and, as disclosed above, the average radiance represented by the measurement LLm satisfies Lm = f1 × (far-end vehicle count) + f2. Accordingly,

H = [[1, 0, 0, 0, 0],
     [0, f1', 0, 0, 0]]

Since f2 is a constant, it does not affect the matrix H; V(k) is the measurement error, and the difference between f1' and f1 reflects the relationship between Lm and LLm. Let R be the covariance matrix (2×2) of the measurement error, obtained by experimental methods and/or from empirical data.
Fig. 4 illustrates a flowchart of a method of calculating the total number of vehicles in the target road area by fusing the measured near-end vehicle count nearCnt and the average luminance LLm of the far-end area with Kalman filtering, in accordance with an embodiment of the present application.
In the system initialization phase, initial parameters of the system are obtained (410), for example the initial value X(0) of the system state X, the initial value of the parameter f1', and the parameters A, H, P, Q, and R for the Kalman filtering.
In step 420, the a priori estimate X_ of the system state X(k) of the current round and its covariance P_ are obtained from the system state X(k-1) of the previous round:
X_=A*X(k-1);
P_ = A*P(k-1)*A' + Q, where P(k-1) is the covariance matrix (5×5) of the round-(k-1) state and Q is the process-noise covariance matrix (5×5). P and Q are obtained through experiments and/or empirical data.
In step 430, the measured near-end vehicle count nearCnt and the far-end average luminance LLm are obtained; the ways of obtaining them are described in detail above. For example, images of the near-end and far-end regions of the road are acquired, and nearCnt and LLm are derived from those images.
In step 450, optionally, the parameter f1' is updated. Since f1 = (LmV - LmL) × LenV / Dist, correspondingly f1' = (LLmV - LLmL) × LLenV / LDist, where LLmV is the average luminance of the vehicle pixels in a near-end road image containing vehicles, LLmL is the average luminance of the vehicle-free near-end road image, LLenV is the average vehicle length in the near-end image, and LDist is the length of the near-end road section in the image. The relevant quantities are extracted from the near-end image to update the parameter f1'. Typically, the matrix H need not change across multiple Kalman filtering iterations. In applications, however, the brightness of the road may vary significantly with illumination, weather, and the like; thus, according to an embodiment of the application, the parameter f1' is updated where appropriate. To keep the filter stable, f1' is not updated in every iteration. By way of example, f1' is updated only after a specified number N of iterations. As another example, a candidate f1'(k) is computed in every iteration but not immediately applied to the Kalman filtering; only when a candidate f1'(k1) obtained in some iteration differs from the previously applied f1' by more than a specified threshold does f1'(k1) replace the applied value. As yet another example, the two policies are combined: instead of updating f1' in every iteration, a candidate f1'(k1) is obtained only after a specified number N of iterations, and it replaces the previously applied f1' only when it has changed by more than the specified threshold.
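A minimal sketch of the threshold-gated update policy for f1' described above follows; the class name and the threshold value are illustrative assumptions.

class F1PrimeUpdater:
    def __init__(self, f1_initial, threshold=0.1):
        self.f1_applied = f1_initial   # value currently used in the Kalman computation
        self.threshold = threshold

    def observe(self, f1_candidate):
        # A candidate f1'(k) may be computed every iteration (or every N
        # iterations), but it replaces the applied value only when it has
        # drifted past the threshold, keeping the filter stable.
        if abs(f1_candidate - self.f1_applied) > self.threshold:
            self.f1_applied = f1_candidate
        return self.f1_applied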
In step 460, the Kalman gain Kg is computed: Kg = P_*H' / (H*P_*H' + R), where H' denotes the transpose of the matrix H, "/" denotes matrix division, i.e., multiplication by the inverse of the divisor, and R is the covariance matrix (2×2) of the measurement error.
In step 470, the system state X(k) of the current round and its covariance P(k) are updated:
X(k)=X_+Kg*(Z(k)-H*X_);
P(k) = (I - Kg*H)*P_, where I is the 5×5 identity matrix.
After each iteration is completed, the near-end and far-end vehicle counts in the system state X(k) are summed to obtain the total number of vehicles on the target road for the current round (round k).
Processing then returns to step 420 for the next iteration of the Kalman filtering.
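The loop of steps 420-470 can be sketched with numpy as below, using the state X = [nearCnt, farCnt, outflow, far-to-near flow, inflow] and measurement Z = [nearCnt, LLm] defined above; all numeric initial values, noise covariances, and measurement samples are illustrative assumptions.

import numpy as np

def make_A(dt):
    # Counts change by (flow rate * dt); flow rates persist between rounds.
    A = np.eye(5)
    A[0, 2], A[0, 3] = -dt, dt    # near += (far-to-near - outflow) * dt
    A[1, 3], A[1, 4] = -dt, dt    # far  += (inflow - far-to-near) * dt
    return A

def make_H(f1p):
    # Z = H*X: nearCnt observes the near-end count; LLm - f2' = f1' * farCnt.
    H = np.zeros((2, 5))
    H[0, 0] = 1.0
    H[1, 1] = f1p
    return H

def kalman_step(x, P, z, A, H, Q, R):
    # One round: predict (step 420), gain (step 460), update (step 470).
    x_ = A @ x
    P_ = A @ P @ A.T + Q
    Kg = P_ @ H.T @ np.linalg.inv(H @ P_ @ H.T + R)
    x = x_ + Kg @ (z - H @ x_)
    P = (np.eye(len(x)) - Kg @ H) @ P_
    return x, P

# Initialization (step 410) -- illustrative values.
dt, f1p, f2p = 1.0, 3.0, 40.0
A, H = make_A(dt), make_H(f1p)
x = np.array([4.0, 10.0, 0.2, 0.3, 0.3])   # X(0)
P, Q = np.eye(5), 0.01 * np.eye(5)
R = np.diag([1.0, 25.0])                    # measurement error covariance

# Measurements (step 430): (nearCnt, LLm) per round -- illustrative.
for nearCnt, LLm in [(5, 76.0), (6, 82.0), (5, 79.0)]:
    z = np.array([nearCnt, LLm - f2p])      # fold the constant f2' out of H
    x, P = kalman_step(x, P, z, A, H, Q, R)
    print(round(x[0] + x[1], 1))            # near + far = total vehicle count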
According to an embodiment of the application, the Kalman filtering iteration is run continuously to track the current number of vehicles on the road.
FIG. 5 illustrates a graph of estimated road area vehicle number over time according to an embodiment of the application.
In fig. 5, the horizontal axis represents time (increasing to the right) and the vertical axis the number of vehicles. The solid line is the true vehicle count on the road, the open circles mark the count obtained directly from the measurements (nearCnt + (LLm − f2')/f1', where f2' = LLmL), and a further series marks the count obtained using the embodiment according to the present application. The counts produced by the Kalman filtering method according to the embodiment are clearly distributed closer to the true values, and many of the outliers among the open circles are filtered out.
According to a further embodiment of the application, let the system measurement be Z = [nearCnt, D], where nearCnt is the number of vehicles in the near-end region and D is the average relative brightness of the far-end road region, each obtained with, e.g., prior art or future techniques. Following the method described in connection with fig. 4, nearCnt and the far-end average relative brightness are fused to obtain the total number of vehicles on the road.
According to still another embodiment of the application, let the system measurement be Z = [nearCnt, LLm, D], and fuse the three measured values (nearCnt, LLm, and D) into the total number of vehicles on the road. Since Z(k) = H*X(k) + V(k), let

H = [[1, 0, 0, 0, 0],
     [0, f1L, 0, 0, 0],
     [0, f1D, 0, 0, 0]]

so that the system state X(k) is estimated from the measurements Z, where f1L = (LLmV − LLmL) × LLenV / LDist and f1D = (LDV − LDL) × LLenV / LDist. Here f1L and f1D are parameters obtained from the near-end road region, with initial values obtained offline or from acquired image or sensor data of that region. For example, LLmV is the average luminance of vehicles in the near-end image, LLmL is the average luminance of the road without vehicles in that image, LLenV is the average vehicle length in that image, LDist is the road length in that image, LDV is the average relative luminance of vehicles in that image, and LDL is the average relative luminance of the vehicle-free road in that image (theoretically equal to 0).
Fig. 6 illustrates a block diagram of estimating the number of vehicles in a road area using a deep neural network, according to an embodiment of the present application.
The deep neural network 600 for estimating the number of vehicles in a road area according to an embodiment of the present application includes a sub-network 610 and a sub-network 620. The input to the deep neural network 600 includes a far-end area image and a near-end area vehicle number. The input to the deep neural network 600 also includes an optional near-end region image. The distal region image and the optional proximal region image are inputs to the sub-network 610. The sub-network 610 outputs the estimated number of vehicles in the far-end zone and provides it to the sub-network 620. The number of near-end area vehicles is also an input to the sub-network 620, and the sub-network 620 outputs an estimated number of road area vehicles (total).
Optionally, the near-end vehicle count is obtained from the near-end region image using prior-art approaches. The far-end and/or near-end images may be preprocessed, for example by labeling the road area, the far-end region, and/or the near-end region in them.
The sub-network 610 is, for example, a multi-layer convolutional neural network comprising, for example, convolutional layers (CNN), pooling layers (not shown), and a fully connected layer (Dense) 615. The convolutional layers receive the inputs of the sub-network 610, while the fully connected layer 615 provides its output. Optionally, the near-end region image is provided to the sub-network 610 so that it learns the effect of the characteristics (e.g., brightness) of the near-end image on the far-end vehicle count estimated from the far-end image. Still alternatively, the average luminance or average relative luminance extracted from the near-end image is provided as an input to the sub-network 610.
The sub-network 620 comprises a multi-layer stateful long short-term memory network (Stateful LSTM) and a fully connected layer (Dense) 625. The Stateful LSTM receives the inputs of the sub-network 620, and the fully connected layer 625 provides its output. The Stateful LSTM is provided primarily to capture the periodic fluctuation of traffic flow; it may also be replaced with a self-attention network.
In training, the two sub-networks 610 and 620 are first trained separately using manually labeled data. After each has initially stabilized, they are combined into the deep neural network 600 for end-to-end training.
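A minimal Keras sketch of the two-sub-network layout of fig. 6 is given below; the layer sizes, input shapes, and batch parameters are illustrative assumptions and not values from this description.

import tensorflow as tf
from tensorflow.keras import layers, models

# Sub-network 610: CNN over the far-end area image -> far-end vehicle count.
img_in = layers.Input(shape=(64, 256, 1), name="far_end_image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
far_cnt = layers.Dense(1, name="far_end_count")(x)        # Dense layer 615
subnet_610 = models.Model(img_in, far_cnt)

# Sub-network 620: stateful LSTM over sequences of [far-end count,
# near-end count] -> total vehicle count; statefulness carries the
# periodic fluctuation of traffic flow across batches.
batch, steps = 1, 8
seq_in = layers.Input(shape=(steps, 2), batch_size=batch, name="count_sequence")
y = layers.LSTM(32, stateful=True, return_sequences=True)(seq_in)
y = layers.LSTM(16, stateful=True)(y)
total = layers.Dense(1, name="total_count")(y)            # Dense layer 625
subnet_620 = models.Model(seq_in, total)

subnet_610.compile(optimizer="adam", loss="mse")
subnet_620.compile(optimizer="adam", loss="mse")
# Per the training scheme above, the sub-networks would first be trained
# separately on labeled data and then chained for joint fine-tuning.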
The methods provided according to embodiments of the present application may be implemented by software, hardware, firmware, an FPGA (field-programmable gate array), a single-chip microcomputer, a microprocessor, a microcontroller, an ASIC (application-specific integrated circuit), or the like, of an information processing device provided at an intersection, at a traffic light, or in a network.
The examples referred to in the present application are described for illustrative purposes only and are not intended to limit the application; modifications, additions and/or deletions to the embodiments may be made without departing from its scope.
Many modifications and other embodiments of the applications set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the applications are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (6)

1. A method of estimating the number of vehicles in a specified road area, wherein the specified road area includes a near-end area and a far-end area, the method comprising:
acquiring the number of vehicles in the near-end area and acquiring the brightness of an image of the far-end area;
calculating the number of vehicles in the far-end area according to the number of vehicles in the near-end area and the brightness of the image of the far-end area; wherein, according to LLm = f1'×LVC + f2', with the vehicle counts LVC(i) of the images of a plurality of first areas obtained from the image of the near-end area and the average brightnesses LLm(i) of the images of the plurality of first areas, the parameters f1' and f2' are solved or fitted over the different values of i, where LLm is the average brightness of the image of a specified road area, LVC is the number of vehicles in the image of that specified road area, and i indicates one of the plurality of first areas;
obtaining the number of vehicles in the far-end area from the average brightness of the image of the far-end area and the parameters f1' and f2' according to LLm = f1'×LVC + f2';
and estimating the number of vehicles in the specified road area according to the number of vehicles in the near-end area and the number of vehicles in the far-end area.
2. The method of claim 1, further comprising:
acquiring an image comprising the far-end area; stretching the far-end portion of the image comprising the far-end area according to a world coordinate system to obtain an image of the far-end area consistent with the shape of the far-end road area; and/or
acquiring an image comprising the near-end area; and stretching the near-end portion of the image comprising the near-end area according to a world coordinate system to obtain an image of the near-end area consistent with the shape of the near-end road area.
3. A method of estimating the number of vehicles in a specified road area, wherein the specified road area includes a near-end area and a far-end area, the method comprising:
acquiring the number of vehicles in the near-end area and acquiring the brightness of an image of the far-end area;
calculating the number of vehicles in the far-end area according to the number of vehicles in the near-end area and the brightness of the image in the far-end area;
wherein calculating the number of vehicles in the far-end area according to the number of vehicles in the near-end area and the brightness of the image of the far-end area comprises: calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the brightness of the image of the far-end area by a Kalman filtering method, wherein the number of vehicles in the near-end area and the brightness of the image of the far-end area are taken as the measurement Z for the Kalman filtering method, and the system state for Kalman filtering is X = [near-end vehicle count, far-end vehicle count, outflow vehicle flow rate, far-end to near-end vehicle flow rate, inflow vehicle flow rate], wherein a vehicle flow rate refers to the number of vehicles passing through the specified road area per unit time, the inflow vehicle flow rate refers to the number of vehicles entering the specified road area per unit time, the outflow vehicle flow rate refers to the number of vehicles leaving the specified road area per unit time, and the far-end to near-end vehicle flow rate refers to the number of vehicles entering the near-end area from the far-end area per unit time; and the state transition matrix for Kalman filtering is

A = [[1, 0, -dt, dt, 0],
     [0, 1, 0, -dt, dt],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1]]

where dt is the time interval between two iterations of the Kalman filtering; and wherein Z = H*X + V, with

H = [[1, 0, 0, 0, 0],
     [0, f1', 0, 0, 0]]

where V is a measurement error and f1' is a specified parameter;
and estimating the number of vehicles in the specified road area according to the number of vehicles in the near-end area and the number of vehicles in the far-end area.
4. A method of estimating the number of vehicles in a specified road area, wherein the specified road area includes a near-end area and a far-end area, the method comprising:
acquiring the number of vehicles in the near-end area and acquiring the brightness of an image of the far-end area;
calculating the number of vehicles in the far-end area according to the number of vehicles in the near-end area and the brightness of the image in the far-end area;
wherein calculating the number of vehicles in the far-end area according to the number of vehicles in the near-end area and the brightness of the image of the far-end area comprises: calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the brightness of the image of the far-end area by a Kalman filtering method, wherein
the number of vehicles in the near-end area, the average brightness of the image of the far-end area, and the average relative brightness of the image of the far-end area are taken as the measurement Z for the Kalman filtering method,
the system state for Kalman filtering is X = [near-end vehicle count, far-end vehicle count, outflow vehicle flow rate, far-end to near-end vehicle flow rate, inflow vehicle flow rate], wherein a vehicle flow rate refers to the number of vehicles passing through the specified road area per unit time, the inflow vehicle flow rate refers to the number of vehicles entering the specified road area per unit time, the outflow vehicle flow rate refers to the number of vehicles leaving the specified road area per unit time, and the far-end to near-end vehicle flow rate refers to the number of vehicles entering the near-end area from the far-end area per unit time; and the state transition matrix for Kalman filtering is

A = [[1, 0, -dt, dt, 0],
     [0, 1, 0, -dt, dt],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1]]

where dt is the time interval between two iterations of the Kalman filtering; and wherein Z = H*X + V, with

H = [[1, 0, 0, 0, 0],
     [0, f1', 0, 0, 0],
     [0, f1D, 0, 0, 0]]

where f1' is a specified parameter, V is a measurement error, f1D = (LDV − LDL) × LLenV / LDist, LDV is the average relative luminance of vehicles in a specified-area image, LDL is the average relative luminance of the road without vehicles in a specified-area image, LLenV is the average length of vehicles in a specified-area image, and LDist is the road length in a specified-area image;
and estimating the number of vehicles in the specified road area according to the number of vehicles in the near-end area and the number of vehicles in the far-end area.
5. The method of claim 4, wherein
the calculating of the number of vehicles in the far-end area from the number of vehicles in the near-end area and the brightness of the image of the far-end area by the Kalman filtering method comprises:
obtaining the a priori estimate X_ of the system state X(k) of the current round and its covariance P_ from the system state X(k-1) of the previous round;
X_=A*X(k-1);
P_ = A*P(k-1)*A' + Q, where P(k-1) is the covariance matrix of the round-(k-1) system state and Q is the process-noise covariance matrix;
obtaining the number of vehicles in the near-end area and the brightness of the image of the far-end area as the measurement Z(k) of the current round;
calculating the Kalman gain Kg, Kg = P_*H' / (H*P_*H' + R), wherein H' denotes the transpose of the matrix H, A' denotes the transpose of the matrix A, "/" denotes matrix division, and R is the covariance matrix of the measurement error;
updating the system state X(k) of the current round and its covariance P(k):
X(k)=X_+Kg*(Z(k)-H*X_);
P(k) = (I - Kg*H)*P_, where I is the identity matrix.
6. An information processing apparatus comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the method according to one of claims 1-5 when executing the program.
CN202110770253.7A 2021-07-07 2021-07-07 Road vehicle number identification method and device Active CN113506264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110770253.7A CN113506264B (en) 2021-07-07 2021-07-07 Road vehicle number identification method and device


Publications (2)

Publication Number Publication Date
CN113506264A CN113506264A (en) 2021-10-15
CN113506264B true CN113506264B (en) 2023-08-29

Family

ID=78012047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110770253.7A Active CN113506264B (en) 2021-07-07 2021-07-07 Road vehicle number identification method and device

Country Status (1)

Country Link
CN (1) CN113506264B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8599257B2 (en) * 2006-08-18 2013-12-03 Nec Corporation Vehicle detection device, vehicle detection method, and vehicle detection program
US9076332B2 (en) * 2006-10-19 2015-07-07 Makor Issues And Rights Ltd. Multi-objective optimization for real time traffic light control and navigation systems for urban saturated networks
JP5982026B2 (en) * 2014-03-07 2016-08-31 タタ コンサルタンシー サービシズ リミテッドTATA Consultancy Services Limited Multi-range object detection apparatus and method
JP6440411B2 (en) * 2014-08-26 2018-12-19 日立オートモティブシステムズ株式会社 Object detection device
EP3723364A4 (en) * 2017-12-04 2021-02-24 Sony Corporation Image processing device and image processing method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404306A (en) * 1994-04-20 1995-04-04 Rockwell International Corporation Vehicular traffic monitoring system
JP2004020237A (en) * 2002-06-12 2004-01-22 Fuji Heavy Ind Ltd Vehicle control system
JP2007064894A (en) * 2005-09-01 2007-03-15 Fujitsu Ten Ltd Object detector, object detecting method, and object detection program
JP2009186301A (en) * 2008-02-06 2009-08-20 Mazda Motor Corp Object detection device for vehicle
JP2010103810A (en) * 2008-10-24 2010-05-06 Ricoh Co Ltd In-vehicle monitoring apparatus
CN103839415A (en) * 2014-03-19 2014-06-04 重庆攸亮科技有限公司 Traffic flow and occupation ratio information acquisition method based on road surface image feature identification
KR20160083619A (en) * 2014-12-31 2016-07-12 (주)베라시스 Vehicle Detection Method in ROI through Plural Detection Windows
CN105528891A (en) * 2016-01-13 2016-04-27 深圳市中盟科技有限公司 Traffic flow density detection method and system based on unmanned aerial vehicle monitoring
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huansheng Song et al., "Vision-based vehicle detection and counting system using deep learning in highway scenes", European Transport Research Review, pp. 1-16 *

Also Published As

Publication number Publication date
CN113506264A (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN111694010B (en) Roadside vehicle identification method based on fusion of vision and laser radar
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110244322B (en) Multi-source sensor-based environmental perception system and method for pavement construction robot
KR100377067B1 (en) Method and apparatus for detecting object movement within an image sequence
CN105711597B (en) Front locally travels context aware systems and method
CN109085823B (en) Automatic tracking driving method based on vision in park scene
CN112149550B (en) Automatic driving vehicle 3D target detection method based on multi-sensor fusion
CN102867416B (en) Vehicle part feature-based vehicle detection and tracking method
JP4800455B2 (en) Vehicle speed measuring method and apparatus
CN108320510A (en) One kind being based on unmanned plane video traffic information statistical method and system
CN103473554B (en) Artificial abortion's statistical system and method
CN110660222A (en) Intelligent environment-friendly electronic snapshot system for black smoke vehicle on road
CN103176185A (en) Method and system for detecting road barrier
KR100834550B1 (en) Detecting method at automatic police enforcement system of illegal-stopping and parking vehicle and system thereof
CN110379168A (en) A kind of vehicular traffic information acquisition method based on Mask R-CNN
CN113408454B (en) Traffic target detection method, device, electronic equipment and detection system
CN115457780B (en) Vehicle flow and velocity automatic measuring and calculating method and system based on priori knowledge set
CN114913399B (en) Vehicle track optimization method and intelligent traffic system
CN116894855A (en) Intersection multi-target cross-domain tracking method based on overlapping view
CN110414392A (en) A kind of determination method and device of obstacle distance
CN113506264B (en) Road vehicle number identification method and device
CN116794650A (en) Millimeter wave radar and camera data fusion target detection method and device
CN113468911A (en) Vehicle-mounted red light running detection method and device, electronic equipment and storage medium
CN115984768A (en) Multi-target pedestrian real-time detection positioning method based on fixed monocular camera
Rachman et al. Camera Self-Calibration: Deep Learning from Driving Scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant