CN113506264A - Road vehicle number identification method and device


Info

Publication number
CN113506264A
Authority
CN
China
Prior art keywords
vehicles
road
area
image
far
Prior art date
Legal status
Granted
Application number
CN202110770253.7A
Other languages
Chinese (zh)
Other versions
CN113506264B (en)
Inventor
尚利宏 (Shang Lihong)
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
    • Application filed by Beihang University
    • Priority to CN202110770253.7A
    • Publication of CN113506264A
    • Application granted
    • Publication of CN113506264B
    • Legal status: Active
    • Anticipated expiration


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
            • G06T 7/20 Analysis of motion
              • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
            • G06T 7/90 Determination of colour characteristics
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/30 Subject of image; Context of image processing
              • G06T 2207/30242 Counting objects in image
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/044 Recurrent networks, e.g. Hopfield networks
                • G06N 3/045 Combinations of networks
              • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
          • Y02T 10/00 Road transport of goods or passengers
            • Y02T 10/10 Internal combustion engine [ICE] based vehicles
              • Y02T 10/40 Engine management systems


Abstract

A road vehicle number identification method and apparatus are provided. A method of estimating the number of vehicles in a designated road area is provided, wherein the designated road area includes a near-end area and a far-end area, the method comprising: acquiring the number of vehicles in the near-end area and the average brightness of the image of the far-end area; calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the average brightness of the image of the far-end area; and estimating the number of vehicles in the designated road area from the number of vehicles in the near-end area and the number of vehicles in the far-end area.

Description

Road vehicle number identification method and device
Technical Field
The present application relates to intelligent transportation technology, and in particular to a method and device for accurate vehicle counting that fuses accurate information from the near end of a road with fuzzy information from the far end.
Background
Real-time statistics on the number of vehicles on a road help reveal congestion conditions and can be used to optimize urban traffic scheduling and road planning.
Traditional methods of identifying the number of road vehicles mainly rely on induction coils embedded in the road area to be measured, radar speed-measurement cameras, lidar coverage, and the like. Embedded coils and radar speed cameras can only count passing vehicles cumulatively. To count the total number of vehicles traveling on a specific road, passing vehicles must be counted separately at the road's entrance and exit and the two counts subtracted. However, if the road is not closed (e.g., parking lots or residential compounds line both sides), the data obtained this way can fluctuate greatly and become inaccurate. Covering a designated road with lidar enables relatively accurate vehicle counting, but at very high cost.
There is also prior art in which a camera captures road images and vehicles are identified and counted by machine vision. However, the following problems arise:
1. The length of road between two signalized intersections in a city is typically over 400 meters. When the road is imaged with an ordinary camera, distant vehicles occupy too few pixels to be recognized. If a telephoto lens is used to capture the far field, its field of view may fail to cover nearby vehicles, and the near view may be blurred. Chinese patent CN112364793A discloses a scheme for detecting vehicles over a large range using a long-focus camera combined with a short-focus camera.
2. The camera is generally mounted on a traffic-light cantilever or a gantry above the lane at the intersection, at a height of no more than 7 meters. At this height, a rear vehicle far from the camera is occluded by the vehicle in front of it, making it difficult to recognize each vehicle in the captured image. In particular, when the road is congested, the gap between vehicles is very small and most of a rear vehicle is occluded in the image, so the rear vehicle is difficult to identify.
Disclosure of Invention
It is desirable to obtain the total number of vehicles in a road range that includes both a far-end area and a near-end area, and to solve the problem of inaccurate road vehicle counting caused by low identification accuracy in the far-end area.
According to a first aspect of the present application, there is provided a first method of estimating the number of vehicles in a designated road area, wherein the designated road area includes a near-end area and a far-end area, the method comprising: acquiring the number of vehicles in the near-end area and the brightness of the image of the far-end area; calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the brightness of the image of the far-end area; and estimating the number of vehicles in the designated road area from the number of vehicles in the near-end area and the number of vehicles in the far-end area.
According to the first method of the first aspect, there is provided a second method of estimating the number of vehicles in a designated road area, wherein an image of the near-end area is acquired and the number of vehicles in the near-end area is identified from the acquired image; and/or the number of vehicles in the near-end area is acquired through radar, sensors arranged in the near-end area, and/or signals actively transmitted by vehicles in the near-end area.
According to the first or second method of the first aspect, there is provided a third method of estimating the number of vehicles in a designated road area, wherein the parameters f1' = (luminance of vehicles in the near-end area image - luminance of the vehicle-free road in the near-end area image) × average vehicle length in the near-end area image / length of the near-end area image, and f2' = luminance of the vehicle-free road in the near-end area image, are obtained from the luminance of vehicles in the near-end area image, the luminance of the vehicle-free road in the near-end area image, the average vehicle length in the near-end area image, and the length of the near-end area image; and the number of vehicles in the far-end area is obtained from (average brightness of the far-end area image - f2') / f1' × the length of the far-end area image.
According to the first or second method of the first aspect, there is provided a fourth method of estimating the number of vehicles in a designated road area, wherein the parameter f1' is obtained from the number of vehicles in the images of one or more first areas taken from the near-end area image and a brightness statistic of the images of the one or more first areas; and the number of vehicles in the far-end area is obtained from a brightness statistic of the far-end area image and the parameter f1'.
According to the fourth method of the first aspect, there is provided a fifth method of estimating the number of vehicles in a designated road area, wherein the parameters f1' and f2' are obtained from the number of vehicles in the near-end area image and the average brightness of the near-end area image according to: average brightness of the near-end area image = f1' × number of vehicles in the near-end area image + f2'; and the number of vehicles in the far-end area is obtained from the average brightness of the far-end area image and the parameters f1' and f2'.
According to the fourth method of the first aspect, there is provided a sixth method of estimating the number of vehicles in a designated road area, wherein the parameter f2' is obtained from a luminance statistic of the vehicle-free road in the images of the one or more first areas; and the number of vehicles in the far-end area is obtained from a brightness statistic of the far-end area image and the parameters f1' and f2'.
According to the fourth method of the first aspect, there is provided a seventh method of estimating the number of vehicles in a designated road area, wherein the parameters f1' and f2' are obtained according to: brightness of the first area image = f1' × number of vehicles in the first area image + f2'; and the number of vehicles in the far-end area is obtained from the brightness of the far-end area image and the parameters f1' and f2'.
According to the fourth method of the first aspect, there is provided an eighth method of estimating the number of vehicles in a designated road area, wherein the parameters f1' = (average luminance of vehicles in the first area image - average luminance of the vehicle-free road in the first area image) × average vehicle length in the first area image / length of the first area image, and f2' = average luminance of the vehicle-free road in the first area image; and the number of vehicles in the far-end area is obtained from (average brightness of the far-end area image - f2') / f1' × the length of the far-end area image.
According to the third method of the first aspect, there is provided a ninth method of estimating the number of vehicles in a designated road area, wherein an image of the near-end area is captured by a camera, and one or more of the luminance of vehicles, the luminance of the vehicle-free road, the average vehicle length, and the length of the near-end area image are obtained from the image of the near-end area; and/or one or more of these quantities are obtained in a laboratory or offline.
According to one of the fourth to eighth methods of the first aspect, there is provided a tenth method of estimating the number of vehicles in a designated road area, wherein an image of the near-end area is acquired by a camera, and from it one or more of the following are obtained: a luminance statistic of the vehicle images in the images of the one or more first areas, a luminance statistic of the vehicle-free road in those images, the average vehicle length in those images, and the length of those images; and/or one or more of these quantities are obtained in a laboratory or offline.
According to one of the first to tenth methods of the first aspect, there is provided an eleventh method of estimating the number of vehicles in a designated road area, further comprising: calculating the brightness of the image of a designated area using the luminance or relative luminance of its pixels; wherein the relative luminance of a pixel is the difference between the pixel's luminance and the average luminance of the image of the designated road area in the vehicle-free state, or the difference between the pixel's luminance and the average luminance of an infrequently changing area in the image containing the designated road area; and wherein a pixel's luminance is either its own luminance value or a statistic of the luminance of all pixels within a specified window around it.
According to one of the first to eleventh methods of the first aspect, there is provided a twelfth method of estimating the number of vehicles in a designated road area, further comprising: acquiring an image including the far-end area, and stretching the far-end portion of that image according to the world coordinate system to obtain a far-end area image whose shape is consistent with the far-end area of the road; and/or acquiring an image including the near-end area, and stretching the near-end portion of that image according to the world coordinate system to obtain a near-end area image whose shape is consistent with the near-end area of the road.
According to one of the first to twelfth methods of the first aspect, there is provided a thirteenth method of estimating the number of vehicles in a designated road area, wherein calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the brightness of the far-end area image comprises: calculating the number of vehicles in the far-end area by Kalman filtering, wherein the number of vehicles in the near-end area and the brightness of the far-end area image are taken as the measurement Z; the system state is X = [number of vehicles at the near end, number of vehicles at the far end, outflow traffic rate, far-to-near traffic rate, inflow traffic rate], where a traffic rate is a number of vehicles per unit time: the inflow traffic rate is the number of vehicles entering the designated road area per unit time, the outflow traffic rate is the number of vehicles leaving it per unit time, and the far-to-near traffic rate is the number of vehicles entering the near-end area from the far-end area per unit time; and the state transition matrix for the Kalman filter is as follows.
[Patent figure: state transition matrix A for the Kalman filter]
where dt is the time interval between two iterations of the Kalman filter; and Z = H·X + V,
[Patent figure: measurement matrix H]
v is the measurement error, and f 1' is the specified parameter.
According to the thirteenth method of the first aspect, there is provided a fourteenth method of estimating the number of vehicles in a designated road area, wherein the system state for Kalman filtering is X = [number of vehicles at the near end, number of vehicles at the far end, outflow traffic rate, near-to-far traffic rate, inflow traffic rate], the near-to-far traffic rate being the number of vehicles entering the far-end area from the near-end area per unit time; the state transition matrix for Kalman filtering is
[Patent figure: state transition matrix A for the near-to-far variant]
According to the twelfth or thirteenth method of the first aspect, there is provided a fifteenth method of estimating the number of vehicles in a designated road area, wherein the parameter f1' is a parameter obtained from one or more first areas of the near-end area.
According to the fifteenth method of the first aspect, there is provided a sixteenth method of estimating the number of vehicles in a designated road area, wherein the parameter f1' is obtained from the number of vehicles in the images of the one or more first areas taken from the near-end area image and a brightness statistic of those images; or the parameters f1' and f2' are obtained from the number of vehicles in the near-end area image and the average brightness of the near-end area image according to: average brightness of the near-end area image = f1' × number of vehicles in the near-end area image + f2'; or the parameters f1' and f2' are obtained according to: brightness of the first area image = f1' × number of vehicles in the first area image + f2'; or f1' = (average luminance of vehicles in the first area image - average luminance of the vehicle-free road in the first area image) × average vehicle length in the first area image / length of the first area image, and f2' = average luminance of the vehicle-free road in the first area image.
According to one of the twelfth to sixteenth methods of the first aspect, there is provided a seventeenth method of estimating the number of vehicles in a designated road area, wherein the number of vehicles in the near-end area and the difference in luminance of the far-end area image are taken as the measurement Z for the Kalman filter, and
[Patent figure: measurement matrix H for the luminance-difference measurement]
where f1D is a specified parameter.
According to one of the twelfth to sixteenth methods of the first aspect, there is provided an eighteenth method of estimating the number of vehicles in a designated road area, wherein the number of vehicles in the near-end area, the average luminance of the far-end area image, and the difference in average luminance of the far-end area image are taken as the measurement Z for the Kalman filter,
[Patent figure: measurement matrix H for the combined measurement]
where f1' and f1D are specified parameters.
According to the tenth or eleventh method of the first aspect, there is provided a nineteenth method of estimating the number of vehicles in a designated road area, wherein the luminance used in obtaining the parameter f1' is replaced with relative luminance, and the parameter f1D is obtained in the same manner as the parameter f1'.
According to one of the twelfth to nineteenth methods of the first aspect, there is provided a twentieth method of estimating the number of vehicles in a designated road area, wherein calculating the number of vehicles in the far-end area from the number of vehicles in the near-end area and the average brightness of the far-end area image using Kalman filtering comprises: obtaining the prior estimate X_ of the current-round system state X(k) and its covariance P_ from the previous-round state X(k-1):
X_ = A * X(k-1);
P_ = A * P(k-1) * A' + Q, where P(k-1) is the covariance matrix of the round-(k-1) system state and Q is the process-noise covariance matrix; obtaining the number of vehicles in the near-end area and the brightness of the far-end area image as the current-round measurement Z(k); calculating the Kalman gain Kg = P_ * H' / (H * P_ * H' + R), where H' denotes the transpose of H, "/" denotes matrix division, and R is the covariance matrix of the measurement errors; and updating the current-round system state X(k) and its covariance P(k):
X(k) = X_ + Kg * (Z(k) - H * X_);
P(k) = (I - Kg * H) * P_, where I is the identity matrix.
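A minimal sketch of this iteration in Python/NumPy, assuming the reconstructed A and H above and illustrative dt, f1', Q, and R values (all assumptions, not values from the patent):

```python
import numpy as np

dt = 1.0    # assumed interval between Kalman iterations (seconds)
f1p = 0.8   # assumed brightness-per-vehicle parameter f1'

# State X = [near count, far count, outflow rate, far-to-near rate, inflow rate]
A = np.array([[1, 0, -dt,  dt,  0],
              [0, 1,   0, -dt, dt],
              [0, 0,   1,   0,  0],
              [0, 0,   0,   1,  0],
              [0, 0,   0,   0,  1]], dtype=float)
# Measurement Z = [near-end count, far-end image (relative) brightness]
H = np.array([[1.0, 0.0, 0, 0, 0],
              [0.0, f1p, 0, 0, 0]])

Q = np.eye(5) * 0.01       # assumed process-noise covariance
R = np.diag([0.5, 2.0])    # assumed measurement-noise covariance

def kalman_step(X, P, Z):
    """One predict/update round as described in the twentieth method."""
    X_ = A @ X                          # prior state estimate
    P_ = A @ P @ A.T + Q                # prior covariance
    S = H @ P_ @ H.T + R                # innovation covariance
    Kg = P_ @ H.T @ np.linalg.inv(S)    # Kalman gain
    X = X_ + Kg @ (Z - H @ X_)          # corrected state
    P = (np.eye(5) - Kg @ H) @ P_       # corrected covariance
    return X, P

X, P = np.zeros(5), np.eye(5)           # initial state and covariance
# Example round: 12 vehicles counted near, far-end relative brightness 9.6
X, P = kalman_step(X, P, np.array([12.0, 9.6]))
print(X[:2])                            # estimated near- and far-end counts
```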
According to one of the twelfth to twentieth methods of the first aspect, there is provided a twenty-first method of estimating the number of vehicles in a designated road area, further comprising: calculating a parameter from the image of the near-end area, wherein every N iterations of the Kalman filter the parameter f1' is updated with the parameter calculated from the near-end image; or, after every N iterations, the parameter f1' currently used in the Kalman filter is updated with the calculated parameter if the difference between the two exceeds a threshold.
According to the twenty-first method of the first aspect, there is provided a twenty-second method of estimating the number of vehicles in a designated road area, further comprising: calculating a parameter f1D' from the image of the near-end area, wherein every N iterations of the Kalman filter the parameter f1D is updated with f1D'; or, after every N iterations, the parameter f1D currently used in the Kalman filter is updated with f1D' if the difference between the two exceeds a threshold.
According to the first or second method of the first aspect, there is provided a twenty-third method of estimating the number of vehicles in a designated road area, further comprising: processing the brightness of the far-end area image with a neural network comprising multiple convolutional layers and fully connected layers to obtain the number of vehicles in the far-end area; and processing the number of vehicles in the near-end area and the number of vehicles in the far-end area with a neural network comprising multiple stateful long short-term memory (LSTM) layers and a fully connected layer to obtain the number of vehicles in the designated road area.
According to the twenty-third method of the first aspect, there is provided a twenty-fourth method of estimating the number of vehicles in a designated road area, further comprising: processing the brightness of the far-end area image together with the image of the near-end area using a neural network comprising multiple convolutional layers and fully connected layers to calculate the number of vehicles in the far-end area.
According to one of the first to twenty-fourth methods of the first aspect, there is provided a twenty-fifth method of estimating the number of vehicles in a designated road area, further comprising: preprocessing the far-end area image to label the road area, the near-end area, and/or the far-end area in the image.
According to one of the first to twenty-fifth methods of the first aspect, there is provided a twenty-sixth method of estimating the number of vehicles in a designated road area, wherein: the near-end area and the far-end area each comprise a positive integer number of lanes, and the lanes included in the near-end area are the same as the lanes included in the far-end area; and each lane included in the near-end area and the far-end area covers the full lane width.
According to a second aspect of the present application, there is provided a first method of estimating the number of vehicles in a designated road area, the method comprising: acquiring the brightness of the image of the designated road area; and estimating the number of vehicles in the designated road area from that brightness.
According to a third aspect of the present application, there is provided a first method of estimating the number of vehicles in a designated road area, wherein the designated road area includes a near-end area and a far-end area, the method comprising: acquiring the number of vehicles in the near-end area and an image of the far-end area; processing the image of the far-end area with a multilayer convolutional neural network to obtain the number of vehicles in the far-end area; and processing the number of vehicles in the near-end area and the number of vehicles in the far-end area either with a neural network comprising multiple stateful long short-term memory layers and fully connected layers, or with a neural network comprising a self-attention network and a fully connected layer, to obtain the number of vehicles in the designated road area.
According to the first method of the third aspect, there is provided a second method of estimating the number of vehicles in a designated road area, wherein the number of vehicles in the near-end area is acquired, an image of the far-end area is acquired, and the images of the far-end area and the near-end area are processed with a neural network comprising multiple convolutional layers and fully connected layers to obtain the number of vehicles in the far-end area.
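A minimal PyTorch sketch of the pipeline named in the third aspect: a small CNN regresses the far-end count from the far-end image, and a stateful LSTM fuses the near- and far-end counts over time. All layer sizes, the 1×64×256 image shape, and the one-step-per-call sequencing are illustrative assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

class FarEndCounter(nn.Module):
    """Multilayer CNN mapping a far-end road image to a vehicle count."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, 1)

    def forward(self, img):                        # img: (batch, 1, H, W)
        return self.fc(self.conv(img).flatten(1))  # (batch, 1) count

class RoadCounter(nn.Module):
    """Stateful LSTM + FC fusing near/far counts across time steps."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=32,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(32, 1)
        self.state = None   # hidden state kept across calls (inference sketch;
                            # detach it before using this in training)

    def forward(self, near_cnt, far_cnt):  # each: (batch,) for one time step
        seq = torch.stack([near_cnt, far_cnt], dim=-1).unsqueeze(1)  # (batch, 1, 2)
        out, self.state = self.lstm(seq, self.state)
        return self.fc(out[:, -1])         # (batch, 1) total count

far_model, fuser = FarEndCounter(), RoadCounter()
img = torch.rand(1, 1, 64, 256)            # assumed far-end image tensor
far_cnt = far_model(img).squeeze(1)
total = fuser(torch.tensor([12.0]), far_cnt)
```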
According to a fourth aspect of the present application, there is provided an information processing apparatus comprising a memory, a processor, and a program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements one of the methods of estimating the number of vehicles in a designated road area according to the first, second, or third aspect of the present application.
Drawings
The application, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates a schematic diagram of a vehicle number identification system according to an embodiment of the present application;
FIG. 2A illustrates a schematic view of a vehicular roadway according to an embodiment of the present application;
FIG. 2B illustrates the image of the vehicular roadway of FIG. 2A captured by a camera;
FIG. 2C illustrates a schematic view of a roadway without vehicles according to an embodiment of the present application;
FIGS. 3A-3C illustrate system states for Kalman filtering in accordance with embodiments of the present application;
FIG. 4 illustrates a flow chart of a method of calculating the total number of vehicles in a target road area by fusing, with Kalman filtering, the measured near-end vehicle count nearCnt and the brightness Lm of the far-end area, according to an embodiment of the present application;
FIG. 5 illustrates a graph of estimated road zone vehicle number over time according to an embodiment of the present application; and
FIG. 6 illustrates a block diagram of estimating a road region vehicle number using a deep neural network according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
Fig. 1 shows a schematic diagram of a vehicle number identification system according to an embodiment of the application.
Fig. 1 shows a road 120 and a post 100 located alongside it. The post 100 is, for example, the mast of a traffic light at an intersection, or a gantry spanning or partially spanning the roadway. The camera 102 is secured to the post 100, for example to a cross bar at its top end. The region of the roadway 120 in which the number of vehicles is to be identified includes a proximal region 122 and a distal region 124, divided by distance from the post 100 or camera 102; the distal region 124 is farther from the post 100 than the proximal region 122. By way of example, vehicles on the roadway 120 approach the post 100 from the distal region 124 toward the proximal region 122, then exit the proximal region to leave the region in which the number of vehicles is to be identified. In the example of fig. 1, the proximal region 122 and the distal region 124 partially overlap. According to embodiments of the present application, the two regions may alternatively be adjacent or non-adjacent (with a gap between them).
The camera 102 includes, for example, a plurality of cameras which capture video or images of the proximal region 122 and the distal region 124. For example, the camera 102 includes a telephoto camera for photographing the distal region 124 and an ordinary camera for photographing the proximal region 122.
As yet another example, the camera 102 includes a plurality of identical or different cameras: one or more cameras acquire images of the proximal region 122, while one or more others acquire images of the distal region 124.
As yet another example, the camera 102 includes one or more infrared or thermal imaging cameras (hereinafter collectively referred to as infrared cameras) for acquiring infrared images, intensity maps, or temperature maps (hereinafter collectively referred to as infrared images) of the proximal region 122 and/or the distal region 124, respectively. When the color of the road is close to that of the vehicle, the infrared image collected by the infrared camera can more effectively distinguish the road from the vehicle on the road.
The vehicle number identification system according to the embodiment of the present application further includes, for example, an information processing apparatus (not shown) and/or a communication device (not shown). The information processing apparatus acquires the images of the near-end region 122 and/or the far-end region 124 captured by the camera 102, extracts image features, and implements a method of identifying the number of vehicles on a road (described in detail later) provided according to an embodiment of the present application, to obtain the number of vehicles within the region of the road 120 to be counted. The information processing apparatus is, for example, a computer, a server, or an embedded computing device. The communication device couples the information processing apparatus and/or the camera 102 to a network or the Internet, so the method of identifying the number of road vehicles provided by embodiments of the present application may also be implemented on a cloud computing platform.
For the near-end region 122, the prior art can accurately identify each vehicle in the near-end region 122 and obtain the number of vehicles (denoted as nearCnt) by using the image acquired by the camera 102.
For the distal region 124, even if a clear image can be obtained with a telephoto camera and a complete image of the distal region 124 is obtained by, for example, stitching images from multiple telephoto cameras, the limited height of the camera 102 means that vehicles in the distal region 124 closer to the camera will occlude vehicles behind them (farther from the camera 102). The occluded vehicles then occupy few pixels in the image of the distal region 124, and prior-art image recognition has difficulty accurately counting the vehicles in that image.
According to an embodiment of the present application, the number of vehicles in a road designation area is estimated using a method based on image brightness estimation. The specified area of a road is, for example, a road area entering a traffic intersection, and a traffic light located at the traffic intersection controls the passage of vehicles within the road area, so that the number of vehicles within the road area is influenced by the control strategy of the traffic light.
In the real world, objects including roads and vehicles radiate (by emitting light, reflecting light, and/or by virtue of their temperature). The radiant flux emitted per unit surface area and per unit solid angle perpendicular to the direction of propagation is called radiance. Illuminance or brightness (hereinafter collectively, brightness, for simplicity) describes the amount of light radiation as perceived by the human eye. A camera acquiring an image effectively measures the light radiation of the various regions of the real world. The brightness of an image, or of its pixels, can be expressed in various ways: for example, the gray value of a grayscale image, and each color model has a brightness component. In the present application, the measurement of infrared light in an image captured by an infrared camera is also referred to as brightness.
The inventors realized that vehicles on a road are in motion, and the heat generated by their engines makes their radiation differ significantly from the road's. The number, or density, of vehicles in turn directly influences the radiation of the road area where vehicles are present. This radiation difference is useful for identifying the number of vehicles on the road. Further, the number of vehicles can be identified from brightness measurements of an image obtained by a camera or other imaging device, or from the reflection of the road's radiant energy.
FIG. 2A shows a schematic view of a roadway with vehicles according to an embodiment of the present application.
Referring to fig. 2A, there are a plurality of vehicles (240, 242) on a road 220 of the real world. In fig. 2A, rectangular boxes (e.g., rectangular boxes 240, 242) represent vehicles. Vehicles (240, 242) occupy a portion of the area of roadway 220.
The average radiance of the area of the real-world road 220 where vehicles are present is denoted LmV. It is understood that LmV likewise denotes the average radiance within a designated area (e.g., the near-end area and/or far-end area) of the road 220 in which the number of vehicles is to be measured.
The vehicles on the real-world road 220 have an average length (denoted LenV) and are separated by gaps, with an average spacing (denoted Gap). It will be appreciated that LenV and Gap may likewise denote the average length and average spacing of vehicles within a designated area (e.g., the near-end area and/or far-end area) of the road 220.
FIG. 2B shows the image of the vehicular roadway of FIG. 2A captured by a camera. The image of fig. 2B is acquired by, for example, camera 102 of fig. 1.
The shape of the image of the road (and the vehicle thereon) shown in fig. 2B is different from the road (and the vehicle thereon) shown in fig. 2A due to the presence of perspective and lens distortion. Also shown in fig. 2B are vehicles 240 and 242 of fig. 2A.
The camera 102 is typically fixed, or captures road images at a limited or known angle. With the pose and parameters of the camera 102 known, each real-world plane captured by the camera 102 corresponds to a transformation matrix that converts world coordinates of points on that plane to uniquely determined coordinates in the captured image. The road 220 lies in a real-world plane and thus has a corresponding transformation matrix. The mapping between world coordinates in the real environment and pixel coordinates of the captured image is obtained through camera calibration.
According to the embodiment of the present application, the area of the target road is extracted from the image acquired by the camera 102, and the road area in the image is stretched to obtain a road image consistent with the shape of the real-world road. The stretching follows, approximately, the transformation matrix of the real-world plane in which the road lies. Because vehicles have height, i.e., a vehicle and the road are not in the same real-world plane, the shape of a vehicle in the stretched road image differs from the vehicle's real-world shape; this difference does not affect the implementation of the technical solution of the present application.
In the stretched road-area image taken by the camera 102, the brightness of each pixel serves as a measure of the radiance of the real-world area to which the pixel corresponds. Hereinafter, unless otherwise specified, the camera images used are stretched images: luminance information computed from the stretched image better reflects the measurement of real-world radiance.
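A minimal sketch of this stretching step with OpenCV, assuming four manually chosen pixel corners of the road region and a target bird's-eye image size (the file name, corner coordinates, and output size are illustrative assumptions):

```python
import cv2
import numpy as np

frame = cv2.imread("road.jpg")  # assumed camera frame

# Pixel corners of the road region in the image (assumed, from calibration)
src = np.float32([[420, 300], [860, 300], [1180, 720], [100, 720]])
# Corresponding corners of the stretched, road-shaped output image
dst_w, dst_h = 400, 1200        # assumed output size in pixels
dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])

# Homography for the road plane, then stretch the road region
M = cv2.getPerspectiveTransform(src, dst)
road = cv2.warpPerspective(frame, M, (dst_w, dst_h))

# Average brightness of the stretched road image as a measure of Lm
gray = cv2.cvtColor(road, cv2.COLOR_BGR2GRAY)
Lm = float(gray.mean())
```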
In the image of the road 220 acquired by the camera 102, for example, an image area corresponding to the vehicle 240 has brightness. It will be appreciated that averaging the luminance of all pixels in the image of the road 220 corresponding to, for example, the region of the vehicle 240 results in an average luminance of the region of the vehicle 240 of the image (reflecting the average radiance of the region of the vehicle 240 in the real world).
In some cases there are no vehicles on the road, or a partial area of the road is vehicle-free. FIG. 2C illustrates a schematic view of a road without vehicles according to an embodiment of the present application. In fig. 2C, there are no vehicles on the road 270, which therefore has a vehicle-free average radiance (denoted LmL). The average luminance of the image of the vehicle-free road 270 captured by the camera 102 serves as a measure of LmL. Note that in fig. 2C, LmL refers to the road 270 region only and excludes the radiance of regions outside the road (e.g., sidewalks, sky, buildings beside the road).
Thus, the average radiance (denoted Lm) of a certain area of the road is
Lm=(LmV*LenV+LmL*Gap)/(LenV+Gap) (1)
where LmV is the average radiance of the vehicles in the area, LenV is the average length of the vehicles in the area, LmL is the average radiance of the vehicle-free road in the area, and Gap is the average spacing between vehicles in the area. The number of vehicles (denoted VC) within a specified road length (denoted Dist) in the area is then
VC=Dist/(LenV+Gap) (2)
Thus, it is obtained from the equations (1) and (2)
Lm=(LmV-LmL)*LenV/Dist*VC+LmL=f1*VC+f2 (3)
wherein f1=(LmV-LmL)*LenV/Dist, f2=LmL (4)
It can be seen that the average radiance Lm of a road area is linear in the number of vehicles VC within it. More generally, for a road region composed of multiple such areas in which the number of vehicles is to be identified, the total radiance and the total number of vehicles are likewise approximately linearly related. The relationships between average road radiance and vehicle count expressed by equations (1) to (4) apply to both single-lane and multi-lane roads.
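A short check of the algebra behind equation (3): substituting Gap = Dist/VC - LenV from (2) into (1),

$$
Lm=\frac{LmV\cdot LenV+LmL\cdot Gap}{LenV+Gap}
=\frac{\left(LmV\cdot LenV+LmL\cdot\left(\tfrac{Dist}{VC}-LenV\right)\right)\cdot VC}{Dist}
=\frac{(LmV-LmL)\cdot LenV}{Dist}\cdot VC+LmL,
$$

using LenV + Gap = Dist/VC, which is exactly f1·VC + f2 with the definitions in (4).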
In one embodiment, the parameters f1 and f2 of the near-end and far-end areas are taken to be the same or approximately the same, as is the vehicle density of the two areas, so the number of vehicles in the far-end area can be estimated with parameters obtained in the near-end area. Further, the near-end and far-end areas are chosen in the same direction, or the same lane, of the road: in the same lane, traffic passes through the far-end and near-end areas in succession (or in the reverse order, depending on the direction of travel), so the respective parameters f1 and f2 agree to a higher degree between the two areas. Alternatively or additionally, the near-end and far-end areas are each chosen to include at least one lane, selected at the granularity of whole lanes, to avoid processing lanes of incomplete width in the near-end and/or far-end images.
According to the embodiment of the present application, the brightness of the road-area image captured by the camera is used as a measure or estimate of the radiance of the road area, so the number of vehicles in the road area is estimated from the brightness of the captured image. Note that, owing to distortion and perspective, the shape of the road region in the image differs from the real-world road region, so the image must be stretched into the same shape as the real-world road region. The stretching method has been described above; any prior-art or future stretching method may be applied, and the embodiments of the present application do not limit it. With stretching, the luminance/average luminance of the road-area image substitutes for the variables LmV, LmL, and Lm in equations (3) and (4). It will also be appreciated that the camera may be positioned so that the road-region image already has the same or substantially the same shape as the corresponding road region without stretching; in that case the brightness/average brightness taken from the unstretched image substitutes for LmV, LmL, and Lm.
From equations (3) and (4), VC = (Lm-f2)/f1, giving an estimate of the number of vehicles in a designated road area that includes the near-end and far-end areas.
It will be appreciated that, from the road-region image, besides obtaining the brightness LLmL of the designated image region as a stand-in for the vehicle-free road average radiance LmL, and LLm as a stand-in for the average radiance Lm of the designated road area including the near-end and far-end areas, the vehicle regions in the image may be identified by prior-art means (e.g., pattern recognition, machine learning) to obtain the average vehicle length LLenV in the image, the average vehicle spacing LGap in the image, the length LDist of the designated road area in the image, and the average brightness LLmV of the vehicle-region image as a stand-in for the average vehicle radiance LmV.
Thus LVC = LDist/(LLenV+LGap), where LVC is the number of vehicles within the image of the designated road area; and f1' = (LLmV-LLmL)*LLenV/LDist, f2' = LLmL, where LLmV is the average brightness of the vehicle images within the designated road area image, LLmL is the average brightness of the vehicle-free designated road area image, LLenV is the average vehicle length within the image, and LDist is the length of the designated road area image.
Thus LVC = (LLm-f2')/f1', where LLm is the average luminance of the designated road area image. The number of vehicles LVC in the designated road area is thereby obtained from its image (the average luminance LLm, and the parameters f1' and f2' obtained from it).
According to a further embodiment of the application, the number of vehicles and the parameters f1' and f2' are obtained by prior-art means for the near-end area, a measurement of the average radiance of the far-end area is obtained from the far-end image, and the number of vehicles in the far-end area is obtained from LVC = (LLm-f2')/f1' using the parameters f1' and f2' obtained from the near-end area. For example, the far-end vehicle count LVC is obtained from the parameters f1' and f2' derived from the near-end image and the average luminance LLm of the far-end image. The vehicle counts of the near-end and far-end areas are then added to obtain the vehicle count of the road.
According to yet another embodiment of the present application, f1' and f2' are found from LLm = f1'*LVC + f2' using values of LLm and LVC obtained from the images. For example, since the near-end image offers higher definition, a plurality of regions are divided from the near-end area, and corresponding LLm(i) and LVC(i) are obtained from the image of each region, where i indexes the divided regions. From the different values of i (the several divided regions), f1' and f2' are solved for or fitted. The obtained parameters are then applied to estimate the number of vehicles in the far-end area: LLm(far), the average brightness of the far-end area, is obtained from the far-end image, and the far-end vehicle count is LVC(far) = (LLm(far)-f2')/f1'.
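A minimal sketch of this fit, assuming example (LVC(i), LLm(i)) pairs measured from near-end sub-regions (all numbers are illustrative only):

```python
import numpy as np

# Per-sub-region measurements from the near-end image (assumed example data)
lvc = np.array([2, 4, 5, 7, 9], dtype=float)     # vehicle counts LVC(i)
llm = np.array([61.5, 70.8, 75.1, 85.3, 94.6])   # average brightness LLm(i)

# Fit LLm = f1' * LVC + f2' by least squares
f1p, f2p = np.polyfit(lvc, llm, deg=1)

# Apply to the far end: LVC(far) = (LLm(far) - f2') / f1'
llm_far = 88.0                                   # assumed far-end average brightness
lvc_far = (llm_far - f2p) / f1p
print(f"f1'={f1p:.2f}, f2'={f2p:.2f}, far-end count = {lvc_far:.1f}")
```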
In one or more of the above processes, LLm, LLmV, and LLmL each represent the average brightness of the image of the specified region. It will be appreciated that replacing the average luminance with the total luminance of the pixels of the area image also results in the required parameters f1 'and f 2', and hence an estimate of the number of vehicles in the area.
According to the embodiment of the present application, the average brightness of the road-area image captured by the camera is used to estimate the number of vehicles in the road area. Compared with vehicle identification based on feature analysis, average image brightness is easier to acquire and is less affected by vehicle occlusion in the far-end area, so its identification accuracy for the far-end vehicle count is better than that of feature-analysis-based identification.
The brightness of an image depends strongly on the colors of the objects in it, and vehicles on the road vary in color. For example, white (light-colored) vehicle regions in an image typically appear brighter than the road region, while black (dark-colored) vehicle regions have brightness close to the road's. The average brightness of a road area in the image may therefore be affected by the colors of the vehicles on it; the average brightness over white and black vehicle regions together may even approach that of the vehicle-free road, biasing the brightness-based vehicle count. According to yet another embodiment of the present application, the average brightness of the road-area image is calculated from the relative luminance of pixels, i.e., the absolute value of the difference between a pixel's luminance and the average luminance of the vehicle-free road area, instead of raw pixel luminance, to measure or estimate LmV, LmL, and/or Lm.
Images of the vehicle-free road area can be selected from the frames captured by the camera, or small vehicle-free patches can be selected from several frames containing vehicles and stitched into the required image of the complete vehicle-free road area. The brightness (average brightness) of this image is then obtained. The relative luminance of a pixel is obtained by subtracting the average brightness of the vehicle-free road image from the pixel's luminance and taking the absolute value or the square. The relative luminance is then averaged over all pixels of the road-area image to obtain the relative brightness of the road area, which is used to estimate LmV and/or Lm.
According to yet another embodiment of the application, noise that may be present in the image is further handled when calculating relative luminance. Foreign objects on a vehicle, people in the vehicle, or imperfections of the camera may introduce noise into the road-area image. To calculate the relative luminance of a pixel (relative to the average luminance of the vehicle-free road image), the mean or another statistic of the luminance over a small window around the pixel (its neighboring pixels) is used as that pixel's luminance, from which its relative luminance is then computed. For example, a 3x3, 4x4, or 8x8 window around the pixel is selected, and the mean luminance of all pixels in the window is used as the pixel's luminance.
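A minimal sketch of this windowed relative-luminance computation with NumPy/SciPy; the image array, the vehicle-free baseline value, and the 3x3 window are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

gray = np.random.rand(1200, 400) * 255  # stand-in for the stretched road image
lml = 92.0                              # assumed average brightness of the vehicle-free road

# Use the 3x3-window mean as each pixel's luminance to suppress noise
smoothed = uniform_filter(gray, size=3)

# Relative luminance: |pixel luminance - vehicle-free average|
rel = np.abs(smoothed - lml)

# Average relative brightness of the road area, used in place of Lm
lm_rel = float(rel.mean())
```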
When the average brightness of a road-area image is acquired under varying environments, the absolute brightness of the captured image may change with external lighting conditions. To avoid introducing large errors, according to one embodiment of the present application, the average brightness of the corresponding vehicle-free road image is obtained separately at different times or under different external conditions and used in calculating relative luminance. According to yet another embodiment, infrequently changing regions are marked in the image; such regions include areas other than the road, e.g., the sky and buildings on both sides. Traffic does not normally pass through these regions, so their image content/brightness does not change frequently with traffic flow. The average brightness of the infrequently changing region is then used in place of the average brightness of the vehicle-free road: the relative luminance of a pixel is the difference between the pixel's luminance and the average luminance of the infrequently changing region, where the pixel's luminance is its own value or a mean/statistic of the luminance of its neighboring pixels. The mean relative luminance over the pixels of an image area is then used as that area's average relative brightness.
As still another embodiment, since a running vehicle is generally significantly hotter than the road surface, in an infrared image obtained with an infrared camera the average relative brightness of a vehicle region is significantly higher than that of the road region, which improves the accuracy of identifying the number of vehicles on the road with the method described above.
The brightness of the image of the road area can be obtained in real time.
The luminance information used to estimate parameters such as LmV, LmL and LenV is obtained from the road image, and parameters such as Dist can likewise be estimated from the road image. Alternatively, when the camera is fixedly installed, the LenV and/or Dist parameters are known or pre-specified rather than recognized from the road image.
The parameters f1 and f2 used for calculating the number of road vehicles are thus estimated from road images, and can be identified from each road image. They reflect the relationship between the number of vehicles on the road and the brightness of the road image, so under similar external conditions, road images taken at different times can use the same or similar values of f1 and f2. For example, f1 and f2 for morning-peak road images can be obtained in a laboratory or offline and then applied to road images acquired in the field, the number of vehicles on the current road being obtained from the average brightness of the current morning-peak road image (taken as the average radiance Lm of the road area). It will be appreciated that parameters such as LmV, LmL and LenV then need not be acquired in the field; only the average brightness of the road image (as Lm) and the parameters f1 and f2 are needed to obtain the current vehicle count, as in the sketch below. For higher accuracy, the laboratory or offline values of f1 and f2 are also calibrated for different weather conditions (sunny, cloudy, rainy), time periods and the like, so that the applicable pair is selected in the field according to the current period and/or weather. Where accuracy demands are lower, a single pair or a few pairs of f1 and f2 can serve for many or all external conditions (time periods, weather conditions, etc.).
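By way of a sketch, applying offline-calibrated f1 and f2 in the field could look like the following, assuming Lm = (number of vehicles) * f1 + f2 as in the model above; the parameter table, condition keys and numeric values are illustrative, not calibrated values from the patent:

    # Hypothetical lookup of offline-calibrated parameters per condition.
    CALIBRATED = {
        # (time period, weather) -> (f1, f2), obtained in a laboratory/offline
        ("morning_peak", "sunny"):  (0.85, 42.0),
        ("morning_peak", "rainy"):  (0.60, 30.0),
        ("off_peak",     "cloudy"): (0.75, 35.0),
    }

    def vehicle_count_from_brightness(lm, period, weather):
        """Estimate the road vehicle number from the average brightness Lm."""
        f1, f2 = CALIBRATED[(period, weather)]
        return max(0.0, (lm - f2) / f1)   # invert Lm = count * f1 + f2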
Alternatively or additionally, the camera resolves the near-end area of the road sharply enough that sufficiently accurate brightness information for parameters such as LmV, LmL and LenV can be obtained from it. Accordingly, in road images acquired in the field, current values of parameters such as LmV, LmL and LenV are obtained from the image of the near-end region, and f1 and f2 are updated or corrected with these current values to obtain a more accurate road vehicle count. The parameters obtained from the near-end region also apply to the far-end region and to the entire road area.
During field operation, if the road is so busy that a vehicle-free road image (corresponding to the LmL parameter) cannot be acquired in real time, the real-time vehicle-free road brightness can be estimated statistically from the average brightness of an infrequently changing region (such as the sky or roadside buildings) in an image acquired in advance and the average brightness of the same region in the image acquired in real time. For example, the difference or ratio between the average brightness of the infrequently changing region and the vehicle-free road brightness can be considered essentially constant, so that the real-time vehicle-free road brightness is estimated from the difference or ratio obtained in advance and the real-time average brightness of the infrequently changing region, as sketched below.
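A sketch of that estimate, assuming the difference (or ratio) between the infrequently changing region and the vehicle-free road stays roughly constant; all names and the mode switch are illustrative:

    def estimate_vehicle_free_brightness(static_now, static_ref, road_free_ref,
                                         mode="ratio"):
        """static_now    -- current average brightness of the sky/buildings region
           static_ref    -- same region's average brightness in a reference image
           road_free_ref -- vehicle-free road brightness in the reference image"""
        if mode == "ratio":
            # road/static brightness ratio assumed constant under lighting changes
            return road_free_ref * (static_now / static_ref)
        # otherwise the road-minus-static difference is assumed constant
        return road_free_ref + (static_now - static_ref)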
In addition to obtaining the current road vehicle count from the measured average brightness of road images and the parameters f1 and f2, embodiments of the present application provide a method that uses Kalman filtering to fuse the number of vehicles in the near-end region (nearCnt) with the average image brightness of the far-end region into the total number of vehicles on the road.
FIGS. 3A-3C illustrate system states for Kalman filtering in accordance with an embodiment of the application.
In fig. 3A, the road whose vehicle count is to be measured is divided into a near-end region and a far-end region by distance from the camera. The system state comprises a plurality of parameters of the road region whose vehicle count is to be measured.
As an example, let the system state be X = [near-end vehicle number, far-end vehicle number, outgoing vehicle flow rate, far-end to near-end vehicle flow rate, incoming vehicle flow rate], where a "vehicle flow rate" is the number of vehicles passing a specified section per unit time, "incoming" means entering the far-end area toward the camera from beyond it, and "outgoing" means leaving the measured road area from the near-end region (away from the far-end region).
In the example of fig. 3A, "near-end vehicle number" is the number of vehicles in the near-end area of the road and "far-end vehicle number" the number in the far-end area; "outgoing vehicle flow rate" is the number of vehicles per unit time leaving the measured road area from the near-end region (away from the far-end region); "far-end to near-end vehicle flow rate" is the number of vehicles per unit time entering the near-end area from the far-end area, toward the camera; and "incoming vehicle flow rate" is the number of vehicles per unit time entering the far-end area from outside the image range. In fig. 3A, the arrow indicates the traveling direction of the vehicles in the road region whose vehicle count is measured.
It will be appreciated that, in addition to the system state X illustrated in fig. 3A, those skilled in the art will recognize other forms of describing the system state. For example, for the opposite lane of the road shown in fig. 3A, the driving direction of the vehicles is opposite to the arrow direction of fig. 3A; correspondingly, the system state is X' = [near-end vehicle number, far-end vehicle number, outgoing vehicle flow rate, near-end to far-end vehicle flow rate, incoming vehicle flow rate], where the "outgoing vehicle flow rate" is the number of vehicles per unit time leaving the measured road area from the far-end region, the "near-end to far-end vehicle flow rate" is the number of vehicles per unit time leaving the near-end region and entering the far-end region, and the "incoming vehicle flow rate" is the number of vehicles per unit time entering the near-end area toward the camera from outside the image range. The corresponding state transition matrix for Kalman filtering is

    A' =
    [ 1  0   0  -dt  dt ]
    [ 0  1  -dt  dt   0 ]
    [ 0  0   1   0    0 ]
    [ 0  0   0   1    0 ]
    [ 0  0   0   0    1 ]
According to embodiments of the present application, the division between the near-end and far-end regions need not be strict. In the example of fig. 3A, the two regions are adjacent without overlapping, and together they constitute the road region whose vehicle count is measured. In the example of fig. 3B, the two regions are neither adjacent nor overlapping, and their combined extent is smaller than the measured road region. In the example of fig. 3C, the two regions overlap each other, so that the sum of their areas exceeds the measured road region, while the area they jointly cover equals the measured road region.
Thus, according to embodiments of the present application, the division of the near-end and far-end regions need not be strict, which also relaxes the accuracy requirements on quantities such as nearCnt and/or Lm: the Kalman filter automatically corrects the model at run time.
The following describes a process for estimating the number of road vehicles using Kalman filtering according to an embodiment of the present application, taking the system state X shown in figs. 3A-3C as an example.
Let Z = [nearCnt, LLm] be the measurement of the system, where nearCnt is the number of vehicles in the near-end region, obtained with, for example, the prior art or a technique to be discovered in the future, and the road-area average radiance Lm is represented by the average radiance LLm of the far-end region, similarly obtained, through a mapping f from LLm to Lm: Lm = f(LLm). Alternatively, for simplicity, Lm is considered linearly related to LLm. Optionally, the near-end vehicle count is obtained by image recognition from an image of the near-end region acquired by the camera; it can also be obtained by means of, for example, lidar, a sensor network laid in the road, or identification of signals actively emitted by vehicles in the near-end area.
The model for predicting the system state X(k) from the previous round's system state X(k-1) is:

X(k) = A * X(k-1) + B*U + W(k),

where A is the state transition matrix of the system, B*U is the control term (for example, B*U = 0), and W(k) is process noise.
Referring again to fig. 3A, the components of the system state X satisfy the following relationships:

near-end vehicle number of round k = near-end vehicle number of round k-1 + dt * far-end to near-end vehicle flow rate of round k-1 - dt * outgoing vehicle flow rate of round k-1;

far-end vehicle number of round k = far-end vehicle number of round k-1 + dt * incoming vehicle flow rate of round k-1 - dt * far-end to near-end vehicle flow rate of round k-1;

and the outgoing, far-end to near-end and incoming vehicle flow rates are approximately considered to remain unchanged over a short period of time.
This yields the state transition matrix A:

    A =
    [ 1  0  -dt  dt   0 ]
    [ 0  1   0  -dt  dt ]
    [ 0  0   1   0    0 ]
    [ 0  0   0   1    0 ]
    [ 0  0   0   0    1 ]
Here dt is the time interval between two successive measurements (rounds k-1 and k), k being a positive integer.
Let Z(k) be the measurement of the system at round k; the relationship between the system state X(k) and Z(k) is

Z(k) = H * X(k) + V(k). Since the measured near-end vehicle count nearCnt corresponds to the near-end vehicle number of the system state X, and, as disclosed above, the average radiance represented by the measured value LLm satisfies Lm = (far-end vehicle number) * f1 + f2, it follows that
    H =
    [ 1   0   0  0  0 ]
    [ 0  f1'  0  0  0 ]
f2 is constant and does not affect the matrix H; V(k) is the measurement error, and the difference between f1' and f1 reflects the mapping between Lm and LLm. Let R be the covariance matrix (2 x 2) of the measurement errors, obtained by experimental methods and/or from empirical data.
Fig. 4 illustrates a flow chart of a method for calculating the total vehicle number of a target road region using Kalman filtering to fuse the measured near-end vehicle number nearCnt with the average brightness LLm of the far-end region, according to an embodiment of the present application.
During the system initialization phase, the initial parameters of the system are acquired (410): for example, the initial value X(0) of the system state X, the initial value of the parameter f1', and the parameters A, H, P, Q and R for Kalman filtering.
In step 420, the a priori estimate X_ of the current round's system state X(k) and its covariance P_ are obtained from the previous round's system state X(k-1):

X_ = A * X(k-1);

P_ = A * P(k-1) * A' + Q,

where P(k-1) is the covariance matrix (5 x 5) of the round-(k-1) system state, Q is the process noise covariance matrix (5 x 5), and A' denotes the transpose of A. P and Q are obtained by experiment and/or from empirical data.
At step 430, the measured near-end region vehicle number nearCnt and the far-end region average brightness LLm are obtained, in the manners described in detail above. For example, images of the near-end and far-end road regions are acquired, and nearCnt and the average brightness LLm of the far-end image are extracted from them.
At step 450, the parameter f1' is optionally updated. Since f1 = (LmV - LmL) * LenV / Dist, analogously f1' = (LLmV - LLmL) * LLenV / LDist, where LLmV is the average brightness of the image of the near-end road area with vehicles, LLmL is the average brightness of the image of the near-end road area without vehicles, LLenV is the average vehicle length in the near-end image, and LDist is the length of the near-end road image. The relevant quantities are extracted from the near-end image to update f1'. Typically the matrix H need not change across the iterations of the Kalman filter; in practice, however, the brightness of the road may shift significantly with illumination, weather and the like, so according to embodiments of the present application f1' is updated when appropriate. To keep the filter stable, f1' is not updated in every iteration. As one example, f1' is updated only after a specified number of iterations, e.g. every N iterations. As another example, f1'(k) is computed in every iteration but not immediately applied to the Kalman computation: only when an f1'(k1) obtained in some iteration differs from the f1' currently applied in the filter by more than a specified threshold does it replace the previous value. As yet another example, the two policies are combined: an f1'(k1) is acquired only after, say, N iterations have passed, and it replaces the current value only if it differs from the f1' previously applied in the filter by more than the specified threshold.
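One possible combination of the guarded update policies above, as a sketch; N_ITER and the relative threshold are illustrative tuning values, not from the patent:

    N_ITER = 50          # recompute f1' only every N Kalman iterations
    REL_THRESHOLD = 0.1  # adopt the new value only on a >10% change

    def maybe_update_f1(f1_current, k, near_img_params):
        """near_img_params = (LLmV, LLmL, LLenV, LDist) from the near-end image."""
        if k % N_ITER != 0:
            return f1_current
        llmv, llml, llenv, ldist = near_img_params
        f1_new = (llmv - llml) * llenv / ldist
        if abs(f1_new - f1_current) > REL_THRESHOLD * abs(f1_current):
            return f1_new    # change exceeds the threshold: adopt the new value
        return f1_current    # otherwise keep H stable across iterations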
In step 460, the Kalman gain Kg is calculated: Kg = P_ * H' / (H * P_ * H' + R), where H' denotes the transpose of the matrix H and "/" denotes matrix division, i.e. multiplication by the inverse of the divisor; R is the covariance matrix (2 x 2) of the measurement errors.
In step 470, the current round's system state X(k) and its covariance P(k) are updated:

X(k) = X_ + Kg * (Z(k) - H * X_);

P(k) = (I - Kg * H) * P_, where I is the 5 x 5 identity matrix.
After each iteration is completed, the near-end vehicle number in the system state X(k) is summed with the far-end vehicle number to obtain the total vehicle number of the target road for the current round (round k).
The process then returns to step 420 for the next iteration of the Kalman filter.
According to embodiments of the present application, the iteration of the Kalman filter runs continuously so as to track the current number of vehicles on the road, as in the sketch below.
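The whole loop of FIG. 4 can be sketched in a few lines of NumPy, under illustrative values for dt, Q, R, f1', f2' and the initial state; subtracting the constant f2' from LLm is one way to keep the measurement model linear, consistent with the remark above that f2 does not affect H:

    import numpy as np

    dt  = 1.0    # seconds between rounds k-1 and k
    f1p = 0.85   # f1', calibrated from the near-end region
    f2p = 42.0   # f2' (= LLmL), subtracted from LLm below

    A = np.array([[1, 0, -dt,  dt,   0],
                  [0, 1,   0, -dt,  dt],
                  [0, 0,   1,   0,   0],
                  [0, 0,   0,   1,   0],
                  [0, 0,   0,   0,   1]], dtype=float)
    H = np.array([[1,   0, 0, 0, 0],
                  [0, f1p, 0, 0, 0]], dtype=float)
    Q = np.eye(5) * 0.01       # process noise covariance (5x5)
    R = np.diag([0.5, 4.0])    # measurement error covariance (2x2)

    X = np.array([5.0, 20.0, 0.2, 0.2, 0.2])  # X(0): counts and flow rates
    P = np.eye(5)                             # initial state covariance

    def kalman_round(near_cnt, llm):
        """One round: predict (step 420), measure (430), update (460/470)."""
        global X, P
        X_ = A @ X                    # a priori state estimate
        P_ = A @ P @ A.T + Q          # a priori covariance
        Z  = np.array([near_cnt, llm - f2p])
        Kg = P_ @ H.T @ np.linalg.inv(H @ P_ @ H.T + R)   # Kalman gain
        X  = X_ + Kg @ (Z - H @ X_)
        P  = (np.eye(5) - Kg @ H) @ P_
        return X[0] + X[1]            # total vehicles = near count + far count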
FIG. 5 illustrates a graph of estimated road zone vehicle number over time according to an embodiment of the present application.
In fig. 5, the horizontal axis represents time (increasing to the right) and the vertical axis represents the number of vehicles. The solid line is the true vehicle count on the road; the open circles mark the count obtained directly from the measured values as nearCnt + (LLm - f2')/f1' (where f2' = LLmL); and the count obtained using the embodiment of the present application is also plotted. Clearly, the road vehicle counts produced by the Kalman filtering method lie closer to the true values, and many of the outliers marked by the open circles are filtered out.
According to a further embodiment of the application, let the measurement of the system be Z = [nearCnt, D], where nearCnt is the number of vehicles in the near-end region and D is the average relative brightness of the far-end region of the road, obtained using, for example, the prior art or a technique to be discovered in the future. Following the method described in connection with fig. 4, the total vehicle number of the road is obtained by fusing the near-end vehicle number with the average relative brightness of the far-end region.
According to yet another embodiment of the present application, let the measurement of the system be Z = [nearCnt, LLm, D], and fuse the three measured values (nearCnt, LLm and D) to obtain the total vehicle number of the road. Since Z(k) = H * X(k) + V(k), let

    H =
    [ 1    0   0  0  0 ]
    [ 0  f1L   0  0  0 ]
    [ 0  f1D   0  0  0 ]

to estimate the system state X(k) from the measurement Z, where f1L = (LLmV - LLmL) * LLenV / LDist and f1D = (LDV - LDL) * LLenV / LDist. Both f1L and f1D are parameters obtained from the near-end road area, with initial values obtained offline or from collected images or sensor data of the near-end road area. Here LLmV is the average brightness of a vehicle in the near-end region image, LLmL the average brightness of the road in that image when no vehicle is present, LLenV the average vehicle length in that image, LDist the length of the road in that image, LDV the average relative brightness of a vehicle in that image, and LDL the average relative brightness of the vehicle-free road in that image (theoretically equal to 0).
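Extending the earlier sketch to this three-component measurement only changes the measurement matrices; f1L, f1D and the noise levels below are, again, illustrative values:

    import numpy as np

    f1L, f1D = 0.85, 0.40   # from the near-end area, offline or from sensors

    H3 = np.array([[1,   0, 0, 0, 0],   # nearCnt observes the near-end count
                   [0, f1L, 0, 0, 0],   # LLm (less its constant) ~ f1L * far count
                   [0, f1D, 0, 0, 0]],  # D ~ f1D * far count, since LDL ~ 0
                  dtype=float)
    R3 = np.diag([0.5, 4.0, 2.0])       # 3x3 measurement error covariance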
FIG. 6 illustrates a block diagram of estimating a road region vehicle number using a deep neural network according to an embodiment of the present application.
The deep neural network 600 for estimating the number of vehicles in a road region according to an embodiment of the present application includes a sub-network 610 and a sub-network 620. The inputs to the deep neural network 600 include the far-end region image and the near-end region vehicle number, plus an optional near-end region image. The far-end image and the optional near-end image are the inputs to sub-network 610, which outputs the estimated far-end vehicle number and provides it to sub-network 620. The near-end vehicle number is a further input to sub-network 620, which outputs the estimated (total) vehicle number of the road region.
Optionally, the near-end region vehicle number is obtained from the near-end region image in a manner known in the art. The far-end and/or near-end region images may be preprocessed, for example by marking the road region, the far-end region and/or the near-end region in them.
Sub-network 610 is, for example, a multi-layer convolutional neural network comprising convolutional layers (CNN), pooling layers (not shown) and a fully connected layer (Dense) 615. The convolutional layers take the input of sub-network 610, and the fully connected layer 615 provides its output. Optionally, the near-end region image is also provided to sub-network 610, so that it learns the effect of the characteristics of the near-end image (e.g. brightness) on the far-end vehicle number estimated from the far-end image. Still alternatively, the average brightness or average relative brightness extracted from the near-end image is provided as an input to sub-network 610.
Sub-network 620 comprises a multi-layer stateful long short-term memory network (Stateful LSTM) and a fully connected layer (Dense) 625. The Stateful LSTM takes the input of sub-network 620, and the fully connected layer 625 provides its output. The Stateful LSTM is chosen mainly for the periodic fluctuation characteristic of traffic flow; it may also be replaced with a Self-Attention network.
During training, the two sub-networks 610 and 620 are first trained separately with manually annotated data. After each has initially stabilized during training, the two are combined into the deep neural network 600 for end-to-end training.
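A PyTorch sketch of this two-sub-network arrangement, assuming grayscale far-end images; the layer sizes, hidden width and single-step sequence handling are illustrative design choices, not the patent's reference architecture:

    import torch
    import torch.nn as nn

    class SubNet610(nn.Module):
        """CNN + Dense: far-end image -> estimated far-end vehicle number."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(8),
            )
            self.dense = nn.Linear(32 * 8 * 8, 1)   # fully connected layer 615

        def forward(self, img):                     # img: (batch, 1, H, W)
            return self.dense(self.features(img).flatten(1))

    class SubNet620(nn.Module):
        """Stateful LSTM + Dense: (near, far) count sequence -> total count."""
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(2, hidden, num_layers=2, batch_first=True)
            self.dense = nn.Linear(hidden, 1)       # fully connected layer 625
            self.state = None   # hidden state carried across calls ("stateful");
                                # reset to None when a new sequence starts, and
                                # detach it between training steps

        def forward(self, counts):                  # counts: (batch, seq, 2)
            out, self.state = self.lstm(counts, self.state)
            return self.dense(out[:, -1])

    class DeepNet600(nn.Module):
        """Combined network: far-end image + near-end count -> total count."""
        def __init__(self):
            super().__init__()
            self.far = SubNet610()
            self.fuse = SubNet620()

        def forward(self, far_img, near_cnt):       # near_cnt: float (batch,)
            far_cnt = self.far(far_img)             # (batch, 1)
            seq = torch.stack([near_cnt, far_cnt.squeeze(1)], -1).unsqueeze(1)
            return self.fuse(seq)                   # (batch, 1) total vehicles

In line with the text, SubNet610 and SubNet620 would first be trained separately (against far-end counts and total counts, respectively) and then fine-tuned jointly as DeepNet600.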
The methods provided by embodiments of the present application can be implemented by software, hardware, firmware, an FPGA (Field Programmable Gate Array), a single-chip microcomputer, a microprocessor, a microcontroller and/or an ASIC (Application Specific Integrated Circuit) of an information processing device deployed at an intersection, in a traffic light, or on a network.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
Many modifications and other embodiments of the application set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the application is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (10)

1. A method of estimating the number of vehicles in a road designated area, wherein the road designated area includes a near-end region and a far-end region, the method comprising:
acquiring the number of vehicles in the near-end region, and acquiring the brightness of the image of the far-end region;
calculating the number of vehicles in the far-end region according to the number of vehicles in the near-end region and the brightness of the image of the far-end region;
and estimating the number of vehicles in the road designated area according to the number of vehicles in the near-end region and the number of vehicles in the far-end region.
2. The method of claim 1, wherein
obtaining a parameter f1' according to the number of vehicles in one or more first area images obtained from the image of the near-end region and the brightness statistic of the one or more first area images;
and obtaining the number of vehicles in the far-end region according to the brightness statistic of the image of the far-end region and the parameter f1'.
3. The method of claim 1 or 2, further comprising:
acquiring an image including the far-end region, and stretching the far-end region portion of that image according to a world coordinate system to obtain an image of the far-end region whose shape is consistent with that of the far-end road region; and/or
acquiring an image including the near-end region, and stretching the near-end region portion of that image according to a world coordinate system to obtain an image of the near-end region whose shape is consistent with that of the near-end road region.
4. The method of claim 1 or 3, wherein:
the calculating the number of vehicles in the far-end region according to the number of vehicles in the near-end region and the brightness of the image of the far-end region comprises: calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the brightness of the image of the far-end region using a Kalman filtering method, wherein the number of vehicles in the near-end region and the brightness of the image of the far-end region serve as the measurement Z for the Kalman filtering method, and the system state for the Kalman filtering is X = [near-end vehicle number, far-end vehicle number, outgoing vehicle flow rate, far-end to near-end vehicle flow rate, incoming vehicle flow rate], wherein a vehicle flow rate is a number of vehicles passing the road designated area per unit time, the incoming vehicle flow rate is the number of vehicles entering the road designated area per unit time, the outgoing vehicle flow rate is the number of vehicles leaving the road designated area per unit time, and the far-end to near-end vehicle flow rate is the number of vehicles entering the near-end region from the far-end region per unit time; and wherein the state transition matrix for the Kalman filtering is

    A =
    [ 1  0  -dt  dt   0 ]
    [ 0  1   0  -dt  dt ]
    [ 0  0   1   0    0 ]
    [ 0  0   0   1    0 ]
    [ 0  0   0   0    1 ]

wherein dt is the time interval between two iterations of the Kalman filtering; and wherein Z = H * X + V,

    H =
    [ 1   0   0  0  0 ]
    [ 0  f1'  0  0  0 ]

wherein V is the measurement error and f1' is a specified parameter.
5. The method of claim 4, wherein
taking the number of vehicles in the near-end region, the average brightness of the image of the far-end region, and the average relative brightness of the image of the far-end region as the measurement Z for the Kalman filtering method, with

    H =
    [ 1    0   0  0  0 ]
    [ 0  f1'   0  0  0 ]
    [ 0  f1D   0  0  0 ]

wherein f1' and f1D are specified parameters.
6. The method of claim 4 or 5, wherein
the calculating the number of vehicles in the far-end region from the number of vehicles in the near-end region and the brightness of the image of the far-end region using a Kalman filtering method comprises:
obtaining the a priori estimate X_ of the current round's system state X(k) and its covariance P_ from the previous round's system state X(k-1):
X_ = A * X(k-1);
P_ = A * P(k-1) * A' + Q, wherein P(k-1) is the covariance matrix of the round-(k-1) system state, Q is the process noise covariance matrix, and A' denotes the transpose of A;
obtaining the number of vehicles in the near-end region and the brightness of the image of the far-end region as the measurement Z(k) of the current round;
calculating the Kalman gain Kg = P_ * H' / (H * P_ * H' + R), wherein H' denotes the transpose of the matrix H, "/" denotes matrix division, and R is the covariance matrix of the measurement errors;
updating the current round's system state X(k) and its covariance P(k):
X(k) = X_ + Kg * (Z(k) - H * X_);
P(k) = (I - Kg * H) * P_, wherein I is the identity matrix.
7. The method of claim 1, further comprising:
processing the brightness of the image of the far-end region using a neural network comprising a multi-layer convolutional neural network and a fully connected layer to obtain the number of vehicles in the far-end region;
and processing the number of vehicles in the near-end region and the number of vehicles in the far-end region using a neural network comprising a multi-layer stateful long short-term memory network and a fully connected layer to obtain the number of vehicles in the road designated area.
8. A method of estimating the number of vehicles in a road designated area, the method comprising:
acquiring the brightness of the image of the road designated area;
and estimating the number of vehicles in the road designated area according to the brightness of the image of the road designated area.
9. A method of estimating the number of vehicles in a road designated area, wherein the road designated area includes a near-end region and a far-end region, the method comprising:
acquiring the number of vehicles in the near-end region, and acquiring an image of the far-end region;
processing the image of the far-end region using a multi-layer convolutional neural network to obtain the number of vehicles in the far-end region;
and processing the number of vehicles in the near-end region and the number of vehicles in the far-end region using a neural network comprising a multi-layer stateful long short-term memory network and a fully connected layer, or a neural network comprising a self-attention network and a fully connected layer, to obtain the number of vehicles in the road designated area.
10. An information processing apparatus comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the method according to one of claims 1 to 9 when executing the program.
CN202110770253.7A 2021-07-07 2021-07-07 Road vehicle number identification method and device Active CN113506264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110770253.7A CN113506264B (en) 2021-07-07 2021-07-07 Road vehicle number identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110770253.7A CN113506264B (en) 2021-07-07 2021-07-07 Road vehicle number identification method and device

Publications (2)

Publication Number Publication Date
CN113506264A true CN113506264A (en) 2021-10-15
CN113506264B CN113506264B (en) 2023-08-29

Family

ID=78012047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110770253.7A Active CN113506264B (en) 2021-07-07 2021-07-07 Road vehicle number identification method and device

Country Status (1)

Country Link
CN (1) CN113506264B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404306A (en) * 1994-04-20 1995-04-04 Rockwell International Corporation Vehicular traffic monitoring system
JP2004020237A (en) * 2002-06-12 2004-01-22 Fuji Heavy Ind Ltd Vehicle control system
JP2007064894A (en) * 2005-09-01 2007-03-15 Fujitsu Ten Ltd Object detector, object detecting method, and object detection program
US20100208071A1 (en) * 2006-08-18 2010-08-19 Nec Corporation Vehicle detection device, vehicle detection method, and vehicle detection program
US20080094250A1 (en) * 2006-10-19 2008-04-24 David Myr Multi-objective optimization for real time traffic light control and navigation systems for urban saturated networks
JP2009186301A (en) * 2008-02-06 2009-08-20 Mazda Motor Corp Object detection device for vehicle
JP2010103810A (en) * 2008-10-24 2010-05-06 Ricoh Co Ltd In-vehicle monitoring apparatus
US20150254531A1 (en) * 2014-03-07 2015-09-10 Tata Consultancy Services Limited Multi range object detection device and method
CN103839415A (en) * 2014-03-19 2014-06-04 重庆攸亮科技有限公司 Traffic flow and occupation ratio information acquisition method based on road surface image feature identification
US20170220877A1 (en) * 2014-08-26 2017-08-03 Hitachi Automotive Systems, Ltd. Object detecting device
KR20160083619A (en) * 2014-12-31 2016-07-12 (주)베라시스 Vehicle Detection Method in ROI through Plural Detection Windows
CN105528891A (en) * 2016-01-13 2016-04-27 深圳市中盟科技有限公司 Traffic flow density detection method and system based on unmanned aerial vehicle monitoring
US20200410274A1 (en) * 2017-12-04 2020-12-31 Sony Corporation Image processing apparatus and image processing method
CN112562330A (en) * 2020-11-27 2021-03-26 深圳市综合交通运行指挥中心 Method and device for evaluating road operation index, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HUANSHENG SONG ET AL.: "Vision-based vehicle detection and counting system using deep learning in highway scenes", European Transport Research Review, pages 1-16 *
LIU Yadong et al.: "Front vehicle detection based on multi-scale edge and local entropy principles", Computer Technology and Development, vol. 18, no. 3, pages 200-202 *
ZENG Zhihong: "Lane detection and vehicle tracking on expressways", Acta Automatica Sinica, no. 3, pages 450-456 *

Also Published As

Publication number Publication date
CN113506264B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN111694010B (en) Roadside vehicle identification method based on fusion of vision and laser radar
KR100377067B1 (en) Method and apparatus for detecting object movement within an image sequence
CN109085823B (en) Automatic tracking driving method based on vision in park scene
KR101392294B1 (en) Video segmentation using statistical pixel modeling
CN109657581B (en) Urban rail transit gate traffic control method based on binocular camera behavior detection
CN102867416B (en) Vehicle part feature-based vehicle detection and tracking method
CN110569704A (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN109543600A (en) A kind of realization drivable region detection method and system and application
CN104951775A (en) Video technology based secure and smart recognition method for railway crossing protection zone
CN110660222A (en) Intelligent environment-friendly electronic snapshot system for black smoke vehicle on road
CN110956146B (en) Road background modeling method and device, electronic equipment and storage medium
CN108830880B (en) Video visibility detection early warning method and system suitable for expressway
CN110379168A (en) A kind of vehicular traffic information acquisition method based on Mask R-CNN
CN113408454B (en) Traffic target detection method, device, electronic equipment and detection system
KR100820952B1 (en) Detecting method at automatic police enforcement system of illegal-stopping and parking vehicle using single camera and system thereof
Raguraman et al. Intelligent drivable area detection system using camera and LiDAR sensor for autonomous vehicle
CN110414392A (en) A kind of determination method and device of obstacle distance
CN112233079B (en) Method and system for fusing images of multiple sensors
CN113506264B (en) Road vehicle number identification method and device
CN210515650U (en) Intelligent environment-friendly electronic snapshot system for black smoke vehicle on road
CN115457780B (en) Vehicle flow and velocity automatic measuring and calculating method and system based on priori knowledge set
CN114581748B (en) Multi-agent perception fusion system based on machine learning and implementation method thereof
Rachman et al. Camera Self-Calibration: Deep Learning from Driving Scenes
CN115984768A (en) Multi-target pedestrian real-time detection positioning method based on fixed monocular camera
Hanel et al. Iterative Calibration of a Vehicle Camera using Traffic Signs Detected by a Convolutional Neural Network.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant