Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a vehicle flow monitoring method according to an embodiment of the present invention. As shown in fig. 1, the vehicle flow monitoring method includes:
Step 101: extract detection marking lines from the video surveillance data collected in real time.
In the prior art, video surveillance data collected in real time is generally stored first and only called up or analyzed manually when video analysis is needed; this introduces a lag and prevents real-time monitoring of vehicle flow in a vehicle monitoring and analysis scenario. In the embodiment of the present invention, the video surveillance data collected in real time is basic information captured directly by the front-end video collection device and includes the detection marking line used for counting vehicle flow, so it can serve directly as basic resource data for subsequent real-time intelligent analysis.
In an embodiment of the present invention, the detection marking line in the video surveillance data may be a virtual (non-physical) marking line set on the video image of the video surveillance data. For example, monitoring personnel may set a virtual marking line on the video image of the video surveillance data collected in real time according to the monitoring requirement, so as to count the passing vehicle flow. The virtual marking line may be implemented in various ways; for example, a layer containing the virtual marking line may be created over the video image of the video surveillance data collected in real time, and when a vehicle displayed on the video image reaches the virtual marking line, the vehicle is deemed to have passed it. In another embodiment of the present invention, the detection marking line may also be a physical marking line preset in the monitoring scene corresponding to the video surveillance data. Such a physical marking line is already contained in the video image of the video surveillance data collected in real time and can be extracted directly from the video image using existing image object extraction techniques. The present invention does not limit the specific form of the detection marking line.
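The "vehicle reaches the virtual marking line" test above can be sketched as a segment-intersection check between the marking line and the vehicle's displacement between two frames. This is a minimal illustrative sketch, not the patented implementation; the coordinate convention (image-plane points as `(x, y)` tuples) is an assumption:

```python
def _orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p): which side of line pq the point r lies on."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def crosses_line(prev_pos, cur_pos, line_a, line_b):
    """True if a vehicle moving from prev_pos to cur_pos between two frames
    crosses the virtual detection marking line segment (line_a, line_b)."""
    # The segments intersect when each segment's endpoints lie on opposite
    # sides of the other segment.
    return (_orient(line_a, line_b, prev_pos) != _orient(line_a, line_b, cur_pos)
            and _orient(prev_pos, cur_pos, line_a) != _orient(prev_pos, cur_pos, line_b))
```

A tracker would call `crosses_line` once per vehicle per frame pair; a crossing event then increments the flow count described in step 102.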
In an embodiment of the present invention, the video surveillance data collected in real time may include surveillance video and one or more of the following: video collection location information and video collection time information. The surveillance video is the resource data from which vehicle feature information is subsequently extracted, while the video collection location and time information can serve as additional attribute information for determining when and where the current vehicle flow monitoring scene takes place.
In an embodiment of the present invention, the video surveillance data collected in real time may also be sent directly to monitoring personnel at the monitoring center for viewing, so that the monitoring personnel can supervise the automatic progress of the whole vehicle flow monitoring process.
Step 102: count, in real time, the vehicle flow information of vehicles passing the detection marking line in the video surveillance data.
Based on the extracted detection marking line, the vehicle flow count is incremented each time a vehicle passes the line. It should be understood that the vehicle flow information is not a single number: since the video surveillance data has a time axis, the vehicle flow information also changes as the time axis advances, i.e., it includes the number of vehicles passing the detection marking line at every time point or within every time period.
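The time-axis bookkeeping above can be sketched as counts kept per time bucket; the one-minute bucket size is an illustrative assumption, not specified by the embodiment:

```python
from collections import Counter

class FlowCounter:
    """Counts vehicles passing the detection marking line per time bucket,
    so the flow information carries the video's time axis."""

    def __init__(self, bucket_seconds=60):
        self.bucket_seconds = bucket_seconds
        self.counts = Counter()  # bucket index -> vehicles counted

    def record_pass(self, timestamp):
        """Register one vehicle crossing at `timestamp` (seconds on the video time axis)."""
        self.counts[int(timestamp // self.bucket_seconds)] += 1

    def flow_at(self, timestamp):
        """Vehicles counted in the bucket containing `timestamp`."""
        return self.counts[int(timestamp // self.bucket_seconds)]
```

With 60-second buckets, `flow_at` directly yields the vehicles/minute figure used by the control-policy examples in step 103.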
Step 103: and determining traffic flow control information of a monitoring scene corresponding to the video monitoring data according to the vehicle flow information counted in real time.
Specifically, the traffic flow control information may be determined based on a pre-trained model of machine descriptions of traffic flow control policies under different vehicle flow states. For example, if the maximum allowable vehicle flow in the current monitoring scene is 50 vehicles/minute and the vehicle flow obtained by real-time monitoring reaches 45 vehicles/minute, a light traffic flow control policy is suggested; when the vehicle flow obtained by real-time monitoring reaches 80 vehicles/minute, the traffic flow of the road section in the current monitoring scene has seriously exceeded its capacity, and a strict traffic flow control policy is suggested. It should be understood that the specific correspondence between the traffic flow control information and the vehicle flow information monitored in real time may be adjusted by the monitoring personnel according to actual needs; however the adjustment is made, the correspondence may be established based on a pre-trained model, and the present invention does not limit the specific correspondence.
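The flow-to-policy correspondence can be sketched as a simple threshold rule. The thresholds below (90% of capacity triggers a light policy, 160% a strict one) are reverse-engineered from the 45- and 80-vehicles/minute example and are illustrative assumptions only; the embodiment establishes this mapping with a pre-trained model:

```python
def control_policy(flow_per_minute, capacity=50):
    """Map a real-time vehicle flow to a suggested control policy level.
    Thresholds mirror the 45/80 vehicles-per-minute example: near capacity
    suggests a light policy, far beyond capacity a strict one."""
    if flow_per_minute >= capacity * 1.6:   # e.g. 80 with capacity 50
        return "strict"
    if flow_per_minute >= capacity * 0.9:   # e.g. 45 with capacity 50
        return "light"
    return "none"
```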
Fig. 2 is a schematic flow chart of a vehicle flow monitoring method according to an embodiment of the present invention. Unlike the vehicle flow monitoring method shown in fig. 1, the method shown in fig. 2 further includes:
Step 201: extract, in real time, the vehicle feature information of vehicles passing the detection marking line from the video surveillance data.
In an embodiment of the present invention, the vehicle feature information may include one or more of the following: vehicle brand, vehicle type, and vehicle travel speed, used to further identify the vehicle passing the detection marking line. In an embodiment of the present invention, the extraction of the vehicle feature information may be implemented based on a pre-trained model of vehicle feature information. For example, a model may be pre-trained in advance on the correspondence between vehicle types and vehicle shapes; when the vehicle in the surveillance video is recognized as small and without a cargo hopper, it can be determined to be a car based on the output of the pre-trained model.
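The small-vehicle-without-hopper example can be caricatured as a rule on shape attributes. This is a toy stand-in for the pre-trained classifier the embodiment actually uses; the bounding-box-area threshold and the `has_open_hopper` attribute are hypothetical names introduced for illustration:

```python
def classify_vehicle(bbox_area, has_open_hopper):
    """Toy stand-in for a pre-trained vehicle-type model: a small vehicle
    without an open cargo hopper is judged to be a car; one with a hopper,
    a truck. A real system would use a trained classifier on vehicle shape."""
    SMALL_AREA = 12000  # pixels; illustrative threshold, not from the patent
    if has_open_hopper:
        return "truck"
    if bbox_area < SMALL_AREA:
        return "car"
    return "unknown"
```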
It should be understood that a more diversified traffic control strategy can be implemented based on diversified vehicle feature information. For example, the speeds of all vehicles passing the detection marking line over a certain period may be averaged, and the average used, alongside the vehicle flow information, as a factor in determining the traffic flow control information. For another example, when the license plate numbers in the vehicle feature information include too many out-of-town plates and the vehicle flow information already shows that the passing capacity of the current monitoring scene is exceeded, a traffic control policy that diverts local vehicles and out-of-town vehicles separately can be adopted. As described above, the specific correspondence between the traffic flow control information, the vehicle flow information monitored in real time, and the vehicle feature information extracted in real time may also be adjusted by the monitoring personnel according to actual needs, and the present invention does not limit the specific correspondence.
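The out-of-town diversion example can be sketched as follows. The `"LOCAL"` plate prefix, the 50% out-of-town ratio, and the capacity figure are all hypothetical assumptions introduced for illustration; the embodiment leaves these choices to monitoring personnel:

```python
def suggest_diversion(plates, flow_per_minute, capacity=50, nonlocal_ratio=0.5):
    """Sketch of the diversion example: if out-of-town license plates dominate
    and the flow exceeds the scene's capacity, suggest splitting local and
    out-of-town traffic. Plates not starting with "LOCAL" count as out-of-town
    (a hypothetical convention for this sketch)."""
    nonlocal_count = sum(1 for p in plates if not p.startswith("LOCAL"))
    if (flow_per_minute > capacity
            and plates
            and nonlocal_count / len(plates) >= nonlocal_ratio):
        return "divert out-of-town vehicles"
    return "no diversion"
```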
Step 202: generate a semantic analysis result based on the vehicle flow information counted in real time and the vehicle feature information extracted in real time. For example, the semantic analysis result can be expressed as: the traffic flow of the lanes in the current monitoring scene is 45 vehicles/minute, and the average speed of the passing vehicles is 23 km/h.
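Generating such a semantic analysis result amounts to rendering the counted flow and extracted features into a human-readable statement. A minimal sketch, assuming only the two quantities from the example:

```python
def semantic_summary(flow_per_minute, avg_speed_kmh):
    """Render real-time flow and vehicle feature statistics as a semantic
    analysis result, matching the 45 vehicles/minute example."""
    return ("The traffic flow of the lanes in the current monitoring scene is "
            f"{flow_per_minute} vehicles/minute, and the average speed of the "
            f"passing vehicles is {avg_speed_kmh} km/h.")
```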
Step 203: determine, in real time, the traffic flow control information for the monitoring scene corresponding to the video surveillance data, based on the semantic analysis result and a pre-trained model of machine descriptions of the traffic flow control policies corresponding to different vehicle flow information and vehicle feature information. For example, based on the above semantic analysis result, the determined traffic flow control information may be: the current lane is congested, and traffic police are required on site for control.
In an embodiment of the present invention, the vehicle flow information counted in real time and the vehicle feature information extracted in real time can be saved. On receiving a query instruction that takes vehicle feature information, a video monitoring period, or vehicle flow information as its query condition, the system may retrieve: the vehicle flow information of vehicles passing the detection marking line over all video monitoring periods corresponding to the given vehicle feature information; or all vehicle feature information and vehicle flow information of vehicles passing the detection marking line corresponding to the given video monitoring period; or the vehicle feature information in the video monitoring periods corresponding to the given vehicle flow information.
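The three query directions above can be sketched as one filter over the saved records. The record schema (dicts with `vehicle_type`, `period`, and `flow` keys) is an assumption made for this sketch:

```python
def query_records(records, vehicle_type=None, period=None, min_flow=None):
    """Filter saved monitoring records by any combination of vehicle feature,
    monitoring period, or a flow threshold. Each record is a dict with
    'vehicle_type', 'period', and 'flow' keys (assumed schema)."""
    result = []
    for rec in records:
        if vehicle_type is not None and rec["vehicle_type"] != vehicle_type:
            continue
        if period is not None and rec["period"] != period:
            continue
        if min_flow is not None and rec["flow"] < min_flow:
            continue
        result.append(rec)
    return result
```

Querying by `vehicle_type` alone returns the flow over all periods for that feature; by `period` alone, all features and flow for that period; by `min_flow` alone, the features in periods matching that flow condition.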
Fig. 3 is a schematic flow chart of a vehicle flow monitoring method according to another embodiment of the present invention. Unlike the method shown in fig. 2, the vehicle flow monitoring method shown in fig. 3 further includes:
Step 301: intercept, from the video surveillance data, the regional video surveillance data of the region where the vehicle passing the detection marking line is located.
Step 302: extract, in real time, the vehicle feature information of the vehicle passing the detection marking line from the regional video surveillance data.
By intercepting the regional video surveillance data from the video surveillance data, the video data of regions irrelevant to the currently monitored vehicle passing the detection marking line is removed, which reduces the amount of computation needed to subsequently extract the vehicle feature information and lightens the load on hardware analysis resources.
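The interception itself is a rectangular crop of each frame around the vehicle region. A minimal sketch, using nested lists as a stand-in for an image frame so the example stays dependency-free:

```python
def crop_region(frame, x, y, w, h):
    """Crop the region where the vehicle crossing the detection marking line
    is located, so later feature extraction only processes these pixels.
    `frame` is a row-major 2-D pixel array (nested lists here for brevity);
    (x, y) is the top-left corner of the region, (w, h) its size."""
    return [row[x:x + w] for row in frame[y:y + h]]
```

In a real pipeline the same slice would be applied to a NumPy/OpenCV image array, and (x, y, w, h) would come from the vehicle's detected bounding box.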
It is noted that while, for simplicity of explanation, the methodologies of the present invention are shown and described as a series of acts, it is to be understood and appreciated that the claimed subject matter is not limited by the order of execution of the acts: some acts may occur in different orders or concurrently with other acts shown and described herein, and some acts may include sub-steps that may interleave in time. For example, in an embodiment of the present invention, as shown in fig. 4, after video surveillance data is collected in real time (step 401), a detection marking line in the video surveillance data is extracted (step 402); regional video surveillance data of the region of a vehicle passing the detection marking line is then intercepted from the video surveillance data (step 403), and vehicle feature information is extracted from the regional video surveillance data (step 404); a semantic analysis result is then generated based on the vehicle flow information counted in real time and the vehicle feature information extracted in real time (step 405); finally, the traffic flow control information of the monitoring scene corresponding to the video surveillance data is determined in real time based on the semantic analysis result and a pre-trained model of machine descriptions of the traffic flow control policies corresponding to different vehicle flow information and vehicle feature information (step 406). Moreover, not all illustrated acts may be required to implement a methodology in accordance with the appended claims, and the description of steps does not exclude the method comprising additional steps with additional effects. It should also be understood that method steps described in different embodiments or flows may be combined with or substituted for one another.
Fig. 5 is a schematic structural diagram of a vehicle flow monitoring system according to an embodiment of the present invention. As shown in fig. 5, the vehicle flow monitoring system 50 includes:
the environment sensing device 501 is configured to collect video monitoring data in real time;
a geographic marker analysis device 502 configured to extract a detection marker line from the video monitoring data collected in real time;
the statistical device, configured to count, in real time, the vehicle flow information of vehicles passing the detection marking line in the video surveillance data; and
the decision device 506, configured to determine the traffic flow control information of the monitoring scene corresponding to the video surveillance data according to the vehicle flow information counted by the statistical device in real time.
In an embodiment of the invention, the system further comprises:
and the detection marking setting device is configured to add a non-physical real marking on the video image of the video monitoring data.
In an embodiment of the invention, the system further comprises:
and the object characteristic analysis device 504 is configured to extract the vehicle characteristic information of the vehicle passing the detection marking from the video monitoring data in real time.
In an embodiment of the invention, the system further comprises:
an object region extraction means 503 configured to intercept, from the video surveillance data, region video surveillance data of a region where the vehicle passing through the detection reticle is located;
wherein the object characteristic analysis device 504 is further configured to extract vehicle characteristic information of the vehicle passing the detection marking from the regional video surveillance data in real time.
In an embodiment of the invention, the system further comprises:
a semantic analysis device 505 configured to generate a semantic analysis result based on the vehicle traffic information counted in real time and the vehicle feature information extracted in real time;
wherein the decision device 506 is further configured to: and determining the traffic flow control information of the monitoring scene corresponding to the video monitoring data in real time based on the semantic analysis result generated by the semantic analysis device 505 and the pre-training model of the machine description of traffic flow control in different traffic flow states.
In an embodiment of the present invention, the decision device 506 is further configured to: receive a query instruction that takes vehicle feature information, a video monitoring period, or vehicle flow information as its query condition; and retrieve the vehicle flow information of vehicles passing the detection marking line over all video monitoring periods corresponding to the vehicle feature information, or all vehicle feature information and vehicle flow information of vehicles passing the detection marking line corresponding to the video monitoring period, or the vehicle feature information in the video monitoring periods corresponding to the vehicle flow information.
Therefore, the vehicle flow monitoring system provided by the embodiment of the present invention is implemented on the basis of an intelligent information model of the surveillance video. The information flow in this model can be extracted in different layers, with certain dependency relationships between adjacent layers, as shown in fig. 6. In the process of collecting video surveillance data by the environment sensing device 501, the video and the on-site sensing data (such as sound, time, geographic position of the camera, temperature, weather, and camera pose) are stored in the environment sensing layer; this information comprises the basic elements of a monitoring scene provided by both traditional and intelligent video surveillance and provides necessary support for top-level decision making. In the front-end processing stage, the originally collected video surveillance data undergoes preliminary processing (including traditional preprocessing and front-end intelligent analysis based on statistical learning methods), and the results are stored in the feature layer, the geographic marker layer, and the object layer, corresponding respectively to the extraction of vehicle feature information by the object feature analysis device 504, the extraction of the detection marking line by the geographic marker analysis device 502, and the interception of regional video surveillance data from the originally collected data by the object region extraction device 503.
In the back-end processing stage, the above layers are combined according to different application requirements and analyzed using machine learning techniques, and the processing results are stored in the semantic layer, corresponding to the generation of semantic analysis results by the semantic analysis device 505. For the semantic analysis results in the semantic analysis device 505, a judgment model trained with machine learning techniques can give different decision suggestions for monitoring personnel to consult; meanwhile, monitoring personnel can also send instructions to the system to query corresponding content in the monitoring data. Both belong to the decision/understanding layer: the judgment model and the decision suggestions it produces fall under decision, while the observation process, in which a monitoring person sends an instruction to query a target with certain features in a certain event and the system interprets the instruction into a description conforming to the structural model so as to retrieve the collected data, falls under understanding.
It should be understood that when the vehicle flow monitoring system includes a front-end video collection device and a back-end video analysis device, the environment sensing apparatus 501 may be disposed in the front-end video collection device, while the geographic marker analysis device 502, the object region extraction device 503, the object feature analysis device 504, the semantic analysis device 505, and the decision device 506 may each be disposed in either the front-end video collection device or the back-end video analysis device. All devices in the vehicle flow monitoring system can perform their respective analysis and extraction functions and the layer-by-layer extraction of the information flow, so as to finally achieve the goal of semantic decision making. The present invention does not limit whether a given device in the vehicle flow monitoring system is disposed in the front-end video collection device or the back-end video analysis device.
The teachings of the present invention can also be implemented as a computer program product on a computer-readable storage medium, comprising computer program code which, when executed by a processor, enables the processor to implement a vehicle flow monitoring method according to the embodiments described herein. The computer storage medium may be any tangible medium, such as a floppy disk, a CD-ROM, a DVD, a hard drive, or even a network medium.
It should be understood that although one implementation form of the embodiments of the present invention described above may be a computer program product, the method or apparatus of the embodiments of the present invention may be implemented in software, hardware, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. It will be appreciated by those of ordinary skill in the art that the methods and apparatus described above may be implemented using computer executable instructions and/or embodied in processor control code, such code provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The methods and apparatus of the present invention may be implemented in hardware circuitry, such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, or in software for execution by various types of processors, or in a combination of hardware circuitry and software, such as firmware.
It should be understood that although several modules or units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to exemplary embodiments of the invention, the features and functions of two or more modules/units described above may be implemented in one module/unit, whereas the features and functions of one module/unit described above may be further divided and implemented by a plurality of modules/units. Furthermore, some of the modules/units described above may be omitted in some application scenarios; for example, the object region extraction device may be omitted when the computing power of the hardware resources is not a constraint.
It is also to be understood that, so as not to obscure the embodiments of the invention, this description has set forth only certain critical, though not necessarily essential, techniques and features, and may not have described some features that those skilled in the art are capable of implementing.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, and the like that are within the spirit and principle of the present invention are included in its scope.