CN113823095B - Method and device for determining traffic state, storage medium and electronic device

Method and device for determining traffic state, storage medium and electronic device

Info

Publication number
CN113823095B
Authority
CN
China
Prior art keywords
target
determining
area
sub
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111382155.2A
Other languages
Chinese (zh)
Other versions
CN113823095A (en)
Inventor
魏东东
陆晓栋
周永哲
吴忠人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202111382155.2A
Publication of CN113823095A
Application granted
Publication of CN113823095B
Current legal status: Active

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications

Abstract

Embodiments of the invention provide a method and a device for determining a traffic state, a storage medium and an electronic device. The method includes: acquiring a target image captured by a camera device shooting a target area, the target area being an area in which a target object travels; determining, based on the target image, a target displacement value of the target object within a predetermined period of time; and determining the traffic state of the target area based on the target displacement value. The invention solves the problems of low efficiency and high cost in determining the traffic state in the related art, thereby improving the efficiency of determining the traffic state and reducing its cost.

Description

Method and device for determining traffic state, storage medium and electronic device
Technical Field
The embodiment of the invention relates to the field of communication, in particular to a method and a device for determining a traffic state, a storage medium and an electronic device.
Background
With the rapid development of domestic infrastructure, a large number of highways, urban expressways, viaducts, tunnels, sea-crossing bridges and the like have been built in China, creating huge demands for traffic scene management and road operation and maintenance.
With the rapid growth in the number of motor vehicles, congestion frequently occurs on expressways. Congestion greatly reduces the traffic capacity of a road, easily causes traffic accidents, and leads to losses of personnel and property. In actual traffic management, on-site supervision is performed by police officers, traffic assistants and the like, and abnormal road behaviors are identified manually from road surveillance video, which consumes a large amount of police resources.
In the related art, determining the traffic state generally requires fusing data from multiple sources, and acquiring such data is costly.
Therefore, the related art suffers from low efficiency and high cost in determining the traffic state.
No effective solution to the above problems in the related art has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a method and a device for determining a traffic state, a storage medium and an electronic device, so as to at least solve the problems of low efficiency and high cost in determining the traffic state in the related art.
According to an embodiment of the present invention, there is provided a traffic state determination method including: acquiring a target image captured by a camera device shooting a target area, wherein the target area is an area in which a target object travels; determining a target displacement value of the target object within a predetermined period of time based on the target image; and determining a traffic status of the target area based on the target displacement value.
According to another embodiment of the present invention, there is provided a traffic state determination apparatus including: an acquisition module, configured to acquire a target image captured by a camera device shooting a target area, wherein the target area is an area in which a target object travels; a first determination module, configured to determine a target displacement value of the target object within a predetermined time based on the target image; and a second determination module, configured to determine a traffic status of the target area based on the target displacement value.
According to yet another embodiment of the invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a target image captured by the camera device shooting the target area is acquired, a target displacement value of the target object within the predetermined time is determined from the target image, and the traffic state of the target area is determined from the target displacement value. Because the traffic state of the target area can be determined from the target image captured by the camera device alone, without fusing other data, the problems of low efficiency and high cost in determining the traffic state in the related art are solved, improving the efficiency of determining the traffic state and reducing its cost.
Drawings
Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a method for determining a traffic state according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining traffic conditions according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a first target frame according to an exemplary embodiment of the present invention;
FIG. 4 is a schematic diagram of predetermined point locations in accordance with an exemplary embodiment of the present invention;
FIG. 5 is a flow chart of a method of determining traffic conditions in accordance with an embodiment of the present invention;
fig. 6 is a block diagram of a configuration of a traffic state determination apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the example of the method running on the mobile terminal, fig. 1 is a hardware structure block diagram of the mobile terminal of the method for determining a traffic state according to the embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the method for determining a traffic state in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a method for determining a traffic state is provided, and fig. 2 is a flowchart of a method for determining a traffic state according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring a target image acquired by shooting a target area by a camera device, wherein the target area is an area where a target object runs;
step S204, determining a target displacement value of the target object in a preset time based on the target image;
step S206, determining the traffic state of the target area based on the target displacement value.
In the above-described embodiments, the image pickup apparatus may be a traffic camera, for example a monitoring camera installed on an expressway. The target area is the area, within the shooting range of the image pickup apparatus, in which the target object travels, for example each lane of an expressway. The target object may be a vehicle. Expressway monitoring calls for a scheme that makes the most of existing data sources, is cheap to implement and is easy to roll out over large areas; existing methods require multiple data sources, so data acquisition is costly and ill-suited to this requirement. This embodiment exploits the facts that cameras have already been deployed along expressways over large areas and that computer vision technology is maturing, and detects and analyzes the surveillance video images directly, thereby meeting the detection requirement of the expressway.
In the above embodiment, after the target image is obtained, a video image coordinate system may be established, with the upper left corner of the target image as the origin, the positive X-axis direction running from the upper left corner to the upper right corner, and the positive Y-axis direction running from the upper left corner to the lower left corner. An area DetectRegion (the detection area, i.e., the target area) in which the expressway lies is drawn in the surveillance video of the image pickup apparatus, and this DetectRegion is used as the detection area for detecting vehicle targets; targets outside the DetectRegion are not detected or tracked.
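For concreteness, the following minimal sketch (in Python with OpenCV) shows how such a DetectRegion could be set up and how targets outside it could be skipped; the polygon coordinates, function names and the use of cv2.pointPolygonTest are illustrative assumptions, not specified by the patent:

```python
import numpy as np
import cv2

# Hypothetical DetectRegion polygon, in image (pixel) coordinates with the
# origin at the top-left corner, X to the right and Y downward.
DETECT_REGION = np.array([[100, 200], [1180, 200], [1180, 700], [100, 700]],
                         dtype=np.float32)

def in_detect_region(center, region=DETECT_REGION):
    """True if a detection-frame center lies inside (or on) the DetectRegion."""
    # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside
    return cv2.pointPolygonTest(region, (float(center[0]), float(center[1])), False) >= 0

# Targets whose frame center falls outside the region are not detected or tracked.
centers = [(640, 360), (30, 40)]                       # hypothetical frame centers
tracked = [c for c in centers if in_detect_region(c)]  # -> [(640, 360)]
```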
In the above-described embodiment, after the target image is acquired, the target displacement value of the target object within the predetermined time may be determined from the target image. The predetermined time may be 1 s, or the time needed to acquire N consecutive frames. The target displacement value within the predetermined time can be determined from the target image, an image acquired before the acquisition time of the target image, and the positions of the target object in the two images. For example, the image acquired 1 s before the current target image is determined, and the target displacement value is computed from that image and the target image. Alternatively, the number of frames the camera device acquires within the predetermined time, say N frames, is determined, and the target displacement value is computed from the target image and the image N frames earlier. The target displacement value may be a pixel displacement value.
Optionally, the execution body of the above steps may be a background processor or another device with similar processing capability, or a machine integrating at least an image acquisition device and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, and the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.
According to the invention, a target image captured by the camera device shooting the target area is acquired, a target displacement value of the target object within the predetermined time period is determined from the target image, and the traffic state of the target area is determined from the target displacement value. Because the traffic state of the target area can be determined from the target image captured by the camera device alone, without fusing other data, the problems of low efficiency and high cost in determining the traffic state in the related art are solved, improving the efficiency of determining the traffic state and reducing its cost.
In one exemplary embodiment, determining a target displacement value of the target object within a predetermined time based on the target image comprises: determining a first image acquired by the camera device, wherein a first acquisition time at which the first image is acquired precedes a second acquisition time at which the target image is acquired, and the first acquisition time and the second acquisition time are separated by the predetermined time; determining a first central point of a first target frame corresponding to the target object in the first image, wherein the first target frame is used for framing the target object; determining a second image, included in the target image, acquired at the end of the predetermined time period; determining a second central point of the first target frame corresponding to the target object in the second image; and determining the target displacement value based on the first central point and the second central point. In this embodiment, a first image acquired by the image capturing apparatus at a first acquisition time, the predetermined time before the second acquisition time at which the target image is acquired, may be obtained; the first central point of the first target frame corresponding to the target object in the first image is determined, and the second central point of the target frame corresponding to the target object in the target image is determined. The first target frame is a frame used to frame the target object, i.e., the target object is framed out by the target frame, for example outlined by a rectangular frame; a diagram of the first target frame can be seen in fig. 3. An image coordinate system can be established, the coordinates of the first central point and the second central point are determined, and the target displacement value is determined from the coordinate difference of the two central points. The target displacement value may be a pixel displacement value, and the traffic state may be determined from the per-second pixel displacement value DisplacementCurrent of the target object.
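A minimal sketch of this center-point displacement computation is given below; the (x, y, w, h) detection-frame format, the assumption that the same target has already been tracked across the two images, and the use of the Euclidean distance between the two center points are illustrative assumptions:

```python
def box_center(box):
    """Center (cx, cy) of a detection frame given as (x, y, w, h) in pixels."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def pixel_displacement(box_first, box_second):
    """Pixel displacement of the same tracked target between the two images
    separated by the predetermined time (e.g. 1 s or N frames)."""
    (x1, y1), (x2, y2) = box_center(box_first), box_center(box_second)
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5  # distance between the two centers
```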
In one exemplary embodiment, determining the traffic status of the target area based on the target displacement value comprises: determining that the target object is in a congested state if the target displacement value is less than or equal to a first threshold; determining that the target object is in a slow-moving state if the target displacement value is greater than the first threshold and less than or equal to a second threshold; determining that the target object is in a clear state if the target displacement value is greater than the second threshold, wherein the first threshold is less than the second threshold, and both thresholds are determined based on the height of the first target frame corresponding to the target object in the target image; determining a first number of objects in the congested state, a second number of objects in the slow-moving state, and a third number of objects in the clear state; and determining the traffic status of the target area based on the first number, the second number, and the third number. In this embodiment, if 0 ≤ DisplacementCurrent ≤ OverFlowThre × ObjHeight, i.e., the target displacement value is not less than zero and not greater than the first threshold, the target object is in the congested state. If OverFlowThre × ObjHeight < DisplacementCurrent ≤ AmbleThre × ObjHeight, the target object is in the slow-moving state. If AmbleThre × ObjHeight < DisplacementCurrent, the target object is in the normal driving state, i.e., the clear state. Here, DisplacementCurrent is the target displacement value of the target object within the predetermined time; OverFlowThre is a preset congestion threshold; ObjHeight is the height of the first target frame corresponding to the target object; and AmbleThre is a preset slow-moving (crawl) threshold.
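The per-target classification can be sketched as follows; the numeric values of OverFlowThre and AmbleThre are placeholders, since the patent only fixes the comparison structure against the detection-frame height ObjHeight:

```python
OVER_FLOW_THRE = 0.1   # placeholder congestion threshold (fraction of ObjHeight)
AMBLE_THRE = 0.5       # placeholder crawl threshold (fraction of ObjHeight)

def classify_target(displacement_current, obj_height,
                    over_flow_thre=OVER_FLOW_THRE, amble_thre=AMBLE_THRE):
    """Map DisplacementCurrent (pixels per second) to a per-target state,
    with both thresholds scaled by ObjHeight, the detection-frame height."""
    if displacement_current <= over_flow_thre * obj_height:
        return "congested"
    if displacement_current <= amble_thre * obj_height:
        return "slow"        # slow-moving / crawl state
    return "clear"           # normal driving state
```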
In the above embodiment, after the state of each target object is determined, the traffic state of the target area may be determined according to the numbers of objects in the different states.
In one exemplary embodiment, determining the traffic state based on the first number, the second number, and the third number comprises: determining a first driving direction area and a second driving direction area included in the target area based on the target image; determining a first sub-number of the first number located in the first driving direction area and a second sub-number of the first number located in the second driving direction area; determining a third sub-number of the second number located in the first driving direction area and a fourth sub-number of the second number located in the second driving direction area; determining a fifth sub-number of the third number located in the first driving direction area and a sixth sub-number of the third number located in the second driving direction area; determining the traffic state of the first driving direction area based on the first, third and fifth sub-numbers; and determining the traffic state of the second driving direction area based on the second, fourth and sixth sub-numbers. In this embodiment, the first driving direction area and the second driving direction area included in the target area may be determined from the target image; the two areas may be lanes in different driving directions. The numbers of objects in the different states in the first driving direction area and in the second driving direction area are determined respectively, and the traffic state of each area is determined from those numbers.
In the above-described embodiment, the first traveling direction area may be the coming (oncoming) road and the second traveling direction area may be the going (outgoing) road; for the coming road or the going road, the number of objects in the congested state, the number of objects in the slow-moving state, and the number of objects in the normal driving (i.e., clear) state are counted. Finally, the traffic state of the coming or going road is obtained from the relation among the congested number, the slow-moving number and the normal-driving number.
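One possible way to derive a per-direction traffic state from the three counts is sketched below; the majority/ratio decision rule is an assumption, as the patent only states that the area state follows from the relation among the counts:

```python
from collections import Counter

def area_state(states, congest_ratio=0.5, slow_ratio=0.5):
    """Traffic state of one driving-direction area from its per-target states."""
    counts = Counter(states)
    total = sum(counts.values())
    if total == 0:
        return "clear"
    if counts["congested"] / total >= congest_ratio:
        return "congested"
    if (counts["congested"] + counts["slow"]) / total >= slow_ratio:
        return "slow"
    return "clear"

print(area_state(["congested", "congested", "slow"]))  # coming road -> congested
print(area_state(["clear", "clear", "slow"]))          # going road  -> clear
```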
In one exemplary embodiment, determining the first and second traveling direction areas included in the target area based on the target image includes: acquiring a second image in which the target object appears in the target area for the first time; determining a third central point of the first target frame corresponding to the target object in the second image; determining a second central point of the first target frame corresponding to the target object in the target image; determining a difference in the coordinates of the second central point and the third central point; and determining the first driving direction area and the second driving direction area based on the difference. In this embodiment, while the target object travels within the target area, the difference between the Y value of the center-point position coordinate (PlaceCurrent) of the first target frame in the target image and the Y value of the target initial position (PlaceFrt) may be used to determine whether the target object is moving in the coming direction (head direction) or the going direction (tail direction); the moving direction of the target object then determines whether it is traveling on the coming road or on the going road, so that the first driving direction area and the second driving direction area are determined. The target initial position is the position of the third central point. All objects in the current frame can thus be divided into two classes according to road direction: 1. objects traveling on the coming road; 2. objects traveling on the going road.
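A minimal sketch of this direction test follows; which sign of the Y difference maps to the coming or the going road depends on the camera orientation, so the mapping below is assumed for illustration only:

```python
def travel_direction(place_frt, place_current):
    """Classify a target as driving on the 'coming' or the 'going' road from the
    Y difference between its initial and current frame-center positions
    (image Y grows downward, as in the coordinate system defined above)."""
    return "coming" if place_current[1] > place_frt[1] else "going"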
In one exemplary embodiment, after determining the traffic status of the target area based on the first number, the second number, and the third number, the method further comprises: determining a target driving area included in the target area; determining a frame identifier corresponding to the traffic state; and displaying the frame corresponding to the target driving area according to the frame identifier. In this embodiment, after the traffic state of the target area is determined, the target driving area included in the target area and the traffic state of the target driving area may be determined, a frame identifier corresponding to the traffic state is determined, and the frame corresponding to the target driving area is displayed according to the frame identifier. For example, when the traffic state is the congested state the frame identifier is red, when the traffic state is the slow-moving state the frame identifier is yellow, and when the traffic state is the clear state the frame identifier is green. Different traffic states can thus be represented by different colors, and the frame can be drawn directly from the frame identifier once the traffic state is determined, so that the traffic state is displayed intuitively.
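A sketch of the frame-identifier mapping and the drawing step is given below; the BGR values and the use of cv2.polylines are illustrative assumptions:

```python
import cv2
import numpy as np

# Hypothetical BGR frame identifiers (border colors) for the three traffic states
STATE_COLORS = {"congested": (0, 0, 255),    # red
                "slow":      (0, 255, 255),  # yellow (orange is used in steps S520)
                "clear":     (0, 255, 0)}    # green

def draw_target_area(frame, polygon, state):
    """Draw the target driving area with the border color given by its traffic state."""
    pts = np.asarray(polygon, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=True, color=STATE_COLORS[state], thickness=3)
```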
In one exemplary embodiment, determining the target travel area included in the target area includes: determining all second target frames used for framing objects within the target area at the second acquisition time; determining predetermined point locations included in the second target frames; performing convex hull processing on the predetermined point locations to obtain the target point locations on the convex hull; and determining the target driving area included in the target area based on the target point locations. In this embodiment, when the target driving area is determined, all second target frames framing objects within the target area at the second acquisition time at which the target image was acquired may be determined, together with the predetermined point locations included in each second target frame, such as the lower-left and lower-right corners of each target frame. Convex hull processing is performed on these predetermined point locations to obtain the target point locations on the convex hull, and the target driving area is determined from the target point locations. When the target travel area is determined, a target travel area in the first driving direction area and a target travel area in the second driving direction area may be determined respectively.
In the above embodiment, when the target object is a motor vehicle, the coordinates of the lower-left and lower-right corners of the bottom of the current circumscribed-rectangle detection frame of every motor vehicle target in the current frame on the coming road or the going road may be obtained, and the driving area containing all the targets on the coming or going road is then obtained by the mathematical method of convex hull solving. The convex hull processing steps are as follows:
(1) Place all the points in the two-dimensional coordinate system; the point with the smallest ordinate must be a point on the convex hull. A schematic diagram of the predetermined point locations can be seen in fig. 4; as shown in fig. 4, point P0 is the point with the smallest ordinate.
(2) Translate the coordinates of all points once, taking P0 as the origin.
(3) Sort the remaining points by polar angle about P0; the sorted result is P1, P2, P3, P4, P5, P6, P7, P8. From geometric knowledge, the first point P1 and the last point P8 of the result must be points on the convex hull.
From the above, the first point P0 and the second point P1 on the convex hull are known and are placed on the stack. Now the point following P1 in the result of step (3), i.e. P2, is taken as the current point. Next, the search for a third hull point begins:
(4) Connect P0 with the point at the top of the stack to obtain a line L, and check whether the current point lies to the right or to the left of line L. If it is to the right of the line, perform step (5); if it is on the line or to the left of the line, perform step (6).
(5) If it is to the right, the element at the top of the stack is not a point on the convex hull; pop the top element and return to step (4).
(6) The current point is a point on the convex hull; push it onto the stack and perform step (7).
(7) Check whether the current point is the last element of the result of step (3). If it is, the procedure ends; if it is not, take the point after the current point as the new current point and return to step (4).
After the target point locations are obtained, the target driving area is determined from the target point locations. The target driving area is then displayed in a color matching its traffic state, for example red for congestion, orange for slow moving, and green for clear.
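A minimal Graham-scan sketch consistent with steps (1)-(7) above is shown below; the corner coordinates are hypothetical, and in practice a library routine such as cv2.convexHull could be used instead:

```python
import math

def convex_hull(points):
    """Graham-scan sketch: take the point with the smallest ordinate as P0,
    sort the rest by polar angle about P0, then keep only points for which
    the scan never turns to the right of line L."""
    pts = sorted(set(points), key=lambda p: (p[1], p[0]))  # P0 = smallest ordinate
    p0, rest = pts[0], pts[1:]
    rest.sort(key=lambda p: math.atan2(p[1] - p0[1], p[0] - p0[0]))

    def cross(o, a, b):
        # 2-D cross product of (a - o) and (b - o); its sign gives the turn direction
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    stack = [p0]
    for p in rest:
        while len(stack) > 1 and cross(stack[-2], stack[-1], p) <= 0:
            stack.pop()   # top of stack is not a hull point (step (5))
        stack.append(p)   # current point is a hull point (step (6))
    return stack

# Hypothetical bottom-corner points of the detection frames on one road direction
corners = [(120, 640), (300, 620), (500, 610), (700, 615), (420, 500), (250, 520)]
target_points = convex_hull(corners)   # target point locations on the convex hull
```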
The method for determining traffic conditions is described below with reference to a specific embodiment, which is merely illustrative:
fig. 5 is a flowchart of a method for determining a traffic state according to an embodiment of the present invention, as shown in fig. 5, the method includes:
step S502, a video image coordinate system is established, the upper left corner of the image is taken as the origin, the positive direction of the X axis is from the upper left corner to the upper right corner, and the positive direction of the Y axis is from the upper left corner to the lower left corner.
In step S504, a DetectRegion (detection area) in which the expressway lies is drawn in the surveillance video, and the DetectRegion is used as the detection area for detecting motor vehicle targets (no detection or tracking is performed for targets outside the DetectRegion). The traffic-state threshold parameters are set: the congestion threshold (OverFlowThre) and the crawl threshold (AmbleThre).
In step S506, when a vehicle target first appears in the detection area DetectRegion, a vehicle detection model produces the circumscribed-rectangle detection frame of the vehicle target, and the coordinates of the center point of the detection frame are taken as the vehicle target's initial position (PlaceFrt).
In step S508, the pixel displacement value (DisplacementCurrent) of each motor vehicle target per second is calculated. If 0 ≤ DisplacementCurrent ≤ OverFlowThre × ObjHeight, the target is in the congested state; if OverFlowThre × ObjHeight < DisplacementCurrent ≤ AmbleThre × ObjHeight, the target is in the slow-moving state; if AmbleThre × ObjHeight < DisplacementCurrent, the target is in the normal driving state. (Notation: 1. DisplacementCurrent: pixel displacement value per second of each motor vehicle target; 2. OverFlowThre: congestion threshold; 3. ObjHeight: height of the target detection frame; 4. AmbleThre: crawl threshold.)
In step S510, while the motor vehicle target moves within the detection area, the difference between the Y value of the center-point position coordinate (PlaceCurrent) of the target's current circumscribed-rectangle detection frame in the current frame and the Y value of the target initial position (PlaceFrt) is used to judge whether the target is moving in the coming direction (head direction) or the going direction (tail direction); from its moving direction it is then judged whether the motor vehicle target is traveling on the coming road or on the going road. Based on this, all motor vehicle targets of the current frame are classified into two categories according to road direction: 1. motor vehicle targets traveling on the coming road; 2. motor vehicle targets traveling on the going road. That is, it is determined whether the Y value of PlaceCurrent is greater than the Y value of PlaceFrt; if yes, step S512 is executed, and if no, step S514 is executed.
In step S512, the target belongs to the coming road.
In step S514, the target belongs to the going road.
In step S516, the number of congested-state targets, the number of slow-moving-state targets and the number of normal-driving targets on the coming road or the going road are counted. Finally, the traffic state of the coming or going road is obtained from the relation among the congested target number, the slow-moving target number and the normal-driving target number.
In step S518, the coordinates of the lower-left and lower-right corners of the bottom of the current circumscribed-rectangle detection frame of every motor vehicle target in the current frame on the coming road or the going road are obtained, and the driving area containing all the targets on the coming or going road is then obtained by the mathematical method of convex hull solving.
In step S520, the obtained coming or going driving areas are displayed in different colors (for example, red represents congestion, orange represents slow moving, and green represents clear).
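Putting the pieces together, the per-frame flow of steps S506-S520 could be sketched as follows, reusing the helper functions sketched in the earlier embodiments; the tracks data structure and the underlying vehicle detector/tracker are assumed and are not specified by the patent:

```python
def process_frame(frame, tracks):
    """Per-frame sketch of steps S506-S520. Each entry of `tracks` is assumed to
    hold the target's initial center (place_frt), its detection frame one
    predetermined period ago (box_prev) and its current frame (box_curr)."""
    states = {"coming": [], "going": []}
    corners = {"coming": [], "going": []}

    for track in tracks:
        box = track["box_curr"]                                   # (x, y, w, h)
        disp = pixel_displacement(track["box_prev"], box)         # step S508
        state = classify_target(disp, box[3])                     # height = ObjHeight
        direction = travel_direction(track["place_frt"], box_center(box))  # S510-S514
        states[direction].append(state)
        x, y, w, h = box
        corners[direction] += [(x, y + h), (x + w, y + h)]        # bottom corners, S518

    for direction in ("coming", "going"):                         # steps S516-S520
        if len(corners[direction]) >= 3:
            hull = convex_hull(corners[direction])
            draw_target_area(frame, hull, area_state(states[direction]))
    return frame
```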
In the foregoing embodiment, the motion state and the driving direction of the vehicle are determined by calculating the pixel displacement of the vehicle target in the image and the difference between the Y value of PlaceCurrent (the center point of the vehicle target's current circumscribed-rectangle detection frame in the current frame) and the Y value of the target initial position PlaceFrt (the center point of the detection frame at the vehicle target's initial position); the coming or going driving area is calculated by the convex hull solving method; and the calculated coming or going driving area is displayed in real time together with its corresponding state. With the development of computer vision, computer vision technology is rapidly entering the field of traffic monitoring; this not only saves police resources but also allows timely alarms for abnormal road events, avoiding major losses of personnel and property.
Applying computer vision technology to traffic road management can greatly raise the scientific, modern and information-based level of traffic management work, relieve police resources, strengthen and guarantee road traffic safety, and reduce road traffic accidents. In addition, the cost of acquiring data by computer vision is far lower than that of other data acquisition means, such as radar data, GPS data, or AMap (Gaode) or Baidu map data, which facilitates large-scale, large-area intelligent monitoring of traffic roads.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for determining a traffic state is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 6 is a block diagram showing a configuration of a traffic state determining apparatus according to an embodiment of the present invention, as shown in fig. 6, the apparatus including:
the acquisition module 62 is configured to acquire a target image acquired by shooting a target area by a camera device, where the target area is an area where a target object travels;
a first determination module 64 for determining a target displacement value of the target object within a predetermined time based on the target image;
a second determination module 66 for determining a traffic status of the target area based on the target displacement value.
In one exemplary embodiment, the first determination module 64 may determine the target displacement value of the target object within the predetermined time based on the target image by: determining a first image acquired by the camera device, wherein a first acquisition time at which the first image is acquired precedes a second acquisition time at which the target image is acquired, and the first acquisition time and the second acquisition time are separated by the predetermined time; determining a first central point of a first target frame corresponding to the target object in the first image, wherein the first target frame is used for framing the target object; determining a second central point of the first target frame corresponding to the target object in the target image; and determining the target displacement value based on the first central point and the second central point.
In an exemplary embodiment, the second determination module 66 may enable determining the traffic status of the target area based on the target displacement value by: determining that the target object is in a congestion state if the target displacement value is less than or equal to a first threshold; determining the slow-moving state of the target object under the condition that the target displacement value is larger than the first threshold and smaller than or equal to a second threshold; determining that the target object is in a clear state if the target displacement value is greater than the second threshold, wherein the first threshold is less than the second threshold, and the first threshold and the second threshold are both thresholds determined based on the height of a corresponding first target frame of the target object in the target image; determining a first number of objects in the congested state, a second number of objects in the crawl state, and a third number of objects in a clear state; determining the traffic status of the target area based on the first number, the second number, and the third number.
In an exemplary embodiment, the second determination module 66 may enable determining the traffic state based on the first number, the second number, and the third number by: determining a first driving direction area and a second driving direction area included in the target area based on the target image; determining a first sub-number included in the first number in the first driving direction zone and a second sub-number included in the second driving direction zone; determining a third sub-number in the first driving direction region and a fourth sub-number in the second driving direction region, which are included in the second number; determining a fifth sub-quantity in the first direction of travel region and a sixth sub-quantity in the second direction of travel region comprised in the third quantity; determining the traffic state of the first direction of travel region based on the first, third and fifth sub-numbers; determining the traffic state of the second driving direction zone based on the second sub-quantity, the fourth sub-quantity and the sixth sub-quantity.
In an exemplary embodiment, the second determination module 66 may enable determining the first direction of travel region and the second direction of travel region included in the target region based on the target image by: acquiring a second image of the target object appearing in the target area for the first time; determining a third central point of a corresponding first target frame of the target object in the second image; determining a second central point of the first target frame corresponding to the target object in the target image; determining a difference in coordinates of the second center point and the third center point; determining the first direction of travel region and the second direction of travel region based on the difference.
In one exemplary embodiment, the apparatus may be configured to determine a target travel area included in the target area after determining the traffic state of the target area based on the first number, the second number, and the third number; determining a frame identifier corresponding to the traffic state; and displaying a frame corresponding to the target driving area according to the frame identification.
In one exemplary embodiment, the apparatus may achieve the determination of the target travel area included in the target area by: determining all second target frames used for selecting objects in the target area at a second acquisition moment; determining a predetermined point position included in the second target frame; performing convex hull processing on the preset point location to obtain a target point location on a convex hull; and determining the target driving area included in the target area based on the target point position.
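For illustration only, the module structure of the apparatus in fig. 6 can be sketched as a thin wrapper around the helpers from the method embodiments above; the class and method names are hypothetical:

```python
class TrafficStateApparatus:
    """Structural sketch: the three methods stand in for the acquisition module
    and the first and second determination modules of fig. 6."""

    def acquire(self, camera_device):
        # acquisition module: obtain the target image of the target area
        ok, target_image = camera_device.read()   # e.g. a cv2.VideoCapture
        return target_image if ok else None

    def first_determine(self, box_prev, box_curr):
        # first determination module: displacement within the predetermined time
        return pixel_displacement(box_prev, box_curr)

    def second_determine(self, displacement, obj_height):
        # second determination module: traffic state from the displacement value
        return classify_target(displacement, obj_height)
```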
It should be noted that the above modules may be implemented by software or hardware; in the latter case they may be implemented in, but not limited to, the following ways: all the modules are located in the same processor, or the modules are located in different processors in any combination.
Embodiments of the present invention further provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method described in any of the above.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the modules or steps of the invention described above may be implemented on a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices; and they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by a computing device. In some cases the steps shown or described may be performed in an order different from that described herein, or the modules or steps may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for determining a traffic state, comprising:
acquiring a target image acquired by shooting a target area by a camera device, wherein the target area is an area where a target object runs;
determining a target displacement value of the target object within a predetermined time based on the target image;
determining a traffic status of the target area based on the target displacement value;
wherein determining the traffic status of the target area based on the target displacement value comprises: determining that the target object is in a congestion state if the target displacement value is less than or equal to a first threshold; determining the slow-moving state of the target object under the condition that the target displacement value is larger than the first threshold and smaller than or equal to a second threshold; determining that the target object is in a clear state if the target displacement value is greater than the second threshold, wherein the first threshold is less than the second threshold, and the first threshold and the second threshold are both thresholds determined based on the height of a corresponding first target frame of the target object in the target image; determining a first number of objects in the congested state, a second number of objects in the crawl state, and a third number of objects in a clear state; determining the traffic status of the target area based on the first number, the second number, and the third number;
determining the traffic state based on the first number, the second number, and the third number comprises: determining a first driving direction area and a second driving direction area included in the target area based on the target image; determining a first sub-number included in the first number in the first driving direction zone and a second sub-number included in the second driving direction zone; determining a third sub-number in the first driving direction region and a fourth sub-number in the second driving direction region, which are included in the second number; determining a fifth sub-number included in the third number and located in the first driving direction zone and a sixth sub-number included in the third number and located in the second driving direction zone; determining the traffic state of the first direction of travel region based on the first, third and fifth sub-numbers; determining the traffic state of the second driving direction zone based on the second sub-quantity, the fourth sub-quantity and the sixth sub-quantity.
2. The method of claim 1, wherein determining a target displacement value of the target object within a predetermined time based on the target image comprises:
determining a first image acquired by the camera device, wherein a first acquisition time at which the first image is acquired precedes a second acquisition time at which the target image is acquired, and the first acquisition time and the second acquisition time are separated by the predetermined time;
determining a first central point of a first target frame corresponding to the target object in the first image, wherein the first target frame is used for framing the target object;
determining a second central point of the first target frame corresponding to the target object in the target image;
determining the target displacement value based on the first center point and the second center point.
3. The method of claim 1, wherein determining a first direction of travel region and a second direction of travel region included in the target region based on the target image comprises:
acquiring a second image of the target object appearing in the target area for the first time;
determining a third central point of a corresponding first target frame of the target object in the second image;
determining a second central point of the first target frame corresponding to the target object in the target image;
determining a difference in coordinates of the second center point and the third center point;
determining the first direction of travel region and the second direction of travel region based on the difference.
4. The method of claim 1, wherein after determining the traffic status of the target area based on the first number, second number, and third number, the method further comprises:
determining a target driving area included in the target area;
determining a frame identifier corresponding to the traffic state;
and displaying the frame corresponding to the target driving area according to the frame identification.
5. The method according to claim 4, wherein determining a target travel area included in the target area comprises:
determining all second target frames used for selecting objects in the target area at a second acquisition moment;
determining a predetermined point position included in the second target frame;
performing convex hull processing on the preset point location to obtain a target point location on a convex hull;
and determining the target driving area included in the target area based on the target point position.
6. An apparatus for determining a traffic state, comprising:
an acquisition module, for acquiring a target image acquired by shooting a target area by a camera device, wherein the target area is an area where a target object runs;
a first determination module for determining a target displacement value of the target object within a predetermined time based on the target image;
a second determination module for determining a traffic state of the target area based on the target displacement value;
wherein the second determination module enables determining the traffic status of the target area based on the target displacement value by: determining that the target object is in a congestion state if the target displacement value is less than or equal to a first threshold; determining the slow-moving state of the target object under the condition that the target displacement value is larger than the first threshold and smaller than or equal to a second threshold; determining that the target object is in a clear state if the target displacement value is greater than the second threshold, wherein the first threshold is less than the second threshold, and the first threshold and the second threshold are both thresholds determined based on the height of a corresponding first target frame of the target object in the target image; determining a first number of objects in the congested state, a second number of objects in the crawl state, and a third number of objects in a clear state; determining the traffic status of the target area based on the first number, second number, and third number;
the second determination module enables determining the traffic status based on the first number, the second number, and the third number by: determining a first driving direction area and a second driving direction area included in the target area based on the target image; determining a first sub-number included in the first number in the first driving direction zone and a second sub-number included in the second driving direction zone; determining a third sub-number in the first driving direction region and a fourth sub-number in the second driving direction region, which are included in the second number; determining a fifth sub-number included in the third number and located in the first driving direction zone and a sixth sub-number included in the third number and located in the second driving direction zone; determining the traffic state of the first direction of travel region based on the first, third and fifth sub-numbers; determining the traffic status of the second direction of travel area based on the second sub-quantity, the fourth sub-quantity, and the sixth sub-quantity.
7. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, wherein the computer program, when being executed by a processor, carries out the steps of the method as claimed in any one of the claims 1 to 5.
8. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 5.
CN202111382155.2A 2021-11-22 2021-11-22 Method and device for determining traffic state, storage medium and electronic device Active CN113823095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111382155.2A CN113823095B (en) 2021-11-22 2021-11-22 Method and device for determining traffic state, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111382155.2A CN113823095B (en) 2021-11-22 2021-11-22 Method and device for determining traffic state, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN113823095A CN113823095A (en) 2021-12-21
CN113823095B (en) 2022-05-03

Family

ID=78917925

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111382155.2A Active CN113823095B (en) 2021-11-22 2021-11-22 Method and device for determining traffic state, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113823095B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139645A (en) * 2015-07-23 2015-12-09 合肥革绿信息科技有限公司 Urban regional road network operation index assessment method based on floating car technology
CN105336169A (en) * 2015-12-09 2016-02-17 青岛海信网络科技股份有限公司 Method and system for judging traffic jams based on videos
US20160210862A1 (en) * 2012-09-12 2016-07-21 Omron Corporation Data flow control order generating apparatus and sensor managing apparatus
CN109872530A (en) * 2017-12-05 2019-06-11 广州腾讯科技有限公司 A kind of generation method of traffic information, car-mounted terminal and server
CN110050300A (en) * 2017-11-13 2019-07-23 北京嘀嘀无限科技发展有限公司 Traffic congestion monitoring system and method
CN113240906A (en) * 2021-05-27 2021-08-10 山东产研信息与人工智能融合研究院有限公司 Vehicle guiding method and system based on real-time monitoring of road congestion in logistics park

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI717102B (en) * 2019-11-14 2021-01-21 黃玄 Traffic condition system for internet of vehicles based on image recognition


Also Published As

Publication number Publication date
CN113823095A (en) 2021-12-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant