CN111741267B - Method, device, equipment and medium for determining vehicle delay

Info

Publication number
CN111741267B (application CN202010591986.XA)
Authority
CN
China
Prior art keywords
composite image
vehicle
target vehicle
determining
composite
Prior art date
Legal status: Active
Application number
CN202010591986.XA
Other languages
Chinese (zh)
Other versions
CN111741267A (en
Inventor
周善存
刘永超
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010591986.XA
Publication of CN111741267A
Application granted
Publication of CN111741267B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a device, equipment and a medium for determining vehicle delay, which are used to solve the problem that the vehicle delay determined in the prior art is inaccurate. In the embodiment of the invention, images acquired simultaneously by at least two cameras of a multi-view camera are stitched into a composite image. When the distance between a target vehicle and a first identification position in the composite image reaches a set first distance threshold, that composite image is taken as the first composite image and its first acquisition time is acquired. From the composite images stitched before the first composite image, the second composite image, in which the distance between the target vehicle and a second identification position reaches a set second distance threshold, is determined and its second acquisition time is acquired. The vehicle delay of the target vehicle is then determined from the first acquisition time and the second acquisition time. In this way the vehicle passing data of the target vehicle in both directions can be acquired accurately, which improves the accuracy of the determined delay of the target vehicle.

Description

Method, device, equipment and medium for determining vehicle delay
Technical Field
The invention relates to the technical field of urban road traffic, in particular to a method, a device, equipment and a medium for determining vehicle delay.
Background
An intersection is a key node in an urban road traffic network, and the quality of traffic operation at intersections directly affects the service quality of the whole network. Vehicle delay at an intersection refers to the extra time a vehicle loses at a signal-controlled intersection because of the influence of the signal lamp. This parameter therefore reflects both how reasonably the control scheme of the intersection is designed and how much time drivers lose.
In the related art, vehicle delay at an intersection is determined in the following ways:
Method one: vehicle passing data are captured by an electronic police camera, vehicle trajectories are inferred probabilistically, the arrival and departure trajectories of vehicles are reconstructed, and the vehicle delay of each delayed vehicle is determined from its arrival and departure times. However, the vehicle trajectories inferred from the captured passing data are inaccurate, so the accuracy of the delay determined for each vehicle is low.
Method two: based on car-following theory, traffic characteristic data, intersection geometric parameters, vehicle dynamic and geometric parameters and the signal control scheme are collected; the vehicles within a signal cycle are divided into three groups according to the different states in which they pass through the intersection; the duration of each of the three traffic-flow inputs is judged by a discriminant in the algorithm; and the vehicle delay of each single vehicle in each group is then calculated. To collect the data needed to determine vehicle delay accurately, several electronic police poles have to be installed at equal intervals along the road section to be detected at the intersection, each carrying a camera. At present, however, most roads have only one electronic police pole, adding poles at an intersection is often very difficult, the data collected by the camera on a single pole are generally incomplete, and the accuracy of the vehicle delay determined from the collected data and the algorithm is therefore low.
Method three: using the vehicle passing data uploaded by checkpoint devices, the vehicles passing through the intersection entrances in the checkpoint data are grouped according to the passing phase and driving direction of the intersection; the corresponding signal periods are obtained by taking the headway between vehicles in a group as the boundary point of a signal period; and the vehicle delay at the intersection is estimated from the times at which the grouped vehicles pass through the intersection and the average travel time of vehicles passing through the intersection during the green light. Because this method is also based on data collected in only one direction, namely the headway, and estimates each vehicle's delay through an algorithm, the delay determined for each vehicle at the intersection is inaccurate.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a medium for determining vehicle delay, so as to solve the problem that the vehicle delay determined in the prior art is inaccurate.
The embodiment of the invention provides a method for determining vehicle delay, which comprises the following steps:
acquiring images captured at the same moment by at least two cameras of a multi-view camera, and stitching the images to obtain a composite image; if it is determined that the distance between a target vehicle and a first identification position in the composite image reaches a set first distance threshold, determining the composite image as a first composite image and acquiring a first acquisition time of the first composite image;
determining a second composite image when the distance between the target vehicle and a second identification position reaches a set second distance threshold according to a composite image spliced before the first composite image, and acquiring second acquisition time of the second composite image;
and determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
Furthermore, the at least two cameras collect images in different ranges of the road section to be monitored according to a set time interval.
Further, the determining, according to the composite image obtained by stitching before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches a set second distance threshold includes:
identifying a vehicle feature of the target vehicle in the first composite image;
if it is determined, according to the vehicle characteristics of the target vehicle, that the target vehicle exists in a composite image stitched before the first composite image, judging whether the distance between the target vehicle and a second identification position in that composite image reaches a set second distance threshold, and if so, taking that composite image as the second composite image.
Further, the identifying a vehicle feature of the target vehicle in the first composite image comprises:
identifying lane information and vehicle characteristics of the target vehicle in the first composite image;
determining that the target vehicle exists in a composite image spliced before the first composite image according to the vehicle characteristics of the target vehicle, comprising:
if it is determined that a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle exists in a composite image stitched before the first composite image, determining that the target vehicle exists in that composite image.
Further, if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the composite image obtained by stitching before the first composite image, determining that the target vehicle exists in the composite image includes:
sequentially detecting the composite images stitched before the first composite image in order of their acquisition times, and if it is determined that a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle exists in the currently detected composite image, determining that the target vehicle exists in the currently detected composite image.
Further, the determining, according to the composite image obtained by stitching before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches a set second distance threshold includes:
sequentially detecting the composite images stitched before the first composite image in order of their acquisition times; if it is determined, according to the position information and vehicle characteristics of the vehicle in the first composite image, that the target vehicle exists in the composite image immediately preceding the first composite image, judging whether the distance between the target vehicle and the second identification position in that preceding composite image reaches a set second distance threshold; if so, taking the preceding composite image as the second composite image; otherwise, updating the preceding composite image to be the first composite image and continuing the detection.
An embodiment of the present invention further provides a device for determining vehicle delay, where the device includes:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring images acquired by at least two cameras of a multi-view camera at the same time, splicing the images to obtain a composite image, and if the distance between a target vehicle and a first identification position in the composite image is determined to reach a set first distance threshold, determining the composite image as a first composite image and acquiring first acquisition time of the first composite image;
the determining unit is used for determining a second synthetic image when the distance between the target vehicle and a second identification position reaches a set second distance threshold value according to a synthetic image spliced before the first synthetic image, and acquiring second acquisition time of the second synthetic image;
and the processing unit is used for determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
Further, the determining unit is specifically configured to identify a vehicle feature of the target vehicle in the first composite image, and, if it is determined according to the vehicle feature of the target vehicle that the target vehicle exists in a composite image stitched before the first composite image, to judge whether the distance between the target vehicle and a second identification position in that composite image reaches a set second distance threshold and, if so, to take that composite image as the second composite image.
Further, the determining unit is specifically configured to identify lane information and vehicle characteristics of the target vehicle in the first composite image, and, if it is determined that a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle exists in a composite image stitched before the first composite image, to determine that the target vehicle exists in that composite image.
Further, the determining unit is specifically configured to sequentially detect the composite images obtained by stitching before the first composite image according to an order of acquisition times of the composite images, and if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the currently detected composite image, determine that the target vehicle exists in the currently detected composite image.
Further, the determining unit is specifically configured to sequentially detect the composite images stitched before the first composite image in order of their acquisition times; if it is determined, according to the position information and vehicle characteristics of the vehicle in the first composite image, that the target vehicle exists in the composite image immediately preceding the first composite image, to judge whether the distance between the target vehicle and a second identification position in that preceding composite image reaches a set second distance threshold; if so, to take the preceding composite image as the second composite image; otherwise, to update the preceding composite image to be the first composite image and continue the detection.
An embodiment of the present invention further provides an electronic device, where the electronic device at least includes a processor and a memory, and the processor is configured to implement the steps of the method for determining vehicle delay as described in any one of the above when executing a computer program stored in the memory.
Embodiments of the present invention further provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the vehicle delay determination methods described above.
According to the embodiment of the invention, the images acquired simultaneously by at least two cameras of the multi-view camera are stitched into a composite image. When the distance between the target vehicle and the first identification position in a composite image reaches the set first distance threshold, that composite image is taken as the first composite image and its first acquisition time is acquired. From the composite images stitched before the first composite image, the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is determined and its second acquisition time is acquired. The vehicle delay of the target vehicle is then determined from the first acquisition time and the second acquisition time. In this way the vehicle passing data of the target vehicle in both directions can be acquired accurately, and the accuracy of the determined delay of the target vehicle is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of a vehicle delay determination process according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a specific process for determining a second composite image according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a specific process for determining a second composite image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a specific process for determining a second composite image according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a specific process for determining a second composite image according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a specific vehicle delay determination process provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of a specific vehicle delay determination process provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a vehicle delay determination device according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In order to improve the accuracy of determining vehicle delays, embodiments of the present invention provide a method, an apparatus, a device, and a medium for determining vehicle delays.
Example 1:
fig. 1 is a schematic diagram of a vehicle delay determination process provided by an embodiment of the present invention, where the process includes the following steps:
s101: the method comprises the steps of obtaining images collected by at least two cameras of a multi-view camera at the same time, obtaining a composite image according to the images in a splicing mode, determining the composite image as a first composite image if the fact that the distance between a target vehicle and a first identification position in the composite image reaches a set first distance threshold value, and obtaining first collection time of the first composite image.
The method for determining the vehicle delay provided by the embodiment of the invention is applied to electronic equipment, and the electronic equipment can be image acquisition equipment comprising a multi-view camera, and can also be equipment such as a PC (personal computer) and a server which receive images acquired by the multi-view camera.
In the embodiment of the invention, in order to capture vehicle images well and obtain the passing data of the vehicles, the road section between the first identification position and the second identification position is taken as the road section to be detected, and a multi-view camera is installed on the electronic police pole of the road section to be detected that is close to the first identification position. The multi-view camera comprises at least two cameras; the shooting area of each camera is different, and adjacent shooting areas may overlap. In addition, so that accurate passing data can be obtained, all cameras capture images at the same moments, for example at the same time interval or at the same preset collection points in time.
After acquiring the images captured at the same moment by the at least two cameras of the multi-view camera, the electronic device processes each acquired image according to the vehicle delay determination method provided by the embodiment of the invention, so as to determine the vehicle delay of each vehicle.
The images captured at the same moment by the at least two cameras of the multi-view camera may be acquired by the electronic device itself, or may be received from other devices; this is not specifically limited here.
After the electronic device acquires the images captured at the same moment by the at least two cameras of the multi-view camera, it stitches them into a composite image, identifies each vehicle contained in the composite image, and, by checking whether the distance between a vehicle and the first identification position in the composite image reaches the set first distance threshold, determines whether that vehicle is leaving the road section to be detected. When it is determined that the distance between a vehicle and the first identification position in the composite image reaches the set first distance threshold, the license plate number of the vehicle is recognized, the vehicle with that license plate number is taken as the target vehicle, the composite image is taken as the first composite image, and the capture time of the images used to stitch the first composite image is obtained and used as the first acquisition time of the first composite image.
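As a rough illustration of the processing described above (not part of the patent), the following Python sketch shows how stitched frames captured at the same moment might be scanned for a vehicle whose distance to the first identification position has reached the first distance threshold. The helpers stitch, detect_vehicles, read_plate and distance_to_first_marker are hypothetical placeholders, and the threshold value is arbitrary.

```python
# Hypothetical sketch of S101: stitch synchronized frames, then check whether any
# vehicle has come within the first distance threshold of the first identification
# position. All helper callables are placeholders, not part of the patent text.
from dataclasses import dataclass

FIRST_DISTANCE_THRESHOLD = 5.0  # metres; value chosen for illustration only

@dataclass
class FirstCompositeRecord:
    plate: str
    composite_image: object
    capture_time: str  # "HHMMSS", as in the patent's example

def process_frame_set(frames, capture_time, stitch, detect_vehicles, read_plate,
                      distance_to_first_marker):
    """frames: images captured by the cameras at the same instant."""
    composite = stitch(frames)                      # composite image of the whole road section
    records = []
    for vehicle in detect_vehicles(composite):      # each vehicle found in the composite image
        if distance_to_first_marker(vehicle) <= FIRST_DISTANCE_THRESHOLD:
            # the vehicle is about to leave the road section: treat this composite
            # image as the first composite image for that (target) vehicle
            records.append(FirstCompositeRecord(read_plate(vehicle), composite, capture_time))
    return composite, records
```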
In order to accurately identify whether the distance between the target vehicle and the first identification position in the composite image reaches a set first distance threshold value, the installation position of the multi-view camera is in a region adjacent to the first identification position, and the region is far away from a second identification position of a preset road section to be detected.
It should be noted that specifically identifying whether the distance between a vehicle and the first identification position in the composite image reaches the set first distance threshold, and recognizing the license plate number of a vehicle in the composite image, belong to the prior art and are not specifically limited here.
S102: and determining a second composite image when the distance between the target vehicle and the second identification position reaches a set second distance threshold according to the composite image spliced before the first composite image, and acquiring second acquisition time of the second composite image.
After the electronic device determines the first composite image, it processes the composite images stitched before the first composite image and determines the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, that is, the composite image captured when the target vehicle entered the road section to be detected. The second acquisition time of the second composite image is obtained, so that the electronic device can subsequently determine the vehicle delay of the target vehicle on the road section to be detected from the second acquisition time and the first acquisition time obtained above.
S103: and determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
After the second acquisition time of the second synthetic image is acquired, the electronic equipment performs corresponding processing according to the first acquisition time and the second acquisition time corresponding to the target vehicle, and determines the vehicle delay of the target vehicle.
Specifically, when the vehicle delay of the target vehicle is determined, the time difference between the first acquisition time and the second acquisition time is determined, and the time difference is determined as the vehicle delay of the target vehicle.
For example, if the first acquisition time is "165020" and the second acquisition time is "164530", where the first two digits of an acquisition time represent hours, the third and fourth digits represent minutes, and the fifth and sixth digits represent seconds, then the vehicle delay of the target vehicle determined from the time difference between the first acquisition time and the second acquisition time is "000450", that is, 4 minutes and 50 seconds.
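For illustration only, a minimal Python sketch of this time-difference calculation with the "HHMMSS" strings used in the example (the function name is not from the patent):

```python
# Minimal sketch of S103 using the "HHMMSS" time strings from the example above.
from datetime import datetime

def vehicle_delay(first_capture: str, second_capture: str) -> str:
    """Return the delay between two 'HHMMSS' capture times as an 'HHMMSS' string."""
    t1 = datetime.strptime(first_capture, "%H%M%S")
    t2 = datetime.strptime(second_capture, "%H%M%S")
    delta = t1 - t2                       # the first capture is the later (leaving) time
    hours, rest = divmod(int(delta.total_seconds()), 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}{minutes:02d}{seconds:02d}"

print(vehicle_delay("165020", "164530"))  # -> "000450", i.e. 4 minutes 50 seconds
```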
According to the embodiment of the invention, the images acquired simultaneously by at least two cameras of the multi-view camera are stitched into a composite image. When the distance between the target vehicle and the first identification position in a composite image reaches the set first distance threshold, that composite image is taken as the first composite image and its first acquisition time is acquired. From the composite images stitched before the first composite image, the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is determined and its second acquisition time is acquired. The vehicle delay of the target vehicle is then determined from the first acquisition time and the second acquisition time. In this way the vehicle passing data of the target vehicle in both directions can be acquired accurately, and the accuracy of the determined delay of the target vehicle is improved.
Example 2:
in order to accurately acquire the vehicle delay of each vehicle of the road section to be monitored, on the basis of the above embodiment, in the embodiment of the present invention, the at least two cameras acquire images of different ranges of the road section to be monitored according to a set time interval.
If at least two cameras of the multi-view camera continuously acquire images of a road section to be detected, the electronic equipment for determining vehicle delay needs to consume a large amount of resources to splice each image acquired at the same moment, and store each spliced image. Therefore, in order to reduce the resources occupied for processing the images acquired by each camera at the same time, in the embodiment of the present invention, a set time interval is preset. And simultaneously acquiring images in different ranges in the road section to be detected by each camera of the multi-view camera according to the set time interval.
When the set time interval is set, different values are set according to different scenes, and if the information of each vehicle on the road section to be detected is expected to be collected as much as possible, the accuracy of the determined vehicle delay is further ensured, and the set time interval can be set to be smaller; if it is desired to reduce the resources used to process the images captured by each camera at the same time and improve the real-time performance of determining vehicle delays, the set time interval may be set to be larger, but not too large, and preferably, the set time interval may be 40ms, 30ms, etc.
The lengths of road sections to be detected differ, so the number of cameras in the selected multi-view camera also differs. For example, if the road section to be detected is frequently congested and the queue of congested vehicles is long, a multi-view camera with a larger number of cameras can be selected; if the cost of the multi-view camera is to be reduced and the road section to be detected is not prone to congestion, a multi-view camera with a smaller number of cameras can be selected. Preferably, the sum of the shooting ranges of the cameras of the selected multi-view camera should cover the entire road section to be detected. For example, if the length of the road section to be detected between the first identification position and the second identification position is 300 m, the multi-view camera comprises three cameras A, B and C and, taking the first identification position as the origin (0 m), camera A captures images in the range 0-100 m, camera B in the range 90-220 m, and camera C in the range 210-400 m.
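As an aside (not taken from the patent), the coverage condition described above can be checked with a small Python snippet, using the example ranges 0-100 m, 90-220 m and 210-400 m for a 300 m road section:

```python
# Illustrative check that the cameras' shooting ranges cover a road section without gaps.
def covers_segment(ranges, segment_length):
    """ranges: list of (start_m, end_m) per camera, in any order."""
    covered_up_to = 0.0
    for start, end in sorted(ranges):
        if start > covered_up_to:        # a gap before this camera's range begins
            return False
        covered_up_to = max(covered_up_to, end)
    return covered_up_to >= segment_length

print(covers_segment([(0, 100), (90, 220), (210, 400)], 300))  # True
```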
It should be noted that the range size of each camera shooting area of the multi-view camera may be the same or different. The range of the photographing region is expressed by a photographing distance in the embodiment of the present invention.
Example 3:
in order to accurately determine each composite image including a target vehicle, in an embodiment of the present invention based on the above embodiments, the determining, from the composite images spliced before the first composite image, a second composite image when a distance between the target vehicle and a second identification position reaches a set second distance threshold includes:
identifying a vehicle feature of the target vehicle in the first composite image;
if it is determined, according to the vehicle characteristics of the target vehicle, that the target vehicle exists in a composite image stitched before the first composite image, judging whether the distance between the target vehicle and a second identification position in that composite image reaches a set second distance threshold, and if so, taking that composite image as the second composite image.
In a practical application scene, because of the installation position of the cameras, the license plate number of the target vehicle can only be recognized accurately in close-range images; in far-range images it cannot be recognized accurately, so the second acquisition time of the vehicle cannot be obtained when determining the vehicle delay of the target vehicle. The same vehicle, however, generally has almost the same vehicle characteristics whether it appears in a close-range image or a far-range image. Therefore, in the embodiment of the invention, in order to determine the vehicle delay of the target vehicle accurately, the composite images in which the target vehicle exists can be determined according to the vehicle characteristics of the target vehicle. For each composite image in which the target vehicle is determined to exist, it is then determined whether the distance between the target vehicle and the second identification position in that composite image reaches the set second distance threshold, so as to obtain the second composite image.
Wherein the vehicle characteristics comprise at least one of the characteristics of the type of the vehicle, the color of the vehicle, the logo of the vehicle and the like.
In a specific implementation, the electronic device first identifies the vehicle characteristics of the target vehicle in the first composite image, and then matches the vehicle characteristics of the candidate vehicles contained in the composite images stitched before the first composite image against the vehicle characteristics of the target vehicle. If a candidate vehicle with matching vehicle characteristics exists in a composite image stitched before the first composite image, the matching candidate vehicle is determined to be the target vehicle, that is, the target vehicle exists in that composite image; it is then judged whether the distance between the target vehicle and the second identification position in that composite image reaches the set second distance threshold, and if so, that composite image is taken as the second composite image; otherwise, the other composite images stitched before the first composite image are searched in the same way. Here, matching means that the vehicle characteristics of the candidate vehicle are consistent with the vehicle characteristics of the target vehicle.
It should be noted that, recognizing the vehicle characteristics of the vehicle in the image belongs to the prior art, and details are not described herein.
Fig. 2 is a schematic diagram of a specific process for determining a second composite image according to an embodiment of the present invention, where the process includes:
s201: vehicle features of the target vehicle in the first composite image are identified.
S202: any composite image obtained by stitching before the first composite image is acquired.
S203: and judging whether a candidate vehicle matched with the vehicle feature of the target vehicle exists in the composite image, if so, executing S204, otherwise, executing S207.
S204: it is determined that the target vehicle is present in the composite image.
S205: and judging whether the distance between the target vehicle and the second identification position in the composite image reaches a set second distance threshold value, if so, executing S206, otherwise, executing S207.
S206: the composite image serves as a second composite image.
S207: another composite image obtained by stitching before the first composite image is acquired, and then S203 is performed.
In another possible embodiment, the identifying the vehicle feature of the target vehicle in the first composite image includes:
identifying lane information and vehicle characteristics of the target vehicle in the first composite image;
determining that the target vehicle exists in a composite image spliced before the first composite image according to the vehicle characteristics of the target vehicle, comprising:
if it is determined that a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle exists in a composite image stitched before the first composite image, determining that the target vehicle exists in that composite image.
Since more and more people travel by car, there are more and more vehicles with the same vehicle characteristics, and if the composite images containing the target vehicle were determined from the vehicle characteristics alone, the determined second composite image might well be inaccurate. In general, if the road section to be detected is a guidance lane, a vehicle cannot change lanes after entering it; that is, the lane in which a vehicle enters the road section to be detected is the same lane in which it leaves that road section. Therefore, in the embodiment of the present invention, the lane information and vehicle characteristics of the target vehicle in the first composite image can be identified, and the composite images containing the target vehicle can be determined according to the lane information and vehicle characteristics of the target vehicle.
In a specific implementation process, lane information and vehicle features of each candidate vehicle included in the composite images obtained before the first composite image are respectively matched with lane information and vehicle features of the target vehicle, and if a candidate vehicle matched with both the lane information and the vehicle features of the target vehicle is determined to exist in one composite image obtained before the first composite image, the target vehicle is determined to exist in the composite image. Subsequently, based on the above-described embodiment, it is determined whether the distance between the target vehicle and the second identification position in the composite image reaches the set second distance threshold, thereby determining whether the composite image is the second composite image. The matching means that the lane information of the candidate vehicle is consistent with the lane information of the target vehicle, and the vehicle characteristic of the candidate vehicle is consistent with the vehicle characteristic of the target vehicle.
It should be noted that the composite image to be checked here may be selected randomly from the composite images stitched before the first composite image.
Fig. 3 is a schematic diagram of a specific process for determining a second composite image according to an embodiment of the present invention, where the process includes:
s301: vehicle features and lane information of the target vehicle in the first composite image are identified.
S302: any composite image obtained by stitching before the first composite image is acquired.
S303: lane information and vehicle characteristics of each candidate vehicle included in the composite image are acquired.
S304: and judging whether a candidate vehicle which is matched with the vehicle characteristic and the lane information of the target vehicle exists in the composite image, if so, executing S305, and otherwise, executing S308.
S305: it is determined that the target vehicle is present in the composite image.
S306: and judging whether the distance between the target vehicle and the second identification position in the composite image reaches a set second distance threshold value, if so, executing S307, and otherwise, executing S308.
S307: the composite image serves as a second composite image.
S308: another composite image obtained by stitching before the first composite image is acquired, and then S303 is performed.
In another possible implementation, in order to determine the second composite image as quickly and accurately as possible, in the embodiment of the present invention the composite images stitched before the first composite image may be processed in order of acquisition time from latest to earliest so as to determine the second composite image. Specifically, if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in a composite image stitched before the first composite image, determining that the target vehicle exists in that composite image includes:
sequentially detecting the composite images stitched before the first composite image in order of their acquisition times, and if it is determined that a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle exists in the currently detected composite image, determining that the target vehicle exists in the currently detected composite image.
Since the composite images containing the same vehicle are consecutive, detecting the composite images stitched before the first composite image sequentially, in order of acquisition time from latest to earliest, makes the determined second composite image more accurate and determines it more efficiently than detecting composite images at random.
In a specific implementation, the composite images stitched before the first composite image are detected sequentially in order of their acquisition times. The method of the above embodiment is applied to each composite image, and if it is determined that a candidate vehicle matching both the lane information and the vehicle features of the target vehicle exists in the currently detected composite image, it is determined that the target vehicle exists in the currently detected composite image.
For example, suppose the lane information of the identified target vehicle is left-turn lane 1 and its vehicle characteristics are "car" and "white", and the currently detected composite image is identified as containing candidate vehicles a, b and c: the lane information of candidate vehicle a is left-turn lane 1 and its vehicle characteristics are "car" and "black"; the lane information of candidate vehicle b is straight lane 2 and its vehicle characteristics are "car" and "black"; and the lane information of candidate vehicle c is left-turn lane 1 and its vehicle characteristics are "car" and "white". The lane information and vehicle characteristics of candidate vehicle c match the lane information and vehicle characteristics of the target vehicle respectively, so it is determined that the currently detected composite image contains the target vehicle.
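The matching rule used in this example can be illustrated with the following Python snippet; the dictionary fields and the vehicle type "car" are illustrative assumptions, not taken from the patent text:

```python
# Sketch of the lane-plus-feature matching rule from the example above.
target = {"lane": "left-turn lane 1", "type": "car", "color": "white"}
candidates = [
    {"lane": "left-turn lane 1", "type": "car", "color": "black"},   # candidate a
    {"lane": "straight lane 2",  "type": "car", "color": "black"},   # candidate b
    {"lane": "left-turn lane 1", "type": "car", "color": "white"},   # candidate c
]

def matches(candidate, target):
    # both the lane information and every vehicle feature must be consistent
    return all(candidate[k] == target[k] for k in ("lane", "type", "color"))

print([matches(c, target) for c in candidates])  # [False, False, True]: candidate c is the target
```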
It should be noted that the lane information and the vehicle characteristics of the vehicle in the specific identification image belong to the prior art, and are not described herein again.
Fig. 4 is a schematic diagram of a specific process for determining a second composite image according to an embodiment of the present invention, where the process includes:
s401: vehicle features and lane information of the target vehicle in the first composite image are identified.
S402: a previous composite image of the first composite image is acquired.
S403: and acquiring lane information and vehicle characteristics of each candidate vehicle contained in the previous composite image.
S404: and judging whether a candidate vehicle matched with the vehicle characteristics and the lane information of the target vehicle exists in the previous composite image, if so, executing S405, and otherwise, executing S408.
S405: it is determined that the target vehicle exists in the previous composite image.
S406: and judging whether the distance between the target vehicle and the second identification position in the previous composite image reaches a set second distance threshold, if so, executing S407, and otherwise, executing S408.
S407: the previous composite image is taken as the second composite image.
S408: the previous synthesized image is updated to the first synthesized image, and then S402 is performed.
Example 4:
in order to further accurately determine the second composite image, on the basis of the above embodiment, in an embodiment of the present invention, the determining the second composite image when the distance between the target vehicle and the second identification position reaches the set second distance threshold value according to the composite image obtained by stitching before the first composite image includes:
sequentially detecting the composite images stitched before the first composite image in order of their acquisition times; if it is determined, according to the position information and vehicle characteristics of the vehicle in the first composite image, that the target vehicle exists in the composite image immediately preceding the first composite image, judging whether the distance between the target vehicle and the second identification position in that preceding composite image reaches a set second distance threshold; if so, taking the preceding composite image as the second composite image; otherwise, updating the preceding composite image to be the first composite image and continuing the detection.
Although a vehicle is generally not allowed to change lanes after entering the guidance lane, it cannot be ruled out that it does so; the road section to be detected may also have no guidance lane, or vehicles with the same vehicle characteristics may appear one after another in the same lane. However, vehicles on the road section to be detected usually do not travel fast, the set time interval at which the cameras capture images is short, and the position of each vehicle changes little between two adjacent composite images. Based on this, in the embodiment of the present invention the second composite image may also be determined from the position information and vehicle characteristics of the vehicle.
In a specific implementation, the composite images stitched before the first composite image are detected sequentially in order of their acquisition times. First, the position information and vehicle features of the target vehicle in the first composite image are identified, and then the position information and vehicle features of each candidate vehicle in the composite image preceding the first composite image are identified. It is then judged whether the preceding composite image contains a candidate vehicle that matches both the position information and the vehicle features of the target vehicle in the first composite image, so as to determine whether the preceding composite image contains the target vehicle. Here, the position information of a candidate vehicle matches the position information of the target vehicle when the distance between them is smaller than a set third distance threshold.
The preceding image of the first composite image is the composite image stitched immediately before the first composite image.
Specifically, if it is determined that candidate vehicles respectively matching the position information and the vehicle features of the target vehicle in the first synthetic image exist in the previous synthetic image, it is determined that the target vehicle exists in the previous synthetic image, it is determined whether the distance between the target vehicle in the previous synthetic image and the second identification position reaches a set second distance threshold, if so, the previous synthetic image is taken as the second synthetic image, otherwise, the previous synthetic image is updated to the first synthetic image, and the previous synthetic image of the updated first synthetic image is continuously detected according to the method.
For example, suppose the preset third distance threshold is 10, and the composite image preceding the first composite image contains candidate vehicles a, b and c: the position information of candidate vehicle a is (40,50) and its vehicle characteristics are "car" and "black"; the position information of candidate vehicle b is (80,50) and its vehicle characteristics are "car" and "black"; and the position information of candidate vehicle c is (40,80) and its vehicle characteristics are "car" and "white". The position information of the target vehicle identified in the first composite image is (43,54) and its vehicle characteristics are "car" and "black". The distance between the position of candidate vehicle a and the position of the target vehicle is √((43-40)² + (54-50)²) = 5, which is smaller than the preset third distance threshold of 10, and the vehicle characteristics of candidate vehicle a are consistent with those of the target vehicle. It is therefore determined that the preceding composite image contains the target vehicle, and it is then judged whether the distance between the target vehicle and the second identification position in that preceding composite image reaches the set second distance threshold.
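The position check in this example can be written out as a small Python snippet (illustrative only; the threshold of 10 is the value assumed in the example):

```python
# The position-matching check from the example above, written out explicitly.
import math

def position_matches(candidate_xy, target_xy, third_threshold=10):
    dx = candidate_xy[0] - target_xy[0]
    dy = candidate_xy[1] - target_xy[1]
    return math.hypot(dx, dy) < third_threshold

print(position_matches((40, 50), (43, 54)))  # True: the distance is 5, below the threshold of 10
```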
Fig. 5 is a schematic diagram of a specific process for determining a second composite image according to an embodiment of the present invention, where the process includes:
s501: vehicle features and location information of the target vehicle in the first composite image are identified.
S502: a previous composite image of the first composite image is acquired.
S503: the position information and the vehicle feature of each candidate vehicle included in the previous composite image are acquired.
S504: and judging whether a candidate vehicle matched with the vehicle characteristic and the position information of the target vehicle exists in the previous composite image, if so, executing S505, and otherwise, executing S508.
Here, matching the vehicle features of the target vehicle means that the previous composite image contains a candidate vehicle whose vehicle features are consistent with those of the target vehicle, and matching the position information of the target vehicle means that the previous composite image contains a candidate vehicle whose distance from the position of the target vehicle is smaller than the set third distance threshold.
S505: it is determined that the target vehicle exists in the previous composite image.
S506: and judging whether the distance between the target vehicle and the second identification position in the previous composite image reaches a set second distance threshold, if so, executing S507, and otherwise, executing S508.
S507: the previous composite image is taken as the second composite image.
S508: the previous synthesized image is updated to the first synthesized image, and then S501 is performed.
Example 5:
the following describes a method for determining vehicle delay according to an embodiment of the present invention in detail by using specific embodiments.
Fig. 6 is a schematic diagram of a specific vehicle delay determination process provided in an embodiment of the present invention, where the process includes:
s601: and acquiring images acquired by at least two cameras of the multi-view camera.
In a specific implementation, the at least two cameras of the multi-view camera installed on one electronic police pole cover the entire detection range of the road section to be detected. The range shot by each camera covers only part of the road section, and the union of the cameras' shooting areas covers the whole road section continuously. For example, the detection range of the road section to be detected is generally about 400 metres long: camera A of the multi-view camera captures images in the range 0-100 m, camera B in the range 90-260 m, and camera C in the range 210-400 m. In addition, the cameras simultaneously capture images of their different ranges of the road section to be detected at a set time interval.
S602: the electronic equipment splices a plurality of images collected at the same time into a composite image.
Because each image captured by a camera of the multi-view camera at a given moment covers only part of the road section to be detected, the electronic device stitches the images captured at the same moment by the at least two cameras into one composite image, so that the composite image covers the entire road section to be detected.
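Purely as a simplified stand-in (not the patent's stitching method), the sketch below assumes each camera's frame has already been rectified and cropped to a non-overlapping strip of the road section, so the composite image is just a horizontal concatenation; the handling of overlapping views is outside this sketch.

```python
# Deliberately simplified stand-in for the stitching step in S602.
import numpy as np

def stitch_frames(frames):
    """frames: list of HxWx3 arrays ordered from the first identification position outward."""
    height = min(f.shape[0] for f in frames)
    return np.hstack([f[:height] for f in frames])  # one composite image of the whole road section

# usage: composite = stitch_frames([frame_a, frame_b, frame_c])
```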
S603: the electronic device performs vehicle object recognition.
In order to save the time consumed by subsequently determining each composite image containing the target vehicle, in the embodiment of the invention, when the electronic equipment is spliced to obtain one composite image, the acquisition time of the composite image is acquired, and the license plate number, the vehicle characteristics and the position information of each vehicle in the composite image are identified.
S604: the electronic device tracks the trajectory of the vehicle for each of the identification information.
In S603 the electronic device has already identified the license plate number, vehicle features and position information of each vehicle in each composite image and assigned identification information to each vehicle. The composite images stitched after a given composite image can then be detected sequentially in order of acquisition time: if the currently detected composite image contains a candidate vehicle that matches both the position information and the vehicle features of the vehicle carrying a certain piece of identification information in the previous composite image, that candidate is determined to be the vehicle with that identification information in the currently detected composite image.
By the method, the track of each vehicle with the identification information on the road section to be detected is determined.
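A hypothetical sketch of this identification-information tracking is given below, with illustrative data structures and a position/feature matching rule assumed from the description above; none of the names come from the patent.

```python
# Hypothetical sketch of the forward tracking in S604: carry each vehicle's
# identification information from one composite image to the next by matching
# position and vehicle features.
import math
from itertools import count

_new_id = count(1)  # source of fresh identification information for unmatched vehicles

def track_forward(prev_tracks, current_detections, third_threshold=10):
    """prev_tracks / current_detections: lists of dicts with 'xy' and 'features' keys;
    tracked entries also carry an 'id' key."""
    tracks = []
    for det in current_detections:
        match = next((p for p in prev_tracks
                      if det["features"] == p["features"]
                      and math.hypot(det["xy"][0] - p["xy"][0],
                                     det["xy"][1] - p["xy"][1]) < third_threshold), None)
        det["id"] = match["id"] if match else next(_new_id)  # reuse or assign identification info
        tracks.append(det)
    return tracks
```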
S605: the electronic equipment identifies the target vehicle with the distance from the first identification position reaching the set first distance threshold.
Specifically, if it is determined that the distance between the target vehicle of a certain license plate number and the first identification position reaches a set first distance threshold value in a certain composite image obtained by splicing images acquired by at least two cameras of the multi-view camera at the same time, the composite image is used as a first composite image, and the first acquisition time of the first composite image is acquired.
S606: the electronic device determines a vehicle delay of the target vehicle.
Since the trajectory of each vehicle with identification information on the road section to be detected has already been determined in S604, when it is determined in S605 that the distance between the target vehicle and the first identification position reaches the set first distance threshold, the identification information of the target vehicle in the first composite image is acquired. The trajectory of the vehicle carrying that identification information, that is, every composite image containing the target vehicle, is then determined from the identification information, and from those composite images the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is determined.
The time difference between the first acquisition time of the target vehicle acquired in S605 and the second acquisition time of the determined second composite image is then calculated and taken as the vehicle delay of the target vehicle, that is, the vehicle delay of the target vehicle with that license plate number.
Fig. 7 is a schematic diagram of a specific vehicle delay determination process provided in an embodiment of the present invention, where the process includes:
S701: for a composite image obtained by stitching images acquired at the same time by at least two cameras of the multi-view camera, if the distance between the target vehicle and the first identification position in the composite image x is determined to reach the set first distance threshold, the composite image x is taken as the first composite image and the first acquisition time of the first composite image x is acquired.
S702: the lane information and vehicle characteristics of the target vehicle are identified in the first composite image x.
S703: a composite image containing the target vehicle is determined.
Specifically, the composite images stitched before the first composite image x are detected one by one in order of their acquisition time; for the currently detected composite image it is identified whether it contains a candidate vehicle matching both the lane information and the vehicle characteristics of the target vehicle, and if so, the currently detected composite image is determined to contain the target vehicle.
S704: it is determined whether the composite image containing the target vehicle is the second composite image.
When a composite image containing the target vehicle is determined in S703, it is judged whether the distance between the target vehicle and the second identification position in that composite image reaches the set second distance threshold. If so, the composite image is determined to be the second composite image; otherwise, the previous composite image of that composite image is obtained and taken as the currently detected composite image, and the process returns to S703.
S705: and determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time of the second composite image.
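Putting S701 to S705 together, the backward search for the second composite image can be sketched as below, again assuming the CompositeImage records above; is_target_match and reaches_second_threshold stand in for the lane/feature matching and for the distance test against the second identification position, and both are assumptions of this illustration.

def find_second_composite(composites, first_index, target,
                          is_target_match, reaches_second_threshold):
    # Scan the composite images stitched before the first composite image, from the
    # most recent one backwards (S703), and return the composite image in which the
    # target vehicle's distance to the second identification position reaches the
    # set second distance threshold (S704).
    for idx in range(first_index - 1, -1, -1):
        match = next((v for v in composites[idx].vehicles
                      if is_target_match(v, target)), None)    # lane + feature match
        if match is None:
            continue                                           # target not seen; go one image earlier
        if reaches_second_threshold(match):
            return composites[idx]                             # second composite image found
    return None                                                # target never reached the second position

If the function returns None, the target vehicle was never observed at the second identification position within the buffered composite images, and no delay can be reported for it.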
Example 6:
Fig. 8 is a schematic structural diagram of a vehicle delay determination device according to an embodiment of the present invention, where the device includes:
the acquiring unit 81 is configured to acquire images acquired by at least two cameras of the multi-view camera at the same time, obtain a composite image according to the images by stitching, determine that the composite image is a first composite image if it is determined that a distance between a target vehicle and a first identification position in the composite image reaches a set first distance threshold, and acquire first acquisition time of the first composite image;
a determining unit 82, configured to determine, according to a composite image obtained by stitching before the first composite image, a second composite image when a distance between the target vehicle and a second identification position reaches a set second distance threshold, and acquire a second acquisition time of the second composite image;
and the processing unit 83 is configured to determine the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
Further, the determining unit 82 is specifically configured to identify a vehicle feature of the target vehicle in the first composite image; according to the vehicle characteristics of the target vehicle, if the target vehicle is determined to exist in a composite image spliced before the first composite image, whether the distance between the target vehicle and a second identification position in the composite image reaches a set second distance threshold value is judged, and if yes, the composite image is used as a second composite image.
Further, the determining unit 82 is specifically configured to identify lane information and vehicle characteristics of the target vehicle in the first composite image; and if determining that the candidate vehicles which are matched with the lane information and the vehicle characteristics of the target vehicle exist in the composite image spliced before the first composite image, determining that the target vehicle exists in the composite image.
Further, the determining unit 82 is specifically configured to sequentially detect the composite images obtained by stitching before the first composite image according to an order of the acquisition time of the composite images, and if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the currently detected composite image, determine that the target vehicle exists in the currently detected composite image.
Further, the determining unit 82 is specifically configured to sequentially detect, according to an order of the acquisition time of the composite images, composite images obtained by stitching before the first composite image, determine, according to the position information of the vehicle and the vehicle characteristics in the first composite image, whether a distance between the target vehicle and a second identification position in the previous composite image reaches a set second distance threshold if it is determined that the target vehicle exists in the previous composite image of the first composite image, if so, use the previous composite image as the second composite image, otherwise, update the previous composite image as the first composite image, and continue to detect.
For the concepts, explanations, detailed descriptions and other steps of the vehicle delay determination device that relate to the technical solutions provided in the embodiments of the present invention, please refer to the description of the foregoing methods or other embodiments; they are not repeated here.
According to the embodiment of the invention, the images simultaneously acquired by at least two cameras of the multi-view camera are stitched into a composite image. When the distance between the target vehicle and the first identification position in the composite image reaches the set first distance threshold, the composite image is taken as the first composite image and its first acquisition time is acquired; the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is then determined from the composite images stitched before the first composite image, and its second acquisition time is acquired. The vehicle delay of the target vehicle is determined from the first acquisition time and the second acquisition time, so that the passing data of the target vehicle at the two identification positions can be accurately acquired and the accuracy of the determined target vehicle delay is improved.
Example 7:
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. On the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device, as shown in Fig. 9, including: a processor 91, a communication interface 92, a memory 93 and a communication bus 94, wherein the processor 91, the communication interface 92 and the memory 93 communicate with each other through the communication bus 94;
the memory 93 has stored therein a computer program which, when executed by the processor 91, causes the processor 91 to perform the steps of:
acquiring images acquired by at least two cameras of a multi-view camera at the same time, splicing the images to obtain a composite image, if the distance between a target vehicle and a first identification position in the composite image is determined to reach a set first distance threshold, determining the composite image as a first composite image, and acquiring first acquisition time of the first composite image;
determining a second composite image when the distance between the target vehicle and a second identification position reaches a set second distance threshold according to a composite image spliced before the first composite image, and acquiring second acquisition time of the second composite image;
and determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
Further, the processor 91, in particular, is configured to identify a vehicle feature of the target vehicle in the first composite image; according to the vehicle characteristics of the target vehicle, if the target vehicle is determined to exist in a composite image spliced before the first composite image, whether the distance between the target vehicle and a second identification position in the composite image reaches a set second distance threshold value is judged, and if yes, the composite image is used as a second composite image.
Further, the processor 91 is specifically configured to identify lane information and vehicle features of the target vehicle in the first composite image; and if determining that the candidate vehicles which are matched with the lane information and the vehicle characteristics of the target vehicle exist in the composite image spliced before the first composite image, determining that the target vehicle exists in the composite image.
Further, the processor 91 is specifically configured to sequentially detect the composite images obtained by stitching before the first composite image according to an acquisition time sequence of the composite images, and if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the currently detected composite image, determine that the target vehicle exists in the currently detected composite image.
Further, the processor 91 is specifically configured to sequentially detect a composite image obtained by stitching before the first composite image according to an acquisition time sequence of the composite image, determine, according to position information of a vehicle in the first composite image and vehicle characteristics, if it is determined that the target vehicle exists in a previous composite image of the first composite image, whether a distance between the target vehicle and a second identification position in the previous composite image reaches a set second distance threshold, if so, use the previous composite image as a second composite image, otherwise, update the previous composite image as the first composite image, and continue the detection.
Because the principle by which the electronic device solves the problem is similar to the method for determining the vehicle delay, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not described again.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 92 is used for communication between the above-described electronic apparatus and other apparatuses.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a central processing unit, a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
According to the embodiment of the invention, the images simultaneously acquired by at least two cameras of the multi-view camera are stitched into a composite image. When the distance between the target vehicle and the first identification position in the composite image reaches the set first distance threshold, the composite image is taken as the first composite image and its first acquisition time is acquired; the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is then determined from the composite images stitched before the first composite image, and its second acquisition time is acquired. The vehicle delay of the target vehicle is determined from the first acquisition time and the second acquisition time, so that the passing data of the target vehicle at the two identification positions can be accurately acquired and the accuracy of the determined target vehicle delay is improved.
Example 8:
On the basis of the foregoing embodiments, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program executable by an electronic device is stored, and when the program is run on the electronic device, the electronic device is caused to execute the following steps:
acquiring images acquired by at least two cameras of a multi-view camera at the same time, splicing the images to obtain a composite image, if the distance between a target vehicle and a first identification position in the composite image is determined to reach a set first distance threshold, determining the composite image as a first composite image, and acquiring first acquisition time of the first composite image;
determining a second composite image when the distance between the target vehicle and a second identification position reaches a set second distance threshold according to a composite image spliced before the first composite image, and acquiring second acquisition time of the second composite image;
and determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time.
Furthermore, the at least two cameras collect images in different ranges of the road section to be monitored according to a set time interval.
Further, the determining, according to the composite image obtained by stitching before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches a set second distance threshold includes:
identifying a vehicle feature of the target vehicle in the first composite image;
according to the vehicle characteristics of the target vehicle, if the target vehicle is determined to exist in a composite image spliced before the first composite image, whether the distance between the target vehicle and a second identification position in the composite image reaches a set second distance threshold value is judged, and if yes, the composite image is used as a second composite image.
Further, the identifying a vehicle feature of the target vehicle in the first composite image comprises:
identifying lane information and vehicle characteristics of the target vehicle in the first composite image;
determining that the target vehicle exists in a composite image spliced before the first composite image according to the vehicle characteristics of the target vehicle, comprising:
and if determining that the candidate vehicles which are matched with the lane information and the vehicle characteristics of the target vehicle exist in the composite image spliced before the first composite image, determining that the target vehicle exists in the composite image.
Further, if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the composite image obtained by stitching before the first composite image, determining that the target vehicle exists in the composite image includes:
and sequentially detecting the composite images spliced before the first composite image according to the sequence of the acquisition time of the composite images, and if determining that candidate vehicles matched with the lane information and the vehicle characteristics of the target vehicle exist in the currently detected composite image, determining that the target vehicle exists in the currently detected composite image.
Further, the determining, according to the composite image obtained by stitching before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches a set second distance threshold includes:
according to the sequence of the acquisition time of the composite images, composite images obtained by splicing before the first composite image are sequentially detected, according to the position information and the vehicle characteristics of the vehicle in the first composite image, if the target vehicle is determined to exist in the previous composite image of the first composite image, whether the distance between the target vehicle and the second identification position in the previous composite image reaches a set second distance threshold value is judged, if yes, the previous composite image is used as a second composite image, and if not, the previous composite image is updated to be the first composite image and detection is continued.
Since the principle by which the computer-readable medium solves the problem is similar to the method for determining the vehicle delay, the steps implemented when the processor executes the computer program in the computer-readable medium may refer to the implementation of the method, and repeated details are not described again.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
According to the embodiment of the invention, the images simultaneously acquired by at least two cameras of the multi-view camera are stitched into a composite image. When the distance between the target vehicle and the first identification position in the composite image reaches the set first distance threshold, the composite image is taken as the first composite image and its first acquisition time is acquired; the second composite image, in which the distance between the target vehicle and the second identification position reaches the set second distance threshold, is then determined from the composite images stitched before the first composite image, and its second acquisition time is acquired. The vehicle delay of the target vehicle is determined from the first acquisition time and the second acquisition time, so that the passing data of the target vehicle at the two identification positions can be accurately acquired and the accuracy of the determined target vehicle delay is improved.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (9)

1. A method of determining vehicle delays, the method comprising:
acquiring images acquired by at least two cameras of a multi-view camera at the same time, splicing the images to obtain a composite image, if the distance between a target vehicle and a first identification position in the composite image is determined to reach a set first distance threshold, determining the composite image as a first composite image, and acquiring first acquisition time of the first composite image;
determining a second composite image when the distance between the target vehicle and a second identification position reaches a set second distance threshold according to a composite image spliced before the first composite image, and acquiring second acquisition time of the second composite image;
determining vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time;
wherein, the determining, according to the composite image obtained by stitching before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches the set second distance threshold includes:
identifying a vehicle feature of the target vehicle in the first composite image;
according to the vehicle characteristics of the target vehicle, if the target vehicle is determined to exist in a composite image spliced before the first composite image, whether the distance between the target vehicle and a second identification position in the composite image reaches a set second distance threshold value is judged, and if yes, the composite image is used as a second composite image;
wherein the identifying the vehicle feature of the target vehicle in the first composite image comprises:
identifying lane information and vehicle characteristics of the target vehicle in the first composite image;
determining that the target vehicle exists in a composite image spliced before the first composite image according to the vehicle characteristics of the target vehicle, comprising:
and if determining that the candidate vehicles which are matched with the lane information and the vehicle characteristics of the target vehicle exist in the composite image spliced before the first composite image, determining that the target vehicle exists in the composite image.
2. The method according to claim 1, characterized in that the at least two cameras acquire images of different ranges of the road section to be monitored at set time intervals.
3. The method according to claim 1, wherein determining that the target vehicle is present in the composite image spliced before the first composite image if it is determined that a candidate vehicle that matches both the lane information and the vehicle feature of the target vehicle is present in the composite image comprises:
and sequentially detecting the composite images spliced before the first composite image according to the sequence of the acquisition time of the composite images, and if determining that candidate vehicles matched with the lane information and the vehicle characteristics of the target vehicle exist in the currently detected composite image, determining that the target vehicle exists in the currently detected composite image.
4. The method of claim 1, wherein the determining, according to the composite image stitched before the first composite image, the second composite image when the distance between the target vehicle and the second identification position reaches the set second distance threshold comprises:
according to the sequence of the acquisition time of the composite images, composite images obtained by splicing before the first composite image are sequentially detected, according to the position information and the vehicle characteristics of the vehicle in the first composite image, if the target vehicle is determined to exist in the previous composite image of the first composite image, whether the distance between the target vehicle and the second identification position in the previous composite image reaches a set second distance threshold value is judged, if yes, the previous composite image is used as a second composite image, and if not, the previous composite image is updated to be the first composite image and detection is continued.
5. A vehicle delay determining apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring images acquired by at least two cameras of a multi-view camera at the same time, splicing the images to obtain a composite image, and if the distance between a target vehicle and a first identification position in the composite image is determined to reach a set first distance threshold, determining the composite image as a first composite image and acquiring first acquisition time of the first composite image;
the determining unit is used for determining a second synthetic image when the distance between the target vehicle and a second identification position reaches a set second distance threshold value according to a synthetic image spliced before the first synthetic image, and acquiring second acquisition time of the second synthetic image;
the processing unit is used for determining the vehicle delay of the target vehicle according to the first acquisition time and the second acquisition time;
wherein the determination unit is specifically configured to identify a vehicle feature of the target vehicle in the first composite image; according to the vehicle characteristics of the target vehicle, if the target vehicle is determined to exist in a composite image spliced before the first composite image, whether the distance between the target vehicle and a second identification position in the composite image reaches a set second distance threshold value is judged, and if yes, the composite image is used as a second composite image;
the determining unit is specifically configured to identify lane information and vehicle features of the target vehicle in the first composite image; and if determining that the candidate vehicles which are matched with the lane information and the vehicle characteristics of the target vehicle exist in the composite image spliced before the first composite image, determining that the target vehicle exists in the composite image.
6. The apparatus according to claim 5, wherein the determining unit is configured to detect the composite images stitched before the first composite image in sequence according to an acquisition time sequence of the composite images, and determine that the target vehicle exists in the currently detected composite image if it is determined that a candidate vehicle matching both the lane information and the vehicle feature of the target vehicle exists in the currently detected composite image.
7. The apparatus according to claim 6, wherein the determining unit is specifically configured to sequentially detect the composite images obtained by stitching before the first composite image according to an order of the acquisition time of the composite images, determine, according to the position information of the vehicle and the vehicle characteristics in the first composite image, whether a distance between the target vehicle and a second identification position in a previous composite image of the first composite image reaches a set second distance threshold value if it is determined that the target vehicle exists in the previous composite image, regard the previous composite image as the second composite image if it is determined that the distance between the target vehicle and the second identification position in the previous composite image reaches the set second distance threshold value, otherwise, update the previous composite image as the first composite image, and continue the detection.
8. An electronic device, characterized in that the electronic device comprises at least a processor and a memory, the processor being adapted to carry out the steps of the method of determining a vehicle delay according to any one of claims 1-4 when executing a computer program stored in the memory.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when being executed by a processor, carries out the steps of the method of determining a vehicle delay according to any one of claims 1 to 4.