CN113053104A - Target state determination method and device, computer equipment and storage medium - Google Patents

Target state determination method and device, computer equipment and storage medium

Info

Publication number
CN113053104A
CN113053104A (application CN202110215168.4A)
Authority
CN
China
Prior art keywords: detected, target, area, tracking, current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110215168.4A
Other languages
Chinese (zh)
Inventor
朱月萍 (Zhu Yueping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202110215168.4A
Publication of CN113053104A
Legal status: Pending

Classifications

    • G08G 1/0104: Traffic control systems for road vehicles; measuring and analysing of parameters relative to traffic conditions
    • G06N 3/045: Computing arrangements based on biological models; neural networks; combinations of networks
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06V 20/46: Scenes; scene-specific elements in video content; extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G08G 1/0133: Traffic data processing for classifying traffic situation
    • G08G 1/0137: Measuring and analysing of parameters relative to traffic conditions for specific applications

Abstract

The application relates to a target state determination method and apparatus, a computer device and a storage medium. The method comprises the following steps: tracking a target to be detected in a video to be detected to obtain a tracking result; acquiring an area to be detected corresponding to the video to be detected; calculating driving characteristics of the target to be detected in the area to be detected according to the tracking result; judging whether the area to be detected is blocked according to those driving characteristics; when the area to be detected is blocked, acquiring an associated area corresponding to it; and acquiring, according to the tracking result, a target track of the target to be detected in the area to be detected while the area is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected and the associated area. The method improves the accuracy of the state determination.

Description

Target state determination method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and an apparatus for determining a target state, a computer device, and a storage medium.
Background
As living standards rise, modes of travel have changed greatly, and corresponding rules must be set for each of them. "Stop at red, go at green" is a familiar example, but as urban vehicle ownership grows sharply, simply "rushing the green light" into a congested intersection becomes a violation: when the intersection ahead is jammed, a motor vehicle should queue and wait outside the intersection, and a vehicle that instead enters on the green light and comes to rest inside the intersection has failed to wait in sequence at a congested intersection.
In the conventional technology, such violations are found by manually reviewing, one by one, the pictures or video recorded by electronic police. This consumes a great deal of manpower and material resources, and the accuracy is easily degraded by human factors.
Disclosure of Invention
In view of the above, it is necessary to provide a target state determination method, apparatus, computer device and storage medium capable of improving accuracy.
A method of target state determination, the method comprising:
tracking a target to be detected in a video to be detected to obtain a tracking result;
acquiring a to-be-detected area corresponding to the to-be-detected video;
calculating the driving characteristics of the target to be detected in the area to be detected according to the tracking result of the target to be detected;
judging whether the area to be detected is blocked or not according to the driving characteristics of the target to be detected in the area to be detected;
when the area to be detected is blocked, acquiring an associated area corresponding to the area to be detected;
and acquiring, according to the tracking result of the target to be detected, a target track of the target to be detected in the area to be detected while the area to be detected is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected and the associated area.
In one embodiment, the driving characteristics include a density of the target to be detected in the area to be detected and/or a driving speed of the target to be detected in the area to be detected.
In one embodiment, the calculation method of the density of the target to be detected in the region to be detected includes:
acquiring a tracking frame of a target to be detected, which is tracked at the current time;
determining the number of targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
acquiring the area of the region to be detected;
and calculating according to the number of the targets to be detected in the region to be detected and the area of the region to be detected to obtain the density of the targets to be detected.
In one embodiment, the calculation method of the traveling speed of the target to be detected in the area to be detected includes:
acquiring a tracking frame of a target to be detected, which is tracked at the current time;
acquiring a target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
acquiring the position of a target to be detected in the area to be detected in a first preset frame;
obtaining a moving distance according to the position of the target to be detected in the area to be detected and the acquired position of the first preset frame;
and calculating the running speed according to the moving distance.
In one embodiment, obtaining the target state of the target to be detected according to the positional relationship among the target track, the region to be detected and the associated region includes:
determining a stop line according to the to-be-detected region and the associated region;
and obtaining the state of the target to be detected according to the position relation between the target track to be detected and the stop line.
In one embodiment, the tracking a target to be detected in a video to be detected to obtain a tracking result includes:
acquiring a current tracking template;
determining a previous frame corresponding to the current frame in the video to be detected, and acquiring a target area of a target to be detected in the previous frame;
determining a corresponding tracking area of the target area in the current frame, and expanding the tracking area;
and detecting the target to be detected in the expanded tracking area according to the tracking template so as to track the target to be detected corresponding to the tracking template to obtain a tracking result.
In one embodiment, the obtaining the current tracking template includes:
when the current tracking template does not exist, acquiring a first reference frame in a video to be detected, and performing target detection on the first reference frame to obtain a first target;
obtaining a current tracking template according to the first target;
when the current tracking template exists, judging whether the current frame is a reference frame;
when the current frame is a reference frame, performing target detection on the reference frame to obtain a current target, and tracking a target to be detected in the reference frame according to the current tracking template to obtain a tracking target;
calculating the overlapping degree of the current target and the tracking target;
when the overlapping degree is larger than or equal to a preset value, updating a current tracking template of a corresponding tracking target according to the current target;
and when the overlapping degree is smaller than a preset value, generating a new current tracking template according to the current target.
In one embodiment, the calculating the overlapping degree of the current target and the tracking target includes:
determining a target boundary corresponding to the current target and a tracking boundary corresponding to the tracking target;
determining the overlapping part of the current target and the tracking target according to the target boundary and the corresponding tracking boundary;
respectively calculating the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located;
and calculating the overlapping degree according to the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located.
A target state determination apparatus, the apparatus comprising:
the tracking module is used for tracking a target to be detected in the video to be detected to obtain a tracking result;
the to-be-detected area determining module is used for acquiring an to-be-detected area corresponding to the to-be-detected video;
the driving characteristic calculation module is used for calculating the driving characteristics of the target to be detected in the area to be detected according to the tracking result of the target to be detected;
the judging module is used for judging whether the area to be detected is blocked or not according to the running characteristics of the target to be detected in the area to be detected;
the association area determining module is used for acquiring an association area corresponding to the area to be detected when the area to be detected is blocked;
and the state determining module is used for acquiring, according to the tracking result of the target to be detected, a target track of the target to be detected in the area to be detected while the area to be detected is blocked, and for obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected and the associated area.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
According to the target state determining method and device, the computer equipment and the storage medium, the target to be detected is tracked to obtain a tracking result, so the driving characteristics of each target can be derived from that result and used to judge whether the area to be detected is blocked. If it is, the target track of each target inside the blocked area is likewise obtained from the tracking result; that is, the track of each target in the area is determined for the period during which the area is blocked, and the target state is then determined from the target track, the area to be detected and the associated area. Because both the blockage judgment and the post-blockage tracks are derived from the tracking result, their accuracy is improved, which in turn improves the accuracy of the subsequent state judgment.
Drawings
FIG. 1 is a diagram of an application environment of a method for target state determination in one embodiment;
FIG. 2 is a flow diagram illustrating a method for target state determination in one embodiment;
FIG. 3 is a schematic diagram of area division at an intersection in one embodiment;
FIG. 4 is a schematic flow chart of step S202 in the embodiment shown in FIG. 2;
FIG. 5 is a framework diagram of a Siamese RPN network in one embodiment;
FIG. 6 is a block diagram of a target state determination device in one embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The target state determination method provided by the application can be applied to the application environment shown in fig. 1, in which the collection terminal 102 communicates with the server 104 through a network. The collection terminal 102 captures a video to be detected and sends it to the server 104. After receiving the video, the server 104 may store it in a database and start its detection through a timed task, or detect it directly. The server 104 tracks the targets to be detected in the video to obtain a tracking result and acquires the area to be detected corresponding to the video; it then calculates the driving characteristics of the targets within that area from the tracking result and judges from those characteristics whether the area is blocked. If the server judges that the area is blocked, it acquires the associated area corresponding to the area to be detected, obtains from the tracking result the target track of each target in the area while it is blocked, and obtains the target state of each target according to the positional relationship among the target track, the area to be detected and the associated area. The collection terminal 102 may be, but is not limited to, an electronic police unit or a camera monitoring traffic conditions, and the server 104 may be implemented as an independent server or as a cluster of servers.
In one embodiment, as shown in fig. 2, a method for determining a target state is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
s202: and tracking the target to be detected in the video to be detected to obtain a tracking result.
Specifically, the video to be detected is captured by the collection terminal, which sends it to the server; the server may store the video in a database and start its detection through a timed task, or detect it directly. The server then tracks all targets in the video to obtain a tracking result, for example by combining the YOLO and SiamRPN algorithms.
Detecting all targets can be realized by detecting every target in the first frame of the video and then, at intervals of a preset number of frames, judging whether new targets have appeared and adding them if so. Tracking may be template-based: a template is generated whenever a new target is detected, and the template of each target is refreshed after every preset number of frames.
Further, preferably, the target is a vehicle.
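A minimal sketch of the detect-and-track loop just described, assuming hypothetical yolo_detect(), siamrpn_track() and make_template() helpers standing in for a real YOLO detector and SiamRPN tracker (the iou() function is sketched near the end of this description); this is an illustration of the scheme, not the literal implementation:

import cv2

REFRESH_INTERVAL = 30   # assumed: one reference frame every n = 30 frames
NEW_TARGET_IOU = 0.1    # below this IOU, a detection is treated as a new target

def run_tracking(video_path, yolo_detect, siamrpn_track, make_template, iou):
    cap = cv2.VideoCapture(video_path)
    templates = {}   # target id -> tracking template
    tracks = {}      # target id -> list of tracking frames (boxes), one per frame
    next_id = 0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Track every known target with its current template.
        for tid, tpl in templates.items():
            tracks.setdefault(tid, []).append(siamrpn_track(tpl, frame))
        # On reference frames, detect and register any new targets.
        if frame_idx % REFRESH_INTERVAL == 0:
            for det_box in yolo_detect(frame):
                known = any(tracks.get(tid) and
                            iou(det_box, tracks[tid][-1]) >= NEW_TARGET_IOU
                            for tid in list(templates))
                if not known:
                    templates[next_id] = make_template(frame, det_box)
                    next_id += 1
        frame_idx += 1
    cap.release()
    return tracks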
S204: and acquiring a to-be-detected area corresponding to the to-be-detected video.
Specifically, each video to be detected corresponds to a collection terminal, and the position of each collection terminal is fixed, so a correspondence between collection terminals and intersections can be established, the area to be detected at each intersection can be delimited in advance, and the associated area corresponding to each area to be detected can be determined.
Referring to fig. 3, a schematic diagram of the area division of an intersection in one embodiment: while the green light is on, the vehicles in area ⑥ are taken as the target vehicles. When the left-turn area ahead is blocked, vehicles in the left-turn lane of area ⑥ cannot move forward, while straight-ahead and right-turn traffic proceeds normally; when the right-turn area is blocked, the situation mirrors the left-turn case; when the straight-ahead area (area ① in fig. 3) is blocked, vehicles in the straight lane of area ⑥ cannot move forward, while left-turn and right-turn traffic proceeds normally; and when the whole area ahead of the vehicles is blocked (area ⑨ in fig. 3), the vehicles in the straight, left-turn and right-turn lanes of area ⑥ cannot move forward and must wait in sequence before the stop line.
S206: and calculating the driving characteristics of the target to be detected in the area to be detected according to the tracking result of the target to be detected.
S208: and judging whether the area to be detected is blocked or not according to the driving characteristics of the target to be detected in the area to be detected.
Specifically, for the delimited area to be detected, the index for determining whether a jam occurs in the area may be a driving characteristic. The driving characteristics comprise the density of the target to be detected in the area to be detected and/or the driving speed of the target to be detected in the area to be detected.
The driving characteristics are obtained from the tracking results of the detected targets, for example by calculation from the position information and time information those results contain: the position information determines the number of targets inside the area to be detected and the distance each target has moved, and combining the moving distance with the time information yields each target's speed. The server judges whether the area to be detected is blocked according to the density of the targets in the area and/or their driving speed. For example, with a density threshold a and a speed threshold b, the area is judged blocked when the density p > a and the driving speed v < b.
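A minimal sketch of this congestion decision; the thresholds a and b are assumed tunables:

def is_blocked(density: float, avg_speed: float,
               density_threshold: float, speed_threshold: float) -> bool:
    # The area counts as blocked only when it is both dense and slow.
    return density > density_threshold and avg_speed < speed_threshold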
S210: and when the area to be detected is blocked, acquiring the associated area corresponding to the area to be detected.
S212: and acquiring, according to the tracking result of the target to be detected, a target track of the target to be detected in the area to be detected while the area to be detected is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected and the associated area.
Specifically, with reference to fig. 3, when the server determines that the area to be detected is blocked, it acquires the associated area corresponding to that area, i.e. the associated area preset in fig. 3, and then determines the target state of each target from its movement track, for example judging from a vehicle's track whether it waited in sequence. Concretely, obtaining the target state of the target to be detected according to the target track during the blockage and the positional relationship among that track, the area to be detected and the associated area includes: determining a stop line according to the area to be detected and the associated area, and obtaining the target state according to the positional relationship between the target track and the stop line. For example, if a vehicle's track moves from before the stop line to beyond it at a time when the vehicle cannot move forward, the vehicle is judged not to have waited in sequence, which is a violation.
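A minimal sketch of the stop-line check, assuming the target track is a list of (x, y) centre points in image coordinates and the stop line is given by two endpoints; these representations are assumptions for illustration:

def line_side(p, a, b):
    # Sign of the cross product: > 0 on one side of line a-b, < 0 on the other.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_stop_line(track, line_a, line_b) -> bool:
    # True if the track has points on both sides of the stop line, i.e. the
    # vehicle advanced past the stop line during the blockage.
    sides = [line_side(p, line_a, line_b) for p in track]
    return any(s > 0 for s in sides) and any(s < 0 for s in sides)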
In the above embodiment, the vehicles in the intersection surveillance video are tracked; whether congestion occurs is judged from the density and average moving speed of the vehicles in a specific area; and, when traffic congestion occurs, whether each target vehicle waits in sequence is determined. The method automatically and effectively screens every vehicle appearing on the road and judges whether it waits in sequence during congestion, saving police manpower, widening law-enforcement coverage and improving law-enforcement efficiency.
According to the target state determining method, the target to be detected is tracked to obtain a tracking result, from which the driving characteristics of each target can be derived and used to judge whether the area to be detected is blocked. If it is, the target track of each target within the blocked area is likewise obtained from the tracking result, i.e. the track of each target in the area is determined for the period of the blockage, and the target state is then determined from the target track, the area to be detected and the associated area. Because both the blockage judgment and the post-blockage tracks come from the tracking result, their accuracy improves, and with it the accuracy of the subsequent state judgment.
In one embodiment, the calculation method of the density of the target to be detected in the area to be detected includes: acquiring a tracking frame of a target to be detected, which is tracked at the current time; determining the number of targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the area of a region to be detected; and calculating according to the number of the targets to be detected in the region to be detected and the area of the region to be detected to obtain the density of the targets to be detected.
Specifically, when calculating the density of the targets to be detected, for a given area in the current frame of the video, the number N of targets inside the area to be detected is counted first, with N initialised to 0. Whether a target lies inside the area is determined from the tracking frame obtained for it at the current time: if the centre point (x, y) of the tracking rectangle falls within the area to be detected (which can be tested with the pointPolygonTest function of OpenCV), then N = N + 1. After all vehicles have been traversed, N is the actual number of vehicles inside the area. The server then obtains the area of the region to be detected as s, and the vehicle density within it is:

ρ = N / s
In the above embodiment, the number of targets inside the area to be detected is determined from the positional relationship between each vehicle's tracking frame and the area to be detected, and the vehicle density is then calculated.
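A minimal sketch of the density computation, using OpenCV's pointPolygonTest as suggested above; the box and polygon representations are assumptions for illustration:

import numpy as np
import cv2

def vehicle_density(tracking_frames, region_polygon, region_area_s):
    # tracking_frames: list of (x1, y1, x2, y2) rectangles; region_polygon:
    # iterable of (x, y) vertices; region_area_s: area s of the region.
    region = np.asarray(region_polygon, dtype=np.float32)
    n = 0
    for (x1, y1, x2, y2) in tracking_frames:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # centre of the tracking frame
        # pointPolygonTest returns +1 inside, 0 on the edge, -1 outside.
        if cv2.pointPolygonTest(region, (cx, cy), False) >= 0:
            n += 1
    return n / region_area_s                        # ρ = N / s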
In one embodiment, the calculation method of the running speed of the target to be detected in the area to be detected comprises the following steps: acquiring a tracking frame of a target to be detected, which is tracked at the current time; acquiring a target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the position of a target to be detected in a region to be detected in a first preset frame; obtaining a moving distance according to the position of the target to be detected in the area to be detected and the obtained position of the first preset frame; and calculating the running speed according to the moving distance.
When calculating the driving speed, the server can combine it with the density calculation: the vehicles inside the area to be detected have already been identified there (from the position of each tracking frame and the position of the area), and each vehicle's tracking track is available, so the server traverses every vehicle in the area. For vehicle i, the server computes the distance Li moved from the first preset frame (t frames before the current frame) to the current frame: if the centre point of vehicle i in the first preset frame is (xim, yim) and its centre point in the current frame is (xm, ym), then:

Li = sqrt(pow(xm - xim, 2) + pow(ym - yim, 2))

where sqrt denotes the square root and pow(·, 2) denotes squaring.
The server then calculates the driving speed from the moving distance. The travel time t is determined from the inter-frame interval and the number of frames, and the average speed v of the N vehicles in the area is:

v = (1/N) * Σ_{i=1..N} (Li / t)
In the above embodiment, the calculation of the driving speed is combined with the density calculation, which improves computational efficiency.
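A minimal sketch of the average-speed computation, assuming each track is a list of (x, y) centre points (one per frame), fps is the video frame rate, and t_frames is the preset look-back window; speeds come out in pixels per second unless the coordinates are calibrated:

import math

def average_speed(tracks_in_region, t_frames: int, fps: float) -> float:
    t = t_frames / fps                       # travel time t in seconds
    speeds = []
    for track in tracks_in_region:
        if len(track) <= t_frames:           # not enough history for this vehicle
            continue
        xim, yim = track[-1 - t_frames]      # centre in the first preset frame
        xm, ym = track[-1]                   # centre in the current frame
        li = math.sqrt((xm - xim) ** 2 + (ym - yim) ** 2)
        speeds.append(li / t)
    return sum(speeds) / len(speeds) if speeds else 0.0   # v = (1/N) * Σ (Li / t)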
In one embodiment, referring to fig. 4, which is a schematic flowchart of step S202 in the embodiment shown in fig. 2, tracking the target to be detected in the video to be detected to obtain a tracking result includes:
s402: and acquiring the current tracking template.
Specifically, the current tracking template may be the template generated by recognizing the first reference frame when the video to be detected is first read, the template generated when a new target is recognized for the first time during tracking, or a template refreshed during tracking.
S404: and determining a previous frame corresponding to the current frame in the video to be detected, and acquiring a target area of the target to be detected in the previous frame.
S406: and determining a corresponding tracking area of the target area in the current frame, and expanding the tracking area.
Specifically, the server first obtains the previous frame corresponding to the current frame to be identified, and then obtains the tracking result of that previous frame, i.e. the target area of the target to be detected in the previous frame. To improve detection accuracy, the server determines the tracking area corresponding to the target area in the current frame and expands it; for example, in each subsequent frame, an image of the region around the previous frame's target, twice as large as that target, is taken as the input of the detection branch.
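A minimal sketch of this expansion step, assuming the frame is a NumPy image array and the box is (x1, y1, x2, y2); whether "twice" refers to the side length or the area is not fixed by the text, so the scale factor is a tunable assumption:

def expanded_crop(frame, box, scale: float = 2.0):
    # Crop a window scale times the size of the previous frame's target box,
    # centred on that box and clamped to the image; also return its origin.
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2.0, y1 + h / 2.0
    half_w, half_h = scale * w / 2.0, scale * h / 2.0
    img_h, img_w = frame.shape[:2]
    nx1, ny1 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    nx2, ny2 = min(int(cx + half_w), img_w), min(int(cy + half_h), img_h)
    return frame[ny1:ny2, nx1:nx2], (nx1, ny1)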
S408: and detecting the target to be detected in the expanded tracking area according to the tracking template so as to track the target to be detected corresponding to the tracking template to obtain a tracking result.
Specifically, referring to fig. 5, this embodiment tracks the target to be detected with a SiamRPN network. The network has two branches: a template branch, whose input is the current tracking template, and a detection branch, whose input is the expanded tracking area. Each branch extracts features; for example, the template branch yields 6×6×256 template features and the detection branch yields 22×22×256 detection features. After passing through the same convolution layer (Conv), the template features and the detection features undergo a cross-correlation operation, and the box classified as foreground with the highest confidence is taken as the tracking frame of the current frame.
In the above embodiment, tracking of the target to be detected is realized by determining the tracking area and then combining the tracking template with the two branches of the SiamRPN network.
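A minimal sketch of the cross-correlation step, using the 6×6×256 and 22×22×256 feature shapes mentioned above; a real SiamRPN adds classification and regression (RPN) heads on top of this operation, which are omitted here:

import numpy as np

def cross_correlate(template_feat, search_feat):
    # template_feat: (C, k, k); search_feat: (C, H, W) with H, W >= k.
    # The template is slid over the search features; the response peak
    # locates the target.
    c, k, _ = template_feat.shape
    _, h, w = search_feat.shape
    out = np.empty((h - k + 1, w - k + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = search_feat[:, i:i + k, j:j + k]
            out[i, j] = float(np.sum(window * template_feat))
    return out

template = np.random.rand(256, 6, 6).astype(np.float32)    # template features
search = np.random.rand(256, 22, 22).astype(np.float32)    # detection features
response = cross_correlate(template, search)               # (17, 17) response map
peak = np.unravel_index(np.argmax(response), response.shape)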
In one embodiment, the acquisition of the current tracking template may include three cases:
the first case is: when the current tracking template does not exist, acquiring a first reference frame in a video to be detected, and performing target detection on the first reference frame to obtain a first target; and obtaining the current tracking template according to the first target. At this time, the first reference frame in the video to be detected, that is, the first frame of the video to be detected, is obtained, target detection is performed, for example, template detection is performed through yolo, then the detected region is extracted, the detected region is input into a CNN module of a template branch in the siamese rpn network to extract features, and the current tracking template is obtained according to the extracted features.
The second case: when a current tracking template exists, judge whether the current frame is a reference frame; when it is not, the current tracking template is used directly. A reference frame is a frame at which templates are refreshed, for example one reference frame every n frames.
The third case: when the current frame is a reference frame, two kinds of processing are applied to it, possibly in parallel: the frame is tracked normally through the SiamRPN network to obtain the tracked targets, and target detection is performed on it with YOLO to obtain the current targets. The server then calculates the overlap between each current target and the tracked targets: when the overlap is greater than or equal to a preset value, the current tracking template of the corresponding tracked target is updated according to the current target; when the overlap is smaller than the preset value, a new current tracking template is generated according to the current target.
In practical application, a vehicle is likely to change direction while driving, so its appearance changes considerably; if the first detection of a target vehicle were always used as the template frame, the tracking would drift. To solve this, every n frames the IOU between the detected box of each target vehicle in the current frame and the box of each tracked vehicle is calculated; if the IOU of a pair is larger than a threshold (for example 0.8), the detected vehicle and the corresponding tracked vehicle are considered the same target, and template features are re-extracted from the detected box (the detection box is used because it is more accurate than the tracking box) to replace the original template.
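A minimal sketch of this update rule; the 0.8 (same target, refresh template) and 0.1 (no match, new target) thresholds are the examples given in the text, and make_template() is a hypothetical helper extracting template features from a detection box:

SAME_TARGET_IOU = 0.8
NEW_TARGET_IOU = 0.1

def update_templates(detections, tracked_boxes, templates, make_template, iou, next_id):
    # detections: YOLO boxes on a reference frame; tracked_boxes: target id ->
    # current tracking frame; templates: target id -> template, updated in place.
    for det in detections:
        best_id, best_iou = None, 0.0
        for tid, box in tracked_boxes.items():
            v = iou(det, box)
            if v > best_iou:
                best_id, best_iou = tid, v
        if best_id is not None and best_iou >= SAME_TARGET_IOU:
            templates[best_id] = make_template(det)   # same target: refresh template
        elif best_iou < NEW_TARGET_IOU:
            templates[next_id] = make_template(det)   # no match: register new target
            next_id += 1
    return next_id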
Optionally, calculating the overlap between the current target and the tracked target includes: determining the target boundary corresponding to the current target and the tracking boundary corresponding to the tracked target; determining the overlapping part of the two according to the target boundary and the corresponding tracking boundary; calculating the area of the overlapping part, the area of the region where the current target lies, and the area of the region where the tracked target lies; and calculating the overlap degree from those three areas.
Specifically, suppose YOLO target detection is performed on the video once every n frames, and the IOU between each detected target vehicle and every currently tracked target vehicle is calculated. If all of a detected vehicle's IOUs are below a threshold (for example 0.1), this proves the vehicle is a new target, and template features are likewise extracted for it. Record the positions of any two target rectangles as upper-left (x11, y11) and lower-right (x12, y12) for the first, and upper-left (x21, y21) and lower-right (x22, y22) for the second. The logic for computing the IOU is given below:
First, take xA = max(x11, x21) and yA = max(y11, y21); take xB = min(x12, x22) and yB = min(y12, y22).
Next, compute the areas of the two boxes: Area1 = (x12 - x11) * (y12 - y11);
Area2 = (x22 - x21) * (y22 - y21).
Then compute the overlap area: interArea = max(xB - xA, 0) * max(yB - yA, 0).
Finally, the IOU is calculated as IOU = interArea / (Area1 + Area2 - interArea).
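A direct transcription of this logic, with boxes given as (x1, y1, x2, y2) tuples:

def iou(box_a, box_b) -> float:
    x11, y11, x12, y12 = box_a
    x21, y21, x22, y22 = box_b
    xA, yA = max(x11, x21), max(y11, y21)   # upper-left corner of the overlap
    xB, yB = min(x12, x22), min(y12, y22)   # lower-right corner of the overlap
    inter_area = max(xB - xA, 0) * max(yB - yA, 0)
    area1 = (x12 - x11) * (y12 - y11)
    area2 = (x22 - x21) * (y22 - y21)
    return inter_area / (area1 + area2 - inter_area) if inter_area else 0.0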
In this embodiment, tracking of the vehicles in the video is realized through the YOLO and SiamRPN algorithms, improving the accuracy of vehicle tracking.
It should be understood that although the steps in the flowcharts of fig. 2 and fig. 4 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 4 may comprise multiple sub-steps or stages, which need not be completed at the same moment or performed in sequence, but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a target state determination apparatus including: the tracking module 100, the to-be-detected region determining module 200, the driving feature calculating module 300, the judging module 400, the associated region determining module 500 and the state determining module 600, wherein:
the tracking module 100 is configured to track a target to be detected in a video to be detected to obtain a tracking result;
the to-be-detected region determining module 200 is configured to acquire a to-be-detected region corresponding to a to-be-detected video;
the driving feature calculation module 300 is configured to calculate a driving feature of the target to be detected in the area to be detected according to the tracking result of the target to be detected;
the judging module 400 is used for judging whether the area to be detected is blocked or not according to the driving characteristics of the target to be detected in the area to be detected;
the association area determining module 500 is configured to, when the to-be-detected area is blocked, acquire an association area corresponding to the to-be-detected area;
the state determining module 600 is configured to obtain, according to the tracking result of the target to be detected, a target track of the target to be detected in the area to be detected while the area to be detected is blocked, and to obtain the target state of the target to be detected according to the positional relationship among the target track, the area to be detected and the associated area.
In one embodiment, the driving characteristics include a density of the objects to be detected in the area to be detected and/or a driving speed of the objects to be detected in the area to be detected.
In one embodiment, the target state determining apparatus may further include:
the first tracking frame determining module is used for acquiring a tracking frame of the target to be detected, which is obtained by tracking at the current time;
the quantity determining module is used for determining the quantity of the targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
the area acquisition module is used for acquiring the area of the area to be detected;
and the density calculation module is used for calculating according to the number of the targets to be detected in the region to be detected and the area of the region to be detected to obtain the density of the targets to be detected.
In one embodiment, the target state determining apparatus may further include:
the second tracking frame determining module is used for acquiring a tracking frame of the target to be detected, which is obtained by tracking at the current time;
the target position determining module is used for acquiring a target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
the historical position determining module is used for acquiring the position of a target to be detected in the region to be detected in a first preset frame;
the moving distance calculation module is used for obtaining a moving distance according to the position of the target to be detected in the area to be detected and the acquired position of the first preset frame;
and the speed calculation module is used for calculating the running speed according to the moving distance.
In one embodiment, the state determining module 600 includes:
a stop line determining unit for determining a stop line according to the region to be detected and the associated region;
and the target state determining unit is used for obtaining the target state of the target to be detected according to the position relation between the target track to be detected and the stop line.
In one embodiment, the tracking module 100 may include:
a template obtaining unit for obtaining a current tracking template;
the target area determining unit is used for determining a previous frame corresponding to the current frame in the video to be detected and acquiring a target area of a target to be detected in the previous frame;
the expanding unit is used for determining a tracking area corresponding to the target area in the current frame and expanding the tracking area;
and the tracking unit is used for detecting the target to be detected in the expanded tracking area according to the tracking template so as to track the target to be detected corresponding to the tracking template to obtain a tracking result.
In one embodiment, the template obtaining unit includes:
the first target detection subunit is used for acquiring a first reference frame in the video to be detected and performing target detection on the first reference frame to obtain a first target when the current tracking template does not exist;
the first generation subunit is used for obtaining a current tracking template according to the first target;
the judging subunit is used for judging whether the current frame is a reference frame or not when the current tracking template exists;
the reference frame processing subunit is used for carrying out target detection on the reference frame to obtain a current target when the current frame is the reference frame, and tracking the target to be detected in the reference frame according to the current tracking template to obtain a tracking target;
the overlap degree calculation operator unit is used for calculating the overlap degree of the current target and the tracking target;
the updating subunit is used for updating the current tracking template of the corresponding tracking target according to the current target when the overlapping degree is greater than or equal to the preset value;
and the second generation subunit is used for generating a new current tracking template according to the current target when the overlapping degree is smaller than a preset value.
In one embodiment, the overlap calculation subunit includes:
the boundary determining unit is used for determining a target boundary corresponding to the current target and a tracking boundary corresponding to the tracking target;
the overlap part determining unit is used for determining the overlap part of the current target and the tracking target according to the target boundary and the corresponding tracking boundary;
the area calculating unit is used for calculating the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located respectively;
and the overlapping degree calculating unit is used for calculating the overlapping degree according to the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located.
For specific limitations of the target state determination device, reference may be made to the limitations of the target state determination method above, which are not repeated here. Each module in the target state determination device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing tracking results, target states and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a target state determination method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of part of the structure associated with the disclosed aspects and does not limit the computing devices to which they apply; a particular computing device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program: tracking a target to be detected in a video to be detected to obtain a tracking result; acquiring a region to be detected corresponding to the video to be detected; calculating the driving characteristics of the target to be detected in the region to be detected according to the tracking result; judging whether the region to be detected is blocked according to those driving characteristics, and acquiring an associated region corresponding to the region to be detected when it is blocked; and acquiring, according to the tracking result, a target track of the target to be detected in the region to be detected while the region is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the region to be detected and the associated region.
In one embodiment, the driving characteristics involved in the execution of the computer program by the processor include a density of objects to be detected in the area to be detected and/or a driving speed of the objects to be detected in the area to be detected.
In one embodiment, the calculation of the density of the target to be detected in the region to be detected involved in the execution of the computer program by the processor includes: acquiring a tracking frame of a target to be detected, which is tracked at the current time; determining the number of targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the area of a region to be detected; and calculating according to the number of the targets to be detected in the region to be detected and the area of the region to be detected to obtain the density of the targets to be detected.
In one embodiment, the calculation of the travel speed of the object to be detected in the area to be detected involved in the execution of the computer program by the processor includes: acquiring a tracking frame of a target to be detected, which is tracked at the current time; acquiring a target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the position of a target to be detected in a region to be detected in a first preset frame; obtaining a moving distance according to the position of the target to be detected in the area to be detected and the obtained position of the first preset frame; and calculating the running speed according to the moving distance.
In one embodiment, obtaining the target state of the target to be detected according to the positional relationship among the target track, the region to be detected and the associated region, which is achieved when the processor executes the computer program, includes: determining a stop line according to the region to be detected and the associated region; and obtaining the target state of the target to be detected according to the positional relationship between the target track and the stop line.
In one embodiment, the tracking of the target to be detected in the video to be detected by the processor when the processor executes the computer program to obtain the tracking result includes: acquiring a current tracking template; determining a previous frame corresponding to the current frame in a video to be detected, and acquiring a target area of a target to be detected in the previous frame; determining a corresponding tracking area of the target area in the current frame, and expanding the tracking area; and detecting the target to be detected in the expanded tracking area according to the tracking template so as to track the target to be detected corresponding to the tracking template to obtain a tracking result.
In one embodiment, the obtaining the current tracking template, as implemented by the processor executing the computer program, comprises: when the current tracking template does not exist, acquiring a first reference frame in a video to be detected, and performing target detection on the first reference frame to obtain a first target; obtaining a current tracking template according to the first target; when the current tracking template exists, judging whether the current frame is a reference frame; when the current frame is a reference frame, performing target detection on the reference frame to obtain a current target, and tracking the target to be detected in the reference frame according to the current tracking template to obtain a tracking target; calculating the overlapping degree of the current target and the tracking target; when the overlapping degree is larger than or equal to a preset value, updating a current tracking template of the corresponding tracking target according to the current target; and when the overlapping degree is smaller than a preset value, generating a new current tracking template according to the current target.
In one embodiment, the calculating of the degree of overlap of the current target and the tracked target, as implemented by the processor when executing the computer program, comprises: determining a target boundary corresponding to a current target and a tracking boundary corresponding to a tracking target; determining the overlapping part of the current target and the tracking target according to the target boundary and the corresponding tracking boundary; respectively calculating the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located; and calculating the overlapping degree according to the overlapping area of the overlapping part, the area of the area where the current target is located and the area of the area where the tracking target is located.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the steps of: tracking a target to be detected in a video to be detected to obtain a tracking result; acquiring a region to be detected corresponding to the video to be detected; calculating the driving characteristics of the target to be detected in the region to be detected according to the tracking result; judging whether the region to be detected is blocked according to those driving characteristics, and acquiring an associated region corresponding to the region to be detected when it is blocked; and acquiring, according to the tracking result, a target track of the target to be detected in the region to be detected while the region is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the region to be detected and the associated region.
In one embodiment, the driving characteristics involved in the execution of the computer program by the processor include a density of objects to be detected in the area to be detected and/or a driving speed of the objects to be detected in the area to be detected.
In one embodiment, the manner of calculating the density of the object to be detected in the area to be detected involved in the execution of the computer program by the processor comprises: acquiring a tracking frame of a target to be detected, which is tracked at the current time; determining the number of targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the area of a region to be detected; and calculating according to the number of the targets to be detected in the region to be detected and the area of the region to be detected to obtain the density of the targets to be detected.
In one embodiment, the manner of calculating the travel speed of the object to be detected in the area to be detected, to which the computer program is executed by the processor, includes: acquiring a tracking frame of a target to be detected, which is tracked at the current time; acquiring a target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected; acquiring the position of a target to be detected in a region to be detected in a first preset frame; obtaining a moving distance according to the position of the target to be detected in the area to be detected and the obtained position of the first preset frame; and calculating the running speed according to the moving distance.
In one embodiment, the obtaining of the target state of the target to be detected according to the positional relationship among the target track, the area to be detected, and the associated area, as implemented when the computer program is executed by the processor, comprises: determining a stop line according to the area to be detected and the associated area; and obtaining the target state of the target to be detected according to the positional relationship between the target track and the stop line.
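One way to realize this check, sketched under the assumptions that the stop line reduces to a horizontal image line and that the two state labels are merely illustrative, is:

def target_state(track_centres, stop_line_y):
    # track_centres: chronological (x, y) centres of the target's tracking
    # frames while the area to be detected was blocked.
    started_before = track_centres[0][1] < stop_line_y
    ended_after = track_centres[-1][1] >= stop_line_y
    return "crossed" if (started_before and ended_after) else "waiting"

# A centre trajectory moving down across the stop line at y = 50:
print(target_state([(10, 40), (10, 48), (11, 55)], 50))  # crossed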
In one embodiment, the tracking of the target to be detected in the video to be detected to obtain the tracking result, as implemented when the computer program is executed by the processor, comprises: acquiring a current tracking template; determining the previous frame corresponding to the current frame in the video to be detected, and acquiring the target area of the target to be detected in the previous frame; determining the tracking area corresponding to the target area in the current frame, and expanding the tracking area; and detecting the target to be detected in the expanded tracking area according to the tracking template, so as to track the target to be detected corresponding to the tracking template and obtain the tracking result.
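A minimal sketch of the expanded-area search follows, using grayscale numpy arrays and a sum-of-absolute-differences match; the margin size and the matching criterion are assumptions for illustration, and a production tracker would typically use a learned matcher instead.

import numpy as np

def track_in_expanded_area(frame, template, prev_box, margin=16):
    # frame: 2-D grayscale array; template: 2-D crop of the tracked target;
    # prev_box: (x1, y1, x2, y2) target area in the previous frame.
    x1, y1, x2, y2 = prev_box
    h, w = template.shape
    # Expand the tracking area around the previous frame's target area.
    sx1, sy1 = max(0, x1 - margin), max(0, y1 - margin)
    sx2 = min(frame.shape[1], x2 + margin)
    sy2 = min(frame.shape[0], y2 + margin)
    search = frame[sy1:sy2, sx1:sx2].astype(np.float32)
    tmpl = template.astype(np.float32)
    best_score, best_pos = None, None
    for dy in range(search.shape[0] - h + 1):
        for dx in range(search.shape[1] - w + 1):
            score = np.abs(search[dy:dy + h, dx:dx + w] - tmpl).sum()
            if best_score is None or score < best_score:
                best_score, best_pos = score, (sx1 + dx, sy1 + dy)
    if best_pos is None:
        return None  # search window smaller than the template
    bx, by = best_pos
    return (bx, by, bx + w, by + h)  # tracking result in frame coordinates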
In one embodiment, the acquiring of the current tracking template, as implemented when the computer program is executed by the processor, comprises: when no current tracking template exists, acquiring a first reference frame in the video to be detected, and performing target detection on the first reference frame to obtain a first target; obtaining the current tracking template according to the first target; when a current tracking template exists, judging whether the current frame is a reference frame; when the current frame is a reference frame, performing target detection on the reference frame to obtain a current target, and tracking the target to be detected in the reference frame according to the current tracking template to obtain a tracking target; calculating the degree of overlap between the current target and the tracking target; when the degree of overlap is greater than or equal to a preset value, updating the current tracking template of the corresponding tracking target according to the current target; and when the degree of overlap is less than the preset value, generating a new current tracking template according to the current target.
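The template bookkeeping of this embodiment can be sketched as follows; the dictionary keyed by integer track identifiers is an assumption made for illustration, and overlap_degree stands for the degree-of-overlap computation sketched after the next embodiment.

def update_tracking_templates(templates, tracked_boxes, detections,
                              frame, overlap_degree, preset=0.5):
    # templates: {track_id: image crop}; tracked_boxes: {track_id: box};
    # detections: current targets detected on the reference frame.
    next_id = max(templates, default=-1) + 1
    for det in detections:
        crop = frame[det[1]:det[3], det[0]:det[2]]
        match = next((tid for tid, box in tracked_boxes.items()
                      if overlap_degree(det, box) >= preset), None)
        if match is not None:
            templates[match] = crop    # overlap >= preset: update the template
        else:
            templates[next_id] = crop  # overlap < preset: new current template
            next_id += 1
    return templates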
In one embodiment, the calculating of the degree of overlap between the current target and the tracking target, as implemented when the computer program is executed by the processor, comprises: determining a target boundary corresponding to the current target and a tracking boundary corresponding to the tracking target; determining the overlapping portion of the current target and the tracking target according to the target boundary and the corresponding tracking boundary; respectively calculating the area of the overlapping portion, the area of the region where the current target is located, and the area of the region where the tracking target is located; and calculating the degree of overlap according to these three areas.
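For concreteness, the following sketch combines the three areas as intersection over union, which is one common choice; the embodiment itself leaves the exact combination open.

def overlap_degree(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); box_a is the current target and box_b the
    # tracking target. The boundaries give the overlapping portion directly.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)           # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])  # current target
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])  # tracking target
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(overlap_degree((0, 0, 10, 10), (5, 5, 15, 15)))  # about 0.143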
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, or optical storage. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that involves no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A method for target state determination, the method comprising:
tracking a target to be detected in a video to be detected to obtain a tracking result;
acquiring an area to be detected corresponding to the video to be detected;
calculating the driving characteristics of the target to be detected in the area to be detected according to the tracking result of the target to be detected;
judging whether the area to be detected is blocked or not according to the driving characteristics of the target to be detected in the area to be detected;
when the area to be detected is blocked, acquiring an associated area corresponding to the area to be detected;
and acquiring, according to the tracking result of the target to be detected, the target track of the target to be detected in the area to be detected while the area to be detected is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected, and the associated area.
2. The target state determination method according to claim 1, characterized in that the driving characteristics include a density of the target to be detected in the area to be detected and/or a driving speed of the target to be detected in the area to be detected.
3. The target state determination method according to claim 2, wherein the density of the target to be detected in the area to be detected is calculated by:
acquiring the tracking frame of the currently tracked target to be detected;
determining the number of targets to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
acquiring the area of the area to be detected;
and calculating the density of the targets to be detected from the number of targets to be detected in the area to be detected and the area of the area to be detected.
4. The target state determination method according to claim 2, wherein the driving speed of the target to be detected in the area to be detected is calculated by:
acquiring the tracking frame of the currently tracked target to be detected;
determining the target to be detected in the area to be detected according to the position of the tracking frame and the position of the area to be detected;
acquiring the position of the target to be detected in the area to be detected at a first preset frame;
obtaining a moving distance according to the current position of the target to be detected in the area to be detected and the acquired position at the first preset frame;
and calculating the driving speed according to the moving distance.
5. The target state determination method according to any one of claims 1 to 4, wherein the obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected, and the associated area comprises:
determining a stop line according to the area to be detected and the associated area;
and obtaining the target state of the target to be detected according to the positional relationship between the target track and the stop line.
6. The target state determination method according to any one of claims 1 to 4, wherein the tracking the target to be detected in the video to be detected to obtain the tracking result comprises:
acquiring a current tracking template;
determining a previous frame corresponding to the current frame in the video to be detected, and acquiring a target area of a target to be detected in the previous frame;
determining a corresponding tracking area of the target area in the current frame, and expanding the tracking area;
and detecting the target to be detected in the expanded tracking area according to the tracking template, so as to track the target to be detected corresponding to the tracking template and obtain the tracking result.
7. The target state determination method according to claim 6, wherein the acquiring the current tracking template comprises:
when no current tracking template exists, acquiring a first reference frame in the video to be detected, and performing target detection on the first reference frame to obtain a first target;
obtaining a current tracking template according to the first target;
when the current tracking template exists, judging whether the current frame is a reference frame;
when the current frame is a reference frame, performing target detection on the reference frame to obtain a current target, and tracking a target to be detected in the reference frame according to the current tracking template to obtain a tracking target;
calculating the degree of overlap between the current target and the tracking target;
when the degree of overlap is greater than or equal to a preset value, updating the current tracking template of the corresponding tracking target according to the current target;
and when the degree of overlap is less than the preset value, generating a new current tracking template according to the current target.
8. The target state determination method according to claim 7, wherein the calculating the degree of overlap between the current target and the tracking target comprises:
determining a target boundary corresponding to the current target and a tracking boundary corresponding to the tracking target;
determining the overlapping portion of the current target and the tracking target according to the target boundary and the corresponding tracking boundary;
respectively calculating the area of the overlapping portion, the area of the region where the current target is located, and the area of the region where the tracking target is located;
and calculating the degree of overlap according to the area of the overlapping portion, the area of the region where the current target is located, and the area of the region where the tracking target is located.
9. A target state determination apparatus, characterized in that the apparatus comprises:
the tracking module is used for tracking a target to be detected in the video to be detected to obtain a tracking result;
the to-be-detected area determining module is used for acquiring the area to be detected corresponding to the video to be detected;
the driving characteristic calculation module is used for calculating the driving characteristics of the target to be detected in the area to be detected according to the tracking result of the target to be detected;
the judging module is used for judging whether the area to be detected is blocked according to the driving characteristics of the target to be detected in the area to be detected;
the associated area determining module is used for acquiring the associated area corresponding to the area to be detected when the area to be detected is blocked;
and the state determining module is used for acquiring, according to the tracking result of the target to be detected, the target track of the target to be detected in the area to be detected while the area to be detected is blocked, and obtaining the target state of the target to be detected according to the positional relationship among the target track, the area to be detected, and the associated area.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202110215168.4A 2021-02-24 2021-02-24 Target state determination method and device, computer equipment and storage medium Pending CN113053104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110215168.4A CN113053104A (en) 2021-02-24 2021-02-24 Target state determination method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113053104A true CN113053104A (en) 2021-06-29

Family

ID=76509166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110215168.4A Pending CN113053104A (en) 2021-02-24 2021-02-24 Target state determination method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113053104A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203311638U (en) * 2013-05-31 2013-11-27 上海亚视信息科技有限公司 Control system for city road crossing trailing and jams
CN106846797A (en) * 2017-01-19 2017-06-13 浙江宇视科技有限公司 Vehicle peccancy detection method and device
CN107067734A (en) * 2017-04-11 2017-08-18 山东大学 A kind of urban signal controlling intersection vehicles are detained peccancy detection method
CN110634153A (en) * 2019-09-19 2019-12-31 上海眼控科技股份有限公司 Target tracking template updating method and device, computer equipment and storage medium
CN110853076A (en) * 2019-11-08 2020-02-28 重庆市亿飞智联科技有限公司 Target tracking method, device, equipment and storage medium
CN111402612A (en) * 2019-01-03 2020-07-10 北京嘀嘀无限科技发展有限公司 Traffic incident notification method and device
CN112132071A (en) * 2020-09-27 2020-12-25 上海眼控科技股份有限公司 Processing method, device and equipment for identifying traffic jam and storage medium


Similar Documents

Publication Publication Date Title
WO2021134441A1 (en) Automated driving-based vehicle speed control method and apparatus, and computer device
JP2020052694A (en) Object detection apparatus, object detection method, and computer program for object detection
WO2022188663A1 (en) Target detection method and apparatus
US20160202355A1 (en) Object detecting device, radar device, and object detection method
JP2021165080A (en) Vehicle control device, vehicle control method, and computer program for vehicle control
Liu et al. Vision-based real-time lane marking detection and tracking
CN111937036A (en) Method, apparatus, and computer-readable storage medium having instructions for processing sensor data
JP2021026644A (en) Article detection apparatus, article detection method, and article-detecting computer program
CN111213153A (en) Target object motion state detection method, device and storage medium
CN111753639A (en) Perception map generation method and device, computer equipment and storage medium
WO2021223116A1 (en) Perceptual map generation method and apparatus, computer device and storage medium
US20230278587A1 (en) Method and apparatus for detecting drivable area, mobile device and storage medium
CN112733598A (en) Vehicle law violation determination method and device, computer equipment and storage medium
CN114694078A (en) Traffic behavior judgment method based on multi-target tracking
CN111383455A (en) Traffic intersection object flow statistical method, device, computer equipment and medium
CN113160272B (en) Target tracking method and device, electronic equipment and storage medium
US20220270480A1 (en) Signal control apparatus and method based on reinforcement learning
CN113178074A (en) Traffic flow machine learning modeling system and method applied to vehicle
CN112580565A (en) Lane line detection method, lane line detection device, computer device, and storage medium
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
CN109344776B (en) Data processing method
CN113053104A (en) Target state determination method and device, computer equipment and storage medium
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115236672A (en) Obstacle information generation method, device, equipment and computer readable storage medium
CN112257485A (en) Object detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned

Effective date of abandoning: 20230602