CN110163125B - Real-time video identification method based on track prediction and size decision - Google Patents
Info
- Publication number
- CN110163125B (application CN201910367702.6A)
- Authority
- CN
- China
- Prior art keywords
- identification
- information
- target
- result
- cache
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/63—Scene text, e.g. street names
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a real-time video identification method based on track prediction and size decision, which comprises the following steps: identifying the characteristics of the object captured by the video shooting tool by using a video identification algorithm and outputting the identified target information; checking whether the identified target information meets the preset confidence requirement; carrying out a stability check; predicting the motion track of the target object; and, when the motion track of the target object, the number of cached results and the confidence of the recognition result meet preset service triggering conditions, integrating the current recognition result with the cached results in the system and selecting the optimal, stable recognition result as output. Without changing the core video identification algorithm, the invention improves the stability and accuracy of the identification result by optimizing over multiple identification results, and improves the efficiency and accuracy of video monitoring and entrance/exit management.
Description
Technical Field
The invention relates to optimization of an output result of a real-time video recognition system, in particular to a real-time video recognition method based on track prediction and size decision.
Background
With the rapid development of artificial intelligence technology, more and more intelligent embedded video front ends are appearing in the traditional video monitoring industry. These video front ends are not only responsible for video data acquisition, but also take on the roles of object perception and intelligent identification. Embedded video monitoring equipment generally comprises an algorithm module for identifying a target object, a cache management module for identification results, a local storage module for identification results, and a communication module supporting submission of identification results. Among these modules, the algorithm module for target object identification is responsible for analyzing video data and realizing tracking and feature identification of the target object. The cache management module is responsible for caching the results produced by the algorithm module and, according to the relevant strategy, either transmitting the identification results to a management system or upper computer through the communication module, or keeping them in the device's local storage through the local storage module.
As the most critical module in the monitoring equipment, the algorithm module for target object identification determines the overall identification capability of the equipment, and is the key for the success of the related equipment. However, no matter how the algorithm theory develops and how the algorithm modules optimize and improve, one hundred percent accurate identification of the target object cannot be achieved.
In order to improve the accuracy of the recognition result, patent CN201510779639, "A secondary optimization method for pedestrian re-recognition results in space-time constrained surveillance video", improves recognition accuracy by re-recognition, calculating the joint probability of all path combinations among multiple cameras. That method needs to operate simultaneously on the video information and recognition results of N devices, and does not address improving the recognition results of a single device. In patent CN201510443499, "A license plate recognition method and device", the license plate information recognized by the algorithm module is compared with white list information stored in the equipment, and the accuracy of license plate recognition is improved through a preset white list. However, in a considerable number of monitoring scenarios, such as license plate recognition at road gates or intrusion detection in defense areas, no white list information exists.
Within the video range covered by the monitoring device, a monitored object entering the range is often recognized by the device multiple times. Due to the limitation of its storage and communication resources, the device cannot report all results to an upper computer or service management server and can only selectively report identification results according to preset parameters. Therefore, how to use the computing resources and recognition results of the front-end device to select the most accurate recognition results from all captured results has a very important influence on the success of the monitoring device as a whole. The overall recognition accuracy of the equipment can be improved by optimizing the recognition results on the front-end device itself.
Disclosure of Invention
The invention aims to provide a real-time video identification method based on track prediction and size decision. For existing video monitoring front-end equipment, and on the basis of its existing computing resources and identification results, a method for selecting the optimal identification result is provided: the moving track of a target object is used to predict the moving direction of the object in the video picture, and the optimal identification position and identification size are selected. On this basis, the existing recognition results are integrated and the best recognition result is selected. For front-end real-time video monitoring equipment, the method solves the problem of optimizing the intelligent identification result, realizes prediction and positioning of the optimal identification position, improves the identification accuracy of the equipment, and meets the real-time requirements of engineering applications.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a real-time video identification method based on track prediction and size decision comprises the following steps:
1) identifying the characteristics of an object captured by a video shooting tool by utilizing a video identification algorithm, and outputting identified target information, wherein the target information comprises identification time, target characteristic information and position information of a target in a current picture;
2) checking whether the target information identified in the step 1) meets the preset confidence requirement, and if not, directly discarding; otherwise, caching the identified target information according to the time sequence, and marking the information as an uncommitted state;
3) performing stability check according to the identification result originally existing in the system and the identification result in the uncommitted state in the cache obtained in the step 2); checking whether the cache number meets the preset threshold requirement and checking whether the object form meets the preset requirement; when the threshold requirement is met, triggering a submission process of the identification result, notifying the identification result to the service, and marking the corresponding information as a submitted state; otherwise, entering step 4);
4) obtaining the position information of the target object according to the identification result originally existing in the system and the identification result in the uncommitted state in the cache after the processing of the step 3), and predicting the motion track of the target object;
5) triggering a submission process of an identification result by using the object motion track predicted in the step 4), when the target object motion track meets a preset service triggering condition, notifying the identification result to a service, and marking corresponding information as a submitted state; when the target related characteristic information does not accord with the preset service triggering condition, the step 6) is carried out for processing;
6) and periodically detecting the uncommitted identification items in the cache queue, when a certain identification item does not meet the service direct triggering condition but meets the basic requirements preset in the step 2-5 and the cache time exceeds a preset time threshold, triggering the submission process of the identification result, notifying the identification result to the service and marking the corresponding information as a submitted state.
The position information in the step 1) comprises coordinate information of the target relative to the upper left corner of the screen and size information of the target.
The specific steps of caching the identified target information according to the time sequence in the step 2) and marking the information as an uncommitted state are as follows:
(1) searching whether an identification item meeting the threshold requirement exists in a cache queue or not for the latest target identification information;
(2) when an identification item meeting the threshold requirement exists in the cache queue, comparing the identification times of the two items; when the time difference exceeds a preset threshold range, caching the latest target identification information and marking it as uncommitted; when the time difference does not exceed the preset threshold, entering the next step;
(3) checking the mark in the cache entry, and directly discarding the latest result when the cache entry is in the submitted state; when the cache entry is in the uncommitted state, the latest identification information is also marked as uncommitted and cached.
The step 3) of checking the number of the buffers is to improve the accuracy of the recognition result through multiple recognition, and for the recognized object, the form value is calculated according to the following equation:
Dimension = w1*W + w2*H + w3*Area
that is, the form value of the object is the weighted sum of the object's width W, height H and Area, where w1, w2 and w3 are preset weight values.
The specific steps of the step 4) are as follows:
a) for each identification item, collecting the position information of the item; the position information adopts the center coordinate of the target object or the upper left corner or the lower right corner coordinate of the target object;
b) performing least square fitting of a function on the position information of the target object by using the time of target information identification;
c) and predicting subsequent position information of the target object by utilizing the fitted function to form a motion track.
The motion track of the target object in step 5) meeting a preset service triggering condition means that the target object is predicted to subsequently appear within a specified range or to leave the video range; the video picture is divided into a plurality of virtual defense areas, and each defense area is provided with its own stability threshold value.
the operation marked as committed in step 6) includes all early identification results in the cache that are consistent with the entry.
The submission process of the identification result reports one of the identification results to the service as the stable identification result. The triggering operation is carried out only after the optimal identification result has been selected by integrating the identification results originally existing in the current system with the identification results in the cache; the submitted result is defined as the stable identification result. The decision steps for the stable recognition result are as follows:
(a) determining the number of features to be submitted: check the feature count of each entry and select the mode of the feature counts as the number of features to be submitted; when there are multiple modes, take the one closest to the current time as the length of the submitted feature value;
(b) loop over the feature positions and select a specific feature value for each position: count the specific feature values at the corresponding position in each entry and select their mode as the feature value to be submitted; when there are multiple modes, take the one closest to the current time as the feature value to be submitted.
The invention has the beneficial effects that: the invention provides a method for optimizing the identification result, aimed at the problem of identification result output in existing video monitoring front-end equipment. On the basis of predicting the optimal recognition position and size, the existing recognition results are integrated and the optimal stable recognition result is decided. The method helps the video front end avoid identifying target objects that are far away and therefore small in the monitoring picture, reduces the possibility of errors in the identification algorithm module, improves the accuracy of video front-end identification, and meets the real-time requirements of video front-end equipment.
Drawings
Fig. 1 is a schematic diagram of an application of the embodiment of the present invention to a license plate recognition camera, where white boxes in the diagram represent candidate license plate positions, from which an optimal reported license plate is selected by the method of the present invention;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a flowchart of step 2) of caching identified target information according to a chronological order and marking the information as an uncommitted state in accordance with the present invention;
FIG. 4 is a flowchart illustrating target object trajectory prediction according to the position information in the cached entries over time in step 4) of the present invention;
fig. 5 is a schematic diagram of arming of the target object motion trajectory service triggering condition in step 5) of the present invention;
fig. 6 is a flowchart of decision making for stabilizing the recognition result when the submission flow of the recognition result is triggered in step 6) of the present invention.
Detailed Description
The real-time video identification method based on track prediction and size decision can be applied to various different intelligent video monitoring front ends, including license plate identification systems, car face identification systems, vehicle feature identification systems, pedestrian feature identification systems and the like based on embedded cameras. The present invention will be described in further detail below with reference to the accompanying drawings by taking the license plate recognition as an example.
Fig. 1 shows an embodiment of the method for real-time video recognition based on trajectory prediction and size decision on an embedded license plate recognition camera. Each blue rectangular box in the figure represents a position where the corresponding license plate was recognized once, the leftmost rectangular box being the first recognition; the car enters the picture from the upper left corner and moves toward the lower right. In the process, the size of the license plate gradually increases and the character textures on the license plate gradually become clear. Given these size and texture change characteristics, the algorithm module in the license plate camera is prone to false recognition when the license plate has just entered the recognition area. Therefore, when this embodiment is applied to the license plate camera, the driving direction, optimal recognition position and optimal size of the license plate can be predicted from the license plate position information recognized by the algorithm module, thereby improving the recognition capability of the camera.
Fig. 2 is a flowchart illustrating a method for real-time video recognition based on trajectory prediction and size decision according to this embodiment, where the method includes:
1) identifying the characteristics of the object captured by the embedded license plate identification camera by using a video identification algorithm, and outputting identified target information, wherein the target information comprises identification time, target characteristic information and position information of the target in the current picture. The position information comprises coordinate information of the target relative to the upper left corner of the screen and size information of the target.
For the embedded license plate recognition camera used in this embodiment, the target feature information includes the number of characters of the license plate, the text content, the recognition confidence, the color, and the license plate type code;
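As an illustrative sketch only (the field names below are assumptions, not part of the patent text), the target information described above could be represented in Python as follows; the later sketches in this description reuse this structure:

```python
from dataclasses import dataclass

@dataclass
class PlateRecognition:
    """One recognition result from the algorithm module (illustrative fields only)."""
    timestamp: float        # identification time, in seconds
    text: str               # recognized license plate characters
    confidence: float       # recognition confidence in [0, 1]
    color: str              # license plate color code
    plate_type: str         # license plate type code
    x: float                # left coordinate relative to the upper left corner of the frame
    y: float                # top coordinate relative to the upper left corner of the frame
    width: float            # license plate width in pixels
    height: float           # license plate height in pixels
    submitted: bool = False # False = cached in the uncommitted state, True = already reported
```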
2) checking whether the recognition result meets the preset confidence requirement, if not, directly discarding; otherwise, caching the identified target information according to the time sequence, and marking the information as an uncommitted state; wherein
As shown in fig. 3, which is a flowchart of step 2), the specific steps of caching the identified target information according to the time sequence and marking the information as an uncommitted state are as follows:
2.1 searching whether the latest target identification information has an identification item meeting the threshold value requirement in the cache queue;
2.2 when the identification items meeting the threshold requirement exist in the cache queue, comparing the identification time of the two items; if the time difference exceeds a preset threshold range, caching the latest target identification information; if the time difference does not exceed the preset threshold value, entering the next step;
2.3 checking the mark in the cache entry, and directly discarding the latest result when the cache entry is in the submitted state; when the cache entry is in the uncommitted state, the latest identification information is also marked as uncommitted and cached.
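A minimal sketch of the caching rules of steps 2.1-2.3, reusing the PlateRecognition structure above; the match criterion (same plate text) and the time threshold are assumptions used only to illustrate the flow:

```python
from typing import List, Optional

TIME_GAP_THRESHOLD = 3.0  # seconds; an assumed value, a real device would use its configured parameter

def find_matching_entry(cache: List[PlateRecognition], new: PlateRecognition) -> Optional[PlateRecognition]:
    """Step 2.1: search the cache queue for an entry matching the latest recognition."""
    for item in reversed(cache):           # newest entries first
        if item.text == new.text:
            return item
    return None

def cache_recognition(cache: List[PlateRecognition], new: PlateRecognition) -> None:
    match = find_matching_entry(cache, new)
    if match is None:
        cache.append(new)                  # no matching entry: cache as a new uncommitted item
    elif abs(new.timestamp - match.timestamp) > TIME_GAP_THRESHOLD:
        cache.append(new)                  # step 2.2: time gap too large, treat as a new capture
    elif match.submitted:
        pass                               # step 2.3: target already submitted, discard the new result
    else:
        cache.append(new)                  # step 2.3: cache alongside the existing uncommitted entries
```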
3) Performing a stability check according to the identification results existing in the system and the identification results in the uncommitted state in the cache obtained in step 2); checking whether the number of cached results meets the preset threshold requirement and whether the target object form meets the preset requirement. If the threshold requirements are met, triggering the submission process of the identification result, notifying the identification result to the service, and marking the corresponding information as the submitted state; otherwise, entering step 4). In this embodiment, for the embedded license plate recognition camera, the morphological check of the target object verifies whether the recognized license plate position and size reach preset thresholds, so as to ensure that the license plate is close enough to the camera and large enough. When the target license plate reaches the thresholds, the clarity of the license plate meets the requirement and the license plate algorithm module can, to a great extent, correctly recognize the license plate information;
in the step 3), the checking of the number of the buffers is to improve the accuracy of the recognition result through multiple recognition, and when the number meets the requirement of the threshold, the subsequent recognition result submission process has a better basis, so that the correct recognition result can be preferably selected. For the identified object, the shape value is calculated according to the following equation:
Dimension = w1*W + w2*H + w3*Area
that is, the form value of the object is the weighted sum of the object's width W, height H and Area, where w1, w2 and w3 are preset weight values.
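A minimal sketch of the form-value computation and of the stability check of step 3), continuing the sketches above; the weight values and thresholds below are assumptions for illustration only:

```python
from typing import List

def form_value(width: float, height: float,
               w1: float = 0.3, w2: float = 0.3, w3: float = 0.4) -> float:
    """Dimension = w1*W + w2*H + w3*Area, with assumed preset weight values."""
    return w1 * width + w2 * height + w3 * (width * height)

def stability_check(cache: List[PlateRecognition],
                    count_threshold: int = 3,
                    form_threshold: float = 2000.0) -> bool:
    """Step 3: submit only when enough results are cached and the newest target is large enough."""
    pending = [r for r in cache if not r.submitted]
    if len(pending) < count_threshold:
        return False
    newest = pending[-1]
    return form_value(newest.width, newest.height) >= form_threshold
```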
4) Obtaining the position information of the target object according to the identification result originally existing in the system and the identification result in the uncommitted state in the cache after the processing of the step 3), and predicting the motion track of the target object;
fig. 4 is a flowchart of step 4), in which the position information of the target object is obtained according to the current recognition result and the recognition result in the uncommitted state in the cache, and the motion trajectory of the target object is predicted. The method comprises the following specific steps:
4.1 for each identified item, collecting the position information of the item; the position information can be the center coordinate of the target object, and can also be the upper left corner or lower right corner coordinate of the target object; for the embedded license plate recognition camera, the collected information should include the coordinates of the upper left corner, the lower right corner and the center coordinates of the license plate, so that a plurality of coordinate sets can be formed;
4.2 performing least-squares fitting of a function to the position information of the target object against the target identification times; the fitting function may be a polynomial, i.e. f(x) = a + bx + cx^2, but is not limited to polynomials. For any position coordinate set A = {(x_i, y_i), i = 1, 2, ..., n} and its corresponding time set T = {t_i, i = 1, 2, ..., n}, the least-squares method is applied to the abscissas and ordinates separately; the fitting results X(t_i) and Y(t_i) give the position where the target object will appear at the next moment. Here a, b and c are parameters determined by the least-squares method, x denotes the abscissa, y the ordinate, and t the time;
4.3 predicting the subsequent position information of the target object by using the function fitted in 4.2, forming the motion track.
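A minimal sketch of steps 4.2-4.3, using numpy's polynomial least-squares fit as one possible implementation; the quadratic degree and the sample coordinates are assumptions for illustration:

```python
import numpy as np

def predict_position(times, xs, ys, t_next, degree=2):
    """Fit x(t) and y(t) separately by least squares (step 4.2) and
    evaluate the fitted polynomials at a future time t_next (step 4.3)."""
    t = np.asarray(times, dtype=float)
    coeff_x = np.polyfit(t, np.asarray(xs, dtype=float), degree)   # least-squares fit of x(t)
    coeff_y = np.polyfit(t, np.asarray(ys, dtype=float), degree)   # least-squares fit of y(t)
    return float(np.polyval(coeff_x, t_next)), float(np.polyval(coeff_y, t_next))

# Example: a license plate moving from the upper left toward the lower right of the frame
times = [0.0, 0.2, 0.4, 0.6]
xs = [100, 180, 270, 370]
ys = [80, 150, 230, 320]
print(predict_position(times, xs, ys, t_next=0.8))
```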
5) Triggering a submission process of an identification result if the predicted movement track of the target object in the step 4) meets a preset service triggering condition, reporting the identification result to a service, and marking corresponding information as a submitted state; when the target related characteristic information does not accord with the preset service triggering condition, the step 6) is carried out for processing;
wherein the motion trajectory of the target object in the step 5) meets a preset service trigger condition, which means that the target object may appear in a specified range or depart from a video range subsequently.
As shown in fig. 5, which is a schematic diagram of video arming, the video picture is divided into a plurality of defense areas, and different areas have different threshold parameters. Generally speaking, when the target object is moving from the outside toward the inside of the picture, it is predicted that the system can continue to wait for the optimal position to appear. When the target object is moving from the inside toward the outside but is still in the first or second defense area, the system can likewise continue to wait for a new recognition result from the algorithm module. If the target object is already in the third defense area, the submission process of the identification result should be triggered at that point.
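A simplified sketch of the arming check just described; the three-zone layout along the direction of travel and the inward/outward heuristic are assumptions drawn from this description, not the patent's exact zone geometry:

```python
def should_submit(predicted_y: float, frame_height: float, moving_outward: bool) -> bool:
    """Divide the frame into three defense areas along the direction of travel (assumed layout).
    Areas 1 and 2: keep waiting for better recognitions.
    Area 3 (closest to where the target leaves the frame): trigger submission
    when the target is predicted to keep moving outward."""
    area = min(3, int(predicted_y // (frame_height / 3)) + 1)
    if not moving_outward:
        return False        # target still approaching: wait for the optimal position
    return area == 3        # target about to leave the picture: report the best cached result
```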
6) Periodically detecting the identification items in the uncommitted state in the cache queue, when a certain identification item does not meet the service direct triggering condition but meets the basic requirements preset in the step 2-5 and the cache time exceeds a preset time threshold, triggering the submission process of the identification result, notifying the service of the identification result, and marking the corresponding information as the submitted state; the operation marked as committed here includes all early identification results in the cache that are consistent with the entry.
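A minimal sketch of the periodic timeout check in step 6), continuing the PlateRecognition sketch above; the timeout value is an assumption:

```python
import time
from typing import List, Optional

CACHE_TIMEOUT = 5.0  # seconds; an assumed value for illustration

def flush_stale_entries(cache: List[PlateRecognition], now: Optional[float] = None) -> List[PlateRecognition]:
    """Step 6: submit uncommitted entries whose cache time exceeds the timeout,
    even though no direct trigger condition has fired, and mark them as submitted."""
    now = time.time() if now is None else now
    flushed = []
    for item in cache:
        if not item.submitted and now - item.timestamp > CACHE_TIMEOUT:
            item.submitted = True       # report to the service and mark as submitted
            flushed.append(item)
    return flushed
```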
In this embodiment, the submission process reports one of the identification results to the service as the stable identification result. The triggering operation submits the identification result only after the optimal identification result has been selected by integrating the identification results originally existing in the current system with the identification results in the cache; the submitted result is defined as the stable identification result. The decision steps for the stable recognition result are as follows:
(a) determining the number of features to be submitted: check the feature count of each entry and select the mode of the feature counts as the number of features to be submitted; when there are multiple modes, take the one closest to the current time as the length of the submitted feature value;
(b) loop over the feature positions and select a specific feature value for each position: count the specific feature values at the corresponding position in each entry and select their mode as the feature value to be submitted; when there are multiple modes, take the one closest to the current time as the feature value to be submitted.
As shown in fig. 6, which is a decision flow chart for the stable recognition result, the current recognition result and the recognition results in the cache are integrated, and the optimal recognition result is selected for submission; this result is the stable recognition result. Taking the license plate camera as an example, suppose the results given by the recognition algorithm module are as follows:
{ 粤B01345, 新BD13456, 粤BD13456, 粤B01345, 粤BD13456, 粤BD13456 }
In the first step, the length of the license plate is decided. Among the input there are 2 license plates of length 7 and 4 license plates of length 8, so 8 is the mode and the final length is 8.
Next, the specific content of the license plate is decided. The province character is analysed first: although both 粤 (Yue) and 新 (Xin) appear, 粤 appears 5 times and is therefore the mode, so the first character is 粤. The subsequent characters are determined in the same way, and the final stable recognition result is "粤BD13456".
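A minimal sketch of this two-step mode vote; the candidate list below is reconstructed from the counts in the example (2 plates of length 7, 4 of length 8, with 粤 appearing 5 times), and the tie-breaking rule assumes candidates are ordered oldest to newest:

```python
from collections import Counter

def vote(values):
    """Return the mode of values; on a tie, prefer the value seen most recently."""
    counts = Counter(values)
    best = max(counts.values())
    tied = {v for v, c in counts.items() if c == best}
    for v in reversed(values):          # values are ordered oldest first, newest last
        if v in tied:
            return v

def stable_result(candidates):
    """Step (a): the submitted length is the mode of the candidate lengths.
    Step (b): each character position is the mode of the characters seen at that
    position across all cached candidates long enough to have it."""
    length = vote([len(c) for c in candidates])
    return "".join(vote([c[i] for c in candidates if len(c) > i]) for i in range(length))

plates = ["粤B01345", "新BD13456", "粤BD13456", "粤B01345", "粤BD13456", "粤BD13456"]
print(stable_result(plates))   # -> 粤BD13456
```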
This embodiment provides a real-time video identification method based on track prediction and size decision, aimed at the problem of identification result output in existing video monitoring front-end equipment. On the basis of predicting the optimal recognition position and size, the existing recognition results are integrated and the optimal stable recognition result is decided. The method helps the video front end avoid identifying target objects that are far away and therefore small in the monitoring picture, reduces the possibility of errors in the identification algorithm module, improves the accuracy of video front-end identification, and meets the real-time requirements of video front-end equipment.
Claims (8)
1. A real-time video identification method based on track prediction and size decision, characterized by comprising the following steps:
1) identifying the characteristics of an object captured by a video shooting tool by utilizing a video identification algorithm, and outputting identified target information, wherein the target information comprises identification time, target characteristic information and position information of a target in a current picture;
2) checking whether the target information identified in the step 1) meets the preset confidence requirement, and if not, directly discarding; otherwise, caching the identified target information according to the time sequence, and marking the information as an uncommitted state;
3) performing stability check according to the identification result originally existing in the system and the identification result in the uncommitted state in the cache obtained in the step 2); checking whether the cache number meets the preset threshold requirement and checking whether the object form meets the preset requirement; when the cache number reaches the threshold requirement, triggering the submission process of the identification result, notifying the identification result to the service, and marking the corresponding information as the submitted state; otherwise, entering step 4);
4) obtaining the position information of the target object according to the identification result originally existing in the system and the identification result in the uncommitted state in the cache after the processing of the step 3), and predicting the motion track of the target object;
5) triggering a submission process of an identification result by using the object motion track predicted in the step 4), when the target object motion track meets a preset service triggering condition, notifying the identification result to a service, and marking corresponding information as a submitted state; when the target related characteristic information does not accord with the preset service triggering condition, the step 6) is carried out for processing;
6) and periodically detecting the uncommitted identification items in the cache queue, when a certain identification item does not meet the service direct triggering condition but meets the basic requirements preset in the step 2-5 and the cache time exceeds a preset time threshold, triggering the submission process of the identification result, notifying the identification result to the service and marking the corresponding information as a submitted state.
2. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the position information in the step 1) comprises coordinate information of the target relative to the upper left corner of the screen and size information of the target.
3. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the specific steps of caching the identified target information according to the time sequence in the step 2) and marking the information as an uncommitted state are as follows:
(1) searching whether an identification item meeting the threshold requirement exists in a cache queue or not for the latest target identification information;
(2) when an identification item meeting the threshold requirement exists in the cache queue, comparing the identification times of the two items; when the time difference exceeds a preset threshold range, caching the latest target identification information and marking it as uncommitted; when the time difference does not exceed the preset threshold, entering the next step;
(3) checking the mark in the cache entry, and directly discarding the latest result when the cache entry is in the submitted state; when the cache entry is in the uncommitted state, the latest identification information is also marked as uncommitted and cached.
4. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the step 3) of checking the number of the buffers is to improve the accuracy of the recognition result through multiple recognition, and for the recognized object, the form value is calculated according to the following equation:
Dimension = w1*W + w2*H + w3*Area
that is, the form value of the object is the weighted sum of the object's width W, height H and Area, where w1, w2 and w3 are preset weight values.
5. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the specific steps of the step 4) are as follows:
a) for each identification item, collecting the position information of the item; the position information adopts the center coordinate of the target object or the upper left corner or the lower right corner coordinate of the target object;
b) performing least square fitting of a function on the position information of the target object by using the time of target information identification;
c) and predicting subsequent position information of the target object by utilizing the fitted function to form a motion track.
6. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the motion track of the target object in the step 5) meets a preset service triggering condition, wherein the service triggering condition refers to that the target object can be subsequently present in a specified range or be out of a video range; the video picture is divided into a plurality of virtual defense areas, each defense area is provided with a certain stability threshold value,
7. the method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the operation marked as committed in step 6) includes all early identification results in the cache that are consistent with the entry.
8. The method of claim 1, wherein the trajectory prediction and size decision-based real-time video recognition method comprises: the submission process of the identification result reports one of the identification results to the service as the stable identification result; the triggering operation is carried out only after the optimal identification result has been selected by integrating the identification results originally existing in the current system with the identification results in the cache, and the submitted result is defined as the stable identification result; the decision steps for the stable recognition result are as follows:
(a) determining the number of features to be submitted: check the feature count of each entry and select the mode of the feature counts as the number of features to be submitted; when there are multiple modes, take the one closest to the current time as the length of the submitted feature value;
(b) loop over the feature positions and select a specific feature value for each position: count the specific feature values at the corresponding position in each entry and select their mode as the feature value to be submitted; when there are multiple modes, take the one closest to the current time as the feature value to be submitted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910367702.6A CN110163125B (en) | 2019-05-05 | 2019-05-05 | Real-time video identification method based on track prediction and size decision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910367702.6A CN110163125B (en) | 2019-05-05 | 2019-05-05 | Real-time video identification method based on track prediction and size decision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110163125A (en) | 2019-08-23
CN110163125B (en) | 2021-04-30
Family
ID=67633423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910367702.6A Active CN110163125B (en) | 2019-05-05 | 2019-05-05 | Real-time video identification method based on track prediction and size decision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163125B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112445727B (en) * | 2020-11-27 | 2023-08-25 | 鹏城实验室 | Edge cache replacement method and device based on viewport characteristics |
CN115914582B (en) * | 2023-01-05 | 2023-04-28 | 百鸟数据科技(北京)有限责任公司 | Small object detection optimization method based on fusion time sequence information |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1897015A (en) * | 2006-05-18 | 2007-01-17 | 王海燕 | Method and system for inspecting and tracting vehicle based on machine vision |
DE112012004767T5 (en) * | 2011-11-16 | 2014-11-06 | Flextronics Ap, Llc | Complete vehicle ecosystem |
CN103116987B (en) * | 2013-01-22 | 2014-10-29 | 华中科技大学 | Traffic flow statistic and violation detection method based on surveillance video processing |
-
2019
- 2019-05-05 CN CN201910367702.6A patent/CN110163125B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110163125A (en) | 2019-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103049787B (en) | A kind of demographic method based on head shoulder feature and system | |
CN103246896B (en) | A kind of real-time detection and tracking method of robustness vehicle | |
CN116153086B (en) | Multi-path traffic accident and congestion detection method and system based on deep learning | |
CN111241343A (en) | Road information monitoring and analyzing detection method and intelligent traffic control system | |
CN103366569A (en) | Method and system for snapshotting traffic violation vehicle in real time | |
CN112200830A (en) | Target tracking method and device | |
CN110163125B (en) | Real-time video identification method based on track prediction and size decision | |
CN107644206A (en) | A kind of road abnormal behaviour action detection device | |
Makhmutova et al. | Object tracking method for videomonitoring in intelligent transport systems | |
CN114973207B (en) | Road sign identification method based on target detection | |
CN110781785A (en) | Traffic scene pedestrian detection method improved based on fast RCNN algorithm | |
CN108229256A (en) | A kind of road construction detection method and device | |
CN111079621B (en) | Method, device, electronic equipment and storage medium for detecting object | |
CN115841649A (en) | Multi-scale people counting method for urban complex scene | |
CN115762230A (en) | Parking lot intelligent guiding method and device based on remaining parking space amount prediction | |
Patil | Applications of deep learning in traffic management: A review | |
Lou et al. | Vehicles detection of traffic flow video using deep learning | |
CN117523437B (en) | Real-time risk identification method for substation near-electricity operation site | |
CN115035543B (en) | Big data-based movement track prediction system | |
CN117058634A (en) | Expressway scene self-adaptive traffic offence behavior identification method | |
CN117037085A (en) | Vehicle identification and quantity statistics monitoring method based on improved YOLOv5 | |
Yadav et al. | An Efficient Yolov7 and Deep Sort are Used in a Deep Learning Model for Tracking Vehicle and Detection | |
Špaňhel et al. | Detection of traffic violations of road users based on convolutional neural networks | |
CN111027482A (en) | Behavior analysis method and device based on motion vector segmentation analysis | |
CN114882709A (en) | Vehicle congestion detection method and device and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |