CN108764338B - Pedestrian tracking method applied to video analysis - Google Patents


Info

Publication number
CN108764338B
CN108764338B · CN201810527019.XA · CN201810527019A
Authority
CN
China
Prior art keywords
pedestrian
frame
classifier
rectangular
frames
Prior art date
Legal status
Active
Application number
CN201810527019.XA
Other languages
Chinese (zh)
Other versions
CN108764338A (en)
Inventor
赵怀林
王莉
许士芳
Current Assignee
Shanghai Institute of Technology
Original Assignee
Shanghai Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shanghai Institute of Technology filed Critical Shanghai Institute of Technology
Priority to CN201810527019.XA priority Critical patent/CN108764338B/en
Publication of CN108764338A publication Critical patent/CN108764338A/en
Application granted granted Critical
Publication of CN108764338B publication Critical patent/CN108764338B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian tracking method applied to video analysis, comprising the following steps: detecting pedestrians in a video scene by background subtraction; deducing each pedestrian's position at the next moment with an optical flow algorithm and using it as a measure of whether two detections are the same person, this feature being denoted A; comparing the similarity of the sizes of the pedestrian rectangular frames, this feature being denoted B; extracting a color histogram of the pedestrian in each rectangular frame and comparing the similarity of the histograms of the current-frame and next-frame detection frames, this feature being denoted C; combining the three features into a feature F; training a logistic regression classifier on the feature F so that it can judge whether two detections belong to the same person; and using the trained logistic regression classifier to associate the pedestrian detection frames between frames. The method combines this series of features and uses the logistic regression classifier to complete the data association between rectangular frames, thereby realizing pedestrian tracking in surveillance video.

Description

Pedestrian tracking method applied to video analysis
Technical Field
The invention relates to the field of image processing and pattern recognition of computer vision, in particular to a pedestrian tracking method applied to video analysis.
Background
In recent years, image processing and pattern recognition based on computer vision have been active problems in machine vision, with very wide applications. Pedestrian detection and tracking is an important branch: it is the process of detecting the position of each pedestrian in a video sequence, assigning the same label to the same pedestrian in different frames, and determining the pedestrian's motion trajectory. Many effective tracking algorithms have been proposed, but the tracked target is disturbed by external factors such as illumination change, scale change and occlusion, so the target is easily lost and tracking fails. Existing pedestrian tracking algorithms combine common features such as color, HOG and edge features; detection algorithms with stronger robustness and higher precision remain worth studying.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a pedestrian tracking method applied to video analysis, which combines three features and completes the data association between pedestrian rectangular frames by training a logistic regression classifier, thereby realizing a tracking algorithm for surveillance video analysis.
In order to achieve the above purpose, the technical solution for solving the technical problem is as follows:
a pedestrian tracking method applied to video analysis comprises the following parts:
detecting pedestrians in a video scene through a background subtraction method to obtain an initial rectangular region corresponding to one or more pedestrian targets;
deducing the pedestrian's moving position at the next moment with an optical flow algorithm from the change of pixel motion speed in the image sequence, comparing it with the pedestrian detection position in the next frame, and using the similarity of the two positions as a measure of whether they are the same person; this feature is denoted A;
comparing the similarity of the sizes of the pedestrian rectangular frames as a measure of whether they contain the same pedestrian; this feature is denoted B;
extracting a color histogram of the pedestrian in each rectangular frame and comparing the similarity of the color histograms of the current-frame and next-frame detection frames as a measure of whether they contain the same pedestrian; this feature is denoted C;
combining the three features to obtain a new feature, denoted F;
training a logistic regression classifier with the feature F as input, so that the classifier can judge whether two detections are the same person;
and associating the pedestrian detection frames between frames with the trained logistic regression classifier to realize pedestrian tracking.
Further, the background subtraction method uses Gaussian mixture background modeling: the mean u and variance d of the model's Gaussian at pixel (x, y) are calculated, the probability P of the point (x, y) in a new frame of image is evaluated under the probability model, and foreground and background points are determined by comparing the probability P with a threshold T.
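As a rough sketch of this foreground test (assuming, for simplicity, a single Gaussian mode per pixel rather than the full mixture, and illustrative values for u, d and T):

```python
import math

def gaussian_prob(value, mean, var):
    # Density of the pixel value under one Gaussian background mode
    return math.exp(-((value - mean) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def is_foreground(value, mean, var, t=0.1):
    # A pixel is foreground when its probability under the background model falls below T
    return gaussian_prob(value, mean, var) < t

# Hypothetical background model at one pixel: mean 120, variance 4
print(is_foreground(121, 120.0, 4.0))  # near the mean -> False (background)
print(is_foreground(130, 120.0, 4.0))  # far from the mean -> True (foreground)
```

In the full Gaussian mixture model each pixel keeps several weighted modes whose means and variances are updated online; OpenCV's `BackgroundSubtractorMOG2` implements such a model.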
Further, the optical flow algorithm is the Farneback global optical flow algorithm: the optical flow field between two frames gives the pedestrian's moving speed, from which the pedestrian's position at the next moment is deduced. Let the center of the predicted rectangular frame be (x1, y1) and the center of the detected pedestrian frame in the next frame be (x2, y2); the distance between the two centers is

A = √((x1 - x2)² + (y1 - y2)²)

This feature is denoted A.
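Feature A, the center-to-center distance between the flow-predicted box and the detected box, is just the Euclidean distance:

```python
import math

def feature_a(predicted_center, detected_center):
    # Euclidean distance between the predicted center (from optical flow)
    # and the center of the detection in the next frame
    (x1, y1), (x2, y2) = predicted_center, detected_center
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

print(feature_a((100, 50), (103, 54)))  # 3-4-5 triangle -> 5.0
```

The prediction itself would come from a dense flow field, e.g. `cv2.calcOpticalFlowFarneback`, averaged over the pedestrian's rectangle.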
Further, the similarity of the sizes of the pedestrian rectangular frames is measured by the intersection-over-union of the two frames: the intersection area I and the union area U of the pedestrian rectangular frames in the current frame and the next frame are calculated, and their ratio

B = I / U

represents the size similarity and serves as a measure of whether the two frames contain the same pedestrian. This feature is denoted B.
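Feature B, the intersection-over-union of two axis-aligned rectangles, can be computed as:

```python
def feature_b(box1, box2):
    # Boxes are (x, y, w, h); returns intersection area / union area
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    iw = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    ih = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter
    return inter / union if union else 0.0

print(feature_b((0, 0, 10, 10), (5, 0, 10, 10)))  # 50 / 150 = 0.333...
```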
Further, a color histogram of the pedestrian in each rectangular frame is extracted: the color space is divided into 24 bins, 8 for each of the R, G and B channels, and the number of pixels whose value falls in each bin is counted, giving a vector {x1, x2, x3, …, x24}; the histogram vector of the next frame is recorded as {y1, y2, y3, …, y24}. The similarity of the two vectors is then computed, for example as the Bhattacharyya coefficient

C = Σ(i=1..24) √(xi · yi)

This feature is denoted C.
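The 24-bin histogram (8 bins per channel, so each bin spans 256/8 = 32 gray levels) and one plausible similarity measure can be sketched as follows; the patent's exact similarity formula is rendered only as an image, so the Bhattacharyya coefficient used here is an assumption:

```python
import math

def rgb_histogram(pixels):
    # 24 bins: R in bins 0-7, G in bins 8-15, B in bins 16-23, bin width 32
    hist = [0] * 24
    for r, g, b in pixels:
        hist[r // 32] += 1
        hist[8 + g // 32] += 1
        hist[16 + b // 32] += 1
    return hist

def feature_c(hx, hy):
    # Assumed similarity: Bhattacharyya coefficient of the normalized histograms
    sx, sy = sum(hx), sum(hy)
    return sum(math.sqrt((a / sx) * (b / sy)) for a, b in zip(hx, hy))

h = rgb_histogram([(10, 100, 200), (20, 110, 210)])
print(feature_c(h, h))  # identical histograms -> 1.0 (up to rounding)
```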
Further, the combined feature F is a fusion of the above features A, B and C together with their squares and products: F = [A, B, C, A², B², C², A·B, A·C, B·C, A·B·C].
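The printed feature list is partly garbled; read as the raw features plus their squares and products, the fusion is a 10-element vector (this reading is an assumption based on the later description of "nonlinear combinations"):

```python
def combine(a, b, c):
    # Fused feature F: raw features, squares, pairwise products, triple product
    return [a, b, c, a * a, b * b, c * c, a * b, a * c, b * c, a * b * c]

print(combine(2.0, 0.5, 0.9))
```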
Further, the combined feature F is used as input to train the logistic regression classifier, so that the classifier can judge whether two detections are the same person, completing the data association between rectangular frames. A pair belonging to the same person is labeled 1 (a positive example) and a pair belonging to different persons is labeled 0 (a negative example); the ratio of positive to negative samples is 1:1.
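A toy version of the training step, with plain stochastic gradient descent and a hypothetical balanced 1:1 sample set (any standard logistic-regression solver, e.g. scikit-learn's `LogisticRegression`, would do the same job):

```python
import math

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.5, epochs=2000):
    # Stochastic gradient descent on the log-loss; label 1 = same person
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for f, y in zip(samples, labels):
            err = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Hypothetical [A, B, C] vectors: same person = small distance, high IoU
same = [[0.1, 0.9, 0.95], [0.2, 0.8, 0.9]]
diff = [[5.0, 0.1, 0.2], [6.0, 0.0, 0.1]]
w, b = train_logistic(same + diff, [1, 1, 0, 0])
score = sigmoid(sum(wi * fi for wi, fi in zip(w, [0.15, 0.85, 0.9])) + b)
print(score > 0.5)  # judged to be the same person
```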
Further, the trained logistic regression classifier associates the pedestrian detection frames between frames: the rectangular frames of the pedestrians detected in the first frame of the video are given ordered labels, and the classifier judges whether each pedestrian detected in the next frame is the same person as one detected in the previous frame. If so, the detection receives the same number as in the previous frame; if not, it receives a new label; and so on, realizing pedestrian tracking.
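The association step then reduces to a loop over detections; `classify` below is a stand-in for the trained classifier, and the distance threshold is purely illustrative:

```python
def associate(prev_tracks, detections, classify, next_id):
    # prev_tracks: list of (track_id, box); each detection inherits the id of
    # the first previous box judged 'same person', otherwise gets a new id
    labels = []
    for det in detections:
        matched = None
        for tid, box in prev_tracks:
            if classify(box, det):
                matched = tid
                break
        if matched is None:
            matched, next_id = next_id, next_id + 1
        labels.append((matched, det))
    return labels, next_id

# Stand-in classifier: same person if the centers are within 10 pixels (L1 distance)
close = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]) < 10
tracks = [(1, (100, 50)), (2, (300, 80))]
labels, nid = associate(tracks, [(103, 52), (500, 90)], close, next_id=3)
print(labels)  # [(1, (103, 52)), (3, (500, 90))]
```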
Due to the adoption of the technical scheme, compared with the prior art, the invention has the following advantages and positive effects:
the invention relates to a pedestrian tracking method applied to video analysis, which combines various characteristics of pedestrians in a video and uses a logic Stent classifier to complete data association between rectangular frames, thereby achieving the tracking effect of high precision and high robustness on a target and being suitable for pedestrian detection and tracking of a fixed-position monitoring camera.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a general flow diagram of a pedestrian detection and tracking method of the present invention, wherein:
Feature A: the similarity between the pedestrian's motion position at the next moment deduced from the current frame and the pedestrian detection position in the next frame;
Feature B: the similarity of the sizes of the rectangular frames in the current frame and the next frame;
Feature C: the similarity of the color histograms within the rectangular frames of the current frame and the next frame;
FIG. 2 is a flow chart of mixed Gaussian background modeling for pedestrian detection in the present invention;
FIG. 3 is a diagram illustrating the effect of the embodiment of the present invention.
Detailed Description
While the embodiments of the present invention will be described and illustrated in detail with reference to the accompanying drawings, it is to be understood that the invention is not limited to the specific embodiments disclosed, but is intended to cover various modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims.
The invention provides a pedestrian tracking method applied to monitoring video analysis, which is shown in a general flow diagram in figure 1 and comprises the following parts:
detecting pedestrians in a video scene through a background subtraction method to obtain an initial rectangular region corresponding to one or more pedestrian targets;
deducing the pedestrian's moving position at the next moment with an optical flow algorithm from the change of pixel motion speed in the image sequence, comparing it with the pedestrian detection position in the next frame, and using the similarity of the two positions as a measure of whether they are the same person; this feature is denoted A;
comparing the similarity of the sizes of the pedestrian rectangular frames as a measure of whether they contain the same pedestrian; this feature is denoted B;
extracting a color histogram of the pedestrian in each rectangular frame and comparing the similarity of the color histograms of the current-frame and next-frame detection frames as a measure of whether they contain the same pedestrian; this feature is denoted C;
combining the three features to obtain a new feature, denoted F;
training a logistic regression classifier with the feature F as input, so that the classifier can judge whether two detections are the same person;
and associating the pedestrian detection frames between frames with the trained logistic regression classifier to realize pedestrian tracking.
First, pedestrians in the video scene are detected by background subtraction, specifically Gaussian mixture background modeling, as shown in FIG. 2. The mean u and variance d of the model's Gaussian at pixel (x, y) are calculated, the probability P of the corresponding point in a new frame is evaluated under the probability model, and foreground and background points are determined by comparing P with a threshold T. The threshold T is adjusted to the application scenario; in this embodiment T = 0.1. Finally, an initial rectangular region is obtained for each of one or more pedestrian targets.
Further, the optical flow algorithm is the Farneback global optical flow algorithm: the optical flow field between two frames gives the pedestrian's moving speed, from which the pedestrian's position at the next moment is deduced. Let the center of the predicted rectangular frame be (x1, y1) and the center of the detected pedestrian frame in the next frame be (x2, y2); the distance between the two centers is

A = √((x1 - x2)² + (y1 - y2)²)

This feature is denoted A.
Secondly, the similarity of the sizes of the pedestrian rectangular frames is measured by the intersection-over-union of the two frames: the intersection area I and the union area U of the pedestrian rectangular frames in the current frame and the next frame are calculated, and their ratio

B = I / U

represents the size similarity and serves as a measure of whether the two frames contain the same pedestrian. This feature is denoted B.
Furthermore, a color histogram of the pedestrian in each rectangular frame is extracted: the color space is divided into 24 bins, 8 for each of the R, G and B channels, and the number of pixels whose value falls in each bin is counted, giving a vector {x1, x2, x3, …, x24}; the histogram vector of the next frame is recorded as {y1, y2, y3, …, y24}. The similarity of the two vectors is then computed, for example as the Bhattacharyya coefficient

C = Σ(i=1..24) √(xi · yi)

This feature is denoted C.
Then, feature A, feature B and feature C are combined to obtain the new feature F, i.e. F = [A, B, C, A², B², C², A·B, A·C, B·C, A·B·C]. The feature F thus contains the single features A, B and C as well as nonlinear combinations of them, such as A² and A·B.
Further, the combined feature F is used as the input to train the logistic regression classifier, so that the classifier can judge whether two detections are the same person and the data association between rectangular frames is completed. A pair belonging to the same person is labeled 1 (a positive example) and a pair belonging to different persons is labeled 0 (a negative example); to ensure the accuracy of the training result, the ratio of positive to negative samples is chosen as 1:1.
Finally, the trained logistic regression classifier associates the pedestrian detection frames between frames: the rectangular frames of the pedestrians detected in the first frame of the video are given ordered labels, and the classifier judges whether each pedestrian detected in the next frame is the same person as one detected in the first frame. If so, the detection is marked with the same number as in the previous frame; otherwise it is given a new label; and so on, realizing pedestrian tracking. The effect of the algorithm is shown in FIG. 3.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A pedestrian tracking method applied to video analysis is characterized by comprising the following parts:
detecting pedestrians in a video scene through a background subtraction method to obtain an initial rectangular region corresponding to one or more pedestrian targets;
deducing the pedestrian's moving position at the next moment with an optical flow algorithm from the change of pixel motion speed in the image sequence, comparing it with the pedestrian detection position in the next frame, and using the similarity of the two positions as a measure of whether they are the same person; this feature is denoted A;
comparing the similarity of the sizes of the pedestrian rectangular frames as a measure of whether they contain the same pedestrian; this feature is denoted B;
extracting a color histogram of the pedestrian in each rectangular frame and comparing the similarity of the color histograms of the current-frame and next-frame detection frames as a measure of whether they contain the same pedestrian; this feature is denoted C;
combining the three features to obtain a new feature, denoted F;
training a logistic regression classifier with the feature F as input, so that the classifier can judge whether two detections are the same person;
and associating the pedestrian detection frames between frames with the trained logistic regression classifier to realize pedestrian tracking.
2. The pedestrian tracking method applied to video analysis according to claim 1, wherein the background subtraction method uses Gaussian mixture background modeling: the mean u and variance d of the model's Gaussian at pixel (x, y) are calculated, the probability P of the point (x, y) in a new frame of image is evaluated under the probability model, and foreground and background points are determined by comparing the probability P with a threshold T.
3. The method as claimed in claim 1, wherein the optical flow algorithm is the Farneback global optical flow algorithm: the optical flow field is calculated from the optical flow between two frames to obtain the pedestrian's moving speed, and the pedestrian's position at the next moment is deduced from it. Let the center of the predicted rectangular frame be (x1, y1) and the center of the detected pedestrian frame in the next frame be (x2, y2); the distance between the two centers is

A = √((x1 - x2)² + (y1 - y2)²)

This feature is denoted A.
4. The method of claim 1, wherein the similarity of the sizes of the pedestrian rectangular frames is measured by the intersection-over-union of the two frames: the intersection area I and the union area U of the pedestrian rectangular frames in the current frame and the next frame are calculated, and their ratio

B = I / U

represents the size similarity and serves as a measure of whether the two frames contain the same pedestrian. This feature is denoted B.
5. The method according to claim 1, wherein a color histogram of the pedestrian in each rectangular frame is extracted: the color space is divided into 24 bins, 8 for each of the R, G and B channels, and the number of pixels whose value falls in each bin is counted, giving a vector {x1, x2, x3, …, x24}; the histogram vector of the next frame is recorded as {y1, y2, y3, …, y24}, and the similarity of the two vectors is then computed, for example as the Bhattacharyya coefficient

C = Σ(i=1..24) √(xi · yi)

This feature is denoted C.
6. The pedestrian tracking method applied to video analysis according to claim 1, wherein the combined feature F is a fusion of the above features A, B and C together with their squares and products: F = [A, B, C, A², B², C², A·B, A·C, B·C, A·B·C].
7. The pedestrian tracking method applied to video analysis according to claim 1 or 6, wherein the combined feature F is used as input to train the logistic regression classifier, so that the classifier can judge whether two detections are the same person and the data association between rectangular frames is completed, wherein a pair belonging to the same person is labeled 1 (a positive example), a pair belonging to different persons is labeled 0 (a negative example), and the ratio of positive to negative samples is 1:1.
8. The pedestrian tracking method applied to video analysis according to claim 1, wherein the trained logistic regression classifier is used to associate the pedestrian detection frames between frames: the rectangular frames of the pedestrians detected in the first frame image of the video are given ordered labels, the classifier judges whether each pedestrian detected in the next frame image is the same person as one detected in the first frame image, and if so, the detection is marked with the same number as in the previous frame; if not, it is given a new label; and so on, realizing pedestrian tracking.
CN201810527019.XA 2018-05-28 2018-05-28 Pedestrian tracking method applied to video analysis Active CN108764338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810527019.XA CN108764338B (en) 2018-05-28 2018-05-28 Pedestrian tracking method applied to video analysis


Publications (2)

Publication Number Publication Date
CN108764338A CN108764338A (en) 2018-11-06
CN108764338B (en) 2021-05-04

Family

ID=64003134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810527019.XA Active CN108764338B (en) 2018-05-28 2018-05-28 Pedestrian tracking method applied to video analysis

Country Status (1)

Country Link
CN (1) CN108764338B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558505A (en) * 2018-11-21 2019-04-02 百度在线网络技术(北京)有限公司 Visual search method, apparatus, computer equipment and storage medium
CN110991280A (en) * 2019-11-20 2020-04-10 北京影谱科技股份有限公司 Video tracking method and device based on template matching and SURF
CN113379985B (en) * 2020-02-25 2022-09-27 北京君正集成电路股份有限公司 Nursing electronic fence alarm device
CN113379984B (en) * 2020-02-25 2022-09-23 北京君正集成电路股份有限公司 Electronic nursing fence system
CN111882582B (en) * 2020-07-24 2021-10-08 广州云从博衍智能科技有限公司 Image tracking correlation method, system, device and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521616A (en) * 2011-12-28 2012-06-27 江苏大学 Pedestrian detection method on basis of sparse representation
CN103632170A (en) * 2012-08-20 2014-03-12 深圳市汉华安道科技有限责任公司 Pedestrian detection method and device based on characteristic combination
CN104050460A (en) * 2014-06-30 2014-09-17 南京理工大学 Pedestrian detection method with multi-feature fusion
CN106778478A (en) * 2016-11-21 2017-05-31 中国科学院信息工程研究所 A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character
CN106778570A (en) * 2016-12-05 2017-05-31 清华大学深圳研究生院 A kind of pedestrian's real-time detection and tracking
CN107492116A (en) * 2017-09-01 2017-12-19 深圳市唯特视科技有限公司 A kind of method that face tracking is carried out based on more display models

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6281460B2 (en) * 2014-09-24 2018-02-21 株式会社デンソー Object detection device


Also Published As

Publication number Publication date
CN108764338A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
CN108764338B (en) Pedestrian tracking method applied to video analysis
CN109977782B (en) Cross-store operation behavior detection method based on target position information reasoning
CN104978567B (en) Vehicle checking method based on scene classification
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
CN104615986B (en) The method that pedestrian detection is carried out to the video image of scene changes using multi-detector
CN111046856B (en) Parallel pose tracking and map creating method based on dynamic and static feature extraction
CN106022231A (en) Multi-feature-fusion-based technical method for rapid detection of pedestrian
CN109754009B (en) Article identification method, article identification device, vending system and storage medium
CN111340855A (en) Road moving target detection method based on track prediction
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
Liu et al. Multi-type road marking recognition using adaboost detection and extreme learning machine classification
CN111539980B (en) Multi-target tracking method based on visible light
Ahmed et al. Traffic sign detection and recognition model using support vector machine and histogram of oriented gradient
Sivasangari et al. Indian Traffic Sign Board Recognition and Driver Alert System Using CNN
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
Alam et al. A vision-based system for traffic light detection
CN110334703B (en) Ship detection and identification method in day and night image
CN111968154A (en) HOG-LBP and KCF fused pedestrian tracking method
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
Kumar Accurate object detection & instance segmentation of remote sensing, imagery using cascade mask R-CNN with HRNet backbone
CN115331151A (en) Video speed measuring method and device, electronic equipment and storage medium
CN111402185A (en) Image detection method and device
CN109636834A (en) Video frequency vehicle target tracking algorism based on TLD innovatory algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant