CN104239865A - Pedestrian detecting and tracking method based on multi-stage detection - Google Patents
- Publication number: CN104239865A (application number CN201410471167.6A)
- Authority: CN (China)
- Prior art keywords: frame, target area, tracking, pedestrian, area
- Prior art date: 2014-09-16
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Landscapes
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention provides a pedestrian detection and tracking method based on multi-stage detection. The method comprises the following steps: 1) a background frame is extracted and preprocessed; 2) foreground regions are extracted and marked with rectangular boxes: the current frame is differenced against the background image frame, the foreground information is extracted and binarised, each connected foreground region is marked by its outer contour, a minimum bounding rectangle is drawn for each contour region, the size information of all rectangles is recorded, and the grey value of the difference image is compared with a threshold T, the parts greater than T being classified as moving-object parts and the rest as background; 3) the external shape characteristics of pedestrians are used to make a preliminary judgement of the target regions; 4) the number of rectangular regions R_n judged to be candidate target regions is counted, a pedestrian detection algorithm based on HOG features is used to exclude non-target regions, and the remaining target regions are tracked with the optical flow method. The method offers fast computation, good real-time performance and high practicality.
Description
Technical field
The present invention relates to the field of video recognition, and in particular to a pedestrian detection and tracking method.
Background art
The flow of target tracking with the existing optical flow method is: 1) for the acquired video frame sequence, a certain object detection method (traditionally optical flow detection) is used to detect foreground targets that may appear; 2) if a foreground target appears in a certain frame, its representative key feature points are found (they can be generated randomly, or corner points can be used as feature points); 3) for any two adjacent video frames thereafter, the optimal positions in the current frame of the key feature points found in the previous frame are located, thereby obtaining the position coordinates of the foreground target in the current frame.
The optical flow method assigns a velocity vector to each pixel in the image and then analyses the image dynamically according to the velocity characteristics of each pixel; moving objects are identified and tracked by detecting the difference between the velocity vectors formed by the moving objects and those of the background. It is an effective motion tracking algorithm, but because of interference factors in the video sequence such as wind, swaying leaves, camera shake and non-target motion information, the optical flow method has to detect a large amount of useless information when performing pedestrian detection and tracking, which seriously affects the computation speed, so that real-time performance and practicality cannot be guaranteed.
Summary of the invention
In order to overcome the shortcomings of pedestrian detection and tracking realised by the existing optical flow method, namely slow computation, poor real-time performance and poor practicality, the present invention provides a pedestrian detection and tracking method based on multi-stage processing that is fast, offers good real-time performance and is highly practical.
The technical solution adopted by the present invention to solve the technical problem is as follows:

A pedestrian detection and tracking method based on multi-stage processing, the detection and tracking method comprising the following steps:

1) extract a background frame and preprocess it: the first frame of the input video is used as the initial background image frame, image preprocessing is performed on each input original image frame, and videos of different sizes are normalised;
2) extract foreground regions and mark them with rectangular boxes: the current frame is differenced against the background image frame, the foreground information is extracted and binarised, each connected foreground region is marked by its outer contour, a minimum bounding rectangle is drawn for each contour region, and the size information of all rectangles is recorded; a rectangular region is denoted R_n, and its size information comprises the length H_n, the width W_n and the angle λ_n; the foreground extraction formula is:

Δd_t(x, y) = |I_t(x, y) − B_t(x, y)|

where Δd_t(x, y), I_t(x, y) and B_t(x, y) denote the grey values at position (x, y) and time t of the difference image, the current frame image and the background image, respectively; the difference image grey value Δd_t(x, y) is compared with a threshold T, the parts greater than T are classified as moving-object parts, and the rest as background parts;
3) preliminary judgement of target regions: the external shape characteristics of pedestrians are used to judge the target regions preliminarily, which comprises setting thresholds L_1 and L_2 for the ratio of the major-axis length to the minor-axis length of the rectangle and a threshold A for the angle between the major axis and the ground; rectangular regions within the threshold ranges are preliminarily classified as candidate target regions, and the other rectangular regions are classified as non-target regions and discarded;

where L_1, L_2 and A are set values, and R_n indicates whether a region is a target region, 1 denoting a target region and 0 a non-target region;
4) final judgement of target regions: the number of rectangular regions R_n judged to be candidate target regions is counted; the pedestrian detection algorithm based on HOG features is first used to exclude non-target regions, and the optical flow method is then used to track the remaining target regions.
Further, in step 4), the HOG feature detection only examines the image regions preliminarily judged to be candidate pedestrian regions and classifies the target regions in combination with an SVM pedestrian classifier: each candidate pedestrian region is converted into an independent image frame, its features are extracted after size normalisation and fed into the SVM classifier in turn, and the classifier finally decides whether the region contains a pedestrian.
Still further, in step 4), the pedestrian detection algorithm based on HOG features normalises the sample library pictures and the pictures to be examined to 64 × 32; the block size is 16 × 16, each block is divided into 4 cells of 8 × 8 pixels, and the stride is 8 pixels.
Further, in step 4), an improved Lucas-Kanade algorithm is adopted for optical flow estimation. Let the previous frame be F_n and the current frame be F_{n+1}, with X target regions in the previous frame and Y target regions in the current frame. The target regions are converted into target image frames bounded by their rectangular boxes, written F_n = {F_{n,1}, F_{n,2}, ..., F_{n,X}} and F_{n+1} = {F_{n+1,1}, F_{n+1,2}, ..., F_{n+1,Y}}, and their centre points, computed from the rectangle sizes, are C_n = {C_{n,1}, C_{n,2}, ..., C_{n,X}} and C_{n+1} = {C_{n+1,1}, C_{n+1,2}, ..., C_{n+1,Y}}. The point set C_n is then used as the optical flow feature point set, pyramidal optical flow is computed towards the target position point set C_{n+1}, and finally the feature points in C_{n+1} corresponding to each point in C_n are found and matched; after a successful match the result is mapped to the corresponding position in F_{n+1}, the centre point set of the target regions in F_{n+1} is taken as the optical flow feature point set for matching the next frame, and the iteration continues in this way.
In step 4), the procedure for tracking with the Lucas-Kanade algorithm is as follows:

4.1) initialise the feature points to be tracked, here C_n;

4.2) compute the optical flow pyramids of the two frames and, from the optical flow between them, compute the target points corresponding to the initialised feature points, i.e. the points to be tracked;

4.3) mark the feature points and the motion trajectories, swap the input and output points, swap the previous frame with the current frame and the previous-frame pyramid with the current-frame pyramid, and perform the next round of tracking;

where the optical flow pyramid computation is realised with the function cvCalcOpticalFlowPyrLK() provided by OpenCV.
Multiple target regions are used instead of the whole image frame, optical flow tracking is computed for each target region separately, and the final tracking results are mapped onto the original image frame; F_n replaces the previous-frame parameter prev in the function, F_{n+1} replaces the current-frame parameter curr, C_n is used as the feature point parameter prev_features, and C_{n+1} is used as the target position point set for the pyramidal optical flow computation.
In step 1), at every set time interval, an image frame whose difference from the background is less than a threshold is used to dynamically update the background frame.
In step 2), when the background subtraction method is used to obtain the foreground, contour regions whose centre points are closer than a threshold are merged and marked with a rectangular box enclosing the contour edges as a moving region.
The beneficial effects of the present invention are mainly: fast computation, good real-time performance and high practicality.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the method of the present invention;
Fig. 2 is a flow chart of the steps of pedestrian tracking;
Fig. 3 shows the results of pedestrian detection and tracking in an embodiment of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings.
With reference to Fig. 1 to Fig. 3, a pedestrian detection and tracking method based on multi-stage detection comprises the following steps:
1) the first video frame is extracted as the initial background image frame, and image preprocessing such as Gaussian filtering and noise filtering is performed on each input original image frame;

To optimise the processing speed while guaranteeing image quality, videos of different sizes are normalised: video frames larger than 320 × 240 are uniformly normalised to 320 × 240, and video frames smaller than 320 × 240 keep their original size so that the resolution does not become too small;

To reduce the algorithm complexity, the present invention does not use a Gaussian mixture model for adaptive background updating; instead, at every fixed time interval an image frame whose difference from the background is less than a threshold is used as the new background frame.
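A minimal sketch of this preprocessing and background-update rule, written against the OpenCV C++ API (the patent's embodiment uses the older OpenCV interface under QT + OpenCV); the names preprocess, maybeUpdateBackground, BG_DIFF_THRESH and UPDATE_INTERVAL, as well as their values, are illustrative assumptions rather than values given in the patent.

    #include <opencv2/opencv.hpp>

    // Illustrative values, not taken from the patent.
    static const double BG_DIFF_THRESH  = 15.0;   // mean grey difference allowed for a background update
    static const int    UPDATE_INTERVAL = 100;    // frames between update attempts

    // Step 1) sketch: normalise large frames to 320x240, convert to grey and smooth.
    cv::Mat preprocess(const cv::Mat& frame) {
        cv::Mat resized = frame;
        if (frame.cols > 320 || frame.rows > 240)           // small frames keep their original size
            cv::resize(frame, resized, cv::Size(320, 240));
        cv::Mat gray, smooth;
        cv::cvtColor(resized, gray, cv::COLOR_BGR2GRAY);    // grey-scale for frame differencing
        cv::GaussianBlur(gray, smooth, cv::Size(5, 5), 0);  // Gaussian filtering to suppress noise
        return smooth;
    }

    // Dynamic background update: at fixed intervals, adopt a frame that differs
    // little from the current background as the new background frame.
    void maybeUpdateBackground(const cv::Mat& gray, cv::Mat& background, int frameIndex) {
        if (frameIndex % UPDATE_INTERVAL != 0) return;
        cv::Mat diff;
        cv::absdiff(gray, background, diff);
        if (cv::mean(diff)[0] < BG_DIFF_THRESH)
            background = gray.clone();
    }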
2) the current frame is differenced against the background image frame, the foreground information is extracted and binarised, each connected foreground region is marked by its outer contour, a minimum bounding rectangle is drawn for each contour region, and the size information of all rectangles is recorded; a rectangular region is denoted R_n, and its size information comprises the length H_n, the width W_n and the angle λ_n; the foreground extraction algorithm is as follows:

Δd_t(x, y) = |I_t(x, y) − B_t(x, y)|

where Δd_t(x, y), I_t(x, y) and B_t(x, y) denote the grey values at position (x, y) and time t of the difference image, the current frame image and the background image, respectively; the difference image grey value is compared with a threshold T, the parts greater than T are classified as moving-object parts, and the rest as background parts.
When the background subtraction method is used to obtain the foreground, broken holes easily appear where the grey values of the foreground and background differ little, so the present invention merges contour regions that are close to each other (centre point distance less than a threshold) and marks them with a rectangular box enclosing the contour edges as a moving region.
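The following sketch illustrates step 2) with the OpenCV C++ API: frame differencing, binarisation with threshold T, outer-contour labelling and minimum bounding rectangles, followed by the centre-distance merge described above. The function name extractForegroundRects and the values of T and mergeDist are illustrative assumptions.

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    std::vector<cv::RotatedRect> extractForegroundRects(const cv::Mat& current,
                                                        const cv::Mat& background,
                                                        double T = 30.0,
                                                        double mergeDist = 20.0) {
        cv::Mat diff, mask;
        cv::absdiff(current, background, diff);                 // delta d_t(x,y) = |I_t(x,y) - B_t(x,y)|
        cv::threshold(diff, mask, T, 255, cv::THRESH_BINARY);   // > T -> moving object, else background

        std::vector<std::vector<cv::Point>> contours;           // outer contours of connected regions
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        std::vector<cv::RotatedRect> rects;
        for (const std::vector<cv::Point>& c : contours)
            rects.push_back(cv::minAreaRect(c));                // R_n with H_n, W_n and angle lambda_n

        // Merge rectangles whose centres are closer than mergeDist, to close the
        // broken holes that frame differencing leaves inside a single pedestrian.
        for (size_t i = 0; i < rects.size(); ++i)
            for (size_t j = i + 1; j < rects.size(); ++j) {
                float dx = rects[i].center.x - rects[j].center.x;
                float dy = rects[i].center.y - rects[j].center.y;
                if (std::sqrt(dx * dx + dy * dy) < mergeDist) {
                    cv::Point2f p[4];
                    std::vector<cv::Point2f> pts;
                    rects[i].points(p); pts.insert(pts.end(), p, p + 4);
                    rects[j].points(p); pts.insert(pts.end(), p, p + 4);
                    rects[i] = cv::minAreaRect(pts);            // one rectangle enclosing both
                    rects.erase(rects.begin() + j);
                    --j;
                }
            }
        return rects;
    }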
3) the external shape characteristics of pedestrians are used to judge the target regions preliminarily, which comprises setting thresholds L_1 and L_2 for the ratio of the major-axis length to the minor-axis length of the rectangle and a threshold A for the angle between the major axis and the ground; rectangular regions within the threshold ranges are preliminarily classified as candidate target regions, and the other rectangular regions are classified as non-target regions and discarded. The thresholds are set as follows:

L_1 = 2, L_2 = 10, A = 75°

where the values of L_1, L_2 and A are set according to experimental results and can be adjusted as the case requires in practical applications;
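The preliminary shape test can be sketched as follows. L_1 = 2, L_2 = 10 and A = 75° are the values given above; how the major axis, minor axis and ground angle are derived from the rotated rectangle, and the choice of "angle at least A", are interpretive assumptions.

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <cmath>

    // Step 3) sketch: keep a rectangle as a candidate pedestrian region only if the
    // major/minor axis ratio lies in [L1, L2] and the major axis is steep enough
    // with respect to the ground.
    bool isCandidatePedestrian(const cv::RotatedRect& r,
                               double L1 = 2.0, double L2 = 10.0, double A = 75.0) {
        double major = std::max(r.size.width, r.size.height);   // H_n, major-axis length
        double minor = std::min(r.size.width, r.size.height);   // W_n, minor-axis length
        if (minor <= 0.0) return false;
        double ratio = major / minor;                            // standing pedestrians are elongated

        // Angle between the major axis and the horizontal ground line, taken from the
        // rectangle's corner points so that the RotatedRect angle convention does not matter.
        cv::Point2f p[4];
        r.points(p);
        cv::Point2f e1 = p[1] - p[0], e2 = p[2] - p[1];          // two adjacent edges
        auto len = [](const cv::Point2f& v) { return std::sqrt(v.x * v.x + v.y * v.y); };
        cv::Point2f majorEdge = (len(e1) >= len(e2)) ? e1 : e2;  // direction of the major axis
        double angle = std::fabs(std::atan2(majorEdge.y, majorEdge.x)) * 180.0 / CV_PI;
        if (angle > 90.0) angle = 180.0 - angle;                 // fold into [0, 90] degrees

        return ratio >= L1 && ratio <= L2 && angle >= A;         // R_n = 1 for a candidate target region
    }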
4) the number of rectangular regions R_n judged to be candidate target regions is counted; the pedestrian detection algorithm based on HOG (histogram of oriented gradients) features is first used to further exclude non-target regions, and the optical flow method is then used to track the remaining target regions.

The HOG feature detection only examines the image regions preliminarily judged to be candidate pedestrian regions and classifies the target regions in combination with an SVM pedestrian classifier: each candidate pedestrian region is converted into an independent image frame, its features are extracted after size normalisation and fed into the SVM classifier in turn, and the classifier finally decides whether the region contains a pedestrian.

To reduce the algorithm complexity, the sample library pictures and the pictures to be examined are normalised to 64 × 32; the block size is 16 × 16, each block is divided into 4 cells of 8 × 8 pixels, and the stride is 8 pixels. The resulting HOG feature dimensionality is thereby reduced from the conventional 3781 dimensions to 757 dimensions, which greatly reduces the algorithm complexity, so that HOG detection does not harm the real-time performance of the system. In addition, since a preliminary judgement of the pedestrian target regions has already been made in step 3), sacrificing feature dimensionality does not, to a certain extent, have too great an impact on precision.
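The sketch below shows how the 64 × 32 HOG window described above can be configured with OpenCV's HOGDescriptor and verified by a pre-trained SVM. The use of cv::ml::SVM (OpenCV 3/4 API), the model file name "pedestrian_svm.xml" and the {0, 1} label convention are assumptions; the patent's embodiment is written against the older OpenCV interface.

    #include <opencv2/opencv.hpp>
    #include <opencv2/ml.hpp>
    #include <vector>

    // Step 4), first stage: HOG + SVM verification of one candidate region.
    bool verifyPedestrianHOG(const cv::Mat& candidateRegion, const cv::Ptr<cv::ml::SVM>& svm) {
        cv::Mat win;
        cv::resize(candidateRegion, win, cv::Size(32, 64));   // normalise to the 64 x 32 detection window

        cv::HOGDescriptor hog(cv::Size(32, 64),   // window size (width x height)
                              cv::Size(16, 16),   // block size
                              cv::Size(8, 8),     // block stride of 8 pixels
                              cv::Size(8, 8),     // 4 cells of 8 x 8 pixels per block
                              9);                 // orientation bins
        std::vector<float> desc;
        hog.compute(win, desc);                   // reduced-dimension HOG feature vector

        cv::Mat sample(1, static_cast<int>(desc.size()), CV_32F, desc.data());
        return svm->predict(sample) > 0.5f;       // 1 = pedestrian, 0 = non-pedestrian (assumed labels)
    }

    // Usage sketch:
    //   cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::load("pedestrian_svm.xml");
    //   bool isPedestrian = verifyPedestrianHOG(regionPatch, svm);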
The present invention adopts an improved Lucas-Kanade algorithm for optical flow estimation. After the target region judgements of steps 3) and 4) above, the target regions are essentially locked. Let the previous frame be F_n and the current frame be F_{n+1}, with X target regions in the previous frame and Y target regions in the current frame. The target regions are converted into target image frames bounded by their rectangular boxes, written F_n = {F_{n,1}, F_{n,2}, ..., F_{n,X}} and F_{n+1} = {F_{n+1,1}, F_{n+1,2}, ..., F_{n+1,Y}}, and their centre points, computed from the rectangle sizes, are C_n = {C_{n,1}, C_{n,2}, ..., C_{n,X}} and C_{n+1} = {C_{n+1,1}, C_{n+1,2}, ..., C_{n+1,Y}}. The point set C_n is then used as the optical flow feature point set, pyramidal optical flow is computed towards the target position point set C_{n+1}, and finally the feature points in C_{n+1} corresponding to each point in C_n are found and matched; after a successful match the result is mapped to the corresponding position in F_{n+1}, the centre point set of the target regions in F_{n+1} is taken as the optical flow feature point set for matching the next frame, and the iteration continues in this way.
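A compact sketch of the per-frame bookkeeping implied above: each verified region F_{n,i} is kept together with its centre point C_{n,i}, computed from its rectangle, so the centres can serve as the optical flow feature point set. The struct and function names are illustrative.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // One tracked target region: its rectangle on the full frame, the cropped
    // patch F_{n,i}, and the centre point C_{n,i} used as an optical flow feature point.
    struct TargetRegion {
        cv::Rect    box;     // rectangle of the region on the original frame
        cv::Mat     patch;   // target image frame F_{n,i} bounded by the rectangle
        cv::Point2f center;  // centre point C_{n,i} computed from the rectangle size
    };

    std::vector<TargetRegion> buildTargetRegions(const cv::Mat& frame,
                                                 const std::vector<cv::Rect>& pedestrianBoxes) {
        std::vector<TargetRegion> regions;
        for (const cv::Rect& r : pedestrianBoxes) {
            TargetRegion t;
            t.box    = r & cv::Rect(0, 0, frame.cols, frame.rows);   // clamp to the frame
            t.patch  = frame(t.box).clone();
            t.center = cv::Point2f(t.box.x + t.box.width * 0.5f,
                                   t.box.y + t.box.height * 0.5f);
            regions.push_back(t);
        }
        return regions;   // C_n is the list of regions[i].center
    }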
The procedure for tracking with the Lucas-Kanade algorithm is as follows:

4.1) initialise the feature points to be tracked, here C_n;

4.2) compute the optical flow pyramids of the two frames and, from the optical flow between them, compute the target points corresponding to the initialised feature points, i.e. the points to be tracked;

4.3) mark the feature points and the motion trajectories, swap the input and output points, swap the previous frame with the current frame and the previous-frame pyramid with the current-frame pyramid, and perform the next round of tracking;
The optical flow pyramid computation can be realised with the function provided by OpenCV:

cvCalcOpticalFlowPyrLK(const CvArr* prev, const CvArr* curr, CvArr* prev_pyr, CvArr* curr_pyr, const CvPoint2D32f* prev_features, CvPoint2D32f* curr_features, int count, CvSize win_size, int level, char* status, float* track_error, CvTermCriteria criteria, int flags)

The present invention replaces the whole image frame with multiple target regions and performs optical flow tracking on each target region separately, finally mapping the tracking results onto the original image frame; this reduces useless tracking and tracking errors and makes the target tracking accurate and fast. Specifically, F_n replaces the previous-frame parameter prev in the function, F_{n+1} replaces the current-frame parameter curr, C_n is used as the feature point parameter prev_features, and C_{n+1} is used as the target position point set for the pyramidal optical flow computation.
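Steps 4.1) to 4.3) and the prev/curr/prev_features mapping above can be sketched with cv::calcOpticalFlowPyrLK, the C++ counterpart of the legacy function quoted in the text. Tracking the region centres C_n on the full grey frames (rather than passing each region patch separately) is one simplified reading of the scheme, and the window size, pyramid depth and matching distance are assumptions.

    #include <opencv2/opencv.hpp>
    #include <utility>
    #include <vector>

    // Sketch of steps 4.1)-4.3): track the previous centre points C_n into the current
    // frame with pyramidal Lucas-Kanade flow, match the tracked points to the current
    // centre points C_{n+1}, then swap frames/points for the next round.
    void trackCenters(cv::Mat& prevGray, cv::Mat& currGray,
                      std::vector<cv::Point2f>& prevCenters,          // C_n   (prev_features)
                      const std::vector<cv::Point2f>& currCenters,    // C_{n+1}
                      double matchDist = 20.0) {
        if (prevCenters.empty()) { std::swap(prevGray, currGray); return; }

        std::vector<cv::Point2f> tracked;                             // curr_features
        std::vector<unsigned char> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevCenters, tracked,
                                 status, err, cv::Size(21, 21), 3);   // win_size and pyramid level

        std::vector<cv::Point2f> next;                                // matched C_{n+1} points
        for (size_t i = 0; i < tracked.size(); ++i) {
            if (!status[i]) continue;                                 // lost feature point
            for (const cv::Point2f& c : currCenters) {                // match to the nearest C_{n+1}
                float dx = c.x - tracked[i].x, dy = c.y - tracked[i].y;
                if (dx * dx + dy * dy < matchDist * matchDist) { next.push_back(c); break; }
            }
        }
        prevCenters = next;                 // the matched centres seed the next iteration
        std::swap(prevGray, currGray);      // previous/current frames (and pyramids) are swapped
    }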
The principle of optical flow target detection in this embodiment is as follows: a velocity vector is assigned to each pixel in the image, forming a motion vector field. At a particular moment, the points in the image correspond one-to-one to points on three-dimensional objects, and this correspondence can be computed by projection. The image can then be analysed dynamically according to the velocity characteristics of each pixel. If there is no moving target in the image, the optical flow vectors vary continuously over the whole image region; when there is a moving object, relative motion exists between the target and the background, the velocity vectors formed by the moving object necessarily differ from those of the background, and the position of the moving object can thus be computed.
In this embodiment, a segment of indoor surveillance video was chosen; the original video size is 640 × 480, the development platform is Win7 and the development environment is QT + OpenCV. Fig. 3 shows the results of pedestrian detection and tracking, where Fig. 3a is the background frame image without pedestrians, Fig. 3b shows the foreground contours and their bounding rectangles after preliminary foreground extraction, Fig. 3c shows the candidate target regions after preliminary filtering by the body characteristics of pedestrians, Fig. 3d shows the pedestrian regions recognised after HOG feature detection, Fig. 3e shows the result of tracking the rectangular regions with the optical flow method, and Fig. 3f shows the pedestrian tracking result after drawing the rectangular boxes.
Claims (8)
1. A pedestrian detection and tracking method based on multi-stage processing, characterised in that the detection and tracking method comprises the following steps:

1) extract a background frame and preprocess it: the first frame of the input video is used as the initial background image frame, image preprocessing is performed on each input original image frame, and videos of different sizes are normalised;
2) extract foreground regions and mark them with rectangular boxes: the current frame is differenced against the background image frame, the foreground information is extracted and binarised, each connected foreground region is marked by its outer contour, a minimum bounding rectangle is drawn for each contour region, and the size information of all rectangles is recorded; a rectangular region is denoted R_n, and its size information comprises the length H_n, the width W_n and the angle λ_n; the foreground extraction formula is:

Δd_t(x, y) = |I_t(x, y) − B_t(x, y)|

where Δd_t(x, y), I_t(x, y) and B_t(x, y) denote the grey values at position (x, y) and time t of the difference image, the current frame image and the background image, respectively; the difference image grey value Δd_t(x, y) is compared with a threshold T, the parts greater than T are classified as moving-object parts, and the rest as background parts;
3) preliminary judgement of target regions: the external shape characteristics of pedestrians are used to judge the target regions preliminarily, which comprises setting thresholds L_1 and L_2 for the ratio of the major-axis length to the minor-axis length of the rectangle and a threshold A for the angle between the major axis and the ground; rectangular regions within the threshold ranges are preliminarily classified as candidate target regions, and the other rectangular regions are classified as non-target regions and discarded;

where L_1, L_2 and A are set values, and R_n indicates whether a region is a target region, 1 denoting a target region and 0 a non-target region;
4) final judgement of target regions: the number of rectangular regions R_n judged to be candidate target regions is counted; the pedestrian detection algorithm based on HOG features is first used to exclude non-target regions, and the optical flow method is then used to track the remaining target regions.
2. The pedestrian detection and tracking method based on multi-stage processing according to claim 1, characterised in that in step 4) the HOG feature detection only examines the image regions preliminarily judged to be candidate pedestrian regions and classifies the target regions in combination with an SVM pedestrian classifier: each candidate pedestrian region is converted into an independent image frame, its features are extracted after size normalisation and fed into the SVM classifier in turn, and the classifier finally decides whether the region contains a pedestrian.
3. The pedestrian detection and tracking method based on multi-stage processing according to claim 1 or 2, characterised in that in step 4) the pedestrian detection algorithm based on HOG features normalises the sample library pictures and the pictures to be examined to 64 × 32; the block size is 16 × 16, each block is divided into 4 cells of 8 × 8 pixels, and the stride is 8 pixels.
4. The pedestrian detection and tracking method based on multi-stage processing according to claim 1 or 2, characterised in that in step 4) an improved Lucas-Kanade algorithm is adopted for optical flow estimation: let the previous frame be F_n and the current frame be F_{n+1}, with X target regions in the previous frame and Y target regions in the current frame; the target regions are converted into target image frames bounded by their rectangular boxes, written F_n = {F_{n,1}, F_{n,2}, ..., F_{n,X}} and F_{n+1} = {F_{n+1,1}, F_{n+1,2}, ..., F_{n+1,Y}}, and their centre points, computed from the rectangle sizes, are C_n = {C_{n,1}, C_{n,2}, ..., C_{n,X}} and C_{n+1} = {C_{n+1,1}, C_{n+1,2}, ..., C_{n+1,Y}}; the point set C_n is then used as the optical flow feature point set, pyramidal optical flow is computed towards the target position point set C_{n+1}, and finally the feature points in C_{n+1} corresponding to each point in C_n are found and matched; after a successful match the result is mapped to the corresponding position in F_{n+1}, the centre point set of the target regions in F_{n+1} is taken as the optical flow feature point set for matching the next frame, and the iteration continues in this way.
5. The pedestrian detection and tracking method based on multi-stage processing according to claim 4, characterised in that in step 4) the procedure for tracking with the Lucas-Kanade algorithm is as follows:

4.1) initialise the feature points to be tracked, here C_n;

4.2) compute the optical flow pyramids of the two frames and, from the optical flow between them, compute the target points corresponding to the initialised feature points, i.e. the points to be tracked;

4.3) mark the feature points and the motion trajectories, swap the input and output points, swap the previous frame with the current frame and the previous-frame pyramid with the current-frame pyramid, and perform the next round of tracking;

where the optical flow pyramid computation is realised with the function cvCalcOpticalFlowPyrLK() provided by OpenCV.
6. The pedestrian detection and tracking method based on multi-stage processing according to claim 5, characterised in that multiple target regions are used instead of the whole image frame, optical flow tracking is computed for each target region separately, and the final tracking results are mapped onto the original image frame; F_n replaces the previous-frame parameter prev in the function, F_{n+1} replaces the current-frame parameter curr, C_n is used as the feature point parameter prev_features, and C_{n+1} is used as the target position point set for the pyramidal optical flow computation.
7. The pedestrian detection and tracking method based on multi-stage processing according to claim 1 or 2, characterised in that in step 1), at every set time interval, an image frame whose difference from the background is less than a threshold is used to dynamically update the background frame.
8. The pedestrian detection and tracking method based on multi-stage processing according to claim 1 or 2, characterised in that in step 2), when the background subtraction method is used to obtain the foreground, contour regions whose centre points are closer than a threshold are merged and marked with a rectangular box enclosing the contour edges as a moving region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410471167.6A CN104239865B (en) | 2014-09-16 | 2014-09-16 | Pedestrian detecting and tracking method based on multi-stage detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104239865A | 2014-12-24 |
CN104239865B | 2017-04-12 |
Legal Events
Code | Title | Description |
---|---|---|
C06 / PB01 | Publication | |
C10 / SE01 | Entry into substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right | Effective date of registration: 20190107. Patentee after: Hangzhou Entropy Technology Co., Ltd. (310021, 325, Room 1, No. 1 Jian Qiao Road, Jianggan District, Hangzhou, Zhejiang). Patentee before: NINGBO XONLINK INFORMATION TECHNOLOGY CO., LTD. (315824, 2-11, No. 479 West Road, Beilun, Ningbo, Zhejiang). |