CN103559498A - Rapid man and vehicle target classification method based on multi-feature fusion - Google Patents
Publication number: CN103559498A · Application number: CN201310436746.2A · Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention belongs to the fields of image processing and photoelectric technology, and specifically relates to a rapid human-vehicle target classification method based on multi-feature fusion. The method comprises the following steps: acquiring a monitoring video image and computing the difference image of two adjacent frames; segmenting the difference image, removing noise and filling holes to form a target image; determining target regions in the target image and calculating, for each region, its area, the area of its bounding rectangle, its centroid coordinates and its rectangle saturation; judging two target regions in adjacent frames whose areas differ little and whose centroid Euclidean distance is smallest to be the same target; and, if the same target appears stably over several frames, classifying it as a person or a vehicle according to its area, speed and rectangle saturation. The method achieves rapid discrimination of the person/vehicle attribute of moving targets, with high classification accuracy and good real-time performance.
Description
Technical field
The invention belongs to the fields of image processing and photoelectric technology, and specifically relates to a rapid human-vehicle target classification method based on multi-feature fusion.
Background art
A video monitoring system observes a large number of moving targets, among which personnel and vehicles are the most common and are also the highest-priority classes to monitor. Because the management requirements for personnel targets and vehicle targets differ markedly, video monitoring systems need person/vehicle target classification.
Person/vehicle classification methods in the prior art are mainly based on statistical training: they require collecting a large number of vehicle and personnel image samples, their recognition speed is slow, and their demands on computing equipment are high, which greatly limits the recognition effect.
Summary of the invention
The technical problem to be solved by the present invention is that prior-art person/vehicle classification methods are mainly based on statistical training, require collecting large numbers of vehicle and personnel image samples, and have low recognition efficiency.
The technical solution of the present invention is as follows:
A rapid human-vehicle target classification method, comprising the following steps:
Acquire the monitoring video image and perform inter-frame differencing on adjacent frames to obtain a difference image. Determine a threshold with the maximum between-class variance (Otsu) method and segment the difference image. Remove the noise in the segmented image and fill holes to form a target image. Perform pixel statistics on the target image and remove regions containing few pixels to form target regions; construct each region's bounding rectangle from the maxima and minima of its pixels' horizontal and vertical coordinates, and calculate each target region's area, bounding-rectangle area, centroid coordinates and rectangle saturation. Judge two target regions in adjacent frames whose areas differ little and whose centroid Euclidean distance is smallest to be the same target. If the same target appears stably over a number of frames, judge it to be a person or a vehicle according to its area, speed and rectangle saturation.
In a preferred embodiment, the method of the present invention comprises the following steps:
Step 1.
Acquire the monitoring video image and denote the k-th frame as F_k. Perform inter-frame differencing on adjacent frames: take the absolute value of the difference between corresponding pixel values of the two frames to obtain the difference image FD_k.
Step 2.
Calculate the mean μ and standard deviation σ of the difference image FD_k. For the pixels of FD_k whose values exceed μ + σ, compute the optimal segmentation threshold Th with the maximum between-class variance criterion, and segment FD_k with the threshold Th to form the segmented image FS_k:
FS_k(i, j) = 1 if FD_k(i, j) > Th, and FS_k(i, j) = 0 otherwise,
wherein FD_k(i, j) denotes the pixel value in row i and column j of the difference image FD_k, and FS_k(i, j) denotes the pixel value in row i and column j of the segmented image FS_k.
Step 3.
Use morphological operations to remove the noise in the segmented image and fill its holes, forming the target image FM_k.
Step 4.
Count the pixels of each connected region of 1-valued pixels in the target image FM_k and remove regions containing fewer than N1 pixels. For each retained target region, construct its bounding rectangle from the maxima and minima of its pixels' horizontal and vertical coordinates in order to identify the region, and calculate each target region's area S, bounding-rectangle area RS, centroid coordinates (x, y) and rectangle saturation R.
Step 5.
Apply steps 1 to 4 to every frame of the monitoring video, and record for each frame such features as the area of each target region, the area of its bounding rectangle and its centroid coordinates. Then associate targets by these features: suppose frame k contains N target regions; for its n-th target region, among the target regions of frame k-1 whose areas differ from it by less than 10%, the one with the minimum centroid Euclidean distance is judged to be the same target.
Step 6.
If the same target appears stably for more than N2 frames, judge whether it is a person or a vehicle by multi-feature discrimination: score the target's area, speed and rectangle saturation against their respective low and high thresholds, and fuse the scores into a decision value F_n as their weighted sum.
Wherein,
d_s, d_v, d_r are, in order, the area feature weighting coefficient, the speed feature weighting coefficient and the rectangle saturation feature weighting coefficient;
ThS_1 and ThS_2 are, in order, the low and high thresholds of the target area;
ThV_1 and ThV_2 are, in order, the low and high thresholds of the target speed;
ThR_1 and ThR_2 are, in order, the low and high thresholds of the target rectangle saturation;
S_n is the area of the n-th target region of frame k;
V_n is the speed of the n-th target region of frame k;
R_n is the rectangle saturation of the n-th target region of frame k;
the values of ThS_1, ThS_2, ThV_1, ThV_2, ThR_1 and ThR_2 are set according to the monitoring scene.
Set a threshold Th_0. If F_n ≥ Th_0, the target is judged to be a vehicle; if F_n < Th_0, the target is judged to be a person.
In a preferred embodiment, in step 3 a 5 × 5 square template is used to erode FS_k to remove noise; a 7 × 7 square template is then used to dilate the de-noised image to fill its holes, forming the target image FM_k.
In a preferred embodiment, in step 6 the speed of the n-th target region of frame k (k ≥ 2) is calculated as follows:
V_nk = a · D_nk + b · D_n(k-1)
D_nk = sqrt((xn_k − xn_{k−1})² + (yn_k − yn_{k−1})²)
Wherein,
V_nk is the speed of the n-th target region of frame k;
a and b are weighting coefficients, with preferred values a = 0.7 and b = 0.3;
D_nk is the centroid displacement of the n-th target region of frame k;
D_n(k-1) is the centroid displacement, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k;
(xn_k, yn_k) are the centroid coordinates of the n-th target region of frame k;
(xn_{k−1}, yn_{k−1}) are the centroid coordinates, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k.
In a preferred embodiment, N1 = 60; N2 = 10; d_s = 0.3, d_v = 0.3, d_r = 0.4; ThS_1 = 2000 pixels, ThS_2 = 6000 pixels; ThV_1 = 4 pixels/frame, ThV_2 = 8 pixels/frame; ThR_1 = 0.5, ThR_2 = 0.8; Th_0 = 0.5.
Beneficial effects of the present invention:
By extracting features such as the area, speed and rectangle saturation of a moving target, the rapid human-vehicle target classification method based on multi-feature fusion of the present invention achieves rapid discrimination of the person/vehicle attribute of moving targets, with high classification accuracy and good real-time performance.
Brief description of the drawings
Fig. 1 is a flowchart of the rapid human-vehicle target classification method based on multi-feature fusion of the present invention.
Embodiments
The rapid human-vehicle target classification method based on multi-feature fusion of the present invention is described in detail below with reference to the drawings and an embodiment. The method comprises the following steps:
Step 1.
Acquire the monitoring video image and denote the k-th frame as F_k. Perform inter-frame differencing on adjacent frames: take the absolute value of the difference between corresponding pixel values of the two frames to obtain the difference image FD_k, i.e. FD_k = |F_k − F_{k−1}|.
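The inter-frame differencing of step 1 can be sketched in a few lines. This is an illustrative sketch rather than code from the patent; the function name and the representation of frames as 2-D integer arrays are our assumptions.

```python
import numpy as np

def frame_difference(frame_k, frame_k_minus_1):
    """Step 1: FD_k = |F_k - F_{k-1}| for two greyscale frames
    given as equal-shaped 2-D integer arrays."""
    a = np.asarray(frame_k, dtype=np.int32)
    b = np.asarray(frame_k_minus_1, dtype=np.int32)
    # Widen to int32 first so the subtraction of uint8 pixels cannot wrap.
    return np.abs(a - b).astype(np.uint8)
```

For example, differencing two 2 × 2 frames pixel by pixel yields the element-wise absolute differences.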
Step 2.
Let the difference image FD_k have r rows and c columns, and let FD_k(i, j) denote the pixel value in row i and column j. Calculate the mean μ and standard deviation σ of FD_k:
μ = (1 / (r·c)) · Σ_i Σ_j FD_k(i, j),  σ = sqrt((1 / (r·c)) · Σ_i Σ_j (FD_k(i, j) − μ)²)
For the pixels of FD_k whose values exceed μ + σ, compute the optimal segmentation threshold Th with the maximum between-class variance (OTSU) criterion.
Segment FD_k with the threshold Th to form the segmented image FS_k. The segmentation rule is:
FS_k(i, j) = 1 if FD_k(i, j) > Th, and FS_k(i, j) = 0 otherwise,
wherein FS_k(i, j) denotes the pixel value in row i and column j of the segmented image FS_k.
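Step 2 can be sketched as follows. Restricting the Otsu computation to the pixels above μ + σ follows the text; the binary rule FS_k(i, j) = 1 iff FD_k(i, j) > Th is our reading of the segmentation formula, whose image is not reproduced in this text, and the function names are ours.

```python
import numpy as np

def otsu_threshold(values):
    """Classic Otsu: pick the grey level t that maximizes the
    between-class variance of the {<= t} / {> t} split."""
    hist = np.bincount(values.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # probability of class {<= t}
    mu_t = np.cumsum(p * np.arange(256))  # cumulative mean up to t
    mu_total = mu_t[-1]
    denom = omega * (1.0 - omega)
    safe = np.where(denom > 0, denom, 1.0)
    # Between-class variance; zero where a class is empty.
    sigma_b2 = np.where(denom > 0, (mu_total * omega - mu_t) ** 2 / safe, 0.0)
    return int(np.argmax(sigma_b2))

def segment_difference_image(fd):
    """Step 2 sketch: Otsu threshold Th computed over the pixels of
    FD_k exceeding mu + sigma, then FS_k = (FD_k > Th)."""
    fd = np.asarray(fd, dtype=np.float64)
    mu, sigma = fd.mean(), fd.std()
    bright = fd[fd > mu + sigma]
    th = otsu_threshold(bright.astype(np.uint8)) if bright.size else mu + sigma
    return (fd > th).astype(np.uint8), th
```

Otsu is only meaningful when the selected pixels are bimodal; for degenerate input the sketch simply falls back to a low threshold.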
Step 3.
Use morphological operations to remove the noise in the segmented image and fill its holes: erode FS_k with a 5 × 5 square template to remove noise, then dilate the de-noised image with a 7 × 7 square template to fill the holes, forming the target image FM_k.
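The step-3 morphology can be sketched with plain array shifts, assuming binary (0/1) masks; the helper names are ours, and a production system would typically use an image-processing library instead.

```python
import numpy as np

def erode(img, size):
    """Binary erosion with an all-ones size x size square template."""
    pad = size // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.ones_like(img)
    for di in range(size):
        for dj in range(size):
            # A pixel survives only if every template neighbour is 1.
            out &= padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img, size):
    """Binary dilation with an all-ones size x size square template."""
    pad = size // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for di in range(size):
        for dj in range(size):
            # A pixel turns 1 if any template neighbour is 1.
            out |= padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def clean_mask(fs):
    """Step 3: erode FS_k with a 5x5 template to remove noise, then
    dilate with a 7x7 template to fill holes, giving FM_k."""
    return dilate(erode(np.asarray(fs, dtype=np.uint8), 5), 7)
```

An isolated 1-pixel (noise) is removed by the 5 × 5 erosion, while a solid block shrinks and is then re-grown by the 7 × 7 dilation.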
Step 4.
Count the pixels of each connected region of 1-valued pixels in the target image FM_k and remove regions containing fewer than N1 pixels (N1 = 60 in this embodiment). For each retained target region, construct its bounding rectangle from the maxima and minima of its pixels' horizontal and vertical coordinates in order to identify the region, and calculate each target region's area S, bounding-rectangle area RS, centroid coordinates (x, y) and rectangle saturation R.
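The step-4 region statistics can be sketched with a flood fill. The reading of "rectangle saturation" as R = S / RS (region area over bounding-rectangle area) is our assumption, consistent with the later observation that compact vehicle blobs have high saturation; the function name and dictionary layout are ours.

```python
import numpy as np
from collections import deque

def region_features(fm, min_pixels=60):
    """Step 4 sketch: label 4-connected regions of 1-pixels in FM_k,
    drop those smaller than N1 (=60 in the embodiment), and compute the
    area S, bounding-rectangle area RS, centroid (x, y) and rectangle
    saturation R = S / RS for each retained region."""
    fm = np.asarray(fm)
    seen = np.zeros(fm.shape, dtype=bool)
    regions = []
    for i0 in range(fm.shape[0]):
        for j0 in range(fm.shape[1]):
            if fm[i0, j0] != 1 or seen[i0, j0]:
                continue
            # BFS flood fill collecting one connected region.
            q, pixels = deque([(i0, j0)]), []
            seen[i0, j0] = True
            while q:
                i, j = q.popleft()
                pixels.append((i, j))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < fm.shape[0] and 0 <= nj < fm.shape[1]
                            and fm[ni, nj] == 1 and not seen[ni, nj]):
                        seen[ni, nj] = True
                        q.append((ni, nj))
            if len(pixels) < min_pixels:
                continue  # noise region, below N1 pixels
            rows = [p[0] for p in pixels]
            cols = [p[1] for p in pixels]
            S = len(pixels)
            RS = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
            regions.append({
                "S": S,
                "RS": RS,
                "centroid": (sum(cols) / S, sum(rows) / S),  # (x, y)
                "R": S / RS,
            })
    return regions
```

A solid 8 × 8 block (64 pixels, above N1 = 60) yields one region whose saturation is exactly 1.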
Step 5.
Apply steps 1 to 4 to every frame of the monitoring video, and record for each frame such features as the area of each target region, the area of its bounding rectangle and its centroid coordinates. Then associate targets by these features: suppose frame k contains N target regions; for its n-th target region, among the target regions of frame k-1 whose areas differ from it by less than 10%, the one with the minimum centroid Euclidean distance is judged to be the same target.
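The step-5 association rule can be sketched as follows, assuming the per-region dictionaries of the previous step; the function name and the return convention (an index into the previous frame, or None) are ours.

```python
import math

def associate(regions_k, regions_k_1):
    """Step 5 sketch: for each region of frame k, among the regions of
    frame k-1 whose area differs from it by less than 10%, pick the one
    with minimum centroid Euclidean distance as the same target.
    Returns match[n] = index into regions_k_1, or None if no candidate."""
    matches = []
    for reg in regions_k:
        best, best_d = None, float("inf")
        for m, prev in enumerate(regions_k_1):
            if abs(prev["S"] - reg["S"]) >= 0.1 * reg["S"]:
                continue  # area differs by 10% or more: not a candidate
            d = math.dist(reg["centroid"], prev["centroid"])
            if d < best_d:
                best, best_d = m, d
        matches.append(best)
    return matches
```

A region of area 200 is rejected as a match for one of area 100 even if its centroid is closer, because the area gate is applied first.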
Step 6.
If the same target appears stably for more than N2 frames, judge whether it is a person or a vehicle by multi-feature discrimination: score the target's area, speed and rectangle saturation against their respective low and high thresholds, and fuse the scores into a decision value F_n as their weighted sum.
Wherein,
d_s, d_v, d_r are, in order, the area feature weighting coefficient, the speed feature weighting coefficient and the rectangle saturation feature weighting coefficient;
ThS_1 and ThS_2 are, in order, the low and high thresholds of the target area;
ThV_1 and ThV_2 are, in order, the low and high thresholds of the target speed;
ThR_1 and ThR_2 are, in order, the low and high thresholds of the target rectangle saturation;
S_n is the area of the n-th target region of frame k;
V_n is the speed of the n-th target region of frame k;
R_n is the rectangle saturation of the n-th target region of frame k.
The values of ThS_1, ThS_2, ThV_1, ThV_2, ThR_1 and ThR_2 are set according to the monitoring scene.
In this embodiment, N2 = 10; d_s = 0.3, d_v = 0.3, d_r = 0.4; ThS_1 = 2000 pixels, ThS_2 = 6000 pixels; ThV_1 = 4 pixels/frame, ThV_2 = 8 pixels/frame; ThR_1 = 0.5, ThR_2 = 0.8.
The speed of the n-th target region of frame k (k ≥ 2) is calculated as follows:
V_nk = a · D_nk + b · D_n(k-1)
D_nk = sqrt((xn_k − xn_{k−1})² + (yn_k − yn_{k−1})²)
Wherein,
V_nk is the speed of the n-th target region of frame k;
a and b are weighting coefficients; in this embodiment a = 0.7, b = 0.3;
D_nk is the centroid displacement of the n-th target region of frame k;
D_n(k-1) is the centroid displacement, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k;
(xn_k, yn_k) are the centroid coordinates of the n-th target region of frame k;
(xn_{k−1}, yn_{k−1}) are the centroid coordinates, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k.
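The speed estimate can be sketched directly from the variable definitions above. Note that the combination V_nk = a·D_nk + b·D_n(k-1) is our reconstruction of the formula (whose image is not reproduced in this text) from the surrounding definitions; the function names are ours.

```python
import math

def centroid_displacement(c_k, c_k_1):
    """D_nk: Euclidean distance between the centroid of the n-th region
    of frame k and that of the associated region of frame k-1."""
    return math.dist(c_k, c_k_1)

def region_speed(d_nk, d_nk_1, a=0.7, b=0.3):
    """V_nk = a * D_nk + b * D_n(k-1): the current centroid displacement
    smoothed with the previous one, using the embodiment's weights
    a = 0.7 and b = 0.3."""
    return a * d_nk + b * d_nk_1
```

The smoothing damps frame-to-frame jitter in the centroid positions while still tracking genuine changes in speed.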
Considering that a personnel target has a small area while a vehicle target has a large area, that a personnel target moves slowly while a vehicle target moves fast, and that a personnel target has low rectangle saturation while a vehicle target has high rectangle saturation, person/vehicle classification can be performed by the following criterion:
Set a threshold Th_0. If F_n ≥ Th_0, the target is judged to be a vehicle; if F_n < Th_0, the target is judged to be a person. In this embodiment, Th_0 = 0.5.
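The step-6 fusion can be sketched as follows. The three-level mapping of each feature to {0, 0.5, 1} via its low/high thresholds is an assumption on our part (the patent's scoring formula image is not reproduced), chosen to be consistent with the weights summing to 1 and the decision threshold Th_0 = 0.5; the function names are ours, and the default thresholds are the embodiment's values.

```python
def feature_score(x, th_low, th_high):
    """Map one feature to {0.0, 0.5, 1.0} with its low/high thresholds.
    ASSUMPTION: the patent's exact scoring rule is not reproduced; this
    three-level mapping is an illustrative reconstruction."""
    if x >= th_high:
        return 1.0
    if x <= th_low:
        return 0.0
    return 0.5

def classify(s_n, v_n, r_n,
             d_s=0.3, d_v=0.3, d_r=0.4,
             th_s=(2000, 6000), th_v=(4, 8), th_r=(0.5, 0.8),
             th_0=0.5):
    """Step 6 sketch: fuse the area, speed and rectangle-saturation
    scores with weights d_s, d_v, d_r; F_n >= Th_0 means vehicle,
    otherwise person.  Defaults are the embodiment's values and should
    be tuned per monitoring scene."""
    f_n = (d_s * feature_score(s_n, *th_s)
           + d_v * feature_score(v_n, *th_v)
           + d_r * feature_score(r_n, *th_r))
    return "vehicle" if f_n >= th_0 else "person"
```

A large, fast, compact blob (e.g. area 7000 px, speed 10 px/frame, saturation 0.9) scores F_n = 1.0 and is labelled a vehicle; a small, slow, sparse blob scores 0.0 and is labelled a person.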
Claims (6)
1. A rapid human-vehicle target classification method based on multi-feature fusion, characterized by comprising the following steps:
acquiring the monitoring video image and performing inter-frame differencing on adjacent frames to obtain a difference image; determining a threshold with the maximum between-class variance method and segmenting the difference image; removing the noise in the segmented image and filling holes to form a target image; performing pixel statistics on the target image, removing regions containing few pixels to form target regions, constructing each region's bounding rectangle from the maxima and minima of its pixels' horizontal and vertical coordinates, and calculating each target region's area, bounding-rectangle area, centroid coordinates and rectangle saturation; judging two target regions in adjacent frames whose areas differ little and whose centroid Euclidean distance is smallest to be the same target; and, if the same target appears stably over a number of frames, judging the target to be a person or a vehicle according to its area, speed and rectangle saturation.
2. The rapid human-vehicle target classification method based on multi-feature fusion according to claim 1, characterized by comprising the following steps:
Step 1. Acquire the monitoring video image and denote the k-th frame as F_k; perform inter-frame differencing on adjacent frames: take the absolute value of the difference between corresponding pixel values of the two frames to obtain the difference image FD_k.
Step 2. Calculate the mean μ and standard deviation σ of the difference image FD_k; for the pixels of FD_k whose values exceed μ + σ, compute the optimal segmentation threshold Th with the maximum between-class variance criterion, and segment FD_k with the threshold Th to form the segmented image FS_k:
FS_k(i, j) = 1 if FD_k(i, j) > Th, and FS_k(i, j) = 0 otherwise,
wherein FD_k(i, j) denotes the pixel value in row i and column j of the difference image FD_k, and FS_k(i, j) denotes the pixel value in row i and column j of the segmented image FS_k.
Step 3. Use morphological operations to remove the noise in the segmented image and fill its holes, forming the target image FM_k.
Step 4. Count the pixels of each connected region of 1-valued pixels in the target image FM_k and remove regions containing fewer than N1 pixels; for each retained target region, construct its bounding rectangle from the maxima and minima of its pixels' horizontal and vertical coordinates in order to identify the region, and calculate each target region's area S, bounding-rectangle area RS, centroid coordinates (x, y) and rectangle saturation R.
Step 5. Apply steps 1 to 4 to every frame of the monitoring video, and record for each frame such features as the area of each target region, the area of its bounding rectangle and its centroid coordinates; associate targets by these features: suppose frame k contains N target regions; for its n-th target region, among the target regions of frame k-1 whose areas differ from it by less than 10%, the one with the minimum centroid Euclidean distance is judged to be the same target.
Step 6. If the same target appears stably for more than N2 frames, judge whether it is a person or a vehicle by multi-feature discrimination: score the target's area, speed and rectangle saturation against their respective low and high thresholds and fuse the scores into a decision value F_n as their weighted sum,
wherein d_s, d_v, d_r are, in order, the area feature weighting coefficient, the speed feature weighting coefficient and the rectangle saturation feature weighting coefficient; ThS_1 and ThS_2 are, in order, the low and high thresholds of the target area; ThV_1 and ThV_2 are, in order, the low and high thresholds of the target speed; ThR_1 and ThR_2 are, in order, the low and high thresholds of the target rectangle saturation; S_n is the area of the n-th target region of frame k; V_n is the speed of the n-th target region of frame k; R_n is the rectangle saturation of the n-th target region of frame k; and the values of ThS_1, ThS_2, ThV_1, ThV_2, ThR_1 and ThR_2 are set according to the monitoring scene.
Set a threshold Th_0; if F_n ≥ Th_0, the target is judged to be a vehicle; if F_n < Th_0, the target is judged to be a person.
3. The rapid human-vehicle target classification method based on multi-feature fusion according to claim 2, characterized in that in step 6 the speed of the n-th target region of frame k (k ≥ 2) is calculated as:
V_nk = a · D_nk + b · D_n(k-1), with D_nk = sqrt((xn_k − xn_{k−1})² + (yn_k − yn_{k−1})²),
wherein V_nk is the speed of the n-th target region of frame k; a and b are weighting coefficients; D_nk is the centroid displacement of the n-th target region of frame k; D_n(k-1) is the centroid displacement, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k; (xn_k, yn_k) are the centroid coordinates of the n-th target region of frame k; and (xn_{k−1}, yn_{k−1}) are the centroid coordinates, in frame k-1, of the target region judged to be the same target as the n-th target region of frame k.
4. The rapid human-vehicle target classification method based on multi-feature fusion according to claim 3, characterized in that a = 0.7 and b = 0.3.
5. The rapid human-vehicle target classification method based on multi-feature fusion according to claim 2 or 3, characterized in that in step 3 FS_k is eroded with a 5 × 5 square template to remove noise, and the de-noised image is then dilated with a 7 × 7 square template to fill its holes, forming the target image FM_k.
6. The rapid human-vehicle target classification method based on multi-feature fusion according to claim 2, 3 or 4, characterized in that N1 = 60; N2 = 10; d_s = 0.3, d_v = 0.3, d_r = 0.4; ThS_1 = 2000 pixels, ThS_2 = 6000 pixels; ThV_1 = 4 pixels/frame, ThV_2 = 8 pixels/frame; ThR_1 = 0.5, ThR_2 = 0.8; Th_0 = 0.5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310436746.2A CN103559498A (en) | 2013-09-24 | 2013-09-24 | Rapid man and vehicle target classification method based on multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103559498A true CN103559498A (en) | 2014-02-05 |
Family
ID=50013739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310436746.2A Pending CN103559498A (en) | 2013-09-24 | 2013-09-24 | Rapid man and vehicle target classification method based on multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103559498A (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104050684A (en) * | 2014-05-27 | 2014-09-17 | 华中科技大学 | Video moving object classification method and system based on on-line training |
CN104658009A (en) * | 2015-01-09 | 2015-05-27 | 北京环境特性研究所 | Moving-target detection method based on video images |
CN104657741A (en) * | 2015-01-09 | 2015-05-27 | 北京环境特性研究所 | Target classification method based on video images |
CN104866825A (en) * | 2015-05-17 | 2015-08-26 | 华南理工大学 | Gesture language video frame sequence classification method based on Hu moments |
CN106778746A (en) * | 2016-12-23 | 2017-05-31 | 成都赫尔墨斯科技有限公司 | A kind of anti-unmanned plane method of multiple target |
CN107273815A (en) * | 2017-05-24 | 2017-10-20 | 中国农业大学 | A kind of individual behavior recognition methods and system |
CN107563985A (en) * | 2017-08-31 | 2018-01-09 | 成都空御科技有限公司 | A kind of detection method of infrared image moving air target |
CN107564041A (en) * | 2017-08-31 | 2018-01-09 | 成都空御科技有限公司 | A kind of detection method of visible images moving air target |
CN107909081A (en) * | 2017-10-27 | 2018-04-13 | 东南大学 | The quick obtaining and quick calibrating method of image data set in a kind of deep learning |
CN108154521A (en) * | 2017-12-07 | 2018-06-12 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of moving target detecting method based on object block fusion |
CN108596946A (en) * | 2018-03-21 | 2018-09-28 | 中国航空工业集团公司洛阳电光设备研究所 | A kind of moving target real-time detection method and system |
WO2019237976A1 (en) * | 2018-06-11 | 2019-12-19 | 全球能源互联网研究院有限公司 | Differential image-based foreign matter detection method and apparatus, and device and storage medium |
CN110738858A (en) * | 2019-10-31 | 2020-01-31 | 浙江大华技术股份有限公司 | Camera image processing method, coder-decoder and storage device |
CN110940959A (en) * | 2019-12-13 | 2020-03-31 | 中国电子科技集团公司第五十四研究所 | Man-vehicle classification and identification method for low-resolution radar ground target |
CN111260779A (en) * | 2018-11-30 | 2020-06-09 | 华为技术有限公司 | Map construction method, device and system and storage medium |
WO2020151172A1 (en) * | 2019-01-23 | 2020-07-30 | 平安科技(深圳)有限公司 | Moving object detection method and apparatus, computer device, and storage medium |
CN112329729A (en) * | 2020-11-27 | 2021-02-05 | 珠海大横琴科技发展有限公司 | Small target ship detection method and device and electronic equipment |
CN116030367A (en) * | 2023-03-27 | 2023-04-28 | 山东智航智能装备有限公司 | Unmanned aerial vehicle viewing angle moving target detection method and device |
CN117746272A (en) * | 2024-02-21 | 2024-03-22 | 西安迈远科技有限公司 | Unmanned aerial vehicle-based water resource data acquisition and processing method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325690A (en) * | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
CN101729872A (en) * | 2009-12-11 | 2010-06-09 | 南京城际在线信息技术有限公司 | Video monitoring image based method for automatically distinguishing traffic states of roads |
CN101739551A (en) * | 2009-02-11 | 2010-06-16 | 北京智安邦科技有限公司 | Method and system for identifying moving objects |
CN101739685A (en) * | 2009-02-11 | 2010-06-16 | 北京智安邦科技有限公司 | Moving object classification method and system thereof |
CN101826228A (en) * | 2010-05-14 | 2010-09-08 | 上海理工大学 | Detection method of bus passenger moving objects based on background estimation |
US20110001615A1 (en) * | 2009-07-06 | 2011-01-06 | Valeo Vision | Obstacle detection procedure for motor vehicle |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20140205 |