CN103871080A - Method for adaptively quantizing optical flow features on complex video monitoring scenes - Google Patents

Method for adaptively quantizing optical flow features on complex video monitoring scenes

Info

Publication number
CN103871080A
CN103871080A (application CN201410114805.9A)
Authority
CN
China
Prior art keywords
region
thr
motion
video
optical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410114805.9A
Other languages
Chinese (zh)
Inventor
樊亚文 (Fan Yawen)
郑世宝 (Zheng Shibao)
吴双 (Wu Shuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201410114805.9A
Publication of CN103871080A
Legal status: Pending

Abstract

The invention belongs to the technical field of digital image processing and relates to a method for adaptively quantizing optical-flow features in complex video surveillance scenes. The method first performs probability-based denoising of the video space using optical-flow features and then computes local statistical characteristics, adaptively quantizes the spatial positions, and divides the video space into a number of micro-regions. Finally, each micro-region is filtered by a motion-complexity threshold to decide its quantization level, and a visual dictionary is generated, achieving adaptive quantization. The method describes the validity and diversity of motion in video surveillance scenes through local statistics of optical flow. Fusing the valid-pixel proportion with the motion-complexity feature characterizes the activeness of local motion and enables adaptive quantization of optical-flow feature positions; the motion-complexity feature alone characterizes the diversity of local motion and enables adaptive quantization of optical-flow feature directions. The adaptively quantized optical-flow features provide better discriminative power in subsequent bag-of-words scene analysis.

Description

Adaptive quantization method for optical-flow features in complex video surveillance scenes
Technical field
The present invention relates to a method in the field of digital image processing, specifically an adaptive quantization method for optical-flow features in complex video surveillance scenes.
Background technology
Computer vision techniques are becoming increasingly important in intelligent video surveillance, for example in traffic-flow monitoring and event detection such as congestion detection. Within computer vision applications, behavior analysis is a fundamental task, but given the complexity of environmental conditions (illumination and weather changes, crowding, and so on) it still faces challenges. Current research on behavior analysis falls into two broad classes. The first class is based on target-tracking features; however, reliable multi-target tracking algorithms are still lacking for complex scenes, and tracking algorithms have difficulty adapting to abrupt changes in motion. The second class, based directly on low-level motion features, is therefore better suited to the analysis of complex surveillance scenes. The most widely used such feature is optical flow, which contains a large amount of local motion information.
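As background for the reader (not part of the patent text), dense optical flow of this kind can be extracted per frame pair with a standard library; below is a minimal sketch using OpenCV's Farneback method, with a hypothetical input file name, since the patent does not prescribe a particular optical-flow algorithm:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("traffic.avi")            # hypothetical input video
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy): per-pixel displacement between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        per_frame.append(flow)
        prev_gray = gray
    flows = np.stack(per_frame)                      # shape (N, height, width, 2)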
A search of the prior art shows that optical-flow features are generally quantized with fixed position and direction precision. For example, Wang X., Ma X., and Grimson W. E. L., "Unsupervised activity perception in crowded and complicated scenes using hierarchical Bayesian models" (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(3): 539-555), represent video clips in a bag-of-words manner: the optical-flow direction is uniformly quantized into 4 bins, and position is quantized by gridding the surveillance scene into cells of size 10 × 10.
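For contrast with the adaptive scheme of the present invention, here is an illustrative sketch of such fixed quantization, assuming a per-frame flow field and an amplitude threshold; the 10 × 10 grid and 4 direction bins follow the cited paper, while the helper itself and its parameters are assumptions:

    import numpy as np

    def fixed_quantize(flow, grid=10, n_dir=4, a_thr=0.5):
        """Map each sufficiently strong flow vector to a (cell, direction) word."""
        cells_per_row = flow.shape[1] // grid
        ang = np.arctan2(flow[..., 1], flow[..., 0])       # direction in (-pi, pi]
        ys, xs = np.nonzero(np.linalg.norm(flow, axis=-1) >= a_thr)
        words = []
        for y, x in zip(ys, xs):
            cell = (y // grid) * cells_per_row + (x // grid)
            d = int((ang[y, x] + np.pi) / (2 * np.pi) * n_dir) % n_dir
            words.append((cell, d))
        return words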
This fixed quantization scheme faces three problems: 1) lowering the quantization precision loses spatial and directional resolution; 2) raising the quantization precision reduces this loss but increases the data volume; 3) a uniform precision ignores the actual distribution of motion in the surveillance scene.
Summary of the invention
To address the above shortcomings of the prior art, the present invention proposes an adaptive quantization method for optical-flow features in complex video surveillance scenes. Based on local statistics of optical flow, it describes the validity and diversity of motion in the scene. Fusing the valid-pixel proportion with the motion-complexity feature characterizes the activeness of local motion and enables adaptive quantization of optical-flow feature positions; the motion-complexity feature alone characterizes the diversity of local motion and enables adaptive quantization of optical-flow feature directions. The adaptively quantized optical-flow features offer better discriminative power in the subsequent bag-of-words scene analysis.
The present invention is achieved through the following technical solution, comprising the following steps:
Step 1: perform probability denoising of the video space based on optical-flow features. The concrete steps are:
1.1) Count the number of optical-flow features produced at each spatial point (x, y) of the video space, and normalize:

P(x, y) = (1/N) · Σ_{i=1}^{N} Com(A_i(x, y), A_thr),  where Com(a, b) = 1 if a ≥ b, and 0 otherwise,

where P(x, y) denotes the optical-flow occurrence probability at spatial point (x, y); A_i(x, y) denotes the optical-flow amplitude at point (x, y) in frame i; A_thr denotes the optical-flow amplitude threshold, which can be determined jointly from the intensity of motion in the video scene and the camera's distance from the monitored scene; N denotes the total number of video frames participating in the statistics; and Com denotes the optical-flow amplitude comparison operator.
1.2) Compare the computed occurrence probability P(x, y) against thresholds to decide which kind of region the current point belongs to, as follows:
For a given maximum threshold Thr_max: when P(x, y) > Thr_max, the spatial point (x, y) belongs to the noise dynamic region (NDR);
For a given minimum threshold Thr_min: when P(x, y) < Thr_min, the spatial point (x, y) belongs to the static region (SR);
When Thr_min ≤ P(x, y) ≤ Thr_max, the spatial point (x, y) belongs to the active dynamic region (ADR).
1.3) Repeat steps 1.1) and 1.2) until all spatial points have been computed and classified.
1.4) Remove the optical-flow features occurring in the noise dynamic region and the static region.
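A minimal NumPy sketch of this first step, assuming `flows` is an (N, height, width, 2) array of per-frame flow vectors as above; names mirror the symbols in the text and the thresholds are passed in by the caller:

    import numpy as np

    def denoise_flow(flows, a_thr, thr_min, thr_max):
        mag = np.linalg.norm(flows, axis=-1)    # A_i(x, y): per-frame flow amplitude
        p = (mag >= a_thr).mean(axis=0)         # step 1.1: occurrence probability P(x, y)
        ndr = p > thr_max                       # noise dynamic region (NDR)
        sr = p < thr_min                        # static region (SR)
        adr = ~(ndr | sr)                       # active dynamic region (ADR)
        # step 1.4: zero out the flow everywhere outside the active dynamic region
        return flows * adr[None, :, :, None], adr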
Step 2: compute local statistical characteristics over the denoised video space, adaptively quantize the spatial positions, and divide the video space into a number of micro-regions. The concrete steps are:
2.1) Grid the video space into blocks of size H × H, where 2 ≤ H ≤ 64 and H is an integer.
2.2) For each block region of the video space, compute the valid-pixel proportion and the motion complexity, as follows:
Valid-pixel proportion: VPP = dim{PV} / dim{PA},
where PV denotes the moving pixels within the block; PA denotes all pixels within the block; and dim denotes the counting (statistical) operator.
Motion complexity: MCD = 1 − D_KL(ND ∥ U) / log M,
where ND denotes the normalized direction histogram; U denotes the uniform distribution; M denotes the histogram quantization order, 4 ≤ M ≤ 32, M a positive integer; and D_KL denotes the Kullback-Leibler (KL) distance between the normalized histogram and the uniform distribution.
The KL distance is computed as:

D_KL(ND ∥ U) = Σ_{i=1}^{M} ND_i · log(ND_i / U_i),

where ND_i is the i-th component of the normalized histogram and U_i is the i-th component of the uniform distribution, U_i = 1/M.
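An illustrative computation of both statistics for one block, assuming the block's flow vectors over all frames are given and that pixels with amplitude at or above A_thr count as moving; the normalization of MCD by log M is a reconstruction (the original formula is only available as an image in the source):

    import numpy as np

    def block_statistics(flow_block, a_thr, m=8):
        """VPP and MCD for one block; flow_block has shape (N, h, w, 2)."""
        mag = np.linalg.norm(flow_block, axis=-1)
        moving = mag >= a_thr
        vpp = moving.mean()                     # valid-pixel proportion (over all frames)
        if not moving.any():
            return vpp, 0.0                     # no motion at all: zero complexity
        ang = np.arctan2(flow_block[..., 1], flow_block[..., 0])[moving]
        hist, _ = np.histogram(ang, bins=m, range=(-np.pi, np.pi))
        nd = hist / hist.sum()                  # normalized direction histogram ND
        nz = nd[nd > 0]
        d_kl = np.sum(nz * np.log(nz * m))      # D_KL(ND || U), since U_i = 1/M
        mcd = 1.0 - d_kl / np.log(m)            # reconstructed normalization by log M
        return vpp, mcd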
2.3) From the valid-pixel proportion and the motion complexity, compute the block liveness AD = μ·VPP + (1 − μ)·MCD, where AD denotes the motion liveness of the block and μ is a mixing parameter, 0 ≤ μ ≤ 1; then further split the active blocks.
Splitting means: for a given liveness threshold Thr_AD, when AD ≥ Thr_AD and the current block has not reached the minimum size L × L (2 ≤ L ≤ 10, L < H, L a positive integer), the current block is divided spatially into four equal-sized regions; when AD < Thr_AD, the current block is inactive and is not split further.
2.4) Repeat steps 2.2) and 2.3) until no block can be split further and all blocks are inactive; the whole video space is thereby divided into L micro-regions, numbered 1 to L.
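A sketch of the recursive splitting in steps 2.3)-2.4) under the stated assumptions; `stats(y, x, size)` is a hypothetical helper returning (VPP, MCD) for the block at (y, x), for example via block_statistics above:

    def split_block(y0, x0, size, stats, thr_ad, mu, min_size):
        """Recursively split an active block into quadrants down to min_size."""
        vpp, mcd = stats(y0, x0, size)
        ad = mu * vpp + (1 - mu) * mcd                 # block liveness AD
        if ad >= thr_ad and size // 2 >= min_size:     # active and still divisible
            half = size // 2
            regions = []
            for dy in (0, half):
                for dx in (0, half):
                    regions += split_block(y0 + dy, x0 + dx, half, stats,
                                           thr_ad, mu, min_size)
            return regions
        return [(y0, x0, size)]                        # keep as a single micro-region

Applying split_block to every initial H × H cell and numbering the returned regions from 1 upward yields the micro-region partition of step 2.4).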
Step 3: filter each micro-region by the motion-complexity threshold to decide its direction quantization level, and generate the visual dictionary, achieving adaptive quantization.
Filtering means: when the motion complexity MCD_i of the i-th micro-region satisfies MCD_i ≥ Thr_MCD, where Thr_MCD is the motion-complexity threshold, the directions of the optical-flow features occurring in that micro-region are quantized into 8 bins; otherwise they are quantized into 4 bins.
The visual dictionary is defined as follows: each visual word is encoded in the form A.R.D, where A denotes the micro-region number, 1 ≤ A ≤ L; R denotes the direction quantization precision, R = 4 or R = 8; and D denotes the direction number, 1 ≤ D ≤ 8. The total number of visual words in the dictionary is

C_size = Σ_{i=1}^{L} R_i,

where C_size denotes the dictionary size and R_i denotes the direction quantization precision of the i-th block region.
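An illustrative construction of the visual dictionary from the micro-regions, with `mcd_of` a hypothetical callback giving each region's motion complexity; the word strings follow the A.R.D format of the text:

    def build_dictionary(regions, mcd_of, thr_mcd):
        """Emit A.R.D words per micro-region; c_size accumulates the sum of R_i."""
        words, c_size = [], 0
        for a, region in enumerate(regions, start=1):
            r = 8 if mcd_of(region) >= thr_mcd else 4   # direction precision R
            c_size += r
            words += [f"{a}.{r}.{d}" for d in range(1, r + 1)]
        return words, c_size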
Technical effects
Compared with the prior art, the technical effects of the present invention include: 1) effective denoising of the optical flow based on probabilistic statistics; 2) a visual dictionary that effectively represents the actual motion features; 3) strong discriminative power while maintaining a small visual dictionary.
Brief description of the drawings
Fig. 1 is a schematic diagram of the optical-flow feature adaptive quantization of the present invention.
Fig. 2 is a schematic diagram of the video surveillance scene.
Fig. 3 is a schematic diagram of the optical-flow feature adaptive quantization in the embodiment;
in the figure: (a) the original optical-flow features superimposed on the frame; (b) the denoised optical-flow features superimposed on the frame; (c) the optical-flow features after quantization.
Embodiments
The embodiments of the present invention are described in detail below. This embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation mode and concrete operating procedure are given, but the protection scope of the present invention is not limited to the following embodiment.
Embodiment 1
The video sequence used in this embodiment comes from the QMUL (Queen Mary University of London) traffic database, with a frame rate of 25 fps and a resolution of 360 × 288; Fig. 2 shows the video surveillance scene. The QMUL database originates from Queen Mary University of London and is designed specifically for the analysis of complex video surveillance scenes.
As shown in Figure 1, the present embodiment comprises following concrete steps:
Step 1: perform probability denoising of the optical-flow features. The concrete steps are:
1.1) Count the number of optical-flow features produced at each spatial point (x, y) of the video scene, and normalize:

P(x, y) = (1/N) · Σ_{i=1}^{N} Com(A_i(x, y), A_thr),  where Com(a, b) = 1 if a ≥ b, and 0 otherwise,

where P(x, y) denotes the optical-flow occurrence probability at spatial point (x, y); A_i(x, y) denotes the optical-flow amplitude at point (x, y) in frame i; A_thr denotes the optical-flow amplitude threshold (in this embodiment A_thr = 0.8); N denotes the total number of video frames (in this embodiment N = 12000); and Com denotes the optical-flow amplitude comparison operator.
1.2) Compare the computed occurrence probability P(x, y) against thresholds to decide which kind of region the current point belongs to, as follows:
For the given maximum threshold Thr_max (in this embodiment Thr_max = 0.7): when P(x, y) > Thr_max, the spatial point (x, y) belongs to the noise dynamic region (NDR);
For the given minimum threshold Thr_min (in this embodiment Thr_min = 0.01): when P(x, y) < Thr_min, the spatial point (x, y) belongs to the static region (SR);
When Thr_min ≤ P(x, y) ≤ Thr_max, the spatial point (x, y) belongs to the active dynamic region (ADR).
1.3) Repeat steps 1.1) and 1.2) until all spatial points have been computed and classified.
1.4) The noise dynamic region and the static region are considered useless for subsequent video analysis, so the optical-flow features occurring in these two kinds of regions are removed.
Step 2: extract local statistical characteristics from the denoised optical-flow features and adaptively quantize the spatial positions of the video space. The concrete steps are:
2.1) Grid the video space into blocks of size H × H, 2 ≤ H ≤ 64, H an integer. In this embodiment H = 64.
2.2) For each block region of the video space, compute the optical-flow statistics; the present invention defines the valid-pixel proportion as
VPP = dim{PV} / dim{PA},
where VPP denotes the valid-pixel proportion; PV denotes the moving pixels within the block; PA denotes all pixels within the block; and dim denotes the counting (statistical) operator.
2.3) For each block region of the video space, the present invention further defines the motion complexity. First, a direction histogram of the optical-flow features within the region is computed and normalized. To measure the complexity of the motion in the block, the present invention computes the Kullback-Leibler (KL) distance between the normalized histogram and the uniform distribution:
MCD = 1 − D_KL(ND ∥ U) / log M,
where MCD denotes the motion complexity of the block; ND denotes the normalized direction histogram; U denotes the uniform distribution; and M denotes the histogram quantization order, 4 ≤ M ≤ 32, M a positive integer; in this embodiment M = 8.
2.4) Fuse the two features computed in steps 2.2) and 2.3); the present invention defines the liveness as
AD = μ·VPP + (1 − μ)·MCD,
where AD denotes the motion liveness of the block and μ is a mixing parameter, 0 ≤ μ ≤ 1; in this embodiment μ = 0.6.
2.5) Decide whether the current block is active, as follows:
For the given liveness threshold Thr_AD (in this embodiment Thr_AD = 0.32): when AD ≥ Thr_AD and the current block has not reached the minimum size L × L (2 ≤ L ≤ 10, L < H, L a positive integer), the current block is divided spatially into four equal-sized regions; when AD < Thr_AD, the current block is inactive and is not split further. In this embodiment L = 4.
2.6) Repeat steps 2.2) to 2.5), extracting statistics block by block over the video space, until all blocks are judged inactive or have reached the minimum size. Finally the whole video space is divided into M block regions, numbered 1 to M. In this embodiment M = 147.
Step 3: to account for the diversity of motion, the present invention adaptively quantizes the direction of the optical-flow features based on motion complexity. The concrete steps are:
3.1) For each block region after position quantization, decide from its motion complexity whether the optical-flow directions are quantized into 4 or 8 bins, as follows:
For the given motion-complexity threshold Thr_MCD (in this embodiment Thr_MCD = 0.5): when MCD_i ≥ Thr_MCD, the directions of the optical-flow features in that block are quantized into 8 bins; otherwise into 4 bins.
3.2) Repeat step 3.1), performing direction quantization encoding block by block over the video space.
Step 4: after position and direction quantization are complete, a visual dictionary is obtained. Each visual word is encoded in the form A.R.D, where A denotes the block-region number, 1 ≤ A ≤ M, A an integer; R denotes the direction quantization precision, R = 4 or R = 8; and D denotes the direction number, 1 ≤ D ≤ 8, D an integer. The total number of visual words in the dictionary is

C_size = Σ_{i=1}^{M} R_i,

where C_size denotes the dictionary size and R_i denotes the direction quantization precision of the i-th block region.
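For concreteness, the embodiment's parameter values plugged into the illustrative helpers sketched earlier (the helpers are assumptions, not the patent's reference implementation); note that H = 64 does not divide the 360 × 288 frame evenly, so edge blocks are simply clipped by the array slicing here:

    A_THR, THR_MIN, THR_MAX = 0.8, 0.01, 0.7
    H, M_BINS, MU, THR_AD, L_MIN, THR_MCD = 64, 8, 0.6, 0.32, 4, 0.5

    flows_d, adr = denoise_flow(flows, A_THR, THR_MIN, THR_MAX)

    def stats(y0, x0, size):
        return block_statistics(flows_d[:, y0:y0 + size, x0:x0 + size],
                                A_THR, M_BINS)

    regions = []
    for y in range(0, 288, H):                  # 360 x 288 scene; edge blocks clipped
        for x in range(0, 360, H):
            regions += split_block(y, x, H, stats, THR_AD, MU, L_MIN)

    words, c_size = build_dictionary(
        regions, lambda reg: stats(reg[0], reg[1], reg[2])[1], THR_MCD)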
Experiments show that, compared with previous methods, this embodiment quantizes the optical-flow features effectively and builds a visual dictionary. Fig. 3(a) shows the original optical-flow features, Fig. 3(b) the denoised optical-flow features obtained by this embodiment, and Fig. 3(c) the optical-flow features after adaptive quantization by this embodiment. Table 1 compares the visual dictionary sizes of different quantization schemes. As can be seen, the result of this embodiment maintains strong discriminative power while keeping the dictionary small, which facilitates subsequent processing.
Table 1
Position quantization    4×4      8×8     16×16    32×32    64×64    Adaptive
Direction quantization   4        4       4        4        4        Adaptive
Dictionary size          25920    6480    1565     432      40       684

Claims (8)

1. An adaptive quantization method for optical-flow features in complex video surveillance scenes, characterized by comprising the following steps:
Step 1: performing probability denoising of the video space based on optical-flow features;
Step 2: computing local statistical characteristics over the denoised video space, adaptively quantizing the spatial positions, and dividing the video space into a number of micro-regions;
Step 3: filtering each micro-region by a motion-complexity threshold to decide its quantization level, and generating a visual dictionary, thereby achieving adaptive quantization.
2. The method according to claim 1, characterized in that step 1 specifically comprises the following steps:
1.1) counting the number of optical-flow features produced at each spatial point (x, y) of the video space, and normalizing:

P(x, y) = (1/N) · Σ_{i=1}^{N} Com(A_i(x, y), A_thr),  where Com(a, b) = 1 if a ≥ b, and 0 otherwise,

where P(x, y) denotes the optical-flow occurrence probability at spatial point (x, y); A_i(x, y) denotes the optical-flow amplitude at point (x, y) in frame i; A_thr denotes the optical-flow amplitude threshold, which can be determined jointly from the intensity of motion in the video scene and the camera's distance from the monitored scene; N denotes the total number of video frames participating in the statistics; and Com denotes the optical-flow amplitude comparison operator;
1.2) comparing the computed occurrence probability P(x, y) of every spatial point against thresholds in turn, and removing the optical-flow features occurring in the noise dynamic region and the static region.
3. The method according to claim 2, characterized in that the noise dynamic region refers to: for a given maximum threshold Thr_max, when P(x, y) > Thr_max, the spatial point (x, y) belongs to the noise dynamic region; and the static region refers to: for a given minimum threshold Thr_min, when P(x, y) < Thr_min, the spatial point (x, y) belongs to the static region.
4. The method according to claim 1, characterized in that step 2 specifically comprises the following steps:
2.1) gridding the video space into blocks of size H × H, 2 ≤ H ≤ 64, H an integer;
2.2) for each block region of the video space, computing the valid-pixel proportion VPP and the motion complexity MCD;
2.3) obtaining the block liveness AD = μ·VPP + (1 − μ)·MCD from the valid-pixel proportion and the motion complexity, where μ is a mixing parameter, 0 ≤ μ ≤ 1, and then further splitting the active blocks;
2.4) repeating steps 2.2) and 2.3) until no block can be split further and all blocks are inactive, whereby the whole video space is divided into L micro-regions, numbered 1 to L.
5. The method according to claim 4, characterized in that the valid-pixel proportion is
VPP = dim{PV} / dim{PA},
where PV denotes the moving pixels within the block; PA denotes all pixels within the block; and dim denotes the counting (statistical) operator;
and the motion complexity is
MCD = 1 − D_KL(ND ∥ U) / log M,
where ND denotes the normalized direction histogram; U denotes the uniform distribution; M denotes the histogram quantization order, 4 ≤ M ≤ 32, M a positive integer; and D_KL denotes the KL distance between the normalized histogram and the uniform distribution.
6. The method according to claim 4, characterized in that the splitting refers to: for a given liveness threshold Thr_AD, when AD ≥ Thr_AD and the current block has not reached the minimum size L × L, 2 ≤ L ≤ 10, L < H, L a positive integer, the current block is divided spatially into four equal-sized regions; when AD < Thr_AD, the current block is inactive and is not split further.
7. The method according to claim 1, characterized in that the filtering refers to: when the motion complexity MCD_i of the i-th micro-region satisfies MCD_i ≥ Thr_MCD, where Thr_MCD is the motion-complexity threshold, the directions of the optical-flow features occurring in that micro-region are quantized into 8 bins, and otherwise into 4 bins.
8. The method according to claim 1, characterized in that the visual dictionary refers to: each visual word is encoded in the form A.R.D, where A denotes the micro-region number, 1 ≤ A ≤ L; R denotes the direction quantization precision, R = 4 or R = 8; and D denotes the direction number, 1 ≤ D ≤ 8; and the total number of visual words in the dictionary is

C_size = Σ_{i=1}^{L} R_i,

where C_size denotes the dictionary size and R_i denotes the direction quantization precision of the i-th block region.
CN201410114805.9A 2014-03-25 2014-03-25 Method for adaptively quantizing optical flow features on complex video monitoring scenes Pending CN103871080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410114805.9A CN103871080A (en) 2014-03-25 2014-03-25 Method for adaptively quantizing optical flow features on complex video monitoring scenes

Publications (1)

Publication Number Publication Date
CN103871080A true CN103871080A (en) 2014-06-18

Family

ID=50909585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410114805.9A Pending CN103871080A (en) 2014-03-25 2014-03-25 Method for adaptively quantizing optical flow features on complex video monitoring scenes

Country Status (1)

Country Link
CN (1) CN103871080A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101039432A (en) * 2006-03-16 2007-09-19 华为技术有限公司 Method and apparatus for realizing self-adaptive quantization in coding process
CN101038771A (en) * 2006-03-18 2007-09-19 辽宁师范大学 Novel method of digital watermarking for protecting literary property of music works
CN101043621A (en) * 2006-06-05 2007-09-26 华为技术有限公司 Self-adaptive interpolation process method and coding/decoding module

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAWEN FAN ET AL: "Video Sensor-Based Complex Scene Analysis with Granger Causality", SENSORS *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472478A (en) * 2019-06-26 2019-11-19 南京邮电大学 A kind of scene analysis method and system based on optical flow field statistical nature
CN110472478B (en) * 2019-06-26 2022-09-20 南京邮电大学 Scene analysis method and system based on optical flow field statistical characteristics
CN110827313A (en) * 2019-09-19 2020-02-21 深圳云天励飞技术有限公司 Fast optical flow tracking method and related equipment
CN110827313B (en) * 2019-09-19 2023-03-03 深圳云天励飞技术股份有限公司 Fast optical flow tracking method and related equipment

Similar Documents

Publication Publication Date Title
CN109753885B (en) Target detection method and device and pedestrian detection method and system
CN107369159B (en) Threshold segmentation method based on multi-factor two-dimensional gray level histogram
CN110458172A (en) A kind of Weakly supervised image, semantic dividing method based on region contrast detection
CN106447674B (en) Background removing method for video
US20090060267A1 (en) Salience estimation for object-based visual attention model
CN110472634A (en) Change detecting method based on multiple dimensioned depth characteristic difference converged network
Gao et al. Synergizing appearance and motion with low rank representation for vehicle counting and traffic flow analysis
Filonenko et al. Real-time flood detection for video surveillance
Ma et al. Fusioncount: Efficient crowd counting via multiscale feature fusion
CN101237581B (en) H.264 compression domain real time video object division method based on motion feature
CN103871080A (en) Method for adaptively quantizing optical flow features on complex video monitoring scenes
CN105930789A (en) Human body behavior recognition based on logarithmic Euclidean space BOW (bag of words) model
CN104376311A (en) Face recognition method integrating kernel and Bayesian compressed sensing
Delibasoglu UAV images dataset for moving object detection from moving cameras
CN106022310B (en) Human body behavior identification method based on HTG-HOG and STG characteristics
CN116071374B (en) Lane line instance segmentation method and system
Kaltsa et al. Dynamic texture recognition and localization in machine vision for outdoor environments
CN110110665B (en) Detection method for hand area in driving environment
CN112149596A (en) Abnormal behavior detection method, terminal device and storage medium
CN109558819B (en) Depth network lightweight method for remote sensing image target detection
Jin et al. Fusing Canny operator with vibe algorithm for target detection
CN110472478B (en) Scene analysis method and system based on optical flow field statistical characteristics
Eleuch et al. A study on the impact of multiview distributed feature coding on a multicamera vehicle tracking system at roundabouts
Choi et al. Fast super-resolution algorithm using ELBP classifier
Ke et al. A Video Image Compression Method based on Visually Salient Features.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140618

RJ01 Rejection of invention patent application after publication