CN106887010A - Ground moving target detection method based on high-level scene information - Google Patents


Info

Publication number
CN106887010A
Authority
CN
China
Prior art keywords
target
formula
frame
optical flow
image
Prior art date
Legal status
Granted
Application number
CN201710023810.2A
Other languages
Chinese (zh)
Other versions
CN106887010B (en
Inventor
Yang Tao (杨涛)
Ren Qiang (任强)
Zhang Yanning (张艳宁)
Liu Xiaofei (刘小飞)
Duan Wencheng (段文成)
Current Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University, Shenzhen Institute of Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710023810.2A priority Critical patent/CN106887010B/en
Publication of CN106887010A publication Critical patent/CN106887010A/en
Application granted granted Critical
Publication of CN106887010B publication Critical patent/CN106887010B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a ground moving target detection method based on high-level scene information, which addresses the false-alarm problem of existing multi-target detection methods. The technical scheme first extracts preliminary detection results with a frame-difference method; it then computes the optical-flow vector of each point, predicts each current-frame target's position in the next frame by adding its optical-flow vector, and associates targets across frames, eliminating some false alarms; finally, it uses the high-level scene information encoded in the fundamental matrix F to distinguish moving points from background points, eliminating a large number of remaining false alarms.

Description

Ground moving target detection method based on high-level scene information
Technical field
The present invention relates to a multi-target detection method, and more particularly to a ground moving target detection method based on high-level scene information.
Background technology
Multi-target detection is a challenging task in computer vision. Most traditional motion detection is based on frame differencing. However, because real scenes are three-dimensional, frame differencing suffers from parallax, which causes a large number of false alarms. The document "Goyal H. Frame Differencing with Simulink model for Moving Object Detection [J]. International Journal of Advanced Research in Computer Engineering & Technology, 2013, 2(1)" discloses a multi-target detection method (the frame-difference method). The method assumes that the background of the scene is planar, so that after differencing, the parts rising above the ground plane are detected. Because the affine transformation used by this method does not account for the parallax caused by the three-dimensional structure of the scene, it produces a large number of false alarms and is not suitable for real three-dimensional scenes containing even a small amount of noise.
The content of the invention
To overcome the false-alarm problem of existing multi-target detection methods, the present invention provides a ground moving target detection method based on high-level scene information. The method first extracts preliminary detection results with a frame-difference method. It then computes the optical-flow vector of each point, predicts each current-frame target's position in the next frame by adding its optical-flow vector, and associates targets across frames, eliminating some false alarms. Finally, it uses the high-level scene information encoded in the fundamental matrix F to distinguish moving points from background points, removing a large number of remaining false alarms.
The technical solution adopted by the present invention to solve the technical problem is a ground moving target detection method based on high-level scene information, characterized by the following steps:
Step 1: frame differencing.
For scenes at different heights, different image registration algorithms are used. Video sequences shot from high altitude satisfy the three assumptions of sparse optical flow, so Lucas-Kanade sparse optical flow is used to match image feature points; images shot at low altitude do not satisfy the optical-flow assumptions, so Sobel operators are used to extract image feature points. After matching with sparse optical flow or Sobel features, RANSAC is used to estimate the affine transformation between the two images:

C'_p = A_{k-1} C_p,   C'_n = A_{k+1} C_n    (1)

In the formula, C_p and C_n are the pixel coordinates of feature points in the previous and next frames, C'_p and C'_n are the corresponding coordinates after transformation, and A_{k-1} and A_{k+1} are 2×3 affine transformation matrices. The previous and next frames are warped by their affine transformations and differenced against the current frame to obtain the preliminary detection result:
D_k = ||S_k - S'_{k-1}|| ∪ ||S_k - S'_{k+1}||    (2)
In the formula, D_k is the difference image, S'_{k-1} and S'_{k+1} are the previous and next frames after affine warping, and S_k is the current frame. Finally the difference image is binarized with a threshold of 40.
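As a concrete illustration, the differencing and binarization of Eq. (2) can be sketched in a few lines. This is a minimal NumPy sketch, not the patent's implementation: the function name is ours, and it assumes the previous and next frames have already been affine-warped into the current frame's coordinates per Eq. (1). The ∪ in Eq. (2) is taken literally as a per-pixel union of the two binarized differences.

```python
import numpy as np

def frame_difference(prev_warped, cur, next_warped, thresh=40):
    """Eq. (2): difference the current frame against the affine-warped
    previous and next frames, binarize each difference at `thresh`
    (the patent uses 40), and take their per-pixel union."""
    d_prev = np.abs(cur.astype(np.int32) - prev_warped.astype(np.int32))
    d_next = np.abs(cur.astype(np.int32) - next_warped.astype(np.int32))
    return ((d_prev > thresh) | (d_next > thresh)).astype(np.uint8)
```

A pixel survives the union if it differs strongly from either neighboring frame; the optical-flow association and fundamental-matrix checks of Steps 2-3 then prune the parallax-induced false alarms this preliminary mask contains.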
Step 2: optical-flow association.
Optical-flow estimation: classical optical-flow methods rest on the assumptions of brightness constancy, small pixel motion, and spatial consistency. In continuous video, the gray value of a pixel on an object is assumed not to change because of motion:
I(x, y, t) = I(x + dx, y + dy, t + dt)    (3)
In the formula, x and y are the horizontal and vertical coordinates, and I is the image gray value. A first-order Taylor expansion of the above gives:
I_x dx + I_y dy + I_t dt = 0    (4)
In the formula, I_x, I_y, and I_t are the gradients in the corresponding directions. In vector form:

[I_x  I_y] [u  v]^T = -I_t    (5)

In the formula, u and v are the optical-flow components in the corresponding directions. The above can be written as:
A d = b    (6)
The least-squares solution minimizing ||A d - b||^2 gives the optical-flow vector d:
d = (A^T A)^{-1} A^T b    (7)
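The least-squares solve of Eq. (7) for a single window can be sketched as follows. This is a NumPy sketch under the assumption that the spatial and temporal gradients of the window are already available; `np.linalg.lstsq` is used as the numerically safer equivalent of explicitly forming (A^T A)^{-1} A^T b.

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Solve Eq. (7) for one window: stack the per-pixel gradients into
    A (Eq. 6) and solve A d = b in the least-squares sense.
    Ix, Iy, It are the spatial/temporal gradients over the window."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # n x 2 gradient matrix
    b = -It.ravel()                                 # right-hand side of A d = b
    d, *_ = np.linalg.lstsq(A, b, rcond=None)       # minimizes ||A d - b||^2
    return d  # (u, v), the optical-flow vector of Eq. (7)
```

The solution exists when A^T A is invertible, i.e. when the window contains gradients in more than one direction, which is the spatial-consistency assumption at work.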
Prediction association: first, suppose a point has coordinates (x_{k-1}, y_{k-1}) in frame k-1. Using the optical-flow estimation strategy above, an optical-flow vector V is obtained, and the target's position in the next frame is predicted as:

x^e_k = x_{k-1} + V_x,   y^e_k = y_{k-1} + V_y    (8)

In the formula, (x^e_k, y^e_k) is the predicted coordinate and (V_x, V_y) is the optical-flow motion vector.
Second, for every target obtained by the first frame difference, its position in the next frame is predicted by optical flow. If enough of a target's points match some target in the next frame, the two are the same target. The decision function is defined as:

S_k(x, y) = 1 if (x, y) ∈ O^l_k; 0 otherwise    (9)

In the formula, O^l_k is a target of the second difference detection and S_k is the state function of a point. The matching confidence of a target is then computed:

α = Σ_{(x,y) ∈ O^l_k} S_k(x, y)    (10)
In the formula, α is the number of points belonging to the target, and the association probability of the two targets is:

α_ρ = α / β    (11)

In the formula, β is the total number of points in the target, and the acceptance threshold for associating two targets is set to ε = 0.8. If two targets O^l_{k-1} and O^l_k are associated, the association relation between them is defined as:

A(O^l_{k-1}, O^l_k) = 1 if α_ρ > ε; 0 otherwise    (12)
For each target, a relation set A = {A_m, ..., A_n} is defined, where A_m denotes an association relation of the form (12). A target is taken as a candidate target only when the number of elements in its relation set exceeds a given threshold.
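The association test of Eqs. (9)-(12) reduces to a point-overlap ratio check. The following is a minimal pure-Python sketch, not the patent's implementation: the function and argument names are illustrative, and targets are simplified to sets of pixel coordinates to which the optical-flow prediction of Eq. (8) is assumed to have already been applied.

```python
def associate(target_points, next_target_points, eps=0.8):
    """Eqs. (9)-(12): two detections are the same target when the
    fraction of the first target's (flow-predicted) points that fall
    inside the next-frame target exceeds eps (the patent uses 0.8)."""
    beta = len(target_points)                                          # Eq. (11) denominator
    alpha = sum(1 for p in target_points if p in next_target_points)   # Eqs. (9)-(10)
    alpha_rho = alpha / beta                                           # Eq. (11)
    return 1 if alpha_rho > eps else 0                                 # Eq. (12)
```

Targets that repeatedly associate across frames accumulate relations in their set A; transient parallax detections fail the ratio test and are dropped as false alarms.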
Step 3: motion detection based on high-level information.
Sobel operators are used to extract image feature points, and feature points are matched by shortest distance. Let x = (x, y) and x' = (x', y') be a pair of matched points in the two images; converting them to homogeneous vectors X = [x, y, 1]^T and X' = [x', y', 1]^T, they satisfy:
X'^T F X = 0    (13)
In the formula, F is the 3×3 fundamental matrix, obtained by solving a linear system with the normalized eight-point algorithm. In practice the matched feature points do not satisfy the above equation exactly, so the Sampson correction is used: the matching error classifies points as inliers or outliers. The Sampson confidence K is defined as:
K = (X'^T F X) / M    (14)

M = (FX)_1^2 + (FX)_2^2 + (F^T X')_1^2 + (F^T X')_2^2    (15)

In the formula, (FX)_1 = f_11 x + f_12 y + f_13, where (x, y) is the pixel coordinate of X. (FX)_2, (F^T X')_1, and (F^T X')_2 are determined analogously, which determines the Sampson confidence of each point.
An outlier matrix Out_{i,j} of size H×W, where H and W are the image height and width, is defined as:

Out_{i,j} = 1 if (i, j) is an outlier; 0 otherwise    (16)
The outlier ratio of each candidate target is computed as:

σ = Σ_{(i,j) ∈ O^l_k} Out_{i,j} / N^l_k    (17)

In the formula, O^l_k is a candidate target and N^l_k is the total number of its points. The moving-target decision function M is defined as:

M(O^l_k) = 1 if σ > η; 0 otherwise    (18)
In the formula, η is the outlier probability threshold: a candidate is declared a moving target only when its outlier ratio exceeds η.
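Eqs. (14)-(18) can be sketched as follows. This is a NumPy sketch under stated assumptions: the example value of η and the representation of a candidate as a list of per-point outlier flags are ours, not the patent's; note also that the standard Sampson distance squares the numerator, whereas the patent states Eq. (14) unsquared, which is what is followed here. Points consistent with the epipolar geometry of the static background yield K ≈ 0; independently moving points yield a large |K| and are flagged as outliers.

```python
import numpy as np

def sampson_confidence(F, X, Xp):
    """Eqs. (14)-(15): Sampson confidence of one correspondence.
    F is the 3x3 fundamental matrix; X, Xp are homogeneous [x, y, 1]."""
    FX = F @ X
    FtXp = F.T @ Xp
    M = FX[0]**2 + FX[1]**2 + FtXp[0]**2 + FtXp[1]**2   # Eq. (15)
    return float(Xp @ F @ X) / M                         # Eq. (14)

def is_moving(outlier_flags, eta=0.5):
    """Eqs. (17)-(18): a candidate target (given as its per-point
    outlier flags, Eq. 16) is moving when the outlier ratio sigma
    exceeds eta (0.5 here is an illustrative threshold)."""
    sigma = sum(outlier_flags) / len(outlier_flags)      # Eq. (17)
    return 1 if sigma > eta else 0                       # Eq. (18)
```

For a camera translating along x, F is the skew matrix of t = (1, 0, 0): a point shifted only along x is consistent with the background motion (K = 0), while a vertically displaced point is not.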
The beneficial effects of the invention are as follows: the method first extracts preliminary detection results with a frame-difference method; it then computes the optical-flow vector of each point, predicts each current-frame target's position in the next frame by adding its optical-flow vector, and associates targets across frames, eliminating some false alarms; finally, it uses the high-level scene information encoded in the fundamental matrix F to distinguish moving points from background points, removing a large number of remaining false alarms.
The present invention is described in detail below with reference to a specific embodiment.
Specific embodiment
The ground moving target detection method based on high-level scene information is implemented exactly as Steps 1-3 above: (1) frame differencing, with registration by Lucas-Kanade sparse optical flow or Sobel feature matching and RANSAC affine estimation, followed by binarization at threshold 40; (2) optical-flow association, comprising optical-flow estimation and prediction association with acceptance threshold ε = 0.8; (3) motion detection based on high-level information, using the fundamental matrix F, the Sampson confidence, and the outlier-ratio threshold η.

Claims (1)

1. A ground moving target detection method based on high-level scene information, characterized by comprising the following steps:
Step 1: frame differencing;
For scenes at different heights, different image registration algorithms are used; video sequences shot from high altitude satisfy the three assumptions of sparse optical flow, so Lucas-Kanade sparse optical flow is used to match image feature points; images shot at low altitude do not satisfy the optical-flow assumptions, so Sobel operators are used to extract image feature points; after matching with sparse optical flow or Sobel features, RANSAC is used to estimate the affine transformation between the two images:

C'_p = A_{k-1} C_p,   C'_n = A_{k+1} C_n    (1)

In the formula, C_p and C_n are the pixel coordinates of feature points in the previous and next frames, C'_p and C'_n are the corresponding coordinates after transformation, and A_{k-1} and A_{k+1} are 2×3 affine transformation matrices; the previous and next frames are warped by their affine transformations and differenced against the current frame to obtain the preliminary detection result:
D_k = ||S_k - S'_{k-1}|| ∪ ||S_k - S'_{k+1}||    (2)
In the formula, D_k is the difference image, S'_{k-1} and S'_{k+1} are the previous and next frames after affine warping, and S_k is the current frame; finally the difference image is binarized with a threshold of 40;
Step 2: optical-flow association;
Optical-flow estimation: classical optical-flow methods rest on the assumptions of brightness constancy, small pixel motion, and spatial consistency; in continuous video, the gray value of a pixel on an object is assumed not to change because of motion:

I(x, y, t) = I(x + dx, y + dy, t + dt)    (3)

In the formula, x and y are the horizontal and vertical coordinates, and I is the image gray value; a first-order Taylor expansion of the above gives:

I_x dx + I_y dy + I_t dt = 0    (4)

In the formula, I_x, I_y, and I_t are the gradients in the corresponding directions; in vector form:

[I_x  I_y] [u  v]^T = -I_t    (5)
In the formula, u and v are the optical-flow components in the corresponding directions; the above can be written as:

A d = b    (6)

The least-squares solution minimizing ||A d - b||^2 gives the optical-flow vector d:

d = (A^T A)^{-1} A^T b    (7)
Prediction association: first, suppose a point has coordinates (x_{k-1}, y_{k-1}) in frame k-1; using the optical-flow estimation strategy above, an optical-flow vector V is obtained, and the target's position in the next frame is predicted as:

x^e_k = x_{k-1} + V_x,   y^e_k = y_{k-1} + V_y    (8)

In the formula, (x^e_k, y^e_k) is the predicted coordinate and (V_x, V_y) is the optical-flow motion vector;
Second, for every target obtained by the first frame difference, its position in the next frame is predicted by optical flow; if enough of a target's points match some target in the next frame, the two are the same target, with decision function:

S_k(x, y) = 1 if (x, y) ∈ O^l_k; 0 otherwise    (9)

In the formula, O^l_k is a target of the second difference detection and S_k is the state function of a point; the matching confidence of a target is then computed:

α = Σ_{(x,y) ∈ O^l_k} S_k(x, y)    (10)
In the formula, α is the number of points belonging to the target, and the association probability of the two targets is:

α_ρ = α / β    (11)

In the formula, β is the total number of points in the target, and the acceptance threshold for associating two targets is set to ε = 0.8; if two targets O^l_{k-1} and O^l_k are associated, the association relation between them is defined as:

A(O^l_{k-1}, O^l_k) = 1 if α_ρ > ε; 0 otherwise    (12)
For each target, a relation set A = {A_m, ..., A_n} is defined, where A_m denotes an association relation of the form (12); a target is taken as a candidate target only when the number of elements in its relation set exceeds a given threshold;
Step 3: motion detection based on high-level information;
Sobel operators are used to extract image feature points, and feature points are matched by shortest distance; let x = (x, y) and x' = (x', y') be a pair of matched points in the two images; converting them to homogeneous vectors X = [x, y, 1]^T and X' = [x', y', 1]^T, they satisfy:

X'^T F X = 0    (13)

In the formula, F is the 3×3 fundamental matrix, obtained by solving a linear system with the normalized eight-point algorithm; in practice the matched feature points do not satisfy the above equation exactly, so the Sampson correction is used, classifying points as inliers or outliers by the matching error; the Sampson confidence K is defined as:

K = (X'^T F X) / M    (14)

M = (FX)_1^2 + (FX)_2^2 + (F^T X')_1^2 + (F^T X')_2^2    (15)
In the formula, (FX)_1 = f_11 x + f_12 y + f_13, where (x, y) is the pixel coordinate of X; (FX)_2, (F^T X')_1, and (F^T X')_2 are determined analogously, which determines the Sampson confidence of each point; an outlier matrix Out_{i,j} of size H×W, where H and W are the image height and width, is defined as:

Out_{i,j} = 1 if (i, j) is an outlier; 0 otherwise    (16)
The outlier ratio of each candidate target is computed as:

σ = Σ_{(i,j) ∈ O^l_k} Out_{i,j} / N^l_k    (17)
In the formula, O^l_k is a candidate target and N^l_k is the total number of its points; the moving-target decision function M is defined as:

M(O^l_k) = 1 if σ > η; 0 otherwise    (18)
In the formula, η is the outlier probability threshold: a candidate is declared a moving target only when its outlier ratio exceeds η.
CN201710023810.2A 2017-01-13 2017-01-13 Ground moving target detection method based on high-level scene information Active CN106887010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710023810.2A CN106887010B (en) 2017-01-13 2017-01-13 Ground moving target detection method based on high-level scene information


Publications (2)

Publication Number Publication Date
CN106887010A true CN106887010A (en) 2017-06-23
CN106887010B CN106887010B (en) 2019-09-24

Family

ID=59176289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710023810.2A Active CN106887010B (en) 2017-01-13 2017-01-13 Ground moving target detection method based on high-level scene information

Country Status (1)

Country Link
CN (1) CN106887010B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101179713A (en) * 2007-11-02 2008-05-14 北京工业大学 Method of detecting single moving target under complex background
CN101908214A (en) * 2010-08-10 2010-12-08 长安大学 Moving object detection method with background reconstruction based on neighborhood correlation
CN102307274A (en) * 2011-08-31 2012-01-04 南京南自信息技术有限公司 Motion detection method based on edge detection and frame difference
CN103679172A (en) * 2013-10-10 2014-03-26 南京理工大学 Method for detecting long-distance ground moving object via rotary infrared detector
CN105761279A (en) * 2016-02-18 2016-07-13 西北工业大学 Method for tracking object based on track segmenting and splicing


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472824A (en) * 2017-09-07 2019-03-15 北京京东尚科信息技术有限公司 Article position change detecting method and device, storage medium, electronic equipment
CN108830885A (en) * 2018-05-31 2018-11-16 北京空间飞行器总体设计部 One kind being based on the relevant detection false alarm rejection method of multidirections difference residual energy
CN108830885B (en) * 2018-05-31 2021-12-07 北京空间飞行器总体设计部 Detection false alarm suppression method based on multi-directional differential residual energy correlation
CN109087322A (en) * 2018-07-18 2018-12-25 华中科技大学 A kind of Moving small targets detection method of Aerial Images
CN109087322B (en) * 2018-07-18 2021-07-27 华中科技大学 Method for detecting small moving target of aerial image
CN109035306A (en) * 2018-09-12 2018-12-18 首都师范大学 Moving-target automatic testing method and device
CN109035306B (en) * 2018-09-12 2020-12-15 首都师范大学 Moving target automatic detection method and device
CN109740558A (en) * 2019-01-10 2019-05-10 吉林大学 A kind of Detection of Moving Objects based on improvement optical flow method
CN109740558B (en) * 2019-01-10 2022-11-18 吉林大学 Moving target detection method based on improved optical flow method
CN110555868A (en) * 2019-05-31 2019-12-10 南京航空航天大学 method for detecting small moving target under complex ground background
CN111950484A (en) * 2020-08-18 2020-11-17 青岛聚好联科技有限公司 High-altitude parabolic information analysis method and electronic equipment

Also Published As

Publication number Publication date
CN106887010B (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN106887010B (en) Ground moving target detection method based on high-level scene information
WO2020173226A1 (en) Spatial-temporal behavior detection method
US11763485B1 (en) Deep learning based robot target recognition and motion detection method, storage medium and apparatus
CN102741884B (en) Moving body detecting device and moving body detection method
CN109791603B (en) Method for capturing objects in an environment region of a motor vehicle by predicting the movement of the objects, camera system and motor vehicle
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
CN109064484B (en) Crowd movement behavior identification method based on fusion of subgroup component division and momentum characteristics
CN110490928A (en) A kind of camera Attitude estimation method based on deep neural network
CN104680559B (en) The indoor pedestrian tracting method of various visual angles based on motor behavior pattern
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
CN109299643B (en) Face recognition method and system based on large-posture alignment
CN107590438A (en) A kind of intelligent auxiliary driving method and system
CN111832413B (en) People flow density map estimation, positioning and tracking method based on space-time multi-scale network
CN106529419B (en) The object automatic testing method of saliency stacking-type polymerization
CN109389086A (en) Detect the method and system of unmanned plane silhouette target
CN102156985A (en) Method for counting pedestrians and vehicles based on virtual gate
CN112633220A (en) Human body posture estimation method based on bidirectional serialization modeling
CN105488811A (en) Depth gradient-based target tracking method and system
CN111797688A (en) Visual SLAM method based on optical flow and semantic segmentation
CN104517095A (en) Head division method based on depth image
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN106210447A (en) Video image stabilization method based on background characteristics Point matching
CN106204637A (en) Optical flow computation method
Lohdefink et al. Self-supervised domain mismatch estimation for autonomous perception
CN113077505A (en) Optimization method of monocular depth estimation network based on contrast learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant