CN102592290A - Method for detecting moving target region aiming at underwater microscopic video

Method for detecting moving target region aiming at underwater microscopic video

Info

Publication number
CN102592290A
CN102592290A
Authority
CN
China
Prior art keywords
feature point
video
SIFT feature
SIFT
background model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100351846A
Other languages
Chinese (zh)
Inventor
陈耀武
罗雷
杨帅帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2012100351846A priority Critical patent/CN102592290A/en
Publication of CN102592290A publication Critical patent/CN102592290A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting moving target regions in underwater microscopic video. The method comprises the following steps: (1) acquiring the microscopic video and generating scale-invariant feature transform (SIFT) feature point vectors; (2) reducing the dimensionality of the SIFT feature point vectors; (3) establishing an initialized Gaussian mixture background model; and (4) matching the low-dimensional SIFT feature point vectors one by one, updating the Gaussian mixture background model, and segmenting the foreground region from the image. The method generates the SIFT feature point vectors of the video image with the SIFT algorithm, builds a background model over the SIFT feature point vector domain with a Gaussian mixture model, and segments the target organisms in the video image by matching the current video against the background model in the SIFT feature point vector domain. Interference from the motion of non-target objects is thereby effectively eliminated, and the detection accuracy for target organisms is improved.

Description

Method for detecting moving target regions in underwater microscopic video
Technical field
The invention belongs to the technical field of video image processing and specifically relates to a method for detecting moving target regions in underwater microscopic video.
Background art
An underwater intelligent biological recognition system uses microscopic video images to find target organisms automatically and to perform tasks such as identifying and tracking them; a key technology in such a system is the detection of moving target organisms. Motion detection, an important component of moving target recognition, extracts regions of change in the video (which may be target organisms) from the background image. However, influences such as illumination, shadows and clutter cause dynamic changes in the background image, which makes accurate detection of target organisms rather difficult.
Commonly used object detection methods at present include temporal differencing (Temporal Difference), optical flow (Optical Flow) and background modeling (Background Modeling). Background modeling is the most widely used: its basic principle is to generate a background image for the current frame from a background model and to detect targets from the difference between the current frame and the background image. Background modeling can recover the overall contour of the detected target, but the background model must be updated continuously to adapt to dynamic changes of the background environment, for example changes in illumination or background objects swaying with the wind or the current, so as to reduce the influence of scene changes on target segmentation. Existing background modeling methods mainly include temporal median filtering, incremental Gaussian averaging, Gaussian mixture models, kernel density estimation, sequential kernel density approximation, eigen-background modeling, and so on. Among these, the adaptive Gaussian mixture model (Adaptive Gaussian Mixture Model) separates moving targets well from minor disturbances such as shadows and surface glints, yields fairly satisfactory target contours, and is generally regarded as the most robust and durable background modeling algorithm. Its concrete steps are generally:
(1) Starting from time t, model the history of each pixel value on the time axis with a K-component Gaussian mixture model, giving the distribution:
P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \cdot \eta(X_t, \mu_{i,t}, \Sigma_{i,t})
(2) When the next frame is read, match each new pixel value against the existing K Gaussian distributions and update the background model automatically according to the matching result.
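For reference, this conventional pixel-domain procedure can be illustrated with a short sketch using OpenCV's MOG2 background subtractor, which implements the adaptive Gaussian mixture model; the video file name and parameter values are illustrative assumptions only, not part of the prior art described here.

```python
import cv2

# Adaptive Gaussian mixture (MOG2) background subtraction in the pixel domain.
cap = cv2.VideoCapture("underwater_microscopy.avi")  # illustrative file name
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each pixel is matched against its K Gaussian components; the model is
    # updated automatically and non-matching pixels are returned as foreground
    # (value 255 in the mask).
    fg_mask = mog2.apply(frame)

cap.release()
```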
By processing the image in the pixel domain, Gaussian-mixture background modeling can obtain fairly clear target-region contours in video whose background varies only slightly. For underwater microscopic video, however, besides the illumination changes caused by water-level fluctuation, the background also contains a large number of non-target objects, such as sand grains, biological remains or fragments, and excreta of aquatic organisms, which drift aimlessly in the water under the influence of the current. With the traditional pixel-domain Gaussian-mixture background modeling method, these drifting non-target objects in the background are also segmented as target organisms, which severely disturbs subsequent target organism recognition.
Summary of the invention
In view of the above technical deficiencies of the prior art, the invention provides a method for detecting moving target regions in underwater microscopic video, which can effectively eliminate the interference of non-target object motion with the detection.
A method for detecting moving target regions in underwater microscopic video comprises the following steps:
(1) Acquire the microscopic video and compute the set of SIFT (scale-invariant feature transform) feature point vectors of every frame of the microscopic video;
The set of SIFT feature point vectors contains several SIFT feature point vectors; the microscopic video is divided into a background video and a video to be detected;
If a target organism first appears in a certain frame of the microscopic video, then all frames before that frame form the background video, and all frames from that frame onward (including that frame) form the video to be detected.
(2) Reduce the dimensionality of the SIFT feature point vectors to obtain low-dimensional SIFT feature point vectors;
(3) Model the first frame of the microscopic video with a Gaussian mixture model to obtain an initialized Gaussian mixture background model;
The Gaussian mixture background model has k Gaussian distributions, where k equals 3, 4 or 5;
(4) Starting from the second frame of the microscopic video, match the low-dimensional SIFT feature point vectors of every frame one by one and update the Gaussian mixture background model according to the matching results; then segment the foreground region (the target organism region) from every frame of the video to be detected.
The formula of the initialized Gaussian mixture background model is:
P(X) = \sum_{j=1}^{k} \omega_j \cdot \eta(X, \mu_j, \Sigma_j)
where k is the number of Gaussian distributions in the Gaussian mixture background model, X is a SIFT feature point vector of the first frame, ω_j is the weight coefficient of the j-th Gaussian distribution in the Gaussian mixture of the first frame, μ_j and Σ_j are respectively the mean vector and covariance matrix of the j-th Gaussian distribution, and η is the Gaussian probability density function.
In step (4), the process of matching a low-dimensional SIFT feature point vector and updating the Gaussian mixture background model is: match the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating, and then update the Gaussian mixture background model according to the matching result; the first matching uses the initialized Gaussian mixture background model.
In step (4), the process of segmenting the foreground region from every frame of the video to be detected is:
1) Match the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating; the current low-dimensional SIFT feature point vector to be matched is a low-dimensional SIFT feature point vector of a frame of the video to be detected;
2) Update the Gaussian mixture background model according to the matching result, and classify the pixel corresponding to this low-dimensional SIFT feature point vector: judge whether the vector fails to match all k Gaussian distributions of the Gaussian mixture background model obtained by the previous matching and updating; if so, the pixel corresponding to this low-dimensional SIFT feature point vector belongs to the foreground region; if not, it belongs to the background region;
3) Repeat steps 1) and 2) to traverse every low-dimensional SIFT feature point vector of every frame of the video to be detected.
The beneficial effects of the invention are:
(1) The invention does not need to compute a best background description from the Gaussian mixture model, which reduces the amount of computation.
(2) The invention applies dimensionality reduction to the 128-dimensional feature point vectors generated by the SIFT algorithm, which further reduces the amount of computation.
(3) The invention effectively excludes the motion interference of non-target objects in the background and improves the accuracy of target organism detection.
(4) The invention effectively detects target organisms under complex background, tilt, deformation, dirt, partial occlusion and illumination changes.
(5) The detection process of the invention has a short running time and is well suited to on-line analysis of underwater microscopic video.
Description of drawings
Fig. 1 is a flowchart of the detection method of the invention.
Fig. 2 is a schematic diagram of the Gaussian distribution model of the SIFT feature vectors at background points of the video image.
Fig. 3 is a schematic diagram of the SIFT feature vectors of the foreground region of the video image.
Fig. 4(a) is a frame after region segmentation by an existing background modeling detection method.
Fig. 4(b) is a frame after region segmentation by the detection method of the invention.
Embodiment
In order to describe the invention more concretely, the detection method of the invention is explained in detail below with reference to the accompanying drawings and an embodiment.
As shown in Fig. 1, a method for detecting moving target regions in underwater microscopic video comprises the following steps:
(1) Acquire the microscopic video and compute, with the SIFT algorithm, the set of SIFT feature point vectors of every frame of the microscopic video; the set of SIFT feature point vectors contains several SIFT feature point vectors;
The microscopic video is a video of a microscopically observed body of water, divided into a background video and a video to be detected; if a target organism first appears in a certain frame of the microscopic video, then all frames before that frame form the background video, and all frames from that frame onward (including that frame) form the video to be detected.
The SIFT algorithm is the scale-invariant feature transform algorithm: the SIFT operator, proposed by David G. Lowe in 2004 on the basis of existing invariant-based feature detection methods, is a scale-space-based local image descriptor that remains stable under image scaling, rotation, affine transformation and illumination changes. Generation of the SIFT feature point vectors comprises four steps: 1. detect extreme points in scale space; 2. remove low-contrast extreme points and unstable edge extreme points to obtain the feature points; 3. compute the orientation parameter of each feature point; 4. generate the SIFT feature point vectors, whose dimension is generally 128.
The SIFT feature point vectors extracted by the SIFT algorithm have the following advantages: 1. SIFT features are local image features that are invariant to rotation, scaling and brightness changes and remain stable to a certain degree under viewpoint changes, affine transformation and noise; 2. they are highly distinctive and information-rich, and are suitable for fast and accurate matching in large feature databases; 3. they are plentiful: even a few objects can produce a large number of SIFT feature point vectors.
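A minimal sketch of step (1) is given below. It uses OpenCV's SIFT implementation purely as one possible realization (the patent itself does not name a library), and the function name is illustrative.

```python
import cv2

def sift_feature_vectors(frame):
    """Return keypoints and 128-D SIFT feature point vectors for one video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # descriptors has shape (N, 128): one 128-dimensional SIFT vector per keypoint
    return keypoints, descriptors
```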
(2) Reduce the dimensionality of the SIFT feature point vectors to obtain low-dimensional SIFT feature point vectors; the SIFT feature point vectors are 128-dimensional, and the low-dimensional SIFT feature point vectors after reduction are 8-dimensional, which effectively reduces the amount of computation.
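A possible sketch of step (2) follows. The patent does not state which dimensionality-reduction technique is used, so PCA is shown here only as a plausible stand-in (an assumption); the 8-dimensional target size is taken from the text.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_to_low_dim(descriptors_128d, n_components=8):
    """Project 128-D SIFT descriptors onto 8 dimensions (PCA assumed, not specified by the patent)."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(np.asarray(descriptors_128d, dtype=np.float64))
```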
(3) Model the first frame of the microscopic video with a Gaussian mixture model to obtain an initialized Gaussian mixture background model;
The formula of the initialized Gaussian mixture background model is:
P(X) = \sum_{j=1}^{k} \omega_j \cdot \eta(X, \mu_j, \Sigma_j)
where k is the number of Gaussian distributions in the Gaussian mixture background model (k = 4 in this embodiment), X is a SIFT feature point vector of the first frame, ω_j is the weight coefficient of the j-th Gaussian distribution in the Gaussian mixture of the first frame (the initial weight coefficients of the 4 Gaussian distributions are all 1 in this embodiment), μ_j and Σ_j are respectively the mean vector and covariance matrix of the j-th Gaussian distribution, and η is the Gaussian probability density function.
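The initialization of step (3) can be sketched as follows. The per-feature data layout (weights, means, diagonal variances) is my own assumption chosen to mirror the formula above; k = 4 and the initial weights of 1 follow the embodiment, and using V0 = 16 as the initial diagonal variance is an assumption borrowed from the replacement rule given later.

```python
import numpy as np

K = 4            # number of Gaussian components in this embodiment
INIT_VAR = 16.0  # assumed initial diagonal variance (V0 from the embodiment)

def init_background_model(first_frame_vectors):
    """Build one k-component mixture per low-dimensional SIFT vector of frame 1."""
    model = []
    for x in np.asarray(first_frame_vectors, dtype=float):
        model.append({
            "w":   np.ones(K),                      # initial weight coefficients set to 1
            "mu":  np.tile(x, (K, 1)),              # every component starts at the observed vector
            "var": np.full((K, x.size), INIT_VAR),  # diagonal covariance V0 * I
        })
    return model
```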
(4) Starting from the second frame of the microscopic video, match the low-dimensional SIFT feature point vectors of every frame one by one and update the Gaussian mixture background model according to the matching results; Fig. 2 shows the Gaussian distribution model of the SIFT feature vectors at background points of the video image.
Then segment the foreground region (the target organism region) from every frame of the video to be detected; Fig. 3 shows the SIFT feature vectors of the foreground region of the video image.
The process of matching a low-dimensional SIFT feature point vector and updating the Gaussian mixture background model is: match the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating, and then update the Gaussian mixture background model according to the matching result; the first matching uses the initialized Gaussian mixture background model.
The process of segmenting the foreground region from every frame of the video to be detected is (a minimal code sketch of the foreground decision is given after this list):
1) Match the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating; the current low-dimensional SIFT feature point vector to be matched is a low-dimensional SIFT feature point vector of a frame of the video to be detected;
2) Update the Gaussian mixture background model according to the matching result, and classify the pixel corresponding to this low-dimensional SIFT feature point vector: judge whether the vector fails to match all k Gaussian distributions of the Gaussian mixture background model obtained by the previous matching and updating; if so, the pixel corresponding to this low-dimensional SIFT feature point vector belongs to the foreground region; if not, it belongs to the background region;
3) Repeat steps 1) and 2) to traverse every low-dimensional SIFT feature point vector of every frame of the video to be detected.
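The following sketch illustrates the foreground decision in steps 1)-3). The 2.5-sigma matching threshold is a common convention for mixture-of-Gaussians matching and is an assumption, not a value given in the patent; the data layout is the one assumed in the initialization sketch above.

```python
import numpy as np

def is_foreground(x, comp, n_sigma=2.5):
    """True if low-dimensional SIFT vector x matches none of the k Gaussian components."""
    for mu, var in zip(comp["mu"], comp["var"]):
        # per-dimension distance normalized by the standard deviation
        if np.all(np.abs(x - mu) / np.sqrt(var) < n_sigma):
            return False   # matches a background component -> background region
    return True            # no match -> pixel belongs to the foreground region
```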
The method of updating the Gaussian mixture background model according to the matching result is:
If the current SIFT feature point vector matches at least one Gaussian distribution in the Gaussian mixture background model, then the means μ and covariances Σ of the unmatched Gaussian distributions remain unchanged, while the mean μ and covariance Σ of each matched Gaussian distribution are updated with the following formulas:
\mu_j = (1 - \rho) \cdot \mu_{j-1} + \rho \cdot X
\Sigma_j = (1 - \rho) \cdot \Sigma_{j-1} + \rho \cdot \mathrm{diag}\left[ (X - \mu_j)^T (X - \mu_j) \right]
\rho = \alpha \cdot \eta(X \mid \mu_{j-1}, \Sigma_{j-1})
where μ_{j-1} and Σ_{j-1} are the mean vector and covariance matrix, before the update, of the Gaussian distribution matched by the SIFT feature point vector, X is the current SIFT feature point vector, and α is the learning-rate parameter; in this embodiment α = 0.5.
If the current SIFT feature point vector matches none of the Gaussian distributions in the Gaussian mixture background model, then the weight coefficient ω, mean μ and covariance Σ of the least-matching Gaussian distribution (the Gaussian distribution with the smallest weight coefficient) are first updated with the following formula:
\omega_j = W_0, \quad \mu_j = X, \quad \Sigma_j = V_0 \cdot I
where W_0 and V_0 are empirical values, X is the current SIFT feature point vector, and I is the 8 × 8 identity matrix; in this embodiment W_0 = 0.1 and V_0 = 16.
Then the weight coefficients ω of all Gaussian distributions in the Gaussian mixture background model are updated with the following formula:
\omega_j^{*} = (1 - \alpha) \cdot \omega_j + \alpha \cdot M_j, \quad M_j = \omega_j \Big/ \sum_{j=1}^{k} \omega_j
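The update rules above can be put together in a short sketch, using the same data layout as the earlier initialization sketch. The matching threshold and the 1/0 match indicator used in the weight update are standard mixture-of-Gaussians conventions and are assumptions, i.e. one reading of the formulas rather than the patent's own code.

```python
import numpy as np

ALPHA = 0.5          # learning rate alpha from the embodiment
W0, V0 = 0.1, 16.0   # empirical weight and variance for a replaced component
N_SIGMA = 2.5        # matching threshold (assumed)

def gaussian_pdf(x, mu, var):
    """Diagonal-covariance Gaussian density eta(x | mu, Sigma)."""
    return np.exp(-0.5 * np.sum((x - mu) ** 2 / var)) / np.prod(np.sqrt(2.0 * np.pi * var))

def update_mixture(x, comp):
    """Match x against the k components and update weights, means and variances."""
    matched_any = False
    for j in range(len(comp["w"])):
        mu, var = comp["mu"][j], comp["var"][j]
        if np.all(np.abs(x - mu) / np.sqrt(var) < N_SIGMA):
            matched_any = True
            rho = ALPHA * gaussian_pdf(x, mu, var)      # rho = alpha * eta(X | mu, Sigma), pre-update values
            comp["mu"][j]  = (1.0 - rho) * mu + rho * x
            comp["var"][j] = (1.0 - rho) * var + rho * (x - comp["mu"][j]) ** 2
            comp["w"][j]   = (1.0 - ALPHA) * comp["w"][j] + ALPHA   # match indicator 1 (assumed)
        else:
            comp["w"][j]   = (1.0 - ALPHA) * comp["w"][j]           # match indicator 0 (assumed)
    if not matched_any:
        j = int(np.argmin(comp["w"]))                   # replace the least-weighted component
        comp["w"][j]   = W0
        comp["mu"][j]  = x.astype(float)
        comp["var"][j] = np.full(x.size, V0)
    comp["w"] /= comp["w"].sum()                        # keep the weights normalized
    return matched_any
```

In a full loop, update_mixture would be called for every low-dimensional SIFT vector of every frame from the second frame onward, with the foreground decision from the earlier sketch applied to the frames of the video to be detected.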
As shown in Fig. 4(a), the target detection result under the conventional pixel-domain Gaussian mixture background model contains a large amount of misjudged information and easily detects impurities moving under water as target organisms; the target detection of this embodiment under the SIFT-vector-domain Gaussian mixture background model rejects the interference of moving impurity objects under water and detects only the target organisms, as shown in Fig. 4(b).

Claims (4)

1. A method for detecting moving target regions in underwater microscopic video, comprising the following steps:
(1) acquiring the microscopic video and computing the set of SIFT feature point vectors of every frame of the microscopic video;
the set of SIFT feature point vectors contains several SIFT feature point vectors; the microscopic video is divided into a background video and a video to be detected;
(2) reducing the dimensionality of the SIFT feature point vectors to obtain low-dimensional SIFT feature point vectors;
(3) modeling the first frame of the microscopic video with a Gaussian mixture model to obtain an initialized Gaussian mixture background model;
(4) starting from the second frame of the microscopic video, matching the low-dimensional SIFT feature point vectors of every frame one by one and updating the Gaussian mixture background model according to the matching results; and then segmenting the foreground region from every frame of the video to be detected.
2. The method for detecting moving target regions in underwater microscopic video according to claim 1, characterized in that the formula of the initialized Gaussian mixture background model is:
P(X) = \sum_{j=1}^{k} \omega_j \cdot \eta(X, \mu_j, \Sigma_j)
where k is the number of Gaussian distributions in the Gaussian mixture background model, X is a SIFT feature point vector of the first frame, ω_j is the weight coefficient of the j-th Gaussian distribution in the Gaussian mixture of the first frame, μ_j and Σ_j are respectively the mean vector and covariance matrix of the j-th Gaussian distribution, and η is the Gaussian probability density function.
3. The method for detecting moving target regions in underwater microscopic video according to claim 1, characterized in that, in step (4), the process of matching a low-dimensional SIFT feature point vector and updating the Gaussian mixture background model is: matching the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating, and then updating the Gaussian mixture background model according to the matching result; the first matching uses the initialized Gaussian mixture background model.
4. The method for detecting moving target regions in underwater microscopic video according to claim 1, characterized in that, in step (4), the process of segmenting the foreground region from every frame of the video to be detected is:
1) matching the current low-dimensional SIFT feature point vector to be matched against the Gaussian mixture background model obtained by the previous matching and updating; the current low-dimensional SIFT feature point vector to be matched is a low-dimensional SIFT feature point vector of a frame of the video to be detected;
2) updating the Gaussian mixture background model according to the matching result, and classifying the pixel corresponding to this low-dimensional SIFT feature point vector: judging whether the vector fails to match all k Gaussian distributions of the Gaussian mixture background model obtained by the previous matching and updating; if so, the pixel corresponding to this low-dimensional SIFT feature point vector belongs to the foreground region; if not, it belongs to the background region;
3) repeating steps 1) and 2) to traverse every low-dimensional SIFT feature point vector of every frame of the video to be detected.
CN2012100351846A 2012-02-16 2012-02-16 Method for detecting moving target region aiming at underwater microscopic video Pending CN102592290A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100351846A CN102592290A (en) 2012-02-16 2012-02-16 Method for detecting moving target region aiming at underwater microscopic video

Publications (1)

Publication Number Publication Date
CN102592290A true CN102592290A (en) 2012-07-18

Family

ID=46480880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100351846A Pending CN102592290A (en) 2012-02-16 2012-02-16 Method for detecting moving target region aiming at underwater microscopic video

Country Status (1)

Country Link
CN (1) CN102592290A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6456337B1 (en) * 1997-03-06 2002-09-24 Fujitsu General Limited Moving image correcting circuit for display device
CN102175693A (en) * 2011-03-08 2011-09-07 中南大学 Machine vision detection method of visual foreign matters in medical medicament
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
熊超: "Research on Moving Vehicle Detection and Tracking Technology in Video Images", China Excellent Master's Theses Full-text Database (Information Science and Technology) *
王科俊 et al.: "Target Recognition Using SIFT Features and an Enhanced Gaussian Mixture Model", Journal of Projectiles, Rockets, Missiles and Guidance *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400380A (en) * 2013-07-25 2013-11-20 河海大学 Single-camera underwater target three-dimensional trajectory simulation method fused with image matrix offset
CN103400380B (en) * 2013-07-25 2016-11-23 河海大学 Single-camera underwater target three-dimensional trajectory simulation method fused with image matrix offset
CN105141893A (en) * 2015-08-05 2015-12-09 广州杰赛科技股份有限公司 Moving method and environment detecting device
CN105141893B (en) * 2015-08-05 2018-06-08 广州杰赛科技股份有限公司 Moving method and environment detecting device
CN109460719A (en) * 2018-10-24 2019-03-12 四川阿泰因机器人智能装备有限公司 Electric power operation safety recognition method
CN109993091A (en) * 2019-03-25 2019-07-09 浙江大学 Monitoring video target detection method based on background elimination
CN109993091B (en) * 2019-03-25 2020-12-15 浙江大学 Monitoring video target detection method based on background elimination
CN110120096A (en) * 2019-05-14 2019-08-13 东北大学秦皇岛分校 Single-cell three-dimensional reconstruction method based on microscopic monocular vision
CN111242974A (en) * 2020-01-07 2020-06-05 重庆邮电大学 Vehicle real-time tracking method based on twin network and back propagation
CN111242974B (en) * 2020-01-07 2023-04-11 重庆邮电大学 Vehicle real-time tracking method based on twin network and back propagation
CN115376053A (en) * 2022-10-26 2022-11-22 泰山学院 Video shot boundary detection processing method, system, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN102592290A (en) Method for detecting moving target region aiming at underwater microscopic video
CN106204572B (en) Road target depth estimation method based on scene depth mapping
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
CN106338733B (en) Forward-Looking Sonar method for tracking target based on frogeye visual characteristic
CN110246151B (en) Underwater robot target tracking method based on deep learning and monocular vision
WO2015010451A1 (en) Method for road detection from one image
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN104700071B (en) A kind of extracting method of panorama sketch road profile
CN105182350A (en) Multi-beam sonar target detection method by applying feature tracking
CN104200495A (en) Multi-target tracking method in video surveillance
CN107742306B (en) Moving target tracking algorithm in intelligent vision
CN103364410A (en) Crack detection method of hydraulic concrete structure underwater surface based on template search
CN103177454A (en) Dynamic image moving object detection method
CN114200477A (en) Laser three-dimensional imaging radar ground target point cloud data processing method
Song et al. Combining stereo and time-of-flight images with application to automatic plant phenotyping
CN103077531A (en) Grayscale target automatic tracking method based on marginal information
CN112945196B (en) Strip mine step line extraction and slope monitoring method based on point cloud data
CN110991547A (en) Image significance detection method based on multi-feature optimal fusion
CN105913425B (en) A kind of more pig contour extraction methods based on adaptive oval piecemeal and wavelet transformation
CN102592135A (en) Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN105976385A (en) Image segmentation method based on image data field
CN113221739B (en) Monocular vision-based vehicle distance measuring method
CN109558877B (en) KCF-based offshore target tracking algorithm
Yu et al. Lisnownet: Real-time snow removal for lidar point clouds
Asif et al. An active contour and kalman filter for underwater target tracking and navigation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120718