CN101144716A - Multiple angle movement target detection, positioning and aligning method - Google Patents
- Publication number
- CN101144716A (application number CN200710175865A)
- Authority
- CN
- China
- Prior art keywords
- moving target
- target
- dimensional reconstruction
- location
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method for detecting, locating, and establishing correspondence of moving targets across multiple viewing angles, belonging to the field of video surveillance technology. The method comprises the following steps: foreground detection is performed on the video images of multiple viewing angles to obtain binary foreground images; a spatial-field model is built from the binary foreground images, and three-dimensional reconstruction is carried out within the spatial-field model to obtain a three-dimensional reconstruction result of the moving targets; the reconstruction result is analyzed to detect and locate the moving targets within the spatial field, yielding their spatial positions; and the spatial position of each moving target is projected into each viewing angle to determine the target's correspondence across the viewing angles. The method handles occlusion robustly, runs fast, and can meet the real-time requirements of video surveillance.
Description
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a method for detecting, locating, and establishing correspondence of moving targets across multiple viewing angles.
Background art
In recent years, video surveillance has been widely applied in fields such as urban safety, traffic monitoring, and industrial safety. Within video surveillance, the technical challenges of multi-camera monitoring have long hindered the industry's progress. Multi-camera monitoring covers two situations: one in which the camera fields of view do not overlap (the single-view case), and one in which they do overlap (the multi-view case).
Detection, localization, and correspondence of moving targets in the multi-view case are key technologies in multi-camera video surveillance systems, and they form the basis for subsequent high-level processing such as target tracking, behavior analysis, and target recognition. When a target is isolated, detecting, locating, and corresponding it are relatively simple problems that existing methods handle well. When multiple moving targets are present, the problem becomes complex, mainly in two respects: first, the targets occlude one another, making accurate detection, localization, and correspondence very difficult; second, the computational complexity of handling multiple targets across multiple viewing angles is high.
In existing multi-view surveillance systems, the detection, localization, and correspondence problem is addressed by two classes of methods: those based on image space and those based on a fused space. Image-space methods first detect targets in the image of each viewing angle, processing each image separately, and then match the detected target information across viewing angles to locate and correspond the targets; this is currently the most common approach. Fused-space methods first extract target information from the image of each viewing angle, then fuse the image information of all viewing angles into a common space, detect the targets in that fused space, and from there realize localization and correspondence.
The limitation of image-space methods is poor occlusion handling. Because targets are detected in image space first, and the detection results form the basis of subsequent correspondence and localization, occlusion strongly affects, and may even corrupt, those results, making the correctness of the later stages hard to guarantee. Fused-space methods have appeared only in recent years; a typical one is FPPF (Fleuret F, Lengagne R, and Fua P. Fixed point probability field for complex occlusion handling. In: International Conference on Computer Vision, 2005), which combines the foreground images of all viewing angles to first compute the probability of target presence in a top-down view of the space, detects targets in that top-down view, and then performs correspondence and detection in each viewing angle. Fused-space methods do not extract targets directly from the two-dimensional image of each viewing angle but process the fused information in space; since the fused information accounts for inter-target occlusion, these methods handle occlusion better and often yield better results. Their drawback is very high computational complexity, which prevents them from meeting real-time requirements.
In summary, existing methods either handle occlusion poorly or are too computationally expensive to meet the requirements of video surveillance.
Summary of the invention
To strengthen occlusion handling and reduce computational complexity, the invention provides a method, based on three-dimensional reconstruction, for detecting, locating, and corresponding moving targets across multiple viewing angles.
The technical scheme is as follows: first, the video of each viewing angle is acquired and the moving-target foreground of each viewing angle is extracted; the foreground information of all viewing angles is then combined to perform fast three-dimensional reconstruction of the targets in space. Because each viewing angle supplies only foreground information, the reconstruction result contains only the moving targets and no background; projecting it vertically onto the ground therefore produces a peak at each target's position, so detecting the peaks both detects and locates the targets. Because the calibration information of each viewing angle is known, the spatial target positions can be projected into each viewing angle to detect the targets there, and projecting the same position into all viewing angles simultaneously establishes their correspondence.
Concrete steps comprise:
Foreground detection is performed on the video images of the multiple viewing angles to obtain binary foreground images;
A spatial-field model is built from the binary foreground images, and three-dimensional reconstruction is carried out within it to obtain the three-dimensional reconstruction result of the moving targets;
The reconstruction result is analyzed to detect and locate the moving targets within the spatial field, yielding their spatial positions;
The spatial position of each moving target is projected into each of the viewing angles to determine the target's correspondence across the viewing angles.
Carrying out three-dimensional reconstruction within the spatial-field model specifically comprises:
dividing the spatial-field model into volume pixels (voxels) of equal volume, each a cube or a sphere, and carrying out the reconstruction over those voxels.
Analyzing the reconstruction result to detect and locate the moving targets within the spatial field specifically comprises:
A. projecting the reconstruction result vertically and accumulating along the vertical direction to obtain a two-dimensional projection image, the flat field;
B. smoothing the flat field;
C. binarizing the smoothed flat field;
D. removing pseudo-targets from the binarized flat field;
E. performing connected-component analysis on the pseudo-target-removed flat field to obtain connected components; the number of connected components equals the number of moving targets, each component determining one target;
F. finding the maximum of the flat field within the region of each connected component; the position of the maximum is the spatial position of the corresponding moving target.
The beneficial effects of the technical scheme provided by the invention are:
First, strong occlusion handling, because all processing is carried out after the information of all viewing angles has been fused by three-dimensional reconstruction. Second, fast operation, because the fast three-dimensional reconstruction and the detection-and-localization step belong to a single process, and the correspondence between viewing angles is obtained directly from the localization result using the projection parameters. The method of the present invention can therefore satisfy the real-time requirements of video surveillance.
Description of drawings
Fig. 1 is a schematic diagram of the application setting of the method provided by the embodiment of the invention;
Fig. 2 is a schematic diagram of the spatial field used by the method;
Fig. 3 is a simplified flowchart of the method;
Fig. 4 is a detailed flowchart of the method;
Fig. 5 is a flowchart of analyzing the three-dimensional reconstruction result to detect and locate the moving targets within the spatial field.
Embodiment
To make the purpose, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, a plurality of cameras (at least two) monitor one area. The projection parameters of all cameras are calibrated, including the intrinsic parameters (focal length, the angle between the two axes of the image coordinate system, the principal point, and the scale of the camera coordinate system relative to the image coordinate system) and the extrinsic parameters (camera translation and rotation).
The embodiment of the invention proposes a kind of multiple angle movement target detection, location and corresponding method based on three-dimensional reconstruction, and concrete steps are as follows:
Step 101: foreground detection is performed on the video images of the multiple viewing angles to obtain binary foreground images of the moving targets. The purpose of this step is to obtain the motion-foreground image of each viewing angle. Various foreground-detection methods may be used; this embodiment uses a Gaussian mixture model.
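The patent uses a mixture-of-Gaussians foreground detector but gives no implementation details. The sketch below uses a simplified stand-in, a single Gaussian per pixel estimated from a frame history, only to illustrate how a binary foreground image of the kind step 101 requires can be produced; the threshold `k` and the single-Gaussian model are assumptions, not the embodiment's method.

```python
import numpy as np

def binary_foreground(frames, new_frame, k=2.5):
    """Classify pixels of `new_frame` as foreground when they deviate
    more than k standard deviations from a per-pixel background model
    estimated from the grayscale `frames` history.  A single Gaussian
    per pixel is a simplified stand-in for the mixture-of-Gaussians
    model the embodiment actually uses."""
    stack = np.stack(frames).astype(float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-6        # avoid division by zero
    return (np.abs(new_frame - mean) / std > k).astype(np.uint8)
```

The returned array plays the role of the binary foreground image fed to the reconstruction of step 102.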
As shown in Fig. 2, a target space region, called the spatial field and denoted by the symbol Ω, is established with the ground area containing the moving targets as its bottom and tall enough to contain them; the height of Ω exceeds the height of every moving target in the scene.
Step 102: using the binary foreground images obtained in step 101, the spatial-field model is built and three-dimensional reconstruction is carried out within it, yielding the three-dimensional reconstruction result of the moving targets.
Concretely, the spatial field is divided into voxels of equal volume, each denoted by the symbol v. A voxel exists if it contains a moving target (or part of one) and does not exist otherwise, so the reconstruction process reduces to deciding, for each voxel, whether it exists. The reconstruction result over all voxels in the spatial field is the three-dimensional reconstruction result of the moving targets in space. The voxels are divided at a coarser scale than in the prior art, which greatly increases computing speed.
For each voxel v, the probability that the voxel exists is written p(E_v(i) = 1 | I), where E_v(i) ∈ {0, 1} is a random variable: E_v(i) = 1 means that the voxel at position i in the spatial field exists, and E_v(i) = 0 means that it does not; I = {I_1, I_2, ..., I_N} is the set of image data, I_k is the image obtained by camera k, and N is the number of cameras.
Assuming the voxels are mutually independent, p(E_v(i) = 1 | I) is computed from the ratios fore_k(i) / area_k(i), where fore_k(i) is the number of foreground pixels inside the projection region, in camera k, of the voxel at position i, and area_k(i) is the total number of pixels, foreground and background, inside that projection region. A voxel may be a cube or a sphere; a spherical voxel projects to a circle in every viewing angle, which makes the computation more convenient.
The computed existence probability is compared against a threshold P_v:
if p(E_v(i) = 1 | I) > P_v, voxel v exists;
else voxel v does not exist.
The threshold P_v is determined by tuning against parameters such as the projection parameters, the voxel size, and the target size.
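The per-voxel existence decision above can be sketched as follows. The patent's exact formula for combining the per-camera ratios fore_k(i)/area_k(i) is not reproduced in the text, so the product used here is an assumption, one plausible reading of the joint use of all viewing angles; the threshold value is likewise illustrative.

```python
import numpy as np

def voxel_exists(fore, area, p_v=0.5):
    """Decide whether one voxel exists.

    fore[k] -- foreground-pixel count inside the voxel's projection
               region in camera k (fore_k(i) in the text)
    area[k] -- total pixel count inside that region (area_k(i))

    The per-camera coverage ratios are combined by a product (an
    ASSUMED form; the patent's combining formula is not reproduced)
    and the result is compared with the threshold P_v."""
    ratios = np.asarray(fore, dtype=float) / np.asarray(area, dtype=float)
    p = float(np.prod(ratios))            # assumed p(E_v(i) = 1 | I)
    return p > p_v
```

Running this test over every voxel in the spatial field yields the boolean reconstruction that step 103 analyzes.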
In this way, the existence decision for every voxel in the spatial field yields the three-dimensional reconstruction result of the moving targets. The spatial field, together with the reconstruction process based on it, is referred to as the spatial-field model.
Step 103: the three-dimensional reconstruction result is analyzed within the spatial field to detect and locate the moving targets, thereby obtaining their spatial positions.
Step 104: the spatial position of each moving target obtained in step 103 is projected directly into each viewing angle to determine the target's correspondence across the viewing angles. Because the projection parameters of each camera are known, projecting the same position into all viewing angles simultaneously exposes the correspondence of the targets between the viewing angles.
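The correspondence step of step 104 can be sketched with a standard pinhole model: each located world position is pushed through every camera's calibrated 3x4 projection matrix, and pixel positions sharing a target index are in correspondence. The matrices and positions below are illustrative values, not the patent's calibration data.

```python
import numpy as np

def project(P, X):
    """Project a 3-D world point X (length 3) through a 3x4 camera
    matrix P = K [R | t] into pixel coordinates (u, v)."""
    x = P @ np.append(X, 1.0)             # homogeneous projection
    return x[:2] / x[2]

def correspond(target_positions, cameras):
    """For each located target, record its pixel position in every
    view; entries sharing a target index are the same physical
    target, which is exactly the cross-view correspondence."""
    return {t: {k: project(P, X) for k, P in cameras.items()}
            for t, X in enumerate(target_positions)}
```

Because the projection is a handful of multiplications per target and view, this step adds almost no computation, matching the speed claim of the scheme.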
In step 103 above, analyzing the reconstruction result directly with existing methods is very time-consuming and of high algorithmic complexity. The invention therefore provides an improved method of detecting and locating the moving targets, as follows.
The targets stand on a large ground plane: the bottom of every target touches the ground, and its trunk extends vertically upward. The spatial field is therefore projected onto the ground and the projection is accumulated, so that a peak forms at every position where a target exists. In the flat field obtained by projecting the spatial-field model, the probability that a target exists at position i is p(E_Π(i) = 1 | I), computed by normalizing the vertical accumulation: p(E_Π(i) = 1 | I) = C · cum(i), where cum(i) is the accumulated value of the spatial field at bottom position i; the accumulation runs along the vertical direction, so that if k voxels exist on the vertical line above bottom position i, then cum(i) = k. C is a normalizing factor.
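The vertical accumulation just described is a sum of the voxel grid along its height axis. In this sketch the normalizing factor is assumed to be C = 1/H, with H the number of voxel layers, so that the flat-field values fall in [0, 1]; the patent does not specify C.

```python
import numpy as np

def flat_field(voxels):
    """voxels: boolean array of shape (H, Y, X); axis 0 is the
    vertical direction, and voxels[h, y, x] == True means the voxel
    exists.  cum(i) counts existing voxels above bottom cell i, and
    C = 1/H (an ASSUMED normalizer) maps it into [0, 1]."""
    H = voxels.shape[0]
    cum = voxels.sum(axis=0)              # cum(i), one value per bottom cell
    return cum / H                        # p(E_Pi(i) = 1 | I) with C = 1/H
```

The resulting two-dimensional array is the flat field on which steps 301 to 306 operate.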
As shown in Fig. 5, the concrete steps of the analysis that detects the targets and obtains their spatial positions are:
Step 301: the three-dimensional reconstruction result of step 102 is projected vertically to obtain the two-dimensional projection image, the flat field;
Step 302: the flat field is smoothed; various smoothing methods may be used, such as Gaussian, median, or mean smoothing;
Step 303: the flat field is binarized against a pre-selected threshold p_Π: values greater than p_Π are set to 1 and values less than p_Π are set to 0, yielding the binarized flat field while the original flat field is retained; the threshold p_Π is usually chosen small;
Step 304: pseudo-targets are removed from the binarized flat field using mathematical-morphology operators, the purpose being to remove regions containing few points, which are usually spurious; the morphological structuring element is a 5 × 5 rectangle, and the operation performed is an erosion followed by a dilation;
Step 305: connected-component analysis is performed on the binarized flat field, using the method of Sonka et al. (Image Processing, Analysis, and Machine Vision. Thomson Publishing Press, 2002, pp. 559-599), forming a number of connected regions; the number of connected components equals the number of moving targets, and each component is taken as one detected target;
Step 306: within the region of the original flat field corresponding to each connected component, the maximum is sought; the position of the maximum is taken as the position of the moving target corresponding to that component.
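Steps 302 to 306 can be sketched end to end as below. The choices made here beyond what the text states are assumptions: a 3 × 3 mean filter is used for step 302 (one of the smoothing options the text allows), 4-connectivity is used for the component labelling, and the threshold value is illustrative. The 5 × 5 opening (erosion then dilation) of step 304 is as specified.

```python
import numpy as np

def _opening(mask, size=5):
    """Morphological opening with a size x size rectangle (step 304)."""
    r = size // 2
    def slide(m, reduce_fn, pad_val):
        padded = np.pad(m, r, constant_values=pad_val)
        views = [padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
                 for dy in range(size) for dx in range(size)]
        return reduce_fn(np.stack(views), axis=0)
    eroded = slide(mask, np.min, 0)       # erosion removes small regions
    return slide(eroded, np.max, 0)       # dilation restores the survivors

def _components(mask):
    """4-connected component labelling (step 305): 0 = background,
    1..n = components."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        n += 1
        stack = [start]
        while stack:
            y, x = stack.pop()
            if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                    and mask[y, x] and not labels[y, x]):
                labels[y, x] = n
                stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, n

def locate_targets(field, p_pi=0.2):
    """Steps 302-306 on a flat field: smooth with a 3x3 mean filter,
    binarize at p_pi, remove pseudo-targets by a 5x5 opening, label
    the components, then take the peak of the ORIGINAL field inside
    each component as that target's position."""
    pad = np.pad(field, 1, mode='edge')
    smooth = sum(pad[dy:dy + field.shape[0], dx:dx + field.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    mask = _opening((smooth > p_pi).astype(np.uint8))
    labels, n = _components(mask.astype(bool))
    peaks = []
    for c in range(1, n + 1):
        masked = np.where(labels == c, field, -np.inf)
        peaks.append(np.unravel_index(np.argmax(masked), field.shape))
    return peaks
```

Note that the peak search of step 306 runs on the original (pre-binarization) flat field, which is why `locate_targets` keeps `field` alongside the binarized `mask`.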
Because the embodiment performs detection, localization, and correspondence of the moving targets after three-dimensional reconstruction, a target occluded in one viewing angle may be unoccluded, or differently occluded, in another; the information between viewing angles is complementary. Fusing the information of multiple viewing angles by three-dimensional reconstruction can therefore greatly reduce or even eliminate the influence of occlusion, which guarantees that the method of the present invention has strong occlusion-handling capability.
The three-dimensional reconstruction proposed by the embodiment, the spatial-field model, combines only the binary foreground information of each viewing angle, and the voxels are divided at a relatively coarse scale, reducing the number of voxels to reconstruct, so the reconstruction is fast. Detection and spatial localization of the moving targets are in fact a single process, and the correspondence step is a direct projection using the projection parameters of each viewing angle, which are known in advance. These processes require little additional computation, which also guarantees the method's fast processing speed.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (4)
1. A method for detecting, locating, and corresponding moving targets across multiple viewing angles, characterized by comprising the following steps:
Step 1: performing foreground detection on the video images of a plurality of viewing angles to obtain binary foreground images;
Step 2: building a spatial-field model from the binary foreground images and carrying out three-dimensional reconstruction within the spatial-field model to obtain a three-dimensional reconstruction result of the moving targets;
Step 3: analyzing the reconstruction result to detect and locate the moving targets within the spatial field, obtaining their spatial positions;
Step 4: projecting the spatial position of each moving target into each of the plurality of viewing angles to determine the target's correspondence among the viewing angles.
2. The method according to claim 1, characterized in that carrying out three-dimensional reconstruction within the spatial-field model in step 2 comprises:
dividing the spatial-field model into voxels of equal volume and carrying out the three-dimensional reconstruction within the spatial-field model.
3. The method according to claim 2, characterized in that the voxels are cubes or spheres.
4. The method according to claim 1, characterized in that step 3 specifically comprises:
A: projecting the three-dimensional reconstruction result vertically and accumulating along the vertical direction to obtain a two-dimensional projection image, the flat field;
B: smoothing the flat field;
C: binarizing the smoothed flat field;
D: removing pseudo-targets from the binarized flat field;
E: performing connected-component analysis on the pseudo-target-removed flat field to obtain connected components, the number of which equals the number of moving targets, each component determining one moving target;
F: finding the maximum within the region of the flat field corresponding to each connected component, the position of the maximum being the spatial position of the moving target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007101758651A CN100513997C (en) | 2007-10-15 | 2007-10-15 | Multiple angle movement target detection, positioning and aligning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101144716A true CN101144716A (en) | 2008-03-19 |
CN100513997C CN100513997C (en) | 2009-07-15 |
Family
ID=39207363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2007101758651A Expired - Fee Related CN100513997C (en) | 2007-10-15 | 2007-10-15 | Multiple angle movement target detection, positioning and aligning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100513997C (en) |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102081296A (en) * | 2010-12-01 | 2011-06-01 | 南京航空航天大学 | Device and method for quickly positioning compound-eye vision imitated moving target and synchronously acquiring panoramagram |
CN101877133B (en) * | 2009-12-17 | 2012-05-23 | 上海交通大学 | Motion segmentation method of two-dimensional view image scene |
CN101719286B (en) * | 2009-12-09 | 2012-05-23 | 北京大学 | Multiple viewpoints three-dimensional scene reconstructing method fusing single viewpoint scenario analysis and system thereof |
CN102609949A (en) * | 2012-02-16 | 2012-07-25 | 南京邮电大学 | Target location method based on trifocal tensor pixel transfer |
CN102708370A (en) * | 2012-05-17 | 2012-10-03 | 北京交通大学 | Method and device for extracting multi-view angle image foreground target |
CN103136738A (en) * | 2011-11-29 | 2013-06-05 | 北京航天长峰科技工业集团有限公司 | Registering method of fixing vidicon surveillance video and three-dimensional model in complex scene |
CN103136739A (en) * | 2011-11-29 | 2013-06-05 | 北京航天长峰科技工业集团有限公司 | Registering method of controllable vidicon surveillance video and three-dimensional model in complex scene |
CN103809176A (en) * | 2014-03-13 | 2014-05-21 | 中国电子科技集团公司第三十八研究所 | Single-pixel millimeter wave imaging device and method |
CN103837137A (en) * | 2014-03-13 | 2014-06-04 | 中国电子科技集团公司第三十八研究所 | Quick large-image single-pixel imaging device and quick large-image single-pixel imaging method |
CN104034316A (en) * | 2013-03-06 | 2014-09-10 | 深圳先进技术研究院 | Video analysis-based space positioning method |
CN104517292A (en) * | 2014-12-25 | 2015-04-15 | 杭州电子科技大学 | Multi-camera high-density crowd partitioning method based on planar homography matrix restraint |
WO2015085498A1 (en) * | 2013-12-10 | 2015-06-18 | 华为技术有限公司 | Method and device for acquiring target motion feature |
CN106453220A (en) * | 2016-06-17 | 2017-02-22 | 四川师范大学 | Butt joint type safety protection identification method |
CN106651957A (en) * | 2016-10-19 | 2017-05-10 | 大连民族大学 | Monocular vision target space positioning method based on template |
CN106780551A (en) * | 2016-11-18 | 2017-05-31 | 湖南拓视觉信息技术有限公司 | A kind of Three-Dimensional Moving Targets detection method and system |
CN110443247A (en) * | 2019-08-22 | 2019-11-12 | 中国科学院国家空间科学中心 | A kind of unmanned aerial vehicle moving small target real-time detecting system and method |
CN111882656A (en) * | 2020-06-19 | 2020-11-03 | 深圳宏芯宇电子股份有限公司 | Graph processing method, equipment and storage medium based on artificial intelligence |
CN112001884A (en) * | 2020-07-14 | 2020-11-27 | 浙江大华技术股份有限公司 | Training method, counting method, equipment and storage medium of quantity statistical model |
CN112800828A (en) * | 2020-12-18 | 2021-05-14 | 零八一电子集团有限公司 | Target track method for ground grid occupation probability |
CN113538578A (en) * | 2021-06-22 | 2021-10-22 | 恒睿(重庆)人工智能技术研究院有限公司 | Target positioning method and device, computer equipment and storage medium |
CN113628251A (en) * | 2021-10-11 | 2021-11-09 | 北京中科金马科技股份有限公司 | Smart hotel terminal monitoring method |
CN113804166A (en) * | 2021-11-19 | 2021-12-17 | 西南交通大学 | Rockfall motion parameter digital reduction method based on unmanned aerial vehicle vision |
CN114998836A (en) * | 2022-06-13 | 2022-09-02 | 北京拙河科技有限公司 | Panoramic monitoring method and device for airport runway |
Also Published As
Publication number | Publication date |
---|---|
CN100513997C (en) | 2009-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100513997C (en) | Multiple angle movement target detection, positioning and aligning method | |
CN103824066B (en) | License plate recognition method based on video streams | |
Kluge | Extracting road curvature and orientation from image edge points without perceptual grouping into features | |
CN103268480A (en) | System and method for visual tracking | |
CN104537342B (en) | Fast lane line detection method combining ridge edge detection and Hough transform | |
CN102542289A (en) | Pedestrian volume statistical method based on plurality of Gaussian counting models | |
CN115049700A (en) | Target detection method and device | |
CN109685827B (en) | Target detection and tracking method based on DSP | |
CN102842039B (en) | Road image detection method based on Sobel operator | |
CN111191730B (en) | Method and system for detecting oversized image target oriented to embedded deep learning | |
Luo et al. | Multiple lane detection via combining complementary structural constraints | |
CN113034586B (en) | Road inclination angle detection method and detection system | |
CN102749034B (en) | Railway switch gap offset detection method based on image processing | |
CN103714547A (en) | Image registration method combined with edge regions and cross-correlation | |
CN105761507B (en) | Vehicle counting method based on three-dimensional trajectory clustering | |
CN107220964A (en) | Geological stability assessment method based on linear feature extraction | |
CN103632376A (en) | Method for suppressing partial occlusion of vehicles by aid of double-level frames | |
CN111967374B (en) | Mine obstacle identification method, system and equipment based on image processing | |
Li et al. | Judgment and optimization of video image recognition in obstacle detection in intelligent vehicle | |
CN105243354B (en) | Vehicle detection method based on target feature points | |
CN115240086A (en) | Unmanned aerial vehicle-based river channel ship detection method, device, equipment and storage medium | |
Liu et al. | Towards industrial scenario lane detection: vision-based AGV navigation methods | |
CN113516853B (en) | Multi-lane traffic flow detection method for complex monitoring scene | |
CN105631900B (en) | Wireless vehicle tracking method and device | |
CN110443142A (en) | Deep learning vehicle counting method based on road surface extraction and segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2009-07-15; Termination date: 2019-10-15 |