CN103077533B - Method for locating moving targets based on frog-eye visual characteristics - Google Patents
Method for locating moving targets based on frog-eye visual characteristics
- Publication number: CN103077533B
- Application number: CN201210574497.9A
- Authority: CN (China)
- Prior art keywords: image, moving, moving target, region, tracking
- Legal status: Active (the legal status is an assumption, not a legal conclusion)
Abstract
The invention discloses a method for locating moving targets based on frog-eye visual characteristics. The method comprises: adopting a moving-region extraction algorithm based on the inter-frame difference method, simulating the sensitivity of the frog-eye visual system to moving targets in order to extract the moving regions in an image. Specifically, the frame-difference method is applied to adjacent frames of the image sequence captured by a second camera used for relay tracking, giving an image that contains the candidate moving regions of one or more moving targets. Histogram matching is then performed between this image and the moving-target tracking-box image selected in a first camera, and the most similar region is found among the candidate moving regions; that region is the position of the selected moving target in the second camera's view. With the disclosed method, moving targets can be extracted accurately from complex scenes, so that localization is fast and accurate.
Description
Technical field
The present invention relates to the field of pattern recognition, and in particular to a method for locating moving targets based on frog-eye visual characteristics.
Background technology
At present, the construction of smart and safe cities creates a growing demand for intelligent video surveillance systems, and the functional requirements placed on such systems keep increasing; relay tracking has become one of their major functions. Because the monitoring range of a single camera is limited, continuous surveillance of the same target across a larger monitored area, in order to obtain clearer and more detailed images of the target, requires the cooperation of multiple cameras; relay tracking is the function born of this demand. Nearly all relay-tracking methods involve matching the moving target, and conventional matching methods can be divided into three major classes according to the matching features they choose: (1) matching directly on the pixel values of the original image; (2) matching on physical shape features of the image (points, lines), such as edges and corners, which markedly reduces the number of pixels entering the correlation computation and adapts better to the scene; (3) algorithms using higher-level features, such as constrained tree search. For relay tracking in real, complex scenes, however, these conventional target-matching methods cannot be applied directly with good results. The basic reason is that the actual scene is complicated and contains considerable interference, so directly applying a conventional matching algorithm faces a large computational load, heavy interference, and inaccurate localization.
Summary of the invention
The object of the present invention is to provide a method for locating moving targets based on frog-eye visual characteristics, capable of extracting moving targets accurately from complex scenes and thereby locating the target quickly and accurately.
A method for locating moving targets based on frog-eye visual characteristics comprises:
adopting a moving-region extraction algorithm based on the inter-frame difference method, simulating the sensitivity of the frog-eye visual system to moving targets to extract the moving regions in an image; specifically, applying the frame-difference method to adjacent frames of the image sequence captured by a second camera used for relay tracking, to obtain an image containing the candidate moving regions of one or more moving targets;
performing histogram matching between said image containing the candidate moving regions of one or more moving targets and the moving-target tracking-box image selected in a first camera, and finding the most similar region among those candidate regions; that region is the position of the selected moving target in the second camera's view.
As can be seen from the technical solution above, by filtering out the static regions of a complex-scene image on the basis of frog-eye visual characteristics, the moving regions of moving targets can be extracted fairly accurately, which reduces the computational load of target matching and increases the accuracy of localization.
Accompanying drawing explanation
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for locating moving targets based on frog-eye visual characteristics provided by embodiment one of the present invention;
Fig. 2 is another flow chart of a method for locating moving targets based on frog-eye visual characteristics provided by embodiment two of the present invention.
Embodiment
The technical solutions of the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
A monitored scene area contains multiple cameras, which cooperate to carry out the monitoring of the area. A first camera (for example, a bullet camera) monitors the scene area as a whole; when a moving target of interest to the user appears in the first camera, a second camera (for example, a dome camera) must be called to carry out relay tracking of that target.
In a fairly complex environment (for example, when multiple moving targets may appear in the second camera), the moving target must first be matched before the second camera can accurately relay-track the target of interest from the first camera.
Embodiment one
Fig. 1 is a flow chart of the method for locating moving targets based on frog-eye visual characteristics provided by embodiment one. The method mainly comprises the following steps.
Step 101: filter out the static regions in the second camera's view on the basis of frog-eye visual characteristics, and extract the moving-region image.
The frog-eye visual system is specifically sensitive to moving targets: a frog cannot see (or at least does not attend to) the static details of the world around it. This characteristic can be exploited to filter out the static regions of a complex scene in favor of the moving targets.
Accordingly, a moving-region extraction algorithm based on the inter-frame difference method is adopted to simulate the frog eye's sensitivity to moving targets, filtering out the static regions of the second camera's scene and achieving accurate extraction of the moving regions. Specifically, the frame-difference method is applied to adjacent frames of the image sequence captured by the second camera, giving an image containing the candidate moving regions of one or more moving targets.
The inter-frame difference method performs a difference operation on adjacent frames of the image sequence, compares the gray-value differences of corresponding pixels in adjacent frames, and then extracts the image containing the candidate moving regions of one or more moving targets by applying a chosen threshold.
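As a minimal sketch of the two-frame differencing just described, the snippet below thresholds the absolute gray-level difference of corresponding pixels; the threshold of 30 gray levels and the toy 3 × 4 frames are illustrative assumptions, not values taken from the patent.

```python
def frame_difference(prev, curr, threshold=30):
    """Binary motion mask: 1 where the absolute gray-level difference
    between corresponding pixels of adjacent frames exceeds the threshold."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(row_p, row_c)]
        for row_p, row_c in zip(prev, curr)
    ]

# Two tiny grayscale "frames": a bright pixel moves one column right.
f_prev = [[10, 10, 10, 10],
          [10, 200, 10, 10],
          [10, 10, 10, 10]]
f_curr = [[10, 10, 10, 10],
          [10, 10, 200, 10],
          [10, 10, 10, 10]]

mask = frame_difference(f_prev, f_curr)
# The mask flags both the vacated and the newly occupied pixel.
print(mask[1])  # → [0, 1, 1, 0]
```

Note that plain two-frame differencing marks both the old and the new position of a moving object, which is one motivation for the three-frame variant used later in the description.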
Step 102: locate the moving target.
Because the monitoring range of the first camera is limited, a second camera is needed whenever a moving target must be relay-tracked. Step 101 yields the moving regions in the second camera's monitored scene (possibly the candidate regions of several moving targets); these regions can now be matched against a moving-target tracking-box image selected in the first camera to obtain the target's position in the second camera. Specifically, the user selects a moving-target tracking-box image (containing the moving region of one moving target) from the first camera's scene; histogram matching is performed between this image and the image containing the candidate moving regions of one or more moving targets in the second camera; the most similar region found among the second camera's candidate regions is the predicted position of the moving target in the second camera.
Once the region of the moving target in the second camera has been obtained, this region can be used as the second camera's initial tracking-box region; an active tracking algorithm with the mean-shift (MeanShift) method at its core then controls the motion of said second camera so that the moving target always stays in the central region of the second camera's scene and the size of the tracking box remains within a predetermined range.
In this embodiment, filtering out the static regions of the complex-scene image with a moving-region extraction algorithm based on frog-eye visual characteristics allows the moving regions of moving targets to be extracted fairly accurately, which reduces the computational load of target matching and increases its accuracy.
Embodiment two
To facilitate understanding, the present invention is described further below with reference to Fig. 2. As shown in Fig. 2, the method mainly comprises the following steps.
Step 201: the first camera detects and tracks moving targets in the monitored scene area.
Background subtraction can be used for moving-target detection. When the first camera detects a moving target in the monitored scene area, it judges whether the predetermined tracking condition is met; if so, it proceeds to step 202. The predetermined tracking condition comprises: judging whether the tracking box used by the first camera to monitor the moving target is at the edge of the first camera's monitored scene area. Specifically, if the longitudinal or transverse distance between the tracking box and the border of the first camera's image is less than a predetermined value (for example, 3 pixels), the moving target is judged to be at the edge of the first camera's monitored scene area.
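A sketch of this edge test follows; the `(x, y, w, h)` box format, the frame size, and the default 3-pixel margin (the value the text gives as an example) are illustrative assumptions.

```python
def at_scene_edge(box, frame_w, frame_h, margin=3):
    """True when the tracking box is within `margin` pixels of any
    image border, horizontally or vertically."""
    x, y, w, h = box  # top-left corner plus width and height
    return (x < margin or y < margin or
            frame_w - (x + w) < margin or
            frame_h - (y + h) < margin)

print(at_scene_edge((2, 100, 40, 60), 640, 480))    # → True  (near left border)
print(at_scene_edge((300, 200, 40, 60), 640, 480))  # → False (interior)
```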
Step 202: call an idle second camera to carry out relay tracking of the moving target.
To facilitate relay tracking, multiple preset positions must be configured in the scene area for each second camera. For example, P preset positions can be placed at the corners of the first camera's monitored scene area and at the midpoints of its top, bottom, left and right edges.
When the predetermined tracking condition is met, an idle second camera is called and moves to the preset position nearest the moving target to carry out relay tracking.
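Choosing the nearest preset position can be sketched as a minimum over Euclidean distances. The 640 × 480 scene size and the concrete preset layout (four corners plus four edge midpoints, P = 8, as the example in the text suggests) are illustrative assumptions.

```python
import math

def nearest_preset(target, presets):
    """Return the preset position closest to the target's position."""
    tx, ty = target
    return min(presets, key=lambda p: math.hypot(p[0] - tx, p[1] - ty))

# Assumed presets for a 640x480 scene: corners + edge midpoints (P = 8).
presets = [(0, 0), (640, 0), (0, 480), (640, 480),
           (320, 0), (320, 480), (0, 240), (640, 240)]

print(nearest_preset((600, 60), presets))  # → (640, 0)
```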
Step 203: filter out the static regions in the second camera's view with a moving-region extraction algorithm based on frog-eye visual characteristics, and extract the moving-region image.
After the second camera arrives at the preset position, it must stay stable there for a period of time (for example, 500 milliseconds) to capture the images used for matching the moving target against the first camera. This period can be set according to the actual scene, but it must be long enough for the moving-target matching algorithm to finish its computation and short enough that the moving target does not leave the second camera's current monitored scene.
The frog-eye visual system is sensitive to moving targets. This embodiment therefore adopts a moving-region extraction algorithm based on the inter-frame difference method to simulate the visual characteristics of the frog eye, filtering out the static objects in the second camera's scene and achieving accurate extraction of the moving regions.
Specifically, the three-frame difference variant of the inter-frame difference method is used. A difference operation is performed on each pair of adjacent frames among three consecutive frames, giving two gray-level difference images:

D_{k-1,k}(x, y) = |f_{k-1}(x, y) - f_k(x, y)|;

D_{k,k+1}(x, y) = |f_{k+1}(x, y) - f_k(x, y)|;

where f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) are three consecutive frames, and D_{k-1,k}(x, y) and D_{k,k+1}(x, y) are the gray-level difference images obtained by differencing the two pairs of adjacent frames.

A threshold is applied to binarize D_{k-1,k}(x, y) and D_{k,k+1}(x, y), giving the corresponding binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y).

The binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y) are combined with a pixel-wise AND operation, giving the three-frame difference binary image that contains the candidate moving regions of one or more moving targets.
With the above algorithm, the image containing the candidate moving regions of one or more moving targets in the second camera can be obtained. A moving region in this image may, however, be split into several pieces; therefore, to improve the accuracy of the subsequent moving-target matching algorithm, the one or more moving targets in those candidate regions need to be labelled.
The labelling mainly comprises the following steps. First, continuous moving regions are extracted from the candidate-region image using morphological closing and resolution reduction: (1) apply morphological dilation to the image containing the candidate moving regions of one or more moving targets, giving the dilated image D_n; (2) reduce the resolution of D_n to obtain the image R_n; specifically, divide D_n into Z × Z sub-blocks, and if more than half of the pixels in a sub-block have value 255, set all pixels of that sub-block to 255, otherwise to 0; (3) apply morphological erosion to R_n, giving the image containing the continuous moving regions.
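The majority-vote resolution reduction of step (2) can be sketched as below; the 4 × 4 input and Z = 2 are illustrative assumptions.

```python
def reduce_resolution(img, z):
    """Divide a binary (0/255) image into Z x Z sub-blocks; a block
    becomes 255 if more than half of its pixels are 255, else 0."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h, z):
        row = []
        for bx in range(0, w, z):
            ys, xs = range(by, min(by + z, h)), range(bx, min(bx + z, w))
            ones = sum(1 for y in ys for x in xs if img[y][x] == 255)
            total = len(ys) * len(xs)
            row.append(255 if ones * 2 > total else 0)  # "over half" vote
        out.append(row)
    return out

img = [[255, 255, 0,   0],
       [255, 0,   0,   0],
       [0,   0,   255, 255],
       [0,   0,   255, 255]]

print(reduce_resolution(img, 2))  # → [[255, 0], [0, 255]]
```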
Second, label the connected regions. After the continuous moving regions have been obtained, the connected regions must be labelled.

The present embodiment scans the binary image row by row, decomposing each row into runs (straight lines) of foreground pixels; each run stores its head and tail column indices. Two runs Line_i and Line_o on adjacent rows are eight-neighborhood connected if and only if both of the following relations hold simultaneously:

Line_i.m_1ColumnTail + 1 ≥ Line_o.m_1ColumnHead;

Line_o.m_1ColumnTail + 1 ≥ Line_i.m_1ColumnHead;
By scanning line by line, linking all connected runs into a chained list, and giving them a unified region label, the information of each connected region is obtained. From this information, the centroid, area and perimeter of each moving target can easily be computed for target classification or feature representation.
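The eight-neighborhood overlap test for two runs on adjacent rows can be sketched directly from the two inequalities above. Here runs are plain `(head, tail)` column-index tuples; the patent's `m_1ColumnHead` / `m_1ColumnTail` field names are kept only in the comments.

```python
def runs_8_connected(run_i, run_o):
    """True when two runs on adjacent rows are 8-connected, i.e. both
    inequalities from the text hold:
        Line_i.m_1ColumnTail + 1 >= Line_o.m_1ColumnHead and
        Line_o.m_1ColumnTail + 1 >= Line_i.m_1ColumnHead."""
    head_i, tail_i = run_i
    head_o, tail_o = run_o
    return tail_i + 1 >= head_o and tail_o + 1 >= head_i

print(runs_8_connected((2, 4), (5, 7)))  # → True  (diagonal touch at columns 4/5)
print(runs_8_connected((2, 4), (6, 7)))  # → False (one empty column between runs)
```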
Finally, locate the bounding frames.
The minimum enclosing rectangle of each independent connected region is extracted and labelled, so as to distinguish each moving target and its corresponding moving region.
Step 204: locate the moving target.
The image obtained in step 203 is matched against the tracking-box image of the moving target selected in the first camera.
The present embodiment uses histogram matching. First, compute the histograms W_1 and W_2 of the image obtained in step 203 and of the tracking-box image selected in the first camera.
Then convert W_1 and W_2 to images with a specified probability density function, and find the most similar region. Specifically:
For each pixel of the histograms W_1 and W_2 with pixel value r_k, map this value to its corresponding gray level s_k, and then map the gray level s_k to the final gray level z_k.
The gray levels s_k and z_k are computed as follows.
Suppose r and z are respectively the gray levels of the image before and after processing, and p_r(r) and p_z(z) are the corresponding continuous probability density functions; p_r(r) is estimated from the image before processing, and p_z(z) is the specified probability density function that the processed image is expected to have. Let s be a random variable with

s = T(r) = ∫_0^r p_r(w) dw, (1)

where w is an integration variable. Its discrete form is

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n, k = 0, 1, 2, …, L-1, (2)

where n is the total number of pixels in the image, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels.
Next, define a random variable z with

G(z) = ∫_0^z p_z(t) dt = s, (3)

where t is an integration variable; its discrete expression is

G(z_q) = Σ_{i=0}^{q} p_z(z_i), q = 0, 1, 2, …, L-1. (4)

From formulas (1) and (3), G(z) = T(r), so z must satisfy the condition

z = G^{-1}(s) = G^{-1}[T(r)]. (5)

The transformation function T(r) is obtained from formula (1), with p_r(r) estimated from the image before processing; the discrete expression is

z_k = G^{-1}[T(r_k)], k = 0, 1, 2, …, L-1. (6)

That is: first use formula (2) to precompute the mapped gray level s_k for each gray level r_k; then use formula (4) with the specified density function p_z(z) to obtain the transformation function G; finally, use formula (6) to precompute z_k for each value s_k.
After the above steps have been applied to the histograms W_1 and W_2 respectively, the matching of the moving target is complete: the moving target selected by the user in the first camera has been located in the second camera.
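A sketch of the histogram-specification mapping described by formulas (2), (4) and (6): build the cumulative mapping T from the source histogram, G from the specified histogram, then look up z_k = G^{-1}[T(r_k)] as the smallest level whose cumulative value reaches s_k. The 4-level toy histograms are illustrative assumptions.

```python
def cdf(hist):
    """Cumulative distribution of a histogram (formulas (2) and (4))."""
    total = sum(hist)
    acc, out = 0, []
    for count in hist:
        acc += count
        out.append(acc / total)
    return out

def specification_lut(src_hist, ref_hist):
    """Lookup table z_k = G^{-1}[T(r_k)] (formula (6))."""
    t = cdf(src_hist)  # s_k = T(r_k)
    g = cdf(ref_hist)  # G(z_q)
    # For each s_k, the inverse of G is the smallest level q with G(z_q) >= s_k.
    return [next(q for q, gq in enumerate(g) if gq >= s) for s in t]

# 4 gray levels: mass at the extremes is pushed toward the middle levels
# that the specified histogram favors.
print(specification_lut([4, 0, 0, 4], [0, 4, 4, 0]))  # → [1, 1, 1, 2]
```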
Step 205: relay tracking.
The region located by the second camera is used as the initial tracking-box region for relay tracking of the moving target. For example, an active tracking algorithm with the mean-shift (MeanShift) method at its core can control the motion of said second camera so that the moving target always stays in the central region of the second camera's scene and the size of the tracking box remains within a predetermined range.
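The patent's tracker drives a physical camera; the sketch below shows only the core mean-shift idea, iteratively moving a fixed-size window to the centroid of the foreground mass inside it until it converges. The binary mask, window half-size, and iteration cap are illustrative assumptions.

```python
def mean_shift(mask, cx, cy, half, max_iter=20):
    """Shift the window center (cx, cy) to the centroid of foreground
    pixels inside the (2*half+1)-wide window until it stops moving."""
    for _ in range(max_iter):
        xs = ys = n = 0
        for y in range(max(0, cy - half), min(len(mask), cy + half + 1)):
            for x in range(max(0, cx - half), min(len(mask[0]), cx + half + 1)):
                if mask[y][x]:
                    xs += x; ys += y; n += 1
        if n == 0:
            break  # no foreground in the window; stay put
        nx, ny = round(xs / n), round(ys / n)
        if (nx, ny) == (cx, cy):
            break  # converged
        cx, cy = nx, ny
    return cx, cy

# A 2x2 blob near (4, 4); starting the window at (2, 2) converges onto it.
mask = [[0] * 7 for _ in range(7)]
for y in (4, 5):
    for x in (4, 5):
        mask[y][x] = 1

print(mean_shift(mask, 2, 2, 3))  # → (4, 4)
```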
In this embodiment, filtering out the static regions of the complex-scene image allows the moving regions of moving targets to be extracted fairly accurately, which reduces the computational load of target matching and increases its accuracy.
From the description of the embodiments above, those skilled in the art can clearly understand that the embodiments may be implemented in software, or in software plus a necessary general-purpose hardware platform. On this understanding, the technical solutions of the embodiments can be embodied as a software product, stored in a non-volatile storage medium (a CD-ROM, USB flash disk, portable hard drive, etc.) and containing instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited to them; any change or replacement that a person familiar with the art could readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall therefore be defined by the protection scope of the claims.
Claims (7)
1. A method for locating moving targets based on frog-eye visual characteristics, characterized by comprising:
adopting a moving-region extraction algorithm based on the inter-frame difference method, simulating the sensitivity of the frog-eye visual system to moving targets to extract the moving regions in an image; specifically, applying the frame-difference method to adjacent frames of the image sequence captured by a second camera used for relay tracking, to obtain an image containing the candidate moving regions of one or more moving targets;
performing histogram matching between said image containing the candidate moving regions of one or more moving targets and the moving-target tracking-box image selected in a first camera, and finding the most similar region among those candidate regions, that region being the position of the selected moving target in the second camera;
wherein the step of performing histogram matching between said image containing the candidate moving regions of one or more moving targets and the moving-target tracking-box image selected in the first camera, and finding the most similar region among those candidate regions, comprises: computing respectively the histograms W_1 and W_2 of said image containing the candidate moving regions of one or more moving targets and of the moving-target tracking-box image selected in the first camera; converting the histograms W_1 and W_2 to images with a specified probability density function, and finding the most similar region;
wherein the step of converting the histograms W_1 and W_2 to images with a specified probability density function comprises:
for each pixel of the histograms W_1 and W_2 with pixel value r_k, mapping this value to its corresponding gray level s_k, and then mapping the gray level s_k to the final gray level z_k;
specifically: for each gray level r_k, precomputing the mapped gray level s_k:

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n, k = 0, 1, 2, …, L-1,

where p_r(r_j) is the corresponding probability density function, n is the total number of pixels in the image, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels;
using the specified density function p_z(z) to obtain the transformation function G:

G(z_q) = Σ_{i=0}^{q} p_z(z_i), q = 0, 1, 2, …, L-1;

and precomputing z_k for each value s_k: z_k = G^{-1}[T(r_k)], k = 0, 1, 2, …, L-1;
and wherein after finding the most similar region among the candidate moving regions, the method further comprises:
using that region as the initial tracking-box region of the second camera;
using the mean-shift (MeanShift) algorithm to control the motion of said second camera so that the moving target always stays in the central region of the second camera's scene and the size of the tracking box remains within a predetermined range.
2. The method according to claim 1, characterized in that the step of applying the frame-difference method to adjacent frames of the image sequence captured by the second camera used for relay tracking, to obtain the image containing the candidate moving regions of one or more moving targets, comprises:
performing a difference operation on each pair of adjacent frames among three consecutive frames using the three-frame difference method, giving two gray-level difference images:

D_{k-1,k}(x, y) = |f_{k-1}(x, y) - f_k(x, y)|;

D_{k,k+1}(x, y) = |f_{k+1}(x, y) - f_k(x, y)|;

where f_{k-1}(x, y), f_k(x, y) and f_{k+1}(x, y) are three consecutive frames, and D_{k-1,k}(x, y) and D_{k,k+1}(x, y) are the gray-level difference images obtained by differencing the two pairs of adjacent frames;
applying a threshold to binarize D_{k-1,k}(x, y) and D_{k,k+1}(x, y), giving the corresponding binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y);
combining the binary images B_{k-1,k}(x, y) and B_{k,k+1}(x, y) with a pixel-wise AND operation, giving the three-frame difference binary image containing the candidate moving regions of one or more moving targets.
3. The method according to claim 2, characterized in that the method further comprises labelling the one or more moving targets in said candidate moving regions;
specifically: using morphological closing and resolution reduction to extract continuous moving regions from said image containing the candidate moving regions of one or more moving targets;
scanning row by row the binary image of the extracted continuous moving regions, which contains S runs (straight lines), S being a positive integer; linking all connected runs into a chained list and giving them a unified region label, to obtain the information of each connected region;
extracting the minimum enclosing rectangle of each independent connected region and labelling it.
4. The method according to claim 3, characterized in that the step of using morphological closing and resolution reduction to extract continuous moving regions from said image containing the candidate moving regions of one or more moving targets comprises:
applying morphological dilation to said image containing the candidate moving regions of one or more moving targets, giving the dilated image D_n;
reducing the resolution of said D_n to obtain the image R_n, specifically: dividing D_n into Z × Z sub-blocks, and if more than half of the pixels in a sub-block have value 255, setting all pixels of that sub-block to 255, otherwise to 0;
applying morphological erosion to said image R_n, giving the image containing the continuous moving regions.
5. The method according to claim 1, characterized in that before applying the frame-difference method to adjacent frames of the image sequence captured by the second camera used for relay tracking, the method further comprises:
detecting and tracking the moving target with the first camera; after the first camera detects the moving target, judging whether the predetermined tracking condition is met, and if so, calling an idle second camera to carry out relay tracking of the moving target.
6. The method according to claim 5, characterized in that judging whether the predetermined tracking condition is met comprises:
judging whether the tracking box used by said first camera to monitor the moving target is at the edge of the first camera's monitored scene area; specifically, if the longitudinal or transverse distance between the tracking box and the border of the first camera's image is less than a predetermined value, judging that the moving target is at the edge of the first camera's monitored scene area.
7. The method according to claim 5 or 6, characterized in that calling the idle second camera to carry out relay tracking of the moving target comprises:
configuring, at the corners of said first camera's monitored scene area and at the midpoints of its top, bottom, left and right edges, P preset positions for the second camera to use for relay tracking; and, when the predetermined tracking condition is met, calling an idle second camera to move to the preset position nearest the moving target and carry out relay tracking.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210574497.9A CN103077533B (en) | 2012-12-26 | 2012-12-26 | A kind of based on frogeye visual characteristic setting movement order calibration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103077533A CN103077533A (en) | 2013-05-01 |
CN103077533B true CN103077533B (en) | 2016-03-02 |
Family
ID=48154052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210574497.9A Active CN103077533B (en) | 2012-12-26 | 2012-12-26 | A kind of based on frogeye visual characteristic setting movement order calibration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103077533B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105791687A (en) * | 2016-03-04 | 2016-07-20 | 苏州卓视蓝电子科技有限公司 | Frogeye bionic detection method and frogeye bionic camera |
CN107844734B (en) * | 2016-09-19 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Monitoring target determination method and device and video monitoring method and device |
CN107133969B (en) * | 2017-05-02 | 2018-03-06 | 中国人民解放军火箭军工程大学 | A kind of mobile platform moving target detecting method based on background back projection |
CN109767454B (en) * | 2018-12-18 | 2022-05-10 | 西北工业大学 | Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance |
CN111885301A (en) * | 2020-06-29 | 2020-11-03 | 浙江大华技术股份有限公司 | Gun and ball linkage tracking method and device, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101572803A (en) * | 2009-06-18 | 2009-11-04 | 中国科学技术大学 | Customizable automatic tracking system based on video monitoring |
CN101883261A (en) * | 2010-05-26 | 2010-11-10 | 中国科学院自动化研究所 | Method and system for abnormal target detection and relay tracking under large-range monitoring scene |
CN102289822A (en) * | 2011-09-09 | 2011-12-21 | 南京大学 | Method for tracking moving target collaboratively by multiple cameras |
CN102509088A (en) * | 2011-11-28 | 2012-06-20 | Tcl集团股份有限公司 | Hand motion detecting method, hand motion detecting device and human-computer interaction system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7058204B2 (en) * | 2000-10-03 | 2006-06-06 | Gesturetek, Inc. | Multiple camera control system |
- 2012-12-26: application CN201210574497.9A filed in China; granted as patent CN103077533B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN103077533A (en) | 2013-05-01 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |