CN103942793A - Video consistent motion region detection method based on thermal diffusion - Google Patents

- Publication number: CN103942793A (application CN201410153243.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Analysis (AREA)
The invention discloses a video consistent-motion region detection method based on thermal diffusion. The thermal diffusion process from physics is introduced: the original optical flow field serves as heat source and external force and, following the principle of anisotropic diffusion, becomes a heat map that better reflects the motion patterns of the scene. By building a Delaunay triangulation, the motion relationships between points can be better captured and the boundaries between different regions of the heat map found. The method effectively captures the motion information of the whole scene, lets points far apart influence one another through the correlation of their motion information, and is robust to the noise produced by optical flow computation.
Description
Technical field
The present invention relates to image segmentation and clustering techniques, and in particular to a thermal-diffusion-based technique for detecting consistently moving regions in video: the thermal diffusion process is introduced into video processing, and regions of consistent motion are detected by analyzing the motion patterns of the video.
Background art

Consistent-motion region detection is a very important technique in the field of video and image processing. Using techniques such as optical flow extraction and image segmentation, it analyzes the spatio-temporal motion patterns of key points or of the entire image and segments out regions with consistent motion behavior, for example vehicles traveling in the same direction on a road, or runners setting off together on a marathon course. The technique plays a very important role in applications such as dense-crowd behavior analysis and traffic monitoring.
A search of the prior art finds that the paper "A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis", published by S. Ali et al. at the Conference on Computer Vision and Pattern Recognition in 2007, proposes using Lagrangian particle dynamics to segment consistently moving regions of high-density crowds, obtaining the boundaries of the consistent regions for segmentation by constructing an FTLE field. The main problem with this method is that errors in the optical flow computation make the obtained boundaries discontinuous, which degrades the segmentation. In addition, the method is limited to high-density crowds; when the crowd or vehicle group in the video is sparse, its accuracy declines significantly.
Segmenting consistently moving regions by computing the spatio-temporal relationships between key points is currently a popular approach. The limitation of such methods is that they consider only the relationships between nearby key points and ignore the motion relationships between key points that are far apart, so consistently moving regions are hard to separate when occlusion or overlap occurs in the scene. Moreover, when the foreground and background motions are close, segmentation accuracy suffers greatly.
Therefore, in view of the problems above, it is necessary to find a method that can reflect global motion relationships, accurately capture motion details, and overcome errors such as those of optical flow computation.
Summary of the invention
Addressing the deficiencies of the prior art, the present invention proposes a consistent-motion region detection method based on thermal diffusion. The thermal diffusion process from physics is introduced into the method: by processing the originally computed optical flow field, the method can overcome optical flow computation errors, connect distant points through thermal diffusion, and at the same time capture motion details, improving the accuracy of consistent-motion region detection.
The principle of the invention is as follows. The main problems to be solved are finding the correlation between the motions of different pixels and connecting near and far correlated pixels. Noting the similarity between optical flow in image processing and heat flow in physics, both being related to the motion of particles, the invention uses the thermal diffusion process to let near and far particles interact and connect automatically. The diffusion in this method is anisotropic, so particles whose relative position is consistent with the direction of motion are more easily linked together, and the heat map built by thermal diffusion is closer to the real motion field.
The video consistent-motion region detection method based on thermal diffusion provided by the invention comprises the following steps:

Step 1: obtain the optical flow field F of the input video, F = (F_x, F_y), where F_x is the x-direction optical flow field, F_y the y-direction optical flow field, F_x(i, j) the x-direction optical flow at position (i, j), and F_y(i, j) the y-direction optical flow at position (i, j); see Fig. 2.

Step 2: using the optical flow field of frame T0 from Step 1 as heat source and external force, perform thermal diffusion to obtain the heat map E = (E_x, E_y), where E_x is the x-direction heat map and E_y the y-direction heat map; see Fig. 3.

Step 3: randomly select a number of points on the heat map (the count is given by a formula not reproduced here; M is the length of the video image and N its width), construct the Delaunay triangulation of these points (see Fig. 4), and compute the weight of every edge in the triangulation.

Step 4: find the edges from Step 3 whose weight exceeds a threshold Th, take all points on those edges as boundary points, build a binary image, and obtain the final boundary by dilation; then segment with the watershed method according to the obtained boundary and the heat map from Step 2. The segmentation result is shown in Fig. 5.
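The four steps above can be sketched end to end in code. The sketch below is illustrative only: the patent's diffusion formulas (1)-(3), the point count for Step 3, and the edge-weight formula exist only as images in the source, so `heat_diffuse` and `edge_weight` are hypothetical stand-ins, and the Delaunay triangulation is replaced by all point pairs to keep the example self-contained.

```python
import math
import random

def heat_diffuse(flow):
    """Step 2 placeholder: returns a copy of the flow field; the real
    method would apply the anisotropic update of formulas (1)-(3)."""
    return [[vec for vec in row] for row in flow]

def edge_weight(heat, p, q):
    """Step 3 stand-in: weight of edge p-q as the Euclidean distance
    between the heat vectors at the two endpoints (assumed form)."""
    (x1, y1), (x2, y2) = p, q
    hx1, hy1 = heat[x1][y1]
    hx2, hy2 = heat[x2][y2]
    return math.hypot(hx1 - hx2, hy1 - hy2)

def detect_boundary_edges(flow, n_points=50, th=0.7, seed=0):
    M, N = len(flow), len(flow[0])
    heat = heat_diffuse(flow)                      # Step 2
    rng = random.Random(seed)
    pts = [(rng.randrange(M), rng.randrange(N))    # Step 3: random points
           for _ in range(n_points)]
    # Step 3 proper would triangulate pts (Delaunay); to stay
    # self-contained, every point pair is treated as an edge here.
    edges = [(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    # Step 4: edges whose weight exceeds Th mark region boundaries.
    return [(p, q) for (p, q) in edges if edge_weight(heat, p, q) > th]

# Toy flow field: left half moves right, right half moves down.
M, N = 8, 8
flow = [[(1.0, 0.0) if j < N // 2 else (0.0, 1.0) for j in range(N)]
        for i in range(M)]
boundary_edges = detect_boundary_edges(flow)
```

With this toy input, only edges whose endpoints lie in differently moving halves receive a large weight, so the surviving edges straddle the boundary between the two motion regions.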
Preferably, the heat map E in Step 2 is obtained by the following steps:

Step a) construct two M × N two-dimensional zero matrices E_x and E_y, where E_x(i, j) is the x-direction heat map at position (i, j) and E_y(i, j) the y-direction heat map at position (i, j); construct U = (U_x, U_y) and set U = F, where U_x is the x-direction external force field and U_y the y-direction external force field;
Step b) for every point (x, y) among the M × N points, update the heat map E(x, y) = [E_x(x, y), E_y(x, y)] at position (x, y) by formula (1), where E_x(x, y) is the x-direction heat map at position (x, y) and E_y(x, y) the y-direction heat map at position (x, y);

here e_(i,j)(x, y) is the energy diffused from the point at (i, j) to (x, y);

In formula (1), e_(i,j)(x, y) = [e_x,(i,j)(x, y), e_y,(i,j)(x, y)] at position (x, y) is obtained from formulas (2) and (3), where e_x,(i,j)(x, y) is the energy diffused in the x direction from the point at (i, j) to (x, y), and e_y,(i,j)(x, y) the energy diffused in the y direction from the point at (i, j) to (x, y);
here U_x(i, j) is the x-direction external force field at point (i, j), U_y(i, j) the y-direction external force field at point (i, j), e the base of the natural logarithm, k_p and k_f constants, (x, y) the coordinates of the point receiving diffused energy, (i, j) the coordinates of the point emitting diffused energy, and TH_c a threshold;
(The expressions of formulas (1)-(3), including the flow-correlation term C(F(x, y), F(i, j)), appear only as images in the source and are not reproduced here.) Set U = E;
Step c) repeat step b) several times; the thermal diffusion process is then complete, and E = (E_x, E_y) is the resulting heat map.
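Steps a) to c) can be sketched in code. Because formulas (1)-(3) are images in the source, the kernel below is an assumed form that merely matches the surrounding text: exponential decay with distance (constant k_p), gating by a flow-correlation term against the threshold TH_c, and scaling of the force field U by k_f. Only the structure of the iteration (U = F, repeated update, U = E) follows the patent; the `correlation` function and the decay kernel are hypothetical.

```python
import math

def correlation(f1, f2):
    """Assumed form of C(F(x, y), F(i, j)): cosine similarity of the
    two flow vectors (1 for identical directions, 0 when either is zero)."""
    dot = f1[0] * f2[0] + f1[1] * f2[1]
    n1 = math.hypot(f1[0], f1[1])
    n2 = math.hypot(f2[0], f2[1])
    return dot / (n1 * n2) if n1 > 0 and n2 > 0 else 0.0

def diffuse(F, n_iters=3, k_p=0.2, k_f=0.8, th_c=0.7, radius=2):
    M, N = len(F), len(F[0])
    U = [row[:] for row in F]              # step a): U = F
    E = [[(0.0, 0.0)] * N for _ in range(M)]
    for _ in range(n_iters):               # step c): repeat step b)
        for x in range(M):                 # step b): update every E(x, y)
            for y in range(N):
                ex = ey = 0.0
                for i in range(max(0, x - radius), min(M, x + radius + 1)):
                    for j in range(max(0, y - radius), min(N, y + radius + 1)):
                        # correlation gate: uncorrelated points exchange
                        # no energy (the TH_c threshold from the text)
                        if correlation(F[x][y], F[i][j]) < th_c:
                            continue
                        d = math.hypot(x - i, y - j)
                        w = math.exp(-k_p * d)     # assumed decay kernel
                        ex += k_f * w * U[i][j][0]
                        ey += k_f * w * U[i][j][1]
                E[x][y] = (ex, ey)
        U = [row[:] for row in E]          # set U = E before the next pass
    return E

# Coherent rightward flow with one zeroed (mis-estimated) pixel:
F = [[(1.0, 0.0)] * 5 for _ in range(5)]
F[2][2] = (0.0, 0.0)
E = diffuse(F, n_iters=1)
```

The correlation gate illustrates the error-suppression effect described later: the zeroed pixel correlates with nothing, so it neither emits nor collects energy in this sketch, while the coherent pixels reinforce one another.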
Preferably, the weight in Step 3 is obtained as follows: for an edge connecting the two points at (x, y) and (i, j), its weight is given by a formula not reproduced here.
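The weight formula itself appears only as an image in the source. As a hedged stand-in, the sketch below weights an edge by the Euclidean distance between the heat vectors at its two endpoints, squashed into [0, 1): a large weight then means the endpoints move differently, marking a likely region boundary. Both the distance measure and the normalization are assumptions.

```python
import math

def edge_weight(E, p, q):
    """Assumed weight of the edge between points p = (x, y) and
    q = (i, j): distance between their heat vectors, squashed to [0, 1)."""
    (x, y), (i, j) = p, q
    ex, ey = E[x][y]          # heat vector at (x, y)
    fx, fy = E[i][j]          # heat vector at (i, j)
    d = math.hypot(ex - fx, ey - fy)
    return d / (1.0 + d)      # assumed normalization into [0, 1)

# Two rows moving in different directions:
E = [[(1.0, 0.0), (1.0, 0.0)],
     [(0.0, 1.0), (0.0, 1.0)]]
same = edge_weight(E, (0, 0), (0, 1))   # same motion: weight 0
diff = edge_weight(E, (0, 0), (1, 0))   # different motion: large weight
```

An edge inside a consistently moving region gets weight 0, while an edge crossing between the two motion patterns gets a weight large enough to exceed a threshold like Th = 0.7 only when the motions differ strongly.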
Preferably, the binary image in Step 4 is constructed as follows: build an M × N zero matrix B; for each edge from Step 3 whose weight exceeds Th, that is, if the weight of the edge connecting the two points (x, y) and (i, j) exceeds Th, set all points on that edge to 1 in matrix B; B is the required binary image.
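The binary image construction can be sketched directly: rasterize every above-threshold edge into an M × N zero matrix. The simple linear sampling used to enumerate the pixels of a segment is an implementation choice, not something specified by the patent.

```python
def mark_edge(B, p, q):
    """Set to 1 every pixel on the segment from p to q (linear sampling)."""
    (x1, y1), (x2, y2) = p, q
    steps = max(abs(x2 - x1), abs(y2 - y1), 1)
    for s in range(steps + 1):
        t = s / steps
        x = round(x1 + t * (x2 - x1))
        y = round(y1 + t * (y2 - y1))
        B[x][y] = 1            # every point on the edge is a boundary point

def binary_boundary(M, N, weighted_edges, th):
    B = [[0] * N for _ in range(M)]    # zero matrix of size M x N
    for (p, q, w) in weighted_edges:
        if w > th:                     # keep only edges heavier than Th
            mark_edge(B, p, q)
    return B

edges = [((0, 0), (4, 4), 0.9),       # weight above Th: drawn into B
         ((0, 4), (4, 0), 0.1)]       # weight below Th: ignored
B = binary_boundary(5, 5, edges, th=0.7)
```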
Compared with the prior art, the invention has the following beneficial effects:

1. The thermal diffusion process connects local neighboring points and also captures the relationships between globally distant points, automatically connecting correlated points. The invention can therefore better handle situations such as occlusion and overlap. In addition, introducing the particle flow-correlation term (C(F(x, y), F(i, j)) in formulas (2) and (3)) into the diffusion ensures that only truly correlated particles exchange energy, avoiding corruption of the originally computed optical flow data.

2. The diffusion process amplifies small optical flows. In previous methods, small flows were hard to detect or were mistaken for errors; in the invention, thermal diffusion lets small-flow points whose direction agrees with the surrounding flow reinforce one another, making regions that were originally hard to detect more evident without introducing errors.

3. The invention automatically suppresses optical flow computation errors. When an error occurs, the particle flow-correlation term (C(F(x, y), F(i, j)) in formulas (2) and (3)) prevents the erroneous point from diffusing energy to its surroundings, while over several iterations the surrounding points diffuse energy into the erroneous point; thus, through repeated diffusion, the error of the optical flow computation shrinks or even disappears.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:

Fig. 1 is the flowchart of the method of the invention.

Fig. 2 is the optical flow field of an example frame from a real traffic video.

Fig. 3 is the heat map obtained from the optical flow field of Fig. 2.

Fig. 4 is a schematic diagram of the triangulation of the frame in Fig. 2.

Fig. 5 is the final segmentation result for the frame in Fig. 2.
Embodiment
The present invention is described in detail below in conjunction with a specific embodiment. The following embodiment will help those skilled in the art to further understand the invention, but does not limit it in any form. It should be noted that those skilled in the art may also make variations and improvements without departing from the concept of the invention; these all fall within the scope of protection of the invention.

The present embodiment comprises the following steps:
Step 1: obtain the optical flow field of the input video (two two-dimensional matrices of size M × N, one for the x direction and one for the y direction), F = (F_x, F_y), where F_x(i, j) is the x-direction optical flow at position (i, j) and F_y(i, j) the y-direction optical flow at position (i, j).
In this embodiment, the optical flow field of Step 1 is obtained by the method proposed in the paper "High accuracy optical flow estimation based on a theory for warping", published by T. Brox et al. at the European Conference on Computer Vision in 2004.
Step 2: using the optical flow field of frame T0 from Step 1 as heat source and external force, perform thermal diffusion to obtain the heat map E.
The heat map E = (E_x, E_y) of Step 2 can be obtained as follows:

a) construct two two-dimensional zero matrices E_x and E_y of size M × N, where E_x(i, j) is the x-direction heat map at position (i, j) and E_y(i, j) the y-direction heat map at position (i, j). Construct U = (U_x, U_y) and set U = F.
b) for every point (x, y) among the M × N points, update E(x, y) = [E_x(x, y), E_y(x, y)] at position (x, y) by formula (1). In formula (1), e_(i,j)(x, y) = [e_x,(i,j)(x, y), e_y,(i,j)(x, y)] at position (x, y) is obtained from formulas (2) and (3).
In formulas (2) and (3) (not reproduced here), k_p is set to 0.2, k_f to 0.8, and TH_c to 0.7 in this embodiment.
Set U = E.
c) repeat b) n times (n = 3 in this embodiment). The thermal diffusion process is then complete, and E = (E_x, E_y) is the resulting heat map.
Step 3: randomly select a number of points on the heat map (the count is given by a formula not reproduced here), construct the Delaunay triangulation of these points, and compute the weight of every edge in the triangulation.
The Delaunay triangulation of Step 3 can be obtained by the method proposed in the paper "Two algorithms for constructing a Delaunay triangulation", published by D. Lee et al. in the International Journal of Computer & Information Sciences in 1980.
The weight in Step 3 is obtained as follows: for an edge connecting the two points at (x, y) and (i, j), its weight is given by a formula not reproduced here.
Step 4: find the edges from Step 3 whose weight exceeds the threshold Th, take all points on those edges as boundary points, build a binary image, and obtain the final boundary by dilation; then segment with the watershed method according to the obtained boundary and the heat map from Step 2.
In this embodiment, the threshold Th of Step 4 is set to 0.7.
The binary image of Step 4 is constructed as follows: build an M × N zero matrix B; for each edge from Step 3 whose weight exceeds Th, that is, if the weight of the edge connecting the two points (x, y) and (i, j) exceeds Th, set all points on that edge to 1 in matrix B. B is the required binary image.
The final boundary of Step 4 consists of the points set to 1 in the image.
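The "method of expanding" mentioned in Step 4, that is, thickening the thresholded edge pixels into the final boundary, can be sketched as a plain 3×3 binary dilation. The structuring element and the number of passes are assumptions; the patent does not specify them.

```python
def dilate(B, passes=1):
    """3x3 binary dilation of a 0/1 matrix B (assumed form of the
    'expanding' step; structuring element and pass count are guesses)."""
    M, N = len(B), len(B[0])
    for _ in range(passes):
        out = [[0] * N for _ in range(M)]
        for x in range(M):
            for y in range(N):
                # a pixel is set if any pixel in its 3x3 neighborhood is set
                out[x][y] = int(any(
                    B[i][j]
                    for i in range(max(0, x - 1), min(M, x + 2))
                    for j in range(max(0, y - 1), min(N, y + 2))))
        B = out
    return B

B = [[0] * 5 for _ in range(5)]
B[2][2] = 1                     # a single boundary pixel
D = dilate(B)                   # grows into its 3x3 neighborhood
```

Dilating the thin rasterized edges closes small gaps left by the sampled triangulation, which is what the watershed step needs as a starting boundary.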
In this embodiment, the watershed method of Step 4 is implemented by the method proposed in "Watersheds in digital spaces: an efficient algorithm based on immersion simulations", published by L. Vincent et al. in IEEE Transactions on Pattern Analysis and Machine Intelligence in 1991.
Implementation results

Following the above steps, consistent-motion region detection was carried out on various videos containing object and group motion; the experiments were run on a computer.

First, the invention was compared with the previously mentioned Lagrangian particle dynamics method and a consistent-region-invariance method. The error of the invention's detections against the actual consistent regions is 7.8%, while the errors of the Lagrangian particle dynamics method and the consistent-region-invariance method are 32.5% and 25.6% respectively; the accuracy of the invention is far higher than that of the other methods.

Second, tests on varied videos such as traffic scenes, religious gatherings, and squares show that the invention adapts to diverse videos: the error between the computed and actual numbers of consistent regions is only 14%, while the other two methods both have failure cases, with errors of 124% and 105% respectively against the actual number of consistent regions. The robustness of the method is thus better than that of the other methods.

Finally, tests with different parameters (such as Th) on the same video database show that the fluctuation of the invention's detection results across parameter settings is no more than 2.5%.

The experiments show that, compared with existing consistent-motion region detection methods, the invention is more robust across different videos and detects consistently moving regions better.
Specific embodiments of the invention have been described above. It should be understood that the invention is not limited to these specific implementations; those skilled in the art may make various variations or amendments within the scope of the claims without affecting the substance of the invention.
Claims (4)
1. A video consistent-motion region detection method based on thermal diffusion, characterized by comprising the following steps:

Step 1: obtain the optical flow field F of the input video, F = (F_x, F_y), where F_x is the x-direction optical flow, F_y the y-direction optical flow, F_x(i, j) the x-direction optical flow at position (i, j), and F_y(i, j) the y-direction optical flow at position (i, j);

Step 2: using the optical flow field of frame T0 from Step 1 as heat source and external force, perform thermal diffusion to obtain the heat map E = (E_x, E_y), where E_x is the x-direction heat map and E_y the y-direction heat map;

Step 3: randomly select a number of points on the heat map (the count is given by a formula not reproduced here; M is the length of the video image and N its width), construct the Delaunay triangulation of these points, and compute the weight of every edge in the triangulation;

Step 4: find the edges from Step 3 whose weight exceeds a threshold Th, take all points on those edges as boundary points, build a binary image, and obtain the final boundary by dilation; then segment with the watershed method according to the obtained boundary and the heat map from Step 2.
2. The video consistent-motion region detection method based on thermal diffusion according to claim 1, characterized in that the heat map E in Step 2 is obtained by the following steps:

Step a) construct two M × N two-dimensional zero matrices E_x and E_y, where E_x(i, j) is the x-direction heat map at position (i, j) and E_y(i, j) the y-direction heat map at position (i, j); construct U = (U_x, U_y) and set U = F, where U is the external force field acting on the diffusion, U_x the x-direction external force field, and U_y the y-direction external force field;

Step b) for every point (x, y) among the M × N points, update the heat map E(x, y) = [E_x(x, y), E_y(x, y)] at position (x, y) by formula (1), where E_x(x, y) is the x-direction heat map at position (x, y) and E_y(x, y) the y-direction heat map at position (x, y);

here e_(i,j)(x, y) is the energy diffused from the point at (i, j) to (x, y);

In formula (1), e_(i,j)(x, y) = [e_x,(i,j)(x, y), e_y,(i,j)(x, y)] at position (x, y) is obtained from formulas (2) and (3), where e_x,(i,j)(x, y) is the energy diffused in the x direction from the point at (i, j) to (x, y), and e_y,(i,j)(x, y) the energy diffused in the y direction from the point at (i, j) to (x, y);

here U_x(i, j) is the x-direction external force field at point (i, j), U_y(i, j) the y-direction external force field at point (i, j), e the base of the natural logarithm, k_p and k_f constants, (x, y) the coordinates of the point receiving diffused energy, (i, j) the coordinates of the point emitting diffused energy, and TH_c a threshold;

(Formulas (1)-(3), including the flow-correlation term, are not reproduced here.) Set U = E;

Step c) repeat step b) several times; the thermal diffusion process is then complete, and E = (E_x, E_y) is the resulting heat map.
3. The video consistent-motion region detection method based on thermal diffusion according to claim 2, characterized in that the weight in Step 3 is obtained as follows: for an edge connecting the two points at (x, y) and (i, j), its weight is given by a formula not reproduced here.
4. The video consistent-motion region detection method based on thermal diffusion according to claim 3, characterized in that the binary image in Step 4 is constructed as follows: build an M × N zero matrix B; for each edge from Step 3 whose weight exceeds Th, that is, if the weight of the edge connecting the two points (x, y) and (i, j) exceeds Th, set all points on that edge to 1 in matrix B; B is the required binary image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410153243.9A CN103942793B (en) | 2014-04-16 | 2014-04-16 | The consistent motion region detection method of video based on thermal diffusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103942793A true CN103942793A (en) | 2014-07-23 |
CN103942793B CN103942793B (en) | 2016-11-16 |
Family
ID=51190444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410153243.9A Active CN103942793B (en) | 2014-04-16 | 2014-04-16 | The consistent motion region detection method of video based on thermal diffusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103942793B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102708571A (en) * | 2011-06-24 | 2012-10-03 | 杭州海康威视软件有限公司 | Method and device for detecting strenuous motion in video |
US8723104B2 (en) * | 2012-09-13 | 2014-05-13 | City University Of Hong Kong | Methods and means for manipulating particles |
Non-Patent Citations (2)

Title |
---|
潘光远 (Pan Guangyuan), "Research on optical flow field algorithms and their application in video object detection", China Master's Theses Full-text Database, Information Science and Technology series * |
路子赟 (Lu Ziyun), "Research on optical flow field computation and several optimization techniques", China Doctoral Dissertations Full-text Database, Information Science and Technology series * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105654478A (en) * | 2015-12-25 | 2016-06-08 | 华中科技大学 | Underwater heat source detecting method based on thermal area integration |
CN107784129A (en) * | 2016-08-24 | 2018-03-09 | 中国海洋大学 | Time Continuous flow field structure analytical technology based on objective Euler's coherent structure |
US11463687B2 (en) | 2019-06-04 | 2022-10-04 | Beijing Bytedance Network Technology Co., Ltd. | Motion candidate list with geometric partition mode coding |
US11575911B2 (en) | 2019-06-04 | 2023-02-07 | Beijing Bytedance Network Technology Co., Ltd. | Motion candidate list construction using neighboring block information |
US11611743B2 (en) | 2019-06-04 | 2023-03-21 | Beijing Bytedance Network Technology Co., Ltd. | Conditional implementation of motion candidate list construction process |
US11509893B2 (en) | 2019-07-14 | 2022-11-22 | Beijing Bytedance Network Technology Co., Ltd. | Indication of adaptive loop filtering in adaptation parameter set |
US11647186B2 (en) | 2019-07-14 | 2023-05-09 | Beijing Bytedance Network Technology Co., Ltd. | Transform block size restriction in video coding |
WO2021057996A1 (en) * | 2019-09-28 | 2021-04-01 | Beijing Bytedance Network Technology Co., Ltd. | Geometric partitioning mode in video coding |
US11722667B2 (en) | 2019-09-28 | 2023-08-08 | Beijing Bytedance Network Technology Co., Ltd. | Geometric partitioning mode in video coding |
Also Published As
Publication number | Publication date |
---|---|
CN103942793B (en) | 2016-11-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |