CN108052585B - Method for judging dynamic target in complex environment - Google Patents

Method for judging dynamic target in complex environment

Info

Publication number
CN108052585B
CN108052585B (application CN201711307523.0A)
Authority
CN
China
Prior art keywords
grid
picture
pictures
description data
dynamic target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711307523.0A
Other languages
Chinese (zh)
Other versions
CN108052585A (en)
Inventor
高兴宇 (Gao Xingyu)
钟汇才 (Zhong Huicai)
洪一峰 (Hong Yifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Fenghua United Technology Co., Ltd.
Original Assignee
Jiangsu Fenghua United Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Fenghua United Technology Co ltd filed Critical Jiangsu Fenghua United Technology Co ltd
Priority to CN201711307523.0A
Publication of CN108052585A
Application granted
Publication of CN108052585B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 — Information retrieval of still image data
    • G06F16/51 — Indexing; Data structures therefor; Storage structures
    • G06F16/20 — Information retrieval of structured data, e.g. relational data
    • G06F16/29 — Geographical information databases

Abstract

The invention provides a method for judging a dynamic target in a complex environment, which mainly comprises the following steps: defining a plurality of sample pictures; performing first-granularity gridding processing on the sample pictures to form a plurality of gridded grid pictures; adding geographical location information and environment description data to the grid pictures, storing them in a database of a geographic information system, and establishing index associations. The picture to be judged is then gridded in the same first-granularity manner and compared a first time with the stored grid pictures to screen out the grid pictures that have changed. Finally, a second comparison is performed between the grid pictures with the same geographical location information and the environment description data of the sample pictures to judge whether the target is a dynamic target. The invention effectively eliminates interference from the external environment, accurately judges dynamic targets in complex scenes, and has the advantage of low computational cost.

Description

Method for judging dynamic target in complex environment
Technical Field
The invention relates to the technical field of multimedia content analysis, in particular to a method for judging a dynamic target in a complex environment.
Background
The border line between countries requires accurate determination of dynamic targets and of their attributes (e.g., people, vehicles, objects, etc.). At present, border lines are usually monitored or patrolled manually, so the monitoring efficiency for dynamic targets in complex scenes is very low. The prior art therefore adopts closed-circuit television systems for centralized remote monitoring of border lines, but this still depends on manual judgment of dynamic targets appearing in the surveillance video, and thus still suffers from low monitoring efficiency.
In addition, owing to limitations in video quality and in traditional video content analysis technology, judgments on the attributes of dynamic targets in complex scenes are not accurate enough, and misjudgments and missed reports readily occur.
Finally, in an outdoor environment, shadows cast by illumination, rain, fog, snow, and drifting clouds easily interfere with surveillance video along the border line, so that dynamic targets may go undetected or non-dynamic targets may be misjudged as dynamic targets. This also greatly affects the accuracy of dynamic target determination.
Disclosure of Invention
The invention aims to accurately judge dynamic targets in complex scenes by reducing the interference of external environmental factors on the judgment, so that dynamic targets are accurately monitored and judged, while also reducing the time complexity and space complexity overhead required for judging dynamic targets in complex scenes.
In order to achieve the above object, the present invention provides a method for determining a dynamic target in a complex environment, comprising the following steps:
step S1, defining a plurality of sample pictures, performing first-granularity gridding processing on the sample pictures to form a plurality of gridded grid pictures, adding geographical location information and environment description data to the grids in the grid pictures, storing them in a database of a geographic information system, and establishing index associations;
step S2, performing a gridding process on the to-be-determined picture based on a first granularity gridding process, and performing a first comparison with the grid pictures of the added geographic location information and the environmental description data stored in the database in step S1 to screen out a plurality of grid pictures with changes, where a calculation formula of the first comparison is shown in the following formula (1):
Δi = |Xi − X*| > ηi, i = 1, 2, …, h (1);
where ηi is an empirical parameter with a value range of 0-30%, X* is the grid picture with annotated geographical location information and environment description data stored in the database, Xi is the i-th grid picture to be judged, h is the number of grid pictures selected in the first comparison, and h is a positive integer greater than or equal to 1;
then, comparing the grid picture with the same geographical location information with the environment description data in the sample picture for the second time to determine whether the target in the area where the changed grid picture is located is a dynamic target, wherein a calculation formula of the second comparison is as shown in the following formula (2):
Δj = |Yj − Y*| > ηj, j = 1, 2, …, n (2);
where ηj is an empirical parameter with a value range of 0-30%, Y* is the grid picture with annotated geographical location information and environment description data stored in the database, Yj is the j-th changed grid picture to be judged, n is the number of grid pictures selected in the second comparison, and n is a positive integer greater than or equal to 1.
As a further improvement of the present invention, in step S1, the sample picture is a picture obtained by an image capturing device under different seasonal and/or diurnal variation conditions in the same scene.
As a further improvement of the invention, the environment description data consists of one or more of seasonal description data, diurnal variation data, dynamic target description data, non-dynamic target description data, altitude data, dynamic target occurrence probability data, and non-dynamic target occurrence probability data, and unique environment description data and geographical location information are annotated for each grid picture.
As a further improvement of the present invention, the resolution of the grid picture formed by the first-granularity gridding processing in step S1 is 0.1 m × 0.1 m to 1.0 m × 1.0 m.
As a further improvement of the present invention, the step S2 further includes: performing at least one secondary gridding process, at a granularity smaller than that of the first-granularity gridding process, on the screened changed grid pictures to form a plurality of sub-grids; the sub-grids include all the geographical location information and environment description data of the changed grid picture.
As a further improvement of the present invention, the step S2 further includes: performing at least one secondary gridding process on the grids corresponding to the four adjacent quadrants, in descending order of the coverage area in each quadrant, to form a plurality of sub-grids, and matching the geographical location information of the superior grid to which each sub-grid belongs against the database according to the established index association, so as to finally determine whether the target image covered by the minimum circumscribed rectangle is a dynamic target.
Compared with the prior art, the invention has the following beneficial effects: the method effectively eliminates external environmental interference (such as a pseudo-dynamic target caused by the moving shadow of a drifting cloud), thereby largely avoiding misjudging a non-dynamic target as a dynamic target; it accurately judges dynamic targets in complex scenes and reduces the misjudgment rate; and, by combining geographical location information with environment description data, it significantly reduces the time and space complexity overhead required for judging dynamic targets in complex scenes.
Drawings
Fig. 1 is a sample picture in step S1;
FIG. 2 is a grid image obtained by performing a first granular gridding process on the image of FIG. 1;
FIG. 3 is a schematic diagram of adding geographical location information to a grid picture;
FIG. 4 is a sample picture taken by an image capture device in a scene;
FIG. 5 is another picture taken under the same conditions by the same image capture device at the angle shown in FIG. 4;
fig. 6 is a schematic diagram of the grid pictures formed by performing the first-granularity gridding process on fig. 5, in which the target in the area of the changed grid pictures is judged to be a dynamic target after a second comparison between the grid pictures with the same geographical location information and the description data of the picture shown in fig. 5;
FIG. 7 is a schematic diagram of the dynamic target of FIG. 6 undergoing a secondary gridding process;
fig. 8 is a schematic diagram of a dynamic target extracted from the minimum bounding rectangle determined by the dynamic target shown in fig. 7.
Detailed Description
The present invention is described in detail with reference to the embodiments shown in the drawings, but it should be understood that these embodiments are not intended to limit the present invention, and those skilled in the art should understand that functional, methodological, or structural equivalents or substitutions made by these embodiments are within the scope of the present invention.
Fig. 1 to fig. 3 are first used to describe a detailed process of performing a first-time gridding process on a sample picture and a picture to be determined based on a first granularity gridding process in the present embodiment.
Referring to fig. 1, there is a tree A (pre-identified as a static target in this embodiment) in a sample picture 10. A plurality of such pictures 10 acquired by the image acquisition device are referred to as sample pictures. To improve the accuracy of judging a dynamic target in a complex environment, the sample picture 10 in step S1 is a picture obtained by an image acquisition device (e.g., a video camera or still camera) of the same scene under different seasons, different diurnal conditions, or both.
The sample picture 10 shown in fig. 1 is sliced at a 10 × 10 gridding scale to form 100 gridded grid pictures, namely grid picture A00 to grid picture A99 (see fig. 2). The grid pictures covered by the tree A, as a static target, are grid picture A67, grid picture A68, grid picture A77, and grid picture A78. These four grid pictures are annotated with geographical location information and environment description data, stored in the database of the geographic information system, and index associations are established. The environment description data consists of one or more of seasonal description data, diurnal variation data, dynamic target description data, non-dynamic target description data, altitude data, dynamic target occurrence probability data, and non-dynamic target occurrence probability data; unique environment description data and geographical location information are annotated for each grid picture.
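The slicing and annotation of step S1 can be sketched as follows. This is a minimal Python illustration: the record layout, the field names, and the mapping from grid indices to geographic corner points are assumptions made for illustration, not the schema actually used by the geographic information system.

```python
import numpy as np

def grid_picture(image, rows=10, cols=10):
    """Slice an image array into rows x cols grid pictures keyed A00..A99."""
    h, w = image.shape[0] // rows, image.shape[1] // cols
    return {f"A{r}{c}": image[r*h:(r+1)*h, c*w:(c+1)*w]
            for r in range(rows) for c in range(cols)}

def annotate(cells, origin=(0.0, 0.0), size=1.0, env=None):
    """Attach four geographic corner points and environment data to each cell."""
    db = {}
    for key, cell in cells.items():
        r, c = int(key[1]), int(key[2])
        x0, y0 = origin[0] + c * size, origin[1] + r * size
        db[key] = {
            "pixels": cell,
            # Four corner points, analogous to the (x, y) pairs of fig. 3
            "corners": [(x0, y0), (x0 + size, y0),
                        (x0, y0 + size), (x0 + size, y0 + size)],
            # Illustrative environment description data
            "env": env or {"season": "summer", "time": "day"},
        }
    return db

sample = np.zeros((100, 100), dtype=np.uint8)   # stand-in for sample picture 10
db = annotate(grid_picture(sample))
print(len(db))   # 100
```

In a real deployment these records would live in the GIS database with index associations; a dictionary stands in for that store here.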
Specifically, the operations of adding geographical location information and environment description data are performed on each of the 100 grid pictures A00 to A99. For example, the environment description data annotated for grid picture A00 is: summer (which may be specified down to a month and/or date) + day (which may be specified down to a time) + grass + latitude + altitude data + dynamic target occurrence probability + target attribute (for example, whether the target to which the grid picture belongs is a tree, a road, an animal, a person, etc.), and all of the above sub-environment description data are index-associated with grid picture A00.
Meanwhile, referring to fig. 3, for a static target in the sample picture 10 (e.g., tree A), the geographical location information of the four grid pictures it covers is annotated. Specifically, the geographical location information of grid picture A67 is defined jointly by (x1, y1), (x2, y2), (x4, y4), and (x5, y5); that of grid picture A68 by (x2, y2), (x3, y3), (x5, y5), and (x6, y6); that of grid picture A77 by (x4, y4), (x5, y5), (x7, y7), and (x8, y8); and that of grid picture A78 by (x5, y5), (x6, y6), (x8, y8), and (x9, y9). The specific location of tree A in the sample picture 10 can thus be described by the geographical location information.
Specifically, in the present embodiment, the resolution of the grid picture formed by the first-granularity gridding processing in step S1 is 0.1 m × 0.1 m to 1.0 m × 1.0 m. The segmentation granularity of the grid picture obtained in step S1 depends on the number of pixels and the resolution of the original sample picture 10. In general, to ensure accurate judgment of a dynamic target while keeping the computational overhead manageable, the original sample picture 10 may be divided into 100 grid pictures; the shape of each grid picture is not limited to a square, and rectangular or regular hexagonal grid pictures may also be used. The difference is that, when adding geographical location information to a regular hexagonal grid picture in step S1, coordinate data for six coordinate points needs to be annotated.
Referring to fig. 4, step S2 is next performed. We describe in detail the process of determining the sample picture 20 when a dynamic object B appears in a certain frame.
In fig. 4, a river 30 and a tree a are shown in the sample picture 20. In fig. 5, a ship 40 appears in the river 30 on the premise of the same environmental description data. Then, with respect to FIG. 4, the vessel 40 is the dynamic target B in FIG. 5. As described above, the sample picture 20 needs to be gridded by the first granularity gridding process to form 100 grid pictures, namely, grid picture a 00-grid picture a 99.
Next, the picture to be judged 20a shown in fig. 6 is gridded based on the first-granularity gridding processing mode and compared a first time with the grid pictures, annotated with geographical location information and environment description data, stored in the database in step S1, so as to screen out the grid pictures that have changed; the changed grid pictures are then treated as highly suspect objects for the second comparison.
In fig. 6, the changed grid pictures are grid picture A38 and grid picture A39, while grid picture A67, grid picture A68, grid picture A77, and grid picture A78 in the picture to be judged 20a are unchanged. The computer can thus conclusively judge tree A to be a static target.
In this embodiment, the calculation formula for comparing the sample picture 20 with the picture 20a to be determined for the first time is shown in the following formula (1):
Δi = |Xi − X*| > ηi, i = 1, 2, …, h (1);
where ηi is an empirical parameter with a value range of 0-30%, X* is the grid picture with annotated geographical location information and environment description data stored in the database, Xi is the i-th grid picture to be judged, h is the number of grid pictures selected in the first comparison, and h is a positive integer greater than or equal to 1.
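Formula (1) can be sketched as a per-grid screening pass. The normalized mean absolute pixel difference used here as the distance |Xi − X*| is an assumption: the text fixes only the empirical threshold range of 0-30%, not a particular measure.

```python
import numpy as np

def first_comparison(test_cells, ref_cells, eta=0.1):
    """Screen out grid pictures whose change Delta_i exceeds the threshold eta_i."""
    changed = []
    for key, x_i in test_cells.items():
        # Assumed distance: mean absolute pixel difference, normalized to [0, 1]
        delta = np.abs(x_i.astype(float) - ref_cells[key].astype(float)).mean() / 255.0
        if delta > eta:          # eta within the empirical range 0-30%
            changed.append(key)
    return changed

# Reference grids are all dark; only cell "A38" brightens in the test picture.
ref = {k: np.zeros((10, 10), dtype=np.uint8) for k in ("A38", "A39", "A67")}
test = {k: v.copy() for k, v in ref.items()}
test["A38"][:] = 200
print(first_comparison(test, ref))   # ['A38']
```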
Then, the grid pictures with the same geographical location information are compared a second time with the environment description data of the sample picture 20 to judge whether the target in the area of the changed grid pictures (i.e., grid picture A38 and grid picture A39) is a dynamic target; the calculation formula of the second comparison is shown in formula (2) below:
Δj = |Yj − Y*| > ηj, j = 1, 2, …, n (2);
where ηj is an empirical parameter with a value range of 0-30%, Y* is the grid picture with annotated geographical location information and environment description data stored in the database, Yj is the j-th changed grid picture to be judged, n is the number of grid pictures selected in the second comparison, and n is a positive integer greater than or equal to 1.
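Formula (2) can be sketched in the same way, restricted to the changed grids and consulting the stored environment description data. The cloud-shadow veto rule and the `shadow_prob` field below are illustrative assumptions about how that data might rule out pseudo-dynamic targets; the source does not specify the veto mechanism.

```python
import numpy as np

def second_comparison(changed_keys, test_cells, sample_cells, env_db, eta=0.1):
    """Confirm which changed grids hold a genuine dynamic target (formula (2))."""
    dynamic = []
    for key in changed_keys:
        delta = np.abs(test_cells[key].astype(float)
                       - sample_cells[key].astype(float)).mean() / 255.0
        # Assumed veto: a grid whose environment data flags a high probability
        # of cloud shadow is not reported even though its pixels changed.
        if delta > eta and env_db[key].get("shadow_prob", 0.0) < 0.5:
            dynamic.append(key)
    return dynamic

sample = {k: np.zeros((4, 4), dtype=np.uint8) for k in ("A38", "A39")}
test = {k: np.full((4, 4), 180, dtype=np.uint8) for k in ("A38", "A39")}
env = {"A38": {"shadow_prob": 0.1}, "A39": {"shadow_prob": 0.9}}
print(second_comparison(["A38", "A39"], test, sample, env))   # ['A38']
```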
With the method disclosed in this embodiment, the second comparison reveals whether the target in the area of a changed grid picture is a dynamic target, thereby confirming that the changed grid picture really contains one. Dynamic targets are not limited to ships, pedestrians, and animals.
This both reduces the computational cost of comparing dynamic targets and improves the detection precision. In the method for determining a dynamic target in a complex environment disclosed in this embodiment, step S2 further includes performing at least one secondary gridding process, at a smaller granularity, on the screened changed grid pictures, i.e., grid picture A38 and grid picture A39.
In the actual calculation process, the area covered by the ship 40 (i.e., dynamic target B) appearing in the river 30 is large. Therefore, as shown in fig. 7 and fig. 8, in this embodiment, step S2 preferably further includes: performing at least one secondary gridding process, at a granularity smaller than that of the first-granularity gridding process, on the screened changed grid pictures (i.e., grid picture A38 and grid picture A39) to form sub-grids A3800 to A3833 (16 sub-grids in total) and sub-grids A3900 to A3933 (16 sub-grids in total).
In the present embodiment, the smaller gridding granularity is relative to the resolution of the first gridding process; the changed grid pictures may be gridded one or more times at this smaller granularity, and the gridding operation may be performed in the transverse direction, the longitudinal direction, or both, as shown in fig. 7.
All the sub-grids include all the geographical location information and environment description data of the changed grid picture. Specifically, sub-grids A3800 to A3833 belong to grid picture A38, and sub-grids A3900 to A3933 belong to grid picture A39. The sub-grids formed by the at least one secondary gridding process can therefore retrieve the corresponding geographical location information and environment description data from the database of the geographic information system through the established indexes; these are assigned to the sub-grids, providing reference data and a basis for later judging whether the area of the changed grid picture contains a dynamic target.
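The secondary gridding with metadata inheritance can be sketched as follows. The sub-grid keys follow the embodiment's A3800-A3833 naming, while the record fields and the 4 × 4 split are illustrative assumptions.

```python
import numpy as np

def secondary_gridding(key, record, rows=4, cols=4):
    """Split one changed grid picture into rows x cols sub-grids that inherit
    the parent's geographical location information and environment data."""
    cell = record["pixels"]
    h, w = cell.shape[0] // rows, cell.shape[1] // cols
    subs = {}
    for r in range(rows):
        for c in range(cols):
            subs[f"{key}{r}{c}"] = {
                "pixels": cell[r*h:(r+1)*h, c*w:(c+1)*w],
                "parent": key,            # index association back to the parent
                "env": record["env"],     # inherited environment description data
            }
    return subs

parent = {"pixels": np.zeros((8, 8), dtype=np.uint8),
          "env": {"season": "summer"}}
subs = secondary_gridding("A38", parent)
print(len(subs), "A3800" in subs)   # 16 True
```

Because every sub-grid carries a `parent` key, a lookup in the parent's database record stands in for the index association described above.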
Preferably, in this embodiment, step S2 further includes: performing at least one secondary gridding process on the grids corresponding to the four adjacent quadrants, in descending order of the coverage area in each quadrant, to form a plurality of sub-grids, and matching the geographical location information of the superior grid to which each sub-grid belongs against the database according to the established index association, so as to finally determine whether the target image covered by the minimum circumscribed rectangle 70 is a dynamic target.
Specifically, the minimum circumscribed rectangle 70 may be divided into quadrants I, II, III, and IV with the intersection 200, where grid picture A38 borders grid picture A39, as the origin. The final range for judging whether the target is dynamic is thereby narrowed to sub-grid A3910, sub-grid A3911, sub-grid A3833, sub-grid A3920, sub-grid A3921, and sub-grid A3833, greatly reducing the computational overhead of scan comparison.
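The quadrant ordering and the minimum circumscribed rectangle can be sketched as follows. Representing each changed sub-grid by its center point and measuring coverage by counting points per quadrant are illustrative assumptions; the source fixes only the descending-coverage order and the rectangle itself.

```python
def bounding_rectangle(points):
    """Minimum axis-aligned rectangle covering the changed sub-grid centers."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def quadrants_by_coverage(points, origin):
    """Rank quadrants I..IV around the origin by number of changed points."""
    ox, oy = origin
    counts = {"I": 0, "II": 0, "III": 0, "IV": 0}
    for x, y in points:
        if x >= ox and y >= oy:
            counts["I"] += 1
        elif x < ox and y >= oy:
            counts["II"] += 1
        elif x < ox and y < oy:
            counts["III"] += 1
        else:
            counts["IV"] += 1
    # Descending coverage order, as in the quadrant-by-quadrant processing
    return sorted(counts, key=counts.get, reverse=True)

# Hypothetical changed sub-grid centers around intersection point 200 at (0, 0).
changed = [(1, 1), (2, 1), (-1, 1), (1, -1), (3, 2)]
print(bounding_rectangle(changed))                 # (-1, -1, 3, 2)
print(quadrants_by_coverage(changed, (0, 0))[0])   # 'I'
```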
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (6)

1. A method for judging a dynamic target in a complex environment is characterized by comprising the following steps:
step S1, defining a plurality of sample pictures, carrying out first granularity gridding processing on the sample pictures to form a plurality of gridded grid pictures, adding geographical position information and environment description data into the grid pictures, storing the grid pictures into a database of a geographical information system, and establishing index association;
step S2, performing a gridding process on the to-be-determined picture based on a first granularity gridding process, and performing a first comparison with the grid pictures of the added geographic location information and the environmental description data stored in the database in step S1 to screen out a plurality of grid pictures with changes, where a calculation formula of the first comparison is shown in the following formula (1):
Δi = |Xi − X*| > ηi, i = 1, 2, …, h (1);
where ηi is an empirical parameter with a value range of 0-30%, X* is the grid picture with annotated geographical location information and environment description data stored in the database, Xi is the i-th grid picture to be judged, h is the number of grid pictures selected in the first comparison, and h is a positive integer greater than or equal to 1;
then, comparing the grid picture with the same geographical location information with the environment description data in the sample picture for the second time to determine whether the target in the area where the changed grid picture is located is a dynamic target, wherein a calculation formula of the second comparison is as shown in the following formula (2):
Δj = |Yj − Y*| > ηj, j = 1, 2, …, n (2);
where ηj is an empirical parameter with a value range of 0-30%, Y* is the grid picture with annotated geographical location information and environment description data stored in the database, Yj is the j-th changed grid picture to be judged in the second comparison, n is the number of grid pictures selected in the second comparison, and n is a positive integer greater than or equal to 1.
2. The method according to claim 1, wherein the sample pictures in step S1 are selected from pictures obtained by image capturing devices under different seasonal and/or diurnal variations in the same scene.
3. The method according to claim 1, wherein the environment description data is composed of one or more of seasonal description data, diurnal variation data, dynamic target description data, non-dynamic target description data, altitude data, dynamic target occurrence probability data, or non-dynamic target occurrence probability data, and unique environment description data and geographical location information are labeled for each grid picture.
4. The method according to claim 1, wherein the resolution of the grid picture formed in the first-granularity gridding processing in the step S1 is 0.1 m × 0.1 m to 1.0 m × 1.0 m.
5. The method according to claim 1, wherein the step S2 further comprises: carrying out at least one secondary gridding treatment on the screened grid pictures with the changed grid granularity smaller than the first granularity gridding treatment to form a plurality of sub-grids; the sub-grid includes all the geographical location information and environment description data included in the changed grid picture.
6. The method according to claim 5, wherein the step S2 further comprises: and sequentially carrying out at least one secondary gridding treatment on the grids corresponding to the four quadrants according to the sequence of the coverage areas in the adjacent four quadrants from large to small so as to form a plurality of sub-grids, and carrying out data matching on the geographical position information of the upper-level grid to which the sub-grids belong in a database according to the established index association so as to finally determine whether the target image covered by the minimum circumscribed rectangle is a dynamic target.
CN201711307523.0A 2017-12-11 2017-12-11 Method for judging dynamic target in complex environment Active CN108052585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711307523.0A CN108052585B (en) 2017-12-11 2017-12-11 Method for judging dynamic target in complex environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711307523.0A CN108052585B (en) 2017-12-11 2017-12-11 Method for judging dynamic target in complex environment

Publications (2)

Publication Number Publication Date
CN108052585A CN108052585A (en) 2018-05-18
CN108052585B true CN108052585B (en) 2021-11-23

Family

ID=62123457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711307523.0A Active CN108052585B (en) 2017-12-11 2017-12-11 Method for judging dynamic target in complex environment

Country Status (1)

Country Link
CN (1) CN108052585B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114777193B (en) * 2022-04-24 2023-05-30 浙江英集动力科技有限公司 Method and system for switching household regulation and control modes of secondary network of heating system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102456140A (en) * 2010-10-22 2012-05-16 支录奎 Human behavior invasion multi-mode identification integration threshold values determination method based on image purification
CN102609954A (en) * 2010-12-17 2012-07-25 微软公司 Validation analysis of human target
CN102867196A (en) * 2012-09-13 2013-01-09 武汉大学 Method for detecting complex sea-surface remote sensing image ships based on Gist characteristic study
US8739064B1 (en) * 2004-10-06 2014-05-27 Adobe Systems Incorporated Thumbnail scaling based on display pane size
CN104639397A (en) * 2015-01-13 2015-05-20 中国科学院计算技术研究所 Method and system for obtaining regular activity areas of user
CN104820818A (en) * 2014-12-26 2015-08-05 广东中科遥感技术有限公司 Fast recognition method for moving object

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090276644A1 (en) * 2008-05-02 2009-11-05 Goodnow Kenneth J Structure for semiconductor power distribution and control


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Moving Target Detection and Recognition in Complex Backgrounds; Xie Xiaomeng; China Doctoral Dissertations Full-text Database; 2013-05-15; I138-28 *

Also Published As

Publication number Publication date
CN108052585A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN105893972B (en) Automatic monitoring method for illegal building based on image and implementation system thereof
CN109416413B (en) Solar energy forecast
Bergamasco et al. Scalable methodology for the photovoltaic solar energy potential assessment based on available roof surface area: Further improvements by ortho-image analysis and application to Turin (Italy)
CN110516014B (en) Method for mapping urban road monitoring video to two-dimensional map
US11830167B2 (en) System and method for super-resolution image processing in remote sensing
CN113469278B (en) Strong weather target identification method based on deep convolutional neural network
CN112380367B (en) Entropy-based remote sensing image data screening method
US20220286599A1 (en) Systems and methods for adjusting a monitoring device
CN111949817A (en) Crop information display system, method, equipment and medium based on remote sensing image
CN111683221B (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN116343103A (en) Natural resource supervision method based on three-dimensional GIS scene and video fusion
Chen et al. Building change detection in very high-resolution remote sensing image based on pseudo-orthorectification
CN116168246A (en) Method, device, equipment and medium for identifying waste slag field for railway engineering
CN113552656B (en) Rainfall intensity monitoring method and system based on outdoor image multi-space-time fusion
CN108052585B (en) Method for judging dynamic target in complex environment
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN113628180A (en) Semantic segmentation network-based remote sensing building detection method and system
CN115861922B (en) Sparse smoke detection method and device, computer equipment and storage medium
CN112040265A (en) Multi-camera collaborative geographic video live broadcast stream generation method
Li et al. Low-cost 3D building modeling via image processing
CN113505139A (en) Remote sensing image automatic updating and history backtracking method and device based on single service
Guo et al. Population estimation in Singapore based on remote sensing and open data
Gueguen et al. Urbanization analysis by mutual information based change detection between SPOT 5 panchromatic images
CN116523663B (en) Method and device for determining urban building group wind disaster insurance premium based on physical model
CN112767469B (en) Highly intelligent acquisition method for urban mass buildings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20180919

Address after: 214000 -2201, 592 West Bank Road, Huishan, Wuxi, Jiangsu.

Applicant after: Jiangsu Fenghua United Technology Co., Ltd.

Address before: 214000 -2202-1, 592 West Bank Road, Huishan, Wuxi, Jiangsu.

Applicant before: Jiangsu Fenghua Valley Technology Development Co. Ltd.

GR01 Patent grant