CN116630825A - Satellite remote sensing data and monitoring video fusion method and system - Google Patents


Info

Publication number
CN116630825A
Authority
CN
China
Prior art keywords
remote sensing
monitoring
target
rendering
image
Prior art date
Legal status
Pending
Application number
CN202310681508.1A
Other languages
Chinese (zh)
Inventor
顾竹
杜腾腾
张弓
张文鹏
徐春萌
吴众望
彭欣
张艳忠
简敏
Current Assignee
Beijing Jiage Tiandi Technology Co ltd
Original Assignee
Beijing Jiage Tiandi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jiage Tiandi Technology Co ltd filed Critical Beijing Jiage Tiandi Technology Co ltd
Priority to CN202310681508.1A
Publication of CN116630825A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects


Abstract

The application relates to the technical field of satellite remote sensing, and discloses a method and system for fusing satellite remote sensing data with surveillance video. The method captures surveillance video and extracts a monitoring background image; acquires the regional remote sensing image corresponding to that background image; performs target recognition on the regional remote sensing image to determine and mark a plurality of target objects; identifies and renders those target objects with the help of the monitoring background image; and compares the monitoring background image with the regional remote sensing image to identify, mark and render a plurality of missing objects. By fusing the satellite remote sensing data with the surveillance video in this way, the processed remote sensing data stays close to the actual scene and becomes more convenient to use in practice.

Description

Satellite remote sensing data and monitoring video fusion method and system
Technical Field
The application belongs to the technical field of satellite remote sensing, and particularly relates to a method and a system for fusing satellite remote sensing data and monitoring video.
Background
Satellite remote sensing works by having a remote sensing satellite detect, from space, the electromagnetic waves reflected or emitted by objects on the earth's surface, extracting object information from those signals, recognising the objects remotely, and converting the measurements into a visible image. Put simply, the satellite photographs the ground from space, and each picture carries longitude and latitude information together with the real-time landform.
In conventional processing of satellite remote sensing data, the data cannot be fused with surveillance video; instead, target identification and rendering are performed directly from the outlines of targets in the remote sensing data. Identification errors therefore occur easily, the processed data diverges considerably from the actual situation, and the data is inconvenient to use.
Disclosure of Invention
The embodiment of the application aims to provide a method and a system for fusing satellite remote sensing data and monitoring video, which aim to solve the problems in the background technology.
In order to achieve the above object, the embodiment of the present application provides the following technical solutions:
a method for fusing satellite remote sensing data and monitoring video specifically comprises the following steps:
video monitoring shooting is carried out, a monitoring background image is obtained, and a monitoring positioning position is determined;
acquiring satellite remote sensing data, and acquiring an area remote sensing image corresponding to the monitoring background image based on the monitoring positioning position;
performing target recognition on the regional remote sensing image, and determining and marking a plurality of target objects;
identifying and rendering a plurality of target objects according to the monitoring background image;
and comparing the monitoring background image with the regional remote sensing image, identifying a plurality of missing objects, and marking and rendering them.
As a further limitation of the technical solution of the embodiment of the present application, the video monitoring shooting, obtaining a monitoring background image, and determining a monitoring positioning position specifically includes the following steps:
video monitoring shooting is carried out, and monitoring video data are obtained;
performing frame-by-frame processing on the monitoring video data to obtain a plurality of frame-by-frame monitoring images;
dynamically identifying a plurality of the frame-by-frame monitoring images, and screening monitoring background images;
and performing monitoring positioning and determining the monitoring positioning position.
As a further limitation of the technical solution of the embodiment of the present application, the acquiring satellite remote sensing data, based on the monitoring positioning position, and acquiring the area remote sensing image corresponding to the monitoring background image specifically includes the following steps:
acquiring satellite remote sensing data transmitted by a remote sensing satellite;
determining and intercepting a patch-region image from the satellite remote sensing data based on the monitoring positioning position;
combining the patch-region image with the monitoring background image, and identifying the video monitoring azimuth;
and intercepting the regional remote sensing image from the patch-region image according to the video monitoring azimuth.
As a further limitation of the technical solution of the embodiment of the present application, the target recognition is performed on the remote sensing image of the area, and the determining and marking of the plurality of target objects specifically includes the following steps:
performing target recognition on the regional remote sensing image, and determining a plurality of target objects;
determining target boundaries of a plurality of target objects;
in the regional remote sensing image, a plurality of target objects are marked according to a plurality of target boundaries.
As a further limitation of the technical solution of the embodiment of the present application, the identifying and rendering the plurality of target objects according to the monitoring background image specifically includes the following steps:
performing position recognition on a plurality of target objects according to the monitoring background image to obtain a plurality of target recognition images;
performing type recognition on a plurality of target recognition images, and determining corresponding target types;
matching target rendering substrates corresponding to a plurality of target types from a preset type rendering database;
and rendering the plurality of target identification images according to the plurality of target rendering substrates.
As a further limitation of the technical solution of the embodiment of the present application, the object comparison between the monitoring background image and the regional remote sensing image, and the identification, marking and rendering of a plurality of missing objects, specifically include the following steps:
performing object comparison between the monitoring background image and the regional remote sensing image, and recording the comparison result;
identifying a plurality of missing objects according to the comparison result;
matching object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database;
and performing supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
The system comprises a monitoring shooting processing unit, a remote sensing data processing unit, a target identification marking unit, a target recognition rendering unit and a missing-object identification rendering unit, wherein:
the monitoring shooting processing unit is used for carrying out video monitoring shooting, obtaining a monitoring background image and determining a monitoring positioning position;
the remote sensing data processing unit is used for acquiring satellite remote sensing data and acquiring an area remote sensing image corresponding to the monitoring background image based on the monitoring positioning position;
the target identification marking unit is used for carrying out target identification on the regional remote sensing image and determining and marking a plurality of target objects;
the target recognition rendering unit is used for recognizing and rendering a plurality of target objects according to the monitoring background image;
and the missing-object identification rendering unit is used for performing object comparison between the monitoring background image and the regional remote sensing image, identifying a plurality of missing objects, and marking and rendering them.
As a further limitation of the technical solution of the embodiment of the present application, the remote sensing data processing unit specifically includes:
the remote sensing acquisition module is used for acquiring satellite remote sensing data sent by a remote sensing satellite;
the corresponding interception module is used for determining and intercepting a patch-region image from the satellite remote sensing data based on the monitoring positioning position;
the azimuth identification module is used for combining the patch-region image with the monitoring background image and identifying the video monitoring azimuth;
and the region interception module is used for intercepting the regional remote sensing image from the patch-region image according to the video monitoring azimuth.
As a further limitation of the technical solution of the embodiment of the present application, the object recognition rendering unit specifically includes:
the type recognition module is used for carrying out type recognition on the plurality of target recognition images and determining corresponding target types;
the target substrate matching module is used for matching target rendering substrates corresponding to a plurality of target types from a preset type rendering database;
and the target rendering module is used for rendering the plurality of target identification images according to the plurality of target rendering substrates.
As a further limitation of the technical solution of the embodiment of the present application, the missing-object identification rendering unit specifically includes:
the object comparison module is used for performing object comparison between the monitoring background image and the regional remote sensing image and recording the comparison result;
the missing-object identification module is used for identifying a plurality of missing objects according to the comparison result;
the object substrate matching module is used for matching object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database;
and the supplementary rendering module is used for performing supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
Compared with the prior art, the application has the beneficial effects that:
according to the embodiment of the application, the monitoring background image is obtained by video monitoring shooting; acquiring a regional remote sensing image corresponding to the monitoring background image; performing target recognition on the regional remote sensing image, and determining and marking a plurality of target objects; identifying and rendering a plurality of target objects according to the monitoring background image; and comparing the monitoring background image with the area remote sensing image, identifying a plurality of lack objects, and identifying and rendering. The method can acquire a monitoring background image of video monitoring, determine an area remote sensing image corresponding to satellite remote sensing data, mark a plurality of target objects for identification and rendering, identify a plurality of deficient objects and identify and render, and effectively fuse the satellite remote sensing data with the monitoring video, so that the processed satellite remote sensing data is close to the actual situation, and the satellite remote sensing data can be conveniently used in practice.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present application.
Fig. 1 shows a flowchart of a method provided by an embodiment of the present application.
Fig. 2 shows a flowchart of video surveillance shooting processing in the method provided by the embodiment of the application.
Fig. 3 shows a flowchart of satellite remote sensing data processing in the method according to the embodiment of the application.
Fig. 4 shows a flowchart of a target identification mark process in the method according to the embodiment of the present application.
Fig. 5 shows a flowchart of object recognition rendering in the method according to the embodiment of the present application.
Fig. 6 shows a flowchart of missing-object marking and rendering in the method provided by an embodiment of the application.
Fig. 7 shows an application architecture diagram of a system provided by an embodiment of the present application.
Fig. 8 shows a block diagram of a remote sensing data processing unit in the system according to an embodiment of the present application.
Fig. 9 is a block diagram illustrating a structure of a target recognition rendering unit in a system according to an embodiment of the present application.
Fig. 10 shows a block diagram of a system provided by an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It can be understood that images in satellite remote sensing data usually carry no identification information and therefore require post-processing. In existing satellite remote sensing data processing technology, the data cannot be fused with surveillance video; instead, target identification and rendering are performed directly from the outlines of targets in the remote sensing data, so identification errors occur easily, the processed data diverges considerably from the actual situation, and the data is inconvenient to use.
To solve these problems, the embodiment of the application obtains a monitoring background image by video surveillance shooting; acquires the regional remote sensing image corresponding to the monitoring background image; performs target recognition on the regional remote sensing image to determine and mark a plurality of target objects; identifies and renders the target objects with the help of the monitoring background image; and compares the monitoring background image with the regional remote sensing image to identify, mark and render a plurality of missing objects. The satellite remote sensing data is thereby effectively fused with the surveillance video, so that the processed remote sensing data stays close to the actual scene and is convenient to use in practice.
Fig. 1 shows a flowchart of a method provided by an embodiment of the present application.
Specifically, in a preferred embodiment provided by the present application, a method for fusing satellite remote sensing data and surveillance video, the method specifically includes the following steps:
step S101, video monitoring shooting is carried out, a monitoring background image is obtained, and a monitoring positioning position is determined.
In the embodiment of the application, video surveillance shooting is performed to obtain surveillance video data. The video data is then processed frame by frame to obtain a plurality of frame-by-frame monitoring images, which are dynamically analysed and marked to identify the dynamic objects present in them (such as people, pets and vehicles). The frame-by-frame monitoring images that contain no dynamic objects are screened out and marked as monitoring background images, and the monitoring positioning position is obtained by locating the video surveillance shooting position (alternatively, positioning data can be extracted from the surveillance video data to determine the monitoring positioning position).
Specifically, fig. 2 shows a flowchart of video surveillance shooting processing in the method provided by the embodiment of the application.
In a preferred embodiment of the present application, the video monitoring shooting, obtaining a monitoring background image, and determining a monitoring positioning position specifically includes the following steps:
step S1011, video monitoring shooting is carried out, and monitoring video data is obtained.
Step S1012, performing frame-by-frame processing on the monitoring video data, and obtaining a plurality of frame-by-frame monitoring images.
Step S1013, dynamically identifying a plurality of the frame-by-frame monitoring images, and screening a monitoring background image.
Step S1014, performing monitoring positioning, and determining the monitoring positioning position.
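Steps S1011 to S1013 can be sketched in a few lines. This is an illustrative stand-in only: the frames are tiny hypothetical grayscale grids, `frame_diff` and `screen_background` are names introduced here, and a real system would decode actual video and use a proper dynamic-object detector rather than a plain frame difference.

```python
# Sketch of the background-screening step: a frame that barely differs
# from its successor is assumed to contain no dynamic objects.

def frame_diff(a, b):
    """Mean absolute per-pixel difference between two grayscale frames."""
    return sum(abs(x - y) for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (len(a) * len(a[0]))

def screen_background(frames, threshold=5.0):
    """Return the first frame whose difference from the next frame is
    below `threshold`, taken as the monitoring background image."""
    for cur, nxt in zip(frames, frames[1:]):
        if frame_diff(cur, nxt) < threshold:
            return cur
    return None

static = [[10, 10], [10, 10]]
moving = [[10, 200], [10, 10]]   # a bright "object" passes through
frames = [moving, static, static]
background = screen_background(frames)
print(background)  # -> [[10, 10], [10, 10]]
```

In practice the threshold would be tuned to sensor noise, and several consecutive low-difference frames would be required before accepting a frame as background.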
Further, the method for fusing satellite remote sensing data and monitoring video further comprises the following steps:
step S102, acquiring satellite remote sensing data, and acquiring an area remote sensing image corresponding to the monitoring background image based on the monitoring positioning position.
In the embodiment of the application, satellite remote sensing data sent by a remote sensing satellite is acquired and the monitoring positioning position is marked in it. Taking the monitoring positioning position as the origin and the maximum viewing distance of the video surveillance shooting as the radius, a patch-region image is determined in the satellite remote sensing data and cut out. The patch-region image is then combined with the monitoring background image to identify, within the surrounding space of the patch-region image, the video monitoring azimuth corresponding to the surveillance shooting, and the regional remote sensing image is finally cut out of the patch-region image according to that azimuth.
Specifically, fig. 3 shows a flowchart of satellite remote sensing data processing in the method provided by the embodiment of the application.
In a preferred embodiment of the present application, the acquiring satellite remote sensing data, based on the monitoring positioning position, acquiring the area remote sensing image corresponding to the monitoring background image specifically includes the following steps:
step S1021, satellite remote sensing data sent by a remote sensing satellite is obtained.
Step S1022, determining and intercepting a patch-region image from the satellite remote sensing data based on the monitoring positioning position.
Step S1023, combining the patch-region image with the monitoring background image, and identifying the video monitoring azimuth.
Step S1024, intercepting the regional remote sensing image from the patch-region image according to the video monitoring azimuth.
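The patch interception around the camera position can be sketched as a windowed crop. This is a simplified illustration under stated assumptions: the remote sensing data is a plain pixel grid, the camera position `(cx, cy)` and the `radius` (the maximum viewing distance in pixels) are hypothetical inputs, and the azimuth-based sector cut of S1024 is omitted for brevity.

```python
# Sketch of S1022: square crop of the remote-sensing image centred on
# the monitoring positioning position, clamped to the image bounds.

def crop_patch(image, cx, cy, radius):
    """Return the sub-image within `radius` pixels of (cx, cy)."""
    h, w = len(image), len(image[0])
    x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
    y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
    return [row[x0:x1] for row in image[y0:y1]]

# Synthetic 6x6 image whose pixel value encodes its coordinates.
image = [[x + 10 * y for x in range(6)] for y in range(6)]
patch = crop_patch(image, cx=2, cy=2, radius=1)
print(patch)  # -> [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
```

A georeferenced implementation would first convert the monitoring positioning position (latitude/longitude) to pixel coordinates using the raster's geotransform before cropping.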
Further, the method for fusing satellite remote sensing data and monitoring video further comprises the following steps:
and step S103, carrying out target recognition on the regional remote sensing image, and determining and marking a plurality of target objects.
In the embodiment of the application, target recognition is performed on the regional remote sensing image to determine a plurality of target objects in it; the target boundaries of those target objects are determined at the same time, and the target objects are marked in the regional remote sensing image according to their target boundaries.
Specifically, fig. 4 shows a flowchart of a target identification mark processing in the method provided by the embodiment of the application.
In a preferred embodiment of the present application, the target recognition of the remote sensing image of the area, and determining and marking a plurality of target objects specifically include the following steps:
step S1031, performing target recognition on the area remote sensing image, and determining a plurality of target objects.
Step S1032, determining target boundaries of a plurality of the target objects.
In step S1033, marking a plurality of target objects according to a plurality of target boundaries in the area remote sensing image.
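As a minimal sketch of steps S1031 to S1033, "target recognition" is stood in for here by connected-component labelling of a hypothetical binary mask, and each "target boundary" by an axis-aligned bounding box; the patent does not specify the recognition algorithm, and a real system would likely use a trained detector or segmentation model.

```python
# Sketch: find targets as 4-connected regions of 1s and mark each with
# a bounding box (x0, y0, x1, y1), inclusive coordinates.

def find_targets(mask):
    """Return bounding boxes of 4-connected foreground regions."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, x0, y0, x1, y1 = [(x, y)], x, y, x, y
                seen[y][x] = True
                while stack:                      # flood fill the region
                    px, py = stack.pop()
                    x0, x1 = min(x0, px), max(x1, px)
                    y0, y1 = min(y0, py), max(y1, py)
                    for nx, ny in ((px+1, py), (px-1, py), (px, py+1), (px, py-1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((x0, y0, x1, y1))
    return boxes

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]
print(find_targets(mask))  # -> [(0, 0, 1, 1), (3, 2, 3, 2)]
```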
Further, the method for fusing satellite remote sensing data and monitoring video further comprises the following steps:
and step S104, identifying and rendering a plurality of target objects according to the monitoring background image.
In the embodiment of the application, the monitoring background image is analysed to determine the positions of the target objects within it, and target recognition images of those objects are extracted from the background image. Type recognition is then performed on the target recognition images to determine the target type of each one (target types may include residential buildings, office buildings, roads, playgrounds, ponds and the like), the target rendering substrates corresponding to the different target types are matched from a preset type rendering database, and the target recognition images are rendered in the regional remote sensing image according to those substrates. The target type of each object can then be read intuitively from the regional remote sensing image, which facilitates the practical use of the satellite remote sensing data.
Specifically, fig. 5 shows a flowchart of object recognition rendering in the method provided by the embodiment of the present application.
In a preferred embodiment of the present application, the identifying and rendering the plurality of target objects according to the monitoring background image specifically includes the following steps:
step S1041, performing position recognition on the plurality of target objects according to the monitoring background image, to obtain a plurality of target recognition images.
Step S1042, performing type recognition on the plurality of target recognition images to determine a corresponding target type.
Step S1043, matching target rendering substrates corresponding to the multiple target types from the preset type rendering database.
Step S1044, rendering the plurality of target recognition images according to the plurality of target rendering substrates.
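The substrate-matching of steps S1042 to S1044 amounts to a lookup from target type to rendering asset. The database contents, texture file names and the `render_targets` helper below are hypothetical stand-ins; the patent only specifies that a preset type rendering database is consulted.

```python
# Sketch: pair each recognised target type with its rendering substrate
# from a preset type rendering database, with a fallback for unknown types.

RENDER_DB = {                       # hypothetical preset database
    "residential": "texture_residential.png",
    "road":        "texture_road.png",
    "pond":        "texture_pond.png",
}

def render_targets(target_types, db=RENDER_DB, fallback="texture_default.png"):
    """Return (type, substrate) pairs ready for rendering."""
    return [(t, db.get(t, fallback)) for t in target_types]

plan = render_targets(["road", "pond", "playground"])
print(plan)
# -> [('road', 'texture_road.png'), ('pond', 'texture_pond.png'),
#     ('playground', 'texture_default.png')]
```

In a deployed system the `target_types` list would come from a classifier run over the target recognition images (step S1042); here the labels are supplied directly.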
Further, the method for fusing satellite remote sensing data and monitoring video further comprises the following steps:
and step 105, comparing the monitoring background image with the area remote sensing image, identifying a plurality of deficient objects and carrying out identification and rendering.
In the embodiment of the application, object comparison is performed between the monitoring background image and the regional remote sensing image and the comparison result is recorded. The objects that appear in the monitoring background image but not in the regional remote sensing image are determined from the comparison result and all marked as missing objects. Type recognition is performed on the missing objects to determine their object types, the object rendering substrates of those types are matched from a preset type rendering database, and the missing objects are supplementarily rendered in the regional remote sensing image according to those substrates, so that objects the satellite cannot capture clearly can still be observed intuitively in the regional remote sensing image.
Specifically, fig. 6 shows a flowchart of missing-object marking and rendering in the method provided by the embodiment of the present application.
In a preferred embodiment of the present application, the object comparison between the monitoring background image and the regional remote sensing image, and the identification, marking and rendering of a plurality of missing objects, specifically include the following steps:
Step S1051, performing object comparison between the monitoring background image and the regional remote sensing image, and recording the comparison result.
Step S1052, identifying a plurality of missing objects according to the comparison result.
Step S1053, matching object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database.
Step S1054, performing supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
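The missing-object comparison and supplementary rendering steps above can be sketched as a set difference followed by a substrate lookup. The object labels, database contents and helper names are hypothetical; the patent does not define how the per-image object lists are produced.

```python
# Sketch: objects seen in the surveillance background image but absent
# from the regional remote-sensing image are marked as missing and
# paired with rendering substrates for supplementary rendering.

RENDER_DB = {"tree": "texture_tree.png", "shed": "texture_shed.png"}

def find_missing(background_objects, remote_objects):
    """Objects in the surveillance background but not in the remote image."""
    return sorted(set(background_objects) - set(remote_objects))

def supplementary_render(missing, db=RENDER_DB):
    """Pair each missing object with its rendering substrate."""
    return [(obj, db.get(obj, "texture_default.png")) for obj in missing]

missing = find_missing({"tree", "shed", "road"}, {"road"})
print(missing)                        # -> ['shed', 'tree']
print(supplementary_render(missing))
# -> [('shed', 'texture_shed.png'), ('tree', 'texture_tree.png')]
```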
Further, fig. 7 shows an application architecture diagram of the system provided by the embodiment of the present application.
In another preferred embodiment of the present application, a system for integrating satellite remote sensing data with surveillance video includes:
the monitoring shooting processing unit 101 is configured to perform video monitoring shooting, obtain a monitoring background image, and determine a monitoring positioning position.
In the embodiment of the present application, the monitoring shooting processing unit 101 obtains the surveillance video data produced by video surveillance shooting and processes it frame by frame to obtain a plurality of frame-by-frame monitoring images. It dynamically analyses and marks those images to identify the dynamic objects present in them (such as people, pets and vehicles), screens out the frame-by-frame monitoring images that contain no dynamic objects and marks them as monitoring background images, and obtains the monitoring positioning position by locating the video surveillance shooting position (alternatively, by extracting positioning data from the surveillance video data).
The remote sensing data processing unit 102 is configured to obtain satellite remote sensing data, and obtain an area remote sensing image corresponding to the monitoring background image based on the monitoring positioning position.
In the embodiment of the application, the remote sensing data processing unit 102 obtains satellite remote sensing data sent by a remote sensing satellite and marks the monitoring positioning position in it. Taking the monitoring positioning position as the origin and the maximum viewing distance of the video surveillance shooting as the radius, it determines and cuts out a patch-region image from the satellite remote sensing data, combines it with the monitoring background image to identify, within the surrounding space of the patch-region image, the video monitoring azimuth corresponding to the surveillance shooting, and finally cuts the regional remote sensing image out of the patch-region image according to that azimuth.
Specifically, fig. 8 shows a block diagram of a remote sensing data processing unit 102 in the system according to an embodiment of the present application.
In a preferred embodiment of the present application, the remote sensing data processing unit 102 specifically includes:
The remote sensing acquisition module 1021 is configured to acquire the satellite remote sensing data transmitted by a remote sensing satellite.
The corresponding interception module 1022 is configured to determine and intercept the corresponding patch image from the satellite remote sensing data based on the monitoring positioning position.
The azimuth identification module 1023 is configured to combine the corresponding patch image with the monitoring background image to identify the video monitoring azimuth.
The region interception module 1024 is configured to intercept the regional remote sensing image from the corresponding patch image according to the video monitoring azimuth.
Further, the system for fusing satellite remote sensing data and monitoring video further comprises:
and the target identification marking unit 103 is used for carrying out target identification on the regional remote sensing image and determining and marking a plurality of target objects.
In the embodiment of the present application, the target recognition marking unit 103 performs target recognition on the regional remote sensing image to determine a plurality of target objects and, at the same time, their target boundaries, and then marks the plurality of target objects in the regional remote sensing image along those boundaries.
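The boundary marking step might be sketched as follows, drawing a one-pixel outline along each target boundary (the axis-aligned boxes and the outline style are assumptions; the patent only requires that targets be marked along their boundaries):

```python
import numpy as np

def mark_targets(image, boundaries, mark_value=255):
    """Mark target objects in a grayscale regional remote sensing
    image by tracing their boundaries.

    boundaries: list of (top, left, bottom, right) boxes, a simplified
    stand-in for arbitrary target boundaries.
    """
    marked = image.copy()
    for top, left, bottom, right in boundaries:
        marked[top, left:right + 1] = mark_value     # top edge
        marked[bottom, left:right + 1] = mark_value  # bottom edge
        marked[top:bottom + 1, left] = mark_value    # left edge
        marked[top:bottom + 1, right] = mark_value   # right edge
    return marked

regional = np.zeros((10, 10), dtype=np.uint8)
marked = mark_targets(regional, [(2, 2, 5, 5)])
```

The outline touches only the boundary pixels, leaving both the target interior and the input image unchanged.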
And the target recognition rendering unit 104 is used for recognizing and rendering a plurality of target objects according to the monitoring background image.
In the embodiment of the present application, the target recognition rendering unit 104 performs recognition analysis on the monitoring background image to determine the positions of the plurality of target objects within it and obtains a target recognition image for each target object from the monitoring background image. It then performs type recognition on these target recognition images to determine the target type corresponding to each (target types may include residential buildings, office buildings, roads, playgrounds, ponds, etc.), matches the target rendering substrates corresponding to the different target types from a preset type rendering database, and renders the target recognition images in the regional remote sensing image according to those substrates. The target type of each target object can thus be read intuitively from the regional remote sensing image, which facilitates practical use of the satellite remote sensing data.
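A sketch of the type-to-substrate matching and rendering, with a dictionary standing in for the preset type rendering database and a solid colour fill standing in for a rendering substrate (both are assumptions; the patent leaves the database and substrate formats open):

```python
import numpy as np

# Hypothetical preset type rendering database: target type -> RGB substrate.
TYPE_RENDERING_DB = {
    "residential building": (200, 120, 80),
    "office building": (120, 120, 200),
    "road": (90, 90, 90),
    "playground": (80, 180, 80),
    "pond": (70, 130, 200),
}

def render_targets(image, typed_targets):
    """Render each recognised target into an RGB regional remote
    sensing image using the substrate matched for its target type.

    typed_targets: list of (target_type, (top, left, bottom, right)).
    Targets whose type has no database entry are left unrendered.
    """
    rendered = image.copy()
    for target_type, (top, left, bottom, right) in typed_targets:
        substrate = TYPE_RENDERING_DB.get(target_type)
        if substrate is not None:
            rendered[top:bottom + 1, left:right + 1] = substrate
    return rendered

canvas = np.zeros((8, 8, 3), dtype=np.uint8)
rendered = render_targets(canvas, [("pond", (1, 1, 3, 3)),
                                   ("statue", (5, 5, 6, 6))])
```

The pond box is filled with its matched substrate colour, while the unknown "statue" type has no database entry and stays unrendered.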
Specifically, fig. 9 shows a block diagram of the structure of the target recognition rendering unit 104 in the system according to an embodiment of the present application.
In a preferred embodiment of the present application, the target recognition rendering unit 104 specifically includes:
the type identifying module 1041 is configured to perform type identification on the plurality of object identifying images, and determine a corresponding object type.
The target substrate matching module 1042 is configured to match target rendering substrates corresponding to a plurality of target types from a preset type rendering database.
The target rendering module 1043 is configured to render a plurality of target recognition images according to a plurality of target rendering substrates.
Further, the system for fusing satellite remote sensing data and monitoring video further comprises:
The missing-object identification rendering unit 105 is configured to perform object comparison between the monitoring background image and the regional remote sensing image, identify a plurality of missing objects, and mark and render them.
In the embodiment of the present application, the missing-object identification rendering unit 105 compares the monitoring background image with the regional remote sensing image and, according to the comparison result, marks every target object that appears in the monitoring background image but not in the regional remote sensing image as a missing object. It then performs type recognition on the missing objects to determine their object types, matches the object rendering substrates for those types from the preset type rendering database, and supplementarily renders the missing objects into the regional remote sensing image according to those substrates, so that objects the satellite cannot photograph clearly can still be viewed intuitively in the regional remote sensing image.
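The comparison step reduces to a set difference over the recognised objects. In this sketch objects are matched by label only; a real system would match by geo-position, and all names here are illustrative:

```python
def find_missing_objects(background_objects, remote_objects):
    """Return the objects recognised in the monitoring background
    image that do not appear in the regional remote sensing image.

    Both arguments are lists of (label, box) pairs; matching by label
    alone is a deliberate simplification.
    """
    remote_labels = {label for label, _ in remote_objects}
    return [(label, box) for label, box in background_objects
            if label not in remote_labels]

background_objects = [("fence", (0, 0, 2, 9)), ("pond", (4, 4, 6, 6))]
remote_objects = [("pond", (4, 4, 6, 6))]
missing = find_missing_objects(background_objects, remote_objects)
```

Each missing object found this way would then be type-recognised and supplementarily rendered into the regional remote sensing image using its matched substrate.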
Specifically, fig. 10 shows a block diagram of the structure of the missing-object identification rendering unit 105 in the system according to an embodiment of the present application.
In a preferred embodiment provided by the present application, the missing-object identification rendering unit 105 specifically includes:
The object comparison module 1051 is configured to perform object comparison between the monitoring background image and the regional remote sensing image and record the comparison result.
The missing-object identification module 1052 is configured to identify a plurality of missing objects according to the comparison result.
The object substrate matching module 1053 is configured to match the object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database.
The supplementary rendering module 1054 is configured to perform supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
It should be understood that, although the steps in the flowcharts of the embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time and may be performed at different times; nor must these sub-steps or stages be performed sequentially, as they may be performed in turn, or in alternation with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in relatively specific detail, but they should not therefore be construed as limiting the scope of the application. It should be noted that those skilled in the art may make several variations and modifications without departing from the concept of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.
The foregoing description of the preferred embodiments of the application is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the application.

Claims (10)

1. A method for fusing satellite remote sensing data and a monitoring video, characterized by comprising the following steps:
video monitoring shooting is carried out, a monitoring background image is obtained, and a monitoring positioning position is determined;
acquiring satellite remote sensing data, and acquiring a regional remote sensing image corresponding to the monitoring background image based on the monitoring positioning position;
performing target recognition on the regional remote sensing image, and determining and marking a plurality of target objects;
identifying and rendering a plurality of target objects according to the monitoring background image;
and performing object comparison on the monitoring background image and the regional remote sensing image, identifying a plurality of missing objects, and marking and rendering them.
2. The method for fusing satellite remote sensing data and a monitoring video according to claim 1, wherein performing video monitoring shooting, obtaining a monitoring background image, and determining a monitoring positioning position specifically comprises the following steps:
video monitoring shooting is carried out, and monitoring video data are obtained;
performing frame-by-frame processing on the monitoring video data to obtain a plurality of frame-by-frame monitoring images;
dynamically identifying a plurality of the frame-by-frame monitoring images, and screening monitoring background images;
and performing monitoring positioning and determining the monitoring positioning position.
3. The method for fusing satellite remote sensing data and a monitoring video according to claim 1, wherein acquiring satellite remote sensing data and acquiring the regional remote sensing image corresponding to the monitoring background image based on the monitoring positioning position specifically comprises the following steps:
acquiring satellite remote sensing data transmitted by a remote sensing satellite;
determining and intercepting a corresponding patch image from the satellite remote sensing data based on the monitoring positioning position;
combining the corresponding patch image with the monitoring background image, and identifying the video monitoring azimuth;
and intercepting the regional remote sensing image from the corresponding patch image according to the video monitoring azimuth.
4. The method for fusing satellite remote sensing data and a monitoring video according to claim 1, wherein performing target recognition on the regional remote sensing image and determining and marking a plurality of target objects specifically comprises the following steps:
performing target recognition on the regional remote sensing image, and determining a plurality of target objects;
determining target boundaries of a plurality of target objects;
in the regional remote sensing image, a plurality of target objects are marked according to a plurality of target boundaries.
5. The method for fusing satellite remote sensing data and a monitoring video according to claim 1, wherein identifying and rendering the plurality of target objects according to the monitoring background image specifically comprises the following steps:
performing position recognition on a plurality of target objects according to the monitoring background image to obtain a plurality of target recognition images;
performing type recognition on a plurality of target recognition images, and determining corresponding target types;
matching target rendering substrates corresponding to a plurality of target types from a preset type rendering database;
and rendering the plurality of target identification images according to the plurality of target rendering substrates.
6. The method for fusing satellite remote sensing data and a monitoring video according to claim 1, wherein performing object comparison on the monitoring background image and the regional remote sensing image, identifying a plurality of missing objects, and marking and rendering them specifically comprises the following steps:
performing object comparison on the monitoring background image and the regional remote sensing image, and recording a comparison result;
identifying a plurality of missing objects according to the comparison result;
matching the object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database;
and performing supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
7. A system for fusing satellite remote sensing data and a monitoring video, characterized by comprising a monitoring shooting processing unit, a remote sensing data processing unit, a target identification marking unit, a target recognition rendering unit and a missing-object identification rendering unit, wherein:
the monitoring shooting processing unit is used for carrying out video monitoring shooting, obtaining a monitoring background image and determining a monitoring positioning position;
the remote sensing data processing unit is used for acquiring satellite remote sensing data and acquiring an area remote sensing image corresponding to the monitoring background image based on the monitoring positioning position;
the target identification marking unit is used for carrying out target identification on the regional remote sensing image and determining and marking a plurality of target objects;
the target recognition rendering unit is used for recognizing and rendering a plurality of target objects according to the monitoring background image;
and the missing-object identification rendering unit is used for performing object comparison between the monitoring background image and the regional remote sensing image, identifying a plurality of missing objects, and marking and rendering them.
8. The system for fusing satellite remote sensing data and a monitoring video according to claim 7, wherein the remote sensing data processing unit specifically comprises:
the remote sensing acquisition module is used for acquiring satellite remote sensing data sent by a remote sensing satellite;
the corresponding interception module is used for determining and intercepting a corresponding patch image from the satellite remote sensing data based on the monitoring positioning position;
the azimuth identification module is used for combining the corresponding patch image with the monitoring background image and identifying the video monitoring azimuth;
and the region interception module is used for intercepting the regional remote sensing image from the corresponding patch image according to the video monitoring azimuth.
9. The system for fusing satellite remote sensing data and a monitoring video according to claim 7, wherein the target recognition rendering unit specifically comprises:
the type recognition module is used for carrying out type recognition on the plurality of target recognition images and determining corresponding target types;
the target substrate matching module is used for matching target rendering substrates corresponding to a plurality of target types from a preset type rendering database;
and the target rendering module is used for rendering the plurality of target identification images according to the plurality of target rendering substrates.
10. The system for fusing satellite remote sensing data and a monitoring video according to claim 7, wherein the missing-object identification rendering unit specifically comprises:
the object comparison module is used for performing object comparison between the monitoring background image and the regional remote sensing image and recording a comparison result;
the missing-object identification module is used for identifying a plurality of missing objects according to the comparison result;
the object substrate matching module is used for matching the object rendering substrates corresponding to the plurality of missing objects from a preset type rendering database;
and the supplementary rendering module is used for performing supplementary rendering in the regional remote sensing image according to the plurality of object rendering substrates.
CN202310681508.1A 2023-06-09 2023-06-09 Satellite remote sensing data and monitoring video fusion method and system Pending CN116630825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310681508.1A CN116630825A (en) 2023-06-09 2023-06-09 Satellite remote sensing data and monitoring video fusion method and system


Publications (1)

Publication Number Publication Date
CN116630825A true CN116630825A (en) 2023-08-22

Family

ID=87616996





Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137103A1 (en) * 2017-01-24 2018-08-02 深圳企管加企业服务有限公司 Basin pollution detection method and system based on multi-source remote sensing data
CN107491721A (en) * 2017-05-05 2017-12-19 北京佳格天地科技有限公司 Classification of remote-sensing images device and method
CN110765944A (en) * 2019-10-23 2020-02-07 长光禹辰信息技术与装备(青岛)有限公司 Target identification method, device, equipment and medium based on multi-source remote sensing image
CN110806198A (en) * 2019-10-25 2020-02-18 北京前沿探索深空科技有限公司 Target positioning method and device based on remote sensing image, controller and medium
CN111539481A (en) * 2020-04-28 2020-08-14 北京市商汤科技开发有限公司 Image annotation method and device, electronic equipment and storage medium
CN111683221A (en) * 2020-05-21 2020-09-18 武汉大学 Real-time video monitoring method and system for natural resources embedded with vector red line data
CN113989656A (en) * 2021-09-28 2022-01-28 中国人民解放军战略支援部队航天工程大学 Event interpretation method and device for remote sensing video, computer equipment and storage medium
CN113947714A (en) * 2021-09-29 2022-01-18 广州市赋安电子科技有限公司 Multi-mode collaborative optimization method and system for video monitoring and remote sensing
CN113989662A (en) * 2021-10-18 2022-01-28 中国电子科技集团公司第五十二研究所 Remote sensing image fine-grained target identification method based on self-supervision mechanism
CN114187179A (en) * 2021-12-14 2022-03-15 广州赋安数字科技有限公司 Remote sensing image simulation generation method and system based on video monitoring
CN115062823A (en) * 2022-05-26 2022-09-16 中国科学院地理科学与资源研究所 Carbon dioxide emission prediction method and device based on land utilization
CN115471761A (en) * 2022-10-31 2022-12-13 宁波拾烨智能科技有限公司 Coastal beach change monitoring method integrating multi-source remote sensing data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANG Jinzhong; WANG Guizhou; HE Guojin; WANG Huihui; YIN Ranyu; JIANG Wei; ZHANG Zhaoming: "Fast detection of moving vehicle targets in remote sensing video satellite imagery" (遥感视频卫星运动车辆目标快速检测), Journal of Remote Sensing (遥感学报), no. 09, 16 September 2020 (2020-09-16), pages 44-52 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116886879A (en) * 2023-09-08 2023-10-13 北京国星创图科技有限公司 Satellite-ground integrated digital twin system and method
CN116886879B (en) * 2023-09-08 2023-11-03 北京国星创图科技有限公司 Satellite-ground integrated digital twin system and method
CN117114513A (en) * 2023-10-24 2023-11-24 北京英视睿达科技股份有限公司 Image-based crop pesticide and fertilizer use evaluation method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination