CN114187179B - Remote sensing image simulation generation method and system based on video monitoring - Google Patents


Info

Publication number
CN114187179B
Authority
CN
China
Prior art keywords
image
monitoring camera
target
remote sensing
coordinates
Prior art date
Legal status
Active
Application number
CN202111525188.8A
Other languages
Chinese (zh)
Other versions
CN114187179A (en)
Inventor
李晓威
陈升敬
李学恒
叶成瑶
Current Assignee
Guangzhou Fu'an Digital Technology Co ltd
Original Assignee
Guangzhou Fu'an Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Fu'an Digital Technology Co ltd
Priority to CN202111525188.8A
Publication of CN114187179A
Application granted
Publication of CN114187179B
Status: Active

Classifications

    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045: Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/048: Neural networks; activation functions
    • G06N3/08: Neural networks; learning methods
    • G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10016: Image acquisition modality: video; image sequence
    • G06T2207/10032: Image acquisition modality: satellite or aerial image; remote sensing
    • G06T2207/30181: Subject of image: Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to a remote sensing image simulation generation method and system based on video monitoring. The method comprises: establishing a mapping relation between the picture coordinates of a monitoring camera and remote sensing longitude and latitude coordinates according to preset monitoring camera parameters; extracting a background image of the remote sensing image; performing target detection on each frame of image from the monitoring camera to obtain detection target information; performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image; and superimposing the background image and the target image to obtain a real-time dynamic remote sensing image. The invention exploits the real-time nature of video monitoring to simulate and generate a remote sensing image, thereby realizing real-time remote sensing dynamics.

Description

Remote sensing image simulation generation method and system based on video monitoring
Technical Field
The invention relates to the technical field of remote sensing information processing and application, in particular to a remote sensing image simulation generation method and system based on video monitoring.
Background
Satellite remote sensing provides indispensable technical support for social public services. In terms of application data volume, land, meteorology, ocean, environment and agriculture are the five fields in China with the largest scale of satellite remote sensing data application. In the field of satellite remote sensing, China will mainly develop three series in the future: land observation, ocean observation and atmosphere observation.
The information provided by satellite remote sensing observation is macroscopic and comprehensive, and long-term continuous observation can form time-series information. However, the period required to obtain a remote sensing image of a given area is long, so real-time continuous observation is difficult to achieve.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a remote sensing image simulation generation method and system based on video monitoring.
In order to achieve the purpose, the invention provides the following scheme:
a remote sensing image simulation generation method based on video monitoring comprises the following steps:
establishing a mapping relation between picture coordinates and remote sensing longitude and latitude coordinates of a monitoring camera according to preset monitoring camera parameters;
extracting a background image of the remote sensing image;
carrying out target detection on each frame of image in the monitoring camera to obtain detection target information;
carrying out analog image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
and carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image.
Preferably, the establishing of the mapping relationship between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the preset monitoring camera parameters includes:
acquiring parameters of the preset monitoring camera; the preset monitoring camera parameters comprise height of a monitoring camera from a horizontal plane, an included angle between a central line of the monitoring camera and a vertical line, an included angle between projection of the central line of the monitoring camera on the horizontal plane and a geographical true north direction, a horizontal field angle of the monitoring camera, a vertical field angle of the monitoring camera and image resolution parameter information of the monitoring camera;
calibrating the vertical projection position of the monitoring camera on the horizontal plane;
calculating a straight-line horizontal distance and a longitude horizontal distance between the vertical projection position and any position on a horizontal plane in a visual range of the monitoring camera based on the haversine formula;
calculating an included angle between a connecting line of the vertical projection position and the arbitrary position and the geographical true north direction according to the straight line horizontal distance and the longitude horizontal distance, and recording the included angle as a first included angle;
calculating an included angle between a connecting line of the position of the monitoring camera and the arbitrary position and a vertical line according to the linear horizontal distance and the height of the monitoring camera from the horizontal plane, and recording the included angle as a second included angle;
calculating a conversion relation of the picture coordinates of the monitoring camera at any position according to the included angle between the central line of the monitoring camera and a vertical line, the included angle between the projection of the central line of the monitoring camera on a horizontal plane and the true north direction of geography, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
establishing a monitoring camera picture coordinate set according to a plurality of picture coordinates in a monitoring camera picture;
obtaining a transformation matrix according to the monitoring camera picture coordinate set and the conversion relation;
and determining the mapping relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the transformation matrix.
Preferably, the extracting a background image of the remote sensing image comprises:
extracting identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
removing the target in the remote sensing image according to the identification frame information set;
and inputting the remote sensing image without the target into a trained generative adversarial network model to obtain the repaired background image.
Preferably, the performing target detection on each frame of image in the monitoring camera to obtain detection target information includes:
performing target detection on the monitoring camera video by using a trained target detection model to obtain the detection target information; the detection target information includes: the identification frame of the target, the type of the target, and the visual characteristics of the target.
Preferably, the performing simulated image processing according to the detection target information and the mapping relationship to obtain a target image of a remote sensing image includes:
generating a target image in a remote sensing view according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
converting the coordinates of the identification frame of the target in the monitoring camera picture into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
stretching the target image in the remote sensing view by utilizing a bilinear interpolation algorithm to obtain a stretched image with the same size as the identification frame of the target;
and superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
A remote sensing image simulation generation system based on video monitoring comprises:
the mapping relation determining module is used for establishing a mapping relation between a picture coordinate of the monitoring camera and a remote sensing longitude and latitude coordinate according to a preset monitoring camera parameter;
the extraction module is used for extracting a background image of the remote sensing image;
the target detection module is used for carrying out target detection on each frame of image in the monitoring camera to obtain detection target information;
the image processing module is used for carrying out simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
and the superposition module is used for carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image.
Preferably, the mapping relationship determining module specifically includes:
the information acquisition unit is used for acquiring the parameters of the preset monitoring camera; the preset monitoring camera parameters comprise the height of the monitoring camera from the horizontal plane, the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
the calibration unit is used for calibrating the vertical projection position of the monitoring camera on the horizontal plane;
the first calculation unit is used for calculating a straight-line horizontal distance and a longitude horizontal distance between the vertical projection position and any position on a horizontal plane in a visual range of the monitoring camera based on the haversine formula;
the second calculation unit is used for calculating an included angle between a connecting line of the vertical projection position and the arbitrary position and the geographical true north direction according to the straight line horizontal distance and the longitude horizontal distance, and recording the included angle as a first included angle;
the third calculating unit is used for calculating an included angle between a connecting line of the position of the monitoring camera and the arbitrary position and a vertical line according to the linear horizontal distance and the height of the monitoring camera from the horizontal plane, and recording the included angle as a second included angle;
the fourth calculation unit is used for calculating the conversion relation of the picture coordinates of the monitoring camera for any position according to the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
the set building unit is used for building a monitoring camera picture coordinate set according to a plurality of picture coordinates in a monitoring camera picture;
the matrix determining unit is used for obtaining a transformation matrix according to the monitoring camera picture coordinate set and the conversion relation;
and the mapping relation determining unit is used for determining the mapping relation between the picture coordinate of the monitoring camera and the remote sensing longitude and latitude coordinate according to the transformation matrix.
Preferably, the extraction module specifically includes:
the information set extraction unit is used for extracting the identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
the target removing unit is used for removing a target in the remote sensing image according to the identification frame information set;
and the repairing unit is used for inputting the remote sensing image without the target into the trained generative adversarial network model to obtain the repaired background image.
Preferably, the target detection module specifically includes:
the target detection unit is used for performing target detection on the monitoring camera video by utilizing a trained target detection model to obtain the detection target information; the detection target information includes: the identification frame of the target, the type of the target, and the visual characteristics of the target.
Preferably, the image processing module specifically includes:
the image generation unit is used for generating a target image in a remote sensing view according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
the conversion unit is used for converting the coordinates of the identification frame of the target in the picture of the monitoring camera into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
the stretching unit is used for stretching the target image in the remote sensing view by utilizing a bilinear interpolation algorithm to obtain a stretched image with the same size as the identification frame of the target;
and the superposition unit is used for superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a remote sensing image simulation generation method and a system based on video monitoring, wherein the method comprises the steps of establishing a mapping relation between picture coordinates and remote sensing longitude and latitude coordinates of a monitoring camera according to preset monitoring camera parameters; extracting a background image of the remote sensing image; carrying out target detection on each frame of image in the monitoring camera to obtain detection target information; performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image; and carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image. The invention utilizes the real-time property of monitoring to simulate and generate a remote sensing image, thereby realizing real-time remote sensing dynamic.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow diagram of a simulation generation method in an embodiment provided by the present invention;
FIG. 2 is a flow chart of inversion from video surveillance to remote sensing in an embodiment provided by the present invention;
fig. 3 is a first schematic diagram of a method for calculating a mapping relationship between picture coordinates and longitude and latitude coordinates of a monitoring camera in an embodiment of the present invention;
fig. 4 is a second schematic diagram of a method for calculating a mapping relationship between picture coordinates and longitude and latitude coordinates of a monitoring camera in an embodiment of the present invention;
fig. 5 is a schematic diagram of a position of an object to be detected in a screen in an embodiment of the present invention;
fig. 6 is a block diagram of an optimization system in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different elements and not for describing a particular sequential order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, the inclusion of a list of steps, processes, methods, etc. is not limited to only those steps recited, but may alternatively include additional steps not recited, or may alternatively include additional steps inherent to such processes, methods, articles, or devices.
The invention aims to provide a remote sensing image simulation generation method and system based on video monitoring, which can simulate and generate a remote sensing image by utilizing the real-time performance of monitoring and realize real-time remote sensing dynamic.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 and fig. 2 are, respectively, a flowchart of the simulation generation method and a flowchart of the inversion from video monitoring to remote sensing in an embodiment provided by the present invention. As shown in fig. 1 and fig. 2, the present invention provides a remote sensing image simulation generation method based on video monitoring, including:
Step 100: establishing a mapping relation between the picture coordinates of a monitoring camera and remote sensing longitude and latitude coordinates according to preset monitoring camera parameters;
Step 200: extracting a background image of the remote sensing image;
Step 300: performing target detection on each frame of image from the monitoring camera to obtain detection target information;
Step 400: performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
Step 500: superimposing the background image and the target image to obtain a real-time dynamic remote sensing image.
Preferably, the establishing of the mapping relationship between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the preset monitoring camera parameters includes:
acquiring parameters of the preset monitoring camera; the preset monitoring camera parameters comprise the height of the monitoring camera from the horizontal plane, the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
calibrating the vertical projection position of the monitoring camera on the horizontal plane;
calculating a straight-line horizontal distance and a longitude horizontal distance between the vertical projection position and any position on a horizontal plane in a visual range of the monitoring camera based on the haversine formula;
calculating an included angle between a connecting line of the vertical projection position and the arbitrary position and the geographical true north direction according to the straight line horizontal distance and the longitude horizontal distance, and recording the included angle as a first included angle;
calculating an included angle between a connecting line of the position of the monitoring camera and the arbitrary position and a vertical line according to the linear horizontal distance and the height of the monitoring camera from the horizontal plane, and recording the included angle as a second included angle;
calculating a conversion relation of the picture coordinates of the monitoring camera for any position according to the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
establishing a monitoring camera picture coordinate set according to a plurality of picture coordinates in a monitoring camera picture;
obtaining a transformation matrix according to the picture coordinate set of the monitoring camera and the conversion relation;
and determining the mapping relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the transformation matrix.
The first step in this embodiment is to establish a mapping relationship between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates, so as to align the monitoring image and the remote sensing image.
Optionally, the process of establishing the mapping relationship between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates is specifically as follows:

Step 1.1: parameter acquisition preparation.

As shown in figs. 3 to 5, N in fig. 4 indicates the geographical true north direction. The height of the monitoring camera above the horizontal plane is measured as H; the included angle between the center line of the monitoring camera and the vertical line is measured as θ; the included angle between the projection of the center line of the monitoring camera on the horizontal plane and the geographical true north direction is measured as β; the horizontal field angle of the monitoring camera is ω_x and its vertical field angle is ω_y; the image resolution parameter information of the monitoring camera is acquired as X × Y (X is the pixel width of the image, Y the pixel height).

Suppose that the coordinates of the center of the monitoring camera picture are (0, 0), and that the vertical projection position of the monitoring camera position O on the horizontal plane is O', with longitude and latitude (λ_0, ψ_0). For any position A_i on the horizontal plane within the visual range of the monitoring camera, its longitude and latitude coordinates (λ_i, ψ_i) can be converted into the monitoring camera picture coordinates (x_i, y_i) in the following way.

Step 1.2: according to the haversine formula, calculate the straight-line horizontal distance d_i (in m) between the vertical projection position O' of the monitoring camera position on the horizontal plane and any position A_i on the horizontal plane within the visible range of the monitoring camera, as well as the longitude horizontal distance s_i (in m) between O' and A_i:

a = sin²((ψ_i − ψ_0)/2) + cos ψ_0 · cos ψ_i · sin²((λ_i − λ_0)/2)

d_i = 2r · arcsin(√a)

b = cos ψ_0 · cos ψ_i · sin²((λ_i − λ_0)/2)

s_i = 2r · arcsin(√b)

wherein a and b are intermediate variable values, O'(λ_0, ψ_0) is the vertical projection position of the monitoring camera position on the horizontal plane, A_i(λ_i, ψ_i) is any position on the horizontal plane within the visual range of the monitoring camera, and r is the radius of the earth, in m.

Step 1.3: from step 1.2, calculate the included angle β_i between the line connecting O' and A_i and the geographical true north direction:

β_i = arcsin(s_i / d_i)

Step 1.4: from step 1.2, calculate the included angle θ_i between the line connecting O and A_i and the vertical line:

θ_i = arctan(d_i / H)

wherein H is the height of the monitoring camera above the horizontal plane, in m.

Step 1.5: calculate the picture coordinates (x_i, y_i) of A_i in the monitoring camera picture:

x_i = (X/2) · tan(β_i − β) / tan(ω_x/2)

y_i = (Y/2) · tan(θ_i − θ) / tan(ω_y/2)

wherein X is the pixel width of the image and Y the pixel height, obtained from the X × Y image resolution of the monitoring camera; θ is the included angle between the center line of the monitoring camera and the vertical line, β is the included angle between the projection of the center line on the horizontal plane and the geographical true north direction, ω_x is the horizontal field angle of the monitoring camera and ω_y its vertical field angle.
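For illustration only, a minimal Python sketch of steps 1.2 to 1.5 follows. It assumes the formulas as reconstructed above (in particular the perspective projection of step 1.5, which is not uniquely fixed by the text); the function and parameter names are illustrative and not part of the claimed method, and the angles θ, β, ω_x and ω_y are expected in radians.

```python
# Sketch of steps 1.2-1.5 under the assumptions stated above.
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius r, in m (an assumed value)

def lonlat_to_picture(lam_i, psi_i, lam_0, psi_0,
                      H, theta, beta, omega_x, omega_y, X, Y):
    """Convert the longitude/latitude of A_i (in degrees) into the picture
    coordinates (x_i, y_i), with the picture center at (0, 0)."""
    lam_i, psi_i = math.radians(lam_i), math.radians(psi_i)
    lam_0, psi_0 = math.radians(lam_0), math.radians(psi_0)
    # step 1.2: haversine distances d_i (total) and s_i (east-west component)
    a = (math.sin((psi_i - psi_0) / 2) ** 2
         + math.cos(psi_0) * math.cos(psi_i) * math.sin((lam_i - lam_0) / 2) ** 2)
    d_i = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    b = math.cos(psi_0) * math.cos(psi_i) * math.sin((lam_i - lam_0) / 2) ** 2
    s_i = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(b))
    # step 1.3: angle between O'A_i and geographical true north
    beta_i = math.asin(min(1.0, s_i / d_i)) if d_i > 0 else 0.0
    # step 1.4: angle between OA_i and the vertical line
    theta_i = math.atan2(d_i, H)
    # step 1.5: assumed perspective projection onto the X x Y picture
    x_i = (X / 2) * math.tan(beta_i - beta) / math.tan(omega_x / 2)
    y_i = (Y / 2) * math.tan(theta_i - theta) / math.tan(omega_y / 2)
    return x_i, y_i
```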
Step 1.6: randomly selecting a plurality of picture coordinates in a picture of a monitoring camera to obtain a picture coordinate set of the monitoring camera, selecting three groups of coordinates from the picture coordinate set of the monitoring camera each time, converting the selected picture coordinates of the monitoring camera into longitude and latitude coordinates through the mapping relation between the picture coordinates of the monitoring camera and the longitude and latitude coordinates, and calculating according to the picture coordinates of the monitoring camera and the converted longitude and latitude coordinates to obtain a transformation matrix H, wherein the specific process comprises the following steps:
Obtaining a plurality of transformation matrices H_i through inverse matrix calculation:

H_i = [[lon_i1, lon_i2, lon_i3], [lat_i1, lat_i2, lat_i3], [1, 1, 1]] · [[x_i1, x_i2, x_i3], [y_i1, y_i2, y_i3], [1, 1, 1]]⁻¹

wherein (x_i1, y_i1), (x_i2, y_i2), (x_i3, y_i3) are three groups of coordinates in the monitoring camera picture, and (lon_i1, lat_i1), (lon_i2, lat_i2), (lon_i3, lat_i3) are the longitude and latitude coordinates converted from the three groups of monitoring camera picture coordinates.

Taking the average value of the plurality of transformation matrices H_i:

H = (1/n) · Σ_{i=1}^{n} H_i

The conversion relation between the monitoring camera picture coordinates and the remote sensing longitude and latitude coordinates is obtained as:

(lon, lat, 1)ᵀ = H · (x, y, 1)ᵀ

wherein (x, y) are picture coordinates of the monitoring camera and (lon, lat) are remote sensing longitude and latitude coordinates.
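For illustration, a minimal NumPy sketch of step 1.6 follows; it assumes the affine form of the transformation reconstructed above, and the function names are illustrative.

```python
# Sketch of step 1.6 under the assumed affine model.
import numpy as np

def estimate_transform(picture_groups, lonlat_groups):
    """Each element of the two arguments is a group of three (x, y) picture
    coordinates and the three matching (lon, lat) coordinates; returns the
    averaged 3x3 transformation matrix H."""
    matrices = []
    for pts, geos in zip(picture_groups, lonlat_groups):
        P = np.array([[p[0] for p in pts],
                      [p[1] for p in pts],
                      [1.0, 1.0, 1.0]])
        G = np.array([[g[0] for g in geos],
                      [g[1] for g in geos],
                      [1.0, 1.0, 1.0]])
        matrices.append(G @ np.linalg.inv(P))  # H_i = G * P^-1
    return np.mean(matrices, axis=0)           # H = (1/n) * sum of H_i

def picture_to_lonlat(H, x, y):
    """Apply (lon, lat, 1)^T = H * (x, y, 1)^T."""
    lon, lat, w = H @ np.array([x, y, 1.0])
    return lon / w, lat / w  # w stays 1 for non-degenerate point groups
```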
Preferably, the extracting a background image of the remote sensing image comprises:
extracting identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
removing the target in the remote sensing image according to the identification frame information set;
and inputting the remote sensing image without the target into a trained generation confrontation network model to obtain the repaired background image.
The second step of this embodiment is to extract the background image of the remote sensing image.
Specifically, obtaining a background image of the remote sensing image, removing all targets on the remote sensing image, and performing image inpainting on the removed area, specifically, the process is as follows:
step 2.1: and extracting the identification frames of all targets in the remote sensing image.
Target detection is carried out on the remote sensing image by adopting a YOLO v3 target detection model pre-trained on the COCO public data set; alternatively, it can be replaced with other target detection models such as SSD or Faster R-CNN.
Through the image target detection algorithm, the identification frame information set Ω = {⟨lon_i, lat_i, a_i, b_i⟩} of all targets in the remote sensing image can be obtained, wherein (lon_i, lat_i) are the upper-left longitude and latitude coordinates of the identification frame of the i-th target and (a_i, b_i) are the lower-right longitude and latitude coordinates of the identification frame of the i-th target.
Step 2.2: and removing the target from the remote sensing image by taking the size of the identification frame as a standard.
According to the identification frames obtained in step 2.1, the image blocks in the areas where the identification frames are located are cut out by matting, removing each target together with its local background, so as to obtain a remote sensing image from which all targets have been removed.
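A minimal NumPy sketch of this removal step is given below, under the assumption that the identification frames have already been converted from longitude and latitude into pixel rows and columns of the remote sensing image; the function name and box format are illustrative.

```python
# Sketch of step 2.2: blank out every identification frame and record a mask.
import numpy as np

def remove_targets(image, frames):
    """image: H x W x C array; frames: iterable of (row0, col0, row1, col1).
    Returns the image with every identification frame blanked out, plus a
    mask of the removed regions (1 = area to be repaired)."""
    cleaned = image.copy()
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for r0, c0, r1, c1 in frames:
        cleaned[r0:r1, c0:c1] = 0  # target and local background cut out together
        mask[r0:r1, c0:c1] = 1
    return cleaned, mask
```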
Step 2.3: and (3) filling the area removed in the step (2.2) by utilizing a pyramid context encoder network (PEN-Net) to realize the restoration of the image.
The pyramid context encoder network includes a pyramid context encoder, a multi-scale decoder, and a discriminator. The pyramid context encoder and the multi-scale decoder form a generator, and the Patch-GAN is used as a discriminator.
In model training, the generation model and the discrimination model are trained alternately with stochastic gradient descent. (1) The discrimination model is fixed and not trained, and the generation model is trained by stochastic gradient descent; when, at the end of any two adjacent cycles, the loss value of the generation model is smaller than a threshold s_1, training of the generation model stops. (2) The generation model is fixed and not trained, and the discrimination model is trained by stochastic gradient descent; when, at the end of any two adjacent cycles, the loss value of the discrimination model is smaller than a threshold s_2, training of the discrimination model stops. Steps (1) and (2) are repeated until the total loss value of the model is smaller than a threshold s_3.
The remote sensing image with the targets removed in step 2.2 is taken as input, and the generative adversarial network model of step 2.3 is trained to obtain a repaired remote sensing image, i.e. a remote sensing image containing only the background after all targets are removed.
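For illustration, a schematic PyTorch sketch of this alternating scheme follows. The callables g_step and d_step (one stochastic-gradient-descent training cycle of the generator or discriminator, returning its loss value) and the reading of the "total loss" as the sum of both losses are assumptions of the sketch, not fixed by the embodiment.

```python
# Sketch of the alternating training of step 2.3 under the stated assumptions.
import torch.nn as nn

def train_phase(step_fn, threshold):
    """Run training cycles until the loss is below `threshold` at the end of
    two adjacent cycles, then return the last loss value."""
    prev = float("inf")
    while True:
        loss = step_fn()
        if prev < threshold and loss < threshold:
            return loss
        prev = loss

def train_alternating(G: nn.Module, D: nn.Module, g_step, d_step, s1, s2, s3):
    while True:
        # (1) fix the discrimination model, train the generation model
        for p in D.parameters():
            p.requires_grad_(False)
        g_loss = train_phase(g_step, s1)
        for p in D.parameters():
            p.requires_grad_(True)
        # (2) fix the generation model, train the discrimination model
        for p in G.parameters():
            p.requires_grad_(False)
        d_loss = train_phase(d_step, s2)
        for p in G.parameters():
            p.requires_grad_(True)
        if g_loss + d_loss < s3:  # total loss below s3: stop (assumed reading)
            return
```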
Preferably, the performing target detection on each frame of image in the monitoring camera to obtain detection target information includes:
performing target detection on the monitoring camera video by using a trained target detection model to obtain the detection target information; the detection target information includes: the identification frame of the target, the type of the target, and the visual characteristics of the target.
Preferably, the performing simulated image processing according to the detection target information and the mapping relationship to obtain a target image of a remote sensing image includes:
generating a target image in a remote sensing view according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
converting the coordinates of the identification frame of the target in the monitoring camera picture into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
stretching the target image in the remote sensing view by utilizing a bilinear interpolation algorithm to obtain a stretched image with the same size as the identification frame of the target;
and superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
The third step of this embodiment is to perform simulation generation of remote sensing images for each frame of image in the monitoring video.
Further, the target image in the monitoring picture is passed through a deep network to obtain the corresponding target image in the remote sensing field of view, realizing the simulated generation of the remote sensing image. The specific process is as follows:
Step 3.1: detect the targets in the monitoring image and acquire the position and size, the type and the visual characteristics of each target.
All targets in the monitored image are marked with a picture marking tool, and the marked monitoring images are input into the YOLO v3 algorithm for training. During training, the loss function calculates a loss value between the predicted result and the real result, the weights are updated layer by layer in reverse through gradient descent, and the iterations are repeated with continual weight updates until the YOLO model is trained and saved.
Target detection is then performed on the monitoring image through the trained YOLO model to obtain the position and size, the type and the visual characteristics of each target, wherein the position and size of a target constitute its identification frame.
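A minimal container for this detection target information might look as follows; the field names are illustrative and not prescribed by the embodiment.

```python
# Illustrative structure for one detected target.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class DetectionTarget:
    frame: Tuple[float, float, float, float]  # identification frame (x0, y0, x1, y1)
    target_type: str                          # e.g. "vehicle"
    visual_features: Any                      # feature vector from the detector
```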
Step 3.2: and generating a target image in the remote sensing visual field through the U-net neural network according to the visual characteristics and the type of the target.
A U-net convolutional neural network is established, comprising 28 convolutional layers, 4 pooling layers and 4 upsampling layers. The first 27 convolutional layers perform feature extraction with 3×3 convolution kernels, each followed by a ReLU activation function for nonlinear processing; the last convolutional layer performs feature extraction with a 1×1 convolution kernel and classifies through a sigmoid activation function. The pooling layers use max pooling with a 3×3 window, and the upsampling layers use a 2×2 window. A cross-entropy cost function is used to compute the error between the output classification and the real label, and the network parameters are updated by gradient descent with the AdamOptimizer algorithm. To avoid multiple neurons in the network learning the same content and to improve the training effect, all network parameters are randomly initialised before training from a normal distribution with a standard deviation of 0.1. To avoid over-fitting, a BN layer is added after each convolutional layer, and Dropout layers are added at positions with many intermediate network parameters.
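For illustration, a PyTorch sketch of the building blocks described above follows (3×3 convolutions each followed by BN and ReLU, 3×3 max pooling, 2×2 upsampling, a 1×1 convolution with sigmoid, and normal initialisation with standard deviation 0.1). The layer counts and channel widths are illustrative and do not reproduce the exact 28/4/4 configuration of the embodiment.

```python
# Sketch of the U-net building blocks under the stated assumptions.
import torch.nn as nn

def conv_block(in_ch, out_ch, dropout=0.0):
    """3x3 convolutions, each followed by a BN layer and ReLU; an optional
    Dropout layer for parameter-heavy positions."""
    layers = [
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    ]
    if dropout > 0:
        layers.append(nn.Dropout2d(dropout))
    return nn.Sequential(*layers)

pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)  # 3x3 pooling window
upsample = nn.Upsample(scale_factor=2)                   # 2x2 upsampling window
head = nn.Sequential(nn.Conv2d(64, 1, kernel_size=1),    # final 1x1 convolution
                     nn.Sigmoid())                       # sigmoid classification

def init_weights(module):
    """Random initialisation from a normal distribution, std 0.1."""
    if isinstance(module, nn.Conv2d):
        nn.init.normal_(module.weight, mean=0.0, std=0.1)
```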
Target identification detection is carried out on the remote sensing image, and the image is cut into blocks, each sized to the identification frame of a target, to obtain n remote sensing image samples; 70% of the remote sensing image samples are selected as training data and 30% as verification data.
During training, the error value of the U-net convolutional neural network model is recorded using multiple cross-validation; when the validation-set error no longer decreases, training stops and the current weights are saved as the parameters of the trained U-net convolutional neural network model. The model is then performance-tested with a test set; if the performance gap with the training set is greater than a threshold, the learning rate is adjusted until model parameters meeting the generalization requirement are found.
According to the target information obtained in step 3.1, the visual characteristics and the type of the target are taken as model input, and the target image in the remote sensing field of view is generated through the trained U-net convolutional neural network; the generated target image does not contain a background.
Step 3.3: according to the position and size of the target extracted from the monitoring image, calculate the position and size of the target in the remote sensing field of view through the mapping relation between the monitoring camera picture coordinates and the remote sensing longitude and latitude coordinates established in step 1, obtaining a rectangular area.
According to the image target detection in step 3.1, the identification frame of the target can be obtained; since the upper-left and lower-right coordinate points of the identification frame are known, the position of the target is obtained. Through the mapping relation between the monitoring camera picture coordinates and the remote sensing longitude and latitude coordinates established in step 1, the coordinates of the identification frame in the monitoring camera picture can be converted into remote sensing longitude and latitude coordinates, so that the rectangular area corresponding to the identification frame of the monitoring image is obtained in the remote sensing image.
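Reusing picture_to_lonlat from the sketch of step 1.6, this corner conversion can be illustrated as follows; the function name is illustrative.

```python
# Usage sketch of step 3.3: map both corners of an identification frame.
def frame_to_lonlat_rect(H, frame):
    """frame: ((x0, y0), (x1, y1)) upper-left / lower-right picture
    coordinates of the identification frame; returns the corresponding
    rectangular lon/lat area in the remote sensing image."""
    (x0, y0), (x1, y1) = frame
    return picture_to_lonlat(H, x0, y0), picture_to_lonlat(H, x1, y1)
```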
Step 3.4: the image generated in step 3.2 is stretched and superimposed on the rectangular area in step 3.3.
The image generated in step 3.2 is taken as the input image, and the size of the target image is that of the rectangular area in step 3.3.

The size of the input image is acquired as a × b and the size of the target image as m × n. The scaling factors of the input image relative to the target image in the row direction and the column direction are respectively

y_f = a / m,  x_f = b / n

wherein y_f denotes the scaling factor in the row direction and x_f the scaling factor in the column direction. Rounding the scaling factors y_f and x_f down gives the rounded scaling factors in the row and column directions,

y'_f = ⌊y_f⌋,  x'_f = ⌊x_f⌋

and let Δy_f = y_f − y'_f and Δx_f = x_f − x'_f.
The specific calculation process for applying the bilinear interpolation algorithm to image scaling is as follows:
(1) Calculate the pixel value at the pixel coordinates (i, j) of the target image, with 1 ≤ i ≤ m and 1 ≤ j ≤ n, by back-projecting the current pixel position: pixel (i, j) in the target image corresponds to the position (i · y_f, j · x_f) in the input image. This position may not coincide with an actual pixel of the input image, i.e. it is a virtual pixel position in the input image.

The four pixel points near (i · y_f, j · x_f) in the input image are found, at positions (x_1, y_1), (x_2, y_2), (x_3, y_3) and (x_4, y_4), and their pixel values A, B, C and D are taken as the reference pixel values for bilinear interpolation at the pixel coordinates (i, j) of the target image. Based on the basic bilinear interpolation algorithm, the pixel value M of the virtual pixel in the input image corresponding to the position (i, j) in the target image is obtained as:

M = (1 − Δx_f)(1 − Δy_f)·A + Δx_f(1 − Δy_f)·C + Δy_f(1 − Δx_f)·B + Δx_f·Δy_f·D

As can be seen from the basic formula of the bilinear interpolation algorithm above, 8 multiplications are required to calculate the pixel value of one interpolation point; to simplify it, the formula is factorized.

The bilinear interpolation algorithm is divided into an interpolation calculation in the vertical direction and an interpolation calculation in the horizontal direction. In the vertical direction, an interpolation is calculated with A and C:

K_1 = (1 − Δx_f)·A + Δx_f·C

and another with B and D:

K_2 = (1 − Δx_f)·B + Δx_f·D

wherein 2 ≤ i + 1 ≤ m.

Finally, the horizontal interpolation calculation is carried out with the results of the vertical interpolation, which yields the pixel value of the virtual pixel in the input image corresponding to the position of the pixel coordinates (i, j) in the target image:

M = (1 − Δy_f)·K_1 + Δy_f·K_2
Letting i = 1, 2, …, m and j = 1, 2, …, n traverse the m × n pixel coordinates of the target image and performing the bilinear interpolation calculation in sequence yields the pixel values of the virtual pixels in the input image corresponding to every position of the target image, from the pixel coordinates (1, 1) to the pixel coordinates (m, n), finally completing the scaling processing of the whole target image.
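For illustration, a NumPy sketch of this bilinear scaling follows. It implements the basic interpolation formula for M given above in the single-channel case, computing the fractional offsets per pixel rather than from the global factors Δy_f and Δx_f; all names are illustrative.

```python
# Sketch of the bilinear scaling of step 3.4.
import numpy as np

def bilinear_resize(src, m, n):
    """Scale a 2-D array `src` of shape (a, b) to shape (m, n)."""
    a, b = src.shape
    y_f, x_f = a / m, b / n  # row / column scaling factors
    out = np.empty((m, n), dtype=float)
    for i in range(m):
        for j in range(n):
            # back-project target pixel (i, j) to its (possibly virtual)
            # position in the input image
            sy, sx = i * y_f, j * x_f
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, a - 1), min(x0 + 1, b - 1)
            dy, dx = sy - y0, sx - x0
            A, C = src[y0, x0], src[y0, x1]  # upper neighbours
            B, D = src[y1, x0], src[y1, x1]  # lower neighbours
            # M = (1-dx)(1-dy)A + dx(1-dy)C + dy(1-dx)B + dx*dy*D
            out[i, j] = ((1 - dx) * (1 - dy) * A + dx * (1 - dy) * C
                         + dy * (1 - dx) * B + dx * dy * D)
    return out
```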
Specifically, according to the above image scaling method, the image generated in step 3.2 is stretch-transformed until its size matches the identification frame of the corresponding target detected in step 3.1, and it is superimposed on the rectangular area from step 3.3.
As an optional implementation mode, by continuously repeating step 3, the simulation generation of the remote sensing image is realized and a real-time remote sensing dynamic view is obtained.
Fig. 6 is a module connection diagram of an optimization system in an embodiment provided by the present invention, and as shown in fig. 6, the present invention further provides a remote sensing image simulation generation system based on video monitoring, including:
the mapping relation determining module is used for establishing a mapping relation between the picture coordinate of the monitoring camera and the remote sensing longitude and latitude coordinate according to the preset monitoring camera parameter;
the extraction module is used for extracting a background image of the remote sensing image;
the target detection module is used for carrying out target detection on each frame of image in the monitoring camera to obtain detection target information;
the image processing module is used for performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
and the superposition module is used for carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image.
Preferably, the mapping relationship determining module specifically includes:
the information acquisition unit is used for acquiring the parameters of the preset monitoring camera; the preset monitoring camera parameters comprise the height of the monitoring camera from the horizontal plane, the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
the calibration unit is used for calibrating the vertical projection position of the monitoring camera on the horizontal plane;
the first calculation unit is used for calculating a straight-line horizontal distance and a longitude horizontal distance between the vertical projection position and any position on a horizontal plane in a visual range of the monitoring camera based on the haversine formula;
the second calculation unit is used for calculating an included angle between a connecting line of the vertical projection position and the arbitrary position and the geographical true north direction according to the straight line horizontal distance and the longitude horizontal distance, and recording the included angle as a first included angle;
the third calculation unit is used for calculating an included angle between a connecting line of the position of the monitoring camera and the arbitrary position and a vertical line according to the straight-line horizontal distance and the height of the monitoring camera from the horizontal plane, and recording the included angle as a second included angle;
the fourth calculation unit is used for calculating the conversion relation of the picture coordinates of the monitoring camera for any position according to the included angle between the central line of the monitoring camera and the vertical line, the included angle between the projection of the central line of the monitoring camera on the horizontal plane and the geographical true north direction, the horizontal field angle of the monitoring camera, the vertical field angle of the monitoring camera and the image resolution parameter information of the monitoring camera;
the set building unit is used for building a monitoring camera picture coordinate set according to a plurality of picture coordinates in a monitoring camera picture;
the matrix determining unit is used for obtaining a transformation matrix according to the monitoring camera picture coordinate set and the conversion relation;
and the mapping relation determining unit is used for determining the mapping relation between the picture coordinate of the monitoring camera and the remote sensing longitude and latitude coordinate according to the transformation matrix.
Preferably, the extraction module specifically includes:
the information set extraction unit is used for extracting the identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
the target removing unit is used for removing a target in the remote sensing image according to the identification frame information set;
and the repairing unit is used for inputting the remote sensing image without the target into the trained generative adversarial network model to obtain the repaired background image.
Preferably, the object detection module specifically includes:
the target detection unit is used for performing target detection on the monitoring camera video by utilizing a trained target detection model to obtain the detection target information; the detection target information includes: the identification frame of the target, the type of the target, and the visual characteristics of the target.
Preferably, the image processing module specifically includes:
the image generation unit is used for generating a target image in a remote sensing view field according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
the conversion unit is used for converting the coordinates of the identification frame of the target in the picture of the monitoring camera into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
the stretching unit is used for stretching the target image in the remote sensing view by utilizing a bilinear interpolation algorithm to obtain a stretched image with the same size as the identification frame of the target;
and the superposition unit is used for superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
The invention has the following beneficial effects:
(1) The invention utilizes the remote sensing image simulation generation method based on video monitoring to simulate and generate a remote sensing image from the real-time data information of video monitoring, thereby obtaining real-time remote sensing dynamics and further improving the timeliness with which the remote sensing dynamic view is obtained.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (4)

1. A remote sensing image simulation generation method based on video monitoring is characterized by comprising the following steps:
establishing a mapping relation between picture coordinates and remote sensing longitude and latitude coordinates of a monitoring camera according to preset monitoring camera parameters;
extracting a background image of the remote sensing image;
carrying out target detection on each frame of image in the monitoring camera to obtain detection target information;
performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image;
the method for establishing the mapping relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the preset monitoring camera parameters comprises the following steps:
step 1.1: parameter acquisition preparation: measuring the height h of the monitoring camera from the horizontal plane, the included angle θ between the central line of the monitoring camera and the vertical line, the included angle β between the projection of the central line of the monitoring camera on the horizontal plane and the geographic true north direction, the horizontal field angle ω_x of the monitoring camera and the vertical field angle ω_y of the monitoring camera, and acquiring the image resolution parameter information of the monitoring camera as X × Y, wherein X is the pixel width of the image and Y is the pixel height;
suppose that: the coordinate of the center of the picture of the monitoring camera is (0, 0), the vertical projection position of the position O of the monitoring camera on the horizontal plane is O', and the longitude and latitude of the vertical projection position are (λ_0, ψ_0); for any position A_i on the horizontal plane within the visual range of the monitoring camera, its longitude and latitude coordinates (λ_i, ψ_i) can be converted into the picture coordinates (x_i, y_i) of the monitoring camera in the following way;
step 1.2: according to the Haversine formula, calculating the straight horizontal distance d_i (in m) between the vertical projection position O' of the monitoring camera on the horizontal plane and any position A_i on the horizontal plane within the visible range of the monitoring camera, and the longitude horizontal distance s_i (in m) between O' and A_i:

$$a = \sin^2\!\left(\frac{\psi_i - \psi_0}{2}\right) + \cos\psi_0\,\cos\psi_i\,\sin^2\!\left(\frac{\lambda_i - \lambda_0}{2}\right), \qquad d_i = 2r\,\arcsin\sqrt{a}$$

$$b = \cos\psi_0\,\cos\psi_i\,\sin^2\!\left(\frac{\lambda_i - \lambda_0}{2}\right), \qquad s_i = 2r\,\arcsin\sqrt{b}$$

wherein a and b are intermediate variable values, and r is the radius of the earth in m;
step 1.3: calculating the angle β_i between the line connecting O' and A_i and the geographic true north direction:

$$\beta_i = \arcsin\!\left(\frac{s_i}{d_i}\right)$$
step 1.4: calculating the angle θ_i between the line connecting O and A_i and the vertical line:

$$\theta_i = \arctan\!\left(\frac{d_i}{h}\right)$$

wherein h is the height of the monitoring camera from the horizontal plane, in m;
step 1.5: calculating the picture coordinates (x_i, y_i) of A_i in the monitoring camera:

$$x_i = \frac{X}{2}\cdot\frac{\tan(\beta_i - \beta)}{\tan(\omega_x/2)}, \qquad y_i = \frac{Y}{2}\cdot\frac{\tan(\theta_i - \theta)}{\tan(\omega_y/2)}$$

wherein X is the pixel width of the image, Y is the pixel height, and the parameter values of X and Y can be obtained from the X × Y image resolution of the monitoring camera;
step 1.6: randomly selecting a plurality of picture coordinates in a picture of a monitoring camera to obtain a picture coordinate set of the monitoring camera, selecting three groups of coordinates from the picture coordinate set of the monitoring camera each time, converting the selected picture coordinates of the monitoring camera into longitude and latitude coordinates through the mapping relation between the picture coordinates of the monitoring camera and the longitude and latitude coordinates, and calculating according to the picture coordinates of the monitoring camera and the converted longitude and latitude coordinates to obtain a transformation matrix H, wherein the specific process comprises the following steps:
obtaining n transformation matrices H_j through inverse matrix calculation:

$$H_j = \begin{pmatrix} lon_{j1} & lon_{j2} & lon_{j3} \\ lat_{j1} & lat_{j2} & lat_{j3} \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} x_{j1} & x_{j2} & x_{j3} \\ y_{j1} & y_{j2} & y_{j3} \\ 1 & 1 & 1 \end{pmatrix}^{-1}$$

wherein (x_{j1}, y_{j1}), (x_{j2}, y_{j2}), (x_{j3}, y_{j3}) are three groups of coordinates in the picture of the monitoring camera, and (lon_{j1}, lat_{j1}), (lon_{j2}, lat_{j2}), (lon_{j3}, lat_{j3}) are the longitude and latitude coordinates converted from those three groups of monitoring camera picture coordinates; j is a positive integer ranging from 1 to n;

taking the average value of the n transformation matrices H_j:

$$H = \frac{1}{n}\sum_{j=1}^{n} H_j$$

the conversion relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates is thus obtained as:

$$\begin{pmatrix} lon \\ lat \\ 1 \end{pmatrix} = H\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

wherein (x, y) are picture coordinates of the monitoring camera, and (lon, lat) are remote sensing longitude and latitude coordinates;
the target detection is performed on each frame of image in the monitoring camera to obtain detection target information, and the method comprises the following steps:
carrying out target detection on the monitoring camera by using a trained target detection model to obtain detection target information; the detection target information includes: the identification frame of the target, the type of the target and the visual characteristics of the target;
the step of performing simulated image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image comprises the following steps:
generating a target image in a remote sensing view according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
converting the coordinates of the identification frame of the target in the monitoring camera picture into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
stretching the target image in the remote sensing view field by using a bilinear interpolation algorithm to obtain a stretched image with the same size as the recognition frame of the target;
and superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
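Purely as an illustration of steps 1.1 to 1.6 of claim 1 above, the mapping geometry can be sketched in Python with NumPy. Where the original formula figures are not fully recoverable — notably the quadrant handling of β_i and the pinhole projection of step 1.5 — the code embodies stated assumptions rather than the claimed method, and all function names are invented for the sketch.

```python
import numpy as np

R_EARTH = 6_371_000.0  # mean earth radius r in metres

def haversine(lon0, lat0, lon1, lat1):
    """Great-circle distance per step 1.2; all angles in radians."""
    a = (np.sin((lat1 - lat0) / 2) ** 2
         + np.cos(lat0) * np.cos(lat1) * np.sin((lon1 - lon0) / 2) ** 2)
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def pixel_of(lon_i, lat_i, lon0, lat0, h, theta, beta, wx, wy, X, Y):
    """Steps 1.2-1.5: longitude/latitude of A_i -> camera picture coordinates.
    theta, beta: camera angle from vertical and azimuth from true north;
    wx, wy: horizontal/vertical fields of view; X, Y: image resolution."""
    d_i = haversine(lon0, lat0, lon_i, lat_i)        # O'-A_i horizontal distance
    s_i = haversine(lon0, lat0, lon_i, lat0)         # east-west component
    # Assumed quadrant handling: azimuth of the O'-A_i line from true north.
    beta_i = np.arctan2(np.sign(lon_i - lon0) * s_i,
                        np.sign(lat_i - lat0) * np.sqrt(max(d_i**2 - s_i**2, 0.0)))
    theta_i = np.arctan2(d_i, h)                     # step 1.4: angle from vertical
    # Step 1.5 under an assumed pinhole model: angular offsets -> pixel offsets.
    x_i = X / 2 * np.tan(beta_i - beta) / np.tan(wx / 2)
    y_i = Y / 2 * np.tan(theta_i - theta) / np.tan(wy / 2)
    return x_i, y_i

def affine_from_triplet(pix, geo):
    """Step 1.6: solve H_j from three picture/longitude-latitude pairs."""
    P = np.vstack([np.asarray(pix).T, np.ones(3)])  # columns (x, y, 1)
    G = np.vstack([np.asarray(geo).T, np.ones(3)])  # columns (lon, lat, 1)
    return G @ np.linalg.inv(P)                     # H_j such that G = H_j P
```

Averaging the matrices returned by affine_from_triplet over n random triplets then yields the H used in the conversion relation above.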
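Likewise, the target detection step admits a brief hedged sketch. Any trained target detection model returning identification frames and target types would serve; a pretrained torchvision Faster R-CNN is assumed here purely to make the example concrete, and detect_targets is an invented name.

```python
import torch
import torchvision

# Stand-in for the "trained target detection model" of the claim.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_targets(frame_bgr, score_thresh=0.5):
    """Return (identification frame, type label) pairs for one camera frame.
    frame_bgr is assumed to be an H x W x 3 NumPy array as read by OpenCV."""
    rgb = torch.from_numpy(frame_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255
    with torch.no_grad():
        pred = model([rgb])[0]
    keep = pred["scores"] > score_thresh  # drop low-confidence detections
    return list(zip(pred["boxes"][keep].tolist(), pred["labels"][keep].tolist()))
```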
2. The remote sensing image simulation generation method based on video monitoring as claimed in claim 1, wherein the extracting of the background image of the remote sensing image comprises:
extracting identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
removing the target in the remote sensing image according to the identification frame information set;
and inputting the remote sensing image without the target into a trained generative adversarial network model to obtain the repaired background image.
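As a hedged sketch of the background extraction of claim 2: the identification frames may come from any trained image target detection algorithm, and OpenCV's classical Telea inpainting stands in for the trained generative adversarial network model purely so the example runs end to end; extract_background is an invented name.

```python
import cv2
import numpy as np

def extract_background(rs_image, boxes):
    """Remove detected targets from a remote sensing image and repair the holes.

    rs_image -- H x W x 3 remote sensing image
    boxes    -- identification-frame set [(x0, y0, x1, y1), ...] from any
                trained image target detection model
    """
    mask = np.zeros(rs_image.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 255  # mark target pixels for removal
    # Stand-in for the trained generative adversarial repair network:
    # classical inpainting fills the masked regions from their surroundings.
    return cv2.inpaint(rs_image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```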
3. A remote sensing image simulation generation system based on video monitoring is characterized by comprising:
the mapping relation determining module is used for establishing a mapping relation between a picture coordinate of the monitoring camera and a remote sensing longitude and latitude coordinate according to a preset monitoring camera parameter;
the extraction module is used for extracting a background image of the remote sensing image;
the target detection module is used for carrying out target detection on each frame of image in the monitoring camera to obtain detection target information;
the image processing module is used for carrying out analog image processing according to the detection target information and the mapping relation to obtain a target image of the remote sensing image;
the superposition module is used for carrying out image superposition on the background image and the target image to obtain a real-time remote sensing dynamic image;
the method for establishing the mapping relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates according to the preset monitoring camera parameters comprises the following steps:
step 1.1: parameter acquisition preparation: measuring the height h of the monitoring camera from the horizontal plane, the included angle θ between the central line of the monitoring camera and the vertical line, the included angle β between the projection of the central line of the monitoring camera on the horizontal plane and the geographic true north direction, the horizontal field angle ω_x of the monitoring camera and the vertical field angle ω_y of the monitoring camera, and acquiring the image resolution parameter information of the monitoring camera as X × Y, wherein X is the pixel width of the image and Y is the pixel height;
suppose that: the coordinate of the center of the picture of the monitoring camera is (0, 0), the vertical projection position of the position O of the monitoring camera on the horizontal plane is O', and the longitude and latitude of the vertical projection position are (λ_0, ψ_0); for any position A_i on the horizontal plane within the visual range of the monitoring camera, its longitude and latitude coordinates (λ_i, ψ_i) can be converted into the picture coordinates (x_i, y_i) of the monitoring camera in the following way;
step 1.2: according to the Haversine formula, calculating the straight horizontal distance d_i (in m) between the vertical projection position O' of the monitoring camera on the horizontal plane and any position A_i on the horizontal plane within the visible range of the monitoring camera, and the longitude horizontal distance s_i (in m) between O' and A_i:

$$a = \sin^2\!\left(\frac{\psi_i - \psi_0}{2}\right) + \cos\psi_0\,\cos\psi_i\,\sin^2\!\left(\frac{\lambda_i - \lambda_0}{2}\right), \qquad d_i = 2r\,\arcsin\sqrt{a}$$

$$b = \cos\psi_0\,\cos\psi_i\,\sin^2\!\left(\frac{\lambda_i - \lambda_0}{2}\right), \qquad s_i = 2r\,\arcsin\sqrt{b}$$

wherein a and b are intermediate variable values, and r is the radius of the earth in m;
step 1.3: calculating the angle β_i between the line connecting O' and A_i and the geographic true north direction:

$$\beta_i = \arcsin\!\left(\frac{s_i}{d_i}\right)$$
step 1.4: calculating the angle θ_i between the line connecting O and A_i and the vertical line:

$$\theta_i = \arctan\!\left(\frac{d_i}{h}\right)$$

wherein h is the height of the monitoring camera from the horizontal plane, in m;
step 1.5: calculating the picture coordinates (x_i, y_i) of A_i in the monitoring camera:

$$x_i = \frac{X}{2}\cdot\frac{\tan(\beta_i - \beta)}{\tan(\omega_x/2)}, \qquad y_i = \frac{Y}{2}\cdot\frac{\tan(\theta_i - \theta)}{\tan(\omega_y/2)}$$

wherein X is the pixel width of the image, Y is the pixel height, and the parameter values of X and Y can be obtained from the X × Y image resolution of the monitoring camera;
step 1.6: randomly selecting a plurality of picture coordinates in a picture of a monitoring camera to obtain a picture coordinate set of the monitoring camera, selecting three groups of coordinates from the picture coordinate set of the monitoring camera each time, converting the selected picture coordinates of the monitoring camera into longitude and latitude coordinates through the mapping relation between the picture coordinates of the monitoring camera and the longitude and latitude coordinates, and calculating according to the picture coordinates of the monitoring camera and the converted longitude and latitude coordinates to obtain a transformation matrix H, wherein the specific process comprises the following steps:
obtaining n transformation matrices H_j through inverse matrix calculation:

$$H_j = \begin{pmatrix} lon_{j1} & lon_{j2} & lon_{j3} \\ lat_{j1} & lat_{j2} & lat_{j3} \\ 1 & 1 & 1 \end{pmatrix}\begin{pmatrix} x_{j1} & x_{j2} & x_{j3} \\ y_{j1} & y_{j2} & y_{j3} \\ 1 & 1 & 1 \end{pmatrix}^{-1}$$

wherein (x_{j1}, y_{j1}), (x_{j2}, y_{j2}), (x_{j3}, y_{j3}) are three groups of coordinates in the picture of the monitoring camera, and (lon_{j1}, lat_{j1}), (lon_{j2}, lat_{j2}), (lon_{j3}, lat_{j3}) are the longitude and latitude coordinates converted from those three groups of monitoring camera picture coordinates; j is a positive integer ranging from 1 to n;

taking the average value of the n transformation matrices H_j:

$$H = \frac{1}{n}\sum_{j=1}^{n} H_j$$

the conversion relation between the picture coordinates of the monitoring camera and the remote sensing longitude and latitude coordinates is thus obtained as:

$$\begin{pmatrix} lon \\ lat \\ 1 \end{pmatrix} = H\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$

wherein (x, y) are picture coordinates of the monitoring camera, and (lon, lat) are remote sensing longitude and latitude coordinates;
the target detection module specifically comprises:
the target detection unit is used for carrying out target detection on the monitoring camera by utilizing a trained target detection model to obtain the detection target information; the detection target information includes: the identification frame of the target, the type of the target and the visual characteristics of the target;
the image processing module specifically comprises:
the image generation unit is used for generating a target image in a remote sensing view according to the type of the target and the visual characteristics of the target based on the trained convolutional neural network;
the conversion unit is used for converting the coordinates of the identification frame of the target in the picture of the monitoring camera into remote sensing longitude and latitude coordinates according to the mapping relation so as to obtain a rectangular area corresponding to the identification frame in the monitoring image in the remote sensing image;
the stretching unit is used for stretching the target image in the remote sensing view by utilizing a bilinear interpolation algorithm to obtain a stretched image with the same size as the identification frame of the target;
and the superposition unit is used for superposing the stretched image and the rectangular area to obtain a target image of the remote sensing image.
4. The remote sensing image simulation generation system based on video monitoring of claim 3, wherein the extraction module specifically comprises:
the information set extraction unit is used for extracting the identification frame information sets of all targets in the remote sensing image based on an image target detection algorithm;
the target removing unit is used for removing a target in the remote sensing image according to the identification frame information set;
and the repairing unit is used for inputting the remote sensing image without the target into the trained generative adversarial network model to obtain the repaired background image.
CN202111525188.8A 2021-12-14 2021-12-14 Remote sensing image simulation generation method and system based on video monitoring Active CN114187179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111525188.8A CN114187179B (en) 2021-12-14 2021-12-14 Remote sensing image simulation generation method and system based on video monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111525188.8A CN114187179B (en) 2021-12-14 2021-12-14 Remote sensing image simulation generation method and system based on video monitoring

Publications (2)

Publication Number Publication Date
CN114187179A CN114187179A (en) 2022-03-15
CN114187179B true CN114187179B (en) 2023-02-03

Family

ID=80543686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111525188.8A Active CN114187179B (en) 2021-12-14 2021-12-14 Remote sensing image simulation generation method and system based on video monitoring

Country Status (1)

Country Link
CN (1) CN114187179B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630825A (en) * 2023-06-09 2023-08-22 北京佳格天地科技有限公司 Satellite remote sensing data and monitoring video fusion method and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9245201B1 (en) * 2013-03-15 2016-01-26 Excelis Inc. Method and system for automatic registration of images
CN103743488B (en) * 2013-12-28 2015-07-22 华中科技大学 Infrared imaging simulation method for globe limb background characteristics of remote sensing satellite
IL243846B (en) * 2016-01-28 2020-11-30 Israel Aerospace Ind Ltd Systems and methods for detecting imaged clouds
CN108596055B (en) * 2018-04-10 2022-02-11 西北工业大学 Airport target detection method of high-resolution remote sensing image under complex background
CN109035188B (en) * 2018-07-16 2022-03-15 西北工业大学 Intelligent image fusion method based on target feature driving
CN109634507B (en) * 2018-12-03 2021-04-13 广东国图勘测地理信息有限公司 Touch electronic map control method and device
CN109903352B (en) * 2018-12-24 2021-03-30 中国科学院遥感与数字地球研究所 Method for making large-area seamless orthoimage of satellite remote sensing image
CN110992262B (en) * 2019-11-26 2023-04-07 南阳理工学院 Remote sensing image super-resolution reconstruction method based on generation countermeasure network
CN112507122A (en) * 2020-12-01 2021-03-16 浙江易智信息技术有限公司 High-resolution multi-source remote sensing data fusion method based on knowledge graph
CN113378686B (en) * 2021-06-07 2022-04-15 武汉大学 Two-stage remote sensing target detection method based on target center point estimation

Also Published As

Publication number Publication date
CN114187179A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN110136170B (en) Remote sensing image building change detection method based on convolutional neural network
CN110276269B (en) Remote sensing image target detection method based on attention mechanism
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN107704857A (en) A kind of lightweight licence plate recognition method and device end to end
CN108052881A (en) The method and apparatus of multiclass entity object in a kind of real-time detection construction site image
CN106570893A (en) Rapid stable visual tracking method based on correlation filtering
CN109035327B (en) Panoramic camera attitude estimation method based on deep learning
CN109002752A (en) A kind of complicated common scene rapid pedestrian detection method based on deep learning
CN105005798B (en) One kind is based on the similar matched target identification method of structures statistics in part
CN107491793B (en) Polarized SAR image classification method based on sparse scattering complete convolution
CN112257741B (en) Method for detecting generative anti-false picture based on complex neural network
CN108229524A (en) A kind of chimney and condensing tower detection method based on remote sensing images
CN114187179B (en) Remote sensing image simulation generation method and system based on video monitoring
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN109447897B (en) Real scene image synthesis method and system
CN111683221B (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN115424209A (en) Crowd counting method based on spatial pyramid attention network
CN108694716A (en) A kind of workpiece inspection method, model training method and equipment
CN114663880A (en) Three-dimensional target detection method based on multi-level cross-modal self-attention mechanism
CN112766381B (en) Attribute-guided SAR image generation method under limited sample
CN109829951B (en) Parallel equipotential detection method and device and automatic driving system
CN113052110A (en) Three-dimensional interest point extraction method based on multi-view projection and deep learning
CN117422619A (en) Training method of image reconstruction model, image reconstruction method, device and equipment
CN109919990B (en) Forest height prediction method by using depth perception network and parallax remote sensing image
CN114445726B (en) Sample library establishing method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant