CN114723773A - Monitoring method, device, system, storage medium and electronic equipment - Google Patents

Monitoring method, device, system, storage medium and electronic equipment

Info

Publication number
CN114723773A
CN114723773A (application CN202110008522.6A)
Authority
CN
China
Prior art keywords
monitoring
information
position information
target object
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110008522.6A
Other languages
Chinese (zh)
Inventor
葛翔
刘斌禄
高跃清
穆立波
武凯
王文帅
赵建
田聚波
李宝莲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN202110008522.6A
Publication of CN114723773A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a monitoring method, apparatus, system, storage medium, and electronic device, and relates to the technical field of scene monitoring. Its aim is to enable an optoelectronic device to monitor a target object even when the surface contour of the monitored scene undulates, thereby improving the monitoring capability of a monitoring system for such scenes. The monitoring method comprises the following steps: determining height information of the target object according to two-dimensional position information of the target object in a first coordinate system, based on a pre-constructed three-dimensional real-scene model of the monitored scene; determining a target optoelectronic device and monitoring adjustment information corresponding to the target optoelectronic device according to the two-dimensional position information, the height information, and monitoring position information of a plurality of optoelectronic devices; and adjusting the target optoelectronic device according to the monitoring adjustment information to acquire a monitoring image of the target object.

Description

Monitoring method, device, system, storage medium and electronic equipment
Technical Field
The present invention relates to the field of target object monitoring technologies, and in particular, to a monitoring method, apparatus, system, storage medium, and electronic device.
Background
When monitoring a target object located on the contour surface of a scene whose surface contour is relatively flat, a monitoring system based on a two-dimensional radar and optoelectronic devices generally finds the target object through the two-dimensional radar and acquires its two-dimensional position information. Monitoring adjustment information for an optoelectronic device is then determined from the two-dimensional position information of the target object; the two-dimensional position information and current monitoring angle information of each optoelectronic device that satisfies the monitoring-range requirement; and the pole height of that optoelectronic device. Finally, each optoelectronic device is adjusted according to its monitoring adjustment information so that it can monitor the target object.
However, for a monitoring scene with an undulating surface contour, the height difference between the target object and the optoelectronic device can no longer be determined from the pole height of the device alone; it must instead be calculated from the installation height of the device and the height of the target object. The installation height of the optoelectronic device is a known fixed value, but a two-dimensional radar can only acquire the two-dimensional position of the target object, not its height. As a result, existing monitoring systems cannot achieve linkage between the two-dimensional radar and the optoelectronic devices in such scenes, the optoelectronic devices struggle to monitor the target object, and the function of the monitoring system is weakened or even lost.
Disclosure of Invention
The invention aims to provide a monitoring method, apparatus, system, storage medium, and electronic device that enable an optoelectronic device to monitor a target object even when the surface contour of the monitored scene undulates, thereby improving the monitoring capability of a monitoring system for such scenes.
In a first aspect, the present invention provides a monitoring method. The monitoring method comprises the following steps:
determining height information of a target object according to two-dimensional position information of the target object in a first coordinate system, based on a pre-constructed three-dimensional real-scene model of the monitored scene;
determining a target optoelectronic device and monitoring adjustment information corresponding to the target optoelectronic device according to the two-dimensional position information, the height information, and monitoring position information of a plurality of optoelectronic devices; and
adjusting the target optoelectronic device according to the monitoring adjustment information to acquire a monitoring image of the target object.
Compared with the prior art, in the monitoring method provided by the invention the three-dimensional real-scene model is constructed in advance from the monitored scene, so that its structure and each pixel feature it contains match, respectively, the structure of the monitored scene and the corresponding position points on its surface contour. Once the target object is found in the monitored scene and its two-dimensional position information in the first coordinate system is received, the height information of the target object can therefore be determined from that two-dimensional position information on the basis of the pre-constructed model. Next, according to the two-dimensional position information, the height information, and the monitoring position information of the plurality of optoelectronic devices, one or more target optoelectronic devices capable of monitoring the target object, together with the monitoring adjustment information each of them needs in order to do so, can be screened out from the optoelectronic devices in the scene. Finally, each target optoelectronic device is adjusted according to the determined monitoring adjustment information and acquires a monitoring image of the target object, so that monitoring of the target object is realized and the monitoring capability of the system for scenes with undulating surface contours is improved.
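The three steps just described can be sketched as one linkage cycle in Python. This is only an illustrative sketch: the function and key names, the dictionary layout for a device, and the flat-earth metres-per-degree constant are all assumptions, not the patent's implementation.

```python
def monitor_target(radar_fix, height_lookup, devices):
    """One linkage cycle of the claimed method (minimal sketch).
    radar_fix: (lon, lat) reported by the two-dimensional radar.
    height_lookup: callable (lon, lat) -> height, standing in for the
    pre-constructed three-dimensional real-scene model.
    devices: list of dicts with 'pos' = (lon, lat, alt) and 'range_m'.
    Returns (device, target_3d) for the first device whose monitoring
    range covers the target, or None if no device can cover it."""
    lon, lat = radar_fix
    target = (lon, lat, height_lookup(lon, lat))       # step 1: height
    for dev in devices:                                # step 2: screen
        d_east = (target[0] - dev["pos"][0]) * 111_320.0
        d_north = (target[1] - dev["pos"][1]) * 111_320.0
        d_up = target[2] - dev["pos"][2]
        dist = (d_east**2 + d_north**2 + d_up**2) ** 0.5
        if dist <= dev["range_m"]:
            return dev, target                         # step 3: adjust
    return None                                        # and image here
```

In a real system the returned device would then be rotated according to its monitoring adjustment information and asked for a monitoring image.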
In a second aspect, the invention also provides a monitoring apparatus. The monitoring apparatus includes:
a height information determining module, configured to determine the height information of the target object according to the two-dimensional position information of the target object in the first coordinate system, based on a pre-constructed three-dimensional real-scene model of the monitored scene;
a monitoring adjustment information determining module, configured to determine the target optoelectronic device and the monitoring adjustment information corresponding to the target optoelectronic device according to the two-dimensional position information, the height information, and the monitoring position information of the plurality of optoelectronic devices; and
a monitoring image acquisition module, configured to adjust the target optoelectronic device according to the monitoring adjustment information so as to acquire a monitoring image of the target object.
In a third aspect, the present invention also provides a computer storage medium. The computer storage medium has stored therein instructions that, when executed, cause the monitoring method described in the first aspect or any of its possible implementations to be performed.
In a fourth aspect, the invention further provides an electronic device. The electronic device includes:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory so as to implement the steps of the monitoring method described in the first aspect or any of its possible implementations.
In a fifth aspect, the invention also provides a monitoring system. The monitoring system includes:
a two-dimensional position information acquisition device, configured to acquire two-dimensional position information of a target object in a monitored scene;
one or more optoelectronic devices, configured to acquire monitoring images within the monitored scene; and
the electronic device described in the fourth aspect or any possible implementation manner of the fourth aspect.
For the beneficial effects of the second to fifth aspects and their various implementations, reference may be made to the analysis of the beneficial effects of the first aspect and its implementations, which is not repeated here.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
Fig. 1 is a schematic structural diagram of a monitoring system according to an embodiment of the present invention;
Fig. 2 is a flow chart of a monitoring method according to an embodiment of the present invention;
Fig. 3 is a flow chart of another monitoring method according to an embodiment of the present invention;
Fig. 4 is a schematic view of the positional relationship among the target optoelectronic device, the current monitoring object, and the target object in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a monitoring apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another monitoring apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a chip according to an embodiment of the present invention.
Detailed Description
To facilitate a clear description of the technical solutions of the embodiments of the present invention, terms such as "first" and "second" are used to distinguish between identical or similar items having substantially the same function and effect. For example, a first threshold and a second threshold are merely different thresholds; no order among them is implied. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit number or order of execution, nor do they indicate relative importance.
It is to be understood that the terms "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the present invention, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression refers to any combination of those items, including any combination of single or plural items. For example, "at least one of a, b, or c" may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may be single or multiple.
Monitoring systems based on a two-dimensional radar and optoelectronic devices generally find a target object by means of the two-dimensional radar and acquire its two-dimensional position information (e.g., longitude and latitude). Monitoring adjustment information for each optoelectronic device that satisfies the monitoring-range requirement is then determined from the two-dimensional position information of the target object; the two-dimensional position information and current monitoring angle information of that device; and the height-difference information between the target object and that device. Finally, each such optoelectronic device is adjusted according to its monitoring adjustment information so that it can monitor the target object, thereby realizing linkage between the two-dimensional radar and the optoelectronic devices.
As can be seen from this linkage process, in order for an optoelectronic device satisfying the monitoring-range requirement to monitor the target object after rotating, the following must be known: the two-dimensional position information of the target object; the two-dimensional position information and current monitoring angle information of each such device; and the height-difference information between the target object and each such device. For a monitoring scene with a relatively flat surface contour, when the target object lies on a contour surface such as the ground or the surface of a building, the height difference between the target object and an optoelectronic device is essentially equal to the pole height of that device, which is a known fixed value. The two-dimensional position of the optoelectronic device is likewise known and fixed, and its current monitoring angle can be obtained from its current working state. In that case, the two-dimensional radar only needs to acquire the two-dimensional position of the target object for the monitoring adjustment information to be obtained and for the radar and the optoelectronic devices to be linked.
However, for a monitoring scene with an undulating surface contour, the height difference between the target object and an optoelectronic device can no longer be determined from the pole height of the device alone; it must be calculated from the installation height of the device and the height of the target object. In practice, the installation height of the optoelectronic device is a known fixed value, but the two-dimensional radar can only acquire the two-dimensional position of the target object, not its height. Consequently, existing monitoring systems cannot achieve linkage between the two-dimensional radar and the optoelectronic devices in such scenes, the optoelectronic devices struggle to monitor the target object, and the function of the monitoring system is weakened or even lost.
To solve the foregoing technical problem, embodiments of the present invention provide a monitoring method, apparatus, system, storage medium, and electronic device. The monitoring method is applicable to various monitoring scenes with undulating surface contours. For example, the monitored scene may be, without limitation, an urban monitoring scene or a mountain monitoring scene. The surface contour of an urban monitoring scene is that of structures such as roads and buildings; the surface contour of a mountain monitoring scene is that of structures such as mountains and roads. In addition, the monitoring method can be applied in a monitoring system.
Fig. 1 shows a schematic structural diagram of a monitoring system provided by an embodiment of the present invention. Referring to fig. 1, the monitoring system includes a two-dimensional position information acquisition device 100, one or more optoelectronic devices 200, and an electronic device 300. The two-dimensional position information acquisition device 100 and the optoelectronic devices 200 both communicate with the electronic device 300 to realize data transmission. The communication may be wireless or wired: wireless communication can be based on networking technologies such as WiFi and ZigBee, while wired communication may use a data line or power-line carrier. The communication interface may be a standard interface, either serial or parallel.
Specifically, referring to fig. 1, the electronic device 300 may be any device having storage and control functions, such as a tablet or a computer, so as to implement the monitoring policy for the target object. The electronic device 300 may further have three-dimensional real-scene model construction and coordinate system conversion functions, so that a model matching the monitored scene can be built before monitoring begins and the coordinate system of the pixel features in the model can be converted into the coordinate system of the target object's two-dimensional position information. The three-dimensional position of each pixel feature in that coordinate system is thereby determined, which facilitates the subsequent determination of the target object's height information and prepares for the subsequent adjustment of the optoelectronic devices 200 according to the height information and other data.
Referring to fig. 1, the two-dimensional position information acquisition device 100 may be any device, such as a two-dimensional radar, that can find a target object in the monitored scene and acquire its two-dimensional position information. The number of optoelectronic devices 200 and the position of each in the monitored scene may be set according to actual requirements and are not specifically limited here. For example, when the monitored scene covers a small area, a single optoelectronic device 200 may suffice; when it covers a large area, several may be needed. The installation position of each optoelectronic device 200 may be chosen according to the terrain, as long as the combined monitoring range of all the devices covers at least the monitored scene.
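The coverage requirement above can be checked at planning time with a simple sketch: sample points of the scene and verify that each lies within the range of at least one device. The function, key names, and flat-earth metres-per-degree constant are assumptions for illustration, not part of the patent.

```python
def scene_covered(sample_points, devices, metres_per_degree=111_320.0):
    """Return True if every sampled (lon, lat) point of the scene lies
    within the monitoring range of at least one device. devices is a
    list of dicts with 'pos' = (lon, lat) and 'range_m'. A planning-time
    sanity check under a flat-earth approximation."""
    def in_range(p, dev):
        d_east = (p[0] - dev["pos"][0]) * metres_per_degree
        d_north = (p[1] - dev["pos"][1]) * metres_per_degree
        return (d_east**2 + d_north**2) ** 0.5 <= dev["range_m"]
    return all(any(in_range(p, d) for d in devices) for p in sample_points)
```

A denser sampling grid gives a tighter check at the cost of more distance evaluations.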
In one example, the monitoring system may further include a scene information acquisition device in communication with the electronic device. The scene information acquisition device may be any device capable of acquiring geographic, building, and similar information in the monitored scene, such as a detection sensor, a camera, or a remote-sensing platform. It transmits the collected information to the electronic device to assist the electronic device in implementing the monitoring policy for the target object.
Fig. 2 is a flow chart of a monitoring method provided by an embodiment of the invention. The monitoring method is applied to the electronic device. Referring to fig. 2, the monitoring method includes:
step 101: and determining the height information of the target object according to the two-dimensional position information of the target object in the first coordinate system based on the pre-constructed three-dimensional live-action model of the monitoring scene.
Illustratively, in practice the structure of the pre-constructed three-dimensional real-scene model and each pixel feature it contains match, respectively, the structure of the monitored scene and the corresponding position points on its surface contour. The target object is a monitored object located on a contour surface of the scene. For example, when the monitored scene is an urban scene, the target object may be located on the roof of a building; when it is a mountain scene, the target object may be located on the surface of a mountain. In this case, a target pixel feature whose position matches the two-dimensional position information of the target object acquired by the two-dimensional position information acquisition device can be screened out from the pixel features of the three-dimensional real-scene model, and because the three-dimensional position information of that pixel feature is known, the height information of the target object can be determined.
It should be noted that the three-dimensional real-scene model of the monitored scene may be constructed before monitoring begins, so that an optoelectronic device can monitor the target object promptly once it is found. The model may, of course, also be constructed while the scene is being monitored. The construction time of the model may be chosen according to actual requirements and is not specifically limited here.
In an example, referring to fig. 3, before the height information of the target object is determined based on the pre-constructed three-dimensional real-scene model of the monitored scene according to the two-dimensional position information of the target object, the monitoring method may further include:
Step 104: constructing a three-dimensional real-scene model of the monitored scene. Each pixel feature in the model corresponds to three-dimensional position information in a second coordinate system.
For example, geographic information, building information, and the like within the monitored scene may be collected by fixed or mobile sensors such as remote-sensing platforms, drones, and cameras. From the collected multi-view remote-sensing images, POS (position and orientation system) data, camera parameters, and other information, a three-dimensional real-scene model of the monitored scene is formed through operations such as feature point detection and matching, aerial triangulation, dense matching, surface reconstruction, texture mapping, and tiling of the multi-view oblique images. Each pixel feature in the model corresponds to three-dimensional position information in the second coordinate system, which generally comprises position information along three mutually perpendicular directions and represents the specific position of the pixel feature in the model.
Step 105: and converting the second coordinate system into the first coordinate system to determine the three-dimensional position information of each pixel feature in the first coordinate system.
For example, although the pre-constructed three-dimensional real-scene model matches the monitored scene, the second coordinate system of the model is not the same as the first coordinate system of the target object's two-dimensional position information, so the second coordinate system needs to be converted into the first according to the proportional relationship between the two. That relationship may be set according to the scale between the model and the scene and the actual application, and is not specifically limited here. After conversion, the three-dimensional position information of each pixel feature in the first coordinate system is determined, so that the height information of the target object can later be found from it using the target object's two-dimensional position information.
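A proportional conversion of this kind can be illustrated with a minimal sketch. The patent does not specify the transform, so the uniform scale, origin offset, and function name below are assumptions.

```python
def model_to_geo(x, y, z, scale, origin_lon, origin_lat, origin_alt):
    """Convert a pixel feature's model coordinates (second coordinate
    system) into geographic coordinates (first coordinate system) via
    a uniform scale and an origin offset. The scale and origin would
    come from the known proportional relationship between the model
    and the monitored scene."""
    lon = origin_lon + x * scale
    lat = origin_lat + y * scale
    alt = origin_alt + z  # height assumed to stay in metres, unscaled
    return lon, lat, alt
```

A production system would instead use a proper map projection, but the structure of the conversion, scale plus offset per axis, is the same.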
For example, when the three-dimensional real-scene model is stored in the OSG scene data format, the SuperMap desktop tool can perform the conversion. Specifically, a planar or spherical scene is first created; in the layer manager, the "normal layer" item is right-clicked and "add three-dimensional slice cache layer" is selected to load the SCP data required to generate a DSM model. Then, in the model-drawing group of the three-dimensional analysis tab, the "generate DSM" button is clicked and a "generate DSM" dialog box pops up. The output range is set under "selection range" according to the actual situation; it is divided into a data range and a user-defined range, and the upper-left and lower-right values under "result range" display the extent of the result data. Finally, under the result settings of the dialog box, the storage name, resolution, camera height, and other properties of the result data can be set, and clicking the confirm button generates the DSM data, whereby the second coordinate system is converted into the first coordinate system.
For example, on the basis of the above, determining the height information of the target object according to its two-dimensional position information, based on the pre-constructed three-dimensional real-scene model of the monitored scene, may include: determining the height information according to the two-dimensional position information and the three-dimensional position information of each pixel feature in the first coordinate system.
For example, taking the case where the two-dimensional position information comprises a longitude and a latitude and the three-dimensional position information in the first coordinate system comprises a longitude, a latitude, and a height: the two-dimensional position information of the target object may be compared one by one with the three-dimensional position information of each pixel feature in the first coordinate system. When the longitude and latitude in the three-dimensional position information of some pixel feature are respectively the same as the longitude and latitude in the two-dimensional position information, the height in that pixel feature's three-dimensional position information is the height of the target object, and the height information of the target object is thereby determined.
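The lookup just described can be sketched as follows. Because exact floating-point equality of coordinates is unrealistic in practice, this sketch uses a nearest-neighbour match within a tolerance; the function name, the (lon, lat, alt) tuple layout, and the tolerance are assumptions for illustration.

```python
from math import inf

def lookup_altitude(target_lon, target_lat, pixel_features, tol=1e-5):
    """Return the altitude of the model pixel whose (lon, lat) best
    matches the target's two-dimensional position, or None if no pixel
    lies within the tolerance. pixel_features is an iterable of
    (lon, lat, alt) triples in the first coordinate system."""
    best_alt, best_d2 = None, inf
    for lon, lat, alt in pixel_features:
        d2 = (lon - target_lon) ** 2 + (lat - target_lat) ** 2
        if d2 < best_d2:
            best_alt, best_d2 = alt, d2
    return best_alt if best_d2 <= tol ** 2 else None
```

In a real model the pixel features would be indexed spatially (e.g., a grid or k-d tree) rather than scanned linearly.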
Step 102: determining the target optoelectronic device and the monitoring adjustment information corresponding to the target optoelectronic device according to the two-dimensional position information, the height information, and the monitoring position information of the plurality of optoelectronic devices.
For example, in an actual application process, each optoelectronic device has a certain monitoring distance, and the setting position of each optoelectronic device in the monitoring scene is different, so each optoelectronic device has a different monitoring range. In this case, after the two-dimensional position information acquisition unit finds the target object in the monitoring scene, and before the target object is monitored by an optoelectronic device, the optoelectronic device capable of monitoring the target object (i.e., the target optoelectronic device) needs to be screened from the plurality of optoelectronic devices in the monitoring scene. In addition, the monitoring adjustment information of the target optoelectronic device needs to be determined, so that the target optoelectronic device can monitor the target object after being adjusted according to the monitoring adjustment information.
Specifically, the monitoring position information may include any information that can directly or indirectly represent information related to the monitoring performed by the optoelectronic device. For example, the monitoring position information may include spatial position information, monitoring distance information, and monitoring angle information of the optoelectronic device. The spatial position information is used to directly or indirectly represent the spatial position of the optoelectronic device in the monitored scene; for example, the spatial position information may include the longitude, latitude, and altitude of the optoelectronic device. The monitoring angle information is used to directly or indirectly represent the viewing angle at which the optoelectronic device is currently monitoring; for example, the monitoring angle information may include the azimuth angle and pitch angle to which the optoelectronic device currently corresponds. In this case, the monitoring position information comprehensively represents the current monitoring situation of the optoelectronic device in terms of spatial position, monitoring distance and monitoring angle, so that the target optoelectronic device can accurately monitor the target object after being adjusted according to the monitoring adjustment information.
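As a rough sketch, the per-device monitoring position information described above might be carried in a structure like the following; the field names and units are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MonitoringPosition:
    """Illustrative container for one optoelectronic device's
    monitoring position information."""
    # spatial position information
    longitude: float
    latitude: float
    altitude: float              # metres
    # monitoring distance information
    monitoring_distance: float   # maximum monitoring distance, metres
    # monitoring angle information
    azimuth: float               # current azimuth angle, degrees
    pitch: float                 # current pitch angle, degrees
```

A monitoring system would typically hold one such record per device and update the azimuth/pitch fields as the device is steered.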
For example, in a case where the monitoring position information includes spatial position information, monitoring distance information, and monitoring angle information, and the monitoring adjustment information includes rotation angle information and view angle information, the determining of the target optoelectronic device and the monitoring adjustment information corresponding to the target optoelectronic device based on the two-dimensional position information, the height information, and the monitoring position information of the plurality of optoelectronic devices may include:
step 102.1: and determining the target photoelectric equipment according to the two-dimensional position information, the spatial position information and the monitoring distance information.
Specifically, the monitoring range information of each optoelectronic device may be determined according to the spatial position information and the monitoring distance information. Next, in a case where it is determined that the monitoring range information corresponding to the photoelectric device includes the two-dimensional position information, the photoelectric device is a target photoelectric device.
For example: taking the case that the spatial position information includes longitude, latitude and altitude, and the two-dimensional position information includes longitude and latitude, the position of the optoelectronic device within the monitoring scene may be determined according to the longitude and latitude included in the spatial position information. Then, the monitoring range that the optoelectronic device can cover, with its own position as the center of a circle, can be determined according to the monitoring distance information of the optoelectronic device. If the longitude and latitude included in the two-dimensional position information fall within the monitoring range of the optoelectronic device, indicating that the optoelectronic device can monitor the target object, then the optoelectronic device is the target optoelectronic device.
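The circular-range screening step can be sketched as below. Coordinates are treated as planar here for simplicity, mirroring the planar distance formula used in the worked example of fig. 4; for real longitude/latitude values a geodesic distance would be substituted.

```python
import math

def is_target_device(dev_x, dev_y, monitor_dist, tgt_x, tgt_y):
    """Return True if the target's two-dimensional position falls inside
    the circular monitoring range centred on the device.

    dev_x, dev_y: device position (planar coordinates)
    monitor_dist: the device's maximum monitoring distance
    tgt_x, tgt_y: target's two-dimensional position
    """
    # distance from device to target, compared against the radius
    return math.hypot(tgt_x - dev_x, tgt_y - dev_y) <= monitor_dist
```

Screening then amounts to applying this predicate to every device in the scene and keeping those for which it returns True.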
Step 102.2: and determining rotation angle information according to the two-dimensional position information, the height information, the spatial position information and the monitoring angle information.
For example, the rotation angle information may include a horizontal rotation angle and a pitch rotation angle corresponding to the target optoelectronic device. In this case, the horizontal rotation angle may be determined first based on the two-dimensional position information, the spatial position information, and the monitoring angle information. And then determining the pitching rotation angle according to the height information, the space position information and the monitoring angle information.
For example: referring to fig. 4, the two-dimensional position information includes a latitude X1 and a longitude Y1. The height information of the target object includes a height H1. The spatial position information includes a latitude X2, a longitude Y2, and an altitude H2. The monitoring angle information includes a current heading angle A and a current pitch angle B. γ is the included angle between the current monitoring direction of the photoelectric device and the horizontal direction. In this case, the predetermined horizontal zero angle is due north, denoted M. Accordingly, the horizontal rotation angle α is calculated as α = arctan((Y2 - Y1)/(X2 - X1)) - M - A. The height difference H between the target photoelectric device and the target object is H = H1 - H2. The horizontal distance D between the target photoelectric device and the target object satisfies D² = (X1 - X2)² + (Y1 - Y2)². The included angle β between the line connecting the photoelectric device and the target object and the horizontal direction is β = arctan(H/D). The pitch rotation angle δ is δ = β - γ. The horizontal rotation angle α and the pitch rotation angle δ can thus be obtained from the above formulas.
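A sketch of these two calculations follows, reading the patent's "arc" as the arctangent (an assumption) and working in degrees throughout; `atan2` is used instead of a raw quotient so the quadrant of the bearing is handled.

```python
import math

def rotation_angles(tgt, dev, heading_deg, gamma_deg, north_zero_deg=0.0):
    """Compute (horizontal rotation angle alpha, pitch rotation angle delta)
    following the worked example of fig. 4.

    tgt = (X1, Y1, H1): target latitude, longitude, height
    dev = (X2, Y2, H2): device latitude, longitude, altitude
    heading_deg: current heading angle A
    gamma_deg: angle between current monitoring direction and horizontal
    north_zero_deg: the horizontal zero angle M (due north)
    """
    X1, Y1, H1 = tgt
    X2, Y2, H2 = dev
    # alpha = arctan((Y2 - Y1) / (X2 - X1)) - M - A
    alpha = math.degrees(math.atan2(Y2 - Y1, X2 - X1)) \
            - north_zero_deg - heading_deg
    # height difference and planar horizontal distance
    H = H1 - H2
    D = math.hypot(X1 - X2, Y1 - Y2)
    # beta = arctan(H / D); pitch rotation delta = beta - gamma
    beta = math.degrees(math.atan(H / D))
    delta = beta - gamma_deg
    return alpha, delta
```

For instance, a target 10 units away horizontally and 10 units higher than a level device heading along the zero direction yields a pitch rotation of 45 degrees.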
Step 102.3: and determining the field angle information according to the two-dimensional position information, the height information and the space position information.
Illustratively, the field angle of the target optoelectronic device is adjusted according to the distance between the target optoelectronic device and the target object, so that the monitoring image of the target object acquired by the target optoelectronic device is clearer. Specifically, the preset relationship between the distance and the angle of view may be established in advance based on past monitoring experience or theoretical data before monitoring the monitoring scene. After the actual distance between the target photoelectric device and the target object is determined according to the spatial position information of the target photoelectric device, and the two-dimensional position information and the height information of the target object, the corresponding field angle can be determined from the preset relationship according to the actual distance, and accordingly the field angle information can be determined. For example: when the distance between the target photoelectric device and the target object is 1200 meters, the field angle information of the target photoelectric device may be 20 degrees.
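One possible encoding of the preset distance-to-field-angle relationship is a bracketed lookup table. The 1200 m / 20 degree pair comes from the example above; the other bracket values are invented for illustration.

```python
import bisect

# Hypothetical preset relationship: (upper distance bound in metres,
# field angle in degrees). Nearer targets get a wider field angle.
PRESET = [(500.0, 40.0), (1200.0, 20.0), (3000.0, 10.0)]

def field_angle_for(distance):
    """Return the field angle whose distance bracket contains `distance`;
    beyond the last bracket, fall back to the narrowest angle."""
    bounds = [d for d, _ in PRESET]
    i = bisect.bisect_left(bounds, distance)
    if i < len(PRESET):
        return PRESET[i][1]
    return PRESET[-1][1]
```

The actual distance between device and target would be computed from the spatial position information and the target's two-dimensional position and height, as in the previous step.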
The field angle of the optoelectronic device is related to the focal length of the optoelectronic device, and the focal length of the optoelectronic device can be adjusted by adjusting the field angle. Accordingly, by adjusting the focal length of the optoelectronic device, the field angle of the optoelectronic device may also be adjusted.
Step 103: and adjusting the target photoelectric equipment according to the monitoring adjustment information to acquire a monitoring image of the target object.
For example, the target photoelectric device may be controlled to rotate in the horizontal direction according to the horizontal rotation angle α obtained in step 102.2, then controlled to rotate in the vertical direction according to the pitch rotation angle δ, and finally adjusted according to the field angle information obtained in step 102.3, so that the target photoelectric device can accurately monitor the target object and acquire a clear monitoring image of the target object. Of course, the target photoelectric device may instead be adjusted according to the pitch rotation angle δ first and then according to the horizontal rotation angle α. Specifically, the adjustment sequence of the horizontal rotation angle and the pitch rotation angle may be set according to the actual application scenario, and is not specifically limited herein.
As can be seen from the above, in the monitoring method provided in the embodiment of the present invention, the three-dimensional live-action model of the monitored scene is a model pre-constructed according to the monitored scene, and therefore the structure of the three-dimensional live-action model and each pixel feature included in it respectively match the structure of the monitored scene and the corresponding position points of its surface contour. In this case, after the target object is found in the monitored scene and the two-dimensional position information of the target object in the first coordinate system is received, the height information of the target object in the monitored scene may be determined, based on the pre-constructed three-dimensional live-action model, according to the two-dimensional position information of the target object in the first coordinate system. Then, according to the two-dimensional position information, the height information, and the monitoring position information of the plurality of photoelectric devices, one or more target photoelectric devices capable of monitoring the target object, together with the monitoring adjustment information by which each target photoelectric device is to monitor the target object, can be screened from the plurality of photoelectric devices in the monitoring scene. Finally, the target photoelectric device is adjusted according to the determined monitoring adjustment information to acquire a monitoring image of the target object, so that the target photoelectric device monitors the target object, and the capability of the monitoring system to monitor scenes with surface contour fluctuations is improved.
The above description mainly introduces the solutions provided by the embodiments of the present invention from the perspective of electronic devices. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the exemplary elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiment of the present invention, the electronic device and the like may be divided into functional modules according to the above method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 5 is a schematic structural diagram of a monitoring device according to an embodiment of the present invention. Referring to fig. 5, the monitoring apparatus includes:
the height information determining module 400 is configured to determine height information of the target object according to two-dimensional position information of the target object in the first coordinate system based on a pre-constructed three-dimensional live-action model of the monitored scene.
A monitoring adjustment information determining module 401, configured to determine, according to the two-dimensional position information, the height information, and the monitoring position information of the plurality of optoelectronic devices, a target optoelectronic device and monitoring adjustment information corresponding to the target optoelectronic device.
And the monitoring image acquisition module 402 is configured to adjust the target photoelectric device according to the monitoring adjustment information to acquire a monitoring image of the target object.
In an example, referring to fig. 6, the monitoring apparatus may further include:
a model building module 403, configured to build a three-dimensional real-scene model of the monitoring scene. And each pixel feature in the three-dimensional live-action model corresponds to three-dimensional position information in the second coordinate system.
And the coordinate system conversion module 404 is used for converting the second coordinate system into the first coordinate system so as to determine the three-dimensional position information of each pixel feature in the first coordinate system.
In an example, referring to fig. 5, in the case that the monitoring apparatus further includes a model building module 403 and a coordinate system converting module 404, the height information determining module 400 is configured to:
and determining height information according to the two-dimensional position information and the three-dimensional position information of each pixel feature in the first coordinate system.
In one example, referring to fig. 5, the monitoring position information includes spatial position information, monitoring distance information, and monitoring angle information of the photoelectric device. The monitoring adjustment information includes rotation angle information and angle of view information.
The monitoring adjustment information determining module 401 is configured to:
and determining the target photoelectric equipment according to the two-dimensional position information, the spatial position information and the monitoring distance information.
And determining the rotation angle information according to the two-dimensional position information, the height information, the spatial position information and the monitoring angle information.
And determining the field angle information according to the two-dimensional position information, the height information and the space position information.
In an example, referring to fig. 5, in the case that the monitoring position information includes spatial position information, monitoring distance information, and monitoring angle information of the optoelectronic device, the monitoring adjustment information determining module 401 is configured to:
and determining the monitoring range information of each photoelectric device according to the spatial position information and the monitoring distance information.
And when the monitoring range information corresponding to the photoelectric equipment is determined to comprise the two-dimensional position information, the photoelectric equipment is the target photoelectric equipment.
In an example, referring to fig. 5, in a case that the monitoring position information includes spatial position information, monitoring distance information, and monitoring angle information of the optoelectronic device, and the rotation angle information includes a horizontal rotation angle and a pitch rotation angle corresponding to the target optoelectronic device, the monitoring adjustment information determining module 401 is configured to:
and determining the horizontal rotation angle according to the two-dimensional position information, the spatial position information and the monitoring angle information.
And determining the pitching rotation angle according to the height information, the spatial position information and the monitoring angle information.
Fig. 7 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present invention. Referring to fig. 7, the electronic device 500 includes a processor 510 and a memory 520.
Optionally, as shown in fig. 7, the electronic device 500 may further include a communication interface 530 and a communication line 540. Communication interface 530 is coupled to processor 510. Communication link 540 may include a path to transfer information between the aforementioned components.
As shown in fig. 7, the processor 510 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the present invention. There may be one or more communication interfaces 530. The communication interface 530 may use any transceiver or the like for communicating with other devices or a communication network.
As shown in fig. 7, the memory 520 is used for storing computer instructions for implementing aspects of the present invention and is controlled by the processor 510 for execution. Processor 510 is configured to execute computer instructions stored in memory 520 to implement the monitoring methods provided by embodiments of the present invention.
As shown in fig. 7, the memory 520 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 520 may be separate and coupled to the processor 510 via the communication line 540. The memory 520 may also be integrated with the processor 510.
Optionally, the computer instructions in the embodiment of the present invention may also be referred to as application program codes, which is not specifically limited in this embodiment of the present invention.
In particular implementations, as one embodiment, processor 510 may include one or more CPUs, such as CPU0 and CPU1 in fig. 7, as shown in fig. 7.
In particular implementations, as one embodiment, as shown in fig. 7, electronic device 500 may include multiple processors 510, such as processor 510 and processor 550 in fig. 7. Each of these processors may be a single core processor or a multi-core processor.
Fig. 8 is a schematic structural diagram of a chip according to an embodiment of the present invention. As shown in fig. 8, the chip 600 includes one or more (including two) processors 610 and a communication interface 620.
Optionally, as shown in fig. 8, the chip 600 further includes a memory 630, and the memory 630 may include a read-only memory and a random access memory and provide operating instructions and data to the processor 610. The portion of memory may also include non-volatile random access memory (NVRAM).
In some embodiments, as shown in FIG. 8, memory 630 stores elements, execution modules or data structures, or a subset thereof, or an expanded set thereof.
In the embodiment of the present invention, as shown in fig. 8, the processor 610 executes a corresponding operation by calling an operation instruction stored in the memory (the operation instruction may be stored in an operating system).
As shown in fig. 8, the processor 610 controls processing operations of any one of the electronic devices, and the processor 610 may also be referred to as a Central Processing Unit (CPU).
As shown in fig. 8, the memory 630 may include both read-only memory and random access memory and provides instructions and data to the processor 610. A portion of the memory 630 may also include NVRAM. The processor, the memory, and the communication interface are coupled together by a bus system, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 640 in fig. 8.
The method disclosed by the embodiment of the invention can be applied to a processor or realized by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The embodiment of the invention also provides a computer readable storage medium. The computer readable storage medium has stored therein instructions that, when executed, implement the functions performed by the electronic device in the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product described above includes one or more computer programs or instructions. When the above-described computer program or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present invention are wholly or partially performed. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape; or optical media such as Digital Video Disks (DVDs); it may also be a semiconductor medium, such as a Solid State Drive (SSD).
While the invention has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of monitoring, the method comprising:
determining height information of a target object according to two-dimensional position information of the target object in a first coordinate system based on a pre-constructed three-dimensional live-action model of a monitoring scene;
determining a target photoelectric device and monitoring adjustment information corresponding to the target photoelectric device according to the two-dimensional position information, the height information and monitoring position information of the plurality of photoelectric devices;
and adjusting the target photoelectric equipment according to the monitoring adjustment information to acquire a monitoring image of the target object.
2. The method of claim 1, wherein before determining the height information of the target object according to the two-dimensional position information of the target object based on the three-dimensional real scene model of the pre-constructed monitoring scene, the method further comprises:
building a three-dimensional real scene model of the monitoring scene; each pixel feature in the three-dimensional live-action model corresponds to three-dimensional position information in a second coordinate system;
and converting the second coordinate system into the first coordinate system to determine three-dimensional position information of each pixel feature in the first coordinate system.
3. The method of claim 2, wherein the determining the height information of the target object according to the two-dimensional position information of the target object based on the pre-constructed three-dimensional live-action model of the monitoring scene comprises:
and determining the height information according to the two-dimensional position information and the three-dimensional position information of each pixel feature in the first coordinate system.
4. The method of claim 1, wherein the monitoring position information includes spatial position information, monitoring distance information, and monitoring angle information of the optoelectronic device; the monitoring adjustment information comprises rotation angle information and field angle information;
the determining, according to the two-dimensional position information, the height information, and monitoring position information of a plurality of photoelectric devices, a target photoelectric device and monitoring adjustment information corresponding to the target photoelectric device includes:
determining the target photoelectric equipment according to the two-dimensional position information, the spatial position information and the monitoring distance information;
determining the rotation angle information according to the two-dimensional position information, the height information, the spatial position information and the monitoring angle information;
and determining the field angle information according to the two-dimensional position information, the height information and the space position information.
5. The method of claim 4, wherein determining the target optoelectronic device based on the spatial location information and the monitoring distance information comprises:
determining monitoring range information of each photoelectric device according to the spatial position information and the monitoring distance information;
and when the monitoring range information corresponding to the photoelectric equipment is determined to comprise the two-dimensional position information, the photoelectric equipment is the target photoelectric equipment.
6. The method of claim 4, wherein the rotation angle information comprises a horizontal rotation angle and a pitch rotation angle corresponding to the target optoelectronic device, and wherein the determining the rotation angle information corresponding to the target optoelectronic device according to the two-dimensional position information, the altitude information, the spatial position information, and the monitoring angle information comprises:
determining the horizontal rotation angle according to the two-dimensional position information, the spatial position information and the monitoring angle information;
and determining the pitching rotation angle according to the height information, the spatial position information and the monitoring angle information.
7. A monitoring device, the device comprising:
the height information determining module is used for determining the height information of the target object according to the two-dimensional position information of the target object in the first coordinate system based on a pre-constructed three-dimensional live-action model of the monitoring scene;
a monitoring adjustment information determining module, configured to determine, according to the two-dimensional position information, the height information, and monitoring position information of a plurality of optoelectronic devices, a target optoelectronic device and monitoring adjustment information corresponding to the target optoelectronic device;
and the monitoring image acquisition module is used for adjusting the target photoelectric equipment according to the monitoring adjustment information so as to acquire a monitoring image of the target object.
8. A computer storage medium having stored therein instructions that, when executed, cause the monitoring method of any of claims 1-6 to be performed.
9. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1 to 6.
10. A monitoring system, comprising:
the two-dimensional position information acquisition equipment is used for acquiring two-dimensional position information of a target object in a monitoring scene;
one or more optoelectronic devices for acquiring surveillance images within the surveillance scene;
and an electronic device as claimed in claim 9.
CN202110008522.6A 2021-01-05 2021-01-05 Monitoring method, device, system, storage medium and electronic equipment Pending CN114723773A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110008522.6A CN114723773A (en) 2021-01-05 2021-01-05 Monitoring method, device, system, storage medium and electronic equipment

Publications (1)

Publication Number: CN114723773A
Publication Date: 2022-07-08

Family

ID: 82234496

Country Status (1)

Country: CN
Document: CN114723773A

Similar Documents

Publication Title
EP3611692B1 (en) Iot gateway for weakly connected settings
US11698449B2 (en) User interface for displaying point clouds generated by a LiDAR device on a UAV
CN112567201B (en) Distance measuring method and device
US8193909B1 (en) System and method for camera control in a surveillance system
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
CN108090959B (en) Indoor and outdoor integrated modeling method and device
CN113345028B (en) Method and equipment for determining target coordinate transformation information
US10922881B2 (en) Three dimensional/360 degree (3D/360°) real-time full information smart management integrated mapping system (SMIMS) and process of generating the same
CN110831030A (en) Method for acquiring signal coverage effect diagram and network equipment
US10904431B2 (en) Camera controller
CN113869231B (en) Method and equipment for acquiring real-time image information of target object
EP3989539B1 (en) Communication management apparatus, image communication system, communication management method, and carrier means
EP4220547A1 (en) Method and apparatus for determining heat data of global region, and storage medium
CN115439528A (en) Method and equipment for acquiring image position information of target object
EP2641395B1 (en) System and method for camera control in a surveillance system
CN114723773A (en) Monitoring method, device, system, storage medium and electronic equipment
CN116152471A (en) Factory safety production supervision method and system based on video stream and electronic equipment
CN115565092A (en) Method and equipment for acquiring geographical position information of target object
WO2023040137A1 (en) Data processing
KR20230047734A (en) Method for monitoring solar panels using video streams from uav
CN115575892A (en) Target position determining method and device, electronic equipment and storage medium
US20240098367A1 (en) Method and system for real-time geo referencing stabilization
CN117636404B (en) Fall detection method and system based on non-wearable equipment
CN115760964B (en) Method and equipment for acquiring screen position information of target object
CN118817261A (en) Parameter determining method, device, equipment and medium of image acquisition equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination