CN117746313A - Regional monitoring method and device, electronic equipment and electronic fence system - Google Patents


Info

Publication number
CN117746313A
Authority
CN
China
Prior art keywords
target
area
image
current moment
position information
Prior art date
Legal status
Pending
Application number
CN202311474687.8A
Other languages
Chinese (zh)
Inventor
李孟潇
曹玉佩
田富有
陈浩
吴炳方
Current Assignee
Shandong Land Urban Rural Integration Development Group Co ltd
Aerospace Information Research Institute of CAS
Original Assignee
Shandong Land Urban Rural Integration Development Group Co ltd
Aerospace Information Research Institute of CAS
Priority date
Filing date
Publication date
Application filed by Shandong Land Urban Rural Integration Development Group Co ltd, Aerospace Information Research Institute of CAS filed Critical Shandong Land Urban Rural Integration Development Group Co ltd
Priority to CN202311474687.8A
Publication of CN117746313A
Legal status: Pending (current)

Landscapes

  • Alarm Systems (AREA)

Abstract

The invention provides a region monitoring method, a device, electronic equipment and an electronic fence system, wherein the method comprises the following steps: marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera; and sending out an alarm signal under the condition that a suspicious object exists in the area where the monitoring target is located in the image of the current moment of the target area. The region monitoring method, the device, the electronic equipment and the electronic fence system provided by the invention can monitor and protect the area where the monitoring target is located in a large area more flexibly and at lower cost, can improve the management capability of the area where the monitoring target is located, and can better prevent damage to the area where the monitoring target is located.

Description

Regional monitoring method and device, electronic equipment and electronic fence system
Technical Field
The present invention relates to the field of machine vision, and in particular, to a method and apparatus for monitoring a region, an electronic device, and an electronic fence system.
Background
An electronic fence system is a perimeter burglar alarm system that is widely applied in various places requiring enclosure or perimeter protection.
Typically, a conventional electronic fence system consists of an electronic fence main unit and a front-end detection fence. However, when a conventional electronic fence system is applied to the monitoring and protection of large areas such as cultivated land, pasture, forest or wetland, setting a front-end detection fence around the boundary of the large area requires substantial equipment costs and later maintenance costs. In addition, when the boundary of the large area changes, the front-end detection fence needs to be newly installed or relocated, so the flexibility of monitoring and protecting the large area is not high. Therefore, how to monitor and protect a large area more flexibly and at lower cost is a technical problem to be solved in the field.
Disclosure of Invention
The invention provides a regional monitoring method, a regional monitoring device, electronic equipment and an electronic fence system, which are used to overcome the defects of high cost and low flexibility of monitoring and protecting large areas in the prior art, and to monitor and protect large areas more flexibly and at lower cost.
The invention provides a regional monitoring method, which comprises the following steps:
acquiring an image of a current moment of a target area, position information of the current moment of a target camera and attitude information of a current moment of a lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera;
marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera;
and identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
According to the method for monitoring a region provided by the invention, the marking the region where the monitoring target is located in an image of the current moment of the target region based on the geographical position information and digital elevation model data of the region where the monitoring target is located in the target region, the position information of the current moment of the target camera, the posture information of the current moment of the lens of the target camera and the lens parameter information of the target camera comprises the following steps:
Determining target geographic position information and target digital elevation model data corresponding to an image of the current moment of the target area in geographic position information and digital elevation model data of the area where the monitoring target is located based on the position information of the current moment of the target camera or based on the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera;
acquiring pixel coordinate values corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the attitude information of the current moment of the lens of the target camera and the lens parameter information of the target camera;
and marking the area where the monitoring target is located in the image of the current moment of the target area based on the pixel coordinate value corresponding to the target geographic position information.
According to the region monitoring method provided by the invention, the acquiring of the pixel coordinate value corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the attitude information of the current moment of the lens of the target camera and the lens parameter information of the target camera comprises the following steps:
Acquiring a geodetic coordinate value corresponding to the target geographic position information based on the target digital elevation model data and the target geographic position information;
converting a coordinate system of a geodetic coordinate value corresponding to the target geographic position information through Gaussian forward calculation to obtain an object coordinate value corresponding to the target geographic position information;
acquiring an image coordinate value corresponding to the target geographic position information through a collineation equation based on the position information of the target camera at the current moment, the attitude information of the lens of the target camera at the current moment, the lens parameter information of the target camera and the object coordinate value corresponding to the target geographic position information;
and converting a coordinate system of an image coordinate value corresponding to the target geographic position information based on the lens parameter information of the target camera, and obtaining a pixel coordinate value corresponding to the target geographic position information.
According to the method for monitoring a region provided by the invention, the determining of the target geographic position information and the target digital elevation model data corresponding to the image of the current moment of the target region in the geographic position information and the digital elevation model data of the region where the monitoring target is located in the target region based on the position information of the current moment of the target camera comprises the following steps:
determining, based on the position information of the current moment of the target camera, the vertical projection point of the target camera in the target area;
determining a circular area taking the vertical projection point as a circle center and a preset distance as a radius as an associated area of the target camera;
and screening the geographic position information and the digital elevation model data of the related area of the target camera from the geographic position information and the digital elevation model data of the area where the monitoring target is located in the target area, and taking the geographic position information and the digital elevation model data of the related area of the target camera as the target geographic position information and the target digital elevation model data.
According to the method for monitoring a region provided by the invention, the marking the region where the monitoring target is located in the image of the current moment of the target region based on the pixel coordinate value corresponding to the target geographic position information comprises the following steps:
generating a pattern spot in the blank image based on the pixel coordinate value corresponding to the target geographic position information, and obtaining a pattern spot image corresponding to the target camera at the current moment;
cutting out a target pattern spot image corresponding to the image of the current moment of the target area from the pattern spot image corresponding to the target camera at the current moment;
and overlapping the target pattern spot image with the image of the current moment of the target area frame by frame to obtain the image of the current moment of the target area marked with the area where the monitoring target is located.
According to the region monitoring method provided by the invention, before the image of the current moment of the target region, the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera are obtained, the method further comprises:
and controlling the lens of the target camera to rotate in the horizontal direction and/or the vertical direction according to a preset rule.
The invention also provides a region monitoring device, which comprises:
the data acquisition module is used for acquiring an image of the current moment of the target area, the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera;
the area determining module is used for marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera;
The regional monitoring module is used for identifying suspicious targets in the region where the monitoring targets are located in the image of the target region at the current moment, and sending out alarm signals under the condition that suspicious objects exist in the region where the monitoring targets are located in the image of the target region at the current moment.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the region monitoring method as described above when executing the program.
The invention also provides an electronic fence system, comprising: the electronic device and the target camera as described above;
the target camera is used for acquiring image data of a target area and sending the image data to the electronic equipment.
According to the invention, an electronic fence system further comprises: a bracket; the support is arranged at a preset position in the target area; the support is used for fixing the target camera.
According to the invention, an electronic fence system further comprises: unmanned plane; the unmanned aerial vehicle is used for fixing the target camera;
The electronic equipment is also used for controlling the unmanned aerial vehicle to fly in the target area according to a preset route;
the unmanned aerial vehicle is used for responding to the control of the electronic equipment and driving the target camera to fly in the target area according to the preset route.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of region monitoring as described in any of the above.
The invention also provides a computer program product comprising a computer program which when executed by a processor implements a method of region monitoring as described in any of the above.
According to the region monitoring method, the device, the electronic equipment and the electronic fence system, the region where the monitoring target is located is marked in the image of the current moment of the target region based on the geographic position information and the digital elevation model data of the region where the monitoring target is located in the target region, the position information of the current moment of the target camera, the posture information of the current moment of the lens of the target camera and the lens parameter information of the target camera, and further, when the suspicious object exists in the region where the monitoring target is located in the image of the current moment of the target region, an alarm signal is sent, the region where the monitoring target is located in a large-area region can be monitored and protected more flexibly and at lower cost, the management capability of the region where the monitoring target is located can be improved, and the damage of the region where the monitoring target is located can be better avoided.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a region monitoring method according to the present invention;
FIG. 2 is a second flow chart of the area monitoring method according to the present invention;
FIG. 3 is a schematic diagram of a coordinate system conversion process in the area monitoring method according to the present invention;
FIG. 4 is a diagram showing the comparison between the area of the monitoring target in the field of view of the target camera and the area of the monitoring target in the image of the current moment of the target area in the area monitoring method provided by the invention;
FIG. 5 is a schematic view of a region monitoring device according to the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the description of the invention, it should be noted that, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
It should be noted that, along with the development of science and technology, the electronic fence system has gradually developed from the original rudimentary electrified-grid type electronic fence system into a new type of networked and integrated electronic fence system, providing a stronger guarantee for safety protection in various industries.
Grain is the basis of human survival; therefore, implementing a farmland protection system, controlling the occupation of cultivated land by non-agricultural construction, and monitoring and supervising the quantity and quality of cultivated land are of great significance.
However, because cultivated land covers a wide range and a large number of plots, monitoring it by means of manual inspection requires substantial labor and time costs, yields poor monitoring results, and makes large-scale, all-weather monitoring and supervision of cultivated land difficult to realize. If the cultivated land is monitored by a conventional electronic fence system, a front-end detection fence needs to be arranged around the boundary of the cultivated land, which requires high equipment and later maintenance costs; moreover, when the boundary of the cultivated land changes, the front-end detection fence needs to be reinstalled or relocated, so the flexibility of monitoring and protecting the cultivated land is not high.
With the development of networking and integration, electronic fences show a trend of multidisciplinary hybrid development. A virtual electronic fence system based on machine vision has the characteristics of low construction cost and a flexible monitoring range, and is suitable for monitoring and protecting large areas such as cultivated land, pasture, forest or wetland.
Therefore, the invention provides a regional monitoring method for realizing a cultivated land monitoring and supervision mechanism. The regional monitoring method provided by the invention can monitor and protect cultivated land more flexibly and at lower cost, can improve the management capability of cultivated land protection, and can better prevent damage to cultivated land.
Fig. 1 is a flow chart of a method for monitoring an area according to the present invention. The area monitoring method of the present invention is described below with reference to fig. 1. As shown in fig. 1, the method includes: step 101, acquiring an image of a current time of a target area, position information of the current time of a target camera and posture information of a current time of a lens of the target camera, wherein the image of the current time of the target area is shot by the target camera.
It should be noted that, the execution body in the embodiment of the present invention is a region monitoring device.
It will be appreciated that for a large area, only a portion of the large area may be monitored, for example: in a certain village administrative area, including an area where cultivated lands are located, an area where roads are located, a resident living area and the like, the area monitoring can be performed only on the area where cultivated lands are located in the village administrative area; or, in a certain wetland natural protection area, the area where the wetland is located, the area where the road is located and the area where the building is located are included, but in the above-mentioned wetland natural protection area, only the area where the wetland is located may be monitored.
Therefore, in the embodiment of the invention, the target area and the monitoring target can be determined according to the actual requirements. Wherein, the monitoring target can comprise at least one of cultivated land, woodland, pasture area and wetland.
Accordingly, the regional monitoring method provided by the invention can be used for regional monitoring of the region where the monitoring target is located in the target region. It can be understood that the target area in the embodiment of the present invention is an area with a larger area.
In the embodiment of the present invention, a camera for acquiring an image of a target area may be determined as a target camera.
It can be understood that, because the shooting range of the cameras is limited, the number of the target cameras in the embodiment of the invention is multiple, each target camera can acquire images of a part of the target area, and the images acquired by all the target cameras at a certain moment are spliced to acquire the complete images of the target area.
Optionally, in the embodiment of the present invention, a plurality of camera supports may be preset in the target area, so that each target camera may be fixed at the top end of one camera support respectively. The size and the placement position of the camera support may be predefined based on a priori knowledge and/or actual conditions. The size and the placement position of the camera bracket are not particularly limited in the embodiment of the invention.
Optionally, in the embodiment of the invention, each target camera may be further disposed at the bottom of an unmanned aerial vehicle, and the target camera may acquire an image of the target area in a process that the unmanned aerial vehicle drives the target camera to fly in the target area.
It should be noted that, the target camera in the embodiment of the present invention may refer to any target camera; the image of the current time of the target area may generally refer to an image obtained by photographing the target area at the current time of the target camera.
It should be noted that, in the embodiment of the present invention, the target camera is built with a global navigation satellite system (Global Navigation Satellite System, GNSS) receiver and an inertial navigation system (Inertial Navigation System, INS). The lens of the target camera can be rotated in the horizontal direction and the vertical direction. The target camera is also equipped with a communication device.
As an optional embodiment, before acquiring the image of the current moment of the target area, the position information of the current moment of the target camera and the pose information of the current moment of the lens of the target camera, the method further includes: the lens of the control target camera rotates in the horizontal direction and/or the vertical direction according to a preset rule.
Optionally, in the embodiment of the present invention, the rotation angle of the lens of the target camera in the horizontal direction and/or the vertical direction may be controlled to change once every preset period, and each time the rotation angle of the lens of the target camera changes, the target camera is controlled to collect an image of the target area;
correspondingly, in the embodiment of the invention, the time length of the interval between the previous time and the current time is the preset time length.
Optionally, in the embodiment of the invention, the uniform rotation of the lens of the target camera in the horizontal direction and/or the vertical direction can be controlled, and the target camera can be controlled to acquire the video image of the target area;
accordingly, in the embodiment of the invention, the interval between the current moment and the next moment can be determined according to the interval between any two adjacent frames in the video image data of the target area acquired by the target camera; that is, the image of the target area at the previous moment is the previous frame of the video captured by the target camera, and the image of the target area at the current moment is the current frame of the video captured by the target camera.
After the target camera shoots the image of the current moment of the target area, the image of the current moment of the target area can be sent to the area monitoring device in a data communication mode.
The target camera in the embodiment of the invention is used for acquiring video image data of a target area.
FIG. 2 is a second flow chart of the area monitoring method according to the present invention. As shown in fig. 2, after the target camera shoots the video image data of the target area, the video image data of the target area can be sent to the area monitoring device in a data communication mode, so that the video image data of the target area shot by the target camera can be read based on a machine vision technology algorithm, and the image of the current frame is extracted and used as the image of the target area at the current moment.
OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision and machine learning software library written in C++. It can run on various systems, has good portability, provides programming language interfaces for Python, MATLAB, Java, C# and others, and can realize functions such as image processing, target detection, motion tracking and object recognition. Because OpenCV is dedicated to real-time processing of real-world images and its execution speed is greatly improved by code optimization, it has unique advantages in the field of computer vision. Applying the OpenCV vision library to process video images at many frames per second exploits this speed advantage to the greatest extent.
According to the embodiment of the invention, based on an OpenCV image processing algorithm, video image data of a target area shot by a target camera can be read, and an image of a current frame is extracted and taken as an image of the target area at the current moment.
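As an illustration only, a minimal sketch of this frame-extraction step is given below; the stream address and function name are assumptions and the actual camera interface depends on the deployment.

```python
import cv2

def read_current_frame(stream_url: str):
    """Read the frame of the current moment from a target camera's video stream.

    stream_url is an assumed RTSP/file address; the real camera address is
    deployment-specific and not specified in the description above.
    """
    capture = cv2.VideoCapture(stream_url)
    if not capture.isOpened():
        raise RuntimeError(f"cannot open video stream: {stream_url}")
    ok, frame = capture.read()   # current frame = image of the current moment
    capture.release()
    if not ok:
        raise RuntimeError("failed to read the current frame")
    return frame                 # BGR image as a NumPy array


# usage sketch (hypothetical address)
# frame = read_current_frame("rtsp://target-camera-01/stream")
```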
In the embodiment of the invention, the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera can be described by using the position and posture system (position and orientation system, POS) data of the current moment of the target camera. The POS data of the target camera may include information such as Latitude (latitudes), longitude (longitudes), altitude (Elevation), heading angle (Phi), pitch angle (Omega), roll angle (Kappa), and the like.
Based on the operation data acquired at the current moment by the GNSS receiver and the INS built into the target camera, the POS data of the target camera at the current moment can be obtained.
The target camera can send the operation data acquired at the current moment by its built-in GNSS receiver and INS to the area monitoring device by means of data communication.
It should be noted that, the location information of the current time of the target camera may include latitude, longitude, and altitude of the current time of the target camera.
Based on the operation data acquired at the current moment by the GNSS receiver built into the target camera, the position information of the target camera at the current moment can be obtained.
It should be noted that the pose information of the lens of the target camera at the current moment may include the Euler angles (h, p, r) (heading, pitch, roll) of the lens of the target camera at the current moment. Because the lens of the target camera can only rotate in the horizontal direction and the vertical direction, in the implementation of the invention a spatial rectangular coordinate system can be established with the rotation point of the lens of the target camera as the origin, the horizontal direction as the X axis and the vertical direction as the Z axis; the rotation of the lens of the target camera can then be regarded as the Euler angles (h, p, r) of a rotation by the angle h around the X axis, then by the angle p around the Y axis, and finally by the angle r around the Z axis, where the angle p is 0°.
Based on the operation data acquired at the current moment by the INS built into the target camera, the Euler angles (h, p, r) of the lens of the target camera at the current moment can be calculated.
Step 102, marking the area where the monitoring target is located in the image of the current moment of the target area based on the geographical position information and the digital elevation model data of the area where the monitoring target is located in the target area, the position information of the current moment of the target camera, the posture information of the current moment of the lens of the target camera and the lens parameter information of the target camera.
It should be noted that, the geographical location information of the area where the monitoring target is located in the target area may include geographical location information of each ground point on the boundary of the area where the monitoring target is located in the target area; the geographical position information of the area where the monitoring target is located in the target area may also include geographical position information of each ground point in the area where the monitoring target is located in the target area, and the like. The geographic location information in the embodiment of the invention may include longitude and latitude.
Optionally, in the embodiment of the present invention, the area where the monitoring target is located in the remote sensing image of the target area may be marked by means of a map spot, so as to obtain a map spot remote sensing image of the target area. The pattern spots of the area where the monitoring target is located in the remote sensing image of the target area can be generated by means of manual sketching, land block segmentation and the like.
After the image spot remote sensing image of the target area is obtained, the geographic position information of the area where the monitoring target is located in the target area can be obtained based on the image spot remote sensing image of the target area.
It should be noted that, a primary key is set in the geographical location information of the area where the monitoring target is located in the target area, so as to retrieve the geographical location information of the area where the monitoring target is located in the target area.
In the embodiment of the invention, the geographical position information of the area where the monitoring target is located in the target area can be obtained through data query, user input and other modes.
It can be understood that, because the geographical position information of the area where the monitoring target is located in the target area includes plane information, but the image of the current moment of the target area is acquired from the air by the target camera, the characteristics of the target area in the vertical direction also need to be expressed by the digital elevation model data of the area where the monitoring target is located in the target area.
The digital elevation model (Digital Elevation Model, abbreviated as DEM) realizes the digital simulation of the ground terrain (namely the digital expression of the terrain surface morphology) through the limited terrain elevation data.
In the embodiment of the invention, the digital elevation model data of the area where the monitoring target is located in the target area may be downloaded from an open-source network geographic database such as the Geospatial Data Cloud, or may be input or provided by a user.
In order to improve the accuracy of monitoring the area where the monitoring target is located in the target area, digital elevation model data of the area where the monitoring target is located in the target area with higher spatial resolution should be selected.
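As a concrete illustration of looking up DEM elevations for ground points, the following sketch assumes the DEM is stored as a GeoTIFF in geographic (longitude/latitude) coordinates and uses the rasterio library; the file path and storage format are assumptions, not requirements of the method.

```python
import rasterio

def elevation_at(dem_path: str, lon: float, lat: float) -> float:
    """Return the DEM elevation at a geographic position (lon, lat).

    Assumes the GeoTIFF is georeferenced in the same lon/lat coordinate
    system as the geographic position information of the monitored area.
    """
    with rasterio.open(dem_path) as dem:
        row, col = dem.index(lon, lat)      # map coordinates -> raster indices
        return float(dem.read(1)[row, col])
```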
The lens parameter information of the target camera in the embodiment of the invention can include focal length, resolution and CMOS (Complementary Metal Oxide Semiconductor) imaging size. The factory information of the target camera may include lens parameter information of the target camera.
According to the embodiment of the invention, the lens parameter information of the target camera can be obtained through information inquiry, user input and other modes.
After geographical position information and digital elevation model data of an area where a monitoring target is located in a target area, position information of the current moment of a target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera are obtained, the area where the monitoring target is located can be marked in an image of the current moment of the target area through numerical calculation, deep learning, model design and other modes.
As an optional embodiment, marking the area where the monitoring target is located in the image of the current moment of the target area based on the geographical position information and the digital elevation model data of the area where the monitoring target is located in the target area, the position information of the current moment of the target camera, the pose information of the current moment of the lens of the target camera and the lens parameter information of the target camera, includes: and determining target geographic position information and target digital elevation model data corresponding to the image of the current moment of the target area in the geographic position information and the digital elevation model data of the area where the monitoring target is located in the target area based on the position information of the current moment of the target camera or based on the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera.
Specifically, after the geographical position information and digital elevation model data of the area where the monitoring target is located in the target area, the position information of the target camera at the current moment, the attitude information of the lens of the target camera at the current moment and the lens parameter information of the target camera are obtained, the target geographic position information and target digital elevation model data corresponding to the image of the current moment of the target area can be determined through numerical calculation and data screening.
As an optional embodiment, determining, based on the position information of the current moment of the target camera, the target geographic position information and the target digital elevation model data corresponding to the image of the current moment of the target area in the geographic position information and the digital elevation model data of the area where the monitoring target is located in the target area, includes: determining, based on the position information of the target camera at the current moment, the vertical projection point of the target camera in the target area;
a circular area taking the vertical projection point as a circle center and the preset distance as a radius is determined as an associated area of the target camera;
and screening the geographic position information and the digital elevation model data of the related area of the target camera from the geographic position information and the digital elevation model data of the area where the monitoring target is located in the target area, and taking the geographic position information and the digital elevation model data as target geographic position information and target digital elevation model data.
Specifically, after geographical position information and digital elevation model data of an area where a monitoring target is located in a target area, position information of the current moment of a target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera are obtained, a projection point of the target camera in the target area can be determined based on the position information of the current moment of the target camera, and then a circular area taking the projection point as a circle center and a preset distance as a radius can be determined as a correlation area corresponding to the target camera.
It should be noted that the preset distance may be determined according to a priori knowledge and/or actual conditions. The specific value of the preset distance in the embodiment of the invention is not limited.
Alternatively, the preset distance may range from 400 meters to 600 meters, for example, the preset distance may be 400 meters, 500 meters, or 600 meters.
Preferably, the preset distance may be 500 meters.
After the relevant area corresponding to the target camera is determined, the area where the monitoring target is located in the relevant area corresponding to the target camera can be determined based on the geographic position information of the area where the monitoring target is located in the target area, and then the geographic position information of the area where the monitoring target is located in the relevant area corresponding to the target camera can be screened from the geographic position information of the area where the monitoring target is located in the target area, and the geographic position information is used as the target geographic position information corresponding to the image of the current moment of the target area.
It may be appreciated that the target geographic location information includes geographic location information of each ground point in the associated area corresponding to the target camera.
After determining the area where the monitoring target is located in the associated area corresponding to the target camera, the digital elevation model data of the area where the monitoring target is located in the associated area corresponding to the target camera can be screened from the digital elevation model data of the area where the monitoring target is located in the target area, and the digital elevation model data is used as the target digital elevation model data corresponding to the image of the current moment of the target area.
It will be appreciated that the target digital elevation model data includes digital elevation model data for each ground point in the associated region corresponding to the target camera.
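A minimal sketch of this screening step is given below, assuming each ground point of the monitored area is stored as (id, longitude, latitude, elevation) and using a geodesic distance from pyproj; the record layout, ellipsoid choice and the 500 m default radius (the preferred preset distance) are assumptions.

```python
from pyproj import Geod

geod = Geod(ellps="WGS84")  # reference ellipsoid is an assumption

def screen_associated_area(points, camera_lon, camera_lat, radius_m=500.0):
    """Keep only the ground points that fall inside the circular associated
    area of the target camera (centre: vertical projection point of the
    camera, radius: the preset distance)."""
    selected = []
    for point_id, lon, lat, elevation in points:
        _, _, distance = geod.inv(camera_lon, camera_lat, lon, lat)
        if distance <= radius_m:
            selected.append((point_id, lon, lat, elevation))
    return selected
```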
And acquiring pixel coordinate values corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the attitude information of the current moment of the lens of the target camera and the lens parameter information of the target camera.
Specifically, after the target geographic position information and the target digital elevation model data corresponding to the image of the current time of the target area, the posture information of the current time of the lens of the target camera and the lens parameter information of the target camera are obtained, the area where the monitoring target is located can be marked in the image of the current time of the target area in a numerical calculation mode.
As an optional embodiment, acquiring pixel coordinate values corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the pose information of the current moment of the lens of the target camera, and the lens parameter information of the target camera includes: and acquiring the geodetic coordinate value corresponding to the target geographic position information based on the target digital elevation model data and the target geographic position information.
Fig. 3 is a schematic flow chart of coordinate system conversion in the area monitoring method provided by the invention. As shown in fig. 3, after the target geographic position information and the target digital elevation model data corresponding to the image at the current time of the target area are obtained, the coordinate value of each ground point in the area where the monitoring target is located in the target area in the geodetic coordinate system can be determined as the geodetic coordinate value corresponding to the target geographic position information based on the target geographic position information and the target digital elevation model data corresponding to the image at the current time of the target area.
The geodetic coordinate values may be used to describe the longitude L, latitude B, and elevation H of any point on the earth.
And converting the coordinate system of the geodetic coordinate value corresponding to the target geographic position information through Gaussian forward calculation, and obtaining the object coordinate value corresponding to the target geographic position information.
The coordinate system conversion between the geodetic coordinate system and the object coordinate system can be realized by gaussian forward calculation. Therefore, in the embodiment of the invention, gaussian forward calculation can be adopted to perform object coordinate system conversion on the geodetic coordinate value corresponding to the target geographic position information, so as to obtain the object coordinate value corresponding to the target geographic position information.
The object coordinate system is used for describing the position of the ground point in the object space.
Specifically, the forward and inverse Gaussian calculations essentially describe the mapping relationship between the geodetic coordinate system and the Gauss-Krüger projection coordinate system. Longitude and latitude are projected by the Gauss-Krüger projection to generate a Gauss-Krüger projection coordinate system. The Y axis of the Gauss-Krüger projection coordinate system points due east along the equator, and the X axis points due north along the central meridian.
It should be noted that, in the embodiment of the present invention, the Gauss-Krüger projection coordinate system may be determined as the object coordinate system.
For any ground point i in the area where the monitoring target is located in the target area, based on its geodetic coordinate values (L_i, B_i, H_i), the Gauss projection coordinate values (x_i^Gauss, y_i^Gauss) of the ground point i can be obtained through Gaussian forward calculation.
In the Gaussian forward calculation, all angles are expressed in radians.
In the Gaussian forward calculation, the basic ellipsoid parameters include the semi-major axis a, the flattening f, the semi-minor axis b, the first eccentricity e and the second eccentricity e′, where
b = a(1 - f)    (3)
l″ denotes the difference between the longitude of the ground point i and the longitude of the central meridian L_0; when a 6-degree zone is used, L_0 is obtained by dividing the longitude of the ground point i by 3, rounding the result, and multiplying it by 3 to give the local central meridian.
N denotes the radius of curvature of the meridian corresponding to the ground point i and is calculated from the ellipsoid parameters and the latitude B_i. The auxiliary quantities t and η² are given by
t = tan B_i    (7)
η² = e′² cos² B_i    (8)
and ρ″ denotes the number of seconds of arc in one radian.
X denotes the meridian arc length corresponding to the ground point i and is computed from the latitude B_i using the basic constants a_0, a_2, a_4, a_6, a_8 and m_0, m_2, m_4, m_6, m_8 of the reference ellipsoid.
The conversion of the Gauss projection coordinate values to object space coordinates is as follows. The Gauss-Krüger plane coordinate system is a left-handed two-dimensional plane coordinate system, that is, its origin lies on the equator, the due-east direction is the positive Y-axis direction and the due-north direction is the positive X-axis direction. The object coordinate system is a right-handed three-dimensional coordinate system, that is, its origin lies on the equator, the due-east direction is the positive X-axis direction, the due-north direction is the positive Y-axis direction, and the upward plumb direction is the positive Z-axis direction. The Gauss projection coordinate values (x_i^Gauss, y_i^Gauss) of the ground point i obtained by the Gaussian forward calculation are therefore converted to the object space coordinates (X_A, Y_A, Z_A) by exchanging the two plane coordinates and taking the elevation as the vertical coordinate, i.e. X_A = y_i^Gauss, Y_A = x_i^Gauss, Z_A = H_i.
and acquiring image coordinate values corresponding to the target geographic position information through a collineation equation based on the position information of the target camera at the current moment, the posture information of the lens of the target camera at the current moment, the lens parameter information of the target camera and the object coordinate values corresponding to the target geographic position information.
It should be noted that, coordinate system conversion between the object-side coordinate system and the image-side coordinate system may be achieved based on the collineation equation. Therefore, in the embodiment of the invention, based on the attitude information of the current moment of the lens of the target camera and the lens parameter information of the target camera, the image space coordinate system conversion is performed on the object space coordinate value corresponding to the target geographic position information by a collinear method, so as to obtain the image space coordinate value corresponding to the target geographic position information.
The collinearity equation is the mathematical basis of the central projection concept and an important theoretical foundation of photogrammetric processing methods.
Specifically, in order to achieve the coordinate system conversion between the object space coordinate system and the image space coordinate system, it is necessary to construct a collinearity equation based on the interior and exterior orientation elements of the image at the current moment of the target area, so as to establish the coordinate conversion relationship between pixels in the image at the current moment of the target area and ground points.
Based on the position information of the target camera at the current moment and the pose information of the lens of the target camera at the current moment, the exterior orientation element values (X_S, Y_S, Z_S) of the image at the current moment of the target area can be calculated.
Based on the lens parameter information of the target camera, the interior orientation element values (x_0, y_0, F) can be obtained, where x_0 and y_0 represent the offset of the principal point of the image at the current moment of the target area from the image center point, and F represents the principal distance (the perpendicular distance from the lens center to the image plane).
It should be noted that, when the lens of the camera forms an image, if the image plane falls exactly at the focal point, the principal distance can be taken to be equal to the focal length of the camera.
After the exterior orientation element values (X_S, Y_S, Z_S) and the interior orientation element values (x_0, y_0, F) of the image at the current moment of the target area are acquired, a collinearity equation can be constructed based on these values and the target digital elevation model data. Based on the collinearity equation, when the ground point i lies within the shooting range of the target camera at the current moment, the image space coordinate values (x_i, y_i) corresponding to the ground point i can be obtained from its object space coordinate values (X_A, Y_A, Z_A).
The above collinearity relation can be expressed in matrix form as:
[X_A - X_S, Y_A - Y_S, Z_A - Z_S]^T = λ · R · [x_i, y_i, -F]^T
Inverting the above collinearity equation yields:
[x_i, y_i, -F]^T = (1/λ) · R^T · [X_A - X_S, Y_A - Y_S, Z_A - Z_S]^T
in the three-dimensional space, the coordinate rotation is generally represented by a 3×3 orthogonal matrix, and the rotation matrix of the euler angles (h, p, r) of the current moment of the lens of the target camera can be calculated by the following formula.
/>
where R_h represents the rotation matrix corresponding to the angle h, R_p represents the rotation matrix corresponding to the angle p, and R_r represents the rotation matrix corresponding to the angle r.
When the rotation matrix is calculated from the Euler angles (h, p, r) of the lens of the target camera at the current moment, the per-axis rotation matrices are composed in the order of rotation (right-multiplied for rotations about the moving axes and left-multiplied for rotations about the fixed axes), yielding the rotation matrix R corresponding to the Euler angles (h, p, r) of the lens of the target camera at the current moment.
λ is a scale factor between the image space coordinate system and the object space coordinate system.
it should be noted that, based on the above collineation equation, the object coordinate value corresponding to each ground point in the target geographic location information is traversed, and the image coordinate value corresponding to the target geographic location information may be obtained.
And converting a coordinate system of an image side coordinate value corresponding to the target geographic position information based on lens parameter information of the target camera, and obtaining a pixel coordinate value corresponding to the target geographic position information.
It should be noted that, in the image plane, the origin of the image side coordinates is the center point of the image, the x-axis is horizontal to the right, and the y-axis is vertical to the top, but in the pixel coordinate system of the image, the origin of the pixel coordinate system is at the top left corner of the image, the x-axis of the pixel coordinate system is rightward along the upper boundary of the image, the y-axis of the pixel coordinate system is downward along the left boundary of the image, and the pixel coordinate values are only integers.
Based on the image space coordinate values (x_i, y_i) corresponding to the ground point i and the lens parameter information of the target camera, the pixel coordinate values (u_i, v_i) corresponding to the ground point i in the image of the current moment of the target area can be obtained by:
u_i = W/2 + x_i / pixelsize
v_i = H/2 - y_i / pixelsize
where pixelsize represents the physical size of a single pixel of the target camera, W and H are the width and height of the image in pixels, and u_i and v_i are rounded to integer values.
It should be noted that, when images are captured on film, there may be a mechanical error between the principal point and the center point of the image; however, for a camera equipped with a CMOS sensor, the deviation between the principal point and the image center point is calibrated before the camera leaves the factory, so this deviation can be considered to be approximately 0, that is, x_0 and y_0 are approximately 0.
Based on the lens parameter information of the target camera, the physical size pixelsize of a single pixel of the target camera can be calculated as:
pixelsize = w / W
where W represents the width of the target image in pixels, H represents the height of the target image in pixels, and w and h represent the CMOS imaging dimensions of the lens of the target camera; the width W and height H of the target image and the CMOS imaging size of the lens of the target camera can be determined from the lens parameter information of the target camera.
It should be noted that when video or images are captured at a 16:9 aspect ratio, the CMOS sensor is not fully used and only its middle portion is active, whereas at a 4:3 aspect ratio the number of image pixels reaches its maximum; special attention is therefore required when calculating the physical pixel size pixelsize.
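A short sketch of this image-space-to-pixel conversion is given below; the rounding to integer pixel indices and the derivation of pixelsize from the CMOS width are taken from the description above, while the parameter names are illustrative assumptions.

```python
def image_to_pixel(x_img: float, y_img: float,
                   image_width_px: int, image_height_px: int,
                   cmos_width: float) -> tuple[int, int]:
    """Convert image space coordinates (origin at image centre, y up) into
    pixel coordinates (origin at top-left corner, y down)."""
    pixelsize = cmos_width / image_width_px       # physical size of one pixel
    u = int(round(image_width_px / 2.0 + x_img / pixelsize))
    v = int(round(image_height_px / 2.0 - y_img / pixelsize))
    return u, v
```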
And marking the area where the monitoring target is located in the image of the current moment of the target area based on the pixel coordinate value corresponding to the target geographic position information.
Specifically, after the pixel coordinate value corresponding to the target geographic position information is obtained, the area where the monitoring target is located may be marked in the image of the current time of the target area directly based on the pixel coordinate value, or a pattern spot may be generated in the blank image based on the pixel coordinate value, and the area where the monitoring target is located is marked in the image of the current time of the target area through image superposition, so as to obtain the image of the current time of the target area marked with the area where the monitoring target is located.
Marking the area where the monitoring target is located in the image of the current moment of the target area based on the pixel coordinate value corresponding to the target geographic position information, comprising: generating a pattern spot in the blank image based on the pixel coordinate value corresponding to the target geographic position information, and obtaining a pattern spot image corresponding to the target camera at the current moment;
cutting out a target pattern spot image corresponding to the image of the current moment of the target area from the pattern spot image corresponding to the target camera at the current moment;
and overlapping the target pattern spot image with the image of the current moment of the target area frame by frame to obtain the image of the current moment of the target area marked with the area where the monitoring target is located.
Specifically, in the embodiment of the invention, the frame-by-frame overlapping of the pattern spot image at the current moment with the image of the current moment of the target area can be realized based on a machine vision algorithm.
In the embodiment of the invention, the frame-by-frame overlapping of the pattern spot image at the current moment with the image of the current moment of the target area can be realized based on an OpenCV image processing algorithm.
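The pattern spot generation and frame-by-frame overlay can be sketched with OpenCV as follows; representing the marked area as a single polygon of boundary pixel coordinates, the red fill colour and the blending weight are illustrative assumptions.

```python
import cv2
import numpy as np

def overlay_pattern_spot(frame, boundary_pixels, alpha=0.4):
    """Draw the pattern spot of the monitored area on a blank image and
    blend it onto the image of the current moment of the target area."""
    spot_image = np.zeros_like(frame)                       # blank image
    polygon = np.array(boundary_pixels, dtype=np.int32).reshape((-1, 1, 2))
    cv2.fillPoly(spot_image, [polygon], color=(0, 0, 255))  # mark the area in red
    return cv2.addWeighted(frame, 1.0, spot_image, alpha, 0)
```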
Fig. 4 is a diagram showing a comparison between an area of a monitoring target in a field of view of a target camera and an area of the monitoring target in an image of a current time of the target area in the area monitoring method.
Step 103, identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
Specifically, after the image of the current time of the target area marked with the area where the monitoring target is located is obtained, suspicious target identification can be performed on the area where the monitoring target is located in the image of the current time of the target area in various manners, for example, suspicious target identification can be performed on the area where the monitoring target is located in the image of the current time of the target area in a machine learning manner, a target interpretation manner, a block segmentation manner and the like.
It should be noted that, in the embodiment of the present invention, the suspicious target may be determined based on the monitoring target, prior knowledge, actual conditions, and the like. For example, where the monitoring target is cultivated land or a pasture area, the suspicious target may include engineering vehicles such as bulldozers and cranes; where the monitoring target is a wetland, the suspicious target may include ships. The suspicious target is not particularly limited in the embodiment of the present invention.
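Purely as an illustration of the machine-learning route, the sketch below restricts detection to the marked area by masking the rest of the frame before running a detector; the detector is treated as a black box and the class labels follow the examples just given, so all names here are assumptions rather than the patent's prescribed implementation.

import cv2
import numpy as np

SUSPICIOUS_CLASSES = {"bulldozer", "crane", "ship"}   # example labels only

def find_suspicious_targets(frame, spot_polygon_px, detector, min_score=0.5):
    # Keep only the area where the monitoring target is located.
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(spot_polygon_px, dtype=np.int32)], 255)
    region = cv2.bitwise_and(frame, frame, mask=mask)
    # `detector` is any callable returning (label, score, box) tuples.
    return [d for d in detector(region)
            if d[0] in SUSPICIOUS_CLASSES and d[1] >= min_score]

def monitor_frame(frame, spot_polygon_px, detector, raise_alarm):
    hits = find_suspicious_targets(frame, spot_polygon_px, detector)
    if hits:
        raise_alarm(hits)   # acoustic, optical or electrical signal, or a message on a terminal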
And under the condition that the suspicious object exists in the area where the monitoring target is located in the image of the current moment of the target area, an alarm signal can be sent out.
It should be noted that the alarm signal may be any one or more of an acoustic signal, an optical signal, and an electrical signal. The alarm signal may also be alarm information displayed on a display interface of a terminal used by relevant personnel. The specific type of the alarm signal is not limited in the embodiment of the present invention.
According to the embodiment of the present invention, the area where the monitoring target is located is marked in the image of the current moment of the target area based on the geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, the position information of the current moment of the target camera, the attitude information of the current moment of the lens of the target camera, and the lens parameter information of the target camera, and an alarm signal is sent out when a suspicious object is found in the area where the monitoring target is located in the image of the current moment of the target area. In this way, the area where the monitoring target is located within a large region can be monitored and protected more flexibly and at lower cost, the management capability over the area where the monitoring target is located can be improved, and damage to that area can be better avoided.
Fig. 5 is a schematic structural diagram of the area monitoring device provided by the invention. The area monitoring apparatus provided by the present invention will be described below with reference to fig. 5, and the area monitoring apparatus described below and the area monitoring method provided by the present invention described above may be referred to correspondingly. As shown in fig. 5, the apparatus includes: a data acquisition module 501, a region determination module 502, and a region monitoring module 503.
The data acquisition module 501 is configured to acquire an image of a current time of a target area, position information of a current time of a target camera, and pose information of a current time of a lens of the target camera, where the image of the current time of the target area is captured by the target camera;
the area determining module 502 is configured to mark, in an image of the current time of the target area, an area where the monitoring target is located based on geographic location information and digital elevation model data of the area where the monitoring target is located in the target area, location information of the current time of the target camera, pose information of the current time of the lens of the target camera, and lens parameter information of the target camera;
the area monitoring module 503 is configured to identify a suspicious object in an area where the monitoring object is located in the image of the current time of the target area, and send an alarm signal when it is determined that the suspicious object exists in the area where the monitoring object is located in the image of the current time of the target area.
Specifically, the data acquisition module 501, the area determination module 502, and the area monitoring module 503 are electrically connected.
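Purely as an illustration of how the three modules of Fig. 5 might be wired together in software (the class and method names below are assumptions, not an API defined by the patent):

class AreaMonitoringDevice:
    # Sketch of the device in Fig. 5: data acquisition (501), area determination (502),
    # area monitoring (503).
    def __init__(self, camera, marker, detector, alarm):
        self.camera = camera      # data acquisition module 501
        self.marker = marker      # area determination module 502
        self.detector = detector  # area monitoring module 503
        self.alarm = alarm

    def step(self):
        # 501: image of the current moment plus camera position and lens attitude
        frame, cam_position, lens_attitude = self.camera.acquire()
        # 502: mark the area where the monitoring target is located
        marked_frame, spot_polygon = self.marker.mark(frame, cam_position, lens_attitude)
        # 503: identify suspicious targets and raise an alarm if any are found
        if self.detector.has_suspicious_target(marked_frame, spot_polygon):
            self.alarm.trigger()
        return marked_frame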
According to the area monitoring device provided by the embodiment of the present invention, the area where the monitoring target is located is marked in the image of the current moment of the target area based on the geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, the position information of the current moment of the target camera, the attitude information of the current moment of the lens of the target camera, and the lens parameter information of the target camera, and an alarm signal is sent out when a suspicious object is found in the area where the monitoring target is located in the image of the current moment of the target area. In this way, the area where the monitoring target is located within a large region can be monitored and protected more flexibly and at lower cost, the management capability over the area where the monitoring target is located can be improved, and damage to that area can be better avoided.
Fig. 6 illustrates a physical schematic diagram of an electronic device, as shown in fig. 6, which may include: processor 610, communication interface (Communications Interface) 620, memory 630, and communication bus 640, wherein processor 610, communication interface 620, and memory 630 communicate with each other via communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a region monitoring method comprising: acquiring an image of a current moment of a target area, position information of the current moment of a target camera and attitude information of a lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera; marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera; and identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
Further, the logic instructions in the memory 630 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Based on the foregoing, an electronic fence system includes: the electronic equipment, the target camera and the alarm equipment;
the target camera is used for shooting an image of the target area and sending the shot image of the current moment of the target area to the electronic equipment;
The alarm device is used for responding to the control of the electronic device and sending out an alarm signal.
It should be noted that in the embodiment of the present invention there may be a plurality of target cameras, each target camera may acquire images of a part of the target area, and the images acquired by all the target cameras at a given moment may be stitched to obtain a complete image of the target area.
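A minimal sketch of such stitching, using OpenCV's high-level stitcher and assuming the per-camera frames are roughly synchronized and overlapping (one possible realization, not the one prescribed by the patent):

import cv2

def stitch_camera_frames(frames):
    # frames: list of images captured by the individual target cameras at the same moment
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, complete_image = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return complete_image   # complete image of the target area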
After the target camera shoots the image of the current moment of the target area, the image of the current moment of the target area can be sent to the electronic equipment in a data communication mode, so that the electronic equipment executes the area monitoring method provided by the invention to monitor the area where the monitoring target in the target area is located.
It should be noted that, the specific steps of performing the area monitoring on the area where the monitoring target is located in the target area by the electronic device executing the area monitoring method provided by the present invention may be referred to the content of each embodiment, which is not described in detail in the embodiments of the present invention.
The electronic fence system comprises the electronic device and the target camera, and has the advantages of low construction cost and a flexible monitoring range. It can monitor the area where the monitoring target is located in the target area without arranging a front-end detection fence, which reduces both equipment cost and later maintenance cost. Where the area where the monitoring target is located changes, the system can still monitor that area flexibly, and land damage caused by arranging a front-end detection fence is avoided. The system can therefore improve the management capability over the area where the monitoring target is located, better prevent that area from being damaged, and has wide applicability, strong practicability, broad application prospects and good potential for popularization.
As an alternative embodiment, the electronic fence system further comprises a bracket; the bracket is arranged at a preset position in the target area and is used for fixing the target camera.
In the embodiment of the invention, a plurality of brackets can be preset in the target area, and each target camera can be respectively fixed at the top end of one camera bracket.
The size and placement position of the bracket may be predefined based on prior knowledge and/or actual conditions. The size and placement position of the bracket are not particularly limited in the embodiment of the present invention.
As an alternative embodiment, the electronic fence system further comprises: unmanned plane; the unmanned aerial vehicle is used for fixing a target camera;
the electronic equipment is also used for controlling the unmanned aerial vehicle to fly in the target area according to a preset route;
the unmanned aerial vehicle is used for responding to the control of the electronic equipment and driving the target camera to fly in the target area according to the preset route.
In the embodiment of the invention, each target camera can be arranged at the bottom of one unmanned aerial vehicle. And the unmanned aerial vehicle responds to the control of the electronic equipment, and drives the target camera to fly in the target area according to a preset route, so that the target camera can acquire the image of the target area.
It should be noted that the preset route may be determined based on a priori knowledge and/or actual conditions. The preset route in the embodiment of the present invention is not limited.
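Purely as an illustration of flying such a preset route (the drone control interface and the waypoint coordinates below are hypothetical placeholders for whatever flight-control SDK and survey data are actually used):

PRESET_ROUTE = [              # (latitude, longitude, altitude in metres), illustrative values
    (36.6512, 117.1201, 80.0),
    (36.6525, 117.1230, 80.0),
    (36.6540, 117.1215, 80.0),
]

def fly_preset_route(drone, route=PRESET_ROUTE):
    # `drone` stands in for a hypothetical flight-control client exposed by the UAV.
    drone.takeoff()
    for lat, lon, alt in route:
        drone.goto(lat, lon, alt)     # fly the camera to the next waypoint over the target area
    drone.return_to_home()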
According to the embodiment of the invention, the target camera in the electronic fence system can be fixedly arranged or can be movably arranged, so that the image of the target area can be acquired in various modes, more scenes can be adapted, and the applicability and the practicability of the electronic fence system can be further improved.
In another aspect, the present invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of performing the region monitoring method provided by the methods described above, the method comprising: acquiring an image of a current moment of a target area, position information of the current moment of a target camera and attitude information of a lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera; marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera; and identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the method for monitoring a region provided by the above methods, the method comprising: acquiring an image of a current moment of a target area, position information of the current moment of a target camera and attitude information of a lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera; marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera; and identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A method for monitoring a region, comprising:
acquiring an image of a current moment of a target area, position information of the current moment of a target camera and attitude information of a current moment of a lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera;
marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera;
and identifying suspicious targets in the area where the monitoring targets are located in the image of the current moment of the target area, and sending out alarm signals under the condition that suspicious objects exist in the area where the monitoring targets are located in the image of the current moment of the target area.
2. The area monitoring method according to claim 1, wherein the marking the area of the monitoring target in the image of the current time of the target area based on the geographical location information and the digital elevation model data of the area of the monitoring target in the target area, the location information of the current time of the target camera, the pose information of the current time of the lens of the target camera, and the lens parameter information of the target camera includes:
Determining target geographic position information and target digital elevation model data corresponding to an image of the current moment of the target area in geographic position information and digital elevation model data of the area where the monitoring target is located based on the position information of the current moment of the target camera or based on the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera;
acquiring pixel coordinate values corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the attitude information of the current moment of the lens of the target camera and the lens parameter information of the target camera;
and marking the area where the monitoring target is located in the image of the current moment of the target area based on the pixel coordinate value corresponding to the target geographic position information.
3. The method according to claim 2, wherein the obtaining pixel coordinate values corresponding to the target geographic position information based on the target digital elevation model data, the target geographic position information, the pose information of the current moment of the lens of the target camera, and the lens parameter information of the target camera includes:
Acquiring a geodetic coordinate value corresponding to the target geographic position information based on the target digital elevation model data and the target geographic position information;
converting a coordinate system of a geodetic coordinate value corresponding to the target geographic position information through Gaussian forward calculation to obtain an object coordinate value corresponding to the target geographic position information;
acquiring an image coordinate value corresponding to the target geographic position information through a collinearity equation based on the position information of the target camera at the current moment, the attitude information of the lens of the target camera at the current moment, the lens parameter information of the target camera and the object coordinate value corresponding to the target geographic position information;
and converting a coordinate system of an image coordinate value corresponding to the target geographic position information based on the lens parameter information of the target camera, and obtaining a pixel coordinate value corresponding to the target geographic position information.
4. The area monitoring method according to claim 3, wherein determining, based on the location information of the current time of the target camera, the target geographic location information and the target digital elevation model data corresponding to the image of the current time of the target area from the geographic location information and the digital elevation model data of the area where the monitoring target is located in the target area includes:
Determining, based on the position information of the current moment of the target camera, a vertical projection point of the target camera in the target area;
determining a circular area taking the vertical projection point as a circle center and a preset distance as a radius as an associated area of the target camera;
and screening the geographic position information and the digital elevation model data of the related area of the target camera from the geographic position information and the digital elevation model data of the area where the monitoring target is located in the target area, and taking the geographic position information and the digital elevation model data of the related area of the target camera as the target geographic position information and the target digital elevation model data.
5. The method for monitoring a region according to claim 4, wherein marking the region where the monitoring target is located in the image of the current time of the target region based on the pixel coordinate values corresponding to the target geographic position information comprises:
generating a pattern spot in a blank image based on the pixel coordinate value corresponding to the target geographic position information, to obtain a pattern spot image corresponding to the target camera at the current moment;
cutting out a target pattern spot image corresponding to the image of the current moment of the target area from the pattern spot image corresponding to the target camera at the current moment;
and overlaying the target pattern spot image on the image of the current moment of the target area frame by frame, to obtain the image of the current moment of the target area marked with the area where the monitoring target is located.
6. The area monitoring method according to any one of claims 1 to 5, wherein before the obtaining the image of the current time of the target area, the position information of the current time of the target camera, and the pose information of the current time of the lens of the target camera, the method further includes:
and controlling the lens of the target camera to rotate in the horizontal direction and/or the vertical direction according to a preset rule.
7. An area monitoring apparatus, comprising:
the data acquisition module is used for acquiring an image of the current moment of the target area, the position information of the current moment of the target camera and the posture information of the current moment of the lens of the target camera, wherein the image of the current moment of the target area is shot by the target camera;
the area determining module is used for marking the area where the monitoring target is located in an image of the current moment of the target area based on geographic position information and digital elevation model data of the area where the monitoring target is located in the target area, position information of the current moment of the target camera, attitude information of the current moment of a lens of the target camera and lens parameter information of the target camera;
The regional monitoring module is used for identifying suspicious targets in the region where the monitoring targets are located in the image of the target region at the current moment, and sending out alarm signals under the condition that suspicious objects exist in the region where the monitoring targets are located in the image of the target region at the current moment.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the area monitoring method of any one of claims 1 to 6.
9. An electronic fence system, comprising: the electronic device of claim 8, and a target camera;
the target camera is used for acquiring image data of a target area and sending the image data to the electronic equipment.
10. The electronic fence system of claim 9, further comprising: a bracket; the bracket is arranged at a preset position in the target area; the bracket is used for fixing the target camera.
11. The electronic fence system of claim 9, further comprising: unmanned plane; the unmanned aerial vehicle is used for fixing the target camera;
The electronic equipment is also used for controlling the unmanned aerial vehicle to fly in the target area according to a preset route;
the unmanned aerial vehicle is used for responding to the control of the electronic equipment and driving the target camera to fly in the target area according to the preset route.
12. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the area monitoring method according to any one of claims 1 to 6.
CN202311474687.8A 2023-11-07 2023-11-07 Regional monitoring method and device, electronic equipment and electronic fence system Pending CN117746313A (en)

Publications (1)

Publication Number: CN117746313A; Publication Date: 2024-03-22

Family ID: 90259816

Country Status (1): CN, CN117746313A (en)

Legal Events

Date Code Title Description
PB01 Publication