CN111212272B - Disaster monitoring method and device, storage medium and electronic device - Google Patents
- Publication number
- CN111212272B (application CN202010072470.4A / CN202010072470A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY › H04—ELECTRIC COMMUNICATION TECHNIQUE › H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00 Television systems › H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast › H04N7/181 for receiving images from a plurality of remote sources
- H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof › H04N23/60 Control of cameras or camera modules › H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H04N23/00 › H04N23/60 › H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
Abstract
The invention provides a disaster monitoring method and device, a storage medium and an electronic device. The method includes: acquiring first image information of a target area in which a disaster occurs, respectively sent by at least two image pickup devices; determining the image pickup areas of the at least two image pickup devices based on the respectively sent first image information; and, when the image pickup areas of the at least two image pickup devices do not completely cover the target area, performing shooting adjustment on the image pickup devices so that the adjusted image pickup areas completely cover the target area. The invention solves the problems in the related art that the overall situation of a disaster site cannot be grasped and the most reasonable rescue strategy cannot be made: the situation of the disaster site can be known comprehensively, the most reasonable rescue strategy can be made quickly, and rescue efficiency is improved.
Description
Technical Field
The invention relates to the field of communication, in particular to a disaster monitoring method and device, a storage medium and an electronic device.
Background
In real life, some disasters, such as fires, floods, debris flows and hurricanes, cannot be avoided. When handling such a disaster, the most reasonable rescue strategy can be made only if the situation of the disaster site is known comprehensively and intuitively. The following description takes a fire as an example:
in the related art, there is considerable research on fire-extinguishing equipment, fire suppression, local fire detection and the like, but the overall situation of a fire scene cannot be grasped, so rescue supplies, the number of rescuers, the optimal rescue position and the like cannot be estimated correctly.
Therefore, the related art has the problems that the overall situation of a disaster site cannot be grasped and the most reasonable rescue strategy cannot be made.
In view of the above problems in the related art, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a disaster monitoring method and device, a storage medium and an electronic device, which at least solve the problems in the related art that the overall situation of a disaster site cannot be grasped and the most reasonable rescue strategy cannot be made.
According to an embodiment of the present invention, there is provided a disaster monitoring method including: acquiring first image information of a target area in which a disaster occurs, which is respectively sent by at least two camera devices; determining image pickup areas of at least two image pickup apparatuses based on the first image information respectively transmitted; and under the condition that the image pickup areas of at least two image pickup devices do not completely cover the target area, carrying out shooting adjustment on the image pickup devices so that the adjusted image pickup areas of the image pickup devices completely cover the target area.
According to another embodiment of the present invention, there is provided a disaster monitoring device including: the acquiring module is used for acquiring first image information of a target area where a disaster occurs, which is respectively sent by at least two pieces of camera equipment; a determination module configured to determine image capturing areas of at least two image capturing apparatuses based on the first image information respectively transmitted; the adjusting module is used for carrying out shooting adjustment on the camera shooting equipment under the condition that the camera shooting areas of at least two camera shooting equipment do not completely cover the target area, so that the adjusted camera shooting areas of the camera shooting equipment completely cover the target area.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the image pickup areas of the image pickup devices are determined from the image information of the disaster target area captured by those devices, and it is judged whether those areas completely cover the target area; if they do not, the devices are adjusted so that their image pickup areas completely cover it, allowing rescue workers to understand the disaster site comprehensively. This solves the problems in the related art that the overall situation of the disaster site cannot be grasped and the most reasonable rescue strategy cannot be made: the situation of the disaster site can be known comprehensively, the most reasonable rescue strategy can be made quickly, and rescue efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a disaster monitoring method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a disaster monitoring method according to an embodiment of the present invention;
FIG. 3 is a deployment diagram of a camera device in the event of a fire according to an alternative embodiment of the invention;
FIG. 4 is a reference diagram of the main components of a fire monitoring operation system according to an embodiment of the present invention;
FIG. 5 is a flow diagram of fire detection according to an embodiment of the present invention;
fig. 6 is a block diagram of a disaster monitoring device according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiments of the application can be executed on a mobile terminal, a computer terminal or a similar computing device. Taking execution on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running the disaster monitoring method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 104 can be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the disaster monitoring method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a method for disaster monitoring is provided, and fig. 2 is a flowchart of a disaster monitoring method according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring first image information of a target area where a disaster occurs, which is respectively sent by at least two pieces of camera equipment;
step S204 of determining image capturing areas of at least two of the image capturing apparatuses based on the first image information respectively transmitted;
and step S206, under the condition that the image pickup areas of at least two image pickup devices do not completely cover the target area, carrying out shooting adjustment on the image pickup devices so that the adjusted image pickup areas of the image pickup devices completely cover the target area.
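The three steps above amount to a coverage test over camera ground footprints. The following is a minimal sketch, assuming each image pickup device's footprint is an axis-aligned rectangle and coverage is checked by grid sampling; the `Rect`/`covers` names, the sampling step, and the rectangle model are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True)
class Rect:
    """Axis-aligned ground footprint: lower-left corner (x, y), width, height."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h


def covers(target: Rect, footprints: List[Rect], step: float = 1.0) -> bool:
    """Step S204/S206 test: every sampled point of the target area must fall
    inside at least one camera footprint."""
    nx = int(target.w / step) + 1
    ny = int(target.h / step) + 1
    for i in range(nx + 1):
        for j in range(ny + 1):
            px = target.x + min(i * step, target.w)
            py = target.y + min(j * step, target.h)
            if not any(f.contains(px, py) for f in footprints):
                return False  # an uncovered point -> shooting adjustment needed
    return True
```

A real deployment would use the actual camera projection geometry rather than rectangles; the grid step trades accuracy for speed.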
In the above embodiment, the image pickup device may be a camera already present near the disaster site, such as a traffic-post camera, or a camera mounted on mobile equipment such as an unmanned aerial vehicle, and the first image information may include information such as buildings and the ground surface in the target area where the disaster occurs.
Optionally, the execution subject of the above steps may be a background processor or another device with similar processing capability, or a machine integrating at least an image acquisition device and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, and the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.
According to the invention, the image pickup areas of the image pickup devices are determined from the image information of the disaster target area captured by those devices, and it is judged whether those areas completely cover the target area; if they do not, the devices are adjusted so that their image pickup areas completely cover it, allowing rescue workers to understand the disaster site comprehensively. This solves the problems in the related art that the overall situation of the disaster site cannot be grasped and the most reasonable rescue strategy cannot be made: the situation of the disaster site can be known comprehensively, the most reasonable rescue strategy can be made quickly, and rescue efficiency is improved.
In an alternative embodiment, determining the image pickup areas of at least two of the image pickup devices based on the respectively sent first image information includes: stitching the respectively sent first image information to obtain a stitched image; and determining the image pickup areas of the at least two image pickup devices based on the stitched image. In this embodiment, the first image may be an image of the disaster target area captured by the image pickup device; the device may shoot the target area from a top view or from a horizontal view to obtain the first image. Stitching may be performed by a server, which stitches the images according to the first image information and determines the image pickup areas of the devices based on the stitched image. The server may be a central analysis server.
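Production systems typically stitch with feature-based matchers; purely as an illustration of the underlying idea, the hedged sketch below aligns two overlapping 1-D brightness profiles by scoring every candidate overlap with a sum of squared differences. The function names and the 1-D simplification are assumptions for illustration only:

```python
def best_offset(left, right, min_overlap=2):
    """Estimate where `right` begins relative to `left` by testing every
    candidate overlap length and scoring it with a sum of squared
    differences (lower is better)."""
    best_len, best_score = None, None
    for overlap in range(min_overlap, min(len(left), len(right)) + 1):
        a = left[len(left) - overlap:]   # trailing part of the left strip
        b = right[:overlap]              # leading part of the right strip
        score = sum((x - y) ** 2 for x, y in zip(a, b))
        if best_score is None or score < best_score:
            best_len, best_score = overlap, score
    return len(left) - best_len  # index in the stitched row where `right` starts


def stitch(left, right):
    """Concatenate the two strips at the estimated offset."""
    return left[:best_offset(left, right)] + right
```

For example, `stitch([1, 2, 3, 4, 5], [4, 5, 6, 7])` recognizes the shared `[4, 5]` overlap and yields `[1, 2, 3, 4, 5, 6, 7]`.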
In an optional embodiment, after determining the image capturing regions of at least two image capturing apparatuses based on the stitched image, the method further includes: judging whether the spliced image contains a complete image of the target area; if the judgment result is yes, determining that the image pickup areas of at least two image pickup devices completely cover the target area; and under the condition that the judgment result is negative, determining that the image pickup areas of at least two image pickup devices do not completely cover the target area. In this embodiment, if the stitched image includes a complete image of the target area, it can be determined that the image pickup area of the image pickup apparatus completely covers the target area, that is, the 360-degree all-around monitoring of the disaster is achieved, and then rescue workers can be helped to comprehensively know about the scene of the disaster, so as to make the most reasonable rescue strategy.
In an optional embodiment, performing shooting adjustment on the image pickup apparatus so that the adjusted image pickup area of the image pickup apparatus completely covers the target area includes: determining an uncovered area in the target area, which is not covered by the image pickup areas of at least two image pickup apparatuses; and carrying out shooting adjustment on the camera equipment so that the camera area of the adjusted camera equipment covers the uncovered area. In this embodiment, the camera device may be mounted on a camera device auxiliary device (e.g., an unmanned aerial vehicle), and the server analyzes the stitched image, and issues a corresponding adjustment instruction to the camera device auxiliary device according to the current camera device camera area, so that the camera device camera area after adjustment covers the uncovered area.
In an optional embodiment, performing shooting adjustment on the image pickup devices so that the adjusted image pickup areas cover the uncovered area includes at least one of: determining the number of image pickup devices to be added and issuing a device-adding instruction, wherein the device-adding instruction includes the number of image pickup devices to be added and the position information of the uncovered area; and adjusting the height and/or position of at least two image pickup devices by adjusting the height and/or position of the drone carrying them. In this embodiment, the server analyzes the stitched image and judges whether a camera needs to be added; if so, it may issue a camera-adding order through an intercom device, an individual-soldier device or other interactive equipment, together with the number of devices to be added and the position information of the area not covered by the current image pickup areas. If no camera needs to be added, the heights and positions to which the current image pickup devices and unmanned aerial vehicles should be adjusted are issued to the cameras or unmanned aerial vehicles, so that the image pickup areas cover the uncovered area. For key positions of the disaster site, close-up, close-range display of details is also supported, so that rescue workers can understand the key positions and make the most reasonable rescue strategy.
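The adjust-versus-add decision can be sketched as follows, under the assumed simplification that a downward-looking camera with a square field of view sees a ground square whose side grows linearly with altitude. The function names, the FOV model and the fallback camera count are hypothetical, not taken from the patent:

```python
import math


def footprint_side(altitude_m, fov_deg=90.0):
    """Ground side length seen by a downward-looking camera with a square FOV."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)


def plan_adjustment(uncovered_w, uncovered_h, altitude_m, max_altitude_m,
                    fov_deg=90.0):
    """Prefer raising the existing drone; fall back to a device-adding
    instruction when even the maximum altitude cannot span the uncovered area."""
    need = max(uncovered_w, uncovered_h)
    if footprint_side(max_altitude_m, fov_deg) >= need:
        # Altitude at which the footprint side just reaches `need`.
        required = need / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
        return {"action": "raise", "target_altitude_m": max(altitude_m, required)}
    # Estimate extra cameras by tiling the uncovered area with max footprints.
    side = footprint_side(max_altitude_m, fov_deg)
    extra = math.ceil((uncovered_w * uncovered_h) / (side * side))
    return {"action": "add_cameras", "count": extra}
```

With a 90-degree FOV a drone at 60 m sees roughly a 120 m square, so a 100 m uncovered strip triggers a "raise" order, while a 300 m strip triggers a camera-adding order.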
In this embodiment, the number and positions of the image pickup devices can be deployed flexibly according to information such as the size and shape of the disaster site. For example, at an open disaster scene, when two image pickup devices are deployed, they can be placed opposite each other; when more devices need to be deployed, they can be placed so that each pair of adjacent devices subtends the same angle at the disaster center. In urban rescue, taking a fire as an example, a deployment diagram of the image pickup devices is shown in fig. 3: typically, four unmanned aerial vehicles with cameras are deployed in four directions, so that the fire scene can be monitored globally.
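The equal-angle deployment described above, with adjacent devices subtending the same angle at the disaster center, can be computed directly. This is a hedged sketch; `ring_positions` and the circular-ring model are illustrative assumptions:

```python
import math


def ring_positions(cx, cy, radius, n):
    """Place n cameras on a circle around the disaster center (cx, cy) so that
    adjacent cameras subtend equal angles of 360/n degrees at the center."""
    return [(cx + radius * math.cos(2.0 * math.pi * k / n),
             cy + radius * math.sin(2.0 * math.pi * k / n))
            for k in range(n)]
```

With `n = 4` this reproduces the four-direction deployment of fig. 3: the cameras land at the four compass points of the ring.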
In an optional embodiment, after determining the image pickup areas of at least two of the image pickup devices based on the respectively sent first image information, the method further includes: when it is determined that the image pickup areas of the at least two devices completely cover the target area and their degree of overlap exceeds a predetermined threshold, determining the number of image pickup devices to be removed and issuing a device-reduction instruction, wherein the instruction includes the number of devices to be removed and the position information of the overlapped area. In this embodiment, monitoring devices can be added or removed according to the actual situation while omnidirectional monitoring is preserved. When more image pickup devices are deployed than required, their image pickup areas overlap; when the overlap exceeds the predetermined threshold, devices can be removed to save resources. The predetermined threshold may be 10% (this value is only an optional embodiment; it may also be chosen according to the disaster situation, e.g., 5% or 15%).
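The overlap-degree test against the predetermined threshold can be sketched as below, again assuming rectangular footprints and defining overlap as the intersected fraction of the smaller footprint (one of several reasonable definitions; the patent does not fix one, and all names here are illustrative):

```python
def rect_overlap_area(a, b):
    """a, b: (x, y, w, h) axis-aligned footprints; returns intersection area."""
    ox = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    oy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ox * oy


def overlap_degree(a, b):
    """Overlap as a fraction of the smaller footprint's area."""
    inter = rect_overlap_area(a, b)
    smaller = min(a[2] * a[3], b[2] * b[3])
    return inter / smaller if smaller else 0.0


def cameras_to_reduce(footprints, threshold=0.10):
    """Return index pairs whose overlap degree exceeds the threshold; one
    camera of each pair is a candidate for the device-reduction instruction."""
    pairs = []
    for i in range(len(footprints)):
        for j in range(i + 1, len(footprints)):
            if overlap_degree(footprints[i], footprints[j]) > threshold:
                pairs.append((i, j))
    return pairs
```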
In an optional embodiment, the image pickup device accesses the server through any of: a 4G communication network, a 5G communication network, or a broadband base station. In this embodiment, both the auxiliary device of the image pickup device (e.g., a drone) and the image pickup device itself may access the server through a 4G network, a 5G network, a broadband base station or another network.
In the above-described embodiment, the system that performs the above operations may include an auxiliary device for helping the image pickup device adjust its height and angle, the image pickup device itself, a server, and other equipment. The image pickup device is used to shoot the situation of the whole disaster scene. The server is used for video stitching and analysis, interfacing with other equipment, issuing commands, and the like; it may provide image pickup device access, device configuration adjustment, image stitching and analysis, and interaction with other equipment. The system can be applied to urban rescue or other rescue scenes. The other equipment may be devices for rescue or communication, such as an ambulance, a walkie-talkie, a tablet or a mobile phone.
Taking fire monitoring as an example and referring to fig. 4, the main components of the fire monitoring operation system may include cameras, unmanned aerial vehicles, a central analysis server and other fire-fighting equipment. The unmanned aerial vehicle mainly carries the camera, and the shooting area of the camera is adjusted by adjusting the height and angle of the unmanned aerial vehicle; the unmanned aerial vehicle may also be replaced by any other equipment that meets the requirements. Other fire-fighting equipment may include devices for rescue and communication, such as walkie-talkies, tablets and mobile phones. The central analysis server may provide camera access, unmanned-aerial-vehicle flight control, camera configuration adjustment, video stitching and analysis, interaction with other equipment, and the like.
The following describes how to detect the disaster with reference to the specific embodiment of the present invention:
fig. 5 is a flowchart of fire detection according to an embodiment of the present invention, and as shown in fig. 5, the fire detection process in the embodiment of the present invention includes the following steps:
step S502, a camera (corresponding to the image pickup device) shoots an initial video and transmits it back to the central analysis server. A fire-fighting team leader (corresponding to the rescue personnel) deploys omnidirectional monitoring cameras and camera-carrying unmanned aerial vehicles on site according to experience. After the initial deployment is finished, the unmanned aerial vehicles and cameras access the central analysis server through a 4G network, a 5G network, a broadband base station or another network; the cameras begin transmitting video back, and the unmanned aerial vehicles transmit flight-control information back.
Step S504, the central analysis server stitches the videos and analyzes them to judge whether seamless stitching of a 360-degree panoramic scene has been achieved. If yes, step S512 is executed; if no, it is further judged whether a camera needs to be added: if not, step S506 is executed; if so, step S518 is executed.
Step S506, the central analysis server issues to the existing cameras and unmanned aerial vehicles the heights and positions to which they need to be adjusted.
And step S508, automatically adjusting the camera and the unmanned aerial vehicle according to the height and angle information.
And step S510, after the camera and the unmanned aerial vehicle are adjusted, continuously transmitting the shot video. After the step is executed, step S504 is executed, the returned video is continuously analyzed, and the height and the angle are adjusted until 360-degree panoramic seamless splicing is realized.
And S512, utilizing a large screen or other equipment to seamlessly display 360-degree monitoring information in an all-around manner.
And step S514, supporting close-up, close-range display of details of key positions of the disaster site.
And step S516, calling up appropriate supplies and rescue teams according to the comprehensively grasped fire situation, so as to schedule the rescue.
And S518, issuing a camera-adding command through the intercom device, an individual-soldier device or other interactive equipment, together with approximate direction information.
And step S520, the added unmanned aerial vehicles and cameras arrive at the disaster site to shoot videos. After the step is executed, step S510 is executed.
In this embodiment, the server performs image stitching and analysis and, according to the stitched image, either issues orders on whether to add image pickup devices and their auxiliary equipment, or issues height- and angle-adjustment instructions to them. By adjusting the height and angle of the auxiliary equipment, 360-degree omnidirectional video shooting is achieved, providing the most comprehensive information for the rescue: rescue personnel can grasp the disaster situation more comprehensively, make the most accurate judgment, and dispatch rescue supplies and rescue personnel more accurately, which improves rescue efficiency. It should be noted that the invention does not conflict with other local detection and prevention devices (e.g., fire-prevention devices); they can be used together to better handle the disaster.
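The S504-S510 feedback loop, adjust, re-shoot and re-check until coverage is achieved, can be sketched for a single downward-looking camera. This is a simplified illustration under the same square-FOV assumption used earlier; the function name, step size and altitude ceiling are hypothetical:

```python
import math
from typing import Optional


def adjust_until_covered(target_side, altitude_m, fov_deg=90.0,
                         step_m=10.0, max_altitude_m=500.0):
    # -> Optional[float]
    """Raise the drone in fixed steps, re-checking coverage after each
    adjustment, until the ground footprint spans the target side or the
    altitude ceiling is reached (then return None: fall back to adding
    cameras, as in step S518)."""
    half = math.tan(math.radians(fov_deg) / 2.0)
    while altitude_m <= max_altitude_m:
        if 2.0 * altitude_m * half >= target_side:
            return altitude_m  # 360-degree seamless coverage reached
        altitude_m += step_m   # issue one more height-adjustment command
    return None
```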
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a disaster monitoring device is further provided, and the disaster monitoring device is used to implement the above embodiments and preferred embodiments, which have already been described and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 6 is a block diagram of a disaster monitoring apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus includes:
an obtaining module 62, configured to obtain first image information of a target area where a disaster occurs, which is sent by at least two pieces of image capturing equipment respectively; a determination module 64 configured to determine image capturing areas of at least two image capturing apparatuses based on the first image information respectively transmitted; and the adjusting module 66 is configured to, in a case that it is determined that the target area is not completely covered by the image capturing areas of at least two image capturing apparatuses, perform shooting adjustment on the image capturing apparatuses so that the adjusted image capturing areas of the image capturing apparatuses completely cover the target area.
In an alternative embodiment, the determining module 64 may determine the image capturing areas of at least two image capturing apparatuses based on the respectively transmitted first image information by: splicing the respectively sent first image information to obtain a spliced image; and determining the image pickup areas of at least two image pickup devices based on the spliced image.
In an optional embodiment, the apparatus may be configured to determine, after determining the image capturing regions of at least two image capturing devices based on the stitched image, whether the stitched image includes a complete image of the target region; if the judgment result is yes, determining that the image pickup areas of at least two image pickup devices completely cover the target area; and under the condition that the judgment result is negative, determining that the image pickup areas of at least two image pickup devices do not completely cover the target area.
In an alternative embodiment, the adjusting module 66 may perform shooting adjustment on the image capturing apparatus in the following manner, so that the adjusted image capturing area of the image capturing apparatus completely covers the target area: determining an uncovered area in the target area, which is not covered by the image pickup areas of at least two image pickup apparatuses; and carrying out shooting adjustment on the camera equipment so that the camera area of the adjusted camera equipment covers the uncovered area.
In an optional embodiment, the adjusting module 66 may perform shooting adjustment on the image capturing apparatus to make the image capturing area of the adjusted image capturing apparatus cover the uncovered area by at least one of: determining the number of the camera devices to be added and issuing a device adding instruction, wherein the device adding instruction comprises the number information of the camera devices to be added and the position information of the uncovered area; the height and/or position of at least two camera devices are adjusted by adjusting the height and/or position of a drone carrying the at least two camera devices.
In an alternative embodiment, the apparatus may be configured to, after determining the image capturing areas of at least two image capturing devices based on the respectively transmitted first image information, determine the number of the image capturing devices to be reduced when it is determined that the image capturing areas of at least two image capturing devices completely cover the target area and the overlapping degree of the image capturing areas of at least two image capturing devices exceeds a predetermined threshold; issuing an equipment reduction instruction, wherein the equipment reduction instruction comprises the number information of the image pickup equipment to be reduced and the position information of an overlapped area.
In an optional embodiment, the camera devices access the server via any of: a 4G communication network, a 5G communication network, or a broadband base-station connection.
It should be noted that the above modules may be implemented in software or hardware; in the latter case they may, for example, all reside in the same processor, or be distributed across different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring first image information of the target area where the disaster occurs, which is respectively sent by at least two image pickup devices;
S2, determining image capturing areas of at least two of the image capturing apparatuses based on the first image information respectively transmitted;
and S3, under the condition that the image pickup areas of at least two image pickup devices do not completely cover the target area, carrying out shooting adjustment on the image pickup devices so that the adjusted image pickup areas of the image pickup devices completely cover the target area.
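The three steps S1–S3 can be sketched as one monitoring pass. The `Camera` type, its rectangular footprint model, and the footprint-widening stand-in for the drone adjustment are all illustrative assumptions in place of the patent's image-derived estimation.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    footprint: tuple  # (x_min, y_min, x_max, y_max) on the ground

def covers(footprints, target, step=1.0):
    """Grid test: does the union of footprints contain every sample of the target?"""
    x0, y0, x1, y1 = target
    pts = [(x0 + i * step, y0 + j * step)
           for i in range(int((x1 - x0) / step) + 1)
           for j in range(int((y1 - y0) / step) + 1)]
    return all(any(a <= px <= c and b <= py <= d
                   for a, b, c, d in footprints) for px, py in pts)

def monitoring_pass(cameras, target):
    footprints = [c.footprint for c in cameras]   # S1 + S2: collect reported areas
    if covers(footprints, target):                # S3: coverage test
        return "covered"
    for c in cameras:                             # adjustment stand-in: raising the
        a, b, x, y = c.footprint                  # drone scales every footprint up
        c.footprint = (a - 2, b - 2, x + 2, y + 2)
    return "adjusted"

cams = [Camera((0, 0, 5, 10)), Camera((4, 0, 9, 10))]
print(monitoring_pass(cams, (0, 0, 10, 10)))  # adjusted (right edge uncovered)
print(monitoring_pass(cams, (0, 0, 10, 10)))  # covered (after widening)
```

In a deployment the loop would repeat until coverage is complete, then hand the stitched view to the rescue-planning stage.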
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to, various media capable of storing a computer program, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic device may further include a transmission device and an input/output device, each connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring first image information of the target area where the disaster occurs, which is respectively sent by at least two image pickup devices;
S2, determining image capturing areas of at least two of the image capturing apparatuses based on the first image information respectively transmitted;
and S3, under the condition that the image pickup areas of at least two image pickup devices do not completely cover the target area, carrying out shooting adjustment on the image pickup devices so that the adjusted image pickup areas of the image pickup devices completely cover the target area.
Optionally, for specific examples of this embodiment, reference may be made to the examples described in the above embodiments and optional implementations; they are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented as program code executable by a computing device, stored in a storage device and executed by it; in some cases the steps shown or described may be performed in a different order than described here. Alternatively, they may be fabricated as individual integrated-circuit modules, or several of them may be combined into a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within its scope of protection.
Claims (9)
1. A disaster monitoring method is characterized by comprising the following steps:
acquiring first image information of a target area in which a disaster occurs, which is respectively sent by at least two camera devices;
determining image pickup areas of at least two image pickup apparatuses based on the first image information respectively transmitted;
under the condition that the image pickup areas of at least two image pickup devices do not completely cover the target area, carrying out shooting adjustment on the image pickup devices so that the adjusted image pickup areas of the image pickup devices completely cover the target area;
wherein, after determining image capturing areas of at least two of the image capturing apparatuses based on the first image information transmitted respectively, the method further comprises:
when it is determined that the image pickup areas of at least two image pickup apparatuses completely cover the target area and the overlapping degree of the image pickup areas of at least two image pickup apparatuses exceeds a predetermined threshold, determining the number of the image pickup apparatuses to be reduced;
issuing an equipment reduction instruction, wherein the equipment reduction instruction comprises the number information of the image pickup equipment to be reduced and the position information of an overlapped area.
2. The method according to claim 1, wherein determining image capturing areas of at least two of the image capturing apparatuses based on the first image information transmitted respectively comprises:
stitching the respectively transmitted first image information to obtain a stitched image;
and determining the image capturing areas of the at least two image capturing devices based on the stitched image.
3. The method according to claim 2, wherein after determining the image capturing areas of at least two of the image capturing apparatuses based on the stitched image, the method further comprises:
judging whether the stitched image contains a complete image of the target area;
if the judgment result is yes, determining that the image capturing areas of the at least two image capturing devices completely cover the target area;
and if the judgment result is no, determining that the image capturing areas of the at least two image capturing devices do not completely cover the target area.
4. The method of claim 1, wherein performing the camera adjustment on the camera device so that the adjusted camera device has a camera area that completely covers the target area comprises:
determining an uncovered area in the target area, which is not covered by the image pickup areas of at least two image pickup apparatuses;
and carrying out shooting adjustment on the camera equipment so that the camera area of the adjusted camera equipment covers the uncovered area.
5. The method of claim 4, wherein performing the camera adjustment to make the adjusted camera device cover the uncovered area comprises at least one of:
determining the number of the camera devices to be added and issuing a device adding instruction, wherein the device adding instruction comprises the number information of the camera devices to be added and the position information of the uncovered area;
the height and/or position of at least two camera devices are adjusted by adjusting the height and/or position of a drone carrying the at least two camera devices.
6. The method of claim 1, wherein the camera device accesses a server via any of:
a 4G communication network connection, a 5G communication network connection, or a broadband base station connection.
7. A disaster monitoring device, comprising:
the acquiring module is used for acquiring first image information of a target area where a disaster occurs, which is respectively sent by at least two pieces of camera equipment;
a determination module configured to determine image capturing areas of at least two image capturing apparatuses based on the first image information respectively transmitted;
the adjusting module is used for carrying out shooting adjustment on the camera equipment under the condition that the camera areas of at least two camera equipment do not completely cover the target area, so that the adjusted camera areas of the camera equipment completely cover the target area;
wherein the means is configured to, after determining image capturing areas of at least two of the image capturing apparatuses based on the first image information transmitted respectively,
when it is determined that the image pickup areas of at least two image pickup apparatuses completely cover the target area and the overlapping degree of the image pickup areas of at least two image pickup apparatuses exceeds a predetermined threshold, determining the number of the image pickup apparatuses to be reduced;
issuing an equipment reduction instruction, wherein the equipment reduction instruction comprises the number information of the image pickup equipment to be reduced and the position information of an overlapped area.
8. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 6 when executed.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010072470.4A CN111212272B (en) | 2020-01-21 | 2020-01-21 | Disaster monitoring method and device, storage medium and electronic device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111212272A CN111212272A (en) | 2020-05-29 |
CN111212272B (en) | 2022-04-19
Family
ID=70789896
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010072470.4A Active CN111212272B (en) | 2020-01-21 | 2020-01-21 | Disaster monitoring method and device, storage medium and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111212272B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111738150B (en) * | 2020-06-22 | 2024-02-09 | 中国银行股份有限公司 | Automatic supervision method, device and system |
CN113596136A (en) * | 2021-07-23 | 2021-11-02 | 深圳市警威警用装备有限公司 | Aid communication method based on law enforcement recorder |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102867086A (en) * | 2012-09-10 | 2013-01-09 | 安科智慧城市技术(中国)有限公司 | Automatic deploying method for monitoring camera, system and electronic equipment |
CN204316671U (en) * | 2015-01-12 | 2015-05-06 | 上海弘视智能科技有限公司 | panoramic video monitoring system |
CN105100580A (en) * | 2014-05-12 | 2015-11-25 | 索尼公司 | Monitoring system and control method for the monitoring system |
CN106034196A (en) * | 2015-03-10 | 2016-10-19 | 青岛通产软件科技有限公司 | Multi-visual-angle image integration acquisition system |
CN107396042A (en) * | 2017-06-30 | 2017-11-24 | 郑州云海信息技术有限公司 | A kind of monitoring method of recreation ground, apparatus and system |
CN109050932A (en) * | 2018-09-20 | 2018-12-21 | 深圳市安思科电子科技有限公司 | A kind of Intelligent flight device for fire-fighting emergent |
CN110602438A (en) * | 2018-06-13 | 2019-12-20 | 浙江宇视科技有限公司 | Road network-based video monitoring layout optimization method and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9270841B2 (en) * | 2005-04-15 | 2016-02-23 | Freeze Frame, Llc | Interactive image capture, marketing and distribution |
CN101355693B (en) * | 2008-08-29 | 2011-07-13 | 中兴通讯股份有限公司 | Omnidirection monitoring system and monitoring method without blind spot |
EP2802149B1 (en) * | 2012-06-28 | 2020-03-18 | Nec Corporation | Camera position/posture evaluation device, camera position/posture evaluation method, and camera position/posture evaluation program |
CN103237312B (en) * | 2013-04-07 | 2016-04-13 | 江南大学 | A kind of wireless sensor network node coverage optimization method |
SG10201505251XA (en) * | 2015-07-02 | 2017-02-27 | Nec Asia Pacific Pte Ltd | Surveillance System With Fixed Camera And Temporary Cameras |
CN108413939A (en) * | 2018-01-26 | 2018-08-17 | 广州市红鹏直升机遥感科技有限公司 | A kind of image pickup method for shooting the aviation oblique photograph of matrix form image |
CN110475226A (en) * | 2018-05-11 | 2019-11-19 | 深圳Tcl新技术有限公司 | A kind of base station signal covering method, system and unmanned plane based on unmanned plane |
CN109613975A (en) * | 2018-11-13 | 2019-04-12 | 宁波视睿迪光电有限公司 | The operating method and device of virtual reality |
CN110266936B (en) * | 2019-04-25 | 2021-01-22 | 维沃移动通信(杭州)有限公司 | Photographing method and terminal equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017024975A1 (en) | Unmanned aerial vehicle portable ground station processing method and system | |
CN111212272B (en) | Disaster monitoring method and device, storage medium and electronic device | |
CN104995558B (en) | A kind of method obtaining panoramic picture and terminal | |
CN109151792A (en) | Emergency communication method, device, computer storage medium and equipment | |
EP3299925B1 (en) | Method, apparatus and system for controlling unmanned aerial vehicle | |
CN110463199A (en) | Dead pixels of image sensor surveys method, filming apparatus, unmanned plane and storage medium | |
KR101851539B1 (en) | Monitoring system using a drone | |
KR102159786B1 (en) | System for Serarching Using Intelligent Analyzing Video | |
EP3059717A1 (en) | Article delivery system | |
WO2020227996A1 (en) | Photography control method and apparatus, control device and photography device | |
WO2021189650A1 (en) | Real-time video stream display method, headset, storage medium, and electronic device | |
CN110933297B (en) | Photographing control method and device of intelligent photographing system, storage medium and system | |
CN105933614A (en) | Photographing and camera shooting method and system | |
CN104967814A (en) | Monitoring equipment interconnection control method and system | |
CN115550860A (en) | Unmanned aerial vehicle networking communication system and method | |
CN103512557B (en) | Electric room is relative to location determining method and electronic equipment | |
CN111739346A (en) | Air-ground cooperative scheduling command method and platform system | |
CN112037127A (en) | Privacy shielding method and device for video monitoring, storage medium and electronic device | |
CN111164962B (en) | Image processing method, device, unmanned aerial vehicle, system and storage medium | |
CN111427352A (en) | Interaction method for laying mobile roadblocks, terminal, unmanned aerial vehicle and storage medium | |
KR101791045B1 (en) | Method for tracking multiple object and apparatus and system for executing the method | |
CN109375640A (en) | A kind of methods of exhibiting, system and the terminal device of multiple no-manned plane sports | |
KR102162331B1 (en) | Method of controlling drone shooting | |
CN105915797A (en) | Panorama camera and shooting processing method thereof | |
KR20180134459A (en) | Remote control apparatus of unmanned vehicle and its operating method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||