CN112954234A - Method, system, device and medium for multi-video fusion - Google Patents
- Publication number
- CN112954234A (application CN202110117655.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- signal
- sensor
- acquiring
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The invention provides a method, a system, a device and a storage medium for multi-video fusion, wherein the method comprises the following steps: acquiring a sensor signal, and generating a video request according to the sensor signal; determining, according to the sensor signal, the acquisition devices associated with the sensor, and acquiring, according to the video request, first video signals from a plurality of acquisition devices; and splicing the first video signals, removing the overlapping portions in the first video signals, and outputting and visualizing a second video signal. A sensor receives signals, acquisition devices in the environment are scheduled according to the sensor signal to obtain a plurality of video signals, and the video signals are then spliced and integrated, so that a panoramic video is displayed at the terminal. This solves the problem that a panoramic view of the alarm point cannot be displayed, enables intelligent target detection, tracking and display, saves manpower and material resources, and can be widely applied in the technical field of the Internet of Things.
Description
Technical Field
The invention relates to the technical field of Internet of things, in particular to a method, a system, a device and a storage medium for multi-video fusion.
Background
With the development of Internet of Things technology, applications that fuse video analysis, sensor signals and spatial positioning have broad prospects. In video surveillance, a traditional monitoring scheme uses multiple cameras to capture scenes from different viewing angles and returns the scene information to multiple independent windows of the client for monitoring. Due to the spatial complexity of a monitored site, and in particular where monitoring areas partially overlap, existing monitoring technology requires operators to watch multiple windows, correlate their information, and perform target identification and tracking manually; operators must also manually select videos from other feeds, which makes monitoring inconvenient and unintuitive. In addition, terminals in the prior art receive code streams distributed through a system platform, so the visual field of the video is limited.
Disclosure of Invention
In view of the above, in order to at least partially solve one of the above technical problems, embodiments of the present invention provide an intuitive and efficient method for multi-video fusion; the application also correspondingly provides a system, a device and a computer-readable storage medium implementing the method.
In a first aspect, a technical solution of the present application provides a method for multi-video fusion, which includes the steps of: acquiring a sensor signal, and generating a video request according to the sensor signal;
determining, according to the sensor signal, the acquisition devices associated with the sensor, and acquiring, according to the video request, a plurality of first video signals captured by the acquisition devices;
and splicing the first video signals, removing the overlapping portion in the first video signals, and outputting and visualizing a second video signal.
In a possible embodiment of the present disclosure, the step of determining, according to the sensor signal, the acquisition devices associated with the sensor, and acquiring, according to the video request, the first video signal acquired by the acquisition devices includes:
and analyzing the sensor signal to obtain time period information and position information, and screening to obtain the acquisition device according to the time period information and the position information.
In a possible embodiment of the present disclosure, the step of determining, according to the sensor signal, a sensor-associated acquisition device, and acquiring, according to the video request, a first video signal includes:
and determining the first video signal as a real-time video signal according to the sensor signal, adjusting the focal length and the angle of the acquisition device according to the sensor signal, and acquiring the real-time video signal through the adjusted acquisition device.
In a possible embodiment of the solution of the present application, the acquiring a sensor signal and generating a video request according to the sensor signal includes at least one of the following steps:
acquiring the sensor signal by a smoke sensor;
acquiring the sensor signal through an infrared sensor;
acquiring the sensor signal through a door magnetic sensor;
the sensor signal is acquired by a radio frequency identification sensor.
In a possible embodiment of the present disclosure, the step of splicing the first video signal, removing an overlapping portion in the first video signal, and outputting and visualizing the second video signal includes:
carrying out projection transformation on an image picture of the first video signal to obtain a plane image;
determining characteristic points in the plane image, and determining a reference coordinate system of the plane image;
and aligning the overlapping portions of the planar images according to the feature points in the reference coordinate system, and fusing the aligned planar images to obtain the second video signal.
In a possible embodiment of the present disclosure, the method for multi-video fusion further includes the following steps:
determining that a vacant area exists in an image picture of the second video signal, and generating an adjusting instruction;
determining a shooting range according to the adjusting instruction and the sensor signal, and acquiring a third video signal according to the shooting range;
and filling the vacant area according to the third video signal.
In a second aspect, a technical solution of the present invention further provides a system for multi-video fusion, including:
the signal acquisition unit is used for acquiring a sensor signal;
the video request unit is used for generating a video request according to the sensor signal;
the signal scheduling unit is used for determining the acquisition devices related to the sensor according to the sensor signals and acquiring a plurality of first video signals acquired by the acquisition devices according to the video request;
and the signal processing unit is used for splicing the first video signals, removing the overlapping portion in the first video signals, and outputting and visualizing a second video signal.
In a possible embodiment of the present disclosure, the signal acquisition unit includes:
the smoke sensor module is used for generating a smoke alarm signal according to the smoke concentration in the environment;
the infrared sensor module is used for generating an alarm signal according to the radiation intensity in the environment;
the door magnetic sensor module is used for detecting a door magnetic signal and generating an alarm signal;
and the radio frequency identification sensor module is used for acquiring a radio frequency signal and generating the alarm signal.
In a third aspect, a technical solution of the present invention further provides a device for multi-video fusion, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to perform the method of multi-video fusion of the first aspect.
In a fourth aspect, the present invention also provides a storage medium, in which a processor-executable program is stored, and the processor-executable program is used for executing the method in the first aspect when being executed by a processor.
Advantages and benefits of the present invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention:
according to the technical scheme, the sensor receives signals, the acquisition devices in the environment are scheduled according to the sensor signals, so that a plurality of video signals are obtained, and then the video signals are spliced and integrated, so that the panoramic video display of the terminal is realized; the problem of can't show the warning point panorama is solved to can realize that intelligent target detects, trails and show, use manpower sparingly and material resources.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating steps of a method for multi-video fusion according to an embodiment of the present invention;
fig. 2 is an interaction flowchart of another method for multi-video fusion according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-video fusion apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the technical scheme of the application, specific video code streams are automatically acquired, and the mixed display of multiple monitoring video streams and sensing data streams is completed as required, thereby solving the problem that a panoramic view of the alarm point cannot be displayed at the terminal.
In a first aspect, as shown in fig. 1, the technical solution of the present application provides an embodiment of a method for multi-video fusion, where the method includes steps S01-S03:
and S01, acquiring the sensor signal, and generating a video request according to the sensor signal.
The sensor information comprises various environmental signals acquired by sensors preset in the monitored environment, such as the temperature and humidity of the environment or the smoke concentration in the air. If a sensor signal exceeds a preset threshold, corresponding alarm information is generated. For example, when the smoke concentration in the ambient air is detected to exceed a preset threshold, a smoke alarm is generated; the alarm is traced back to the sensor that detected the signal, and the acquisition devices to be called for monitoring are determined from that sensor's information. In other words, a video request is obtained for calling acquisition devices, such as monitoring cameras, installed in the environment in advance.
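As an illustration of this step, the following Python sketch generates a video request from a sensor reading. The data structures, field names and threshold values are assumptions for illustration; the patent does not prescribe concrete formats.

```python
# A minimal sketch of step S01, under assumed data formats.
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    sensor_id: str
    kind: str       # e.g. "smoke", "infrared", "door", "rfid"
    value: float
    lat: float
    lon: float

# Hypothetical per-sensor-type alarm thresholds.
THRESHOLDS = {"smoke": 0.08, "infrared": 35.0}

def make_video_request(reading: SensorReading) -> dict | None:
    """Generate a video request when a reading exceeds its preset threshold."""
    limit = THRESHOLDS.get(reading.kind)
    if limit is None or reading.value <= limit:
        return None  # no alarm triggered, no video request
    return {
        "sensor_id": reading.sensor_id,
        "alarm_kind": reading.kind,
        "position": (reading.lat, reading.lon),
        "timestamp": time.time(),  # used for time-period screening later
        "mode": "realtime",        # or "history" for playback requests
    }
```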
S02, determining the acquisition devices associated with the sensor according to the sensor signal, and acquiring first video signals from a plurality of acquisition devices according to the video request.
The sensor and the acquisition devices can be associated through their positional relationship at installation time; for example, cameras installed within a 20-meter range of a sensor's installation position can be associated with that sensor. When the sensor detects an abnormal signal or a dangerous signal exceeding a preset threshold, alarm information is generated; the source of the alarm information is traced to determine which sensor sent it and where that sensor is located, and the pictures shot by the cameras near the sensor's position, i.e. the acquired first video signals, are then called.
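The positional association described above can be sketched as a simple proximity query. The camera record format is a hypothetical assumption, and the 20-meter radius is the example distance given in this description.

```python
# A minimal sketch of sensor-to-camera association by installation distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cameras_near(sensor_pos, cameras, radius_m=20.0):
    """Return camera records installed within radius_m of the alarming sensor."""
    lat, lon = sensor_pos
    return [c for c in cameras
            if haversine_m(lat, lon, c["lat"], c["lon"]) <= radius_m]
```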
S03, splicing the first video signals, removing the overlapping portion in the first video signals, and outputting and visualizing the second video signal.
Specifically, the first video signals are screened to confirm that the pictures they capture are consistent with the acquired sensor information. The screened video signals match the information of the current sensor, but several of them may have overlapping visual areas; the screened video signals are therefore integrated, the overlapping portions of the pictures are removed, and the position of the alarm and other information are visually marked in the synthesized picture. A panoramic video is finally obtained and displayed in a display window of the terminal, the displayed picture being the video picture obtained by transcoding the second video signal.
In some possible embodiments, in the step S01 of acquiring a sensor signal and generating a video request based on the sensor signal, the source of the sensor information may include, but is not limited to, a smoke sensor, an infrared sensor, a door magnetic sensor, or a radio frequency identification sensor. Step S01 may be further subdivided into steps S011-S014:
S011, acquiring a sensor signal through a smoke sensor;
S012, acquiring a sensor signal through an infrared sensor;
S013, acquiring a sensor signal through a door magnetic sensor;
S014, acquiring a sensor signal through the radio frequency identification sensor.
Specifically, in this embodiment, sensor signals from various acquisition ends, such as smoke sensing signals, door magnetic signals, infrared signals, and RFID signals, can trigger a plurality of front ends to transmit specific video code streams simultaneously; the multiple video streams are then processed to display a spliced picture.
In some possible embodiments, step S02, in which the acquisition devices associated with the sensor are determined according to the sensor signal and the first video signals are acquired by the acquisition devices according to the video request, may further include a subdivided step S021: and analyzing the sensor signal to obtain time period information and position information, and screening to obtain the acquisition device according to the time period information and the position information.
Specifically, the time period information is used either to backtrack or replay historical video records, for investigating and preventing the sources of accidents, or to select real-time video signals for live monitoring. The position information is used to determine the position of the sensor that currently sends the alarm information or video request, and thereby the acquisition devices, such as cameras, within that sensor's installation range; the position information can be latitude and longitude information, global positioning system information, location-based services (LBS) information, and the like. After the corresponding time period information is determined from the position information fed back in the sensor information, the target video signal is called through a video request and then transcoded to obtain the first video signal.
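A minimal sketch of this screening step, assuming hypothetical field names and reusing the cameras_near() helper from the earlier sketch:

```python
# A minimal sketch of step S021: parse the sensor signal into time-period and
# position information, then screen the acquisition devices.
def screen_devices(sensor_signal: dict, camera_registry: list):
    position = sensor_signal["position"]        # lat/lon, GPS, or LBS info
    period = sensor_signal.get("time_period")   # None means live monitoring
    selected = cameras_near(position, camera_registry)  # earlier sketch
    mode = "history" if period else "realtime"
    return selected, mode, period
```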
Based on step S021, in some possible embodiments, the first video signal may include, but is not limited to, a real-time video signal and historical video information. When the acquired first video signal is determined to be a real-time video signal, the process of acquiring it in this embodiment of the application may specifically be: adjusting the focal length and the angle of the acquisition device according to the sensor signal, and acquiring the real-time video signal through the adjusted acquisition device.
Specifically, an acquisition device such as a camera in this embodiment may provide a video stream signal for real-time monitoring, i.e. a real-time video signal, or store the collected video information, which can be kept locally or in a backend such as a cloud server. The corresponding video requests can accordingly be divided into real-time video signal requests and historical video information requests; the latter are served according to the time period information they carry, which makes it more convenient to trace the cause of an accident.
When the video request is a real-time video signal request, that is, when the first video signal is a real-time video signal, the request may include a control instruction for the acquisition device to adjust the acquired video picture, i.e. the first video signal. For example, when the acquisition device is a camera, the control instruction may include a focal length adjustment instruction and an angle adjustment instruction; the front-end camera adjusts its focal length, angle and so on to re-shoot key video of the alarm point, obtaining more detailed and accurate information and thereby improving the accuracy and controllability of monitoring.
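The control instruction can be sketched as follows. The PtzCommand fields and the flat-earth bearing computation are illustrative assumptions; a real deployment would drive the camera through its own control protocol (e.g. ONVIF).

```python
# A minimal sketch of a focal-length/angle adjustment instruction.
import math
from dataclasses import dataclass

@dataclass
class PtzCommand:
    camera_id: str
    pan_deg: float     # angle adjustment
    tilt_deg: float
    zoom_level: float  # focal length adjustment

def aim_at_alarm(camera: dict, alarm_pos: tuple) -> PtzCommand:
    """Build a PTZ command pointing the camera toward the alarm point.

    The bearing is a rough flat-earth approximation, adequate over tens of
    meters; the zoom factor is an assumed close-up value.
    """
    dlat = alarm_pos[0] - camera["lat"]
    dlon = alarm_pos[1] - camera["lon"]
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
    return PtzCommand(camera_id=camera["id"], pan_deg=bearing,
                      tilt_deg=0.0, zoom_level=2.0)
```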
In some possible embodiments, the step S03 of splicing the first video signals, removing the overlapping portion in the first video signals, and outputting and visualizing the second video signal may be further subdivided into steps S031-S033:
s031, projecting the image picture of the first video signal to obtain a planar image
S032, determining feature points in the plane image, and determining a reference coordinate system of the plane image;
and S033, aligning the overlapped parts of the planar images according to the feature points in the reference coordinate system, and fusing according to the aligned planar images to obtain a second video picture.
Specifically, taking the acquisition device as a camera as an example: since each image is shot by a camera at a different angle, the obtained video pictures do not lie on the same projection plane, and directly stitching the overlapping images seamlessly would destroy the visual consistency of the actual scene. The images therefore need projection transformation before splicing, and the image obtained after projection transformation is the planar image of step S031. Planar projection takes the coordinate system of one image in the sequence as the reference, transforms the projections of the other images into that reference coordinate system, and aligns the overlapping areas of adjacent images; the resulting stitching is planar projection stitching.

Next, matching points are selected and calibrated. The feature point method is often used because it handles transformation relations such as rotation, affine, and perspective between images more easily; feature points include the corner points of the images and interest points that exhibit some singularity with respect to their neighborhood. This embodiment adopts SIFT feature points, which are invariant to scaling, and the SIFT algorithm to calibrate the feature points. Image splicing requires finding effective feature matching points in the image sequence, and the quality of the found feature points directly affects the precision and efficiency of the splicing. For the image sequence in this embodiment, if the number of feature points is greater than or equal to 4, the image matching points are calibrated automatically; if the number of feature points is small, the image stitching often cannot achieve an ideal effect.

Finally, image splicing is completed by registration and blending. Registration maps the images into the same coordinate system according to a geometric motion model; blending synthesizes the registered images into a panoramic mosaic. The geometric motion model in this embodiment is an affine model, a 6-parameter transformation model with the general properties that parallel lines map to parallel lines and finite points map to finite points. Concretely, it can express uniform scale transformation with consistent scale coefficients in all directions, or non-uniform scale and shear transformations with inconsistent coefficients, and can describe translation, rotation, and small-range scaling and deformation. After registration is completed, this embodiment uses a non-multiresolution technique with median filtering to fuse the images into the final panoramic image, i.e. the second video picture. The algorithmic steps involved are not the core improvement of the present application and use well-established algorithms or models, so their principles are not repeated here. If necessary, the panoramic image can also be equalized in brightness and color so that it appears clearer.
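As an illustration of steps S031-S033, the following sketch stitches one pair of already-projected planar images with OpenCV, using SIFT feature points, a RANSAC-estimated 6-parameter affine model, and a simple averaging blend in the overlap. It is an assumed pipeline built from the well-established algorithms named above, not the patent's exact implementation.

```python
# A minimal two-image stitching sketch: SIFT matching, affine registration,
# and overlap blending.
import cv2
import numpy as np

def stitch_pair(ref: np.ndarray, mov: np.ndarray) -> np.ndarray:
    """Register `mov` into `ref`'s coordinate system and blend the overlap."""
    sift = cv2.SIFT_create()
    g1 = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(mov, cv2.COLOR_BGR2GRAY)
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)

    # Ratio-test matching; at least 4 matching points, as the text requires.
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        raise ValueError("too few feature points for automatic calibration")

    src = np.float32([k2[m.queryIdx].pt for m in good])
    dst = np.float32([k1[m.trainIdx].pt for m in good])

    # 6-parameter affine motion model, estimated robustly with RANSAC.
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

    h, w = ref.shape[:2]
    warped = cv2.warpAffine(mov, M, (2 * w, h))   # registration into ref frame
    canvas = np.zeros_like(warped)
    canvas[:, :w] = ref
    overlap = (canvas > 0) & (warped > 0)         # pixels covered by both views
    return np.where(overlap, canvas // 2 + warped // 2, canvas + warped)
```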
In some possible embodiments, the method for multi-video fusion may further include steps S04-S06:
S04, determining that the image picture of the second video signal has a vacant area, and generating an adjusting instruction;
S05, determining a shooting range according to the adjusting instruction and the sensor signal, and acquiring a third video signal according to the shooting range;
S06, filling the vacant area according to the third video signal.
Specifically, after the panoramic picture is synthesized, the method of this embodiment further reviews the panoramic picture. When a vacant region is determined to exist in the picture, an adjusting instruction is generated correspondingly; like the adjusting instruction discussed under step S021, it can adjust parameters such as the focal length and angle of a front-end camera so as to determine a new shooting range and capture video again. The video signal obtained by this re-acquisition is the third video signal, and the vacant region in the panoramic picture is supplemented according to this video stream or video signal.
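A minimal sketch of steps S04-S06, assuming a vacant region shows up as zero-valued pixels in the synthesized panorama and that the third video frame has already been registered to the panorama's coordinate system; the coverage threshold is an illustrative assumption.

```python
# A minimal sketch of vacant-area detection and filling.
import numpy as np

def find_vacant_mask(panorama: np.ndarray) -> np.ndarray:
    """A pixel is 'vacant' if no source image contributed to it (all zeros)."""
    return panorama.sum(axis=2) == 0

def needs_adjustment(panorama: np.ndarray, max_vacant_ratio=0.02) -> bool:
    """Generate an adjusting instruction when vacancy exceeds a threshold."""
    return find_vacant_mask(panorama).mean() > max_vacant_ratio

def fill_vacant(panorama: np.ndarray, third_frame: np.ndarray) -> np.ndarray:
    """Fill vacant pixels from a registered third video frame of equal size."""
    mask = find_vacant_mask(panorama)
    out = panorama.copy()
    out[mask] = third_frame[mask]
    return out
```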
In a second aspect, the present application provides a system for multi-video fusion for the method of the first aspect, comprising:
the signal acquisition unit is used for acquiring a sensor signal;
the video request unit is used for generating a video request according to the sensor signal;
the signal scheduling unit is used for determining the acquisition devices associated with the sensor according to the sensor signal, and acquiring, according to the video request, first video signals captured by a plurality of acquisition devices;
and the signal processing unit is used for splicing the first video signal, removing the overlapped part in the first video signal, and outputting and visualizing the second video signal.
In some optional embodiments, the signal acquisition unit comprises:
the smoke sensor module is used for generating a smoke alarm signal according to the smoke concentration in the environment;
the infrared sensor module is used for generating an alarm signal according to the radiation intensity in the environment;
the door magnetic sensor module is used for detecting a door magnetic signal and generating an alarm signal;
and the radio frequency identification sensor module is used for acquiring a radio frequency signal and generating an alarm signal.
Taking a forest fire prevention task in a mountainous area as an example, as shown in fig. 2, an embodiment of the system synthesizes a panoramic video according to the installation positions and the longitude and latitude of the cameras, including removing overlaps and filling vacant areas. When sensors in the area sense information emitted by a fire, the system is triggered to call the corresponding nearby cameras. When an emergency occurs, the cameras receive the alarm information sent by the sensor, synchronously adjust their focal length and shooting range according to the position and extent of the alarm point, and re-shoot video of the key alarm information. The display end splices the panoramic video according to the longitude and latitude information of the alarm, masks the overlapping pictures among the multiple points and adds the vacant pictures, forming a more complete picture that increases the coverage of the alarm area; it then outputs the synthesized video signal, and the synthesized panoramic video is displayed on a window interface of the monitoring end. The synthesized panoramic video integrates panoramic video monitoring with the hot spot display of multiple sensors, which makes it convenient for a user to grasp the overall condition of a spatial area and markedly improves the monitoring effect.
In a third aspect, as shown in fig. 3, the present disclosure further provides an embodiment of an apparatus for multi-video fusion, including at least one processor; at least one memory for storing at least one program; when the at least one program is executed by the at least one processor, the at least one processor is caused to perform a method of multi-video fusion as in the first aspect.
An embodiment of the present invention further provides a storage medium storing a program, where the program is executed by a processor to implement the method in the first aspect.
From the above specific implementation process, it can be concluded that the technical solution provided by the present invention has the following advantages or benefits compared to the prior art:
1) The technical scheme of the application makes it convenient for a user to grasp the overall condition of a spatial area, improving the monitoring effect.
2) The technical scheme of the application combines three technologies: video analysis, radio frequency identification, and spatial positioning. For example, when an infrared sensor detects human activity in an area, the system can automatically start video monitoring to capture video of that activity, effectively splice the multiple activity videos, and display a dynamic image of the whole area.
3) The technical scheme of the application helps the terminal obtain effective information; the specific video displayed can be used to quickly lock onto the alarm video and provide effective information.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more of the functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method of multi-video fusion, comprising the steps of:
acquiring a sensor signal, and generating a video request according to the sensor signal;
determining a sensor-associated acquisition device according to the sensor signal, and acquiring a first video signal through the acquisition device according to the video request;
and splicing the first video signals, removing the overlapped part in the first video signals, and outputting and visualizing a second video signal.
2. The method of claim 1, wherein the step of determining the acquisition devices associated with the sensor according to the sensor signal and acquiring, according to the video request, first video signals captured by a plurality of acquisition devices comprises:
and analyzing the sensor signal to obtain time period information and position information, and screening to obtain the acquisition device according to the time period information and the position information.
3. The method of claim 2, wherein the step of determining the sensor-associated capture device based on the sensor signal and obtaining the first video signal based on the video request comprises:
and determining the first video signal as a real-time video signal according to the sensor signal, adjusting the focal length and the angle of the acquisition device according to the sensor signal, and acquiring the real-time video signal through the adjusted acquisition device.
4. The method of claim 1, wherein the acquiring the sensor signal and generating the video request according to the sensor signal comprises at least one of the following steps:
acquiring the sensor signal by a smoke sensor;
acquiring the sensor signal through an infrared sensor;
acquiring the sensor signal through a door magnetic sensor;
the sensor signal is acquired by a radio frequency identification sensor.
5. The method of claim 1, wherein the step of splicing the first video signal, removing the overlapped part of the first video signal, and outputting and visualizing the second video signal comprises:
carrying out projection transformation on an image picture of the first video signal to obtain a plane image;
determining characteristic points in the plane image, and determining a reference coordinate system of the plane image;
and aligning the overlapping portions of the planar images according to the feature points in the reference coordinate system, and fusing the aligned planar images to obtain the second video signal.
6. A method of multi-video fusion according to any of claims 1-5, further comprising the steps of:
determining that a vacant area exists in an image picture of the second video signal, and generating an adjusting instruction;
determining a shooting range according to the adjusting instruction and the sensor signal, and acquiring a third video signal according to the shooting range;
and filling the vacant area according to the third video signal.
7. A system for multi-video fusion, comprising:
the signal acquisition unit is used for acquiring a sensor signal;
the video request unit is used for generating a video request according to the sensor signal;
the signal scheduling unit is used for determining the acquisition devices related to the sensor according to the sensor signals and acquiring a plurality of first video signals acquired by the acquisition devices according to the video request;
and the signal processing unit is used for splicing the first video signals, removing the overlapping portion in the first video signals, and outputting and visualizing a second video signal.
8. The system for multi-video fusion according to claim 7, wherein the signal acquisition unit comprises:
the smoke sensor module is used for generating a smoke alarm signal according to the smoke concentration in the environment;
the infrared sensor module is used for generating an alarm signal according to the radiation intensity in the environment;
the door magnetic sensor module is used for detecting a door magnetic signal and generating an alarm signal;
And the radio frequency identification sensor module is used for acquiring a radio frequency signal and generating the alarm signal.
9. An apparatus for multi-video fusion, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to perform a method of multi-video fusion as claimed in any one of claims 1 to 6.
10. A storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by a processor, executes a method of multi-video fusion as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110117655.7A CN112954234A (en) | 2021-01-28 | 2021-01-28 | Method, system, device and medium for multi-video fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110117655.7A CN112954234A (en) | 2021-01-28 | 2021-01-28 | Method, system, device and medium for multi-video fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112954234A (en) | 2021-06-11
Family ID: 76238622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110117655.7A Pending CN112954234A (en) | 2021-01-28 | 2021-01-28 | Method, system, device and medium for multi-video fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112954234A (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004186922A (en) * | 2002-12-02 | 2004-07-02 | Chuo Electronics Co Ltd | Wide range photographing method using a plurality of cameras |
US20120287222A1 (en) * | 2010-01-29 | 2012-11-15 | Huawei Device Co., Ltd. | Video communication method, device and system |
US20150042845A1 (en) * | 2012-04-28 | 2015-02-12 | Huawei Technologies Co., Ltd. | Video Signal Processing Method and Camera Device |
CN105872377A (en) * | 2016-04-14 | 2016-08-17 | 深圳天珑无线科技有限公司 | Panoramic photographed image filling method capable of automatically reducing focal length |
CN106358022A (en) * | 2016-11-01 | 2017-01-25 | 合肥华贝信息科技有限公司 | Security and protection monitoring method based on videos |
JP2018207254A (en) * | 2017-06-01 | 2018-12-27 | キヤノン株式会社 | Imaging apparatus, control method, program, and imaging system |
CN109714563A (en) * | 2017-10-25 | 2019-05-03 | 北京航天长峰科技工业集团有限公司 | A kind of overall view monitoring system based on critical position |
US20200035075A1 (en) * | 2018-07-30 | 2020-01-30 | Axis Ab | Method and camera system combining views from plurality of cameras |
CN111163286A (en) * | 2018-11-08 | 2020-05-15 | 北京航天长峰科技工业集团有限公司 | Panoramic monitoring system based on mixed reality and video intelligent analysis technology |
CN110782394A (en) * | 2019-10-21 | 2020-02-11 | 中国人民解放军63861部队 | Panoramic video rapid splicing method and system |
CN112218099A (en) * | 2020-08-28 | 2021-01-12 | 新奥特(北京)视频技术有限公司 | Panoramic video generation method, panoramic video playing method, panoramic video generation device, and panoramic video generation system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114355972A (en) * | 2021-12-27 | 2022-04-15 | 天翼物联科技有限公司 | Unmanned Aerial Vehicle (UAV) convoying method, system, device and medium under limited communication condition |
CN114355972B (en) * | 2021-12-27 | 2023-10-27 | 天翼物联科技有限公司 | Unmanned aerial vehicle piloting method, system, device and medium under communication limited condition |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210611