CN113329171A - Video processing method, device, equipment and storage medium


Info

Publication number
CN113329171A
Authority
CN
China
Prior art keywords
camera
video
abnormal
shooting
monitoring
Prior art date
Legal status
Pending
Application number
CN202110509848.7A
Other languages
Chinese (zh)
Inventor
李赟洁
陈思思
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110509848.7A priority Critical patent/CN113329171A/en
Publication of CN113329171A publication Critical patent/CN113329171A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/617: Upgrading or updating of programs or applications for camera control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Studio Devices (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiment of the application discloses a video processing method, a device, equipment and a storage medium, wherein the method is applied to the video processing equipment and comprises the following steps: determining a front-end camera with abnormal shooting according to the video stream of each connected front-end camera; selecting one of the normal front-end cameras as a capturing camera, wherein the selected capturing camera is the front-end camera with the target shooting parameters within the standard parameter range of the capturing camera, and the target shooting parameters are determined according to the first monitoring range of the abnormal camera and the second monitoring range of the capturing camera; and according to the target shooting parameters, shooting by the capture camera to obtain a compensation video. In this way, the compensation video shot by the capture camera is used to compensate for the abnormal video shot by the front-end camera with abnormal shooting.

Description

Video processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of fault diagnosis technologies, and in particular, to a video processing method, apparatus, device, and storage medium.
Background
With the continuous development of video monitoring technology and the continuous enhancement of public security awareness, the scale of video monitoring systems is growing day by day, and cameras used for front-end picture acquisition are deployed in all kinds of environments across many industries. These cameras must withstand outdoor weather and harsh environments while operating continuously, 24 hours a day and 7 days a week. As a result, it sometimes happens that a camera is damaged by weather or the environment and the damaged camera shoots abnormal video, or that a camera shoots abnormal video for other reasons.
In the related art, when the video shot by one or more cameras is abnormal, the monitoring video obtained by the video monitoring system is incomplete, which hinders normal monitoring by the monitoring system.
Disclosure of Invention
The embodiment of the application provides a video processing method, a video processing apparatus, video processing equipment and a storage medium, which are used for compensating, when the videos shot by some of the front-end cameras are abnormal, the abnormal videos shot by the front-end cameras with abnormal shooting by applying compensation videos shot by capture cameras.
In a first aspect, an embodiment of the present application provides a video processing method, which is applied to a video processing device, and the method includes:
determining a front-end camera with abnormal shooting according to the video stream of each connected front-end camera;
selecting one from the normal front-end cameras as a capturing camera, wherein the selected capturing camera is the front-end camera with the target shooting parameters within the standard parameter range of the capturing camera, and the target shooting parameters are determined according to the first monitoring range of the abnormal camera and the second monitoring range of the capturing camera;
and shooting by the capturing camera according to the target shooting parameters to obtain a compensated video.
In the embodiment of the application, the video processing device is connected with each front-end camera, the front-end camera with abnormal shooting is determined firstly, and then one of the normal front-end cameras is selected as the capturing camera, so that the compensation video can be obtained by shooting according to the target shooting parameters determined according to the first monitoring range of the abnormal camera and the second monitoring range of the normal camera through the capturing camera. In this way, the compensation of the abnormal video shot by the front-end camera with abnormal shooting is realized by using the compensation video shot by the capture camera.
In some exemplary embodiments, the target photographing parameters are determined by a target monitoring range of the capturing camera, the target monitoring range including a part or all of the first monitoring range and a part or all of the second monitoring range.
In the embodiment, the target monitoring range is determined through the first monitoring range and the second monitoring range, and thus, the target shooting parameters of the capturing camera are determined through the target monitoring range, so that the capturing camera can shoot the shooting area of the abnormal front-end video camera and the shooting area of the capturing video camera by applying the target shooting parameters, and the compensation of the abnormal video is realized.
In some exemplary embodiments, the first monitoring range is determined by:
determining the first monitoring range according to the numerical value of each group of first shooting parameters; the type of the first shooting parameter comprises the type, the position, the shooting angle and the focal length of the abnormal front-end camera, and at least one of the numerical values of any two groups of the first shooting parameters is different;
determining the second monitoring range by:
determining the second monitoring range according to the numerical value of each group of second shooting parameters; the type of the second shooting parameter comprises the type, the position, the shooting angle and the focal length of the capturing camera, and at least one of the numerical values of any two groups of the second shooting parameters is different.
In the embodiment, at least one of the values of any two groups of first shooting parameters is different, and the first monitoring range can be determined by the values of the multiple groups of first shooting parameters, so that the monitoring range of the abnormal front-end camera is determined more comprehensively. Similarly, the determined second monitoring range of the capturing camera is relatively comprehensive.
In some exemplary embodiments, the determining the first monitoring range according to the values of the respective sets of the first photographing parameters includes:
determining each first monitoring area corresponding to the numerical value of each group of first shooting parameters, and combining the first monitoring areas to obtain a first monitoring range; or
Determining a first monitoring parameter value according to the values of the first shooting parameters of each group, and determining the first monitoring range by using the first monitoring parameter value;
the determining the second monitoring range according to the values of the second shooting parameters of each group includes:
determining each second monitoring area corresponding to the value of each group of second shooting parameters, and combining the second monitoring areas to obtain a second monitoring range; or
And determining a second monitoring parameter value according to the values of the second shooting parameters of each group, and determining the second monitoring range by using the second monitoring parameter value.
In the above embodiment, when the first monitoring areas are merged, the obtained first monitoring range is relatively comprehensive; when a first monitoring parameter value is determined, the determined first monitoring range is accurate. Similarly, the second monitoring range obtained by merging the second monitoring areas is relatively comprehensive, and the second monitoring range determined from a second monitoring parameter value is accurate.
In some exemplary embodiments, after obtaining the compensated video by capturing with the capture camera according to the target capturing parameters, the method further includes:
and if the abnormal type of the video shot by the abnormal front-end camera is a set type, determining an abnormal event corresponding to the abnormal video according to the abnormal video and the compensation video.
In the embodiment, after the compensation video is obtained, the video with the set type of abnormality is analyzed to determine the corresponding abnormal event, so that the reason for the abnormality can be found, and operation and maintenance personnel can analyze and process the abnormal event according to the reason for the abnormality.
In some exemplary embodiments, after obtaining the compensated video, the method further includes:
and sending the abnormal events corresponding to the abnormal cameras in the front-end cameras connected with the current video processing equipment and the compensation videos shot by the capture cameras to the operation and maintenance equipment connected with other video processing equipment.
According to the embodiment, operation and maintenance personnel can conveniently acquire the compensation video and the abnormal event corresponding to the abnormal video in time, and then the front-end camera for shooting the abnormal video is maintained and the like.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the determining module is used for determining the front-end camera with abnormal shooting according to the video stream of each connected front-end camera;
a selection module, configured to select one of the normal front-end cameras as a capture camera, where the selected capture camera is a front-end camera whose target shooting parameters are within a standard parameter range of the capture camera, and the target shooting parameters are determined according to a first monitoring range of the abnormal camera and a second monitoring range of the capture camera;
and the shooting module is used for shooting through the capturing camera according to the target shooting parameters to obtain a compensation video.
In some exemplary embodiments, the target photographing parameters are determined by a target monitoring range of the capturing camera, the target monitoring range including a part or all of the first monitoring range and a part or all of the second monitoring range.
In some exemplary embodiments, the monitoring device further comprises a monitoring range determining module, configured to determine the first monitoring range by:
determining the first monitoring range according to the numerical value of each group of first shooting parameters; the type of the first shooting parameter comprises the type, the position, the shooting angle and the focal length of the abnormal front-end camera, and at least one of the numerical values of any two groups of the first shooting parameters is different;
the monitoring range determining module is further configured to determine the second monitoring range by:
determining the second monitoring range according to the numerical value of each group of second shooting parameters; the type of the second shooting parameter comprises the type, the position, the shooting angle and the focal length of the capturing camera, and at least one of the numerical values of any two groups of the second shooting parameters is different.
In some exemplary embodiments, the monitoring range determining module is specifically configured to:
determining each first monitoring area corresponding to the numerical value of each group of first shooting parameters, and combining the first monitoring areas to obtain a first monitoring range; or
Determining a first monitoring parameter value according to the values of the first shooting parameters of each group, and determining the first monitoring range by using the first monitoring parameter value;
the monitoring range determining module is specifically configured to:
determining each second monitoring area corresponding to the value of each group of second shooting parameters, and combining the second monitoring areas to obtain a second monitoring range; or
And determining a second monitoring parameter value according to the values of the second shooting parameters of each group, and determining the second monitoring range by using the second monitoring parameter value.
In some exemplary embodiments, the video processing device further includes an abnormal video determining module, configured to determine, after the compensated video is obtained by shooting with the capture camera according to the target shooting parameter, an abnormal event corresponding to the abnormal video according to the abnormal video and the compensated video if an abnormal type of the video shot by the abnormal front-end camera is a set type.
In some exemplary embodiments, the video processing device further includes a video sending module, configured to, after the compensated video is obtained, send the abnormal event corresponding to the abnormal camera among the front-end cameras connected to the current video processing device, together with the compensated video shot by the capture camera, to the operation and maintenance device that is also connected to the other video processing devices.
In a third aspect, an embodiment of the present application provides a video processing apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any one of the methods when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, implement the steps of any of the methods described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a monitoring system according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of another monitoring system provided in an embodiment of the present application;
fig. 3 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating relationships among front-end cameras in a monitoring system according to an embodiment of the present application;
fig. 5 is a diagram illustrating a page of parameter configuration in a video diagnosis process according to an embodiment of the present application;
fig. 6 is an overlapped schematic view of a monitoring area according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating a front-end camera adjustment according to an embodiment of the present application;
fig. 8 is a schematic panoramic view of compensation shooting by a front-end camera according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
For convenience of understanding, terms referred to in the embodiments of the present application are explained below:
1. The NVR (Network Video Recorder) is the store-and-forward part of a network video monitoring system; it cooperates with a video encoder or a network camera to complete the video recording, storage and forwarding functions.
2. The gun camera (bullet/box camera) is one type of monitoring camera. It is cuboid in appearance, its front is provided with a C/CS lens interface, and it does not include a lens.
3. The ball machine, also called a dome camera, is representative of modern television monitoring; it integrates multiple functions such as a color integrated camera, a pan-tilt, a decoder and a protective cover, is convenient to install, simple to use and powerful in function, and is widely applied to monitoring of open areas.
4. A pan-tilt camera is a camera equipped with a pan-tilt, i.e. a device that carries the camera and rotates it in the horizontal and vertical directions; mounting the camera on the pan-tilt enables it to shoot from multiple angles. Two motors are arranged in the pan-tilt, and the horizontal and vertical rotation angles can be adjusted by limit switches.
Any number of elements in the drawings are by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The video quality diagnosis is a method for analyzing and obtaining diagnosis data based on multiple dimensions such as definition, interference items and the like of a monitoring video image picture, and faults or abnormal situations generated by monitoring equipment and a place where the monitoring equipment is located can be reflected by the technology.
However, the application of video diagnosis technology has so far remained limited to improving detection efficiency and the presentation of detection results; actually resolving the fault afterwards still depends on manual operation and maintenance. With the huge expansion in the number of monitoring devices (such as front-end cameras) and the rise in operation and maintenance costs, the time from fault discovery to fault resolution remains a significant bottleneck for monitoring effectiveness; and if the fault or abnormality happens to occur at night or during unattended hours, the video lost during that period also poses a significant threat to video forensics of critical events.
To this end, the present application provides a video processing method that may be integrated on a video processing device, which may be an intelligent NVR device. In this way, the improved intelligent NVR device in the embodiment of the present application can execute the video processing method in the embodiment of the present application while recording video. Specifically, when an abnormal picture from a certain front-end camera is detected, the shooting angle or shooting behavior of a front-end camera whose functions remain normal can be adjusted according to the front-end distribution position relationship recorded during pre-deployment, and the overall monitoring view is shot in a compensating manner so as to obtain video material that is as complete as possible. For some specific detection items, such as video occlusion, besides natural fault factors, some human damage factors can also be identified: for example, a video occlusion alarm may be caused by a criminal with strong anti-detection awareness blocking the camera in advance, and an overexposure alarm may be caused by a fire, and the causes of these abnormal conditions can then be analyzed. Therefore, when such abnormal items are detected, the surrounding cameras are linked immediately to aim at the surroundings of the abnormal point for recording or capturing, and intelligent event analysis is performed, so that more on-site information can be obtained.
After introducing the design concept of the embodiment of the present application, some simple descriptions are provided below for application scenarios to which the technical solution of the embodiment of the present application can be applied, and it should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limited. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly applied according to actual needs.
Fig. 1 is a schematic view of an application scenario of a video processing method according to an embodiment of the present application. In the video processing system, one video processing device and a plurality of bound front-end cameras are exemplified, and the specific binding mode may be to store the device identification code of each front-end camera to the video processing device, so that the video processing device may determine the front-end camera bound to itself through the device identification code. In this way, each front-end camera sends the acquired video stream to the corresponding video processing device, and the video processing device sends the processing result to the operation and maintenance platform.
In addition, fig. 1 shows only the video processing procedure between one group of front-end cameras and one video processing device. In actual application, an intelligent NVR device can be bound to front-end cameras according to the video backup relationship between them, that is, the intelligent NVR device is responsible for storing the front-end videos while performing video quality diagnosis on the code streams and reporting the diagnosis results to a superior operation and maintenance platform. Taking four groups of front-end cameras and intelligent NVR devices as an example, fig. 2 shows a schematic structural diagram of another video processing system. In this way, the video processing function is integrated on each video processing device to manage the front-end cameras connected to it, and the multiple video processing devices each manage their corresponding front-end cameras, so that distributed, multi-point simultaneous detection is achieved, efficiency is greatly improved, detection coverage is larger, and deployment is easier. In addition, distributed processing draws on the methods and technical means of the Internet of Things to generate self-adaptive connection and interaction between devices, thereby reducing dependence on people as much as possible and lowering operation and maintenance costs.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenario shown in fig. 1, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described in the following method embodiments, and will not be described in detail herein.
To further illustrate the technical solutions provided by the embodiments of the present application, the following detailed description is made with reference to the accompanying drawings and the detailed description. Although the embodiments of the present application provide method steps as shown in the following embodiments or figures, more or fewer steps may be included in the method based on conventional or non-inventive efforts. In steps where no necessary causal relationship exists logically, the order of execution of the steps is not limited to that provided by the embodiments of the present application.
The following describes the technical solution provided by the embodiment of the present application with reference to the application scenarios shown in fig. 1 and fig. 2.
Referring to fig. 3, an embodiment of the present application provides a video processing method applied to a video processing device, where the video processing method includes the following steps:
s301, determining the front-end camera with abnormal shooting according to the video stream of each connected front-end camera.
S302, one of the normal front-end cameras is selected as a capturing camera, wherein the selected capturing camera is the front-end camera with the target shooting parameters within the standard parameter range of the capturing camera, and the target shooting parameters are determined according to the first monitoring range of the abnormal camera and the second monitoring range of the capturing camera.
And S303, shooting by the capture camera according to the target shooting parameters to obtain a compensated video.
In the embodiment of the application, the video processing device is connected with each front-end camera, the front-end camera with abnormal shooting is determined according to the result of video quality diagnosis, and then one front-end camera is selected from the normal front-end cameras as the capturing camera, so that the compensation video can be obtained by shooting according to the target shooting parameters determined according to the first monitoring range of the abnormal camera and the second monitoring range of the normal camera through the capturing camera. In this way, the compensation of the abnormal video shot by the front-end camera with abnormal shooting is realized by using the compensation video shot by the capture camera.
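For illustration only, a minimal Python sketch of the S301-S303 flow is given below; the helper callables (diagnose, derive_params, within_limits, shoot) and the Camera fields are hypothetical stand-ins for the mechanisms described later, not the patented implementation.

```python
# Illustrative sketch only; names are hypothetical and the helper callables
# (diagnose, derive_params, within_limits, shoot) stand for mechanisms
# described elsewhere in this application.
from dataclasses import dataclass, field

@dataclass
class Camera:
    cam_id: str
    monitoring_range: set = field(default_factory=set)  # e.g. covered grid cells
    standard_range: dict = field(default_factory=dict)  # factory limits, e.g. {"pan": (0, 180)}

def process_video(cameras, diagnose, derive_params, within_limits, shoot):
    abnormal = [c for c in cameras if diagnose(c) == "abnormal"]        # S301
    normal = [c for c in cameras if diagnose(c) != "abnormal"]
    for bad in abnormal:
        for candidate in normal:                                        # S302
            params = derive_params(bad.monitoring_range,
                                   candidate.monitoring_range)
            if within_limits(params, candidate.standard_range):
                return shoot(candidate, params)                         # S303: compensation video
    return None
```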
Referring to S301, taking fig. 1 as an example, the video processing device is connected to a plurality of front-end cameras; the connection may be a wireless network connection, and each front-end camera may send the captured video to the video processing device through protocols such as ONVIF (Open Network Video Interface Forum) and RTSP (Real Time Streaming Protocol). Each front-end camera is deployed according to a set deployment scheme and has a sampling range determined by the parameters set at deployment time; the sampling ranges of the front-end cameras do not completely overlap, and most of the front-end cameras have a pan-tilt function, so that they can turn as dispatched by a higher-level platform in the system. Illustratively, the front-end camera may be a gun camera, a ball machine or a pan-tilt camera with a pan-tilt steering function.
Exemplarily, in the early deployment stage of the whole monitoring system, the attributes of each front-end camera under each video processing device node, such as the camera type, the positional relationship and the shooting angle, need to be entered into the corresponding video processing device; an alternative shooting scheme can also be set for the cameras at some important points and its shooting parameters entered. In this way, when a video abnormality occurs later, the shooting parameters of the alternative scheme, or the shooting parameters of an adjustment scheme calculated by the video processing device from the technical attribute data of each camera, can be applied to the surrounding capture cameras so as to capture and shoot the problem picture.
In a specific example, fig. 4 shows a relationship diagram of the front-end cameras in a monitoring system. Based on the deployment in fig. 4, this information is quantized and entered into the video processing device; for example, it may be stored using an SQLite database as a carrier. Table 1 shows the front-end camera relationship storage data table in the video processing device and stores the setting parameters of the front-end cameras exemplified in fig. 4; fig. 4 shows, among the setting parameters, the positional relationship and shooting angle of each front-end camera at a certain time.
TABLE 1 front-end Camera relationship storage data sheet in video processing device
[Table 1 is presented as images in the original publication; its contents are not reproduced here.]
For the pan-tilt camera A, the rotatable angle parameter ranges in table 1 refer to the movable angle in the horizontal direction and the movable angle in the vertical direction with which the pan-tilt camera A is configured in the current monitoring system. For the ball machine B, the rotatable angle parameter ranges in table 1 refer to the movable angle in the horizontal direction, the movable angle in the vertical direction, and the angle through which the ball machine B can rotate, as configured in the current monitoring system.
As can be seen from fig. 4 and table 1, different types of front-end cameras shoot according to their respective set parameters and thus obtain different shooting areas, and the whole of the shooting area of a front-end camera under its set parameters (including shooting angle, position, and rotatable angle) is referred to as its monitoring range. At a certain time, 41 is the imaging area of the pan-tilt camera A, 42 is the imaging area of the ball machine B, 43 is the imaging area of the gun camera C, and 44 is the imaging area of the gun camera D.
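As an illustration of the "SQLite database as a carrier" mentioned above, the following sketch shows one possible way to persist per-camera attributes; the schema, field names and the sample row are assumptions and do not reproduce Table 1.

```python
import sqlite3

# Assumed schema for the front-end camera relationship table; illustrative only.
conn = sqlite3.connect("camera_relations.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS front_end_cameras (
    cam_id         TEXT PRIMARY KEY,  -- e.g. 'A', 'B', 'C', 'D'
    cam_type       TEXT,              -- 'pan-tilt', 'ball machine', 'gun camera'
    position_x     REAL,
    position_y     REAL,
    shoot_angle    REAL,              -- current horizontal shooting angle (degrees)
    pan_range_min  REAL,              -- rotatable angle range, horizontal
    pan_range_max  REAL,
    tilt_range_min REAL,              -- rotatable angle range, vertical
    tilt_range_max REAL,
    focal_length   REAL
)""")
conn.execute(
    "INSERT OR REPLACE INTO front_end_cameras VALUES (?,?,?,?,?,?,?,?,?,?)",
    ("A", "pan-tilt", 0.0, 0.0, 90.0, 0.0, 180.0, 0.0, 180.0, 8.0))
conn.commit()
conn.close()
```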
After the front-end cameras are installed in the monitoring system and deployment is complete, the shot video streams are sent to the video processing device. The corresponding video processing device can detect the video streams according to a video quality diagnosis detection scheme and determine which video streams are normal and which are abnormal; for example, a fire near the monitoring range of a front-end camera may cause that camera's video stream to be detected as overexposed. An abnormal video stream arises in two situations: in one case, the hardware or software of the front-end camera is damaged, which causes abnormal video shooting; in the other case, the front-end camera is blocked or the environment where the front-end camera is located is abnormal, which causes the shot video to be abnormal.
In one specific example, video quality diagnosis may be implemented in the following manner, so that whether the video taken by each front-end camera is normal or abnormal can be determined.
The video diagnosis parameter configuration is imported into the video processing equipment in advance, and the types of abnormal videos which can be detected can include: video occlusion, scene change, low contrast, virtual focus detection, video jitter, noise detection, streak interference, video loss, video freezing, overexposure detection, video color cast, scene sharp change, snowflake screen detection, and the like. Each front-end camera corresponds to one detection channel, so that in the detection process, the corresponding detection channel can be determined according to the channel selection operation of a user, and different detection thresholds and detection time are set for diagnosis.
Illustratively, fig. 5 shows a page display diagram of parameter configuration in a video diagnosis process, and as can be seen from fig. 5, through parameter configuration, diagnosis of video streams from different front-end cameras (detection channels) can be realized, and abnormal types of abnormal videos can be output.
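For illustration, a minimal sketch of a per-channel diagnosis configuration with item thresholds is given below; the item names mirror the anomaly types listed above, while the threshold values, schedule field and scoring interface are assumptions.

```python
# Hypothetical per-channel diagnosis configuration; illustrative only.
diagnosis_config = {
    "channel_1": {
        "items": {
            "video_occlusion": {"threshold": 0.6},
            "overexposure":    {"threshold": 0.8},
            "video_loss":      {"threshold": 1.0},
        },
        "detect_interval_s": 300,   # run the checks every 5 minutes
    },
}

def diagnose_channel(channel, scores, config=diagnosis_config):
    """Return the anomaly types whose algorithm score reaches its threshold."""
    items = config[channel]["items"]
    return [name for name, cfg in items.items()
            if scores.get(name, 0.0) >= cfg["threshold"]]

print(diagnose_channel("channel_1", {"overexposure": 0.92}))  # ['overexposure']
```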
Referring to S302, it is possible to determine which front-end camera captures a normal video and which front-end camera captures an abnormal video through video diagnosis for each front-end camera to which the video processing apparatus is connected. The front-end camera whose captured video is normal is referred to as a normal front-end camera, and the front-end camera whose captured video is abnormal is referred to as an abnormal front-end camera.
Assume that one video processing device is connected to 4 front-end cameras in total, where the front-end camera determined to be abnormal is No. 1 and the normal front-end cameras are No. 2, No. 3 and No. 4. At this time, one of the normal front-end cameras needs to be selected as the capturing camera, which can be done in the following way:
according to the traversal rules with the numbers from small to large or the traversal rules from near to far away from the abnormal front-end camera, firstly, one front-end camera is selected as a candidate camera, and if the candidate camera meets the condition of screening the capture cameras, the candidate camera can be used as the capture camera.
In one specific example, the screening condition for capture cameras may be that the candidate is a front-end camera whose target shooting parameters are within its own standard parameter range. The standard parameters include standard values of the rotatable angle parameters, which are usually set when the front-end camera leaves the factory; for a pan-tilt camera, for example, the rotatable angle range is formed by the maximum and minimum angles it can rotate in the horizontal and/or vertical direction, e.g. 0° to 180° horizontally and 0° to 180° vertically. For a ball machine, the range is formed by the minimum and maximum angles in the horizontal, vertical and rotating directions, e.g. 0° to 180° horizontally, 0° to 180° vertically, and 0° to 360° of rotation.
The target photographing parameters applied in determining the capturing camera are determined according to the first monitoring range of the abnormal video camera and the second monitoring range of the capturing camera.
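A hedged sketch of the selection step described above follows; the traversal orders (ascending number, or nearest first) match the text, while the Camera attributes and the derive_params / within_limits helpers are assumed placeholders.

```python
import math

def pick_capture_camera(abnormal_cam, normal_cams, derive_params, within_limits,
                        by_distance=True):
    """Traverse the normal cameras (nearest to the abnormal camera first, or in
    ascending number order) and return the first one whose target shooting
    parameters fall inside its standard parameter range.
    `derive_params` and `within_limits` are assumed helper callables; cameras
    are assumed to expose .position, .cam_id and .standard_range."""
    if by_distance:
        order = sorted(normal_cams,
                       key=lambda c: math.dist(c.position, abnormal_cam.position))
    else:
        order = sorted(normal_cams, key=lambda c: c.cam_id)
    for candidate in order:
        params = derive_params(abnormal_cam, candidate)   # from the two monitoring ranges
        if within_limits(params, candidate.standard_range):
            return candidate, params                      # candidate becomes the capture camera
    return None, None                                     # no suitable capture camera found
```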
In a specific example, the initial shooting range of the abnormal camera is referred to as the first monitoring range, and the initial shooting range of the capturing camera is referred to as the second monitoring range. In order for the capturing camera to also cover the initial shooting range of the abnormal camera, a target monitoring range needs to be determined, that is, a range formed by part or all of the first monitoring range and part or all of the second monitoring range.
Illustratively, the first monitoring range is determined by:
for the abnormal front-end camera, the set parameters are called first shooting parameters, the types of the parameters comprise the type, the position, the shooting angle and the focal length of the abnormal front-end camera, in the four parameters, each set parameter value can obtain a first monitoring area, and at least one of the values of any two sets of parameters is different. Thus, a plurality of first monitoring areas are obtained by adjusting each parameter, and the plurality of first monitoring areas form a first monitoring range.
For example, the specific implementation manner that the plurality of first monitoring areas form the first monitoring range includes at least the following two manners:
Firstly, each first monitoring area corresponding to the values of each group of first shooting parameters is determined, and the first monitoring areas are combined to obtain the first monitoring range, where the "combining processing" may refer to the superposition of the first monitoring areas. See fig. 6, for example: the whole area covered under one group of shooting parameters is determined as a first monitoring area 61, the area covered under another group of shooting parameters is determined as a first monitoring area 62, and the superposition of 61 and 62 is the first monitoring range.
Secondly, a first monitoring parameter value is determined according to the values of all groups of first shooting parameters, and the first monitoring range is determined by using the first monitoring parameter value. Optionally, the groups of first parameters are combined according to a certain calculation rule, where the rule may give different weights to each parameter or to each group of parameters, so as to obtain the first monitoring parameter value, which is then used to determine the first monitoring range.
Similarly, the second monitoring range is determined by the following method:
the second monitoring range is for a normal front-end camera, specifically, for the normal front-end camera, the setting parameters thereof are called second shooting parameters, the types of the parameters include the type, position, shooting angle and focal length of the normal front-end camera, in the four parameters, each set of parameter values can obtain a second monitoring area, and the values of any two sets of parameters are at least one different. Thus, a plurality of second monitoring areas are obtained by adjusting each parameter, and the plurality of second monitoring areas form a second monitoring range.
For example, the specific implementation manner of the plurality of second monitoring areas constituting the second monitoring range includes at least the following two manners:
and firstly, determining each second monitoring area corresponding to the numerical value of each group of second shooting parameters, and combining the second monitoring areas to obtain a second monitoring range.
And secondly, determining a second monitoring parameter value according to the values of the second shooting parameters of each group, and determining a second monitoring range by using the second monitoring parameter value.
For a specific determination manner of the second monitoring range, reference is made to the determination manner of the first monitoring range, which is not described herein again.
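For illustration, the sketch below shows the two ways described above for turning several groups of shooting parameters into a monitoring range: merging the per-group areas, or collapsing the groups into a single monitoring parameter value. The grid-cell representation and the weighted-average rule are assumptions.

```python
# Illustrative only: two ways of building a monitoring range from several
# groups of shooting parameters.
def range_by_union(areas):
    """Way 1: each parameter group yields an area (here a set of grid cells);
    the monitoring range is the superposition (union) of all areas."""
    merged = set()
    for area in areas:
        merged |= area
    return merged

def range_by_weighted_value(param_groups, group_weights):
    """Way 2: collapse the groups into one monitoring parameter value per field
    using per-group weights; the range is then derived from that single value."""
    total = sum(group_weights)
    keys = param_groups[0].keys()
    return {k: sum(g[k] * w for g, w in zip(param_groups, group_weights)) / total
            for k in keys}

print(range_by_union([{(0, 0), (0, 1)}, {(0, 1), (1, 1)}]))           # 3 grid cells
print(range_by_weighted_value([{"angle": 90, "focal": 4},
                               {"angle": 120, "focal": 8}], [1, 3]))  # {'angle': 112.5, 'focal': 7.0}
```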
Referring to S303, since the monitoring range corresponds to the shooting parameters, different shooting parameters yield different shooting ranges, and different shooting ranges can therefore be obtained by adjusting the shooting parameters of each front-end camera. The target shooting parameters can thus be determined according to the target monitoring range. In addition, because each front-end camera has standard shooting parameters, it is determined whether the target shooting parameters are within the corresponding standard parameter range; if so, the target shooting parameters can be applied to the capture camera for shooting. For example, the standard parameters may be determined by the attributes of the corresponding front-end camera, such as the maximum rotation angle of a ball machine set at the time of factory shipment.
After the target shooting parameters are determined, the shooting parameters of the capture camera are adjusted to the target shooting parameters for shooting, and a compensation video is obtained, so that the compensation video can cover part or all of the shooting range of the abnormal front-end camera.
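A minimal sketch of this check-and-adjust step is given below; apply_params and record stand in for device-specific PTZ control and recording calls and are not a real camera API.

```python
def within_standard_range(target_params, standard_range):
    """Check each target shooting parameter against the factory (standard)
    limits, e.g. {"pan": (0, 180), "tilt": (0, 180)} for a pan-tilt camera."""
    return all(lo <= target_params[name] <= hi
               for name, (lo, hi) in standard_range.items())

def shoot_compensation_video(capture_cam, target_params, apply_params, record):
    """`apply_params` and `record` are assumed placeholders, not a real camera API."""
    if not within_standard_range(target_params, capture_cam.standard_range):
        return None                           # this candidate cannot reach the target range
    apply_params(capture_cam, target_params)  # turn / zoom the capture camera
    return record(capture_cam)                # the compensation video

# Example: {"pan": 120, "tilt": 30} lies inside {"pan": (0, 180), "tilt": (0, 180)}
print(within_standard_range({"pan": 120, "tilt": 30},
                            {"pan": (0, 180), "tilt": (0, 180)}))  # True
```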
When an abnormal front-end camera exists, it cannot meet the current monitoring requirement, and the compensation video is therefore obtained by shooting with the capture camera. At this time, in order to help operation and maintenance personnel better understand the cause of the abnormality and resolve it, it can be determined whether the abnormal type of the video shot by the abnormal front-end camera is a set type; if so, the abnormal event corresponding to the abnormal video is determined according to the abnormal video and the compensation video.
Specifically, the abnormal type of the video shot by the abnormal front-end camera may be, for example, one or more of video occlusion, scene change, low contrast, virtual focus detection, video jitter, noise detection, streak interference, video loss, video freezing, overexposure detection, video color cast, scene sharp change, and snowflake screen detection. A camera shooting abnormal video of a set type is not itself damaged; rather, the shot video is abnormal because of the surrounding environment or human interference. The set type can be, for example, overexposure detection or video occlusion, so that the abnormal event corresponding to the specific abnormal video can be identified by analyzing the video.
For example, if the abnormal type of the video shot by the abnormal front-end camera is overexposure detection, this indicates that the abnormal video is overexposed and cannot serve its monitoring role; since the shooting range of the capturing camera can cover the range of the abnormal front-end camera, the compensation video shot by the capturing camera is normally exposed, and by analyzing the compensation video it can be determined that there is a fire within the shooting range near the abnormal front-end camera, so that the abnormal event is determined to be a fire event.
For another example, if the abnormal type of the abnormal video is occlusion detection, it can be determined by analyzing the compensation video that the abnormal front-end camera is occluded, and events such as fighting, brawling or smoking in the nearby shooting range can be determined by analyzing the compensation video with, for example, three-dimensional behavior analysis and face detection analysis. Furthermore, after face detection analysis is applied, the detected faces can be compared with a preset blacklist.
In this way, the environment at the time the video quality abnormality occurs is recorded, and the cause of the abnormality is analyzed and mined from the recording: the surrounding environment of the abnormal camera is captured and collected by a surrounding camera, secondary intelligent analysis is carried out, and potential hidden-danger factors are mined.
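For illustration, a possible mapping from a set anomaly type to the secondary analyses run on the compensation video could be sketched as follows; the analyzer names and registry are hypothetical.

```python
# Hypothetical mapping from a "set type" of anomaly to secondary analyses;
# the analyzer names are placeholders, not a real analysis API.
SET_TYPE_ANALYSES = {
    "overexposure": ["fire_analysis"],
    "video_occlusion": ["stereo_behavior_analysis", "face_detection"],
}

def analyze_abnormal_event(anomaly_type, compensation_video, analyzers):
    """Run every analyzer registered for this anomaly type and collect the
    detected events (e.g. 'fire', 'fighting', 'smoking')."""
    events = []
    for name in SET_TYPE_ANALYSES.get(anomaly_type, []):
        events.extend(analyzers[name](compensation_video))
    return events

# Example with dummy analyzers:
dummy = {"fire_analysis": lambda v: ["fire"],
         "stereo_behavior_analysis": lambda v: [],
         "face_detection": lambda v: []}
print(analyze_abnormal_event("overexposure", object(), dummy))  # ['fire']
```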
In order to enable operation and maintenance personnel to learn of the abnormal event and handle it in time, after the compensation video is obtained, the abnormal event corresponding to the abnormal camera among the front-end cameras connected to the current video processing device, together with the compensation video shot by the capturing camera, is sent to the operation and maintenance device, which is also connected to the other video processing devices. In this way, operation and maintenance personnel can learn the monitoring condition of each front-end camera in time, and can promptly maintain the abnormal front-end camera or handle issues such as its being occluded.
In this way, the loss of monitoring of the entire environment during the failure period in which abnormal video is produced is compensated.
In order to make the technical solution of the present application easier to understand, the following describes a video processing method according to an embodiment of the present application with a specific example:
(1) The planned front-end cameras are connected to the bound video processing device through protocols such as ONVIF/RTSP.
(2) According to the imported video quality diagnosis scheme, the front-end code stream is pulled, the obtained video data is decoded and frames are extracted to obtain the YUV data of each frame, and the algorithm module analyses the data to obtain anomaly results (see the sketch after this list).
(3) The anomaly result is reported to a superior data/operation and maintenance platform, and after the platform receives the result a manual operation and maintenance processing flow is started; meanwhile, the video processing device analyses and processes the anomaly result to achieve a compensation effect.
(4) According to the position correlation information of the front-end camera in the imported system, another front-end camera with a shooting range capable of reaching the vicinity of the fault node range is found out, a corresponding adjusting scheme is calculated, and after the shooting position is adjusted, snapshot and video recording are started.
Referring to fig. 7, a front-end camera adjustment diagram is shown. Specifically, the left side of fig. 7 shows the respective monitoring ranges of the No. 5 and No. 6 front-end cameras before the abnormality is detected. When an abnormality in the video shot by the No. 5 front-end camera is detected, the target shooting parameters are determined according to the target monitoring range as in the foregoing embodiment, and the shooting angle, focal length and the like of the No. 6 front-end camera are adjusted according to the target shooting parameters to perform snapshot and video recording. The video shot by the No. 6 front-end camera with the target parameters therefore covers, as far as possible, the original shooting ranges of the No. 5 and No. 6 front-end cameras during the abnormality, so that the abnormal video is compensated. After that, the compensation video is sent to the superior platform as well, and the process in which the abnormality arose can be captured.
(5) The video processing equipment provides video quality diagnosis service and also has a plurality of intelligent behavior analysis functions; for the diagnosis result generated in (4), further mining can be performed, for example:
When the video quality diagnosis result is overexposure detection, the capture camera channel is selected for fire analysis, and the analysis result is reported to the superior data/operation and maintenance platform. For a diagnosis result of occlusion detection, the capture camera is selected to perform three-dimensional behavior analysis (fighting, smoking, etc.) and face detection analysis, and the analysis result is reported to the superior data/operation and maintenance platform. When a pre-defined stereo behavior analysis event can be detected in the capture camera's video, the cut-out face images are compared with the preset blacklist. If analysis results are produced under these plans, the value of the event reporting, linked video recording and capturing at this time is greatly increased.
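As referenced in step (2) above, a minimal sketch of pulling a front-end stream and extracting per-frame YUV data is given below; it uses OpenCV purely as an illustration, and the stream URL and sampling rate are placeholders.

```python
import cv2  # OpenCV, used here purely as an illustration of step (2)

# Pull an RTSP stream from a front-end camera, sample roughly one frame per
# second, and convert it to YUV for the diagnosis algorithms. The URL is a
# placeholder and the sampling policy is an assumption.
cap = cv2.VideoCapture("rtsp://front-end-camera.example/stream")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frame_idx = 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:                         # frame extraction
        yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)  # per-frame YUV data
        # ...hand `yuv` to the diagnosis algorithm module...
    frame_idx += 1
cap.release()
```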
Since recovery of a front-end camera after an abnormality depends on manual repair, the loss of video material during the abnormality is inevitable in the conventional scheme. In the embodiment of the application, a compensating panoramic shooting scheme can be calculated according to the position-related information of the front-end cameras.
In the following example, there are 2 cameras on the same vertical plane, taking shots in the horizontal direction at 90 ° and 200 ° respectively:
referring to fig. 8, a panoramic schematic diagram of compensation shooting of a front-end camera is shown, where a shooting angle of a front-end camera No. 7 is 200 °, and a shooting angle of a front-end camera No. 8 is 90 °, and when one of the cameras fails (for example, the front-end camera No. 7 fails), by adjusting an angle and a focal length of the other camera (the front-end camera No. 8), for example, the shooting angle is 45 °, the original shooting scene can be compensated and covered at the expense of a certain definition. Therefore, on the basis of carrying out video diagnosis on the front-end camera, intelligent fault analysis and automatic compensation on the front-end camera are realized.
As shown in fig. 9, based on the same inventive concept as the video processing method described above, the embodiment of the present application further provides a video processing apparatus, which includes a determination module 91, a selection module 92, and a shooting module 93.
Wherein: a determining module 91, configured to determine a front-end camera with abnormal shooting according to the video stream of each connected front-end camera;
a selection module 92, configured to select one of the normal front-end cameras as a capture camera, where the selected capture camera is the front-end camera whose target shooting parameters are within a standard parameter range of the capture camera, and the target shooting parameters are determined according to a first monitoring range of the abnormal camera and a second monitoring range of the capture camera;
and the shooting module 93 is used for shooting by the capturing camera according to the target shooting parameters to obtain a compensated video.
In some exemplary embodiments, the target photographing parameters are determined by a target monitoring range of the capturing camera, the target monitoring range including a part or all of the first monitoring range and a part or all of the second monitoring range.
In some exemplary embodiments, the monitoring device further comprises a monitoring range determining module, configured to determine the first monitoring range by:
determining a first monitoring range according to the numerical value of each group of first shooting parameters; the type of the first shooting parameter comprises the type, the position, the shooting angle and the focal length of an abnormal front-end camera, and at least one of the numerical values of any two groups of the first shooting parameters is different;
the monitoring range determining module is further configured to determine a second monitoring range by:
determining a second monitoring range according to the numerical value of each group of second shooting parameters; the types of the second shooting parameters comprise the type, the position, the shooting angle and the focal length of the capturing camera, and at least one of the numerical values of any two groups of the second shooting parameters is different.
In some exemplary embodiments, the monitoring range determining module is specifically configured to:
determining each first monitoring area corresponding to the value of each group of first shooting parameters, and combining the first monitoring areas to obtain a first monitoring range; or
Determining a first monitoring parameter value according to the values of all groups of first shooting parameters, and determining a first monitoring range by using the first monitoring parameter value;
the monitoring range determining module is specifically configured to:
determining each second monitoring area corresponding to the value of each group of second shooting parameters, and combining the second monitoring areas to obtain a second monitoring range; or
And determining a second monitoring parameter value according to the values of the second shooting parameters of each group, and determining a second monitoring range by using the second monitoring parameter value.
In some exemplary embodiments, the video processing device further includes an abnormal video determining module, configured to determine, after obtaining the compensated video by capturing the video by the capture camera according to the target shooting parameter, an abnormal event corresponding to the abnormal video according to the abnormal video and the compensated video if an abnormal type of the video captured by the abnormal front-end camera is a set type.
In some exemplary embodiments, the video processing device further includes a video sending module, configured to, after the compensated video is obtained, send the abnormal event corresponding to the abnormal camera among the front-end cameras connected to the current video processing device, together with the compensated video shot by the capture camera, to the operation and maintenance device that is also connected to the other video processing devices.
The video processing apparatus and the video processing method provided by the embodiment of the application adopt the same inventive concept, can obtain the same beneficial effects, and are not described herein again.
Based on the same inventive concept as the video processing method, the embodiment of the present application further provides a video processing device, which may be specifically a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a server, and the like. As shown in fig. 10, the video processing device may include a processor 1001 and a memory 1002.
The Processor 1001 may be a general-purpose Processor, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
Memory 1002, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory may include at least one type of storage medium, and may include, for example, a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read Only Memory (PROM), a Read Only Memory (ROM), a charged Erasable Programmable Read Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1002 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; the computer storage media may be any available media or data storage device that can be accessed by a computer, including but not limited to: various media that can store program codes include a removable Memory device, a Random Access Memory (RAM), a magnetic Memory (e.g., a flexible disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), etc.), an optical Memory (e.g., a CD, a DVD, a BD, an HVD, etc.), and a semiconductor Memory (e.g., a ROM, an EPROM, an EEPROM, a nonvolatile Memory (NAND FLASH), a Solid State Disk (SSD)).
Alternatively, if the integrated units described above in the present application are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods of the embodiments of the present application. The aforementioned storage medium includes: a removable memory device, a Random Access Memory (RAM), a magnetic memory (e.g., a flexible disk, a hard disk, a magnetic tape, or a magneto-optical disk (MO)), an optical memory (e.g., a CD, a DVD, a BD, or an HVD), and a semiconductor memory (e.g., a ROM, an EPROM, an EEPROM, a non-volatile memory (NAND FLASH), or a Solid State Disk (SSD)).
The above embodiments are only intended to describe the technical solutions of the present application in detail and to help understand the method of the embodiments of the present application; they should not be construed as limiting the embodiments of the present application. Modifications and substitutions that are readily apparent to those skilled in the art are intended to fall within the scope of the embodiments of the present application.

Claims (10)

1. A video processing method, applied to a video processing device, comprising:
determining a front-end camera with abnormal shooting according to the video stream of each connected front-end camera;
selecting one of the normal front-end cameras as a capture camera, wherein the selected capture camera is a front-end camera whose target shooting parameters are within a standard parameter range of the capture camera, and the target shooting parameters are determined according to a first monitoring range of the abnormal camera and a second monitoring range of the capture camera; and
shooting with the capture camera according to the target shooting parameters to obtain a compensation video.
2. The method according to claim 1, wherein the target shooting parameters are determined by a target monitoring range of the capture camera, the target monitoring range including part or all of the first monitoring range and part or all of the second monitoring range.
3. The method of claim 1, wherein the first monitoring range is determined by:
determining the first monitoring range according to the values of each group of first shooting parameters, wherein the types of the first shooting parameters comprise the type, the position, the shooting angle, and the focal length of the abnormal front-end camera, and any two groups of the first shooting parameters differ in at least one value;
and the second monitoring range is determined by:
determining the second monitoring range according to the values of each group of second shooting parameters, wherein the types of the second shooting parameters comprise the type, the position, the shooting angle, and the focal length of the capture camera, and any two groups of the second shooting parameters differ in at least one value.
4. The method according to claim 3, wherein the determining the first monitoring range according to the values of each group of first shooting parameters comprises:
determining a first monitoring area corresponding to the values of each group of first shooting parameters, and combining the first monitoring areas to obtain the first monitoring range; or
determining a first monitoring parameter value according to the values of each group of first shooting parameters, and determining the first monitoring range by using the first monitoring parameter value;
and the determining the second monitoring range according to the values of each group of second shooting parameters comprises:
determining a second monitoring area corresponding to the values of each group of second shooting parameters, and combining the second monitoring areas to obtain the second monitoring range; or
determining a second monitoring parameter value according to the values of each group of second shooting parameters, and determining the second monitoring range by using the second monitoring parameter value.
5. The method according to any one of claims 1 to 4, further comprising, after the compensation video is obtained by shooting with the capture camera according to the target shooting parameters:
if the abnormality type of the video shot by the abnormal front-end camera is a set type, determining an abnormal event corresponding to the abnormal video according to the abnormal video and the compensation video.
6. The method of claim 5, further comprising, after the compensation video is obtained:
sending the abnormal events corresponding to the abnormal cameras among the front-end cameras connected to the current video processing device, together with the compensation videos shot by the capture cameras, to the operation and maintenance device to which other video processing devices are connected.
7. A video processing apparatus, comprising:
a determining module, configured to determine a front-end camera with abnormal shooting according to the video stream of each connected front-end camera;
a selection module, configured to select one of the normal front-end cameras as a capture camera, where the selected capture camera is a front-end camera whose target shooting parameters are within a standard parameter range of the capture camera, and the target shooting parameters are determined according to a first monitoring range of the abnormal camera and a second monitoring range of the capture camera;
and a shooting module, configured to shoot with the capture camera according to the target shooting parameters to obtain a compensation video.
8. The apparatus according to claim 7, wherein the target shooting parameters are determined by a target monitoring range of the capture camera, the target monitoring range including part or all of the first monitoring range and part or all of the second monitoring range.
9. A video processing device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
10. A computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
CN202110509848.7A 2021-05-11 2021-05-11 Video processing method, device, equipment and storage medium Pending CN113329171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110509848.7A CN113329171A (en) 2021-05-11 2021-05-11 Video processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110509848.7A CN113329171A (en) 2021-05-11 2021-05-11 Video processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113329171A true CN113329171A (en) 2021-08-31

Family

ID=77415241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110509848.7A Pending CN113329171A (en) 2021-05-11 2021-05-11 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113329171A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543200A (en) * 2003-04-22 2004-11-03 松下电器产业株式会社 Monitoring device composed of united video camera
CN1914918A (en) * 2004-02-03 2007-02-14 松下电器产业株式会社 Monitoring system and camera terminal
JP2005341295A (en) * 2004-05-27 2005-12-08 Sharp Corp Monitoring system, monitoring device, and monitoring method
CN101061721A (en) * 2005-06-07 2007-10-24 松下电器产业株式会社 Monitoring system, monitoring method, and camera terminal
CN102811311A (en) * 2011-05-30 2012-12-05 株式会社日立制作所 Monitoring camera system
CN103561212A (en) * 2013-10-30 2014-02-05 上海广盾信息系统有限公司 Camera system
CN105282427A (en) * 2014-05-26 2016-01-27 安讯士有限公司 Automatic configuration of a replacement camera
CN104918014A (en) * 2015-06-04 2015-09-16 广州长视电子有限公司 Monitoring system enabling post-obstacle-encounter monitoring area automatic filling
CN108650503A (en) * 2018-04-28 2018-10-12 努比亚技术有限公司 Camera fault determination method, device and computer readable storage medium
CN111757065A (en) * 2020-07-02 2020-10-09 广州博冠智能科技有限公司 Method and device for automatically switching lens, storage medium and monitoring camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴胜益, 熊哲源: 《监狱智能化安全防范关键技术研究》, 上海交通大学出版社, pages 288-292 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116074480A (en) * 2023-04-03 2023-05-05 银河航天(北京)通信技术有限公司 Image acquisition method and device based on double cameras and storage medium
CN117880626A (en) * 2024-03-11 2024-04-12 珠海创能科世摩电气科技有限公司 Video monitoring method, device and system for power transmission line and storage medium
CN117880626B (en) * 2024-03-11 2024-05-24 珠海创能科世摩电气科技有限公司 Video monitoring method, device and system for power transmission line and storage medium

Similar Documents

Publication Publication Date Title
US10075758B2 (en) Synchronizing an augmented reality video stream with a displayed video stream
US7428314B2 (en) Monitoring an environment
US8107680B2 (en) Monitoring an environment
CN201248107Y (en) Master-slave camera intelligent video monitoring system
KR102239530B1 (en) Method and camera system combining views from plurality of cameras
US8531525B2 (en) Surveillance system and method for operating same
KR100883632B1 (en) System and method for intelligent video surveillance using high-resolution video cameras
KR102146042B1 (en) Method and system for playing back recorded video
CN113329171A (en) Video processing method, device, equipment and storage medium
CN101924923B (en) Embedded intelligent automatic zooming snapping system and method thereof
IL249739A (en) System and method for secured capturing and authenticating of video clips
WO2016125946A1 (en) Panorama image monitoring system using plurality of high-resolution cameras, and method therefor
CN105763868A (en) Detection method and device of PTZ failure
CN110062198B (en) Monitoring evidence obtaining method, device and system, electronic equipment and storage medium
CN110248089B (en) Image transmission method, system and terminal equipment
CN111371985A (en) Video playing method and device, electronic equipment and storage medium
KR20190026625A (en) Image displaying method, Computer program and Recording medium storing computer program for the same
KR101842564B1 (en) Focus image surveillant method for multi images, Focus image managing server for the same, Focus image surveillant system for the same, Computer program for the same and Recording medium storing computer program for the same
CN116017136A (en) Shooting equipment control method and device, storage medium and electronic device
KR20210108691A (en) apparatus and method for multi-channel image back-up based on event, and network surveillance camera system including the same
CN111105505A (en) Method and system for quickly splicing dynamic images of holder based on three-dimensional geographic information
CN114245006B (en) Processing method, device and system
KR101264667B1 (en) Method for creating thumbnail image of video file and recording-medium recorded program thereof
CN115150548A (en) Method, equipment and medium for outputting panoramic image of power transmission line based on holder
KR200492429Y1 (en) Integrated server for remote controlling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210831