CN112752064A - Processing method and system for power communication optical cable monitoring video - Google Patents

Processing method and system for power communication optical cable monitoring video

Info

Publication number
CN112752064A
Authority
CN
China
Prior art keywords
video
image
video frame
frame
optical cable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911040621.1A
Other languages
Chinese (zh)
Inventor
Zhao Zhiping (赵志平)
Zhang Liang (张亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinzhou Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Original Assignee
Xinzhou Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinzhou Power Supply Co of State Grid Shanxi Electric Power Co Ltd filed Critical Xinzhou Power Supply Co of State Grid Shanxi Electric Power Co Ltd
Priority to CN201911040621.1A
Publication of CN112752064A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/22Adaptations for optical transmission
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20216Image averaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method and a system for processing power communication optical cable surveillance video. The method includes the following steps: a first device acquires a first video to be transmitted, where the first video is captured while the power communication optical cable is monitored; the first device performs condensation processing on the first video to obtain a second video, and transmits the second video to a second device; and after receiving the second video, the second device preprocesses it to obtain a third video, where the third video is used for fault analysis and the preprocessing eliminates information that would interfere with the fault analysis. The method and system solve the technical problem of interference in surveillance video in the related art.

Description

Processing method and system for power communication optical cable monitoring video
Technical Field
The application relates to the field of monitoring, and in particular to a method and a system for processing power communication optical cable surveillance video.
Background
In the research and application of intelligent monitoring of communication optical cables, maintaining high-quality, timely wireless transmission of the various kinds of monitoring information is very important, and is the basic technical support for the research and application of an intelligent communication optical cable monitoring system based on cellular NB-IoT (Narrow-Band Internet of Things).
The random time variation of the wireless channel, its high error burstiness, the unpredictability of the transmission link, and the sensitivity of highly compressed video data to channel errors all seriously affect wireless video transmission quality. Together, the characteristics of video data and of the wireless channel confront wireless video transmission with new challenges, mainly in three aspects. First, the huge data volume and delay sensitivity of video communication make video services clearly different from general data services, with different requirements on transmission targets, real-time performance, error rate and so on. Second, radio spectrum resources are scarce and the transmission bandwidth allocated to users is limited across different service requirements; the time-varying nature of radio channels poses further technical challenges to video transmission. Third, different wireless networks differ in technologies, standards, protocols and architectures, and user terminals within the networks are diverse, which also confronts wireless video transmission with new problems.
No effective solution to these problems has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide a method and a system for processing power communication optical cable surveillance video, to at least solve the technical problem of interference in surveillance video in the related art.
According to an aspect of the embodiments of the present application, there is provided a method for processing power communication optical cable surveillance video, including: a first device acquires a first video to be transmitted, where the first video is acquired while the power communication optical cable is monitored; the first device performs condensation processing on the first video to obtain a second video, and transmits the second video to a second device; and after receiving the second video, the second device preprocesses it to obtain a third video, where the third video is used for fault analysis and the preprocessing eliminates information that would interfere with the fault analysis.
According to another aspect of the embodiments of the present application, there is also provided a system for processing power communication optical cable surveillance video, including: a first device configured to acquire a first video to be transmitted, where the first video is acquired while the power communication optical cable is monitored, the first device being further configured to perform condensation processing on the first video to obtain a second video and to transmit the second video to a second device; and the second device, configured to preprocess the second video after receiving it to obtain a third video, where the third video is used for fault analysis and the preprocessing eliminates information that would interfere with the fault analysis.
According to another aspect of the embodiments of the present application, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present application, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiments of the present application, a first device acquires a first video to be transmitted, where the first video is acquired while the power communication optical cable is monitored; the first device performs condensation processing on the first video to obtain a second video and transmits it to a second device; after receiving the second video, the second device preprocesses it to obtain a third video, where the third video is used for fault analysis and the preprocessing eliminates information that would interfere with the fault analysis, thereby solving the technical problem of interference in surveillance video in the related art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flowchart of an alternative method for processing power communication optical cable surveillance video according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an alternative video condensation scheme according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an alternative defogging process according to an embodiment of the present application;
FIG. 4 is a diagram of an alternative Laplace neighborhood template according to an embodiment of the present application; and
FIG. 5 is a schematic diagram of an alternative transformation function according to an embodiment of the application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For power communication optical cable surveillance video images, in the process from acquisition of the digital video images to their subsequent targeted processing, interference and influence from the acquisition equipment, the transmission equipment and the external environment inevitably degrade and distort video image quality, and in serious cases cause subsequent processing algorithms to fail or produce errors.
To solve these problems, the application provides power communication optical cable surveillance video image processing technologies including a video condensation method, a video image defogging technology, a video denoising technology based on compressed sensing, an adaptive image enhancement technology based on dynamic scene estimation, and a video image restoration technology based on frequency-domain filtering, so as to improve the image quality of power communication optical cable surveillance video and lay a theoretical and application foundation for intelligent analysis of power communication optical cable video images.
According to an aspect of the embodiments of the present application, an embodiment of a method for processing a power communication optical cable monitoring video is provided. Fig. 1 is a flowchart of an alternative power communication cable monitoring video processing method according to an embodiment of the present application, and as shown in fig. 1, the method may include the following steps:
step S102, a first device acquires a first video to be transmitted, wherein the first video is a video acquired while the power communication optical cable is monitored;
step S104, the first device performs condensation processing on the first video to obtain a second video, and transmits the second video to a second device;
step S106, after receiving the second video, the second device preprocesses it to obtain a third video, wherein the third video is used for fault analysis, and the preprocessing eliminates information that would interfere with the fault analysis.
Through steps S102 to S106, the preprocessing eliminates interference in the video, which solves the technical problem of interference in surveillance video in the related art.
Communication optical cable surveillance video condensation scheme:
the video compression structure and the hierarchy of the video compression method are shown in fig. 2. Firstly, an original video is divided into video frames, then related video frames are arranged in sequence according to the requirement of a search target, and finally a short and effective video is formed, and the core content of the method is a key frame selection method. In the video condensation process, the core content of the video condensation consists of three parts: video analysis, key frame extraction and concentrated video generation. The core of video concentration is extraction of key frames, and the key frame extraction used in the method can be realized through extraction of underlying visual characteristic values.
Video input. Since the video monitoring system for the power communication optical cable is built on a network topology, the input video format can be set to AVI. AVI is a video container format developed by Microsoft that can be encoded and decoded on all Windows systems. The format also meets the requirements of a network video server, so users can conveniently browse videos over the network.
Frame extraction. Frame extraction separates a video into frames: a video is composed of frames, and the number of frames is directly related to the size of the video; the larger the video, the more frames, and the smaller the video, the fewer frames. Because frames occupy a large amount of memory, ample space must be provided for the separated frames. The H.264 video codec is used, and 24 or 48 frames can be selected according to the performance of the computer's CPU and graphics card.
Feature value extraction. The visual features of key frames are extracted using a color histogram, an edge detection method and a motion compensation method. The color features, edge-detection values and block-compensation values extracted by the three methods are combined by weighted calculation, and different frames are distinguished through the resulting relation or ordering of the frames.
Color histogram. Because color is the most salient of the visual feature values in an image, the color histogram is easy to compute and is robust to motion noise from the camera; this application adopts an HSV-based color histogram as the extraction method. The key to this approach is comparing the histogram difference between two frames whose background and target are unchanged.
Suppose the distance between two adjacent frames I_i and I_{i+1} is D(I_i, I_{i+1}); the color difference is then calculated as:

D(I_i, I_{i+1}) = Σ_{j=1..n} | H_i(j) − H_{i+1}(j) |

where I_i and I_{i+1} denote the adjacent video frames, and H_i(j) and H_{i+1}(j) denote the value of the j-th bin in the color histogram of the corresponding frame, with j running from 1 to the integer n.
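As an illustration, a minimal Python sketch of this HSV-histogram distance using OpenCV; the bin count, normalization and function names are implementation choices, not specified in the application:

import cv2
import numpy as np

def hsv_histogram(frame, bins=16):
    # HSV color histogram of one video frame, L1-normalized so bins sum to 1.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, None, norm_type=cv2.NORM_L1).flatten()

def histogram_distance(frame_a, frame_b):
    # D(I_i, I_{i+1}): sum of absolute bin-wise histogram differences.
    return float(np.abs(hsv_histogram(frame_a) - hsv_histogram(frame_b)).sum())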
Edge detection. An edge is the boundary between an object and the background in an image, and also between multiple objects; if the edges in an image can be analyzed accurately, the objects in the image can be located and basic image properties (such as area, perimeter and shape) can be measured. This application adopts the Canny algorithm for edge detection and analysis:
step 1: gaussian blur. Similar to the LoG operator (Laplacian of Gaussian) for Gaussian blur, the main function is to remove noise. Since noise is also concentrated on high frequency signals, it is easily recognized as a false edge. And removing noise by applying Gaussian blur, and reducing the identification of false edges. However, since the image edge information is also a high frequency signal, the selection of the radius of the gaussian blur is important, and an excessively large radius easily makes some weak edges undetectable.
Step 2: computing gradient magnitude and direction. Edges in an image can point in different directions, so the classical Canny algorithm uses four gradient operators to compute the gradient in the horizontal, vertical and diagonal directions. In practice, four separate operators are generally not used; commonly used edge difference operators (e.g., Roberts, Prewitt, Sobel) calculate the differences Gx and Gy in the horizontal and vertical directions, from which the gradient magnitude and direction follow as G = sqrt(Gx^2 + Gy^2) and θ = arctan(Gy/Gx).
Step 3: non-maximum suppression. Non-maximum suppression is an edge-thinning method. The gradient edge obtained so far is usually more than one pixel wide; just as the Sobel operator produces thick, bright edges, non-maximum suppression helps preserve the local maximum gradient while suppressing all other gradient values, so that only the sharpest position of the gradient change remains. The gradient strength of the current point is compared with that of the points along the positive and negative gradient directions; if the current point's gradient strength is the maximum among points in the same direction, the value is retained, otherwise it is suppressed, i.e. set to 0. For example, if the direction of the current point is 90°, pointing straight up, it is compared with the pixels directly above and below it in the vertical direction.
Step 4: double thresholds. Typical edge detection algorithms use a single threshold to filter out the small gradient values caused by noise or color variation while retaining large gradient values. The Canny algorithm applies two thresholds, a high one and a low one, to distinguish edge pixels: if an edge pixel's gradient value is larger than the high threshold, it is considered a strong edge point; if it lies between the low and high thresholds, it is marked as a weak edge point; points below the low threshold are suppressed. This step of the algorithm is simple.
Step 5: hysteresis boundary tracking. Strong edge points can be taken as true edges. Weak edge points may be true edges or may be caused by noise or color changes; for accurate results, the latter should be removed. It is generally held that weak edge points caused by real edges are connected to strong edge points, while those caused by noise are not. The hysteresis boundary tracking algorithm therefore examines the 8-connected neighborhood pixels of each weak edge point: as long as a strong edge point exists there, the weak edge point is kept as a true edge.
Two threshold images are obtained using the two thresholds. The edges in the high-threshold image are then connected into contours; when a contour endpoint is reached during connection, the algorithm looks in the low-threshold image for edges that can continue the connection, and collection continues until all gaps are connected.
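A compact sketch of the Canny pipeline in steps 1-5 with OpenCV; the thresholds and kernel size are illustrative, and cv2.Canny performs steps 2-5 internally (the explicit Sobel gradient is shown only to mirror step 2):

import cv2
import numpy as np

def canny_edges(gray, low=50, high=150, ksize=5):
    # Step 1: Gaussian blur; too large a kernel can erase weak edges.
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    # Step 2, shown explicitly: differences Gx, Gy by the Sobel operator,
    # then G = sqrt(Gx^2 + Gy^2) and theta = arctan2(Gy, Gx).
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1)
    magnitude, direction = np.hypot(gx, gy), np.arctan2(gy, gx)
    # Steps 2-5 (gradient, non-maximum suppression, double threshold,
    # hysteresis boundary tracking) all run inside cv2.Canny.
    edges = cv2.Canny(blurred, low, high)
    return edges, magnitude, direction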
Block compensation. Each frame is divided into pixel blocks (e.g., 16 × 16 blocks in MPEG). The current block is predicted from an equally sized block at some position in a reference frame; the prediction involves translation only, and the size of the translation is called the motion vector. The motion vector is an essential parameter of the model and must be coded into the bitstream. Since motion vectors are not independent (the correlation between two adjacent blocks is very large), differential coding is used to reduce the code rate: adjacent motion vectors are differenced before coding, and only the difference is encoded. Entropy coding of the motion-vector components further removes their statistical redundancy. This approach yields a residual with less energy than simple subtraction, and therefore a better compression ratio.
Key frame selection. At the start of key frame extraction, the first frame is set as a key frame; then the difference between the current frame and the last extracted key frame is calculated, and if the calculated value meets the set threshold, the current frame is taken as a key frame.
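A minimal selection loop following this rule, reusing the histogram_distance sketch above; the threshold value is a hypothetical placeholder to be tuned on real footage:

import cv2

def extract_key_frames(video_path, threshold=0.4):
    # First frame is a key frame; a later frame becomes a key frame when its
    # histogram distance to the last key frame reaches the threshold.
    cap = cv2.VideoCapture(video_path)
    key_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if not key_frames or histogram_distance(key_frames[-1], frame) >= threshold:
            key_frames.append(frame)
    cap.release()
    return key_frames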
Condensed video generation. The generation template arranges the ordered key frames into a concise video in the form of a story template. The story template is a content-based generation form; the number of images in the story template can be selected as required, and the number of images per minute can be set according to the time-proportion relation. The start of a story is determined by the importance of the key frames, and data redundancy is reduced. Finally, a simplified video comprising a title object, a shot object and a key object is generated according to the order of importance and the time sequence.
Defogging technology for communication optical cable surveillance video images. In foggy weather, the quality of the collected video images degrades: contrast drops, images blur and colors are distorted. Video image defogging aims to restore the contrast and true colors of the images, improve their resolution and quality, and obtain clear images.
The defogging algorithm for communication optical cable video images mainly treats the fog in the video image sequence as a mask, or light-path propagation map, to realize defogging. The main idea is to solve for a common transmittance map of the video images and then apply it to every frame of the sequence. The method does not need to compute a transmittance map for each frame, which reduces the computation of the video defogging algorithm and increases the defogging speed.
The key element of image defogging is acquisition of the propagation map, so the efficiency bottleneck of video defogging is how to quickly obtain the propagation map of each video frame. As shown in fig. 3, video processing must take into account the high redundancy of video image data and the low-frequency characteristics of the propagation map; the haze is treated as a light-path propagation map to achieve defogging of the video images.
The atmospheric scattering model is expressed by the following formula:
I(x) = A·ρ(x)·e^(−β·d(x)) + A·(1 − e^(−β·d(x)))

where I(x) is the light intensity received at the observation point (i.e. the input foggy image), ρ(x) is the scene reflectivity (albedo), d(x) is the distance from the scene point to the observation point, i.e. the optical path, A is the total intensity of the atmospheric (ambient) light, and β is the scattering coefficient of the atmosphere.
The defogging process is shown in fig. 3. First, the atmospheric light intensity value A of the foggy video image is estimated by means of the dark channel prior algorithm, and B(x) = 1 − I(x)/A is computed; meanwhile, the background image of the video is extracted by the statistical average method. The background image is then processed with an improved partial differential equation algorithm to obtain the light-path propagation map of the fog for the background image. From the background propagation map, C(x) = 1 − ρ(x) is obtained, and the albedo ρ(x) follows by the corresponding calculation. Finally, using the atmospheric scattering model, the computed albedo ρ(x) is multiplied by the estimated atmospheric light intensity A to obtain the defogged video image sequence. After all frames of the video have been defogged using the video background propagation map, the image sequence is recombined into a video, giving the final defogged video.
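A simplified per-frame restoration sketch of this pipeline, assuming the atmospheric light A and a single background transmission map t(x) = e^(−β·d(x)) have already been estimated; the percentile-based estimate of A is a rough stand-in for the dark channel prior, and the parameter values are illustrative:

import numpy as np

def estimate_atmospheric_light(frame, percentile=99.9):
    # Rough A: take the brightest pixels of the per-pixel dark channel.
    dark = frame.min(axis=2)
    thresh = np.percentile(dark, percentile)
    return frame[dark >= thresh].max(axis=0).astype(np.float64)

def defog_frame(frame, transmission, A, t_min=0.1):
    # Invert I(x) = A*rho(x)*t(x) + A*(1 - t(x)) for the albedo rho(x),
    # reusing one background transmission map t(x) for every frame.
    I = frame.astype(np.float64)
    t = np.clip(transmission, t_min, 1.0)[..., None]
    rho = (I - A * (1.0 - t)) / (A * t)
    return np.clip(rho * A, 0, 255).astype(np.uint8)   # scene radiance A*rho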
Extracting the video background image and defogging the video must make full use of the correlation between video frames, which is mainly reflected in the background image. According to the motion of the video content, the background is divided into static-background video and dynamic-background video. For dynamic-background video, the scene depth changes constantly and acquiring scene depth information is very complicated, so every frame of the sequence must be defogged individually; processing every frame is a very large workload and prolongs the processing time of the whole video, so to reduce the running time the per-frame processing speed must be increased to meet the requirements of video processing. For static-background video, the scene depth is basically unchanged and the correlation between frames is large; exploiting this correlation, the scene depth does not need to be estimated for every frame — only the background depth information of a few key frames of the sequence needs to be extracted and applied to every frame, which greatly increases the defogging speed.
The extraction of the background of the video image sequence can be realized by adopting the following scheme:
and a statistical average method, wherein the statistical average method adopts a video image continuous sequence, and a background image of the video image sequence is obtained by taking the statistical average value. Collecting N frames of continuous images, setting fk(x, y) is the kth frame of the video image sequence, BkFor the k frame background estimate, B1=f1(x, y), (x, y) represent coordinates, and the background image BkExpressed as:
Figure RE-RE-GSB0000185953380000091
The advantage of this way of obtaining the video background image is that a mathematical recursion model is used, which is simple and computationally cheap. However, when the method is used to extract the background of a dynamic-background video from a sequence with frequent target motion, the result has a larger error.
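A sketch of this recursive statistical average in Python; it is algebraically the arithmetic mean of the first k frames, computed incrementally:

import numpy as np

def background_running_average(frames):
    # B_1 = f_1;  B_k = ((k-1)/k)*B_{k-1} + (1/k)*f_k
    background = None
    for k, frame in enumerate(frames, start=1):
        f = frame.astype(np.float64)
        background = f if k == 1 else background + (f - background) / k
    return background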
and estimating a background propagation map, namely obtaining the background light path propagation map after obtaining the background map of the video sequence frame. When the irradiance of the target surfaces of two scenes varies widely, their values may differ on the propagation map for the same depth of field region. The irradiance level of the surface of the object can be measured by the brightness level, so the relationship between the propagation map and the brightness is set as: the change in the propagation map is caused by the brightness. The specific scheme of the propagation map estimation method is as follows:
First, the color space for image conversion is selected. To separate the chroma and luminance components, the image is converted into the HSI, YUV or YCbCr color space, and the luminance component is processed. If the HSI space were chosen, then for an RGB color image, processing the luminance component would require trigonometric operations on every pixel, which is computationally heavy and slow. To reduce computation and improve speed, this application selects the YCbCr space as the conversion space, mainly because conversion between YCbCr and RGB is simple. MSR processing is then performed on the luminance component in the YCbCr space, in the following mathematical form:
R(x, y) = Σ_{n=1..N} W_n · { log[Y(x, y)] − log[F_n(x, y) * Y(x, y)] }

where R(x, y) is the output for the luminance component; N is the number of scales; W_n is the weight of each scale; Y(x, y) is the luminance image distribution; F_n(x, y) is the Gaussian surround function of the n-th scale with weight W_n; and * denotes convolution. Processing the luminance component with the MSR algorithm can leave the estimated propagation map with unnecessary detail that should not be present. To address this problem, a bilateral filter is used to obtain the background propagation map. The principle of the bilateral filter is to replace the original value with the average of spatially neighboring pixel values of similar gray level — a compromise between spatial proximity and gray-level similarity — so as to achieve the filtering effect. When bilateral filtering is performed at the same scale, small edges are filtered out while large edges are preserved. The process of computing the propagation map with the bilateral filter can be expressed mathematically as:
t(μ) = (1/k(μ)) · Σ_{ξ∈N(μ)} W_c(ξ, μ) · W_s(I(ξ), I(μ)) · I(ξ),  with  k(μ) = Σ_{ξ∈N(μ)} W_c(ξ, μ) · W_s(I(ξ), I(μ))

where I(μ) is the original foggy image; μ = (x, y) are the coordinates and N(μ) is the neighborhood of μ; W_c(·) is the spatial-domain similarity function; and W_s(·) is the gray-level similarity function. When estimating the background propagation map, processing the image with bilateral filtering not only reflects the scene information well but also eliminates the spurious details that appear in a propagation map obtained without this filtering.
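A sketch of this background propagation-map estimation, combining MSR on the luminance channel with bilateral filtering via OpenCV; the scales, weights and filter parameters are illustrative, and OpenCV's YCrCb conversion stands in for the YCbCr space chosen above:

import cv2
import numpy as np

def background_transmission_map(background_bgr, scales=(15, 80, 250)):
    ycc = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2YCrCb)
    Y = ycc[:, :, 0].astype(np.float64) + 1.0          # avoid log(0)
    w = 1.0 / len(scales)                              # equal scale weights W_n
    msr = np.zeros_like(Y)
    for sigma in scales:
        surround = cv2.GaussianBlur(Y, (0, 0), sigma)  # F_n(x, y) * Y(x, y)
        msr += w * (np.log(Y) - np.log(surround))
    msr = cv2.normalize(msr, None, 0, 1, cv2.NORM_MINMAX).astype(np.float32)
    # Bilateral filter: spatial-proximity and gray-similarity weights,
    # removing spurious MSR detail while keeping large edges.
    return cv2.bilateralFilter(msr, d=9, sigmaColor=0.1, sigmaSpace=7)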
The background propagation map reflects the depth information of the video scene well. Since the background of a static-background video is essentially unchanged, the background propagation map is taken as the propagation map of every frame of the video, which avoids repeatedly computing the atmospheric light value and the propagation map for each frame and increases video processing speed.
The communication optical cable monitoring image denoising, enhancing and restoring technology comprises the following steps:
the communication optical cable monitoring image denoising method based on compressed sensing has the advantages that noise is inevitably generated by the influence of illumination, rain and snow, wind power, temperature and the like of a video image, and adverse effects are generated on the identification matching and the like of the image, so that the denoising of the image has certain significance on a video monitoring system. According to the method, a compression perception theory is applied to denoising of the power equipment video image, a sparse representation denoising method is used, an overcomplete dictionary is automatically updated through a K-SVD algorithm, a sparse coefficient is solved on the basis of the dictionary, and finally the image is reconstructed through the sparse coefficient to obtain a denoised image.
Compressed sensing and sparse representation. Compressed sensing (CS) theory divides signal sampling into three steps: sparse representation, projection observation and signal reconstruction. The theory rests on the signal being sparse, or sparse under some transform; sparsity is the basis on which compressed sensing processes a signal. Sparse representation means accurately representing the original signal with as few atoms as possible in some transform domain.
The idea of sparse representation in image processing is to construct an overcomplete dictionary D from the images of a training set and then solve for the sparse solution x of a test image b under D. Only the representation coefficients of training samples belonging to the same class as the test image are nonzero in x; the coefficients of samples from other classes are all zero. If the number of classes is large enough, the representation coefficients are sparse relative to the whole dictionary.
The dictionary is updated with the K-SVD algorithm: the column vectors of the dictionary and the coefficients are updated simultaneously, which accelerates convergence and yields a lower average reconstruction error in image denoising. Moreover, the higher the signal-to-noise ratio — that is, the closer the training images are to the original image — the more obvious the denoising effect. Therefore the K-SVD algorithm is used for dictionary learning to obtain the overcomplete dictionary D.
The reconstruction algorithm is the other important part of processing images with compressed sensing. Since this application chooses a sparse representation method based on the overcomplete dictionary, image reconstruction can be regarded as the process of solving the sparse decomposition coefficients of the image b over the overcomplete dictionary D.
The core of the OMP algorithm is to orthogonalize the atoms of the overcomplete dictionary by Gram-Schmidt orthogonalization and then decompose the signal on the space spanned by the orthogonal atoms to obtain the sparse decomposition coefficients. In each iteration of OMP, the important components of the residual signal are re-identified and the atom in the sparse dictionary that best matches them is found. However, when vector similarity is measured by the inner-product criterion, it is not easy to find atoms very close to the residual signal. Therefore a Dice coefficient is introduced into the matching of atoms with the residual signal as the selection criterion for the optimal atom: because the Dice coefficient makes the larger coefficients in a vector more prominent, the important components of the residual signal can be located quickly and accurately, so the DOMP variant can screen the atoms matching the residual signal from the sparse dictionary more precisely. By introducing the Dice similarity coefficient into the matching criterion between atoms and the residual signal, the DOMP algorithm makes the modulus of the updated residual signal smaller.
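For reference, a minimal OMP sparse-coding sketch in NumPy; it uses the standard inner-product atom selection rather than the Dice-coefficient criterion of the DOMP variant described above, and K-SVD dictionary training is omitted:

import numpy as np

def omp(D, b, sparsity):
    # D: (m, K) overcomplete dictionary with unit-norm columns; b: (m,) signal.
    residual = b.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Pick the atom most correlated with the residual (inner product).
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected coefficients by least squares, then update
        # the residual: this is the orthogonalization step of OMP.
        coeffs, *_ = np.linalg.lstsq(D[:, support], b, rcond=None)
        residual = b - D[:, support] @ coeffs
    x[support] = coeffs
    return x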
This application optimizes the denoising of power grid video images by combining compressed sensing theory with the sparse-representation denoising method, and obtains a good denoising effect.
Under severe conditions such as haze, dust and insufficient light at night, the quality of the images captured by the system usually drops sharply: contrast is low, the picture is murky, and useful information is hard to extract. A real-time video image enhancement technology can markedly improve the visual effect of the video images, raising the contrast between target and background, highlighting target information, and attenuating useless background information.
The method first sharpens the image with the Laplace operator to bring out the detail of the original image; it then selects the corresponding gray-mapping transformation function according to the detection and judgment of the dynamic scene; finally it adjusts the image gray values according to the mapping relation, thereby obtaining the enhanced image.
Image sharpening based on the Laplace operator. As a second-order differential operator, the Laplace operator can sharpen the details and textures of an image. The sharpening algorithm is centered on the pixel to be processed and constructs an n × n neighborhood filtering window; to guarantee the processing effect while limiting complexity, and considering both the computational efficiency and the energy efficiency of the algorithm, n is taken as 3 in the design. The pixel to be processed uses the expression obtained from the 3 × 3 Laplacian; assuming the common four-neighbour Laplace neighborhood template shown in fig. 4, the response is:

f_lap(x, y) = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4·f(x, y)

where f_lap(x, y) denotes the gray value of the pixel f(x, y) to be processed after filtering by the Laplace operator.
The sharpened pixel gray value is then given by a function of the form:

f_sharp(x, y) = f(x, y) − p·f_lap(x, y)

where p denotes a preset parameter. For a pixel f(x, y) with a high gray value, the result after enhancement filtering may exceed 255, the upper limit of an 8-bit grayscale image, causing positive overflow and a calculation error. Considering both the quality of the effect and the amount of calculation, the gray value is replaced by the maximum value 255 in that case; likewise, for a pixel with negative overflow, the gray value is replaced by the minimum value 0. The simulation results show that this slight error has little influence on the enhanced image quality.
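A sketch of this sharpening step; since FIG. 4 is not reproduced, the four-neighbour 3 × 3 template and the sign convention are assumptions, and p is illustrative:

import numpy as np
from scipy.ndimage import convolve

def laplacian_sharpen(gray, p=0.5):
    # Assumed FIG. 4 template: 4-neighbour Laplacian with center -4.
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=np.float64)
    f = gray.astype(np.float64)
    f_lap = convolve(f, kernel, mode='nearest')
    # f - p*f_lap, with positive/negative overflow clamped to 255 / 0.
    return np.clip(f - p * f_lap, 0, 255).astype(np.uint8)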
Scene judgment module. After the original image has been Laplace-sharpened, this module extracts gray-value scene information, analyzes and judges the scene type, selects a suitable gray mapping function according to the scene characteristics, and adjusts the gray range of the image to improve its contrast.
mean_sharp = (1/(M·N)) · Σ_{x=1..M} Σ_{y=1..N} f_sharp(x, y)

mean_sharp < mean_dark_std    (over-dark scene)

mean_sharp > mean_bright_std    (over-bright scene)

The first expression is the average gray value of the image, in which f_sharp(x, y) denotes the pixel value of the sharpened image at coordinates (x, y), and M and N are the numbers of pixel rows and columns in the image; the latter two expressions are the judgment criteria for an over-dark scene and an over-bright scene, respectively, where mean_dark_std and mean_bright_std are the corresponding judgment thresholds. When the calculated average gray value satisfies the second expression, i.e. the average gray value of the image is smaller than the dark-scene threshold, the image is judged to be in over-dark scene mode; when the third expression is satisfied, i.e. the average gray value is larger than the bright-scene threshold, the image is judged to be in over-bright scene mode.
When the average gray value of the image satisfies neither of the latter two expressions, whether the image is in a dim scene or in infrared scene mode is judged according to the following formulas:

dark_ratio = num{ f_sharp(x, y) < dark_pth } / (M·N),  bright_ratio = num{ f_sharp(x, y) > bright_pth } / (M·N)

dark_ratio < dark_ratio_pth  and  bright_ratio < bright_ratio_pth

dark_ratio + bright_ratio < IR_ratio_pth

where num{·} counts the pixels satisfying the condition; bright_pth and dark_pth are the judgment thresholds that decide whether a pixel to be processed counts as a high-gray-value or low-gray-value pixel, while bright_ratio_pth and dark_ratio_pth denote the threshold proportions of bright and dark pixels in an image. If the sharpened image satisfies the second expression, i.e. the proportions of both bright and dark pixels in the image are smaller than the thresholds, most of the pixels are concentrated in a small gray range, and the image is therefore judged to be a dim scene.

When the sharpened image satisfies the third expression, i.e. the sum of the proportions of bright and dark pixels is smaller than a specific value, most of the image's pixel information is assumed by default to be concentrated in a certain middle region, and the image is judged to be in infrared scene mode.
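Putting the five-way judgment together, a hedged sketch; every threshold is a hypothetical placeholder, and the decision order (infrared checked before dim, since its condition is stricter) is an assumption not fixed by the text:

import numpy as np

def classify_scene(sharpened, mean_dark_std=60, mean_bright_std=190,
                   dark_pth=50, bright_pth=200,
                   ratio_pth=0.05, ir_ratio_pth=0.02):
    f = sharpened.astype(np.float64)
    mean_sharp = f.mean()
    if mean_sharp < mean_dark_std:
        return "over-dark"
    if mean_sharp > mean_bright_std:
        return "over-bright"
    dark_ratio = (f < dark_pth).mean()      # proportion of dark pixels
    bright_ratio = (f > bright_pth).mean()  # proportion of bright pixels
    if dark_ratio + bright_ratio < ir_ratio_pth:
        return "infrared"
    if dark_ratio < ratio_pth and bright_ratio < ratio_pth:
        return "dim"
    return "ideal"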
Gray-mapping transformation functions. After the dynamic-scene recognition processing, the sharpened image is classified as an ideal image, an over-dark image, an over-bright image, a dim image or an infrared image. In the adaptive enhancement algorithm of the present application, a corresponding gray-range mapping is performed for the features of each of the five image types. If the sharpened image is recognized as over-dark, the dynamic range of the low-gray pixels must be stretched in order to raise the average gray value of the image substantially; if it is recognized as over-bright, the average gray value must be reduced to a certain degree, so the dynamic range of the high-gray pixels is compressed; if it is recognized as dim, i.e. the pixel gray values are distributed around a certain middle gray value, that middle range should be stretched as much as possible toward the surrounding dynamic range. For an infrared image, strong light directly in front of the CCD camera gives the object a high gray value, with light spots fading from bright to dark around it; opposite to the dim image, the region of light-dark variation caused by the active camera's illumination is compressed, the brightest part has its pixel values reduced as much as possible, and different parameters are used in different brightness ranges to bring the image as close as possible to a normal-mode scene. For an ideal image, no large adjustment of the gray-value distribution is needed.
In view of this analysis, the algorithm of the present application exploits the softening effect of the logarithmic function on brightness in image enhancement, and meets the different enhancement requirements of the five scene modes by setting different parameter values. The adaptive enhancement algorithm constructs a specific curve based on the power transformation function to enhance dim images. Corresponding to the five scenes above, the algorithm constructs five types of transformation functions with different gray-mapping characteristics, shown in fig. 5 as the function curves that adaptively adjust the gray mapping of image gray values in the different scenes.
Transformation function for the over-dark image and the ideal image. For these two scenes, the gray-value mapping is completed with the following formula:

[gray-value mapping function for the over-dark/ideal scenes; given only as an image in the original]

In the formula, f_sharp(x, y) denotes the pixel value of the sharpened image at (x, y); f_enh(x, y) is the pixel value of the finally obtained enhanced image; p is a control parameter that changes the mapping characteristic of the transformation curve; and mean_std and mean_sharp denote the standard mean parameter and the gray mean of the image to be processed, respectively.

The function is composed of two linearly added parts: the logarithmic part softly expands the gray range of the relevant band of pixel values, while the mean_std and mean_sharp terms appropriately raise the overall gray value of the original image. As shown in fig. 5, the mapping curve for the ideal image only adjusts the gray range by a small amplitude to increase contrast while retaining the original information of the image, whereas the curve for the over-dark image must greatly expand the gray dynamic range through a larger mapping slope in the low-gray region.
Transformation function for the over-bright image. For an over-bright scene, the gray-value mapping is completed with the following formula:

[gray-value mapping function for the over-bright scene; given only as an image in the original]

In the formula, as in the over-dark scene, f_sharp(x, y) denotes the pixel value of the sharpened image at (x, y); f_enh(x, y) is the pixel value of the finally obtained enhanced image; p is a control parameter; and mean_std and mean_sharp denote the standard mean parameter and the gray mean of the image to be processed, respectively. For an image in an over-bright scene, the corresponding gray mapping curve (see the over-bright curve in fig. 5) compresses the dynamic range of the high-gray region and appropriately reduces the overall gray value of the image to restore it.
Transformation function for the dim image. To enhance an image in a dim scene, the algorithm of the present application maps the gray values with the Semi-S transformation function shown below:

[Semi-S gray-value mapping function for the dim scene; given only as an image in the original]

In the formula, f_sharp(x, y) denotes the pixel value of the sharpened image at (x, y); f_enh(x, y) is the pixel value of the finally obtained enhanced image; and c and γ are control parameters. To make the middle part of the mapping curve steeper, the algorithm uses a piecewise exponential function, realizing expansion of the dynamic range on both sides of the middle gray region of the image, as shown by the dim-image mapping curve in fig. 5.
Transformation function for the infrared image. For enhancement of a dim infrared image, the application maps the gray values according to the Semi-S transformation function shown below:

[Semi-S gray-value mapping function for the infrared scene; given only as an image in the original]

In the formula, f_sharp(x, y) denotes the pixel value of the processed image at (x, y), and f_enh(x, y) is the pixel value of the finally obtained enhanced image. With the piecewise exponential function, the right part of the curve can be made flatter (see the infrared-image mapping curve in fig. 5), effectively compressing the dynamic range of the image in the higher gray region.
According to the judged image scene, the algorithm adaptively selects a gray-mapping transformation function to adjust the gray dynamic range and improve the contrast of the image, thereby realizing enhancement of images in different scenes.
The algorithm achieves a satisfactory objective evaluation in all scenes, with strong adaptive capacity and good performance, and can effectively enhance images in different scenes. The adaptive image enhancement algorithm based on dynamic scene estimation gives a satisfactory processing effect in various scenes, clearly improves the visual effect, and retains the detail information of the original image well.
Communication optical cable surveillance video image restoration based on frequency-domain filtering. When a video system is used to monitor the power communication optical cable, image blurring is common during communication because of the imaging conditions of the optical system, parameter limitations of the transmission medium and of the transmitting and receiving equipment, the characteristics of the medium in scene transmission, and the various kinds of noise introduced during image acquisition and processing. To obtain image and video files of good quality, the digital images must be restored to their original form. This application mainly studies inverse filtering restoration and wiener filtering restoration, applied to the restoration of power communication optical cable video images.
Image degradation and image restoration. Geometric distortion and gray-level distortion introduced by the imaging system, motion blur caused by relative motion, and the like, are collectively referred to as image degradation or image blurring.
Image restoration, also called image recovery, differs from image enhancement: it objectively emphasizes improving image quality by finding the cause of image degradation through research and analysis and building a corresponding mathematical model. Starting from the cause of degradation, the contaminated or distorted image is analyzed to extract the key data in the video image frames. The recovery process establishes a degradation model from the key information of the degraded video frames, analyzes the cause of frame distortion, and designs a filter that computes, from the degraded image g(x, y), a predicted estimate f̂(x, y) of the real image frame, applying a predefined error criterion to best estimate the real image f(x, y). To model the noise-contaminated degradation of image frames during video transmission, let H be the degradation system acting on the original frame f(x, y); after passing through the degradation system and being disturbed by random noise n(x, y), the degraded frame is

g(x, y) = H[f(x, y)] + n(x, y)

If the degraded image g(x, y) is restored by inverting the degradation model H, an approximation of the original image is obtained, realizing restoration of the original image.
Modeling for degraded-image restoration can be divided, by the domain in which the image frames are processed, into spatial-domain restoration and frequency-domain restoration. Spatial-domain filtering restoration filters the noise in the spatial domain on the basis of a known noise model (e.g., Gaussian noise, uniformly distributed noise or impulse noise); frequency-domain filtering estimates the degradation system model and restores the image in the frequency domain. Because frequency-domain filtering restores images with better quality, it also lends itself better to hardware circuit implementation.
Inverse filtering for image restoration. The degradation model of a linear shift-invariant system is:

g(x, y) = f(x, y) * h(x, y) + n(x, y)

where g(x, y) is the degraded image, h(x, y) is the degradation function, f(x, y) is the input image, n(x, y) is the noise interference, and * denotes convolution.
In the frequency domain, one can obtain:
G(u,v)=H(u,v)F(u,v)+N(u,v),
where G(u, v), H(u, v), F(u, v) and N(u, v) are the Fourier transforms of g(x, y), h(x, y), f(x, y) and n(x, y), respectively.
Assuming that the effects of noise are ignored, the frequency domain of the degradation model can be reduced to:
G(u,v)=H(u,v)F(u,v),
If the transfer function H(u, v) of the system is known, the restored image F(u, v) = G(u, v)/H(u, v) can be obtained; inverse filtering is thus a method of restoring the image frame by applying the Fourier transform and its inverse. In general, however, noise interference exists in the system, so the frequency-domain representation of the degradation model and of the image restored by inverse filtering becomes:
F̂(u, v) = G(u, v)/H(u, v) = F(u, v) + N(u, v)/H(u, v).
When the image frame is degraded by only slight noise contamination, inverse filtering recovers the original image well. If the noise is distributed over a wide frequency range, or the transfer function H(u, v) is small and close to 0, the second term N(u, v)/H(u, v) in the above formula becomes very large and the noise severely affects the image quality. Because H(0, 0) equals the average value of h(x, y), and in practice H(u, v) decreases rapidly as the distance of (u, v) from the origin increases while the noise spectrum varies much more slowly, performing the inverse-filtering restoration only within the central frequency range near the origin gives a better effect when recovering slightly degraded image frames.
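The following sketch restricts inverse filtering to the central frequency range near the origin, as described above; the cutoff radius and the small constant guarding against division by H(u, v) ≈ 0 are illustrative choices, not values fixed by this document.

    import numpy as np

    def inverse_filter(g, H, radius=0.1, eps=1e-3):
        # F_hat(u, v) = G(u, v) / H(u, v), applied only inside a central
        # (low-frequency) band where H(u, v) is still large; outside the band
        # the degraded spectrum is kept unchanged to avoid amplifying noise.
        G = np.fft.fft2(g)
        H_safe = np.where(np.abs(H) < eps, eps, H)   # guard against H ~ 0
        u = np.fft.fftfreq(g.shape[0])[:, None]
        v = np.fft.fftfreq(g.shape[1])[None, :]
        low = np.sqrt(u ** 2 + v ** 2) <= radius     # central band near the origin
        F_hat = np.where(low, G / H_safe, G)
        return np.real(np.fft.ifft2(F_hat))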
Wiener filtering is used to restore the image. Let the vectors f, g and n correspond to f(x, y), g(x, y) and n(x, y), respectively; then n = g − Hf, where H denotes the degradation matrix of the system. An estimate f̂ of f is sought such that Hf̂ approaches g in the sense of minimum mean square error, which requires the norm of n to be minimal:
‖n‖² = ‖g − Hf̂‖².
A linear operation on f̂ by a transformation matrix Q is introduced, and ‖Qf̂‖² is minimized subject to the above constraint. Establishing the objective function with a Lagrange multiplier α:
J(f̂) = ‖Qf̂‖² + α(‖g − Hf̂‖² − ‖n‖²),
differentiating both sides with respect to f̂ and setting the result to 0, we obtain:
f̂ = (HᵀH + γQᵀQ)⁻¹Hᵀg,
where γ = 1/α can be adjusted to satisfy the constraint.
The estimation function F̂(u, v) to be recovered in the frequency domain is expressed as:
F̂(u, v) = [H*(u, v) / (|H(u, v)|² + Snn(u, v)/Sff(u, v))] · G(u, v),
where H(u, v) is the degradation function, |H(u, v)|² = H*(u, v)H(u, v), Snn(u, v) = |N(u, v)|² is the power spectrum of the noise, and Sff(u, v) = |F(u, v)|² is the power spectrum of the undegraded image.
The transfer function of the wiener filter is:
Hw(u, v) = H*(u, v) / (|H(u, v)|² + K),
where K approximates the noise-to-signal power ratio Snn(u, v)/Sff(u, v). Under obvious degradation of the original image, if the signal-to-noise ratio is high, Sff(u, v) is much greater than Snn(u, v), i.e. K takes a small value, and Hw(u, v) tends to 1/H(u, v): the wiener filter degenerates into the inverse filter, so inverse filtering is a special case of wiener filtering. If the signal-to-noise ratio is low, Snn(u, v) is much greater than Sff(u, v), and Hw(u, v) tends to 0; that is, wiener filtering avoids the problem of inverse filtering amplifying the noise.
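A minimal NumPy sketch of this filter, using a constant K to approximate the ratio Snn(u, v)/Sff(u, v) (the default value of K is an illustrative assumption; in practice it is tuned to the noise level):

    import numpy as np

    def wiener_filter(g, H, K=0.01):
        # Hw(u, v) = H*(u, v) / (|H(u, v)|^2 + K); K -> 0 recovers the inverse
        # filter, while a larger K suppresses noise at low signal-to-noise ratio.
        G = np.fft.fft2(g)
        Hw = np.conj(H) / (np.abs(H) ** 2 + K)
        return np.real(np.fft.ifft2(Hw * G))

Together with the degradation sketch above, a round trip is simply g, H = degrade(f, gaussian_psf(f.shape)) followed by f_hat = wiener_filter(g, H).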
Because the inverse filtering method is extremely sensitive to noise and requires the image to be only slightly degraded with a high signal-to-noise ratio, inverse filtering is suitable for restoring images affected by small noise; wiener filtering suppresses noise automatically and is therefore adopted for restoring images with larger degradation.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, a system for implementing the above processing method for the power communication optical cable monitoring video is further provided. The system comprises:
the first equipment is used for acquiring a first video to be transmitted, wherein the first video is a video acquired when the power communication optical cable is monitored;
the first equipment is also used for carrying out concentration processing on the first video to obtain a second video and transmitting the second video to the second equipment;
and the second equipment is used for preprocessing the second video after receiving the second video to obtain a third video, wherein the third video is used for carrying out fault analysis, and the preprocessing is used for eliminating interference information of the fault analysis.
The first device is further configured to: acquiring a first video frame to be processed currently in a first video; under the condition that a first video frame is a key frame, the first video frame is divided into a first sub-image and a second sub-image, wherein the first sub-image is an image carrying an electric power communication optical cable, and the second sub-image is a background image not carrying the electric power communication optical cable; and using the second sub-image as a background image of a second video frame, wherein the second video frame is a video frame of the first video, and the acquisition time of the second video frame is after that of the first video frame.
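As a hedged sketch of the key-frame criterion that drives this concentration processing, assuming the color-histogram feature with an absolute-difference measure (the bin count, normalization and threshold value are illustrative choices, not fixed by this application):

    import numpy as np

    def color_histogram(frame, bins=64):
        # Per-channel color histogram of an H x W x 3 uint8 frame, flattened.
        return np.concatenate(
            [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
        ).astype(np.float64)

    def is_key_frame(frame, prev_key_hist, threshold=0.25):
        # Flag the frame as a key frame when the difference between its color
        # histogram and that of the last key frame reaches the target threshold.
        hist = color_histogram(frame)
        if prev_key_hist is None:
            return True, hist                   # first frame: always a key frame
        diff = np.abs(hist - prev_key_hist).sum() / hist.sum()  # normalized D
        return diff >= threshold, hist

Frames that fail the test reuse the background sub-image of the last key frame, which is what reduces the volume of video data to be transmitted.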
It should be noted here that the above modules implement the same examples and application scenarios as the corresponding method steps, but are not limited to the disclosure of the above embodiments.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present application also provide a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for executing the processing method of the power communication optical cable monitoring video.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (9)

1. A processing method for monitoring videos of an electric power communication optical cable is characterized by comprising the following steps:
the method comprises the steps that first equipment obtains a first video to be transmitted, wherein the first video is a video acquired when the power communication optical cable is monitored;
the first equipment carries out concentration processing on the first video to obtain a second video, and transmits the second video to second equipment;
after receiving the second video, the second device preprocesses the second video to obtain a third video, wherein the third video is used for performing fault analysis, and the preprocessing is used for eliminating interference information of the fault analysis.
2. The method of claim 1, wherein the first device performing the enrichment on the first video comprises:
acquiring a first video frame to be processed currently in the first video;
under the condition that the first video frame is a key frame, dividing the first video frame into a first sub-image and a second sub-image, wherein the first sub-image is an image carrying the power communication optical cable, and the second sub-image is a background image not carrying the power communication optical cable;
and using the second sub-image as a background image of a second video frame, wherein the second video frame is a video frame of the first video, and the acquisition time of the second video frame is after that of the first video frame.
3. The method of claim 2, wherein the first device performing the enrichment on the first video comprises:
acquiring a feature value of the first video frame, wherein the feature value comprises at least one of a feature value of a color histogram of the first video frame, a feature value of edge detection of the first video frame, and a feature value of motion compensation of the first video frame;
and under the condition that the difference value between the characteristic value of the first video frame and the characteristic value of a third video frame reaches a target threshold value, determining that the first video frame is a key frame in the first video, wherein the third video frame is a key frame which is acquired before the first video frame and has the closest acquisition time to the first video frame.
4. The method of claim 3, wherein obtaining feature values of a color histogram of the first video frame comprises:
obtaining a characteristic value D(Ii, Ii+1) of the color histogram of the first video frame according to the following formula:
D(Ii, Ii+1) = Σ (j = 1 to n) |Hi(j) − Hi+1(j)|,
wherein Ii and Ii+1 represent the color histograms of adjacent first video frames, Hi(j) and Hi+1(j) represent the color value of the jth pixel point in the color histogram of the corresponding first video frame, and j takes integer values from 1 to n.
5. The method of any of claims 1-4, wherein pre-processing the second video comprises:
performing at least one of a defogging process, an image denoising process, an image enhancement process, and an image restoration process on the second video.
6. A processing system for monitoring videos of power communication optical cables is characterized by comprising:
the device comprises a first device and a second device, wherein the first device is used for acquiring a first video to be transmitted, and the first video is acquired when the power communication optical cable is monitored;
the first equipment is also used for carrying out concentration processing on the first video to obtain a second video and transmitting the second video to second equipment;
the second device is configured to, after receiving the second video, pre-process the second video to obtain a third video, where the third video is used for performing fault analysis, and the pre-process is used to eliminate interference information of the fault analysis.
7. The system of claim 6, wherein the first device is further configured to:
acquiring a first video frame to be processed currently in the first video;
under the condition that the first video frame is a key frame, dividing the first video frame into a first sub-image and a second sub-image, wherein the first sub-image is an image carrying the power communication optical cable, and the second sub-image is a background image not carrying the power communication optical cable;
and using the second sub-image as a background image of a second video frame, wherein the second video frame is a video frame of the first video, and the acquisition time of the second video frame is after that of the first video frame.
8. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 5.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 5 by means of the computer program.
CN201911040621.1A 2019-10-29 2019-10-29 Processing method and system for power communication optical cable monitoring video Pending CN112752064A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911040621.1A CN112752064A (en) 2019-10-29 2019-10-29 Processing method and system for power communication optical cable monitoring video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911040621.1A CN112752064A (en) 2019-10-29 2019-10-29 Processing method and system for power communication optical cable monitoring video

Publications (1)

Publication Number Publication Date
CN112752064A true CN112752064A (en) 2021-05-04

Family

ID=75640271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911040621.1A Pending CN112752064A (en) 2019-10-29 2019-10-29 Processing method and system for power communication optical cable monitoring video

Country Status (1)

Country Link
CN (1) CN112752064A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117978271A (en) * 2024-04-02 2024-05-03 浙江大学 Optical fiber communication strong interference suppression method, system, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210504