CN113163173B - Video data acquisition processing and transmission method based on big data - Google Patents

Info

Publication number
CN113163173B
CN113163173B
Authority
CN
China
Prior art keywords
monitoring
value
marking
video
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110461376.2A
Other languages
Chinese (zh)
Other versions
CN113163173A (en)
Inventor
沈勤标 (Shen Qinbiao)
李军辉 (Li Junhui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TaiAn Power Supply Co of State Grid Shandong Electric Power Co Ltd
Original Assignee
TaiAn Power Supply Co of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by TaiAn Power Supply Co of State Grid Shandong Electric Power Co Ltd
Priority to CN202110461376.2A
Publication of CN113163173A
Application granted
Publication of CN113163173B
Legal status: Active
Anticipated expiration: pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Abstract

The invention discloses a video data acquisition, processing and transmission method based on big data, in the technical field of data processing. The method classifies the monitoring sub-regions, judges whether they need to be adjusted and, if so, adjusts and combines several monitoring sub-regions into a new monitoring sub-region. Merging adjacent sub-regions with high attention values avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improves video data acquisition efficiency. The multi-channel video data are sent to the corresponding data processing terminal to be spliced and fused into panoramic video data, which improves data processing efficiency and is convenient for security personnel to browse and check. The method also sends each check video to security personnel for browsing and checking according to its monitoring value, providing early-warning and active-defense functions.

Description

Video data acquisition processing and transmission method based on big data
Technical Field
The invention relates to the technical field of data processing, in particular to a video data acquisition processing and transmission method based on big data.
Background
With the development of multimedia technology and the growing demand for exchanging many types of information, the computer and consumer electronics industries are merging into a new digital information industry. In particular, with the integration of display and control technology, large-scale, smooth video display has become an important new problem to solve. In fields such as large-screen splicing display walls, security video monitoring, video conferencing, digital television, education and training, business demonstration and entertainment, massive video information must be processed, and many high-end engineering applications require mixed processing of multiple different video input sources.
Many existing video monitoring systems are simple, non-intelligent video storage systems with obvious defects. Because video monitoring areas are scattered across the network, they are difficult to manage; monitoring areas cannot be classified, so security personnel easily become fatigued when browsing and troubleshooting monitoring video and may carelessly miss key information. Moreover, video data cannot be sent to security personnel for investigation according to its monitoring value, so there is no early-warning or active-defense function. A video data acquisition, processing and transmission method based on big data is therefore provided.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a video data acquisition, processing and transmission method based on big data. The method adjusts and combines adjacent monitoring sub-regions with high attention values into a new monitoring sub-region, avoiding the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improving video data acquisition efficiency. It can also reasonably select, according to the transmission value, the data processing terminal that receives the multi-channel video data of a monitoring sub-region and splices and fuses them into panoramic video data, improving data processing efficiency and making browsing and checking convenient for security personnel.
The purpose of the invention is achieved by the following technical scheme: a video data acquisition, processing and transmission method based on big data comprises the following steps:
Step one: setting a plane coordinate system according to the plane where the monitoring area is located, uniformly dividing the monitoring area into several monitoring sub-regions and marking them i, where i = 1, …, n and i is a positive integer; classifying the monitoring sub-regions, judging whether they need to be adjusted and, if so, adjusting and combining several monitoring sub-regions into a new monitoring sub-region;
according to the invention, adjacent monitoring sub-regions with high attention values are adjusted and combined into a new monitoring sub-region; this avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improves video data acquisition efficiency;
Step two: collecting the video data of the monitoring sub-region, in which several video collectors are arranged, so the video data are multi-channel video data; the multi-channel video data are sent to the corresponding data processing terminal to be spliced and fused into panoramic video data; the method reasonably selects, according to the transmission value, the data processing terminal that receives the multi-channel video data of the monitoring sub-region and splices and fuses them into panoramic video data, improving data processing efficiency and making browsing and troubleshooting convenient for security personnel;
Step three: analyzing the panoramic video data; when a person is detected in the corresponding monitoring sub-region, a mark-start instruction is generated, and when the monitoring sub-region is detected to be empty again, a mark-stop instruction is generated; the controller starts marking the panoramic video data on receiving the mark-start instruction and stops marking on receiving the mark-stop instruction; the panoramic video data between the mark-start and mark-stop instructions are marked as a check video, and the unmarked panoramic video data are marked as common video;
Step four: performing monitoring-value analysis on the check video to obtain its monitoring value GK; comparing GK with a monitoring threshold;
if GK is greater than or equal to the monitoring threshold, marking the corresponding check video as an early-warning video and sending it to the mobile phone terminal of a security worker, together with reminding information prompting the worker to browse and check the early-warning video; this provides early warning and active defense;
Step five: transmitting the check video and the common video to a cloud platform for storage.
Further, in step one, the monitoring sub-regions are classified as follows:
S11: marking the video data collected by a monitoring sub-region as regional video data and obtaining the attention information of the regional video data; the attention information comprises the attention number, the attention frequency and the attention duration;
S12: marking the attention number of the regional video data as C1, the attention frequency as C2 and the attention duration as C3;
obtaining the attention value DC of the regional video data using the formula DC = C1×a1 + C2×a2 + C3×a3, where a1, a2 and a3 are coefficient factors;
S13: comparing the attention value DC with an attention threshold; if DC is less than or equal to the attention threshold, the corresponding monitoring sub-region does not need to be adjusted;
if DC is greater than the attention threshold, the corresponding monitoring sub-region needs to be adjusted; step S14 is then executed, and the monitoring sub-region is marked as a region to be adjusted;
S14: acquiring a monitoring sub-region adjacent to the region to be adjusted, executing step S12 for the adjacent monitoring sub-region, and obtaining the attention value of its regional video data; comparing that attention value with the attention threshold;
if the attention value is greater than the attention threshold, the two monitoring sub-regions are adjusted and combined into a new monitoring sub-region, which is marked as the region to be adjusted; the monitoring sub-regions adjacent to it are then examined in the same way;
if the attention value is less than or equal to the attention threshold, the two monitoring sub-regions are not combined, and the next monitoring sub-region adjacent to the region to be adjusted is examined in the same way.
Further, in step two, the multi-channel video data are sent to the corresponding data processing terminal to be spliced and fused into panoramic video data, as follows:
S21: marking the number of video collectors in the monitoring sub-region as L1 and the number of video collectors currently accessing a data processing terminal as G1; setting the maximum number of video collectors the data processing terminal can access as G2 and the minimum as G3;
calculating the difference G2 - G1 to obtain the maximum remaining capacity; marking every data processing terminal whose maximum remaining capacity is greater than L1 as a primary selection terminal;
S22: obtaining the access coefficient GF of the primary selection terminal using the formula GF = (G2 - G1)/(G1 - G3);
S23: marking the delay of transmitting video data from video collector m to the primary selection terminal as Hm, m = 1, …, L1;
comparing each delay Hm with a delay threshold; if Hm is greater than the delay threshold, the delay is marked as an influence delay; the number of influence delays is counted and marked as L2; the difference between each influence delay and the delay threshold is calculated to obtain a super-delay value, marked L3;
S24: setting super-delay coefficients Kc, c = 1, 2, …, 20, where K1 < K2 < … < K20;
each super-delay coefficient Kc corresponds to a preset super-delay value range, respectively (k1, k2], (k2, k3], …, (k20, k21], where k1 < k2 < … < k20 < k21;
when L3 belongs to (kc, kc+1], the super-delay coefficient of the corresponding preset range is Kc;
obtaining the influence value L4 corresponding to each super-delay value using the formula L4 = L3 × Kc; summing the influence values of all super-delay values to obtain the total super-delay influence value, marked L5;
obtaining the delay coefficient SH using the formula SH = L2×a4 + L5×a5, where a4 and a5 are coefficient factors;
S25: marking the code rate at which video collector m transmits video data to the primary selection terminal as Gm, where Gm corresponds one-to-one with Hm; averaging all code rates to obtain the average code rate, marked Gs;
S26: acquiring the device value of the primary selection terminal and marking it as DH;
S27: normalizing the access coefficient, the delay coefficient, the average code rate and the device value and taking their numerical values;
obtaining the transmission value CH of the primary selection terminal using the formula CH = (GF×b1 + Gs×b2 + DH×b3)/(SH×b4), where b1, b2, b3 and b4 are coefficient factors;
S28: selecting the primary selection terminal with the largest transmission value CH as the selected terminal; the selected terminal receives the multi-channel video data of the monitoring sub-region and splices and fuses them into panoramic video data.
Further, the device value of the primary selection terminal is calculated as follows:
V1: acquiring the real-time position of the primary selection terminal and calculating the distance between it and the monitoring sub-region to obtain the transmission distance, marked LG;
V2: setting the operation duration of the primary selection terminal as LN and its maintenance count as LW;
acquiring the throughput of the primary selection terminal over the thirty days before the current system time and averaging it to obtain the throughput average, marked R1;
V3: assigning each data processing terminal model a corresponding terminal value; matching the model of the primary selection terminal against the model list to obtain its terminal value, marked R2;
V4: setting the total number of splicing-and-fusion operations performed by the primary selection terminal as R3;
V5: obtaining the device value DR (the DH used in step S27) of the primary selection terminal using the formula DR = (R1×g1 + R2×g2 + R3×g3)/(LG×g4 + LN×g5 + LW×g6), where g1, g2, g3, g4, g5 and g6 are coefficient factors.
Further, in step four, monitoring-value analysis is performed on the check video to obtain its monitoring value, as follows:
S41: marking the marked duration of the check video as BT;
counting the pedestrian flow of the corresponding monitoring sub-region during the marked period of the check video and marking it as BR;
S42: obtaining the monitoring value GK of the check video using the formula GK = BT×d1 + BR×d2, where d1 and d2 are coefficient factors.
The beneficial effects of the invention are:
1. The method sets a plane coordinate system according to the plane where the monitoring area is located, uniformly divides the monitoring area into several monitoring sub-regions and classifies them; the video data collected by each sub-region are marked as regional video data, their attention information is obtained, and the attention value is calculated; if the attention value DC is less than or equal to the attention threshold, the corresponding monitoring sub-region does not need to be adjusted; if DC is greater than the attention threshold, the sub-region is adjusted and combined with others into a new monitoring sub-region. Merging adjacent sub-regions with high attention values avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improves video data acquisition efficiency.
2. The method collects the multi-channel video data of each monitoring sub-region and sends them to the corresponding data processing terminal to be spliced and fused into panoramic video data; the transmission value of each data processing terminal is calculated by combining its access coefficient, delay coefficient, average code rate and device value, and the primary selection terminal with the largest transmission value is chosen as the selected terminal.
3. The panoramic video data are analyzed and divided into check videos and common videos; monitoring-value analysis is performed on each check video; if the monitoring value GK is greater than or equal to the monitoring threshold, the corresponding check video is marked as an early-warning video and sent to the mobile phone terminal of a security worker, together with reminding information prompting the worker to browse and check it, providing early-warning and active-defense functions.
Drawings
To facilitate understanding by those skilled in the art, the invention is further described with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, a video data acquisition, processing and transmission method based on big data includes the following steps:
Step one: setting a plane coordinate system according to the plane where the monitoring area is located, uniformly dividing the monitoring area into several monitoring sub-regions and marking them i, where i = 1, …, n and i is a positive integer; classifying the monitoring sub-regions, judging whether they need to be adjusted and, if so, adjusting and combining several monitoring sub-regions into a new monitoring sub-region; this avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas; the specific steps are as follows:
S11: marking the video data collected by a monitoring sub-region as regional video data and obtaining the attention information of the regional video data; the attention information comprises the attention number, the attention frequency and the attention duration;
S12: marking the attention number of the regional video data as C1, the attention frequency as C2 and the attention duration as C3;
obtaining the attention value DC of the regional video data using the formula DC = C1×a1 + C2×a2 + C3×a3, where a1, a2 and a3 are coefficient factors; for example, a1 takes the value 0.58, a2 takes 0.69 and a3 takes 0.98;
S13: comparing the attention value DC with an attention threshold; if DC is less than or equal to the attention threshold, the corresponding monitoring sub-region does not need to be adjusted;
if DC is greater than the attention threshold, the corresponding monitoring sub-region needs to be adjusted; step S14 is then executed, and the monitoring sub-region is marked as a region to be adjusted;
S14: acquiring a monitoring sub-region adjacent to the region to be adjusted, executing step S12 for the adjacent monitoring sub-region, and obtaining the attention value of its regional video data; comparing that attention value with the attention threshold;
if the attention value is greater than the attention threshold, the two monitoring sub-regions are adjusted and combined into a new monitoring sub-region, which is marked as the region to be adjusted; the monitoring sub-regions adjacent to it are then examined in the same way;
if the attention value is less than or equal to the attention threshold, the two monitoring sub-regions are not combined, and the next monitoring sub-region adjacent to the region to be adjusted is examined in the same way;
according to the invention, adjacent monitoring sub-regions with high attention values are adjusted and combined into a new monitoring sub-region; this avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improves video data acquisition efficiency;
Step two: collecting the video data of the monitoring sub-region, in which several video collectors are arranged, so the video data are multi-channel video data; the multi-channel video data are sent to the corresponding data processing terminal to be spliced and fused into panoramic video data, which is convenient for security personnel to browse and check; the specific steps are as follows:
S21: marking the number of video collectors in the monitoring sub-region as L1 and the number of video collectors currently accessing a data processing terminal as G1; setting the maximum number of video collectors the data processing terminal can access as G2 and the minimum as G3;
calculating the difference G2 - G1 to obtain the maximum remaining capacity; marking every data processing terminal whose maximum remaining capacity is greater than L1 as a primary selection terminal;
S22: obtaining the access coefficient GF of the primary selection terminal using the formula GF = (G2 - G1)/(G1 - G3);
S23: marking the delay of transmitting video data from video collector m to the primary selection terminal as Hm, m = 1, …, L1;
comparing each delay Hm with a delay threshold; if Hm is greater than the delay threshold, the delay is marked as an influence delay; the number of influence delays is counted and marked as L2; the difference between each influence delay and the delay threshold is calculated to obtain a super-delay value, marked L3;
S24: setting super-delay coefficients Kc, c = 1, 2, …, 20, where K1 < K2 < … < K20;
each super-delay coefficient Kc corresponds to a preset super-delay value range, respectively (k1, k2], (k2, k3], …, (k20, k21], where k1 < k2 < … < k20 < k21;
when L3 belongs to (kc, kc+1], the super-delay coefficient of the corresponding preset range is Kc;
obtaining the influence value L4 corresponding to each super-delay value using the formula L4 = L3 × Kc; summing the influence values of all super-delay values to obtain the total super-delay influence value, marked L5;
obtaining the delay coefficient SH using the formula SH = L2×a4 + L5×a5, where a4 and a5 are coefficient factors; for example, a4 takes the value 1.01 and a5 takes 0.88;
S25: marking the code rate at which video collector m transmits video data to the primary selection terminal as Gm, where Gm corresponds one-to-one with Hm; averaging all code rates to obtain the average code rate, marked Gs;
S26: acquiring the device value of the primary selection terminal and marking it as DH;
S27: normalizing the access coefficient, the delay coefficient, the average code rate and the device value and taking their numerical values; obtaining the transmission value CH of the primary selection terminal using the formula CH = (GF×b1 + Gs×b2 + DH×b3)/(SH×b4), where b1, b2, b3 and b4 are coefficient factors; for example, b1 takes the value 0.58, b2 takes 0.71, b3 takes 0.28 and b4 takes 1.21;
S28: selecting the primary selection terminal with the largest transmission value CH as the selected terminal; the selected terminal receives the multi-channel video data of the monitoring sub-region and splices and fuses them into panoramic video data;
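The terminal-selection logic of S21–S28 can be sketched as follows, using the example coefficient factors above. The delay threshold, the concrete super-delay coefficient values K1…K20 (modelled here as 0.1·c) and the candidate terminal records are assumptions for illustration only.

```python
# Sketch of S21-S28: score candidate data processing terminals by their
# transmission value CH and pick the largest. Thresholds, the super-delay
# coefficient table and all candidate records are illustrative assumptions.

B1, B2, B3, B4 = 0.58, 0.71, 0.28, 1.21  # example factors from the text
A4, A5 = 1.01, 0.88                      # example factors for SH
DELAY_THRESHOLD = 100.0                  # assumed delay threshold (ms)

def super_delay_coefficient(l3, cuts):
    """cuts = [k1, ..., k21] ascending; Kc modelled as 0.1*c (assumption)."""
    for c in range(len(cuts) - 1):
        if cuts[c] < l3 <= cuts[c + 1]:
            return 0.1 * (c + 1)
    return 0.1 * len(cuts)  # L3 above the top preset range

def transmission_value(g1, g2, g3, delays, rates, dh, cuts):
    gf = (g2 - g1) / (g1 - g3)            # access coefficient GF (S22)
    over = [h - DELAY_THRESHOLD for h in delays if h > DELAY_THRESHOLD]
    l2 = len(over)                        # number of influence delays (S23)
    l5 = sum(l3 * super_delay_coefficient(l3, cuts) for l3 in over)
    sh = (l2 * A4 + l5 * A5) or 1e-9      # delay coefficient SH (S24); guard
                                          # against zero when no delays exceed
    gs = sum(rates) / len(rates)          # average code rate Gs (S25)
    return (gf * B1 + gs * B2 + dh * B3) / (sh * B4)  # CH (S27)

def select_terminal(candidates):
    """candidates: {terminal name: keyword args}; returns the name with max CH."""
    return max(candidates, key=lambda n: transmission_value(**candidates[n]))
```

A terminal with few influence delays and a high average code rate beats a heavily delayed one, mirroring the choice in S28.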
Step three: analyzing the panoramic video data; when a person is detected in the corresponding monitoring sub-region, a mark-start instruction is generated, and when the monitoring sub-region is detected to be empty again, a mark-stop instruction is generated; the controller starts marking the panoramic video data on receiving the mark-start instruction and stops marking on receiving the mark-stop instruction; the panoramic video data between the mark-start and mark-stop instructions are marked as a check video, and the unmarked panoramic video data are marked as common video;
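The marking logic of step three amounts to segmenting the panoramic stream by person presence. A minimal sketch, assuming a per-frame boolean person flag stands in for a real person detector:

```python
# Sketch of step three: split panoramic video into "check video" segments
# (frames where a person is present) and "common video" (the rest).
# The per-frame person flags replace a real detector (assumption).

def segment_video(person_flags):
    """person_flags: list of bools, one per frame.
    Returns (check_segments, common_segments) as (start, end) frame ranges,
    end exclusive; a mark-start fires on the first occupied frame and a
    mark-stop on the first empty frame after it."""
    check, common = [], []
    start, marking = 0, False
    for i, occupied in enumerate(person_flags):
        if occupied != marking:               # state change: emit a segment
            (common if not marking else check).append((start, i))
            start, marking = i, occupied
    (check if marking else common).append((start, len(person_flags)))
    # drop empty zero-length segments
    return ([s for s in check if s[0] < s[1]],
            [s for s in common if s[0] < s[1]])
```

Each (start, end) pair is a half-open frame range; a run of contiguous occupied frames becomes one check video.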
Step four: performing monitoring-value analysis on the check video to obtain its monitoring value; the specific steps are as follows:
S41: marking the marked duration of the check video as BT; counting the pedestrian flow of the corresponding monitoring sub-region during the marked period and marking it as BR;
S42: obtaining the monitoring value GK of the check video using the formula GK = BT×d1 + BR×d2, where d1 and d2 are coefficient factors; for example, d1 takes the value 0.88 and d2 takes 0.79;
comparing the monitoring value GK with a monitoring threshold;
if GK is greater than or equal to the monitoring threshold, marking the corresponding check video as an early-warning video and sending it to the mobile phone terminal of a security worker, together with reminding information prompting the worker to browse and check the early-warning video;
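The monitoring-value test in S41–S42 is a two-term weighted sum followed by a threshold comparison. A sketch with the example coefficient factors d1 and d2 from the text; the monitoring threshold itself is an assumed value:

```python
# Sketch of step four: GK = BT*d1 + BR*d2 for a check video, compared
# against a monitoring threshold to flag early-warning videos.
# The threshold below is an assumed example value, not from the patent.

D1, D2 = 0.88, 0.79          # example coefficient factors from the text
MONITORING_THRESHOLD = 40.0  # assumed monitoring threshold

def monitoring_value(bt, br):
    """bt: marked duration of the check video; br: pedestrian flow during it."""
    return bt * D1 + br * D2

def is_early_warning(bt, br):
    """True when the check video should be marked as an early-warning video."""
    return monitoring_value(bt, br) >= MONITORING_THRESHOLD
```

A long marked duration or heavy pedestrian flow pushes GK over the threshold and triggers the early-warning path.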
Step five: transmitting the check video and the common video to a cloud platform for storage.
The device value of the primary selection terminal is calculated as follows:
V1: acquiring the real-time position of the primary selection terminal and calculating the distance between it and the monitoring sub-region to obtain the transmission distance, marked LG;
V2: setting the operation duration of the primary selection terminal as LN and its maintenance count as LW;
acquiring the throughput of the primary selection terminal over the thirty days before the current system time and averaging it to obtain the throughput average, marked R1;
V3: assigning each data processing terminal model a corresponding terminal value; matching the model of the primary selection terminal against the model list to obtain its terminal value, marked R2;
V4: setting the total number of splicing-and-fusion operations performed by the primary selection terminal as R3;
V5: obtaining the device value DR (the DH used in step S27) of the primary selection terminal using the formula DR = (R1×g1 + R2×g2 + R3×g3)/(LG×g4 + LN×g5 + LW×g6), where g1, g2, g3, g4, g5 and g6 are coefficient factors; for example, g1 takes the value 0.66, g2 takes 0.77, g3 takes 1.33, g4 takes 0.28, g5 takes 0.79 and g6 takes 0.99.
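The device-value calculation V1–V5 can be sketched directly from the formula, using the example g-factors above. The terminal-model value table and all input figures are hypothetical:

```python
# Sketch of V1-V5: DR = (R1*g1 + R2*g2 + R3*g3) / (LG*g4 + LN*g5 + LW*g6).
# The model-value table and the example inputs are illustrative assumptions.

G_FACTORS = (0.66, 0.77, 1.33, 0.28, 0.79, 0.99)  # example g1..g6 from the text

MODEL_VALUES = {"model-A": 2.0, "model-B": 1.0}   # assumed terminal values (V3)

def device_value(throughputs, model, r3, lg, ln, lw):
    """throughputs: last-30-day throughput samples (V2); model: terminal
    model key (V3); r3: total splice-fuse count (V4); lg: transmission
    distance (V1); ln: operation duration; lw: maintenance count (V2)."""
    g1, g2, g3, g4, g5, g6 = G_FACTORS
    r1 = sum(throughputs) / len(throughputs)      # throughput average R1 (V2)
    r2 = MODEL_VALUES[model]                      # terminal value R2 (V3)
    return (r1 * g1 + r2 * g2 + r3 * g3) / (lg * g4 + ln * g5 + lw * g6)
```

A terminal that is close, recently deployed and rarely maintained (small denominator) with high throughput and splice count (large numerator) scores highest.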
The working principle of the invention is as follows:
In operation, a plane coordinate system is set according to the plane where the monitoring area is located, the monitoring area is uniformly divided into several monitoring sub-regions, and the sub-regions are classified; the video data collected by each monitoring sub-region are marked as regional video data, their attention information is obtained, and the attention value is calculated; if the attention value DC is less than or equal to the attention threshold, the corresponding monitoring sub-region does not need to be adjusted; if DC is greater than the attention threshold, the corresponding monitoring sub-region needs to be adjusted, and several monitoring sub-regions are adjusted and combined into a new monitoring sub-region; merging adjacent sub-regions with high attention values avoids the management difficulty and low troubleshooting efficiency caused by scattered video monitoring areas and improves video data acquisition efficiency;
collecting video data corresponding to the monitoring sub-area, wherein the video data are multi-channel video data, and sending the multi-channel video data to a corresponding data processing terminal for splicing and fusing to form panoramic video data; the method and the system have the advantages that the transmission value of the data processing terminal is obtained through calculation by combining the access coefficient, the delay coefficient, the average code rate and the equipment value, and the primary selection terminal with the maximum transmission value is selected as the selected terminal;
analyzing panoramic video data; dividing panoramic video data into a check video and a common video; analyzing a monitoring value of the check video to obtain the monitoring value of the check video; if the monitoring value GK is larger than or equal to the monitoring threshold value, marking the corresponding verification video as an early warning video; sending the early warning video to a mobile phone terminal of a security worker; meanwhile, sending reminding information for reminding the security personnel to browse and check the early warning video to a mobile phone terminal of the security personnel; the early warning and active defense functions are achieved.
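The terminal-selection workflow summarized above (remaining-capacity filter, access coefficient GF = (G2 − G1)/(G1 − G3), transmission value CH = (GF × b1 + Gs × b2 + DH × b3)/(SH × b4), then pick the maximum) can be sketched as follows. All terminal figures and the coefficient factors b1 to b4 are invented, and the average code rate Gs, device value DH and delay coefficient SH are taken as already computed and normalized.

```python
# Hedged sketch of the terminal-selection workflow (cf. steps S21-S28):
# keep terminals whose remaining capacity exceeds the collector count L1,
# score each with the transmission value CH, and pick the largest.
# Coefficients b1..b4 and all terminal figures are invented.

B = (0.4, 0.3, 0.3, 1.0)  # b1..b4 (assumed values)

def access_coefficient(G1, G2, G3):
    """GF = (G2 - G1) / (G1 - G3); assumes G1 > G3."""
    return (G2 - G1) / (G1 - G3)

def transmission_value(GF, Gs, DH, SH, b=B):
    """CH = (GF*b1 + Gs*b2 + DH*b3) / (SH*b4)."""
    b1, b2, b3, b4 = b
    return (GF * b1 + Gs * b2 + DH * b3) / (SH * b4)

terminals = {
    # id: (G1 current load, G2 max, G3 min, Gs mean rate, DH device, SH delay)
    "T1": (10, 32, 2, 0.6, 0.7, 0.5),
    "T2": (28, 32, 2, 0.9, 0.4, 0.3),
    "T3": (12, 32, 2, 0.9, 0.8, 0.4),
}
L1 = 8  # number of video collectors in the monitoring sub-area

# S21: initially selected terminals = enough remaining capacity for L1 feeds.
primary = {t: v for t, v in terminals.items() if v[1] - v[0] > L1}
# S22, S27: score each candidate; S28: select the maximum.
scores = {t: transmission_value(access_coefficient(v[0], v[1], v[2]),
                                v[3], v[4], v[5])
          for t, v in primary.items()}
selected = max(scores, key=scores.get)
print(selected)
```

Here T2 is filtered out at the capacity stage (only 4 slots free), and T3 wins on the combined score despite T1's larger access coefficient.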
Each formula and its coefficient factors are obtained by collecting a large amount of data for software simulation and by parameter setting performed by corresponding experts, yielding formulas and coefficient factors consistent with real results.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (4)

1. A video data acquisition processing and transmission method based on big data is characterized by comprising the following steps:
the method comprises the following steps: setting a plane coordinate system according to the plane where the monitoring area is located, uniformly dividing the monitoring area into a plurality of monitoring sub-areas, and marking the monitoring sub-areas as i, wherein i is a positive integer, i = 1, …, n; classifying the monitoring sub-areas, judging whether a monitoring sub-area needs to be adjusted, and if so, adjusting and combining a plurality of monitoring sub-areas to form a new monitoring sub-area;
step two: collecting video data corresponding to the monitoring sub-area, wherein a plurality of video collectors are arranged in the monitoring sub-area, the video data are multi-channel video data, and the multi-channel video data are sent to the corresponding data processing terminal for splicing and fusion to form panoramic video data; the specific steps are as follows:
s21: marking the number of video collectors in the monitoring sub-area as L1; marking the number of video collectors currently accessed by the data processing terminal as G1; setting the maximum number of video collectors the data processing terminal can access as G2 and the minimum as G3;
calculating the difference between the maximum capacity G2 and G1 to obtain the maximum residual capacity; marking each data processing terminal whose maximum residual capacity is larger than L1 as an initially selected terminal;
s22: obtaining an access coefficient GF of the initially selected terminal by using a formula GF = (G2-G1)/(G1-G3);
s23: marking the delay of transmitting video data from the video collector to the initially selected terminal as Hm, m = 1, …, L1; comparing the delay Hm with a delay threshold; if Hm is larger than the delay threshold, the delay is marked as an impact delay; counting the number of occurrences of impact delays and marking it as L2; calculating the difference between each impact delay and the delay threshold to obtain a super-delay value, marked L3;
s24: setting super-delay coefficients Kc, c = 1, 2, …, 20, wherein K1 > K2 > … > K20; each super-delay coefficient Kc corresponds to a preset super-delay value range, respectively (k1, k2], (k2, k3], …, (k20, k21], wherein k1 < k2 < … < k20 < k21;
when L3 belongs to (kc, kc+1], the super-delay coefficient corresponding to that preset super-delay value range is Kc;
obtaining the influence value L4 corresponding to each super-delay value by using the formula L4 = L3 × Kc, summing all the influence values to obtain the total super-delay influence value, marked L5; obtaining the delay coefficient SH by using the formula SH = L2 × a4 + L5 × a5, wherein a4 and a5 are coefficient factors;
s25: marking the code rate at which the video collector transmits video data to the initially selected terminal as Gm, wherein Gm corresponds one-to-one with Hm; summing all code rates and taking the average to obtain the average code rate, marked Gs;
s26: acquiring a device value of the initially selected terminal and marking the device value as DH;
s27: normalizing the access coefficient, the delay coefficient, the average code rate and the device value and taking their numerical values; obtaining the transmission value CH of the initially selected terminal by using the formula CH = (GF × b1 + Gs × b2 + DH × b3)/(SH × b4), wherein b1, b2, b3 and b4 are coefficient factors;
s28: selecting a primary selection terminal with the maximum transmission value CH as a selection terminal, wherein the selection terminal is used for receiving the multi-channel video data corresponding to the monitoring sub-regions and splicing and fusing the multi-channel video data to form panoramic video data;
the method for calculating the equipment value of the initially selected terminal comprises the following steps:
v1: acquiring the real-time position of the initially selected terminal, and calculating the distance between it and the position of the monitoring sub-region to obtain the transmission distance, marked LG;
v2: setting the operation age of the primary selection terminal as LN; setting the maintenance frequency of the primary selection terminal as LW;
acquiring the throughput of the initially selected terminal over the thirty days before the current system time, summing and averaging to obtain a throughput mean value, marked R1;
v3: setting all the models of the data processing terminals to correspond to a terminal value, matching the model of the initially selected terminal with all the models of the data processing terminals to obtain a corresponding terminal value, and marking the terminal value as R2;
v4: setting the total splicing and fusing times of the initially selected terminal as R3;
v5: obtaining the device value DH of the initially selected terminal by using the formula DH = (R1 × g1 + R2 × g2 + R3 × g3)/(LG × g4 + LN × g5 + LW × g6), wherein g1, g2, g3, g4, g5 and g6 are coefficient factors;
step three: analyzing the panoramic video data; when a person is detected in the corresponding monitoring sub-area, a marking start instruction is generated, and when the sub-area is again detected to be unoccupied, a marking stop instruction is generated; the controller starts marking the panoramic video data upon receiving the marking start instruction and stops upon receiving the marking stop instruction; the panoramic video data between the marking start instruction and the marking stop instruction is marked as check video; unmarked panoramic video data is marked as common video;
step four: performing monitoring value analysis on the check video to obtain its monitoring value; comparing the monitoring value GK with a monitoring threshold; if the monitoring value GK is greater than or equal to the monitoring threshold, marking the corresponding check video as an early warning video; sending the early warning video to the mobile phone terminal of the security personnel, together with reminding information prompting the security personnel to browse and check the early warning video;
step five: and transmitting the check video and the common video to a cloud platform for storage.
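As a non-authoritative illustration of steps S23 and S24 above, the sketch below uses an assumed delay threshold and a three-band stand-in for the twenty preset super-delay ranges; the band coefficients decrease as the claim requires (K1 > K2 > …), and the coefficient factors a4, a5 and all delay samples are invented.

```python
# Illustrative sketch of the delay coefficient SH (steps S23-S24).
# Assumptions: a 50 ms delay threshold and three (lo, hi, Kc] bands
# standing in for the twenty preset super-delay value ranges.

DELAY_THRESHOLD = 50  # ms, assumed
BANDS = [(0, 20, 1.5), (20, 40, 1.2), (40, float("inf"), 1.0)]  # (lo, hi, Kc)

def delay_coefficient(delays, a4=0.5, a5=0.3):
    """SH = L2*a4 + L5*a5, with L5 the sum of L4 = L3 * Kc over impacts."""
    # S23: excess over the threshold (super-delay value L3) per impact delay.
    impact = [h - DELAY_THRESHOLD for h in delays if h > DELAY_THRESHOLD]
    L2 = len(impact)          # number of impact delays
    L5 = 0.0                  # total super-delay influence value
    for L3 in impact:
        # S24: look up the band whose half-open range (lo, hi] contains L3.
        Kc = next(k for lo, hi, k in BANDS if lo < L3 <= hi)
        L5 += L3 * Kc         # influence value L4 = L3 * Kc
    return L2 * a4 + L5 * a5

print(delay_coefficient([30, 60, 95]))
```

With these assumed figures, only the 60 ms and 95 ms samples exceed the threshold (excesses 10 and 45), landing in the first and last bands respectively.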
2. The method for video data acquisition, processing and transmission based on big data according to claim 1, wherein the classification of the monitoring sub-area in step one is as follows:
s11: marking the video data collected by each monitoring sub-area as regional video data; obtaining attention information of the regional video data, the attention information comprising the attention number of people, the attention times and the attention duration;
s12: marking the attention number of people of the regional video data as C1; marking the attention times of the regional video data as C2 and the attention duration as C3;
obtaining the attention value DC of the regional video data by using the formula DC = C1 × a1 + C2 × a2 + C3 × a3, wherein a1, a2 and a3 are coefficient factors;
s13: comparing the attention value DC with an attention threshold value; if the attention value DC is less than or equal to the attention threshold value, the corresponding monitoring subarea does not need to be adjusted;
if the attention value DC is larger than the attention threshold value, the corresponding monitoring subarea needs to be adjusted; continuing to execute step S14; simultaneously marking the monitoring subarea needing to be adjusted as an area to be adjusted;
s14: acquiring a monitoring sub-region adjacent to the region to be adjusted, executing step S12 for the adjacent monitoring sub-region, and acquiring the attention value of its corresponding regional video data; comparing the attention value with the attention threshold;
if the attention value is larger than the attention threshold, adjusting and combining the two monitoring sub-regions to form a new monitoring sub-region, marking the new monitoring sub-region as a region to be adjusted, and continuing to acquire monitoring sub-regions adjacent to the region to be adjusted, and so on;
if the attention value is less than or equal to the attention threshold, the two monitoring sub-regions are not combined, and other monitoring sub-regions adjacent to the region to be adjusted continue to be acquired, and so on.
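A simplified sketch of steps S12 to S14: compute each sub-region's attention value DC, then flood-fill adjacent sub-regions whose DC exceeds the attention threshold into one merged region. The coefficient factors, threshold, adjacency graph and per-region figures are all invented for the demonstration.

```python
from collections import deque

A = (1.0, 0.5, 0.01)  # a1, a2, a3 (assumed coefficient factors)
THRESHOLD = 30        # attention threshold (assumed)

def attention_value(C1, C2, C3, a=A):
    """S12: DC = C1*a1 + C2*a2 + C3*a3."""
    a1, a2, a3 = a
    return C1 * a1 + C2 * a2 + C3 * a3

def merge_regions(attention, adjacency, threshold=THRESHOLD):
    """S13-S14: group adjacent sub-regions whose DC exceeds the threshold."""
    merged, seen = [], set()
    for r, dc in attention.items():
        if dc <= threshold or r in seen:   # S13: DC <= threshold -> no adjust
            continue
        group, queue = set(), deque([r])
        while queue:                       # BFS over high-attention neighbours
            cur = queue.popleft()
            if cur in seen:
                continue
            seen.add(cur)
            group.add(cur)
            queue.extend(nb for nb in adjacency[cur]
                         if attention[nb] > threshold and nb not in seen)
        merged.append(group)
    return merged

# Region -> (people count C1, attention times C2, duration C3 in seconds).
raw = {1: (12, 30, 600), 2: (10, 24, 1000), 3: (2, 4, 300), 4: (20, 40, 1000)}
attention = {r: attention_value(*v) for r, v in raw.items()}
adjacency = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(merge_regions(attention, adjacency))
```

Note the strict comparison: a sub-region whose DC exactly equals the threshold is left unadjusted, matching S13.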
3. The method for video data acquisition, processing and transmission based on big data according to claim 1, wherein in step two, the multiple paths of video data are sent to corresponding data processing terminals for splicing and fusion to form panoramic video data.
4. The method for acquiring, processing and transmitting video data based on big data according to claim 1, wherein monitoring value analysis is performed on the check video in step four to obtain the monitoring value of the check video; the specific steps are as follows:
s41: marking the marked duration of the check video as BT;
counting the flow of people in the corresponding monitoring sub-area during the marking of the check video and marking it as BR;
s42: and obtaining a monitoring value GK of the check video by using a formula GK = BT × d1+ BR × d2, wherein d1 and d2 are coefficient factors.
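Claim 4's monitoring-value formula is simple enough to show directly; the coefficient factors d1 and d2, the monitoring threshold, and the input figures are all invented here.

```python
# Sketch of S41-S42 plus the step-four threshold comparison.
# d1, d2, the threshold and the inputs are assumed, not from the patent.

MONITOR_THRESHOLD = 60  # assumed

def monitoring_value(BT, BR, d1=0.2, d2=0.8):
    """GK = BT*d1 + BR*d2 (BT: marked duration, BR: people flow)."""
    return BT * d1 + BR * d2

gk = monitoring_value(BT=120, BR=45)
label = "early warning" if gk >= MONITOR_THRESHOLD else "normal"
print(gk, label)
```

Since the comparison is GK >= threshold, a check video scoring exactly at the threshold is still escalated to an early warning video.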
CN202110461376.2A 2021-04-27 2021-04-27 Video data acquisition processing and transmission method based on big data Active CN113163173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110461376.2A CN113163173B (en) 2021-04-27 2021-04-27 Video data acquisition processing and transmission method based on big data


Publications (2)

Publication Number Publication Date
CN113163173A CN113163173A (en) 2021-07-23
CN113163173B 2022-09-23

Family

ID=76871521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110461376.2A Active CN113163173B (en) 2021-04-27 2021-04-27 Video data acquisition processing and transmission method based on big data

Country Status (1)

Country Link
CN (1) CN113163173B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117097572A (en) * 2023-10-19 2023-11-21 吉林省东启铭网络科技有限公司 Household Internet of things terminal and operation method thereof

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2011061594A (en) * 2009-09-11 2011-03-24 Brother Industries Ltd Terminal device, communication method and communication program
CN102244680A (en) * 2011-07-04 2011-11-16 东华大学 Generation method of panoramic video code stream based on body area sensing array
CN103577852A (en) * 2013-10-29 2014-02-12 电子科技大学 Graded monitoring method and system based on active RFID
CN106534787A (en) * 2016-11-16 2017-03-22 北京明泰朗繁精密设备有限公司 Video display system
CN112350904A (en) * 2020-10-27 2021-02-09 广州市网优优信息技术开发有限公司 Cloud monitoring system based on big data
CN112637551A (en) * 2020-11-18 2021-04-09 合肥市卓迩无人机科技服务有限责任公司 Panoramic data management software system for multi-path 4K quasi-real-time spliced videos
CN112672010A (en) * 2020-12-17 2021-04-16 珍岛信息技术(上海)股份有限公司 Video generation system based on face recognition

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
KR101333918B1 (en) * 2006-01-05 2013-11-27 엘지전자 주식회사 Point-to-multipoint service communication of mobile communication system
US9559767B2 (en) * 2011-12-19 2017-01-31 Gilat Satellite Networks Ltd. Adaptive fade mitigation
US9271245B2 (en) * 2012-10-29 2016-02-23 Lg Electronics Inc. Method for determining transmission power
CN106713852B (en) * 2016-12-08 2020-11-13 南京邮电大学 Multi-platform wireless vehicle-mounted monitoring system
CN111757064B (en) * 2020-06-30 2021-02-23 普瑞达建设有限公司 Intelligent high-definition monitoring control system and method
CN112489252A (en) * 2020-10-26 2021-03-12 马鞍山黑火信息科技有限公司 Real-time network engineering monitoring alarm system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220907

Address after: No. 201, Dongyue street, Taishan District, Taian City, Shandong Province 271000

Applicant after: TAIAN POWER SUPPLY COMPANY OF STATE GRID SHANDONG ELECTRIC POWER Co.

Address before: 311113 Hangzhou Dayu Network Technology Co.,Ltd., No. 998, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: Shen Qinbiao

GR01 Patent grant