CN110087041B - Video data processing and transmitting method and system based on 5G base station


Info

Publication number
CN110087041B
CN110087041B (application CN201910364933.1A)
Authority
CN
China
Prior art keywords
video
data
video data
transmission
sensor
Prior art date
Legal status
Active
Application number
CN201910364933.1A
Other languages
Chinese (zh)
Other versions
CN110087041A (en)
Inventor
纪雯
许精策
陈益强
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201910364933.1A priority Critical patent/CN110087041B/en
Publication of CN110087041A publication Critical patent/CN110087041A/en
Application granted granted Critical
Publication of CN110087041B publication Critical patent/CN110087041B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention relates to a video data processing and transmitting method based on a 5G base station, which comprises the following steps: selecting an acquisition set, and accessing the video sensors in the acquisition set to a 5G base station serving as a fog node; acquiring, through the fog node, the video data collected by the sensors and compressing it into transmission data; and transmitting the transmission data to a cloud data center either directly or through other fog nodes in communication connection with the fog node.

Description

Video data processing and transmitting method and system based on 5G base station
Technical Field
The invention belongs to the technical field of Internet of things, and particularly relates to a video data processing and transmitting method and system adopting a fog computing technology.
Background
In recent years, with the rapid development of the Internet of Things, deep learning, artificial intelligence and related technologies, intelligent traffic monitoring has shown great practical value in social life. To better maintain urban traffic order, traffic departments install large numbers of monitoring devices on city roads and connect them to an urban traffic management center. Through the combined application of cloud computing, big data and artificial intelligence, some Chinese cities have already realized a degree of automated traffic management. For example, Alibaba's ET City Brain platform provides intelligent traffic management in Hangzhou.
The core of intelligent traffic monitoring is to use computer vision techniques to process and analyze the video captured by monitoring cameras, extract objects such as vehicles, pedestrians and roads, and judge the behavior of target objects on that basis, so that traffic violations can be identified automatically and traffic management can be automated. However, a large number of cameras access the urban traffic management center through the network, which places enormous pressure on existing cloud computing and network transmission equipment. Taking Beijing as an example, the city has about 300,000 traffic monitoring cameras, generating up to 30 PB of data per day, more than 80% of which is real-time data. Existing cloud-computing-based technology needs to upload all of this data to a cloud data center before performing centralized video processing and analysis, which puts considerable pressure on the network of the cloud data center. To address this problem, Alibaba has adopted a distributed cloud computing approach, establishing distributed cloud nodes in different districts of a city to reduce data transmission delay and accelerate computing response. However, this method requires changing the topology of the existing traffic monitoring video transmission network, and the distributed cloud nodes are costly to build and maintain.
With the development of communication technology, wireless communication has evolved from 4G to 5G. Besides a great improvement in data transmission rate, 5G base stations have stronger storage and computing capabilities than 4G base stations and can take over part of the video analysis and processing tasks. In a 5G communication network environment, the current cloud computing architecture will shift toward fog computing, making better use of the storage and computing capacity of 5G base stations. Unlike distributed cloud computing, using fog computing to transmit video over a 5G wireless network requires neither changing the network topology nor additional construction and maintenance costs. However, the industry currently has no concrete scheme for transmitting urban traffic monitoring video over a 5G network using fog computing.
Disclosure of Invention
Aiming at the problem of low real-time performance of traffic monitoring video transmission in the prior art, the invention deploys a fog computing video transmission architecture in the 5G base station to meet the requirements of real-time traffic monitoring data processing and analysis.
Specifically, the video data processing and transmitting method of the present invention comprises: selecting a video sensor to access a 5G base station, and taking the 5G base station as a fog node; screening the video data collected by the video sensor through the fog node, and compressing the screened video data into transmission data; and directly transmitting or routing the transmission data to the cloud data center.
The video data processing and transmission method of the invention specifically comprises: according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, obtaining a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i (the formula defining U_{i,j} appears as an image in the original patent and is not reproduced here); U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, where n_i denotes the number of video sensors currently accessing fog node B_i, and |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate; obtaining the total utility U of a sensor set C from the utility indices U_{i,j}; the sensor set with the maximum total utility U_max is the acquisition set C'; and accessing the video sensors in the acquisition set C' to the fog node.
In the video data processing and transmission method of the present invention, the process of compressing the video data specifically comprises: performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation; dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data, wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas for ξ_l^r, δ_k^c and the constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers; the video data V_trans is downsampled with the grid to form the compressed data, and the compressed data is combined to form the transmission data.
The invention relates to a video data processing and transmission method, wherein the process of transmitting the transmission data comprises the following steps: dividing the transmission data into a plurality of data blocks; each data block is directly transmitted to a cloud data center or transmitted to the cloud data center in a routing mode through at least one fog node transfer; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
The invention also provides a video data processing and transmission system based on a 5G base station, which comprises: a sensor access module, for selecting video sensors to access the 5G base station and taking the 5G base station as a fog node; a data processing module, for screening, through the fog node, the video data acquired by the video sensors and compressing the screened video data into transmission data; and a data transmission module, for transmitting the transmission data to the cloud data center directly or by routing.
In the video data processing and transmission system of the invention, the sensor access module includes: a sensor utility obtaining module, for obtaining, according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i (the formula defining U_{i,j} appears as an image in the original patent and is not reproduced here); U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, where n_i denotes the number of video sensors currently accessing 5G base station B_i, and |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate; and a sensor collection selection module, for obtaining the total utility U of a sensor set C from the utility indices U_{i,j}, the sensor set with the maximum total utility U_max being the acquisition set C', and accessing the video sensors in the acquisition set C' to the fog node.
In the video data processing and transmission system of the invention, the data processing module includes: a video data selection module, for performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation; a compressed data generation module, for dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data, wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas and constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers; and a transmission data generation module, for downsampling the video data V_trans with the grid to form compressed data, the compressed data being combined to form the transmission data.
The video data processing and transmission system of the invention, wherein the data transmission module includes: a transmission data blocking module for dividing the transmission data into a plurality of data blocks; the data block transmission module is used for directly transmitting each data block to the cloud data center or transmitting the data blocks to the cloud data center in a routing manner through at least one fog node transfer; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
The invention also provides a readable storage medium, which stores executable instructions for executing the video data processing and transmission method based on the 5G base station.
The invention also provides a fog computing video data acquisition system, which comprises: a video sensor for acquiring video data; a fog node, which is a 5G base station comprising a processor and the aforementioned readable storage medium, wherein the processor calls and executes executable instructions in the readable storage medium to obtain the video data, analyzes and processes the video data, compresses it into transmission data, and sends the transmission data to the cloud data center; and the cloud data center, which receives the transmission data and decompresses it to obtain the video data.
Drawings
Fig. 1 is a 5G base station traffic monitoring video transmission network architecture diagram of the present invention.
Fig. 2 is a schematic diagram of a video compression algorithm process based on video characteristics according to the present invention.
Fig. 3 is a schematic diagram of a 5G base station cooperative transmission process according to the present invention.
Fig. 4 is a schematic structural diagram of a 5G base station video data processing and transmission system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the video data processing and transmission method based on 5G base station fog computing proposed by the present invention is described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
The invention aims to solve the problem of low real-time performance of traffic monitoring video transmission by deploying a fog computing video transmission architecture in a 5G base station.
In order to achieve the above object, the present invention provides a low-delay video transmission method based on fog computing, comprising:
deploying a fog computing framework on a 5G base station so that the base station becomes a fog node, which maintains information on the video sensors (traffic cameras in the embodiment of the invention) that access it, receives and processes monitoring video data, and finally sends the data to a cloud data center;
the fog computing framework of the 5G base station comprises a sensor access module, a data processing module and a data transmission module;
the sensor access module collects characteristic information of the video sensors, including the code rate, resolution and location of the captured video, and dynamically manages the video sensor devices accessing the base station based on this information;
the data processing module analyzes and processes the video data transmitted by the video sensors, including extraction of video motion and content features and video compression, thereby reducing the amount of data uploaded to the cloud data center and taking over part of the computing tasks from the cloud data center;
the data transmission module is responsible for video data processing and transmission between the base station and the cloud data center and for cooperative transmission between base stations, ensuring the best possible real-time performance when transmitting the traffic monitoring video collected by the video sensors.
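As an orientation aid, the following Python sketch outlines how the three modules could be organized on a fog node. It is a minimal structural sketch only; the class and method names are assumptions and do not come from the patent, and the selection, compression and relay logic are left as placeholders that later sections describe in detail.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CameraInfo:
    """Characteristic information reported by a video sensor (camera)."""
    camera_id: int
    bitrate_mbps: float      # R_ij: code rate toward this base station
    delay_ms: float          # D_ij: measured transmission delay
    resolution: tuple        # (W, H)
    location: str

class SensorAccessModule:
    """Selects which cameras this 5G base station (fog node) manages."""
    def __init__(self, bandwidth_limit_mbps: float):
        self.bandwidth_limit_mbps = bandwidth_limit_mbps
        self.managed: List[CameraInfo] = []

    def admit(self, candidates: List[CameraInfo]) -> List[CameraInfo]:
        # Placeholder for the utility-based selection described in the patent;
        # here cameras are simply ordered by rate-to-delay ratio.
        self.managed = sorted(candidates,
                              key=lambda c: c.bitrate_mbps / c.delay_ms,
                              reverse=True)
        return self.managed

class DataProcessingModule:
    """Extracts features and compresses screened video into transmission data."""
    def process(self, raw_video: bytes) -> bytes:
        return raw_video  # feature extraction + compression would go here

class DataTransmissionModule:
    """Sends transmission data to the cloud, directly or via neighbouring fog nodes."""
    def send(self, payload: bytes) -> None:
        pass  # block splitting and relay selection would go here

@dataclass
class FogNode:
    access: SensorAccessModule
    processing: DataProcessingModule
    transmission: DataTransmissionModule

    def handle(self, raw_video: bytes) -> None:
        # End-to-end flow on the fog node: process, then transmit.
        self.transmission.send(self.processing.process(raw_video))
```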
At least one example of the invention provides a fog computing traffic monitoring video transmission system for a 5G base station. By implementing traffic camera access management, video analysis and processing, and cooperative transmission between base stations on the 5G base station, video data from a large number of traffic cameras can reach the cloud data center with low delay, further reducing the response time of traffic monitoring tasks. The embodiment of the invention is deployed on existing 5G base stations, requires no additional network equipment, does not change the topology of the traffic video access network, and therefore has high practical value.
Aiming at the problem of low real-time performance of traffic monitoring video transmission in the prior art, the invention deploys a fog computing video transmission architecture in the 5G base station to meet the requirements of real-time traffic monitoring data processing and analysis.
Specifically, the fog computing video transmission method of the present invention includes:
1) selecting an acquisition set, and accessing a video sensor in the acquisition set to a 5G base station serving as a fog node;
2) acquiring video data acquired by a sensor through a fog node, and compressing the video data into transmission data;
3) and transmitting the transmission data to the cloud data center directly or through other fog nodes in communication connection with the fog nodes.
The process of selecting the acquisition set specifically comprises: according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, obtaining a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i (the formula defining U_{i,j} appears as an image in the original patent and is not reproduced here); U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, where n_i denotes the number of video sensors currently accessing fog node B_i, and |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate; obtaining the total utility U of a sensor set C from the utility indices U_{i,j}; the sensor set with the maximum total utility U_max is the acquisition set C'; and accessing the video sensors in the acquisition set C' to the fog node.
The process of compressing the video data specifically includes: performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation; dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data, wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas for ξ_l^r, δ_k^c and the constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers; the video data V_trans is downsampled with the grid to form the compressed data, and the compressed data is combined to form the transmission data.
The process of transmitting the transmission data includes: dividing transmission data into a plurality of data blocks; each data block is directly transmitted or transmitted to a cloud data center through one or more fog nodes in a transfer mode; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
The invention also provides a video data processing and transmission system based on the 5G base station, which comprises the following components:
the sensor access module selects video sensors to access the 5G base station, and takes the 5G base station as a fog node of the video data processing and transmission system;
the data processing module is used for screening the video data acquired by the video sensor through the fog node and compressing the screened video data into transmission data;
and the data transmission module is used for directly transmitting the transmission data or transmitting the transmission data to the cloud data center through other fog node routes in communication connection with the fog nodes.
Wherein the sensor access module includes:
a sensor utility obtaining module, for obtaining, according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i (the formula defining U_{i,j} appears as an image in the original patent and is not reproduced here); U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, where n_i denotes the number of video sensors currently accessing 5G base station B_i, and |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate;
a sensor collection selection module, for obtaining the total utility U of a sensor set C from the utility indices U_{i,j}, the sensor set with the maximum total utility U_max being the acquisition set C', and accessing the video sensors in the acquisition set C' to the fog node.
The data processing module specifically comprises:
a video data selection module, for performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation;
a compressed data generation module, for dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data, wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas and constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers;
a transmission data generation module, for downsampling the video data V_trans with the grid to form compressed data, the compressed data being combined to form the transmission data.
The data transmission module includes:
a transmission data blocking module for dividing the transmission data into a plurality of data blocks;
the data block transmission module is used for directly transmitting each data block to a cloud data center or transmitting the data blocks to the cloud data center in a routing manner through at least one fog node transfer; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
The invention also provides a readable storage medium, which stores executable instructions for executing the video data processing and transmission method based on the 5G base station.
The invention also provides a fog computing video data acquisition system, which comprises: a video sensor for acquiring video data; a fog node, which is a 5G base station comprising a processor and a readable storage medium, wherein the processor calls and executes executable instructions in the readable storage medium to obtain the video data, analyzes and processes the video data, compresses it into transmission data, and sends the transmission data to a cloud data center; and the cloud data center, which receives the transmission data and decompresses it to obtain the video data.
To facilitate understanding of the invention, before describing the method in detail, a possible application scenario is first presented. Taking Beijing as an example, more than 300,000 traffic monitoring cameras are deployed in the city, all connected to the network in wired form and located mainly along city roads and at intersections. The video shot by the cameras is uploaded through the network to a traffic command center for storage. The videos shot by different cameras differ, mainly in the following aspects:
(1) The shooting coverage of the cameras differs. Because cameras are installed at different positions and target different violation behaviors, the range of road conditions that a single camera can capture also differs. For example, a monitoring camera at an intersection tends to cover a large area, while a one-way camera at the edge of a road covers a small one.
(2) The video parameters of different cameras differ. Since the cameras across the city are deployed in batches, their video parameter specifications differ, and the code rate, resolution and frame rate of the videos shot by different cameras vary.
Besides the above differences, traffic monitoring video is characterized by a large amount of redundant information. Most of a traffic monitoring frame is street or road background, and the vehicle and pedestrian information that is actually useful for monitoring traffic violations usually occupies only a small portion.
Based on these characteristics, the base station must first select which cameras to manage according to its remaining bandwidth, computing and storage resources, so as to minimize the total video transmission delay of the cameras that access the system; second, the traffic monitoring video is analyzed and processed in the base station, and the video features are extracted and compressed; finally, for cooperative transmission between base stations, each base station uploads the processed video to the cloud, relaying it through other base stations when necessary, so as to minimize the video upload delay.
To achieve the above objectives, the invention first needs to implement access management of the cameras. The base station first measures the delay and code rate of video transmission from each camera. Consider a 5G base station traffic monitoring video transmission network as shown in Fig. 1. Let B_i denote the 5G base station numbered i, which acts as a fog node of the transmission network, C_j the camera numbered j, R_{i,j} the code rate at which camera C_j transmits video to base station B_i, and D_{i,j} the delay of camera C_j transmitting video to base station B_i. D_{i,j} and R_{i,j} can be measured by the base station in real time. The total bandwidth available to a base station is limited, with the upper limit denoted R_max, so a single base station cannot manage all cameras. To make the base station's selection of access cameras more reasonable, the invention uses a utility function to measure the utility U_{i,j} of camera C_j accessing base station B_i; U_{i,j} is defined by equation (1), which appears as a formula image in the original patent and is not reproduced here.
In equation (1), n_i denotes the number of cameras currently accessing base station B_i. The camera access utility function defined by equation (1) can be interpreted as follows: the base station manages as many cameras as possible, and preferentially selects and manages cameras with a higher video code rate and lower transmission delay.
After the utility function for camera access is defined, the base station can selectively access and manage cameras according to the access utility. This process is expressed by equations (2) and (3), which appear as formula images in the original patent; based on the surrounding description, they maximize the total utility Σ_{j∈M'} U_{i,j} over the selected camera set M' subject to the bandwidth constraint that the total code rate Σ_{j∈M'} R_{i,j} does not exceed R_max, where the set M' denotes the set of camera numbers selected and managed by the base station. Since in a real network it is almost impossible for two different cameras to have equal R_{i,j}/D_{i,j}, the above optimization problem is a 0-1 knapsack optimization problem, which is NP-complete, i.e. no polynomial-time algorithm is expected; therefore a heuristic greedy algorithm is given to solve the model defined by equations (2) and (3):
step 1: computing base station Bi(ii) maximum capacity | M'maxAnd minimum capacity | M'. OccidentallyminThe calculation method is as follows:
step 11: initializing | M'. Liquidmax=0,|M'|min=0
Step 12: let C1=C,C2From camera set C ═ C1And C2Pick-up head C with maximum and minimum code ratemaxAnd Cmin
Step 13: | M'max=|M'|max+1,|M'|min=|M'|min+1, and CmaxAnd CminFrom C1、C2Middle deletion
Step 14: repeating the steps 11 to 13 until
Figure GDA0002682726200000101
And is
Figure GDA0002682726200000102
Step 15: at this time, | MmaxAndi M'minIs the value of base station BiMaximum and minimum capacity of
Step 2: take ni=|M'|min,|M'|min+1,...,|M'|maxExecuting the following steps:
step 21: according to U of each devicei,jThe value of (c) ranks the priority of device selection
Step 22: selecting the priority obtained from the previous step, and selecting n in the order of priority from high to lowiDevice set C 'taking devices as base stations'
Step 23: judging whether the sum of the code rates of all the devices in C' is greater than the code rate limit R of the base stationmaxIf not more than RmaxGo to step 25, otherwise go to step 24
Step 24: let Δ R ═ Σj∈C'Ri,j-RmaxFor each selected camera Ci,j∈C',
Figure GDA0002682726200000103
Step 25: calculating the total utility n of the base station after selecting the cameraij∈M'Ui,jAnd saving the value of the total utility and the currently selected camera set C'
Step 26: repeating steps 21-25 until ni=|M'|max
And step 3: and (3) selecting the equipment set with the maximum total utility from the camera set sequence obtained in the step (2), and then the base station brings the cameras in the set into management, so that the networking process is completed.
The above algorithm shows the working process and principle by which the base station performs access management of the cameras; it is only one specific example of the networking process. The networking described in this patent is not limited to the above algorithm: any technique that performs camera access management on a 5G base station using the relevant camera device information falls within the coverage of this patent.
To reduce the amount of video data transmitted between the base station and the traffic monitoring center, the invention uses the computing resources of the base station to extract and analyze video features, and then compresses the video to be transmitted according to the features in the monitoring video. The specific steps are as follows (see Fig. 2). Let V denote the video collected by a camera, with duration t_V and resolution W×H; the base station performs the following steps on the received video V:
Step 1: segment the video using a video segmentation algorithm; the segmentation algorithm may use DeepLab V3 with MobileNetV2 as the front end to reduce the computation of video segmentation and accelerate analysis;
Step 2: let F denote the video features produced by the video segmentation algorithm, where F_p = 1 if and only if a vehicle is present in the video at the p-th detection. If the video is segmented every t seconds, [t_V/t] features are obtained, and V is likewise divided into [t_V/t] segments. In this technique, the set of video segments V_trans transmitted from the object side is determined by:
V_trans = {V_p | F_p = 1}    (4)
Obviously |V_trans| ≤ |V|, so this technique reduces the amount of video data transmitted from the object side.
Step 3: after step 2, a feature map representing the types of objects contained in the video is generated. According to the traffic monitoring task, objects such as vehicles and pedestrians in the video can be set as characteristic objects. Let T denote the characteristic objects and Ω the feature matrix of the video; Ω is a W×H matrix, with Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 otherwise;
Step 4: according to the feature matrix Ω, carry out the following steps:
Step 41: divide the video into an M×N grid, and set the resolution of the compressed video to W'×H' according to the transmission capability of the object-side equipment;
Step 42: establish the loss function E_ARAP for video compression as equation (5) (the formula appears as an image in the original patent and is not reproduced here); in the loss function, ξ_l^r and δ_k^c respectively denote the row height of the l-th grid row and the column width of the k-th grid column;
Step 43: solve the following model using a sequential quadratic programming algorithm to obtain the size of each grid cell:
min E_ARAP    (6)
subject to constraints (7) and (8) on the grid row heights ξ_l^r and column widths δ_k^c (these constraints appear as formula images in the original patent and are not reproduced here);
Step 44: the original video is downsampled according to the size of each grid, so that the size of the video is reduced, the accuracy of a characteristic area is guaranteed not to be lost, and the data volume of the video is greatly reduced;
step 45: the size of the video grid
Figure GDA0002682726200000115
Figure GDA0002682726200000116
And transmitting the compressed video, and performing up-sampling on the video according to the grid size during decoding to recover the original video.
The above algorithm shows the process and principle by which the base station compresses the monitoring video collected by the cameras; any algorithm that adaptively compresses video resolution in a 5G base station using the video feature extraction method described here falls within the coverage of this patent.
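The following Python sketch illustrates the overall compression flow of Steps 1-45 in a simplified form: segments without the characteristic object are dropped, and each frame is downsampled on a non-uniform grid that gives feature-rich rows and columns a larger share of the output resolution. The patent's actual loss E_ARAP and its constraints are unreproduced formula images and are minimized with sequential quadratic programming; the proportional allocation used here is a stand-in assumption, and all names are illustrative.

```python
import numpy as np

def filter_segments(segments, flags):
    """Keep only segments V_p whose detection flag F_p is 1 (equation (4))."""
    return [seg for seg, f in zip(segments, flags) if f == 1]

def grid_sizes(feature_mask: np.ndarray, m: int, n: int, out_h: int, out_w: int):
    """Assign output rows/columns to the M x N grid in proportion to feature density.
    This replaces the patent's E_ARAP minimisation with a simple proportional rule.
    Assumes H and W are divisible by M and N for simplicity."""
    h, w = feature_mask.shape
    blocks = feature_mask.reshape(m, h // m, n, w // n)
    row_density = blocks.mean(axis=(1, 2, 3)) + 1e-3   # per grid row
    col_density = blocks.mean(axis=(0, 1, 3)) + 1e-3   # per grid column
    xi = np.maximum(1, np.round(out_h * row_density / row_density.sum())).astype(int)
    delta = np.maximum(1, np.round(out_w * col_density / col_density.sum())).astype(int)
    return xi, delta          # row heights and column widths of the compressed frame

def downsample(frame: np.ndarray, xi, delta):
    """Resample each grid cell of the frame to its allotted output size (nearest neighbour)."""
    h, w = frame.shape[:2]
    m, n = len(xi), len(delta)
    rows = []
    for l in range(m):
        src_r = frame[l * h // m:(l + 1) * h // m]
        cols = []
        for k in range(n):
            cell = src_r[:, k * w // n:(k + 1) * w // n]
            ri = np.linspace(0, cell.shape[0] - 1, xi[l]).astype(int)
            ci = np.linspace(0, cell.shape[1] - 1, delta[k]).astype(int)
            cols.append(cell[np.ix_(ri, ci)])
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)

# Example: a 240x320 frame with a "vehicle" region marked in the feature mask
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
mask = np.zeros((240, 320)); mask[80:160, 120:240] = 1
xi, delta = grid_sizes(mask, m=4, n=4, out_h=120, out_w=160)
print(downsample(frame, xi, delta).shape)   # compressed frame; transmit with xi, delta
```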
In order to ensure that the base stations transmit low-delay videos to the cloud, cooperative transmission needs to be performed between the base stations. The principle of the cooperative transmission is that the video sent by the camera is divided into blocks, and then the video of each block is directly transmitted to the cloud or transmitted to the cloud through the transfer of one or more base stations, so that the effects of increasing the throughput and reducing the transmission delay are achieved.
As shown in Fig. 3, in the cooperative transmission process the base station divides the received video into blocks, obtains the bandwidth and delay of the base stations connected to it, and selects nodes with high bandwidth and low delay for relaying.
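The following Python sketch illustrates the block-splitting and relay selection of Fig. 3. The scoring rule (bandwidth divided by delay) and the round-robin assignment of blocks to the ranked paths are assumptions for illustration; the patent only specifies that blocks are sent directly or relayed via fog nodes with high bandwidth and low delay.

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class Path:
    """A candidate route to the cloud: direct, or relayed through a neighbouring fog node."""
    via: str                 # "direct" or the relay node's identifier
    bandwidth_mbps: float
    delay_ms: float

def split_into_blocks(data: bytes, block_size: int) -> List[bytes]:
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def assign_blocks(blocks: List[bytes], paths: List[Path]) -> Dict[str, List[int]]:
    """Spread blocks over the highest-scoring paths (score = bandwidth / delay)."""
    ranked = sorted(paths, key=lambda p: p.bandwidth_mbps / p.delay_ms, reverse=True)
    plan: Dict[str, List[int]] = {p.via: [] for p in ranked}
    for idx in range(len(blocks)):
        plan[ranked[idx % len(ranked)].via].append(idx)   # round-robin over ranked paths
    return plan

# Example: one direct path and two neighbouring 5G base stations acting as relays
paths = [Path("direct", 50, 40), Path("B2", 80, 15), Path("B3", 60, 25)]
blocks = split_into_blocks(b"x" * 10_000, block_size=2_000)
print(assign_blocks(blocks, paths))   # which block indices go over which path
```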
The embodiment of the invention also provides a fog computing video data acquisition system. As shown in Fig. 4, the fog computing video data acquisition system of the invention includes at least one video sensor, at least one fog node, and a cloud data center located in the cloud. In the embodiment of the invention, the video sensor is a traffic camera or other video capture device, but the invention is not limited thereto. The video sensor collects video data and transmits it to the fog node communicatively connected to it; each video sensor is connected to only one fog node, and one fog node may be connected to one or more video sensors. The fog node comprises a processor and a readable storage medium, the readable storage medium stores executable instructions, and the executable instructions are executed by the processor of the fog node to realize the video data processing and transmission method based on the 5G base station. The fog nodes are connected to the cloud data center either directly or relayed through other fog nodes. After receiving the transmission data sent by a fog node, the cloud data center first decompresses it and then performs further processing. It will be understood by those skilled in the art that all or part of the steps of the above methods may be implemented by a program instructing associated hardware (e.g., a processor), and the program may be stored in a readable storage medium, such as a read-only memory, a magnetic disk or an optical disk. All or some of the steps of the above embodiments may also be implemented using one or more integrated circuits. Accordingly, the modules in the above embodiments may be implemented in hardware, for example by an integrated circuit, or in software, for example by a processor executing programs/instructions stored in a memory. Embodiments of the invention are not limited to any specific form of hardware or software combination.
Although the present invention has been described with reference to the above embodiments, it should be understood that the invention is not limited to the embodiments, and that various changes and modifications can be made by one skilled in the art without departing from the spirit and scope of the invention.

Claims (8)

1. A video data processing and transmission method based on a 5G base station is characterized by comprising the following steps:
selecting video sensors to access a 5G base station, and taking the 5G base station as a fog node; the process of selecting the video sensors specifically comprises: according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, obtaining a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i, wherein U_{i,j} is defined by a formula that appears as an image in the original patent and is not reproduced here; U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, subject to a bandwidth constraint that also appears as a formula image (per the description, the sum of the code rates R_{i,j} over M' must not exceed R_max); n_i denotes the number of video sensors currently accessing fog node B_i, |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate, and R_max is the upper limit of the total bandwidth available to the 5G base station; obtaining the total utility U of a sensor set C from the utility indices U_{i,j}, the sensor set with the maximum total utility U_max being the acquisition set C'; and accessing the video sensors in the acquisition set C' to the fog node;
screening the video data collected by the video sensor through the fog node, and compressing the screened video data into transmission data;
and directly transmitting or routing the transmission data to the cloud data center.
2. The method of claim 1, wherein compressing the video data comprises:
performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation;
dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data; wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas for ξ_l^r, δ_k^c and the constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers;
downsampling the video data V_trans with the grid to form compressed data, the compressed data being combined to form the transmission data.
3. The video data processing and transmission method of claim 1, wherein transmitting the transmission data comprises:
dividing the transmission data into a plurality of data blocks;
each data block is directly transmitted to the cloud data center or transmitted to the cloud data center in a routing mode through at least one fog node transit; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
4. A video data processing and transmission system based on a 5G base station is characterized by comprising:
the sensor access module, for selecting video sensors to access the 5G base station and taking the 5G base station as a fog node; the sensor access module specifically comprises: a sensor utility obtaining module, for obtaining, according to the delay D_{i,j} and the code rate R_{i,j} at which video sensor C_j transmits video data to fog node B_i, a utility index U_{i,j} representing the connection of video sensor C_j to fog node B_i, wherein U_{i,j} is defined by a formula that appears as an image in the original patent and is not reproduced here; U_max = max_{n_i} Σ_{j∈M'} U_{i,j}, with n_i = |M'|_min, |M'|_min+1, ..., |M'|_max, subject to a bandwidth constraint that also appears as a formula image (per the description, the sum of the code rates R_{i,j} over M' must not exceed R_max); n_i denotes the number of video sensors currently accessing 5G base station B_i, |M'|_max and |M'|_min are respectively the maximum and minimum capacities of video sensors that node B_i can accommodate, and R_max is the upper limit of the total bandwidth available to the 5G base station; and a sensor collection selection module, for obtaining the total utility U of a sensor set C from the utility indices U_{i,j}, the sensor set with the maximum total utility U_max being the acquisition set C', and accessing the video sensors in the acquisition set C' to the fog node;
the data processing module is used for screening the video data acquired by the video sensor through the fog node and compressing the screened video data into transmission data;
and the data transmission module is used for directly transmitting or routing the transmission data to the cloud data center.
5. The video data processing and transmission system of claim 4, wherein the data processing module comprises:
a video data selection module, for performing target segmentation on the video data V through a video segmentation algorithm to obtain the video data to be compressed V_trans = {V_p | F_p = 1}, wherein V_p is the video data segment in which the characteristic object T is detected for the p-th time, and F_p is the video feature of the video data V after target segmentation;
a compressed data generation module, for dividing the video data V_trans into an M×N grid and downsampling V_trans with the grid to form compressed data; wherein the row heights ξ_l^r and column widths δ_k^c of the grid cells are chosen to minimize E_ARAP subject to constraints on the grid sizes (the formulas and constraints appear as images in the original patent and are not reproduced here); E_ARAP is the compression loss function of the video data, given by a formula image; Ω is a W×H dimensional feature matrix of the video data V_trans, whose element Ω_{lk} = 1 if and only if V_{lk} ∈ T, and Ω_{lk} = 0 when V_{lk} ∉ T; W×H is the resolution of the video data V, W'×H' is the resolution of the compressed video data, and M, N, W, H, W' and H' are positive integers;
a transmission data generation module, for downsampling the video data V_trans with the grid to form compressed data, the compressed data being combined to form the transmission data.
6. The video data processing and transmission system of claim 4, wherein the data transmission module comprises:
a transmission data blocking module for dividing the transmission data into a plurality of data blocks;
the data block transmission module is used for directly transmitting each data block to the cloud data center or transmitting the data blocks to the cloud data center in a routing manner through at least one fog node transfer; and selecting the transferred fog nodes according to the transmission bandwidth and delay among the fog nodes.
7. A computer readable storage medium storing executable instructions for performing the 5G base station based video data processing and transmission method according to any one of claims 1 to 3.
8. A fog computing video data acquisition system, comprising:
the video sensor is in communication connection with the fog node and used for acquiring video data;
the fog node, which is a 5G base station, comprising a processor and the computer-readable storage medium according to claim 7, wherein the processor calls and executes executable instructions in the computer-readable storage medium to obtain the video data, analyzes and processes the video data, compresses the video data into transmission data, and sends the transmission data to a cloud data center;
and the cloud data center is used for receiving the transmission data and decompressing the transmission data to obtain the video data.
CN201910364933.1A 2019-04-30 2019-04-30 Video data processing and transmitting method and system based on 5G base station Active CN110087041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910364933.1A CN110087041B (en) 2019-04-30 2019-04-30 Video data processing and transmitting method and system based on 5G base station


Publications (2)

Publication Number Publication Date
CN110087041A CN110087041A (en) 2019-08-02
CN110087041B true CN110087041B (en) 2021-01-08

Family

ID=67418373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910364933.1A Active CN110087041B (en) 2019-04-30 2019-04-30 Video data processing and transmitting method and system based on 5G base station

Country Status (1)

Country Link
CN (1) CN110087041B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111565303B (en) * 2020-05-29 2021-12-14 广东省电子口岸管理有限公司 Video monitoring method, system and readable storage medium based on fog calculation and deep learning
CN112398158B (en) * 2020-10-27 2022-11-01 国网经济技术研究院有限公司 Distributed collection device and method for operation indexes of hybrid high-voltage direct-current power transmission system
CN113203439A (en) * 2021-05-07 2021-08-03 南京邮电大学 Master-slave dynamic edge sensor ad hoc network system for water information detection
CN115150404A (en) * 2022-06-09 2022-10-04 安徽天元通信发展有限公司 5G base station information analysis processing method and system based on integrated structure

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316279A (en) * 2008-07-09 2008-12-03 南京邮电大学 Subjective interest driven wireless multimedia sensor network design method
CN103024400B (en) * 2011-12-19 2015-04-29 北京捷成世纪科技股份有限公司 Video compression fault-tolerant transmission method and system based on network
CN107808122B (en) * 2017-09-30 2020-08-11 中国科学院长春光学精密机械与物理研究所 Target tracking method and device
CN107731011B (en) * 2017-10-27 2021-01-19 中国科学院深圳先进技术研究院 Port berthing monitoring method and system and electronic equipment
US10644961B2 (en) * 2018-01-12 2020-05-05 Intel Corporation Self-adjusting data processing system
CN108377264A (en) * 2018-02-05 2018-08-07 江苏大学 Vehicular ad hoc network quorum-sensing system data report De-weight method
CN109547541B (en) * 2018-11-12 2021-08-27 安徽师范大学 Node low-overhead cooperation method based on filtering and distribution mechanism in fog computing environment

Also Published As

Publication number Publication date
CN110087041A (en) 2019-08-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant