CN117201733A - Real-time unmanned aerial vehicle monitoring and sharing system - Google Patents

Real-time unmanned aerial vehicle monitoring and sharing system

Info

Publication number
CN117201733A
CN117201733A (application CN202311066601.8A)
Authority
CN
China
Prior art keywords: transmission rate, network transmission, time sequence, time, unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311066601.8A
Other languages: Chinese (zh)
Other versions: CN117201733B (en)
Inventor
黄理
吴伟
马艺洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhonghui Tonghang Aviation Technology Co., Ltd.
Original Assignee
Hangzhou Zhonghui Tonghang Aviation Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhonghui Tonghang Aviation Technology Co., Ltd.
Priority to CN202311066601.8A
Publication of CN117201733A
Application granted
Publication of CN117201733B
Legal status: Active


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A real-time unmanned aerial vehicle monitoring and sharing system is disclosed. First, a camera mounted on the unmanned aerial vehicle captures surveillance video, which a wireless network module transmits to a server. The server then compresses, encodes, stores, and distributes the surveillance video, adjusting its quality and format. Users access the server through a browser or mobile device to watch the surveillance video in real time and to share and collaborate with other users. In this way, multiple users can watch video shot by the unmanned aerial vehicle in real time over the network while communicating and cooperating.

Description

Real-time unmanned aerial vehicle monitoring and sharing system
Technical Field
The application relates to the field of unmanned aerial vehicles, and in particular relates to a real-time unmanned aerial vehicle monitoring and sharing system.
Background
An unmanned aerial vehicle (UAV) is an aircraft with no crew on board, typically controlled by a remote controller, an automation system, or a preset flight plan. Unmanned aerial vehicles can carry various sensors, cameras, and other equipment to perform tasks such as aerial photography, monitoring, surveying, and rescue. As unmanned aerial vehicle technology has matured, drones have been applied ever more widely, and one prominent application is the monitoring system. With a camera-equipped unmanned aerial vehicle, a monitoring system gains flexible viewing angles, wide coverage, and high mobility, and it is used across many fields, such as security monitoring, disaster relief, and agricultural observation.
However, conventional unmanned aerial vehicle monitoring systems typically provide only video viewing and lack the ability to communicate and cooperate in real time. A user cannot exchange text, voice, or video directly with other users, and so cannot share information, make joint decisions, or collaborate. In addition, existing systems tend to have simple interfaces, weak interactivity, and limited extensibility, which degrades the user experience: a user may be unable to conveniently browse and switch between camera views, or to adjust the video quality and format to suit their needs.
Accordingly, a real-time unmanned aerial vehicle monitoring and sharing system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a real-time unmanned aerial vehicle monitoring and sharing system that enables multiple users to watch video shot by the unmanned aerial vehicle in real time over a network and to communicate and cooperate.
According to one aspect of the present application, there is provided a real-time unmanned aerial vehicle monitoring sharing system, comprising:
the video acquisition and transmission module is used for acquiring a monitoring video through a camera arranged on the unmanned aerial vehicle and transmitting the monitoring video to the server through the wireless network module;
the video processing module is used for compressing, encoding, storing and distributing the monitoring video through the server so as to change the quality and format of the monitoring video; and
the video sharing module, used for accessing the server through a browser or mobile device, watching the surveillance video in real time, and sharing and cooperating with other users.
Compared with the prior art, the real-time unmanned aerial vehicle monitoring and sharing system provided by the application first captures surveillance video with a camera mounted on the unmanned aerial vehicle and transmits it to a server through a wireless network module; the server then compresses, encodes, stores, and distributes the video to adjust its quality and format; finally, users access the server through a browser or mobile device to watch the video in real time and to share and collaborate with other users. In this way, multiple users can watch video shot by the unmanned aerial vehicle in real time over the network while communicating and cooperating.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings are not drawn to scale; emphasis is instead placed on illustrating the gist of the present application.
Fig. 1 is a block diagram of a real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the application.
Fig. 2 is a schematic block diagram of the video processing module in the real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the application.
Fig. 3 is a schematic block diagram of the network transmission rate timing feature extraction unit in the real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the application.
Fig. 4 is a schematic block diagram of the video compression quality control unit in the real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the application.
Fig. 5 is a flowchart of a real-time unmanned aerial vehicle monitoring sharing method according to an embodiment of the application.
Fig. 6 is a schematic diagram of a system architecture of a sub-step S120 of the real-time unmanned aerial vehicle monitoring sharing method according to an embodiment of the present application.
Fig. 7 is an application scenario diagram of a real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort also fall within the scope of the application.
As used in the specification and claims, the terms "a," "an," "the," and/or "said" do not denote the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
The present application uses flowcharts to describe the operations performed by a system according to its embodiments. It should be understood that these operations are not necessarily performed precisely in the order shown; rather, the steps may be processed in reverse order or simultaneously as needed, and other operations may be added to or removed from these processes.
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Specifically, the technical scheme of the present application provides a real-time unmanned aerial vehicle monitoring and sharing system; fig. 1 is a schematic block diagram of the system according to an embodiment of the present application. As shown in fig. 1, the real-time unmanned aerial vehicle monitoring and sharing system 100 according to an embodiment of the present application includes: a video acquisition and transmission module 110, configured to capture surveillance video with a camera mounted on the unmanned aerial vehicle and transmit it to a server through the wireless network module; a video processing module 120, configured to compress, encode, store, and distribute the surveillance video through the server so as to change its quality and format; and a video sharing module 130, configured to access the server through a browser or a mobile device, watch the surveillance video in real time, and share and cooperate with other users. The system enables multiple users to watch video shot by the unmanned aerial vehicle in real time over a network and to communicate and cooperate.
Accordingly, consider that in an unmanned aerial vehicle monitoring system, video must be transmitted over a network to user terminals for real-time viewing. If the network bandwidth is low and the transmission speed is limited, however, transmitting high-quality video can cause delay and stuttering that degrade the viewing experience.
In view of this technical problem, the technical idea of the present application is to adaptively adjust video compression quality based on network bandwidth. By dynamically adjusting compression quality according to current bandwidth conditions, the system can provide the highest video quality possible while keeping transmission smooth. When bandwidth is low, the system reduces the video compression quality, shrinking the data volume so that video still reaches the user terminal in time; when bandwidth is high, the system raises the compression quality to deliver clearer, finer pictures. A minimal sketch of this control loop follows.
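The paragraph above describes a feedback loop: sample the bandwidth, decide, adjust the encoder. The sketch below illustrates that loop only; every name in it (sample_bandwidth_mbps, decide_quality_action, the encoder's quality attribute, the window size and sampling period) is a hypothetical placeholder rather than anything specified by the patent.

import time
from collections import deque

WINDOW = 8  # number of recent bandwidth samples kept (assumed value)

def control_loop(encoder, sample_bandwidth_mbps, decide_quality_action):
    # Periodically sample bandwidth and nudge the encoder's quality setting.
    history = deque(maxlen=WINDOW)
    while True:
        history.append(sample_bandwidth_mbps())
        if len(history) == WINDOW:
            action = decide_quality_action(list(history))  # 'lower' | 'raise' | 'hold'
            if action == "lower":
                encoder.quality = max(encoder.quality - 1, 0)
            elif action == "raise":
                encoder.quality = min(encoder.quality + 1, 10)
        time.sleep(1.0)  # sampling period in seconds (assumed)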
Accordingly, as shown in fig. 2, the video processing module 120 includes: a network transmission bandwidth data acquisition unit 121, configured to acquire network transmission bandwidth values at a plurality of predetermined time points within a predetermined time period; a network transmission rate timing arrangement unit 122, configured to arrange the network transmission bandwidth values at the plurality of predetermined time points into a network transmission rate timing input vector according to a time dimension; an up-sampling unit 123, configured to up-sample the network transmission rate timing input vector by using a linear interpolation up-sampling module to obtain an up-sampled network transmission rate timing input vector; a network transmission rate timing characteristic extraction unit 124, configured to perform timing characteristic extraction on the up-sampled network transmission rate timing input vector to obtain a network transmission rate timing characteristic; and a video compression quality control unit 125 for determining to reduce video compression quality, improve video compression quality, or maintain video compression quality based on the network transmission rate timing characteristics.
Specifically, in the technical scheme of the present application, network transmission bandwidth values are first acquired at a plurality of predetermined time points within a predetermined time period. Because these bandwidth values change constantly along the time dimension, the values at the plurality of predetermined time points carry a dynamic, time-sequential correlation. Therefore, to better adapt video compression quality, the bandwidth values at the plurality of predetermined time points are first arranged into a network transmission rate time-sequence input vector along the time dimension, integrating the distribution of the bandwidth values over time.
Then, to better capture small changes of the network transmission bandwidth value along the time dimension within the predetermined period, the technical scheme passes the network transmission rate time-sequence input vector through a linear-interpolation up-sampling module to obtain an up-sampled network transmission rate time-sequence input vector. This increases the density and smoothness of the data, making it easier to represent the time-sequence variation of the network transmission bandwidth. By up-sampling with linear interpolation, additional data points are generated between the points of the original vector, raising the resolution in the time dimension so that the variation of the bandwidth over time becomes more visible. At the same time, linear interpolation smooths between sampling points, reducing the influence of noise and abrupt changes and improving the continuity and stability of the data. In short, the up-sampled vector provides more detailed and accurate bandwidth-variation information and richer data for the subsequent feature extraction and compression-quality regulation, for example as sketched below.
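As a concrete illustration of this up-sampling step, the sketch below uses PyTorch's linear interpolation on a one-dimensional vector of bandwidth samples; the up-sampling factor of 4 and the sample values are assumptions for demonstration, not values from the patent.

import torch
import torch.nn.functional as F

def upsample_rate_vector(rates: torch.Tensor, factor: int = 4) -> torch.Tensor:
    # rates: shape (T,), bandwidth values at T predetermined time points
    x = rates.view(1, 1, -1)  # (batch, channel, T) as required by interpolate
    up = F.interpolate(x, scale_factor=factor, mode="linear", align_corners=True)
    return up.view(-1)  # (T * factor,) denser, smoother sequence

# Example: 8 bandwidth samples (Mbps) -> 32 interpolated points
dense = upsample_rate_vector(torch.tensor([5.0, 5.2, 4.8, 3.9, 4.1, 4.6, 5.0, 5.3]))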
Next, when extracting the time-sequence variation features of the bandwidth values, the up-sampled network transmission rate time-sequence input vector is sliced into a sequence of network transmission rate local time-sequence input vectors, so that the local, fine-grained variation of the network transmission bandwidth within different time periods can be extracted more effectively later.
Further, the sequence of local time-sequence input vectors is passed through a time-sequence feature extractor based on a one-dimensional convolutional layer, which extracts the fine local variation of the network transmission rate within each local time period along the time dimension, yielding a sequence of network transmission rate local time-sequence feature vectors. This facilitates analyzing the trend of the network transmission rate and adaptively adjusting video compression quality; a sketch of such an extractor follows.
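A minimal sketch of the slicing and one-dimensional-convolution stages described above is given below; the window size, channel count, kernel size, and pooling are all assumed hyper-parameters, since the patent does not fix them.

import torch
import torch.nn as nn

class LocalTimingExtractor(nn.Module):
    # Slices the up-sampled rate vector into local windows, then applies a
    # one-dimensional convolution to each window to get one feature vector per window.
    def __init__(self, window: int = 8, dim: int = 32):
        super().__init__()
        self.window = window
        self.conv = nn.Sequential(
            nn.Conv1d(1, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse each window to a single vector
        )

    def forward(self, up: torch.Tensor) -> torch.Tensor:
        # up: (T,) up-sampled rate vector; non-overlapping windows (assumed)
        chunks = up.unfold(0, self.window, self.window)  # (N windows, window)
        feats = self.conv(chunks.unsqueeze(1))           # (N, dim, 1)
        return feats.squeeze(-1)                         # (N, dim) local feature vectors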
Next, note that the network transmission rate follows a dynamic, time-sequential law over the predetermined period: the local variation features of the individual time periods are correlated across the time sequence as a whole. Therefore, the sequence of local time-sequence feature vectors is further encoded by a network-transmission-rate pattern-feature context encoder based on a converter module, which extracts, for the local features of each time period, context-associated information across the global time sequence, producing a global time-sequence context network transmission rate feature vector. This global vector carries the pattern-level semantics of the overall variation of the network transmission rate, so it expresses the time-sequence associations and trends of the rate better and supports better adaptive adjustment of compression quality.
Accordingly, as shown in fig. 3, the network transmission rate timing feature extraction unit 124 includes: a vector slicing subunit 1241, configured to slice the up-sampled network transmission rate timing input vector into a sequence of network transmission rate local timing input vectors; a network transmission rate local timing change feature extraction subunit 1242, configured to extract features from that sequence with a timing feature extractor based on a deep neural network model, obtaining a sequence of network transmission rate local timing feature vectors; and a network transmission rate global timing association coding subunit 1243, configured to perform association coding on that sequence to obtain a global timing context network transmission rate feature vector as the network transmission rate timing feature. Vector slicing divides one vector into several sub-vectors: breaking a long vector into shorter ones lets each sub-vector focus on local timing variation, so finer-grained features can be extracted that describe the timing variation of the network transmission rate more accurately. The sequence of local timing input vectors produced by slicing is fed to the deep-neural-network timing feature extractor, which extracts the local timing-variation features of each sub-vector; finally, the association coding subunit encodes the local feature vectors into the global timing context network transmission rate feature vector, yielding a more comprehensive and accurate timing feature.
More specifically, in the network transmission rate local timing change feature extraction subunit 1242, the timing feature extractor based on the deep neural network model is a timing feature extractor based on a one-dimensional convolutional layer. The one-dimensional convolutional layer is a common layer type in deep neural networks for data with a temporal structure. Unlike the two-dimensional convolutional layer, it mainly processes one-dimensional sequence data such as time series and signal sequences: a convolution kernel (a one-dimensional filter) slides over the input sequence, multiplying its weights with the corresponding portion of the input and summing to produce each element of the output, thereby extracting local features. The kernel size and stride control the granularity of the extracted features and the length of the output sequence. Through this operation the layer automatically learns local features that represent patterns, trends, periodicity, and other important properties of the sequence; by adjusting the kernel size and stride it reduces the sequence dimension, the parameter count, and the computational complexity of the model while extracting higher-level abstract features. Because the kernel weights are shared across positions, the layer is translation-invariant: it detects a feature wherever it occurs in the sequence. In timing feature extraction it therefore captures local features of sequence data effectively and improves the model's understanding and expressive power over time-series data.
More specifically, the network transmission rate global timing association coding subunit 1243 is configured to pass the sequence of network transmission rate local timing feature vectors through a network-transmission-rate pattern-feature context encoder based on a converter module to obtain the global timing context network transmission rate feature vector. The converter module is a neural network module based on an attention mechanism that establishes associations and interactions across the global context of sequence data; here it serves as the pattern-feature context encoder that generates the global vector. At its heart is self-attention, which captures global context by computing the correlation between different positions in the input sequence: each element (a local timing feature vector) is compared with every other element to compute attention weights, and these weights drive a weighted summation that yields a contextual representation of each element. The advantage of the converter module is that it considers all positions in the sequence simultaneously, not just local neighbors, so the model better understands the relationships and dependencies between positions and better captures global timing context. Through the converter module, the sequence of local timing feature vectors is encoded into a global timing context network transmission rate feature vector that contains the context of the whole sequence and expresses the global timing characteristics of the network transmission rate, improving the model's ability to model and predict the rate's time-series data. A sketch follows.
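A minimal sketch of such a converter-based context encoder using PyTorch's stock Transformer encoder: the layer count, head count, feature dimension, and the mean-pooling used to collapse the encoded sequence into one global vector are all assumptions for illustration.

import torch
import torch.nn as nn

dim = 32  # feature dimension (assumed; must match the extractor above)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)

local_feats = torch.randn(1, 16, dim)  # (batch, N local vectors, dim)
ctx = encoder(local_feats)             # same shape, each vector now context-aware
global_vec = ctx.mean(dim=1)           # (1, dim) global timing context vector (pooling is assumed)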
Further, the global timing context network transmission rate feature vector is passed through a classifier to obtain a classification result indicating whether to reduce, improve, or maintain video compression quality. That is, the full time-sequence association of the network transmission rate is used for classification, so that video compression quality is adapted to the network bandwidth, balancing transmission speed against transmission quality and providing a better user experience: a user enjoys video quality suited to their own network environment and viewing requirements, with good viewing and sharing results in both low- and high-bandwidth environments.
Accordingly, as shown in fig. 4, the video compression quality control unit 125 includes: a feature gain subunit 1251, configured to apply a distributed gain based on a probability density feature imitation paradigm to the global timing context network transmission rate feature vector, obtaining a post-gain global timing context network transmission rate feature vector; and a video compression quality classification adjustment subunit 1252, configured to pass the post-gain vector through a classifier to obtain a classification result indicating whether to reduce, improve, or maintain video compression quality. The feature gain subunit improves compression-quality control by adjusting the distribution of the feature vector so that it better conforms to the desired feature distribution, improving the video compression effect. The classification adjustment subunit then uses the classifier's prediction on the gained vector to guide the direction of the quality adjustment, so that the parameters and strategies of video compression can be tuned to specific requirements. The two subunits thus play complementary roles: the gain subunit reshapes the feature distribution, and the classification subunit converts the gained features into a concrete adjustment decision, allowing the quality and effect of video compression to be controlled flexibly according to specific requirements and targets.
In particular, in the technical scheme of the application, when the sequence of network transmission rate local timing input vectors passes through the timing feature extractor based on the one-dimensional convolutional layer, each resulting local timing feature vector expresses the local timing association of the network transmission rate within its segmented local time domain. After these vectors pass through the converter-based pattern-feature context encoder, the obtained global timing context network transmission rate feature vector further expresses a segmented, short-range dual-context associated representation of the per-segment features under the global time domain. However, when expressing the local time-domain context within the global time domain, background distribution noise that interferes with the per-segment time-domain context feature distributions is also introduced, and the global vector carries hierarchical time-domain association features at both the local and global scales. It is therefore desirable to enhance its expression based on its own distribution characteristics. Accordingly, the applicant applies to the global timing context network transmission rate feature vector a distributed gain based on a probability density feature imitation paradigm.
Accordingly, in one specific example, the feature gain subunit 1251 is configured to: apply the distributed gain based on the probability density feature imitation paradigm to the global timing context network transmission rate feature vector with an optimization formula to obtain the post-gain global timing context network transmission rate feature vector.
(The optimization formula itself is reproduced only as an image in the original publication.) In the formula, V is the global timing context network transmission rate feature vector, v_i is the eigenvalue at the i-th position of V, L is the length of V, ||V||_2^2 denotes the square of the two-norm of V, α is a weighting hyper-parameter, exp(·) denotes the exponential operation, and v_i' is the eigenvalue at the i-th position of the post-gain global timing context network transmission rate feature vector.
Here, based on the way the standard Cauchy distribution imitates the natural Gaussian distribution in probability density, the distributed gain based on the probability density feature imitation paradigm can use the feature scale as an imitation mask to distinguish foreground object features from background distribution noise in the high-dimensional feature space. By performing a semantic-cognition distribution soft matching of the feature-space mapping on the high-dimensional space, based on the spatially hierarchical semantics of the high-dimensional features, an unconstrained distribution gain of the high-dimensional feature distribution is obtained. This improves the expression of the global timing context network transmission rate feature vector with respect to its feature distribution, and thus the accuracy of the classification result obtained from it through the classifier, improving how well the adaptive adjustment of video compression quality fits the network bandwidth value. In this way, video compression quality can be adapted to the available bandwidth, balancing video transmission speed against transmission quality and providing a better user experience: the user enjoys suitable video quality for their own network environment and viewing requirements, with good viewing and video-sharing results in both low- and high-bandwidth environments. An illustrative sketch of such a gain follows.
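The filed optimization formula appears only as an image in this publication, so its functional form cannot be reproduced here. The sketch below is therefore an illustrative gain assembled solely from the symbols the specification defines (v_i, L, ||V||_2^2, α, exp) — an assumption for demonstration, not the patented formula.

import torch

def distribution_gain(v: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    # v: global timing context feature vector; returns the post-gain vector.
    # The functional form below is ASSUMED for illustration; the real formula
    # is the image-only expression in the original filing.
    L = v.numel()
    norm_sq = torch.sum(v * v)  # ||V||_2^2
    return alpha * torch.exp(v) / (1.0 + norm_sq / L)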
Further, the video compression quality classification adjustment subunit 1252 includes: a full-connection coding secondary subunit, configured to perform full-connection coding on the post-gain global timing context network transmission rate feature vector using a fully connected layer of the classifier to obtain an encoded classification feature vector; and a classification secondary subunit, configured to input the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
That is, in the technical solution of the present disclosure, the labels of the classifier are reduce video compression quality (first label), improve video compression quality (second label), and maintain video compression quality (third label), and the classifier determines which label the post-gain global timing context network transmission rate feature vector belongs to through a Softmax function. It should be noted that the first label p1, the second label p2, and the third label p3 carry no humanly assigned meaning: during training, the computer model has no concept of "reducing, improving, or maintaining video compression quality"; there are simply three classification labels, and the probabilities of the output feature under these labels, p1, p2, and p3, sum to one. The classification result is therefore in essence a label-wise probability distribution conforming to natural law, and it is the physical meaning of that distribution that is used, not the linguistic meaning of "reduce, improve, or maintain video compression quality."
It should be appreciated that the role of the classifier is to learn classification rules from known, labeled training data and then classify (or predict) unknown data. Logistic regression, SVMs, and the like are commonly used for binary classification; they can also be applied to multi-class problems by composing multiple binary classifiers, but that is error-prone and inefficient, so the commonly used multi-class method is the Softmax classification function. A sketch of such a head follows.
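A minimal sketch of the fully connected + Softmax head with the three labels named above (reduce / improve / maintain); the feature dimension and the label order are assumptions.

import torch
import torch.nn as nn

class QualityHead(nn.Module):
    def __init__(self, dim: int = 32, n_classes: int = 3):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)  # full-connection coding layer

    def forward(self, global_vec: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.fc(global_vec), dim=-1)  # p1 + p2 + p3 = 1

probs = QualityHead()(torch.randn(1, 32))
action = ["reduce", "improve", "maintain"][probs.argmax(dim=-1).item()]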
In summary, the real-time unmanned aerial vehicle monitoring and sharing system 100 according to the embodiments of the present application has been illustrated. It enables multiple users to watch video shot by the unmanned aerial vehicle in real time over a network and to communicate and cooperate.
As described above, the real-time unmanned aerial vehicle monitoring and sharing system 100 according to the embodiment of the present application may be implemented in various terminal devices, for example, a server having the real-time unmanned aerial vehicle monitoring and sharing algorithm according to the embodiment of the present application. In one example, the real-time unmanned aerial vehicle monitoring sharing system 100 according to an embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the real-time unmanned aerial vehicle monitoring and sharing system 100 according to the embodiment of the present application may be a software module in the operating system of the terminal device, or may be an application program developed for the terminal device; of course, the real-time unmanned aerial vehicle monitoring and sharing system 100 according to the embodiment of the present application may also be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the real-time unmanned aerial vehicle monitoring and sharing system 100 and the terminal device according to the embodiment of the present application may be separate devices, and the real-time unmanned aerial vehicle monitoring and sharing system 100 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to the agreed data format.
Fig. 5 is a flowchart of a real-time unmanned aerial vehicle monitoring sharing method according to an embodiment of the application. As shown in fig. 5, a real-time unmanned aerial vehicle monitoring sharing method according to an embodiment of the present application includes: s110, collecting a monitoring video through a camera arranged on the unmanned aerial vehicle, and transmitting the monitoring video to a server through a wireless network module; s120, compressing, encoding, storing and distributing the monitoring video through the server to change the quality and format of the monitoring video; and S130, accessing the server through a browser or mobile equipment, watching the monitoring video in real time, and sharing and cooperating with other users.
Fig. 6 is a schematic diagram of the system architecture of sub-step S120 of the real-time unmanned aerial vehicle monitoring and sharing method according to an embodiment of the present application. As shown in fig. 6, in a specific example of the above method, compressing, encoding, storing, and distributing the surveillance video through the server to change its quality and format includes: acquiring network transmission bandwidth values at a plurality of predetermined time points within a predetermined time period; arranging those bandwidth values into a network transmission rate time-sequence input vector along the time dimension; passing that vector through a linear interpolation up-sampling module to obtain an up-sampled network transmission rate time-sequence input vector; extracting time-sequence features from the up-sampled vector to obtain the network transmission rate time-sequence feature; and determining, based on that feature, whether to reduce, improve, or maintain video compression quality. These steps might compose as sketched below.
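For orientation, the hypothetical pieces sketched earlier in this description compose into sub-step S120 roughly as follows; all names, values, and dimensions remain illustrative assumptions.

import torch

rates = torch.tensor([5.0, 5.2, 4.8, 3.9, 4.1, 4.6, 5.0, 5.3])  # sampled bandwidth values
dense = upsample_rate_vector(rates)            # linear-interpolation up-sampling
local = LocalTimingExtractor()(dense)          # per-window local timing features
ctx = encoder(local.unsqueeze(0)).mean(dim=1)  # global timing context vector
probs = QualityHead()(distribution_gain(ctx))  # gained vector -> three-way decision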
Here, those skilled in the art will appreciate that the specific operations of the steps in the above real-time unmanned aerial vehicle monitoring and sharing method have been described in detail in the description of the real-time unmanned aerial vehicle monitoring and sharing system 100 with reference to figs. 1 to 5, and repetitive description is therefore omitted.
Fig. 7 is an application scenario diagram of the real-time unmanned aerial vehicle monitoring and sharing system according to an embodiment of the present application. As shown in fig. 7, in this scenario, network transmission bandwidth values at a plurality of predetermined time points within a predetermined time period (for example, D illustrated in fig. 7) are first acquired and then input to a server (for example, S illustrated in fig. 7) on which the real-time unmanned aerial vehicle monitoring and sharing algorithm is deployed; the server processes those values with the algorithm to obtain a classification result representing reducing, improving, or maintaining video compression quality.
Further, in another embodiment of the present application, the real-time unmanned aerial vehicle monitoring and sharing system belongs to a real-time monitoring platform. The platform opens by default on a real-time monitoring interface, which shows the registration statistics, online count, and map information of all registered unmanned aerial vehicles in the unit or department of the logged-in account. Clicking a unit or department brings up its own real-time monitoring map along with its registration statistics and online count. When a device is online, its entry can be clicked to view its real-time video feed (the image is transmitted to the remote controller and then to the platform; clicking the map point opens the live picture). The main interface is an overall two-dimensional map for the account, with markers for online unmanned aerial vehicles; it shows the positions and details of all police unmanned aerial vehicles in flight, dynamically tracks their real-time flight positions on the map, and displays the returned video. For a gimbal mounted on the unmanned aerial vehicle, the web client controls the gimbal direction, zoom, and whether target tracking is enabled.
For example, the upper-left corner displays the total number of registered unmanned aerial vehicles and the number online, with the department each belongs to shown at the lower left. The lower-right corner holds function keys, including searching for a map location; viewing live weather, temperature, wind direction, and air humidity; and toggling information layers to display on the map. If an unmanned aerial vehicle is online, clicking the button beside its name on the map opens its live picture as a live video stream. A QR code can also be generated with one click: other devices can scan it to view the picture or video online at the same time and save it locally. Scanning the QR code in WeChat allows real-time viewing from any mobile phone, and clicking the link allows real-time viewing from any computer.
In summary, a real-time unmanned aerial vehicle monitoring and sharing system is provided through which multiple users can watch video shot by the unmanned aerial vehicle in real time over a network while communicating and cooperating. Its main functions include: unmanned aerial vehicle video acquisition and transmission — the unmanned aerial vehicle carries a camera and a wireless network module and transmits video shot in real time to a server over the network; server video processing and distribution — the server receives the video transmitted by the unmanned aerial vehicle, compresses, encodes, stores, and distributes it, and can provide different video qualities and formats according to user requirements; and user video viewing and interaction — users access the server through a browser or mobile device, watch the drone video in real time, and communicate with other users by text, voice, or video to share information and collaborate. Its main advantages include: real-time performance — users watch the video without delay or stutter; scalability — the system supports simultaneous access by multiple unmanned aerial vehicles and multiple users without degrading video quality or stability; and security — encryption and authentication ensure safe transmission and access and prevent data leakage and tampering.
This application uses specific words to describe its embodiments. References to "a first/second embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. It should therefore be emphasized that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments may be combined as appropriate.
Furthermore, those skilled in the art will appreciate that aspects of the application may be illustrated and described in any of a number of patentable categories, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the application may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software, which may generally be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the application may take the form of a computer product comprising computer-readable program code embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing describes the present application and is not to be construed as limiting it. Although a few exemplary embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the application, and all such modifications are intended to fall within the scope of the application as defined in the claims. It is to be understood that the application is not limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The application is defined by the claims and their equivalents.

Claims (8)

1. A real-time unmanned aerial vehicle monitoring and sharing system, characterized by comprising:
the video acquisition and transmission module is used for acquiring a monitoring video through a camera arranged on the unmanned aerial vehicle and transmitting the monitoring video to the server through the wireless network module;
the video processing module is used for compressing, encoding, storing and distributing the monitoring video through the server so as to change the quality and format of the monitoring video; and
the video sharing module, used for accessing the server through a browser or mobile device, watching the surveillance video in real time, and sharing and cooperating with other users.
2. The real-time unmanned aerial vehicle monitoring and sharing system of claim 1, wherein the video processing module comprises:
the network transmission bandwidth data acquisition unit is used for acquiring network transmission bandwidth values of a plurality of preset time points in a preset time period;
a network transmission rate time sequence arrangement unit, configured to arrange the network transmission bandwidth values of the plurality of predetermined time points into a network transmission rate time sequence input vector according to a time dimension;
the up-sampling unit is used for up-sampling the network transmission rate time sequence input vector through a linear interpolation up-sampling module to obtain an up-sampling network transmission rate time sequence input vector;
the network transmission rate time sequence feature extraction unit is used for extracting time sequence features of the up-sampling network transmission rate time sequence input vector to obtain network transmission rate time sequence features; and
the video compression quality control unit, used for determining, based on the network transmission rate time sequence features, whether to reduce, improve, or maintain video compression quality.
3. The real-time unmanned aerial vehicle monitoring and sharing system according to claim 2, wherein the network transmission rate timing feature extraction unit comprises:
vector segmentation subunit, configured to perform vector segmentation on the up-sampled network transmission rate timing input vector to obtain a sequence of network transmission rate local timing input vectors;
the network transmission rate local time sequence change feature extraction subunit is used for carrying out feature extraction on the sequence of the network transmission rate local time sequence input vector through a time sequence feature extractor based on a deep neural network model so as to obtain the sequence of the network transmission rate local time sequence feature vector; and
the network transmission rate global time sequence associated coding subunit, used for performing associated coding on the sequence of network transmission rate local time sequence feature vectors to obtain a global time sequence context network transmission rate feature vector as the network transmission rate time sequence feature.
4. The real-time unmanned aerial vehicle monitoring and sharing system of claim 3, wherein the deep neural network model-based timing feature extractor is a one-dimensional convolutional layer-based timing feature extractor.
5. The real-time unmanned aerial vehicle monitoring and sharing system of claim 4, wherein the network transmission rate global timing correlation encoding subunit is configured to:
and passing the sequence of the local time sequence characteristic vectors of the network transmission rate through a network transmission rate mode characteristic context encoder based on a converter module to obtain the global time sequence context network transmission rate characteristic vector.
6. The real-time unmanned aerial vehicle monitoring and sharing system of claim 5, wherein the video compression quality control unit comprises:
the feature gain subunit, used for applying a distributed gain based on a probability density feature imitation paradigm to the global time sequence context network transmission rate feature vector to obtain a post-gain global time sequence context network transmission rate feature vector; and
the video compression quality classification adjustment subunit, used for passing the post-gain global time sequence context network transmission rate feature vector through a classifier to obtain a classification result, the classification result representing reducing, improving, or maintaining video compression quality.
7. The real-time unmanned aerial vehicle monitoring and sharing system of claim 6, wherein the feature gain subunit is configured to:
apply the distributed gain based on the probability density feature imitation paradigm to the global time sequence context network transmission rate feature vector with an optimization formula to obtain the post-gain global time sequence context network transmission rate feature vector;
(The optimization formula itself is reproduced only as an image in the original publication.) In the formula, V is the global time sequence context network transmission rate feature vector, v_i is the eigenvalue at the i-th position of V, L is the length of V, ||V||_2^2 denotes the square of the two-norm of V, α is a weighting hyper-parameter, exp(·) denotes the exponential operation, and v_i' is the eigenvalue at the i-th position of the post-gain global time sequence context network transmission rate feature vector.
8. The real-time unmanned aerial vehicle monitoring and sharing system of claim 7, wherein the video compression quality classification adjustment subunit comprises:
the full-connection coding secondary subunit, used for performing full-connection coding on the post-gain global time sequence context network transmission rate feature vector using a fully connected layer of the classifier to obtain an encoded classification feature vector; and
the classification secondary subunit, used for inputting the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
CN202311066601.8A 2023-08-22 2023-08-22 Real-time unmanned aerial vehicle monitoring and sharing system Active CN117201733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311066601.8A CN117201733B (en) 2023-08-22 2023-08-22 Real-time unmanned aerial vehicle monitoring and sharing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311066601.8A CN117201733B (en) 2023-08-22 2023-08-22 Real-time unmanned aerial vehicle monitoring and sharing system

Publications (2)

Publication Number Publication Date
CN117201733A true CN117201733A (en) 2023-12-08
CN117201733B CN117201733B (en) 2024-03-12

Family

ID=88989764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311066601.8A Active CN117201733B (en) 2023-08-22 2023-08-22 Real-time unmanned aerial vehicle monitoring and sharing system

Country Status (1)

Country Link
CN (1) CN117201733B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101800697A (en) * 2010-01-27 2010-08-11 深圳市宇速科技有限公司 Method for real-time video transmission self-adapting to network bandwidth
CN108574621A (en) * 2017-03-10 2018-09-25 西安司坤电子科技有限公司 A method of it is interacted in video monitoring
CN107197213A (en) * 2017-07-10 2017-09-22 哈尔滨市舍科技有限公司 Scenic spot based on LAN monitoring unmanned System and method for
US20200053314A1 (en) * 2018-08-07 2020-02-13 Vega Systems, Inc. Dynamic rate adaptation across multiple video sources
CN109788254A (en) * 2019-01-30 2019-05-21 安徽睿极智能科技有限公司 A kind of the real-time high-definition video stream distributing method and its system of adaptive network
WO2020244066A1 (en) * 2019-06-04 2020-12-10 平安科技(深圳)有限公司 Text classification method, apparatus, device, and storage medium
CN116486524A (en) * 2023-05-24 2023-07-25 重庆赛力斯新能源汽车设计院有限公司 Alternating-current charging electronic lock control method based on scene recognition

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117419828A (en) * 2023-12-18 2024-01-19 南京品傲光电科技有限公司 New energy battery temperature monitoring method based on optical fiber sensor
CN117419828B (en) * 2023-12-18 2024-05-03 南京品傲光电科技有限公司 New energy battery temperature monitoring method based on optical fiber sensor

Also Published As

Publication number Publication date
CN117201733B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
WO2020221278A1 (en) Video classification method and model training method and apparatus thereof, and electronic device
Ma et al. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs
CN109104620B (en) Short video recommendation method and device and readable medium
CN109344884B (en) Media information classification method, method and device for training picture classification model
CN113936339B (en) Fighting identification method and device based on double-channel cross attention mechanism
US10565684B2 (en) Super-resolution method and system, server, user device and method therefor
CN109543714B (en) Data feature acquisition method and device, electronic equipment and storage medium
CN110633669B (en) Mobile terminal face attribute identification method based on deep learning in home environment
KR20190119548A (en) Method and apparatus for processing image noise
CN117201733B (en) Real-time unmanned aerial vehicle monitoring and sharing system
Guo et al. Distributed and efficient object detection via interactions among devices, edge, and cloud
KR102296274B1 (en) Method for providing object recognition with deep learning using fine tuning by user
KR20220044828A (en) Facial attribute recognition method, device, electronic device and storage medium
CN109902681B (en) User group relation determining method, device, equipment and storage medium
KR102333143B1 (en) System for providing people counting service
CN113052150B (en) Living body detection method, living body detection device, electronic apparatus, and computer-readable storage medium
CN113255625B (en) Video detection method and device, electronic equipment and storage medium
CN112052759B (en) Living body detection method and device
CN115511892A (en) Training method of semantic segmentation model, semantic segmentation method and device
CN116245086A (en) Text processing method, model training method and system
CN113570512A (en) Image data processing method, computer and readable storage medium
CN117095252A (en) Target detection method
Chen et al. A mobile cloud framework for deep learning and its application to smart car camera
CN111291597B (en) Crowd situation analysis method, device, equipment and system based on image
EP3683733A1 (en) A method, an apparatus and a computer program product for neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant