CN113747198A - Unmanned aerial vehicle cluster video picture rendering and post-processing method, medium and device - Google Patents
- Publication number
- CN113747198A (Application number CN202110714571.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a method, a medium and a device for rendering and post-processing unmanned aerial vehicle cluster video pictures, and relates to the technical field of image processing. It avoids the large time consumption caused by performing data decoding and information post-processing on a computer CPU (central processing unit), effectively improves rendering efficiency, and meets the real-time performance requirement. The method adopts a shader to perform rendering processing and information post-processing on video streams or pictures. For a video stream: the unmanned aerial vehicle downlink data packet is hardware-decoded to extract first shell data, the YUV compressed data in the first shell data is repackaged into second shell data for rendering, and the attribute of the second shell data is adjusted to a shader resource. For a picture: the downloaded picture is decompressed to obtain YUV data, and a shader performs the format conversion calculation. Finally, the shader resources, with their corresponding textures, enter a custom shader pipeline to perform the information post-processing flow. The technical scheme provided by the invention is suitable for image processing.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method, medium and device for rendering and post-processing video pictures of an unmanned aerial vehicle cluster.
Background
With the continuous improvement of data bandwidth brought by unmanned aerial vehicle clusters, the volume of video data transmitted to the ground station by the carried task load increases, and higher-quality video data can be provided to information processing software. Taking the H.264 data stream transmitted by the unmanned aerial vehicle load to the ground station equipment as an example: because high-definition video data is large, relying on the computer CPU for the decoding calculation is very time-consuming and cannot meet the real-time requirement of video data; it also occupies a large share of the processor's computing resources, which hinders the use of other unmanned aerial vehicle modules such as information processing.
Microsoft provides a complete set of specifications for video hardware decoding, which represents the current mainstream, general and efficient processing means for high-definition video data; the literature also documents video data decoding realized by means of OpenCL, but research on efficient rendering and post-processing of the decoded data is lacking. It can be seen that hardware-supported high-definition video decoding has become an established technical method, and efficient display rendering and post-processing technologies adapted to hard-decoded data urgently need to be realized.
The relatively common display rendering of hard-decoded high-definition video data relies directly on the fixed pipeline to display the video memory data after data format conversion. In the fixed pipeline, the rendering data in video memory cannot be acquired or post-processed, so the flexibility of data operation is lost, which is unfavorable for subsequent information processing of the rendering data.
Accordingly, there is a need to develop a method, medium, and apparatus for unmanned aerial vehicle cluster video picture rendering and post-processing to address the deficiencies of the prior art and to solve or mitigate one or more of the problems.
Disclosure of Invention
In view of the above, the present invention provides a method, medium, and apparatus for rendering and post-processing an unmanned aerial vehicle cluster video picture, which can avoid the problem of large time consumption caused by data decoding and information post-processing by using a computer CPU, effectively improve rendering processing efficiency, and meet the requirement of video data real-time performance.
In one aspect, the invention provides a method for rendering and post-processing video pictures of an unmanned aerial vehicle cluster, which is characterized in that a shader is adopted to perform rendering processing and information post-processing on video streams or pictures.
The foregoing aspects and any possible implementations further provide an implementation where the step of processing the video stream with the shader includes:
s1, carrying out hardware decoding on the data packet downloaded by the unmanned aerial vehicle, and extracting first shell data;
s2, packaging the YUV compressed data in the first shell data into rendering-purpose second shell data to realize data conversion;
s3, adjusting the attribute of the second shell data into shader resources;
s4, the shader resources corresponding to the textures enter the custom shader pipeline to perform the information post-processing flow.
The foregoing aspects and any possible implementations further provide an implementation where the step of processing the picture with the shader includes:
s1, decompressing the single picture obtained by downloading to obtain YUV data;
s2, adopting a shader to perform format conversion calculation on the YUV data;
s3, the shader resources corresponding to the textures enter the custom shader pipeline to perform the information post-processing flow.
As for the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner in which the specific content of step S1 includes: decoding by using DirectX hardware to obtain the shell data of the data packet.
The above-described aspects and any possible implementations further provide an implementation in which the information post-processing is either or both of pixel-by-pixel post-processing and multi-layer post-processing.
The foregoing aspect and any possible implementation manner further provide an implementation manner in which the multi-layer post-processing specifically includes: rendering the information of multiple layers in a parallel rendering mode; and creating a plurality of similar shader processing pipelines, one per layer, where each pipeline is responsible for the rendering of one piece of subdivided information, and the final rendering result is formed after aggregation.
As for the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, where the pixel-by-pixel post-processing implementation manner is: and connecting a plurality of pixel shaders in series for rendering, wherein the processing result of the previous pixel shader is used as the input data of the next pixel shader.
The above-described aspects and any possible implementations further provide an implementation in which the information post-processing flow includes defogging processing and tracking indication of a target of interest.
In another aspect, the present invention provides a computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of drone cluster video picture rendering and post-processing as described in any one of the above.
In another aspect, the present invention provides an apparatus for unmanned aerial vehicle cluster video picture rendering and post-processing, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein: the processor, when executing the computer program, performs the steps of any of the methods described above.
Compared with the prior art, one of the technical schemes has the following advantages or beneficial effects: the problem of large time consumption in the calculation process caused by data decoding and information post-processing of a computer CPU is avoided, and the requirement on the real-time performance of video data can be met;
another technical scheme in the above technical schemes has the following advantage or beneficial effect: a large share of the processor's computing resources does not need to be occupied, which favors the use of the processor by other unmanned aerial vehicle modules such as information processing;
another technical scheme in the above technical schemes has the following advantage or beneficial effect: the approach of combining hardware-accelerated decoding with non-hardware direct rendering is abandoned, effectively reducing the copying of large amounts of data.
Of course, it is not necessary for any one product in which the invention is practiced to achieve all of the above-described technical effects simultaneously.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic workflow diagram of a parsing and rendering process of multiple video data streams of an unmanned aerial vehicle cluster according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a multi-step pixel shader post-processing provided by one embodiment of the present invention;
FIG. 3 is a functional block diagram of an intelligence display rendering pipeline according to one embodiment of the present invention;
fig. 4 is a functional block diagram of a hardware YUV data conversion and rendering pipeline according to an embodiment of the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The unmanned aerial vehicle cluster video picture rendering and post-processing method solves the decoding and display rendering pressure caused by the large volume of video images from an unmanned aerial vehicle cluster. At the same time, a set of information post-processing methods designed for this technology preserves ample room for later information post-processing operations while relieving the cluster's data processing pressure.
The main workflow of the parsing and rendering process for the multiple video data streams of the unmanned aerial vehicle cluster is shown in fig. 1: the unmanned aerial vehicle downlinks a data packet, which is processed in video memory. For the video stream mode, the steps include:
step 1, initializing multiple paths of DirectX hardware accelerated decoding to obtain respective Surface (namely first shell data);
step 2, encapsulating the YUV compressed data in the Surface acquired in the previous step into the Surface for rendering (namely, second shell data) to realize data conversion;
the method comprises the steps that decoded image data are stored in a surface acquired by a decoder, the data cannot be directly used for display rendering, one-step surface-to-surface format conversion is needed, and the converted target format is 'render target', namely a target body to be rendered;
step 3, converting each Surface attribute into a shader resource;
and 4, enabling the plurality of shader resources to enter a unified custom shader pipeline corresponding to the textures to perform an information post-processing flow.
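The essence of steps 1-4 is that each stage re-tags the same video-memory buffer rather than copying it. A minimal CPU-side sketch (the type and function names are hypothetical stand-ins, since the real design uses DirectX surfaces that cannot be shown runnable here):

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-ins for DirectX video-memory objects: each stage
// re-wraps the same underlying buffer instead of copying it, mirroring
// the zero-copy flow decoder surface -> render target -> shader resource.
using VideoMemory = std::shared_ptr<std::vector<uint8_t>>;

struct DecoderSurface { VideoMemory data; };  // first shell data (step 1)
struct RenderTarget   { VideoMemory data; };  // second shell data (step 2)
struct ShaderResource { VideoMemory data; };  // pipeline input (step 3)

DecoderSurface hardware_decode(const std::vector<uint8_t>& packet) {
    // A real implementation would invoke DXVA hardware decoding; here the
    // "decoded" bytes are simply placed into video memory once.
    return { std::make_shared<std::vector<uint8_t>>(packet) };
}

RenderTarget to_render_target(const DecoderSurface& s) {
    return { s.data };  // surface-to-surface conversion re-tags, no copy
}

ShaderResource as_shader_resource(const RenderTarget& rt) {
    return { rt.data };  // attribute adjustment only, no copy
}
```

The point the sketch makes testable is that the buffer entering the custom shader pipeline is the same one the decoder produced.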
Under the restriction of certain algorithms, platforms and other conditions, return of the video stream cannot be satisfied and only sparse return of pictures can be performed, which reduces the overall cluster bandwidth pressure. Therefore, efficient accelerated processing of the compressed data in picture mode, i.e. the picture download mode, is needed; the steps for the picture download mode include:
step 1, decompressing a single JPEG picture obtained by downloading to obtain YUV data;
step 2, format conversion calculation must be performed on the YUV data before display rendering; the conversion equation follows the ITU-R BT.601 standard, and the process can be executed in parallel at high speed by a pixel shader;
and 3, enabling the plurality of shader resources to enter a unified custom shader pipeline corresponding to the textures to perform an information post-processing flow.
There are two types of post-processing designs for the information post-processing of unmanned aerial vehicle cluster video data in a shader. The first is a pixel-by-pixel post-processing design; fig. 2 shows the multi-pass pixel processing of the shader: the output pixel of each pass serves as the input of the next pass, and the post-processing operation on the video image is realized by pixel-by-pixel operation across multiple shader passes. The second is a multi-layer post-processing design, shown in fig. 3, which is a multi-layer parallel method built on top of the pixel-by-pixel multi-pass flow of fig. 2: whereas fig. 2 connects multiple passes in series to complete pixel processing, fig. 3 runs several such processing chains in parallel, realizing common calculation across multiple layers, layer superposition, and centralized rendering display.
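The serial multi-pass design of fig. 2 amounts to function composition over pixels. A minimal sketch (modelling each shader pass as a plain function, an assumption made purely for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// One "pass" maps an input pixel value to an output pixel value; a
// multi-pass post-process chains passes so that each pass consumes the
// previous pass's output, as in fig. 2.
using Pass = std::function<uint8_t(uint8_t)>;

uint8_t run_chain(const std::vector<Pass>& passes, uint8_t pixel) {
    for (const auto& p : passes) pixel = p(pixel);
    return pixel;
}
```

On the GPU each pass runs over all pixels in parallel; the chaining order, however, is exactly as serial as in this sketch.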
Due to the downlink bandwidth limitation, a method of downloading only key frames is often adopted; the data format is then the JPEG picture format, and the video stream processing flow cannot be applied directly. The solution (see fig. 4 for the design) is to perform texture processing on the data and complete the hardware-accelerated pixel calculation to obtain the complete downloaded picture.
1. DirectX custom data stream processing pipeline design
By means of the hard decoding function realized in the open-source decoding library FFmpeg and conforming to the DXVA specification, the high-definition H.264 video stream is decoded in stages into a shell data format adapted to the DirectX rendering interface, which is the data source for the subsequent display rendering and post-processing of the hard-decoded data.
After the preliminary hard-decoded shell data is obtained, two key points must be considered for subsequent display rendering. First, considering the acceleration characteristics of display rendering, copy operations to system memory should be avoided as much as possible when the hardware-decoded shell data is displayed and rendered; operating on the data in video memory is more efficient. Second, on the premise that the decoded data is not copied out of video memory, the capability for custom display rendering post-processing of the video memory data must also be preserved.
The relatively common display rendering of hard-decoded high-definition video data relies directly on the fixed pipeline to display the video memory data after data format conversion. In the fixed pipeline, the rendering data in video memory cannot be acquired or post-processed, so the flexibility of data operation is lost, which is unfavorable for subsequent information processing of the rendering data.
By adopting the custom pipeline design shown in fig. 1, the data after hard decoding is used as shader resources to be accessed into the custom pipeline, so that the required custom operation is completed on the data, meanwhile, the data copy to the memory is avoided in the whole process, and the processing efficiency is improved.
2. Information post-processing of clustered multi-channel high-definition video data
The information post-processing of the unmanned aerial vehicle cluster needs to perform algorithmic post-processing on the acquired video data, such as real-time video defogging and the tracking and marking of a target of interest in the video. In the preceding sections, the design of the rendering shader pipeline for hardware-decoded high-definition video data has been completed, and the decoded shell data is accessed by the shader as a shader resource; post-processing is the process of operating on these shader resources.
The post-processing of cluster video data in the shader comprises two types. The first directly exploits the shader's efficient parallelization to perform direct pixel-by-pixel processing, independently and in parallel for each of the cluster's multiple data channels — for example, real-time hardware defogging of video. The second creates a plurality of similar shader processing pipelines, one per layer, for the layered rendering of various kinds of information data.
The pixel-by-pixel algorithmic processing of video data often requires a plurality of pixel shaders connected in series, with the processing result of the previous pixel shader serving as the input resource for the subsequent one.
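As a concrete instance of such a per-pixel pass, a simplistic haze-reduction step can be sketched as a contrast stretch (this is NOT the patent's actual defogging algorithm — real methods such as dark-channel-prior defogging are considerably more involved; the function is purely illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Illustrative per-pixel haze-reduction pass: stretch the observed
// luminance range [lo, hi] back to [0, 255], thinning a uniform haze
// veil that compresses contrast. lo/hi would come from a prior pass.
uint8_t defog_pixel(uint8_t v, uint8_t lo, uint8_t hi) {
    if (hi <= lo) return v;  // degenerate range: pass through unchanged
    int stretched = (v - lo) * 255 / (hi - lo);
    return (uint8_t)std::clamp(stretched, 0, 255);
}
```

Because the function depends only on the pixel itself (plus two scalar constants), it parallelizes trivially across all pixels in a shader, which is exactly the property the first post-processing type exploits.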
The information rendering of multiple layers is designed to run in parallel: each single pipeline is responsible for the rendering of one piece of subdivided information, and the results are integrated to form the final rendering result.
3. Hardware acceleration processing method under cluster downloading picture mode
Under the restriction of certain algorithms, platforms and other conditions, return of the video stream cannot be satisfied and only sparse return of pictures can be performed, which reduces the overall cluster bandwidth pressure; efficient accelerated processing of the compressed data in picture mode is therefore needed.
Taking the YUV compression format as an example, format conversion calculation must be performed on it before display rendering. The conversion equations follow the ITU-R BT.601 standard:
R = 1.164 × (Y − 16) + 1.596 × (V − 128)
G = 1.164 × (Y − 16) − 0.392 × (U − 128) − 0.813 × (V − 128)
B = 1.164 × (Y − 16) + 2.017 × (U − 128)
The constant parts are the fixed coefficients 1.164, 1.596, 0.392, 0.813 and 2.017.
the conversion process is performed pixel by pixel according to the formula, and the conversion process is obviously suitable for completing the conversion process by utilizing a parallelization processing mode of the display card aiming at the conversion of high-definition video data (the algorithm design corresponding to the pixel-by-pixel processing is calculated by one pixel, the calculation process is designed, and all pixels are completed simultaneously when the operation is performed in a shader of the display card, so that the parallelization effect is achieved.
4. Comparative analysis of experimental results
Finally, the invention was verified by an actual autonomous reconnaissance test of unmanned aerial vehicle cluster flight video. The cluster size was 12 aircraft: 3 of them used real-time video streaming, and 9 used the picture download mode (limited by the telemetry downlink bandwidth). The results of rendering display by the method of the present invention and by the prior-art method are shown in table 1. The invention maintains a significant efficiency advantage in unmanned aerial vehicle cluster video processing and information post-processing.
TABLE 1
Test content | CPU-based image and information processing method | Method of the invention |
---|---|---|
Average frame rate for video stream rendering | 4.8 frames/sec | 23.3 frames/sec |
Average frame rate for picture rendering | 0.4 frames/sec | 1.9 frames/sec |
Detailed descriptions are given above to a method, medium, and apparatus for rendering and post-processing an unmanned aerial vehicle cluster video picture provided in the embodiments of the present application. The above description of the embodiments is only for the purpose of helping to understand the method of the present application and its core ideas; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
As used in the specification and claims, certain terms are used to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This specification and claims do not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to. "substantially" means within an acceptable error range, and a person skilled in the art can solve the technical problem within a certain error range to substantially achieve the technical effect. The description which follows is a preferred embodiment of the present application, but is made for the purpose of illustrating the general principles of the application and not for the purpose of limiting the scope of the application. The protection scope of the present application shall be subject to the definitions of the appended claims.
It is also noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a good or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such good or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a commodity or system that includes the element.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The foregoing description shows and describes several preferred embodiments of the present application, but as aforementioned, it is to be understood that the application is not limited to the forms disclosed herein, but is not to be construed as excluding other embodiments and is capable of use in various other combinations, modifications, and environments and is capable of changes within the scope of the application as described herein, commensurate with the above teachings, or the skill or knowledge of the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the application, which is to be protected by the claims appended hereto.
Claims (10)
1. A method for rendering and post-processing video pictures of an unmanned aerial vehicle cluster is characterized in that a shader is adopted to perform rendering processing and information post-processing on video streams or pictures.
2. The method for unmanned aerial vehicle cluster video picture rendering and post-processing as claimed in claim 1, wherein the step of processing the video stream using a shader comprises:
s1, carrying out hardware decoding on the data packet downloaded by the unmanned aerial vehicle, and extracting first shell data;
s2, packaging the YUV compressed data in the first shell data into rendering-purpose second shell data to realize data conversion;
s3, adjusting the attribute of the second shell data into shader resources;
s4, the shader resources corresponding to the textures enter the custom shader pipeline to perform the information post-processing flow.
3. The method for unmanned aerial vehicle cluster video picture rendering and post-processing as claimed in claim 1, wherein the step of processing the picture using a shader comprises:
s1, decompressing the single JPEG picture obtained by downloading to obtain YUV data;
s2, adopting a shader to perform format conversion calculation on the YUV data;
s3, the shader resources corresponding to the textures enter the custom shader pipeline to perform the information post-processing flow.
4. The unmanned aerial vehicle cluster video picture rendering and post-processing method according to claim 2, wherein the specific content of step S1 includes: decoding by using DirectX hardware to obtain the shell data of the data packet.
5. The method for unmanned aerial vehicle cluster video picture rendering and post-processing according to claim 2 or 3, wherein the information post-processing is either or both of pixel-by-pixel post-processing and multi-layer post-processing.
6. The unmanned aerial vehicle cluster video picture rendering and post-processing method of claim 5, wherein the multi-layer post-processing specifically comprises: rendering the information of multiple layers in a parallel rendering mode; and creating a plurality of similar shader processing pipelines, one per layer, where each pipeline is responsible for the rendering of one piece of subdivided information, and the final rendering result is formed after aggregation.
7. The unmanned aerial vehicle cluster video picture rendering and post-processing method of claim 5, wherein the pixel-by-pixel post-processing is implemented by: and connecting a plurality of pixel shaders in series for rendering, wherein the processing result of the previous pixel shader is used as the input data of the next pixel shader.
8. The method for unmanned aerial vehicle cluster video picture rendering and post-processing according to claim 2 or 3, wherein the information post-processing flow comprises defogging processing and the tracking and marking of a target of interest.
9. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of drone cluster video picture rendering and post-processing as claimed in any one of claims 1-8.
10. An apparatus for unmanned aerial vehicle cluster video picture rendering and post-processing, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein: the processor, when executing the computer program, implements the steps of the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110714571.1A CN113747198B (en) | 2021-06-25 | 2021-06-25 | Unmanned aerial vehicle cluster video picture rendering and post-processing method, medium and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113747198A true CN113747198A (en) | 2021-12-03 |
CN113747198B CN113747198B (en) | 2024-02-09 |
Family
ID=78728529
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110714571.1A Active CN113747198B (en) | 2021-06-25 | 2021-06-25 | Unmanned aerial vehicle cluster video picture rendering and post-processing method, medium and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113747198B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103686084A (en) * | 2013-12-10 | 2014-03-26 | 中国航天科工集团第四研究院 | Panoramic video monitoring method used for cooperative real-time reconnaissance of multiple unmanned aerial vehicles |
CN103700385A (en) * | 2012-09-27 | 2014-04-02 | 深圳市快播科技有限公司 | Media player, playing method, and video post-processing method in hardware acceleration mode |
CN106210883A (en) * | 2016-08-11 | 2016-12-07 | 浙江大华技术股份有限公司 | A kind of method of Video Rendering, equipment |
US20170295379A1 (en) * | 2016-04-12 | 2017-10-12 | Microsoft Technology Licensing, Llc | Efficient decoding and rendering of blocks in a graphics pipeline |
US9928637B1 (en) * | 2016-03-08 | 2018-03-27 | Amazon Technologies, Inc. | Managing rendering targets for graphics processing units |
US20180295367A1 (en) * | 2017-04-10 | 2018-10-11 | Intel Corporation | Technology to accelerate scene change detection and achieve adaptive content display |
US20180300905A1 (en) * | 2017-04-17 | 2018-10-18 | Intel Corporation | Encoding 3d rendered images by tagging objects |
CN111357289A (en) * | 2017-11-17 | 2020-06-30 | Ati科技无限责任公司 | Game engine application for video encoder rendering |
CN112348732A (en) * | 2019-08-08 | 2021-02-09 | 华为技术有限公司 | Model reasoning method and device based on graphics rendering pipeline and storage medium |
Non-Patent Citations (3)
Title |
---|
SHEILA N. MUGALA: "Leveraging the Technology of Unmanned Aerial Vehicles for Developing Countries", 《SAIEE AFRICA RESEARCH JOURNAL》, vol. 111, no. 4 * |
靳高峰: "Dynamic 3D Perception, Understanding and Visualization Construction of UAVs under DVE" (in Chinese), 《China Master's Theses Full-text Database, Engineering Science and Technology II》 *
黄翔翔; 朱全生; 江万寿: "Fast visibility detection without preset deviation in multi-view texture mapping" (in Chinese), Acta Geodaetica et Cartographica Sinica, no. 01 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||