WO2023093474A1 - Multimedia processing method, apparatus, device, and medium - Google Patents

Multimedia processing method, apparatus, device, and medium

Info

Publication number
WO2023093474A1
WO2023093474A1 (application PCT/CN2022/129137)
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
multimedia
multimedia data
computing devices
Prior art date
Application number
PCT/CN2022/129137
Other languages
English (en)
French (fr)
Inventor
章龙涛 (Zhang Longtao)
Original Assignee
北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023093474A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263 Processing of video elementary streams involving reformatting operations by altering the spatial resolution, e.g. for displaying on a connected PDA
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present disclosure relates to the technical field of image processing, and in particular to a multimedia processing method, apparatus, device, and medium.
  • The present disclosure provides a multimedia processing method, apparatus, device, and medium.
  • An embodiment of the present disclosure provides a multimedia processing method applied to a terminal, including:
  • the first multimedia data including at least one of video frame data and images
  • At least two computing devices included in the terminal respectively call and process the at least two first data blocks to obtain at least two second data blocks, wherein each computing device processes one first data block;
  • An embodiment of the present disclosure also provides a multimedia processing device, which is set in a terminal and includes:
  • a data acquisition module configured to acquire first multimedia data, the first multimedia data including at least one of video frame data and images;
  • a data division module configured to store the first multimedia data in a shared memory, and divide the first multimedia data into at least two first data blocks, wherein each of the first data blocks has its memory information in the shared memory;
  • a data processing module configured to call and process the at least two first data blocks through at least two computing devices included in the terminal, based on the memory information of the at least two first data blocks in the shared memory, to obtain at least two second data blocks, wherein each computing device processes one first data block;
  • the data display module is configured to splice the at least two second data blocks to obtain second multimedia data, and display the second multimedia data.
  • An embodiment of the present disclosure also provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor. The processor is configured to read the instructions from the memory and execute them to implement the multimedia processing method provided by the embodiments of the present disclosure.
  • the embodiment of the present disclosure also provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute the multimedia processing method provided by the embodiment of the present disclosure.
  • The multimedia processing solution provided by the embodiments of the present disclosure obtains first multimedia data, the first multimedia data including at least one of video frame data and images; stores the first multimedia data in a shared memory and divides it into at least two first data blocks, wherein each first data block has its memory information in the shared memory; based on the memory information of the at least two first data blocks in the shared memory, respectively calls and processes the at least two first data blocks through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block; and splices the at least two second data blocks to obtain second multimedia data, which is then displayed.
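The overall flow just summarized (store in a shared buffer, divide into one block per computing device, process the blocks in parallel, then splice) can be sketched in Python. The function names, the thread pool standing in for separate computing devices, and the `enhance` callback are all illustrative assumptions, not the patent's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def process_multimedia(frame: bytes, num_devices: int, enhance) -> bytes:
    """Divide `frame` into one block per device, process the blocks in
    parallel, and splice the results into the output."""
    # Shared memory: a single buffer holding the frame; workers receive
    # only (start, end) "memory information" into it, not copies.
    shared = memoryview(bytearray(frame))
    unit = len(frame) // num_devices          # average-division block length
    blocks = []
    for i in range(num_devices):
        start = i * unit
        # The last device absorbs any remainder so the whole frame is covered.
        end = len(frame) if i == num_devices - 1 else start + unit
        blocks.append((start, end))

    def worker(mem_info):
        start, end = mem_info
        # "Call and process" the first data block via its memory information.
        return enhance(bytes(shared[start:end]))

    with ThreadPoolExecutor(max_workers=num_devices) as pool:
        second_blocks = list(pool.map(worker, blocks))   # order preserved
    return b"".join(second_blocks)   # splice into the second multimedia data
```

Here `enhance` stands in for any image-quality-enhancement routine (e.g. a super-resolution kernel); on a real terminal each block would be dispatched to a different hardware device rather than a thread.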
  • FIG. 1 is a schematic flowchart of a multimedia processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another multimedia processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of multimedia processing provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a multimedia processing device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • Algorithmic techniques such as video super-resolution can be used to improve the resolution and definition of videos or images, and processing efficiency can be improved through multi-threaded parallel processing; however, when the computational complexity of the algorithm is high, the processing efficiency still cannot meet the demand.
  • an embodiment of the present disclosure provides a multimedia processing method, which will be introduced in conjunction with specific embodiments below.
  • FIG. 1 is a schematic flowchart of a multimedia processing method provided by an embodiment of the present disclosure.
  • the method can be executed by a multimedia processing device, where the device can be implemented by software and/or hardware, and generally can be integrated into an electronic device. As shown in Figure 1, this method is applied to the terminal, including:
  • Step 101 Acquire first multimedia data, where the first multimedia data includes at least one of video frame data and images.
  • The first multimedia data can be any multimedia data that needs image quality enhancement processing, and may include at least one of video frame data and images; the frame data can be understood as the video frame data obtained by decoding the video.
  • the embodiments of the present disclosure are not limited to the file format and source of the first multimedia data.
  • the first multimedia data may be images captured in real time, or video frame data or images downloaded from the Internet.
  • When the first multimedia data is video frame data, the at least two computing devices include a graphics processor, and obtaining the first multimedia data includes: decoding the video by the graphics processor to obtain a plurality of texture images, and determining the plurality of texture images as the frame data.
  • a graphics processing unit may be a microprocessor set in a terminal for performing image and graphics-related operations.
  • the computing device may be a device set in the terminal for computing and processing, and multiple computing devices may be set in one terminal.
  • The computing device may include the above-mentioned graphics processor, a neural-network processing unit (NPU), a digital signal processor (DSP), an accelerated processing unit (APU), etc.; the specific type is not limited.
  • The multimedia processing device may first obtain the video to be processed, then decode the video through a graphics processor to obtain multiple texture images, determine each texture image as frame data, and store the frame data in the memory of the GPU.
  • the above-mentioned decoding method is not limited, for example, in the terminal of the Android system, the video can be decoded by using a hard decoding method through an open graphics library (Open Graphics Library, OpenGL) to obtain a texture image (Texture).
  • The above-mentioned OpenGL is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.
  • Step 102 Store the first multimedia data in a shared memory, and divide the first multimedia data into at least two first data blocks.
  • The first data block may be part of the data included in the first multimedia data, obtained by dividing the data into blocks; the first multimedia data may include at least two first data blocks, and each first data block has its memory information in the shared memory.
  • dividing the first multimedia data into at least two first data blocks includes: according to the memory information of the first multimedia data, using a pointer offset to the first multimedia data The data is divided into blocks to obtain at least two first data blocks, and the number of the at least two first data blocks is the same as the number of at least two computing devices.
  • the first multimedia data is stored in the shared memory.
  • the shared memory may be a memory structure capable of realizing memory sharing.
  • the shared memory may be an EGLImage
  • the EGLImage may represent a shared resource type created by an EGL client API (such as the aforementioned OpenGL).
  • the memory information of the first multimedia data may be the relevant information of the first multimedia data in the above-mentioned shared memory, and in the embodiment of the present disclosure, the memory information of the first multimedia data may include the starting position in the shared memory and data length.
  • The pointer can be a processing pointer of the central processing unit (CPU), pointing to a specific memory location in the shared memory; the pointer offset is a change of the memory location, that is, a change of the processing target.
  • FIG. 2 is a schematic flowchart of another multimedia processing method provided by an embodiment of the present disclosure.
  • Carrying out data block to the first multimedia data by means of pointer offset to obtain at least two first data blocks may include the following steps:
  • Step 201 Determine the block length corresponding to each computing device.
  • the block length may represent the data length of a first data block.
  • Determining the block length corresponding to each computing device may include: determining the block length of each computing device as a unit length according to an average division method, the unit length being the result of dividing the data length of the first multimedia data by the number of the at least two computing devices.
  • the unit length may be a length obtained by equally dividing the data length of the first multimedia data according to the number of multiple computing devices. Specifically, when determining the block length corresponding to each computing device, the multimedia processing apparatus may determine the block length of each computing device as a unit length in an evenly divided manner. It can be understood that at this time, since the block lengths corresponding to each computing device are the same, the offset lengths in the above pointer offset process are also the same.
  • determining the block length corresponding to each computing device may include: determining the block length corresponding to each computing device according to the processing speed of each computing device, and the processing speed of the computing device is proportional to the block length.
  • When determining the block length corresponding to each computing device, the multimedia processing device can determine the corresponding block length according to the processing speed of each computing device.
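The two block-length strategies just described, equal division and speed-proportional division, can be sketched with hypothetical helpers (the remainder is pushed into the last block so the lengths always sum to the data length; this rounding choice is an assumption, not specified by the patent):

```python
def block_lengths_average(data_len: int, num_devices: int) -> list[int]:
    # Equal division: every device gets the unit length; the last block
    # absorbs the remainder so the lengths sum to data_len.
    unit = data_len // num_devices
    lengths = [unit] * num_devices
    lengths[-1] += data_len - unit * num_devices
    return lengths

def block_lengths_by_speed(data_len: int, speeds: list[float]) -> list[int]:
    # Speed-proportional division: a device's block length is proportional
    # to its processing speed, so faster devices receive larger blocks.
    total = sum(speeds)
    lengths = [int(data_len * s / total) for s in speeds]
    lengths[-1] += data_len - sum(lengths)   # push rounding into last block
    return lengths
```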
  • Step 202 Starting from the starting position of the first multimedia data, the pointer is offset according to the block length corresponding to each computing device, and the data block between two adjacent pointers is extracted to obtain at least two first data blocks until the processing length of the first multimedia data is reached.
  • After determining the block length corresponding to each computing device, the multimedia processing device performs pointer offsets starting from the starting position of the first multimedia data, according to the block length corresponding to each computing device; the offset length is the block length corresponding to each computing device. At each offset, the data block between the memory locations corresponding to two adjacent pointers is extracted to obtain a first data block, whose data length is the above block length. As the pointer shifts multiple times, multiple first data blocks are obtained, and the process stops when the pointer points to the memory location corresponding to the processing length of the first multimedia data.
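Step 202's pointer walk can be mimicked in Python; the (offset, length) pairs below play the role of each first data block's memory information in the shared memory (the function name and shape of the result are illustrative assumptions):

```python
def divide_by_pointer_offset(start: int, total_len: int, block_lengths):
    """Walk a pointer from `start`, offsetting by each device's block
    length, and record the (offset, length) memory information of the
    data block between two adjacent pointer positions."""
    memory_info = []
    pointer = start
    for length in block_lengths:
        if pointer >= start + total_len:
            break                               # processing length reached
        end = min(pointer + length, start + total_len)
        memory_info.append((pointer, end - pointer))
        pointer = end                           # offset to the next block
    return memory_info
```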
  • Step 103 Based on the memory information of the at least two first data blocks in the shared memory, use at least two computing devices included in the terminal to call and process the at least two first data blocks to obtain at least two second data blocks .
  • Each computing device included in the terminal can access the above-mentioned shared memory; that is, through the underlying system data structure of the terminal, communication among the computing devices, and between the first multimedia data and each computing device, can be realized.
  • Memory sharing can reduce the performance loss caused by memory copy in the subsequent processing.
  • the memory information of each first data block in the shared memory includes the above-mentioned block length and the block start position to which the pointer is offset.
  • each computing device processes a first data block.
  • After the multimedia processing device divides the first multimedia data into multiple first data blocks, it can obtain the memory information of each first data block in the shared memory and send that memory information to the corresponding computing device. Each computing device thus receives the memory information of its first data block in the shared memory and, based on that memory information, calls and processes the corresponding first data block in the shared memory to obtain a second data block; at least two second data blocks are thereby obtained.
  • The processing algorithm applied by each computing device to the first data block may be any algorithm that can realize image quality enhancement, for example a super-resolution algorithm. A super-resolution algorithm uses algorithmic techniques to improve the resolution of a video or image, improve and generate its texture details, and improve its content details and contrast; for example, it can optimize an SD video into an HD video, comprehensively improving video clarity and subjective quality.
  • The computational complexity of the super-resolution algorithm is relatively high.
  • Step 104 splicing at least two second data blocks to obtain second multimedia data, and displaying the second multimedia data.
  • Each computing device processes its corresponding first data block based on the memory information; after at least two second data blocks are obtained, each second data block may be output to a preset location in the above-mentioned shared memory. The multimedia processing device can then splice or combine the multiple second data blocks in the shared memory to obtain the second multimedia data, and the graphics processor can obtain the second multimedia data from the shared memory and render it for display on the screen of the terminal. Since the at least two second data blocks are obtained by performing image quality enhancement processing on the at least two first data blocks, the image quality of the second multimedia data obtained by splicing them is enhanced relative to the above-mentioned first multimedia data, thereby improving the user's quality experience of the multimedia data.
  • The first multimedia data is acquired, the first multimedia data including at least one of video frame data and images; the first multimedia data is stored in a shared memory and divided into at least two first data blocks, wherein each first data block has its memory information in the shared memory; based on the memory information of the at least two first data blocks in the shared memory, the at least two first data blocks are called and processed by at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block; and the at least two second data blocks are spliced to obtain second multimedia data, which is displayed.
  • The multimedia processing method may further include: implementing memory sharing between the first multimedia data and the at least two computing devices through a multi-device cache object interface, so that each computing device can access the first multimedia data in the shared memory.
  • Realizing memory sharing between the first multimedia data and the at least two computing devices through the multi-device cache object interface may include: creating, through the multi-device cache object interface, the shared memory and the device memory of each computing device in the hardware memory, and creating a first memory corresponding to the first multimedia data in the shared memory, so that the first memory and the device memory realize memory sharing.
  • The multi-device cache object interface may be an implementation interface, in the underlying system data structure of the terminal, for buffers accessible to the various computing devices.
  • the multi-device cache object interface can be AHardwareBuffer
  • HardwareBuffer is a low-level object of the Android system; it represents a buffer that can be accessed by various computing devices and can be mapped to the memory of various computing devices.
  • the HardwareBuffer implementation interface provided by the Android system in the local service (Native) layer is AHardwareBuffer.
  • the embodiments of the present disclosure take the Android system as an example, and other operating systems may be implemented in other ways.
  • the hardware memory may be a hardware storage area of the terminal.
  • The multimedia processing device may create the shared memory in the hardware memory of the terminal through the ClientBuffer corresponding to the multi-device cache object interface, create the device memory corresponding to each computing device, and then create the first memory corresponding to the first multimedia data in the shared memory. Since the first memory and the device memory of each computing device are both part of the shared memory, memory sharing between the first memory and the device memory is realized, so that during subsequent processing each computing device can access the first multimedia data in the first memory.
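The effect of this arrangement, one physical buffer with several zero-copy device views onto it, can be mimicked in plain Python with `memoryview`. This is only an analogy for the AHardwareBuffer-backed shared memory, not actual Android code:

```python
# One underlying hardware buffer (stand-in for the shared memory created
# via the multi-device cache object interface).
hardware_buffer = bytearray(16)

# The "first memory" for the multimedia data and the per-device memories
# are all zero-copy views onto the same buffer, so no memory copy occurs.
first_memory = memoryview(hardware_buffer)
gpu_memory = memoryview(hardware_buffer)
npu_memory = memoryview(hardware_buffer)

first_memory[0:4] = b"DATA"                 # producer writes the data once
assert bytes(gpu_memory[0:4]) == b"DATA"    # visible through the GPU view
assert bytes(npu_memory[0:4]) == b"DATA"    # and through the NPU view
```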
  • The multiple computing devices in the terminal can realize full memory sharing and interconnection, which reduces the time cost and performance loss caused by memory copying and thus improves the processing efficiency of subsequent computation.
  • At least one computing device of the at least two computing devices is provided with a heterogeneous computing architecture, and the heterogeneous computing architecture is used to process the corresponding first data block in parallel through multiple threads.
  • the multimedia processing method may further include: creating a second memory corresponding to the heterogeneous computing architecture of at least one computing device in the shared memory, so that the second memory is shared with the first memory and the device memory.
  • the above-mentioned heterogeneous computing architecture can be a framework for implementing heterogeneous parallel data processing within a computing device, which can be set according to actual conditions.
  • The heterogeneous computing architecture can be constructed based on the Open Computing Language (OpenCL). OpenCL is a programming environment for general-purpose parallel programming of heterogeneous systems, and a framework for writing programs for heterogeneous platforms.
  • A heterogeneous computing framework may be set in one or more of the at least two computing devices included in the terminal, so as to process the first data block corresponding to the current computing device in parallel through multiple threads and thereby improve processing efficiency.
  • The multimedia processing device may share the first memory corresponding to the first multimedia data in the shared memory with the second memory corresponding to the heterogeneous computing framework through a shared interface; the shared interface is used to create, in the shared memory, the second memory corresponding to the heterogeneous computing framework. Because both the second memory and the first memory are part of the shared memory, sharing between the second memory and the first memory is realized; and because the first memory is created based on the multi-device cache object interface, the second memory can also be shared with each computing device, thereby realizing memory sharing among the second memory, the first memory, and the device memory.
  • the multimedia processing method may further include: setting the same image processing algorithm in each computing device, so that each computing device uses the image processing algorithm to process the corresponding first data block.
  • the multimedia processing device may also set the same image processing algorithm in each computing device included in the terminal, so that each subsequent computing device uses the same image processing algorithm to process its corresponding first data block.
  • FIG. 3 is a schematic diagram of multimedia processing provided by an embodiment of the present disclosure.
  • In this example, the system is an Android system, the first multimedia data is video frame data, the computing devices include a GPU and an NPU, and the OpenCL heterogeneous computing framework is set in the GPU.
  • The process of multimedia processing may include: the terminal obtains the video from the video source and decodes it through the GPU to obtain the OpenGL graphics texture, that is, multiple texture images serving as the frame data of the video; the frame data of the video is stored in the shared memory, and memory sharing is realized in advance between the shared memory of the underlying system data structure and the memories of the GPU, CPU, NPU, and the OpenCL heterogeneous computing framework; each frame of data is divided into data blocks, for example into two first data blocks; the two first data blocks are processed by the super-resolution algorithm on the two computing devices (GPU and NPU) to obtain two second data blocks, which are output as processing results to the specified output memory block in the shared memory; the two second data blocks in the output memory block are then combined to obtain the second multimedia data, that is, the processed OpenGL graphics texture in the figure; the OpenGL graphics texture can be read by the CPU and rendered on the screen, that is, the second multimedia data is shown on the display screen.
  • The frame data of the above video can be obtained by the GPU through hard decoding and used as the input data for subsequent algorithm processing. The frame data of the video may or may not be combined with pre-processing, which can be determined according to the actual situation. The above processing uses the super-resolution algorithm as an example; other processing algorithms are also applicable. This solution has a strong correlation with the underlying structure of the system.
  • the above-mentioned Android system is taken as an example, and the implementation methods of different systems can be different.
  • The input data in the above multimedia processing process can be obtained by system hard decoding of the video, and its resolution is equal to that of the video being processed. The decoded data can be accessed by the GPU and, through memory sharing, also by computing devices such as the DSP and NPU, after which the data can be divided into blocks; different data blocks can be processed on different computing devices, and finally the output results are sent to the GPU for rendering and display on the screen.
  • The resolution of the output data is higher than that of the original video; for example, the output data can be twice the resolution of the original video.
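A doubled-resolution output can be illustrated with a trivial nearest-neighbour 2x upscale. This shows only the output dimensions; a real super-resolution algorithm generates new detail rather than repeating pixels:

```python
def upscale_2x(frame: list[list[int]]) -> list[list[int]]:
    """Nearest-neighbour 2x upscale: each input pixel becomes a 2x2 block,
    so the output has twice the width and twice the height."""
    out = []
    for row in frame:
        doubled = [p for p in row for _ in range(2)]   # double the width
        out.append(doubled)
        out.append(list(doubled))                      # double the height
    return out
```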
  • This solution makes full use of the computing power of multiple computing devices included in the terminal to complete the real-time processing of multimedia on the terminal by means of heterogeneous parallelism.
  • The advantage lies in multi-device memory sharing and multi-device parallel computing, which can effectively improve algorithm operation efficiency and reduce performance loss.
  • FIG. 4 is a schematic structural diagram of a multimedia processing device provided by an embodiment of the present disclosure.
  • the device may be implemented by software and/or hardware, and may generally be integrated into an electronic device. As shown in Figure 4, the device is set on the terminal, including:
  • a data acquisition module 401 configured to acquire first multimedia data, the first multimedia data including at least one of video frame data and images;
  • a data division module 402 configured to store the first multimedia data in a shared memory, and divide the first multimedia data into at least two first data blocks, wherein each of the first data blocks has its memory information in said shared memory;
  • the data processing module 403 is configured to, based on the memory information of the at least two first data blocks in the shared memory, respectively call and process the at least two first data blocks through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;
  • the data display module 404 is configured to splice the at least two second data blocks to obtain second multimedia data, and display the second multimedia data.
  • when the first multimedia data is video frame data, the at least two computing devices include a graphics processor, and the data acquisition module 401 is specifically used for:
  • the video is decoded by the graphics processor to obtain multiple texture images, and the multiple texture images are determined as frame data.
  • the data division module 402 is used for:
  • the first multimedia data is divided into data blocks by pointer offset according to its memory information to obtain at least two first data blocks, and the number of the at least two first data blocks is the same as the number of the at least two computing devices.
  • the memory information of the first multimedia data in the shared memory includes a starting position and a data length in the shared memory.
  • the data division module 402 includes:
  • a block length unit configured to determine the block length corresponding to each of the computing devices;
  • a pointer unit configured to start from the starting position of the first multimedia data, perform pointer offsets according to the block length corresponding to each computing device, and extract the data block between two adjacent pointers to obtain at least two first data blocks, until the processing length of the first multimedia data is reached.
  • the block length unit is used for:
  • the block length of each computing device is determined as a unit length, and the unit length is a result of dividing the data length of the first multimedia data by the number of the at least two computing devices.
  • the block length unit is used for:
  • the block length corresponding to each computing device is determined according to the processing speed of each computing device, and the processing speed of the computing device is proportional to the block length.
  • the memory information of each first data block in the shared memory includes the block length and the block start position to which the pointer is offset.
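As a rough illustration, the memory information produced by the equal-division pointer-offset scheme (a start offset and a block length per block) might look like the following Python sketch. The function name and the policy of letting the last block absorb any remainder are assumptions made so the example covers the full processing length; they are not taken from the disclosure.

```python
def block_memory_info(start, data_length, num_devices):
    """Return (block_start, block_length) pairs produced by offsetting a
    pointer from `start` in equal steps, as in the equal-division scheme."""
    unit = data_length // num_devices          # unit length per device
    infos, ptr = [], start
    for i in range(num_devices):
        # The last block absorbs any remainder so the whole length is covered.
        length = unit if i < num_devices - 1 else data_length - unit * (num_devices - 1)
        infos.append((ptr, length))
        ptr += length                          # pointer offset to the next block start
    return infos
```

Each pair is exactly the per-block memory information described above: the block start to which the pointer has been offset, plus the block length.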
  • the device also includes a memory sharing module for:
  • memory sharing between the first multimedia data and the at least two computing devices is implemented through a multi-device cache object interface, so that each of the computing devices can access the first multimedia data in the shared memory.
  • the memory sharing module is used for:
  • creating, through the multi-device cache object interface, the shared memory and the device memory of each computing device in hardware memory, and creating in the shared memory a first memory corresponding to the first multimedia data, so that the first memory and the device memory share memory.
  • the device also includes an algorithm setting module for:
  • the same image processing algorithm is set in each of the computing devices, so that each of the computing devices uses the image processing algorithm to process the corresponding first data block.
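For a pointwise algorithm, running the same routine on every block and splicing reproduces the whole-frame result exactly, which is what makes setting one shared algorithm per device transparent to the output. A toy check follows; the names are illustrative, and note that an algorithm using spatial neighbourhoods would need overlapping blocks, which this sketch deliberately does not model.

```python
def enhance(samples):
    # One shared, pointwise image-processing algorithm (illustrative only):
    # brighten each sample by 10, clamped to the 8-bit maximum.
    return [min(255, s + 10) for s in samples]

def split(frame, k):
    # Equal division; the last block absorbs any remainder.
    unit = len(frame) // k
    return [frame[i * unit:(i + 1) * unit] if i < k - 1 else frame[(k - 1) * unit:]
            for i in range(k)]

frame = list(range(0, 260, 10))
blocks = [enhance(b) for b in split(frame, 4)]        # same algorithm on every block
spliced = [x for b in blocks for x in b]
print(spliced == enhance(frame))
```

The printed comparison holds because `enhance` touches each sample independently; that independence is the property a per-block scheme relies on.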
  • At least one computing device of the at least two computing devices is provided with a heterogeneous computing architecture, and the heterogeneous computing architecture is used to parallelly process the corresponding first data block through multiple threads.
  • the device further includes a heterogeneous memory module, which is used for:
  • a second memory corresponding to the heterogeneous computing architecture of the at least one computing device is created in the shared memory, so that the second memory is shared with the first memory and the device memory.
  • the multimedia processing device provided by the embodiment of the present disclosure can execute the multimedia processing method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present disclosure further provides a computer program product, including a computer program/instruction, and when the computer program/instruction is executed by a processor, the multimedia processing method provided in any embodiment of the present disclosure is implemented.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 5, it shows an electronic device 500 suitable for implementing an embodiment of the present disclosure.
  • the electronic device 500 in the embodiment of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Multimedia Players) and vehicle-mounted terminals (such as car navigation terminals), and stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 5 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • an electronic device 500 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 501, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
  • in the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored.
  • the processing device 501, ROM 502, and RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504 .
  • the following devices can be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509.
  • the communication means 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data. While FIG. 5 shows electronic device 500 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 509, or from storage means 508, or from ROM 502.
  • when the computer program is executed by the processing device 501, the above functions defined in the multimedia processing method of the embodiment of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, they cause the electronic device to: acquire first multimedia data, the first multimedia data including at least one of video frame data and an image; store the first multimedia data in a shared memory and divide the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory; based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block; and splice the at least two second data blocks to obtain second multimedia data and display the second multimedia data.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, in certain circumstances, constitute a limitation of the unit itself.
  • the functions described herein above may be performed, at least in part, by one or more hardware logic components, for example, without limitation: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present disclosure relate to a multimedia processing method, apparatus, device and medium. The method is applied to a terminal and includes: acquiring first multimedia data, the first multimedia data including at least one of video frame data and an image; storing the first multimedia data in a shared memory and dividing the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory; based on the memory information of the at least two first data blocks in the shared memory, calling and processing the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block; and splicing the at least two second data blocks to obtain second multimedia data and displaying the second multimedia data. The present disclosure effectively improves processing efficiency and, through memory sharing among multiple computing devices, reduces performance loss.

Description

Multimedia processing method, apparatus, device and medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on and claims priority to Chinese Patent Application No. 202111406380.5, filed on November 24, 2021, the disclosure of which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

The present disclosure relates to the technical field of image processing, and in particular to a multimedia processing method, apparatus, device and medium.

BACKGROUND

With the development of intelligent terminals and video processing technology, consuming video or images on intelligent terminals has become a strong demand. At the same time, screen resolutions have become higher and higher, so that ordinary video or images can no longer satisfy the viewing needs of the human eye.

SUMMARY

In order to solve the above technical problem, or at least partially solve it, the present disclosure provides a multimedia processing method, apparatus, device and medium.

An embodiment of the present disclosure provides a multimedia processing method applied to a terminal, including:

acquiring first multimedia data, the first multimedia data including at least one of video frame data and an image;

storing the first multimedia data in a shared memory, and dividing the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory;

based on the memory information of the at least two first data blocks in the shared memory, calling and processing the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;

splicing the at least two second data blocks to obtain second multimedia data, and displaying the second multimedia data.

An embodiment of the present disclosure further provides a multimedia processing apparatus arranged on a terminal, including:

a data acquisition module configured to acquire first multimedia data, the first multimedia data including at least one of video frame data and an image;

a data division module configured to store the first multimedia data in a shared memory and divide the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory;

a data processing module configured to, based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;

a data display module configured to splice the at least two second data blocks to obtain second multimedia data and display the second multimedia data.

An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the multimedia processing method provided by the embodiments of the present disclosure.

An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program for executing the multimedia processing method provided by the embodiments of the present disclosure.

Compared with the related art, the technical solutions provided by the embodiments of the present disclosure have the following advantages. The multimedia processing solution provided by the embodiments of the present disclosure acquires first multimedia data including at least one of video frame data and an image; stores the first multimedia data in a shared memory and divides it into at least two first data blocks, each of which has its memory information in the shared memory; based on that memory information, calls and processes the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, each computing device processing one first data block; and splices the at least two second data blocks to obtain second multimedia data, which is then displayed.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.

FIG. 1 is a schematic flowchart of a multimedia processing method provided by an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of another multimedia processing method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of multimedia processing provided by an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of a multimedia processing apparatus provided by an embodiment of the present disclosure;

FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.

DETAILED DESCRIPTION

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the protection scope of the present disclosure.

It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.

The term "including" and its variants as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.

It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order of the functions performed by these apparatuses, modules or units, or their interdependence.

It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".

The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not used to limit the scope of these messages or information.
In the related art, algorithmic techniques such as video super-resolution can be used to improve the resolution and clarity of video or images, and multi-threaded parallel processing can improve processing efficiency; however, when the computational complexity of the algorithm is high, the processing efficiency still cannot meet requirements.

In order to solve the technical problem in the related art that multimedia processing efficiency cannot meet requirements, an embodiment of the present disclosure provides a multimedia processing method, which is introduced below with reference to specific embodiments.

FIG. 1 is a schematic flowchart of a multimedia processing method provided by an embodiment of the present disclosure. The method may be executed by a multimedia processing apparatus, which may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in FIG. 1, the method is applied to a terminal and includes:

Step 101: acquire first multimedia data, the first multimedia data including at least one of video frame data and an image.

The first multimedia data may be any multimedia data that needs image-quality enhancement, and may include at least one of video frame data and an image; frame data can be understood as the data of video frames obtained by decoding a video. The embodiments of the present disclosure do not limit the file format or source of the first multimedia data; for example, it may be an image captured in real time, or frame data or an image of a video downloaded from the Internet.

In an embodiment of the present disclosure, when the first multimedia data is video frame data and the at least two computing devices include a graphics processing unit, acquiring the first multimedia data includes: decoding the video through the graphics processing unit to obtain multiple texture images, and determining the multiple texture images as frame data.

The graphics processing unit (GPU) may be a microprocessor provided in the terminal for image- and graphics-related computation. A computing device may be a device provided in the terminal for computational processing, and one terminal may be provided with multiple computing devices. In the embodiments of the present disclosure, the computing devices may include, without limitation, the above graphics processing unit, a neural-network processing unit (NPU), a digital signal processor (DSP), an accelerated processing unit (APU), and the like.

Specifically, when the first multimedia data is video frame data, the multimedia processing apparatus may first acquire the video to be processed, then decode the video through the GPU to obtain multiple texture images, determine each texture image as frame data, and store the frame data in the GPU memory. The decoding method is not limited; for example, on a terminal running the Android system, the video can be hard-decoded through the Open Graphics Library (OpenGL) to obtain texture images (Texture). OpenGL may be a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.

Step 102: store the first multimedia data in a shared memory, and divide the first multimedia data into at least two first data blocks.

A first data block may be part of the data included in the first multimedia data, obtained by data blocking; the first multimedia data may include at least two first data blocks, and each first data block has its memory information in the shared memory.

In an embodiment of the present disclosure, dividing the first multimedia data into at least two first data blocks includes: according to the memory information of the first multimedia data, dividing the first multimedia data into blocks by pointer offset to obtain at least two first data blocks, the number of the at least two first data blocks being the same as the number of the at least two computing devices.

In the embodiments of the present disclosure, since the GPU memory is in the shared memory, the first multimedia data is stored in the shared memory. The shared memory may be a memory structure capable of realizing memory sharing; for example, on a terminal running the Android system, the shared memory may be an EGLImage, which may represent a type of shared resource created by an EGL client API (such as the above OpenGL). The memory information of the first multimedia data may be information related to the first multimedia data in the shared memory; in the embodiments of the present disclosure it may include the starting position and the data length in the shared memory. The pointer may be a processing pointer of the central processing unit (CPU) that points to a specific memory position in the shared memory; a pointer offset may be a change in the memory position pointed to, i.e., a change of the processing target.

By way of example, FIG. 2 is a schematic flowchart of another multimedia processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, in an optional implementation, dividing the first multimedia data into blocks by pointer offset according to its memory information to obtain at least two first data blocks may include the following steps:

Step 201: determine the block length corresponding to each computing device.

The block length may characterize the data length of one first data block.

In one implementation, determining the block length corresponding to each computing device may include: in an equal-division manner, determining the block length of each computing device as a unit length, the unit length being the result of dividing the data length of the first multimedia data by the number of the at least two computing devices.

The unit length may be the length obtained by equally dividing the data length of the first multimedia data by the number of computing devices. Specifically, when determining the block length corresponding to each computing device, the multimedia processing apparatus may, in an equal-division manner, determine the block length of every computing device as the unit length. It can be understood that, since the block length corresponding to each computing device is then the same, the offset lengths in the above pointer-offset process are also the same.

In another implementation, determining the block length corresponding to each computing device may include: determining the block length corresponding to each computing device according to the processing speed of each computing device, the processing speed of a computing device being proportional to its block length.

When determining the block length corresponding to each computing device, the multimedia processing apparatus may determine the corresponding block length according to the processing speed of each computing device: the faster the processing speed, the larger the block length, i.e., the processing speed of a computing device is proportional to its block length. It can be understood that, since the block length corresponding to each computing device may then differ, the offset lengths in the above pointer-offset process also differ.

Step 202: starting from the starting position of the first multimedia data, perform pointer offsets according to the block length corresponding to each computing device and extract the data block between two adjacent pointers to obtain at least two first data blocks, until the processing length of the first multimedia data is reached.

Specifically, after determining the block length corresponding to each computing device, the multimedia processing apparatus performs pointer offsets from the starting position of the first multimedia data according to the block length corresponding to each computing device, the offset length being the block length corresponding to each computing device. With each offset, the data block between the memory positions corresponding to two adjacent pointers can be extracted, yielding one first data block whose data length is the above block length. As the pointer is offset multiple times, multiple first data blocks can be obtained, and the process stops when the pointer points to the memory position corresponding to the processing length of the first multimedia data.
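The speed-proportional variant of step 201 (each device's block length proportional to its processing speed) can be sketched as follows in Python. The weighting formula and the policy of giving any rounding remainder to the last device are illustrative assumptions; the disclosure only fixes the proportionality itself.

```python
def speed_weighted_lengths(data_length, speeds):
    """Block length per device, proportional to its processing speed
    (a faster device gets a longer block); any rounding remainder is
    assigned to the last device so the full processing length is covered."""
    total = sum(speeds)
    lengths = [data_length * s // total for s in speeds]
    lengths[-1] += data_length - sum(lengths)
    return lengths
```

Feeding these lengths into the pointer-offset loop of step 202 then yields one first data block per device, sized to its speed.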
Step 103: based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks.

In the embodiments of the present disclosure, every computing device included in the terminal can access the above shared memory; that is, through the low-level system data structures of the terminal, memory sharing can be realized both among the computing devices and between the first multimedia data and the computing devices, which can reduce the performance loss caused by memory copies in subsequent processing. The memory information of each first data block in the shared memory includes the above block length and the block starting position to which the pointer is offset. Each computing device processes one first data block.

In the embodiments of the present disclosure, after dividing the first multimedia data into multiple first data blocks, the multimedia processing apparatus obtains the memory information of each first data block in the shared memory, and may then send the memory information of each first data block to the corresponding computing device, so that every computing device can receive the memory information of one first data block in the shared memory and, based on that memory information, call and process the corresponding first data block in the shared memory to obtain a second data block, thereby obtaining at least two second data blocks.

It can be understood that the algorithm with which each computing device processes its first data block may be any algorithm capable of image-quality enhancement, for example a super-resolution algorithm. A super-resolution algorithm can use algorithmic techniques to increase the resolution of video or images while improving and generating texture details, improving content detail and contrast; for example, it can optimize standard-definition video into high-definition video, comprehensively improving clarity and subjective quality. The computational complexity of super-resolution algorithms is relatively high.

Step 104: splice the at least two second data blocks to obtain second multimedia data, and display the second multimedia data.

In the embodiments of the present disclosure, after the computing devices process their corresponding first data blocks based on the memory information to obtain at least two second data blocks, each second data block can be output to a preset position in the above shared memory. The multimedia processing apparatus can then splice or combine the multiple second data blocks in the shared memory to obtain the second multimedia data, after which the graphics processing unit can fetch the second multimedia data from the shared memory and render and display it on the screen of the terminal. Since the at least two second data blocks are obtained by performing image-quality enhancement on the at least two first data blocks, the image quality of the second multimedia data obtained by splicing them is enhanced compared with the above first multimedia data, thereby improving the user's image-quality experience of the multimedia data.

According to the multimedia processing solution provided by the embodiments of the present disclosure, first multimedia data including at least one of video frame data and an image is acquired; the first multimedia data is stored in a shared memory and divided into at least two first data blocks, each of which has its memory information in the shared memory; based on that memory information, the at least two first data blocks are called and processed respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, each computing device processing one first data block; and the at least two second data blocks are spliced to obtain second multimedia data, which is displayed. With the above technical solution, by dividing the multimedia data into blocks stored in a shared memory, the different data blocks can be processed on different computing devices of the terminal based on their memory information in the shared memory, which makes full use of the computing power of the computing devices on the terminal and effectively improves processing efficiency; moreover, memory sharing among multiple computing devices reduces the time consumed by data extraction and reduces performance loss, thereby improving the user's image-quality experience of the multimedia data.

In some embodiments, the multimedia processing method may further include: implementing memory sharing between the first multimedia data and the at least two computing devices through a multi-device cache object interface, so that every computing device can access the first multimedia data in the shared memory. Optionally, implementing this memory sharing through the multi-device cache object interface may include: creating, through the multi-device cache object interface, the shared memory and the device memory of each computing device in hardware memory, and creating in the shared memory a first memory corresponding to the first multimedia data, so that the first memory and the device memory share memory.

The multi-device cache object interface may be, in the low-level system data structures of the terminal, the implementation interface of a buffer accessible to various computing devices. By way of example, on the Android system this multi-device cache object interface may be AHardwareBuffer. HardwareBuffer is a low-level Android object that can represent a buffer accessible by various computing devices and can be mapped into the memory of those devices; the HardwareBuffer implementation interface provided by the Android system at the native layer is AHardwareBuffer. The embodiments of the present disclosure take the Android system as an example; on other operating systems this can be implemented in other ways.

The hardware memory may be a hardware storage area of the terminal. Specifically, before executing the above step 101, the multimedia processing apparatus may also create the shared memory in the hardware memory of the terminal through the ClientBuffer corresponding to the multi-device cache object interface, create the device memory corresponding to each computing device, and then create in the shared memory the first memory corresponding to the first multimedia data. Since the first memory and the device memory of every computing device are all part of the shared memory, memory sharing between the first memory and the device memory is realized, so that in subsequent processing every computing device can access the first multimedia data in the first memory.

In the above solution, the multiple computing devices in the terminal can reach a state of fully shared, interconnected memory, which reduces the time and performance loss caused by memory copies and thus improves the processing efficiency of the subsequent computing devices.
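The zero-copy effect of this multi-device memory sharing (AHardwareBuffer/EGLImage on Android) can be imitated in plain Python with `multiprocessing.shared_memory`: two handles to one region play the roles of two computing devices' views of the first memory. This is only an analogy for the mechanism, not the Android API; the function name is illustrative.

```python
from multiprocessing import shared_memory

def demo_shared_view():
    """Two views of one buffer stand in for multi-device memory sharing:
    no bytes are copied between the views, and a write through one view
    is immediately visible through the other."""
    shm = shared_memory.SharedMemory(create=True, size=8)
    try:
        shm.buf[:8] = bytes(range(8))
        # A second attachment by name plays the role of another computing
        # device mapping the same physical buffer.
        view = shared_memory.SharedMemory(name=shm.name)
        try:
            same_before = bytes(view.buf[:8]) == bytes(range(8))
            view.buf[0] = 255            # write through one view...
            seen_after = shm.buf[0]      # ...is seen through the other
        finally:
            view.close()
        return same_before, seen_after
    finally:
        shm.close()
        shm.unlink()
```

Calling `demo_shared_view()` returns `(True, 255)`: both views observe the same memory, which is the property the patent relies on to avoid per-device copies.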
In some embodiments, at least one of the at least two computing devices is provided with a heterogeneous computing architecture, which is used to process the corresponding first data block in parallel through multiple threads. Optionally, the multimedia processing method may further include: creating, in the shared memory, a second memory corresponding to the heterogeneous computing architecture of the at least one computing device, so that the second memory shares memory with the first memory and the device memory.

The above heterogeneous computing architecture may be a framework inside a computing device for heterogeneous parallel data processing, and may be configured according to the actual situation; for example, it may be built on the Open Computing Language (OpenCL). OpenCL may be a programming environment for general-purpose parallel programming of heterogeneous systems, i.e., a framework for writing programs for heterogeneous platforms.

In the embodiments of the present disclosure, one or more of the at least two computing devices included in the terminal may be provided with a heterogeneous computing framework for processing the first data block corresponding to the current computing device in parallel through multiple threads, so as to improve processing efficiency. The multimedia processing apparatus may share the first memory corresponding to the first multimedia data in the shared memory with the second memory corresponding to the heterogeneous computing framework through a sharing interface, the sharing interface being used to create that second memory in the shared memory. Since the second memory and the first memory are both part of the shared memory, sharing between them is realized; and since the first memory is created based on the multi-device cache object interface, and the second memory can in turn share memory with each computing device, memory sharing among the second memory, the first memory and the device memory is realized.

In the above solution, by providing a heterogeneous computing framework inside one or more computing devices of the terminal, the data processing efficiency of each computing device can be improved, thereby improving the overall multimedia processing efficiency.
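Inside a single device, the heterogeneous framework's multi-threaded handling of one first data block can be pictured with the sketch below: a thread pool stands in for the OpenCL-style parallelism, and the squaring kernel is a placeholder chosen only so the result is easy to verify.

```python
from concurrent.futures import ThreadPoolExecutor

def process_block_parallel(block, num_threads=4):
    """Inside one simulated device, split its first data block into
    sub-ranges and process them on a thread pool; results are collected
    in order, so the output matches sequential processing."""
    step = max(1, -(-len(block) // num_threads))   # ceiling division
    chunks = [block[i:i + step] for i in range(0, len(block), step)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        parts = pool.map(lambda chunk: [x * x for x in chunk], chunks)
    return [x for part in parts for x in part]
```

Because `ThreadPoolExecutor.map` preserves input order, the intra-device parallelism is invisible in the spliced output, mirroring how the per-device parallelism is invisible at the frame level.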
In some embodiments, the multimedia processing method may further include: setting the same image processing algorithm in each computing device, so that every computing device processes its corresponding first data block with that image processing algorithm.

Specifically, before executing the above step 101, the multimedia processing apparatus may also set the same image processing algorithm in every computing device included in the terminal, so that subsequently every computing device processes its corresponding first data block with the same image processing algorithm.

Next, the multimedia processing method of the embodiments of the present disclosure is further described through a specific example. FIG. 3 is a schematic diagram of multimedia processing provided by an embodiment of the present disclosure. As shown in FIG. 3, taking the Android system and video frame data as the first multimedia data as an example, with the computing devices including a GPU and an NPU and an OpenCL heterogeneous computing framework provided in the GPU, the multimedia processing procedure may include: the terminal acquires a video from a video source and decodes it through the GPU to obtain OpenGL graphics textures, i.e., multiple texture images serving as the frame data of the video; the frame data is stored in the shared memory, and memory sharing between the low-level system shared memory and the memories of the above GPU, CPU, NPU and OpenCL heterogeneous computing framework is realized in advance; each frame of the video can be divided into blocks, and since the computing devices included in the terminal are the GPU and the NPU, each frame can be divided into two first data blocks; the two first data blocks are processed by the super-resolution algorithm on the GPU and the other device (i.e., the NPU) respectively to obtain two second data blocks, which are both output as processing results to a designated output memory block in the shared memory; the two second data blocks in the output memory block can then be combined to obtain the second multimedia data, i.e., the processed OpenGL graphics texture in the figure; the CPU can then read the OpenGL graphics texture and display it on screen, i.e., display the second multimedia data on the display screen.

The above video frame data may be obtained by hard decoding through the GPU and serves as the input data for subsequent algorithm processing; it can be understood that the frame data may or may not be combined with pre-processing, which may be determined according to the actual situation. It can also be understood that, while the above processing algorithm takes super-resolution as an example, other processing algorithms are also applicable. This solution is strongly related to the low-level structure of the system; the above takes the Android system as an example, and implementations may differ on different systems.

In the above multimedia processing procedure, the input data may be obtained by system hard-decoding of the video, with a resolution equal to that of the processed video; the decoded data can be accessed by the GPU and, through memory sharing, by other computing devices such as the DSP and NPU; the data can then be divided into blocks, with different data blocks processed on different computing devices, and the output data is finally sent to the GPU for on-screen rendering and display. The resolution of the output data is higher than that of the original video; for example, it may be twice the resolution of the original video.

By means of heterogeneous parallelism, this solution makes full use of the computing power of the multiple computing devices included in the terminal to complete real-time multimedia processing on the terminal. Its advantages lie in multi-device memory sharing and multi-device parallel computing, which can effectively improve algorithm running efficiency and reduce performance loss.
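The "output resolution twice the input" behaviour mentioned above can be checked with a toy one-dimensional stand-in: each block is upscaled independently and the spliced output has twice as many samples as the input. Nearest-neighbour replication replaces the real super-resolution model purely so the example is verifiable; all names here are illustrative.

```python
def upscale2x(row):
    # Nearest-neighbour 2x upscale of one row of samples (a stand-in for
    # a super-resolution algorithm, which would be learned, not replication).
    return [s for x in row for s in (x, x)]

def upscale_in_blocks(row, num_devices):
    """Divide a row equally among simulated devices, upscale each block
    independently, then splice the results in block order."""
    unit = len(row) // num_devices
    blocks = [row[i * unit:(i + 1) * unit] for i in range(num_devices - 1)]
    blocks.append(row[(num_devices - 1) * unit:])   # last block takes the remainder
    return [s for b in blocks for s in upscale2x(b)]

row = [1, 2, 3, 4, 5]
out = upscale_in_blocks(row, 2)
print(len(out) == 2 * len(row))
```

The spliced output is twice the input length regardless of how the row was divided, matching the 2x-resolution example given above for the end-to-end pipeline.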
FIG. 4 is a schematic structural diagram of a multimedia processing apparatus provided by an embodiment of the present disclosure. The apparatus may be implemented in software and/or hardware and may generally be integrated in an electronic device. As shown in FIG. 4, the apparatus is arranged on a terminal and includes:

a data acquisition module 401 configured to acquire first multimedia data, the first multimedia data including at least one of video frame data and an image;

a data division module 402 configured to store the first multimedia data in a shared memory and divide the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory;

a data processing module 403 configured to, based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;

a data display module 404 configured to splice the at least two second data blocks to obtain second multimedia data and display the second multimedia data.

Optionally, when the first multimedia data is video frame data and the at least two computing devices include a graphics processing unit, the data acquisition module 401 is specifically configured to:

decode the video through the graphics processing unit to obtain multiple texture images, and determine the multiple texture images as frame data.

Optionally, the data division module 402 is configured to:

according to the memory information of the first multimedia data, divide the first multimedia data into blocks by pointer offset to obtain at least two first data blocks, the number of the at least two first data blocks being the same as the number of the at least two computing devices.

Optionally, the memory information of the first multimedia data in the shared memory includes the starting position and the data length in the shared memory.

Optionally, the data division module 402 includes:

a block length unit configured to determine the block length corresponding to each computing device;

a pointer unit configured to start from the starting position of the first multimedia data, perform pointer offsets according to the block length corresponding to each computing device, and extract the data block between two adjacent pointers to obtain at least two first data blocks, until the processing length of the first multimedia data is reached.

Optionally, the block length unit is configured to:

in an equal-division manner, determine the block length of each computing device as a unit length, the unit length being the result of dividing the data length of the first multimedia data by the number of the at least two computing devices.

Optionally, the block length unit is configured to:

determine the block length corresponding to each computing device according to the processing speed of each computing device, the processing speed of a computing device being proportional to its block length.

Optionally, the memory information of each first data block in the shared memory includes the block length and the block starting position to which the pointer is offset.

Optionally, the apparatus further includes a memory sharing module configured to:

implement memory sharing between the first multimedia data and the at least two computing devices through a multi-device cache object interface, so that every computing device can access the first multimedia data in the shared memory.

Optionally, the memory sharing module is configured to:

create, through the multi-device cache object interface, the shared memory and the device memory of each computing device in hardware memory, and create in the shared memory a first memory corresponding to the first multimedia data, so that the first memory and the device memory share memory.

Optionally, the apparatus further includes an algorithm setting module configured to:

set the same image processing algorithm in each computing device, so that every computing device processes its corresponding first data block with that image processing algorithm.

Optionally, at least one of the at least two computing devices is provided with a heterogeneous computing architecture, the heterogeneous computing architecture being used to process the corresponding first data block in parallel through multiple threads.

Optionally, the apparatus further includes a heterogeneous memory module configured to:

create, in the shared memory, a second memory corresponding to the heterogeneous computing architecture of the at least one computing device, so that the second memory shares memory with the first memory and the device memory.
The multimedia processing apparatus provided by the embodiments of the present disclosure can execute the multimedia processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the executed method.

An embodiment of the present disclosure further provides a computer program product, including a computer program/instructions which, when executed by a processor, implement the multimedia processing method provided by any embodiment of the present disclosure.

FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 5, it shows an electronic device 500 suitable for implementing an embodiment of the present disclosure. The electronic device 500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as car navigation terminals), and stationary terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 5, the electronic device 500 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 501, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored. The processing device 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

Generally, the following devices can be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 508 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 500 having various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.

In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above functions defined in the multimedia processing method of the embodiments of the present disclosure are executed.

It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained in a computer-readable medium may be transmitted with any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.

In some implementations, the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.

The above computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device.

The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire first multimedia data, the first multimedia data including at least one of video frame data and an image; store the first multimedia data in a shared memory and divide the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory; based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block; and splice the at least two second data blocks to obtain second multimedia data and display the second multimedia data.

Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment or part of code containing one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system performing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the embodiments described in the present disclosure may be implemented in software or in hardware, and the name of a unit does not in some cases constitute a limitation of the unit itself.

The functions described herein above may be performed, at least in part, by one or more hardware logic components, for example, without limitation: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.

In the context of the present disclosure, a machine-readable medium may be a tangible medium containing or storing a program for use by or in combination with an instruction execution system, apparatus or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.

The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.

In addition, although the operations are depicted in a specific order, this should not be understood as requiring these operations to be executed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.

Although the subject matter has been described in language specific to structural features and/or logical actions of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.

Claims (17)

  1. A multimedia processing method applied to a terminal, comprising:
    acquiring first multimedia data, the first multimedia data comprising at least one of video frame data and an image;
    storing the first multimedia data in a shared memory, and dividing the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory;
    based on the memory information of the at least two first data blocks in the shared memory, calling and processing the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;
    splicing the at least two second data blocks to obtain second multimedia data, and displaying the second multimedia data.
  2. The method according to claim 1, wherein, when the first multimedia data is video frame data and the at least two computing devices include a graphics processing unit, acquiring the first multimedia data comprises:
    decoding the video through the graphics processing unit to obtain multiple texture images, and determining the multiple texture images as frame data.
  3. The method according to claim 1, wherein dividing the first multimedia data into at least two first data blocks comprises:
    according to the memory information of the first multimedia data, dividing the first multimedia data into blocks by pointer offset to obtain at least two first data blocks, the number of the at least two first data blocks being the same as the number of the at least two computing devices.
  4. The method according to claim 3, wherein the memory information of the first multimedia data includes the starting position and the data length in the shared memory.
  5. The method according to claim 4, wherein dividing the first multimedia data into blocks by pointer offset according to its memory information to obtain at least two first data blocks comprises:
    determining the block length corresponding to each computing device;
    starting from the starting position of the first multimedia data, performing pointer offsets according to the block length corresponding to each computing device and extracting the data block between two adjacent pointers to obtain at least two first data blocks, until the processing length of the first multimedia data is reached.
  6. The method according to claim 5, wherein determining the block length corresponding to each computing device comprises:
    in an equal-division manner, determining the block length of each computing device as a unit length, the unit length being the result of dividing the data length of the first multimedia data by the number of the at least two computing devices.
  7. The method according to claim 5, wherein determining the block length corresponding to each computing device comprises:
    determining the block length corresponding to each computing device according to the processing speed of each computing device, the processing speed of a computing device being proportional to the block length.
  8. The method according to claim 5, wherein the memory information of each first data block in the shared memory includes the block length and the block starting position to which the pointer is offset.
  9. The method according to claim 1, wherein the method further comprises:
    implementing memory sharing between the first multimedia data and the at least two computing devices through a multi-device cache object interface, so that each computing device can access the first multimedia data in the shared memory.
  10. The method according to claim 9, wherein implementing memory sharing between the first multimedia data and the at least two computing devices through the multi-device cache object interface comprises:
    creating, through the multi-device cache object interface, the shared memory and the device memory of each computing device in hardware memory, and creating in the shared memory a first memory corresponding to the first multimedia data, so that the first memory and the device memory share memory.
  11. The method according to claim 1, wherein the method further comprises:
    setting the same image processing algorithm in each computing device, so that each computing device processes its corresponding first data block with the image processing algorithm.
  12. The method according to claim 10, wherein at least one of the at least two computing devices is provided with a heterogeneous computing architecture, the heterogeneous computing architecture being used to process the corresponding first data block in parallel through multiple threads.
  13. The method according to claim 12, wherein the method further comprises:
    creating, in the shared memory, a second memory corresponding to the heterogeneous computing architecture of the at least one computing device, so that the second memory shares memory with the first memory and the device memory.
  14. A multimedia processing apparatus arranged on a terminal, comprising:
    a data acquisition module configured to acquire first multimedia data, the first multimedia data comprising at least one of video frame data and an image;
    a data division module configured to store the first multimedia data in a shared memory and divide the first multimedia data into at least two first data blocks, wherein each first data block has its memory information in the shared memory;
    a data processing module configured to, based on the memory information of the at least two first data blocks in the shared memory, call and process the at least two first data blocks respectively through at least two computing devices included in the terminal to obtain at least two second data blocks, wherein each computing device processes one first data block;
    a data display module configured to splice the at least two second data blocks to obtain second multimedia data and display the second multimedia data.
  15. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor being configured to read the executable instructions from the memory and execute the instructions to implement the multimedia processing method according to any one of claims 1-13.
  16. A computer-readable storage medium storing a computer program for executing the multimedia processing method according to any one of claims 1-13.
  17. A computer program product which, when executed by a computer, causes the computer to implement the multimedia processing method according to any one of claims 1-13.
PCT/CN2022/129137 2021-11-24 2022-11-02 一种多媒体处理方法、装置、设备及介质 WO2023093474A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111406380.5 2021-11-24
CN202111406380.5A CN116170634A (zh) 2021-11-24 2021-11-24 一种多媒体处理方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2023093474A1 true WO2023093474A1 (zh) 2023-06-01

Family

ID=86413680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129137 WO2023093474A1 (zh) 2021-11-24 2022-11-02 一种多媒体处理方法、装置、设备及介质

Country Status (2)

Country Link
CN (1) CN116170634A (zh)
WO (1) WO2023093474A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169361A1 (en) * 2013-12-12 2015-06-18 International Business Machines Corporation Dynamic predictor for coalescing memory transactions
CN106797388A (zh) * 2016-12-29 2017-05-31 深圳前海达闼云端智能科技有限公司 跨系统多媒体数据编解码方法、装置、电子设备和计算机程序产品
CN107680030A (zh) * 2017-09-21 2018-02-09 中国科学院半导体研究所 一种图像处理器及处理方法
CN107945098A (zh) * 2017-11-24 2018-04-20 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备和存储介质
CN112385225A (zh) * 2019-09-02 2021-02-19 北京航迹科技有限公司 用于改进图像编码的方法和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150169361A1 (en) * 2013-12-12 2015-06-18 International Business Machines Corporation Dynamic predictor for coalescing memory transactions
CN106797388A (zh) * 2016-12-29 2017-05-31 深圳前海达闼云端智能科技有限公司 跨系统多媒体数据编解码方法、装置、电子设备和计算机程序产品
CN107680030A (zh) * 2017-09-21 2018-02-09 中国科学院半导体研究所 一种图像处理器及处理方法
CN107945098A (zh) * 2017-11-24 2018-04-20 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机设备和存储介质
CN112385225A (zh) * 2019-09-02 2021-02-19 北京航迹科技有限公司 用于改进图像编码的方法和系统

Also Published As

Publication number Publication date
CN116170634A (zh) 2023-05-26

Similar Documents

Publication Publication Date Title
EP3876161A1 (en) Method and apparatus for training deep learning model
CN108206937B (zh) 一种提升智能分析性能的方法和装置
CN114089920B (zh) 数据存储方法、装置、可读介质及电子设备
CN110728622B (zh) 鱼眼图像处理方法、装置、电子设备及计算机可读介质
CN111598902B (zh) 图像分割方法、装置、电子设备及计算机可读介质
CN113886019B (zh) 虚拟机创建方法、装置、系统、介质和设备
US20240177374A1 (en) Video processing method, apparatus and device
CN111259636B (zh) 文档渲染方法、装置和电子设备
CN109426473B (zh) 无线可编程媒体处理系统
CN114066722B (zh) 用于获取图像的方法、装置和电子设备
CN113535105B (zh) 媒体文件处理方法、装置、设备、可读存储介质及产品
CN112258622B (zh) 图像处理方法、装置、可读介质及电子设备
CN111199569A (zh) 数据处理的方法、装置、电子设备及计算机可读介质
WO2023093474A1 (zh) 一种多媒体处理方法、装置、设备及介质
CN109672931B (zh) 用于处理视频帧的方法和装置
KR20210042992A (ko) 딥러닝 모델을 트레이닝하는 방법 및 장치
CN113837918B (zh) 多进程实现渲染隔离的方法及装置
CN114647472B (zh) 图片处理方法、装置、设备、存储介质和程序产品
US12020347B2 (en) Method and apparatus for text effect processing
CN111984890B (zh) 一种显示信息生成的方法、装置、介质和电子设备
CN114153620B (zh) Hudi运行环境资源优化分配方法及装置
CN118286682A (zh) 图像处理方法、装置、终端和存储介质
WO2021018176A1 (zh) 文字特效处理方法及装置
CN117788669A (zh) 图像处理方法、装置、终端和存储介质
CN116977468A (zh) 图像帧的绘制处理方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22897557

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE