CN115550709B - Data processing method and electronic equipment - Google Patents

Data processing method and electronic equipment

Info

Publication number
CN115550709B (application number CN202210019453.3A)
Authority
CN
China
Prior art keywords
video data
frame
buffer
electronic device
time
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN202210019453.3A
Other languages
Chinese (zh)
Other versions
CN115550709A
Inventor
李鹏飞
姚远
李玉
许嘉
Current Assignee (listed assignees may be inaccurate)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210019453.3A
Publication of CN115550709A
Application granted
Publication of CN115550709B
Legal status: Active


Classifications

    • H04N 21/4305: Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • H04N 21/44004: Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N 5/04: Synchronising (details of television systems)


Abstract

The application provides a data processing method and an electronic device. The data processing method comprises the following steps: a first video data frame is read from an output buffer of a decoding module of the electronic device. Then, a second video data frame that was written to a first buffer before the first video data frame is determined, the second video data frame being separated from the first video data frame by a first number of video data frames; the first buffer is used to store video data frames to be sent to a display system of the electronic device. Next, a first time difference is determined from a first time at which the first video data frame is to be written to the first buffer and a second time at which the second video data frame was written to the first buffer. When the first time difference is less than or equal to the current vertical synchronization period of the display system, the first video data frame is discarded. In this way, by controlling the number of to-be-displayed video data frames buffered within one Vsync period, the time frames spend waiting for display can be reduced, thereby reducing the display delay.

Description

Data processing method and electronic equipment
Technical Field
The present application relates to the field of terminal devices, and in particular, to a data processing method and an electronic device.
Background
Currently, in scenarios where video data is transmitted between two electronic devices over a network and the content corresponding to the video data is displayed by the receiving-end electronic device, the receiving end receives the video data, decodes it, and sends it directly for display. However, the display delay of some video data frames is very large, and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides a data processing method and electronic equipment, which are used for reducing the display time delay of video data frames.
In some scenarios, due to the instability of network transmission, the timing with which the receiving-end electronic device decodes and sends video data is unstable. The SurfaceFlinger of the receiving-end electronic device refreshes the screen according to the Vsync period (i.e., the vertical synchronization period), so each decoded video data frame must further wait, in order, for a Vsync signal before it is displayed by SurfaceFlinger. This causes a significant display delay for some video data frames.
In a first aspect, an embodiment of the present application provides a data processing method applied to an electronic device. The method comprises: reading a first video data frame from an output buffer of a decoding module of the electronic device, where the output buffer stores video data frames decoded by the decoding module from encoded data sent by a source-end electronic device. Then, a second video data frame that was written to a first buffer before the first video data frame is determined, the second video data frame being separated from the first video data frame by a first number of video data frames; the first buffer stores video data frames to be sent to a display system of the electronic device. Next, a first time difference is determined from a first time at which the first video data frame is to be written to the first buffer and a second time at which the second video data frame was written to the first buffer. When the first time difference is less than or equal to the current vertical synchronization period of the display system, the first video data frame is discarded. Thus, if the number of to-be-displayed video data frames buffered within one Vsync period reaches a certain number (equal to the first number plus 1), the electronic device discards the frame, so that the number of to-be-displayed frames buffered within one Vsync period does not exceed the first number. By controlling the number of to-be-displayed video data frames buffered within a Vsync period, the time frames spend waiting for display can be reduced, thereby reducing the display delay.
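The admission rule described in the first aspect can be sketched as follows. This is a minimal illustration in Python, not the patent's implementation; the class name, parameter names, and the use of a deque are assumptions made for the sketch.

```python
from collections import deque

class FrameGate:
    """Limits the number of to-be-displayed frames buffered per Vsync period.

    Illustrative sketch of the first-aspect logic: a candidate frame is
    discarded when its write time is within one Vsync period of the frame
    written (first_number + 1) positions earlier.
    """

    def __init__(self, first_number, vsync_period_ms):
        self.first_number = first_number
        self.vsync_period_ms = vsync_period_ms
        # Write times of the most recently admitted frames; the oldest entry
        # is separated from a new candidate by `first_number` frames.
        self.write_times = deque(maxlen=first_number + 1)

    def admit(self, now_ms):
        """Return True to write the candidate frame, False to discard it."""
        if len(self.write_times) <= self.first_number:
            # Warm-up: fewer than (first_number + 1) frames written so far.
            self.write_times.append(now_ms)
            return True
        # "Second time": write time of the frame first_number positions back.
        second_time = self.write_times[0]
        if now_ms - second_time <= self.vsync_period_ms:
            return False  # first time difference <= Vsync period: discard
        self.write_times.append(now_ms)
        return True
```

With a 16.6 ms period and a first number of 2, frames arriving every 5 ms are admitted until three fall inside one Vsync period, after which every frame that would exceed the cap is dropped.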
In an implementation manner of the first aspect, the data processing method may further include: reading a third video data frame from the output buffer of the decoding module of the electronic device, and determining a fourth video data frame that was written to the first buffer before the third video data frame, the fourth video data frame being separated from the third video data frame by the first number of video data frames. Then, a second time difference is determined from a third time at which the third video data frame is to be written to the first buffer and a fourth time at which the fourth video data frame was written to the first buffer; when the second time difference is greater than the current vertical synchronization period of the display system, the third video data frame is written to the first buffer. Thus, if the time difference between the time a video data frame is written to the first buffer and the time the (first number + 1)-th video data frame before it was written exceeds one Vsync period, writing the frame keeps the number of to-be-displayed frames buffered within one Vsync period from exceeding the first number, and the frame can therefore be written to the first buffer for subsequent transmission to the display system for display.
In one implementation manner, when the second time difference is greater than the current vertical synchronization period of the display system, after the third video data frame is written to the first buffer, the method further includes: recording the time at which the third video data frame was written to the first buffer. In this way, whether the (first number + 1)-th video data frame after the third video data frame may be written to the first buffer can be determined from the write time of the third video data frame, ensuring that the number of to-be-displayed frames buffered within one Vsync period does not exceed the first number and reducing the display delay.
In an implementation manner of the first aspect, the method further includes: when the time difference between the current time and the time at which a video data frame was last sent to the display system equals the current vertical synchronization period, reading the target video data frame with the earliest write time from the first buffer and sending it to the display system. In this way, video data frames are sent to the display system once per vertical synchronization period, consistent with the screen refresh rate of the display system, which reduces the demand on the display system's buffer capacity and reduces the number of frames the display system drops.
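The Vsync-paced send step above can be sketched as a small loop. All names here are illustrative assumptions: the first buffer is modeled as a heap of (write_time, frame) pairs so that the earliest-written frame is popped first, and `send_to_display` stands in for the display system.

```python
import heapq
import time

def vsync_send_loop(first_buffer, send_to_display, vsync_period_s,
                    clock=time.monotonic, sleep=time.sleep):
    """Once per vertical synchronization period, pop the frame with the
    earliest write time from `first_buffer` and hand it to the display
    system. A sketch, not the patent's implementation."""
    last_send = clock()
    while first_buffer:
        elapsed = clock() - last_send
        if elapsed < vsync_period_s:
            sleep(vsync_period_s - elapsed)  # wait for the next period
            continue
        _, frame = heapq.heappop(first_buffer)  # earliest write time first
        send_to_display(frame)
        last_send += vsync_period_s  # keep the cadence locked to Vsync
```

Advancing `last_send` by exactly one period each time keeps the send cadence locked to the refresh rate rather than drifting with scheduling jitter.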
In one implementation manner of the first aspect, after the target video data frame is sent to the display system, the method further includes: deleting the target video data frame from the first buffer. In this way, frames that have already been sent for display are removed from the first buffer, so its storage resources are released promptly and the first buffer has enough space to store video data frames subsequently decoded by the decoding module.
In an implementation manner, before the first video data frame is read from the output buffer of the decoding module of the electronic device, the method further includes: reading a fifth video data frame from the output buffer of the decoding module of the electronic device, and writing the fifth video data frame to the first buffer when the number of video data frames already written to the first buffer is less than or equal to the first number. At the initial stage of transmission from the source end to the receiving end, the first buffer is empty and no frames need to be discarded; only after the frames in the first buffer reach the first number is it determined whether subsequent frames are written to the first buffer.
In an implementation manner, before the fifth video data frame is read from the output buffer of the decoding module of the electronic device, the method further includes: receiving encoded data sent by the source-end electronic device, where the encoded data is obtained by encoding video data frames; writing the encoded data to an input buffer of the decoding module so that the decoding module decodes the encoded data to obtain decoded video data frames; and storing the decoded video data frames in the output buffer of the decoding module. In this way, the receiving-end electronic device can continuously receive and decode the encoded data sent by the source-end electronic device.
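The receive-and-decode path just described can be sketched as a simple two-buffer pipeline. This is a schematic stand-in, not the device's decoder: `decode` represents the hardware decoding module, and the buffer names are assumptions.

```python
from collections import deque

def receive_and_decode(encoded_packets, decode):
    """Sketch of the receive path: encoded data is written to the decoding
    module's input buffer; decoded frames accumulate in its output buffer."""
    input_buffer = deque()
    output_buffer = deque()
    for packet in encoded_packets:   # encoded data from the source end
        input_buffer.append(packet)  # write to the decoder's input buffer
    while input_buffer:
        frame = decode(input_buffer.popleft())
        output_buffer.append(frame)  # store the decoded frame
    return output_buffer
```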
In one implementation, when the first time difference is less than or equal to the current vertical synchronization period of the display system, discarding the first video data frame further includes: deleting the first video data frame from the output buffer. In this way, discarded frames are removed from the output buffer, so its storage resources are released promptly and decoded frames that have not yet been sent are not overwritten by later-decoded frames because the output buffer's remaining storage space is insufficient.
In one implementation manner, when the second time difference is greater than the current vertical synchronization period of the display system, after the third video data frame is written to the first buffer, the method further includes: deleting the third video data frame from the output buffer. In this way, frames written to the first buffer are removed from the output buffer, so its storage resources are released promptly and decoded frames that have not yet been sent are not overwritten by later-decoded frames because the output buffer's storage space is insufficient.
In one implementation, the first number is less than or equal to the capacity of the first buffer. In this way, the first buffer is guaranteed to be able to store at least the first number of video data frames, ensuring a certain display quality.
In a second aspect, the present application provides an electronic device comprising a memory and a processor coupled to the memory, the memory storing program instructions that, when executed by the processor, cause the electronic device to perform the following steps: reading a first video data frame from an output buffer of a decoding module of the electronic device, where the output buffer stores video data frames decoded by the decoding module from encoded data sent by a source-end electronic device; determining a second video data frame that was written to a first buffer before the first video data frame, the second video data frame being separated from the first video data frame by a first number of video data frames, where the first buffer stores video data frames to be sent to a display system of the electronic device; determining a first time difference from a first time at which the first video data frame is to be written to the first buffer and a second time at which the second video data frame was written to the first buffer; and discarding the first video data frame when the first time difference is less than or equal to the current vertical synchronization period of the display system.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: reading a third video data frame from the output buffer of the decoding module of the electronic device; determining a fourth video data frame that was written to the first buffer before the third video data frame, the fourth video data frame being separated from the third video data frame by the first number of video data frames; determining a second time difference from a third time at which the third video data frame is to be written to the first buffer and a fourth time at which the fourth video data frame was written to the first buffer; and writing the third video data frame to the first buffer when the second time difference is greater than the current vertical synchronization period of the display system. Thus, if the time difference between the time a video data frame is written to the first buffer and the time the (first number + 1)-th video data frame before it was written exceeds one Vsync period, the number of to-be-displayed frames buffered within one Vsync period does not exceed the first number, and the frame can be written to the first buffer for subsequent transmission to the display system for display.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following step: recording the time at which the third video data frame was written to the first buffer. In this way, whether the (first number + 1)-th video data frame after the third video data frame may be written to the first buffer can be determined from the write time of the third video data frame, ensuring that the number of to-be-displayed frames buffered within one Vsync period does not exceed the first number and reducing the display delay.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following step: when the time difference between the current time and the time at which a video data frame was last sent to the display system equals the current vertical synchronization period, reading the target video data frame with the earliest write time from the first buffer and sending it to the display system. In this way, video data frames are sent to the display system once per vertical synchronization period, consistent with the screen refresh rate of the display system, which reduces the demand on the display system's buffer capacity and reduces the number of frames the display system drops.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following step: deleting the target video data frame from the first buffer. In this way, frames that have already been sent for display are removed from the first buffer, so its storage resources are released promptly and the first buffer has enough space to store video data frames subsequently decoded by the decoding module.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following steps: reading a fifth video data frame from the output buffer of the decoding module of the electronic device, and writing the fifth video data frame to the first buffer when the number of video data frames already written to the first buffer is less than or equal to the first number. At the initial stage of transmission from the source end to the receiving end, the first buffer is empty and no frames need to be discarded; only after the frames in the first buffer reach the first number is it determined whether subsequent frames are written to the first buffer.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following steps: receiving encoded data sent by the source-end electronic device, where the encoded data is obtained by encoding video data frames; writing the encoded data to an input buffer of the decoding module so that the decoding module decodes the encoded data to obtain decoded video data frames; and storing the decoded video data frames in the output buffer of the decoding module. In this way, the receiving-end electronic device can continuously receive and decode the encoded data sent by the source-end electronic device.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following step: deleting the first video data frame from the output buffer. In this way, discarded frames are removed from the output buffer, so its storage resources are released promptly and decoded frames that have not yet been sent are not overwritten by later-decoded frames because the output buffer's remaining storage space is insufficient.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, further cause the electronic device to perform the following step: deleting the third video data frame from the output buffer. In this way, frames written to the first buffer are removed from the output buffer, so its storage resources are released promptly and decoded frames that have not yet been sent are not overwritten by later-decoded frames because the output buffer's storage space is insufficient.
According to the second aspect, in one implementation, the first number is less than or equal to the capacity of the first buffer. In this way, the first buffer is guaranteed to be able to store at least the first number of video data frames, ensuring a certain display quality.
In a third aspect, the present application provides a computer-readable storage medium comprising a computer program which, when run on an electronic device, causes the electronic device to perform the data processing method of any one of the implementations of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an exemplary electronic device 100;
Fig. 2 is a software architecture block diagram of the electronic device 100 according to an exemplary embodiment of the present application;
Fig. 3 is a schematic view of an application scenario of a data processing method according to an exemplary embodiment of the present application;
Fig. 4 is a schematic diagram illustrating a data processing procedure in the application scenario shown in Fig. 3;
Fig. 5 is a schematic diagram illustrating a process of transferring data in a data processing method according to an embodiment of the present application;
Fig. 6 is a timing diagram during the processing of an exemplary video frame.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone.
The terms "first", "second", and the like in the description and claims of the embodiments of the application are used to distinguish between different objects, not necessarily to describe a particular order of objects. For example, a first target object and a second target object are used to distinguish between different target objects, not to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, "a plurality" means two or more. For example, a plurality of processing units refers to two or more processing units; a plurality of systems refers to two or more systems.
Herein, the display delay refers to the difference between the time when a video data frame is displayed on the receiving-end electronic device and the time when it is displayed on the source-end electronic device.
Herein, the Vsync period, i.e., the vertical synchronization period, may also be referred to as the screen refresh period. The Vsync period is the reciprocal of the screen refresh rate. For example, when the screen refresh rate is 60 Hz, the Vsync period is about 16.6 ms (milliseconds); when the screen refresh rate is 120 Hz, the Vsync period is about 8.3 ms.
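The reciprocal relationship can be checked directly; note that the 16.6 ms figure quoted above truncates 1000/60, which is closer to 16.67 ms:

```python
def vsync_period_ms(refresh_hz):
    """The Vsync period is the reciprocal of the screen refresh rate."""
    return 1000.0 / refresh_hz

# 60 Hz gives about 16.6 ms; 120 Hz gives about 8.3 ms.
```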
The data processing method of the embodiment of the application can be applied to a screen-casting scenario. In this scenario, the first electronic device acts as the source end: after encoding video data, it transmits the data to the second electronic device over a network connection such as Wi-Fi or Bluetooth, and the second electronic device acts as the receiving end. After the second electronic device decodes the video data, it sends the decoded video data to the display system for display ("sending for display" for short).
In the conventional scheme, video data is sent for display immediately after being decoded. The SurfaceFlinger of the receiving-end electronic device displays video data frame by frame according to the Vsync period. The video frames to be displayed all wait in SurfaceFlinger's buffer, and the longer a video frame waits, the larger its display delay.
For example, assuming 5 video frames are waiting for display in the buffer: the 1st video frame waits 1 Vsync period to be displayed, the 2nd waits 2 Vsync periods, the 3rd waits 3, the 4th waits 4, and the 5th waits 5. The later a video frame is ordered, the longer its latency and the greater its display delay. If many video frames are waiting, the display delay of the later-ordered frames becomes large and the user experience is poor.
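The example above can be made concrete at a 60 Hz refresh rate. The figures below are illustrative arithmetic, not measurements from the patent:

```python
# Wait time of each queued frame at a 60 Hz refresh rate (illustrative).
VSYNC_MS = 16.6          # one Vsync period, as in the example above
queued_frames = 5
wait_ms = [(i + 1) * VSYNC_MS for i in range(queued_frames)]
# The i-th queued frame waits (i + 1) Vsync periods; the 5th frame
# already waits about 83 ms on top of network and decoding delay.
```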
In one example, the first electronic device may be, for example, a PC (personal computer) running a Windows system. The second electronic device may be a mobile phone, tablet, or the like running an Android system.
In another example, the first electronic device may be, for example, a mobile phone, tablet, or the like running an Android system, and the second electronic device may be a PC running a Windows system.
In yet another example, the first electronic device may be, for example, a mobile phone, tablet, or the like running an Android system, and the second electronic device may also be a mobile phone, tablet, or the like running an Android system.
The embodiment of the application provides a data processing method that can reduce the display delay of the second electronic device and improve the user experience.
The data processing method of the embodiment of the application can be applied to the second electronic equipment. The structure of the second electronic device may be as shown in fig. 1.
Fig. 1 is a schematic diagram of an exemplary electronic device 100. It should be understood that the electronic device 100 shown in Fig. 1 is only one example of an electronic device; the electronic device 100 may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in Fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
Referring to fig. 1, an electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the application takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
Fig. 2 is a software structural block diagram of the electronic device 100 of the exemplary embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer, an application framework layer, a kernel layer, and the like.
The application layer may include a series of application packages.
As shown in Fig. 2, the application package of the application layer of the electronic device 100 may include modules such as service, service management, service settings, transmission, decoding logic, and display modules.
The service module is used to realize specific service functions. The service management module is used to realize the management functions corresponding to the service module. The service settings module is used to realize the setting functions corresponding to the service module.
For example, the service module may be a wireless screen-casting module.
The decoding logic module is used for executing the data processing method. For specific functionality of the decode logic module, please refer to the detailed description of the embodiments described later herein.
The display module is used for displaying the video data.
As shown in fig. 2, the application framework layer may include modules such as basic capabilities, video codec, socket, and the like.
The basic capability module is used for providing various APIs (application programming interfaces) that can be used when constructing applications.
The video codec module is used for realizing the encoding and decoding functions of video data. The video codec module may include an encoding module and a decoding module: the encoding module is used for encoding video data, and the decoding module is used for decoding video data. The video codec module may encode and decode video data in hardware. The encoding logic module of the application layer can call the encoding module in the video codec module to encode, and the decoding logic module of the application layer can call the decoding module in the video codec module to decode.
A socket is an abstraction of an endpoint for bidirectional communication between application processes on different hosts in a network. A socket is one end of inter-process communication over the network, providing a mechanism for application-layer processes to exchange data using network protocols. In terms of position, the socket connects upward to the application process and downward to the network protocol stack; it is thus the interface through which an application communicates over a network protocol, and the interface through which the application interacts with the network protocol stack.
The application framework layer may also include, among other things, SurfaceFlinger (not shown in fig. 2).
The kernel layer is a layer between hardware and software.
As shown in FIG. 2, the kernel layer may include sensor drivers, wi-Fi drivers, USB drivers, and the like.
It will be appreciated that the layers shown in the software structure of fig. 2, and the components contained in each layer, do not constitute a specific limitation on the electronic device 100. In other embodiments of the application, the electronic device 100 may include more or fewer layers than shown, and each layer may include more or fewer components, which is not limited in the application.
Fig. 3 is a schematic view of an application scenario of the data processing method according to the embodiment of the present application. Referring to fig. 3, the application scenario of the present embodiment includes a source end and a receiving end, both of which are electronic devices. The software architecture of the receiving end may be as shown in fig. 2, and will not be described again here. The software architecture of the source end may include an application layer, an application framework layer, and a kernel layer. The application layer of the source end may include a service module, a service management module, a service setting module, a transmission module, a screen capturing module, an encoding logic module, and the like. The application framework layer of the source end may be the same as that of the receiving end, and the kernel layer of the source end may be the same as that of the receiving end; refer to the related description of the software structure shown in fig. 2, which will not be repeated here.
Fig. 4 is a schematic diagram illustrating a data processing procedure in the application scenario shown in fig. 3. Referring to fig. 4, in this embodiment, the data processing process may include the following steps:
S1, after a service of the source end is started, the service module of the source end sends a service start notification to the screen capturing module of the source end.
In this embodiment, the source terminal is taken as a mobile phone a with an android system, the receiving terminal is taken as a tablet B with an android system, and the source terminal projects a video to the receiving terminal.
After a user starts the wireless screen-casting function of mobile phone A, mobile phone A searches for devices capable of receiving a screen cast from mobile phone A, obtaining a list C of available devices, which includes tablet B. The user may then select tablet B from the searched list C as the device receiving this screen cast. A wireless connection, such as a Wi-Fi connection or a Bluetooth connection, is then established between mobile phone A and tablet B.
After the wireless connection between mobile phone A and tablet B is successfully established, the wireless screen-casting module in mobile phone A sends a service start notification to the screen capturing module in mobile phone A. The service start notification is used to instruct the screen capturing module to capture video data on the screen.
After the wireless connection between the mobile phone A and the tablet B is successfully established, a decoding logic module in the tablet B creates a first thread, a second thread, a third thread and a first cache.
The capacity N of the first buffer needs to be larger than the capacity of the output buffer (OutputBuffer) of the decoding module in the tablet B. N represents the maximum number of frames that can be stored in the first buffer.
In practical applications, the capacity of the first buffer may be set empirically. For example, if the decoding speed of the decoding module in tablet B differs little from the screen refresh frequency of tablet B, a first buffer with a smaller capacity may be set. Conversely, if the decoding speed of the decoding module in tablet B differs greatly from the screen refresh frequency of tablet B, a first buffer with a larger capacity may be set.
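The embodiment gives only a qualitative rule for choosing the capacity N. Purely as an illustration, that empirical rule might be expressed as the following Python sketch; the function name, parameters, and the 0.5 s accumulation window are hypothetical assumptions, not part of the embodiment:

```python
import math

def first_buffer_capacity(output_buffer_frames, decode_fps, refresh_hz,
                          window_s=0.5):
    """Hypothetical sizing heuristic for the first buffer capacity N."""
    # Extra frames by which decoding can outrun the display over the window.
    surplus = max(decode_fps - refresh_hz, 0) * window_s
    # N must always exceed the decoder output buffer's capacity (per the text).
    return output_buffer_frames + 1 + math.ceil(surplus)
```

With equal decode and refresh rates this reduces to the minimum allowed N (one more than the output buffer); a larger gap between the two rates yields a larger buffer, matching the qualitative rule above.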
S2, capturing video data on a source end screen by a screen capturing module of the source end.
For example, after receiving the service start notification, the screen capturing module of mobile phone A starts capturing video data on the screen of mobile phone A. The screen capturing module may capture video data at a certain frame rate, such as 60 FPS.
S3, the screen capturing module of the source end sends video data to the coding logic module of the source end.
For example, the screen capturing module of the mobile phone a sends captured video data to the coding logic module of the mobile phone a.
S4, the coding logic module of the source end calls the coding module to code the video data, and coded data are obtained.
Here, the encoding module is a module in the video encoding and decoding module of the source end.
The encoding module may encode the video data based on the video coding standard H.264. The encoding module may also encode using any other encoding method, such as JPEG (Joint Photographic Experts Group), H.261, or MPEG (Moving Picture Experts Group).
In another example, the encoding module may also perform hybrid encoding using more than two encoding methods.
It should be noted that the foregoing is only a schematic description of the coding method adopted by the coding module, and does not exclude other coding methods, and the coding method adopted by the coding module is not limited in the embodiment of the present application.
S5, the coding logic module of the source end sends coded data to the transmission module of the source end.
Wherein in one example, the transmission module may be a Wi-Fi module. In another example, the transmission module may be a bluetooth module.
Both mobile phone A and tablet B are provided with a Wi-Fi module and a Bluetooth module. If the wireless connection between mobile phone A and tablet B during wireless screen casting is a Wi-Fi connection, the encoding logic module of mobile phone A sends the encoded data to the Wi-Fi module of mobile phone A. If the wireless connection between mobile phone A and tablet B during wireless screen casting is a Bluetooth connection, the encoding logic module of mobile phone A sends the encoded data to the Bluetooth module of mobile phone A.
S6, the transmission module of the source end transmits the coded data to the transmission module of the receiving end.
For example, if the wireless connection between mobile phone A and tablet B during wireless screen casting is a Wi-Fi connection, the Wi-Fi module of mobile phone A sends the encoded data to the Wi-Fi module of tablet B.
If the wireless connection between mobile phone A and tablet B during wireless screen casting is a Bluetooth connection, the Bluetooth module of mobile phone A sends the encoded data to the Bluetooth module of tablet B.
The number of frames received by the receiving end in one Vsync period is not fixed due to the instability of network transmission. For example, in the 1st Vsync period, the receiving end receives 1 frame of data; in the 2nd Vsync period, the receiving end receives 2 frames of data; in the 3rd Vsync period, the receiving end receives 1 frame of data; and so on.
S7, the transmission module of the receiving end sends the encoded data to the decoding logic module of the receiving end.
For example, after the Wi-Fi module of tablet B receives the encoded data sent by the Wi-Fi module of mobile phone A, the Wi-Fi module may send the encoded data to the decoding logic module of tablet B.
S8, the decoding logic module of the receiving end receives the encoded data through the created first thread.
S9, the decoding logic module of the receiving end sends the encoded data to the input buffer of the decoding module through the first thread.
Here, the decoding module is a module in the video codec module of the receiving end. The decoding module itself has two buffers: an input buffer (InputBuffer) and an output buffer (OutputBuffer). The first buffer does not belong to the decoding module; it is created by the decoding logic module.
S10, a decoding module at the receiving end decodes the encoded data to obtain video data, and the video data is stored in an output buffer of the decoding module.
After the decoding module decodes the video data from the encoded data, the video data is immediately stored in an output buffer of the decoding module.
S11, a decoding logic module of the receiving end obtains video data from an output buffer of the decoding module through the created second thread, and whether the obtained video data is stored in the created first buffer is determined according to a preset transmission threshold M.
The second thread may monitor the output buffer (OutputBuffer) of the decoding module; once video data is written into the output buffer, the second thread immediately reads it toward the first buffer. The output buffer may then delete the video data that the second thread has read.
The video data read from the output buffer by the second thread each time may be one or more frames.
The send threshold M represents the maximum number of frames displayed in one Vsync period. The send threshold M may be preset, where M is less than or equal to N. The value of M can be determined according to the delay requirement: when the required delay is small, M can be set to a smaller value; when the required delay is large, M can be set to a larger value. In this way, the number of frames to be displayed in one Vsync period is controlled so as not to exceed M frames.
In the embodiment of the present application, determining whether to store the acquired video data in the created first buffer according to the preset sending threshold M may include:
acquiring time T1 when a current video data frame enters a first buffer memory;
acquiring time T2 when the M-1 st frame video data before the current video data frame enters the first buffer memory;
comparing the difference value of T1 and T2 with the Vsync period to obtain a comparison result;
if the comparison result indicates that the difference between T1 and T2 is greater than the Vsync period, storing the current video data frame into a first buffer; if the comparison result indicates that the difference between T1 and T2 is less than or equal to the Vsync period, the current video data frame is discarded, and the current video frame is not stored in the first buffer.
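The judgment in the steps above can be sketched in Python as follows. This is an illustrative rendering, not code from the embodiment; the helper name `make_admitter` is an assumption, and entry times are taken as plain numbers in milliseconds:

```python
from collections import deque

def make_admitter(m, vsync_period):
    """Decide, per decoded frame, whether it is stored in the first buffer.

    entry_times holds the entry times of the last m-1 admitted frames,
    so entry_times[0] is the time T2 at which the (M-1)-th previous
    admitted frame entered the first buffer.
    """
    entry_times = deque(maxlen=m - 1)

    def admit(t1):
        # While fewer than m-1 frames have been admitted, no (M-1)-th
        # previous frame exists, so the frame is always stored.
        if len(entry_times) < m - 1:
            entry_times.append(t1)
            return True
        t2 = entry_times[0]
        if t1 - t2 > vsync_period:   # more than one Vsync period apart: keep
            entry_times.append(t1)   # the oldest entry drops out automatically
            return True
        return False                 # otherwise: discard the frame

    return admit
```

Because discarded frames are never appended to `entry_times`, the comparison for a later frame is automatically made against the (M-1)-th previously *stored* frame, matching the fig. 6 example where the sixth frame is compared against the first.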
S12, the decoding logic module of the receiving end periodically reads video data from the first buffer through the created third thread, where the period is one Vsync period.
For example, if the Vsync period of tablet B is 16.6 ms, the third thread reads one frame of video data from the first buffer every 16.6 ms. If the Vsync period of tablet B is 8.3 ms, the third thread reads one frame of video data from the first buffer every 8.3 ms.
It should be noted that the above values of the Vsync period are only exemplary, and the Vsync period is not limited in the embodiments of the present application. In practical applications, the actual Vsync period may be determined according to the screen refresh frequency employed by SurfaceFlinger's Vsync in the second electronic device (the receiving end).
S13, the decoding logic module of the receiving end sends video data to the display module according to the Vsync period through the third thread.
For example, if the Vsync period of tablet B is 16.6 ms, the third thread sends one frame of video data to the display module every 16.6 ms. If the Vsync period of tablet B is 8.3 ms, the third thread sends one frame of video data to the display module every 8.3 ms.
It should be noted that, in the same Vsync period, the time t1 when the third thread reads the video data frame from the first buffer is earlier than the time t2 when the third thread sends the video data to the display module.
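As a rough illustration of S12 and S13, the third thread's loop might look like the following Python sketch. The function and parameter names are hypothetical, and `send_to_display` stands in for the real call into the display module:

```python
from collections import deque
import time

def run_display_pump(first_buffer, send_to_display, vsync_period_s, n_periods):
    """Once per Vsync period, read the oldest frame from the first buffer
    (if any) and hand it to the display module."""
    for _ in range(n_periods):
        if first_buffer:
            frame = first_buffer.popleft()   # t1: read the oldest frame first
            send_to_display(frame)           # t2: send only after reading (t1 < t2)
        time.sleep(vsync_period_s)           # wait for the next Vsync tick
```

Reading before sending within each iteration reflects the note above that, within the same Vsync period, the read time t1 precedes the send time t2; a real implementation would block on the Vsync signal rather than sleep.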
After the display module receives the video data frame, it can call the releaseOutputBuffer method of MediaCodec; MediaCodec of the Android display system then calls SurfaceFlinger, and SurfaceFlinger displays the interface corresponding to the video data frame.
Fig. 5 is a schematic diagram illustrating the flow of data in the data processing method according to the embodiment of the present application. Referring to fig. 4 and fig. 5, the first thread receives the encoded data transmitted from the source end and stores it in the input buffer of the decoding module. The decoding module then decodes the data in the input buffer and stores the decoded video data in its output buffer. Next, the second thread acquires a video data frame from the output buffer of the decoding module; if the difference T0 between the time T1 when the video data frame enters the first buffer and the time T2 when the (M-1)-th frame of video data before it entered the first buffer is greater than the Vsync period, the video data frame is stored in the first buffer; if T0 is less than or equal to the Vsync period, the video data frame is discarded and not stored in the first buffer. Then, the third thread reads video data from the first buffer frame by frame according to the Vsync period, and after receiving the Vsync signal, sends the read video data to SurfaceFlinger for display. After SurfaceFlinger receives the video data, the video images are displayed frame by frame according to the Vsync period.
The embodiment of the application can reduce the waiting time of the video data frames by controlling the number of the video data frames to be sent and displayed, which are cached in one Vsync period, so as to reduce the display time delay. This is explained below by the timing during video frame processing.
Fig. 6 is a timing diagram during processing of an exemplary video frame. All time axes in fig. 6 are aligned. Referring to fig. 6, in a first Vsync period, a second thread reads first frame video data from an output buffer at time t1, reads second frame video data from the output buffer at time t2, reads third frame video data from the output buffer at time t3, reads fourth frame video data from the output buffer at time t4, and reads fifth frame video data from the output buffer at time t 5. During the second Vsync period, the second thread reads the sixth frame of video data from the output buffer at time t 6.
With continued reference to fig. 6, the first frame of video data enters the first buffer at time t1', the second frame of video data enters the first buffer at time t2', the third frame of video data enters the first buffer at time t3', the fourth frame of video data enters the first buffer at time t4', the fifth frame of video data enters the first buffer at time t5', and the sixth frame of video data enters the first buffer at time t 6'.
Let the capacity of the SurfaceFlinger buffer be 6 frames. In the conventional scheme, the first through sixth frames of video data are all sent to the SurfaceFlinger buffer, and SurfaceFlinger then displays them frame by frame according to the Vsync period. In this way, the display time interval between the sixth frame of video data and the first frame is 5 Vsync periods; that is, the sixth frame can only be displayed after waiting 5 Vsync periods once the first frame is displayed. The display delay of the video data after the sixth frame is greater than that of the sixth frame, and the display delay grows larger and larger as the number of frames increases.
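The growing backlog of the conventional scheme can be replayed with a small simulation. The sketch below is illustrative only: frames are queued as they arrive, and SurfaceFlinger shows exactly one frame per Vsync period:

```python
from collections import deque

def simulate_fifo_display(arrivals, n_periods):
    """Toy replay of the conventional scheme: every arriving frame is queued,
    and one queued frame is displayed per Vsync period.

    arrivals maps a period index to the list of frames arriving in it.
    Returns a dict mapping each frame to the period in which it is shown.
    """
    queue, shown_in = deque(), {}
    for period in range(n_periods):
        for frame in arrivals.get(period, []):   # frames arriving this period
            queue.append(frame)
        if queue:
            shown_in[queue.popleft()] = period   # one frame shown per Vsync
    return shown_in
```

Feeding in the fig. 6 pattern (five frames in the first period, the sixth in the second) reproduces the 5-Vsync-period gap between the first and sixth frames described above.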
Herein, the time at which a video data frame enters the first buffer refers to the time at which the video data frame would be written into the first buffer; whether it is actually written depends on the subsequent judgment of this embodiment. If it is determined that the video data frame is to be discarded, it is not written into the first buffer; if it is determined that it is to be written, it is written into the first buffer at that time.
With continued reference to fig. 6, according to the data processing method of the embodiment of the present application, assume the send threshold M=5. Since the difference between the time t5' when the fifth frame of video data enters the first buffer and the time t1' when the first frame enters the first buffer is smaller than one Vsync period, the second thread discards the fifth frame, and the fifth frame is not stored in the first buffer. The difference between the time t6' when the sixth frame enters the first buffer and the time t1' when the first frame enters the first buffer (because the fifth frame is discarded, the (M-1)-th frame of video data before the sixth frame is the first frame) is greater than one Vsync period, so the second thread writes the sixth frame into the first buffer. In this case, the sixth frame only needs to wait 4 Vsync periods after the first frame is displayed before it can be displayed. It can be seen that, compared with the conventional scheme, the display delay of the sixth frame is reduced. Moreover, each frame of video data after the sixth frame is judged according to the send threshold M, and only frames meeting the condition are stored (the condition being that the difference between the time when a video data frame enters the first buffer and the time when the (M-1)-th frame of video data before it entered the first buffer is greater than one Vsync period). Therefore, the number of frames waiting for display in any time window whose duration equals one Vsync period is guaranteed not to exceed the threshold M, effectively reducing the display delay of video data frames.
It should be noted that the above data processing method is only one embodiment of the data processing method of the present application, and other embodiments of the data processing method of the present application may also be adopted.
For example, 3 processing threads are used in the foregoing embodiment and a new cache, i.e., the first cache, is built, while in other embodiments of the present application 2 processing threads may be used and the first cache described above need not be built.
In an embodiment employing 2 processing threads, thread 1 may be the same as the first thread described above, and another thread 2 may be used to read video data from the output buffer of the decoding module and send it out according to the Vsync period. For example, assume the send threshold M=5: in the 1st Vsync period, the receiving-end electronic device receives the 1st to 5th frames of video data; in the 2nd Vsync period, the receiving end receives the 6th to 9th frames of video data; in the 3rd Vsync period, and so on. Thread 2 constructs a send-display period whose duration is equal to the duration of the Vsync period.
In the 1st send-display period, thread 2 reads the 1st frame of video data from the output buffer of the decoding module and sends it to SurfaceFlinger; in the 2nd send-display period, thread 2 reads the 2nd frame and sends it to SurfaceFlinger; and so on. In the 5th send-display period, thread 2 reads the 5th frame from the output buffer of the decoding module; if the difference between the time of sending the 5th frame to SurfaceFlinger and the time of sending the 1st frame to SurfaceFlinger would be less than or equal to the Vsync period, thread 2 discards the 5th frame. Next, thread 2 reads the 6th frame from the output buffer of the decoding module; if the difference between the time of sending the 6th frame to SurfaceFlinger and the time of sending the 1st frame to SurfaceFlinger is greater than the Vsync period, thread 2 sends the 6th frame to SurfaceFlinger. Then, thread 2 reads the 7th frame from the output buffer of the decoding module; if the difference between the time of sending the 7th frame to SurfaceFlinger and the time of sending the 2nd frame to SurfaceFlinger is less than or equal to the Vsync period, thread 2 discards the 7th frame; and so on.
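The per-frame decision of thread 2 in this two-thread variant can be sketched as follows. Again this is an illustrative Python rendering with hypothetical names; `send` stands in for handing a frame to SurfaceFlinger, and times are plain numbers in milliseconds:

```python
from collections import deque

def make_sender(m, vsync_period, send):
    """Thread 2's decision: compare against the send time of the
    (m-1)-th previous frame actually sent to SurfaceFlinger."""
    sent_times = deque(maxlen=m - 1)   # send times of the last m-1 sent frames

    def try_send(frame, now):
        # Drop if the (m-1)-th previous frame was sent within one Vsync period.
        if len(sent_times) == m - 1 and now - sent_times[0] <= vsync_period:
            return False
        sent_times.append(now)
        send(frame)
        return True

    return try_send
```

Unlike the three-thread embodiment, no first buffer is needed here: the comparison is made against send times rather than buffer-entry times, but the effect is the same bound of at most M frames sent for display per Vsync-length window.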
It can be seen that, in the embodiments of the present application, the number of frames waiting for display in each Vsync period does not exceed the send threshold M. In this way, the number of to-be-displayed video data frames buffered in one Vsync period is controlled, reducing the time that video data frames wait for display and thereby reducing the display delay.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory is coupled with the processor, the memory stores program instructions, and when the program instructions are executed by the processor, the electronic equipment can make the electronic equipment execute the data processing method.
It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware and/or software modules that perform the respective functions. The present application can be implemented in hardware or a combination of hardware and computer software, in conjunction with the example algorithm steps described in connection with the embodiments disclosed herein. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present embodiment also provides a computer storage medium having stored therein computer instructions which, when executed on an electronic device, cause the electronic device to perform the above-described related method steps to implement the data processing method in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the data processing method in the above-mentioned embodiments.
In addition, the embodiment of the application also provides a device, which can be a chip, a component or a module, and can comprise a processor and a memory which are connected; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the data processing method in the above method embodiments.
The electronic device, the computer storage medium, the computer program product, or the chip provided in this embodiment are used to execute the corresponding methods provided above, so that the beneficial effects thereof can be referred to the beneficial effects in the corresponding methods provided above, and will not be described herein.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts shown as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The various embodiments of the application, and the features within any one embodiment, may be freely combined. Any such combination is within the scope of the application.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
The steps of a method or algorithm described in connection with the present disclosure may be embodied in hardware, or may be embodied in software instructions executed by a processor. The software instructions may be comprised of corresponding software modules that may be stored in random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read Only Memory, ROM), erasable programmable read-only memory (Erasable Programmable ROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.

Claims (10)

1. A data processing method, applied to an electronic device, comprising:
reading a first video data frame from an output buffer of a decoding module of the electronic device, wherein the output buffer is used for storing video data frames decoded by the decoding module from encoded data sent by a source electronic device;
determining a second video data frame written into a first buffer before the first video data frame, wherein a first number of video data frames are spaced between the second video data frame and the first video data frame, the first buffer is used for storing video data frames to be sent to a display system of the electronic device, and the first number of video data frames have been written into the first buffer;
determining a first time difference according to a first time at which the first video data frame is to be written into the first buffer and a second time at which the second video data frame was written into the first buffer;
discarding the first video data frame when the first time difference is less than or equal to a current vertical synchronization period of the display system;
reading, from the first buffer, a target video data frame with the earliest writing time when the time difference between the current time and the time at which a video data frame was last sent to the display system equals the current vertical synchronization period; and
sending the target video data frame to the display system.
2. The method of claim 1, further comprising:
reading a third video data frame from the output buffer of the decoding module of the electronic device;
determining a fourth video data frame written into the first buffer before the third video data frame, the fourth video data frame being spaced from the third video data frame by the first number of video data frames;
determining a second time difference according to a third time at which the third video data frame is to be written into the first buffer and a fourth time at which the fourth video data frame was written into the first buffer; and
writing the third video data frame into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system.
3. The method of claim 2, wherein after the writing of the third video data frame into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system, the method further comprises:
recording a time of writing the third video data frame into the first buffer.
4. The method of claim 1, wherein after the sending of the target video data frame to the display system, the method further comprises:
deleting the target video data frame from the first buffer.
5. The method of claim 1, further comprising, before the reading of the first video data frame from the output buffer of the decoding module of the electronic device:
reading a fifth video data frame from the output buffer of the decoding module of the electronic device; and
writing the fifth video data frame into the first buffer when the number of video data frames written into the first buffer is less than or equal to the first number.
6. The method of claim 5, further comprising, before the reading of the fifth video data frame from the output buffer of the decoding module of the electronic device:
receiving encoded data sent by the source electronic device, wherein the encoded data is obtained by encoding video data frames;
writing the encoded data into an input buffer of the decoding module, so that the decoding module decodes the encoded data to obtain decoded video data frames; and
storing the decoded video data frames in the output buffer of the decoding module.
7. The method of any one of claims 1 to 6, wherein after the discarding of the first video data frame when the first time difference is less than or equal to the current vertical synchronization period of the display system, the method further comprises:
deleting the first video data frame from the output buffer.
8. The method of claim 2, wherein after the writing of the third video data frame into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system, the method further comprises:
deleting the third video data frame from the output buffer.
9. The method of claim 1, wherein the first number is less than or equal to a capacity of the first buffer.
10. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the data processing method of any one of claims 1 to 9.
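The frame-pacing logic recited in claims 1, 2, 4, and 5 above can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the class and method names (`FramePacer`, `on_decoded_frame`, `on_vsync`) are hypothetical, and indexing the "first number" gap as `first_number + 1` positions back in the queue is one plausible reading of the claims.

```python
from collections import deque
import time


class FramePacer:
    """Hypothetical sketch: decoded frames are either dropped or queued in
    the 'first buffer', and the earliest-written frame is sent to the
    display once per vertical synchronization period."""

    def __init__(self, first_number, vsync_period_s):
        self.first_number = first_number      # gap between compared frames
        self.vsync_period = vsync_period_s    # current vsync period, seconds
        self.buffer = deque()                 # "first buffer": (write_time, frame)
        self.last_send_time = None            # time a frame was last sent

    def on_decoded_frame(self, frame, now=None):
        """Claims 1, 2, 5: decide whether to queue or drop a decoded frame.
        Returns True if the frame was written into the buffer."""
        now = time.monotonic() if now is None else now
        # Claim 5: while at most first_number frames are queued, always queue.
        if len(self.buffer) <= self.first_number:
            self.buffer.append((now, frame))
            return True
        # Compare against the write time of the frame that is
        # first_number frames before this one in the buffer.
        earlier_write_time = self.buffer[-(self.first_number + 1)][0]
        if now - earlier_write_time <= self.vsync_period:
            return False                      # claim 1: discard the frame
        self.buffer.append((now, frame))      # claim 2: queue the frame
        return True

    def on_vsync(self, now=None):
        """Claims 1 and 4: once per vsync period, send the frame with the
        earliest writing time and delete it from the buffer."""
        now = time.monotonic() if now is None else now
        if self.last_send_time is not None and now - self.last_send_time < self.vsync_period:
            return None                       # still within the current period
        if not self.buffer:
            return None                       # nothing to display
        self.last_send_time = now
        _, frame = self.buffer.popleft()      # earliest-written frame
        return frame
```

Under this reading, frames that arrive faster than the display can consume them are dropped before they enter the display queue, which bounds queue growth and hence display latency during screen mirroring.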
CN202210019453.3A 2022-01-07 2022-01-07 Data processing method and electronic equipment Active CN115550709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210019453.3A CN115550709B (en) 2022-01-07 2022-01-07 Data processing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN115550709A CN115550709A (en) 2022-12-30
CN115550709B (en) 2023-09-26

Family

ID=84723977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210019453.3A Active CN115550709B (en) 2022-01-07 2022-01-07 Data processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115550709B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115834793B (en) * 2023-02-16 2023-04-25 深圳曦华科技有限公司 Image data transmission control method in video mode


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9084002B2 (en) * 2010-05-03 2015-07-14 Microsoft Technology Licensing, Llc Heterogeneous image sensor synchronization
US8736700B2 (en) * 2010-09-30 2014-05-27 Apple Inc. Techniques for synchronizing audio and video data in an image signal processing system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE591124A (en) * 1959-06-15 1960-09-16 Western Electric Co Alternate change coded pulse transmission.
JPH1032821A (en) * 1996-07-15 1998-02-03 Nec Corp Decoding device for mpeg encoding image data
JP2006166034A (en) * 2004-12-07 2006-06-22 Sanyo Electric Co Ltd Video and sound output device
CN102792682A (en) * 2010-09-26 2012-11-21 联发科技(新加坡)私人有限公司 Method for performing video display control, and associated video processing circuit and display system
CN113225427A (en) * 2016-12-30 2021-08-06 荣耀终端有限公司 Image display method and terminal equipment
CN108476306A (en) * 2016-12-30 2018-08-31 华为技术有限公司 A kind of method that image is shown and terminal device
JP2020042125A (en) * 2018-09-10 2020-03-19 日本放送協会 Real-time editing system
CN111246178A (en) * 2020-02-05 2020-06-05 浙江大华技术股份有限公司 Video processing method and device, storage medium and electronic device
CN111954067A (en) * 2020-09-01 2020-11-17 杭州视洞科技有限公司 Method for improving video rendering efficiency and user interaction fluency
CN112153082A (en) * 2020-11-25 2020-12-29 深圳乐播科技有限公司 Method and device for smoothly displaying real-time streaming video picture in android system
CN112929741A (en) * 2021-01-21 2021-06-08 杭州雾联科技有限公司 Video frame rendering method and device, electronic equipment and storage medium
CN113364767A (en) * 2021-06-03 2021-09-07 北京字节跳动网络技术有限公司 Streaming media data display method and device, electronic equipment and storage medium
CN113873345A (en) * 2021-09-27 2021-12-31 中国电子科技集团公司第二十八研究所 Distributed ultrahigh-definition video synchronous processing method

Also Published As

Publication number Publication date
CN115550709A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN110121114B (en) Method for transmitting stream data and data transmitting apparatus
WO2022052773A1 (en) Multi-window screen projection method and electronic device
CN109862409B (en) Video decoding method, video playing method, device, system, terminal and storage medium
WO2023284445A1 (en) Video stream processing method and apparatus, device, storage medium, and program product
CN104869461A (en) Video data processing system and method
WO2022017205A1 (en) Method for displaying multiple windows and electronic device
CN115550709B (en) Data processing method and electronic equipment
US20170104804A1 (en) Electronic device and method for encoding image data thereof
CN110024395A (en) Image real time transfer, transmission method and controlling terminal
CN114173183B (en) Screen projection method and electronic equipment
CN103686077A (en) Double buffering method applied to realtime audio-video data transmission of 3G wireless network
CN115550708B (en) Data processing method and electronic equipment
EP4044513A1 (en) Method, apparatus and system for displaying alarm file
CN110798688A (en) High-definition video compression coding system based on real-time transmission
CN108462679B (en) Data transmission method and device
CN111212285A (en) Hardware video coding system and control method of hardware video coding system
CN117193685A (en) Screen projection data processing method, electronic equipment and storage medium
CN114449200B (en) Audio and video call method and device and terminal equipment
WO2021114950A1 (en) Multipath http channel multiplexing method and terminal
CN113691815A (en) Video data processing method, device and computer readable storage medium
CN113115039B (en) Working frequency determination method and device and electronic equipment
CN114827514B (en) Electronic device, data transmission method and medium for electronic device and other electronic devices
WO2023197268A1 (en) Resource processing method and apparatus, communication device, and storage medium
US20240073415A1 (en) Encoding Method, Electronic Device, Communication System, Storage Medium, and Program Product
CN103313017B (en) Multichannel kinescope method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant