CN115550709A - Data processing method and electronic equipment - Google Patents

Info

Publication number
CN115550709A
Authority
CN
China
Prior art keywords
video data
frame
buffer
time
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210019453.3A
Other languages
Chinese (zh)
Other versions
CN115550709B (en)
Inventor
李鹏飞
姚远
李玉
许嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210019453.3A priority Critical patent/CN115550709B/en
Publication of CN115550709A publication Critical patent/CN115550709A/en
Application granted granted Critical
Publication of CN115550709B publication Critical patent/CN115550709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4305Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44004Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application provides a data processing method and an electronic device. The data processing method includes the following steps. A first video data frame is read from an output buffer of a decoding module of the electronic device. Then, a second video data frame that was written into a first buffer before the first video data frame is determined, where a first number of video data frames are spaced between the second video data frame and the first video data frame, and the first buffer is used to store video data frames to be sent to a display system of the electronic device. Next, a first time difference is determined from a first time at which the first video data frame is to be written into the first buffer and a second time at which the second video data frame was written into the first buffer. When the first time difference is less than or equal to the current vertical synchronization period of the display system, the first video data frame is discarded. By controlling the number of video data frames buffered for display within one Vsync period, the time each frame waits to be displayed is reduced, which reduces the display delay.

Description

Data processing method and electronic equipment
Technical Field
The present application relates to the field of terminal devices, and in particular, to a data processing method and an electronic device.
Background
At present, in a scenario where video data is transmitted between two electronic devices over a network and the receiving-end electronic device displays the content corresponding to the video data, the receiving-end electronic device decodes the received video data and sends it directly for display. However, the display delay of some video data frames is very large, and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide a data processing method and an electronic device to reduce the display delay of video data frames.
In some scenarios, the time at which the receiving-end electronic device decodes and presents video data is unstable because network transmission is unstable. SurfaceFlinger (the graphics compositor) of the receiving-end electronic device refreshes the screen at a fixed Vsync period (i.e., vertical synchronization period), so each decoded video data frame must wait its turn for a Vsync signal before SurfaceFlinger displays it. This causes a significant display delay for some video data frames.
In a first aspect, an embodiment of the application provides a data processing method. The data processing method is applied to an electronic device and includes the following steps. A first video data frame is read from an output buffer of a decoding module of the electronic device, where the output buffer is used to store video data frames decoded by the decoding module from encoded data sent by a source-end electronic device. Then, a second video data frame that was written into a first buffer before the first video data frame is determined, where a first number of video data frames are spaced between the second video data frame and the first video data frame, and the first buffer is used to store video data frames to be sent to a display system of the electronic device. Next, a first time difference is determined from a first time at which the first video data frame is to be written into the first buffer and a second time at which the second video data frame was written into the first buffer. When the first time difference is less than or equal to the current vertical synchronization period of the display system, the first video data frame is discarded. Thus, if the number of video data frames buffered for display within one Vsync period reaches a certain value (the first number plus 1), the electronic device discards the frame, so that the number of frames buffered for display within one Vsync period does not exceed the first number. By controlling the number of frames buffered for display within one Vsync period, the time video data frames wait for display is reduced, and the display delay is reduced.
According to the first aspect, in one implementation, the data processing method may further include: reading a third video data frame from the output buffer of the decoding module of the electronic device, and determining a fourth video data frame that was written into the first buffer before the third video data frame, where a first number of video data frames are spaced between the fourth video data frame and the third video data frame. Then, a second time difference is determined from a third time at which the third video data frame is to be written into the first buffer and a fourth time at which the fourth video data frame was written into the first buffer, and the third video data frame is written into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system. In this way, if the time difference between the moment a video data frame is written into the first buffer and the moment the (first number + 1)-th video data frame before it was written into the first buffer exceeds one Vsync period, writing this frame does not push the number of frames buffered for display within one Vsync period above the first number, so the frame can be written into the first buffer and subsequently sent to the display system for display.
According to the first aspect, in one implementation, after the third video data frame is written into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system, the method further includes: recording the time at which the third video data frame is written into the first buffer. In this way, whether the (first number + 1)-th video data frame after the third video data frame can be written into the first buffer can be determined from the write time of the third video data frame, ensuring that the number of frames buffered for display within one Vsync period does not exceed the first number and reducing the display delay.
According to the first aspect, in one implementation, the method further includes: when the time difference between the current time and the time at which a video data frame was last sent to the display system equals the current vertical synchronization period, reading the target video data frame with the earliest write time from the first buffer and sending the target video data frame to the display system. Sending video data frames to the display system at the current vertical synchronization period matches the screen refresh frequency of the display system, which reduces the demand on the display system's buffer capacity and reduces the number of frames the display system drops.
According to the first aspect, in one implementation, after the target video data frame is sent to the display system, the method further includes: deleting the target video data frame from the first buffer. Deleting frames that have already been sent for display frees the storage resources of the first buffer in time, so the first buffer has enough space to store video data frames subsequently decoded by the decoding module.
According to the first aspect, in one implementation, before the first video data frame is read from the output buffer of the decoding module of the electronic device, the method further includes: reading a fifth video data frame from the output buffer of the decoding module, and writing the fifth video data frame into the first buffer when the number of video data frames already written into the first buffer is less than or equal to the first number. At the initial stage of data transmission from the source end to the receiving end, the first buffer is empty and no frames need to be discarded; once the number of video data frames in the first buffer reaches the first number, whether each subsequent video data frame is written into the first buffer is determined as described above.
According to the first aspect, in one implementation, before the fifth video data frame is read from the output buffer of the decoding module, the method further includes: receiving encoded data sent by the source-end electronic device, where the encoded data is obtained by encoding video data frames; writing the encoded data into an input buffer of the decoding module so that the decoding module decodes the encoded data into decoded video data frames; and storing the decoded video data frames into the output buffer of the decoding module. In this way, the receiving-end electronic device can continuously receive and decode the encoded data transmitted by the source-end electronic device.
According to the first aspect, in one implementation, after the first video data frame is discarded when the first time difference is less than or equal to the current vertical synchronization period of the display system, the method further includes: deleting the first video data frame from the output buffer. Deleting discarded frames from the output buffer frees the output buffer's storage resources in time, which prevents frames subsequently decoded by the decoding module from overwriting frames not yet sent for display when the output buffer's remaining space is insufficient.
According to the first aspect, in one implementation, after the third video data frame is written into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system, the method further includes: deleting the third video data frame from the output buffer. Deleting frames already written into the first buffer from the output buffer frees the output buffer's storage resources in time, which likewise prevents subsequently decoded frames from overwriting frames not yet sent for display.
According to the first aspect, in one implementation, the first number is less than or equal to the capacity of the first buffer. The first buffer can therefore store at least the first number of video data frames, which guarantees a certain display quality.
In a second aspect, the present application provides an electronic device including a memory and a processor, the processor coupled to the memory. The memory stores program instructions that, when executed by the processor, cause the electronic device to perform the following steps: reading a first video data frame from an output buffer of a decoding module of the electronic device, where the output buffer is used to store video data frames decoded by the decoding module from encoded data sent by a source-end electronic device; determining a second video data frame that was written into a first buffer before the first video data frame, where a first number of video data frames are spaced between the second video data frame and the first video data frame, and the first buffer is used to store video data frames to be sent to a display system of the electronic device; determining a first time difference from a first time at which the first video data frame is to be written into the first buffer and a second time at which the second video data frame was written into the first buffer; and discarding the first video data frame when the first time difference is less than or equal to the current vertical synchronization period of the display system.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: reading a third video data frame from the output buffer of the decoding module of the electronic device, and determining a fourth video data frame that was written into the first buffer before the third video data frame, where a first number of video data frames are spaced between the fourth video data frame and the third video data frame; then determining a second time difference from a third time at which the third video data frame is to be written into the first buffer and a fourth time at which the fourth video data frame was written into the first buffer, and writing the third video data frame into the first buffer when the second time difference is greater than the current vertical synchronization period of the display system. In this way, if the time difference between the moment a video data frame is written into the first buffer and the moment the (first number + 1)-th video data frame before it was written exceeds one Vsync period, writing this frame does not push the number of frames buffered for display within one Vsync period above the first number, so the frame can be written into the first buffer and subsequently sent to the display system for display.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following step: recording the time at which the third video data frame is written into the first buffer. In this way, whether the (first number + 1)-th video data frame after the third video data frame can be written into the first buffer can be determined from the write time of the third video data frame, ensuring that the number of frames buffered for display within one Vsync period does not exceed the first number and reducing the display delay.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: when the time difference between the current time and the time at which a video data frame was last sent to the display system equals the current vertical synchronization period, reading the target video data frame with the earliest write time from the first buffer and sending the target video data frame to the display system. Sending video data frames to the display system at the current vertical synchronization period matches the screen refresh frequency of the display system, which reduces the demand on the display system's buffer capacity and reduces the number of frames the display system drops.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following step: deleting the target video data frame from the first buffer. Deleting frames that have already been sent for display frees the storage resources of the first buffer in time, so the first buffer has enough space to store video data frames subsequently decoded by the decoding module.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: reading a fifth video data frame from the output buffer of the decoding module of the electronic device, and writing the fifth video data frame into the first buffer when the number of video data frames already written into the first buffer is less than or equal to the first number. At the initial stage of data transmission from the source end to the receiving end, the first buffer is empty and no frames need to be discarded; once the number of video data frames in the first buffer reaches the first number, whether each subsequent video data frame is written into the first buffer is determined as described above.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following steps: receiving encoded data sent by the source-end electronic device, where the encoded data is obtained by encoding video data frames; writing the encoded data into an input buffer of the decoding module so that the decoding module decodes the encoded data into decoded video data frames; and storing the decoded video data frames into the output buffer of the decoding module. In this way, the receiving-end electronic device can continuously receive and decode the encoded data transmitted by the source-end electronic device.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following step: deleting the first video data frame from the output buffer. Deleting discarded frames from the output buffer frees the output buffer's storage resources in time, which prevents frames subsequently decoded by the decoding module from overwriting frames not yet sent for display when the output buffer's remaining space is insufficient.
According to the second aspect, in one implementation, the program instructions, when executed by the processor, cause the electronic device to perform the following step: deleting the third video data frame from the output buffer. Deleting frames already written into the first buffer from the output buffer frees the output buffer's storage resources in time, which likewise prevents subsequently decoded frames from overwriting frames not yet sent for display.
According to the second aspect, in one implementation, the first number is less than or equal to the capacity of the first buffer. The first buffer can therefore store at least the first number of video data frames, which guarantees a certain display quality.
In a third aspect, the present application provides a computer-readable storage medium including a computer program which, when run on an electronic device, causes the electronic device to perform the data processing method of the first aspect or any implementation thereof.
Drawings
Fig. 1 is a schematic structural diagram of an exemplary electronic device 100;
fig. 2 is a block diagram illustrating a software structure of the electronic device 100 according to the embodiment of the present application;
fig. 3 is a schematic view illustrating an application scenario of the data processing method according to the embodiment of the present application;
FIG. 4 is a diagram illustrating an exemplary data processing procedure in the application scenario of FIG. 3;
fig. 5 is a schematic diagram illustrating a flow process of data in the data processing method according to the embodiment of the present application;
fig. 6 is a timing diagram illustrating an exemplary video frame processing procedure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is merely an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second," and the like in the description and in the claims of the embodiments of the present application, are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first target object and the second target object, etc. are specific sequences for distinguishing different target objects, rather than describing target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "such as" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; the plurality of systems refers to two or more systems.
Herein, the display delay refers to the difference between the time at which a video data frame is displayed on the receiving-end electronic device and the time at which the same video data frame is displayed on the source-end electronic device.
Herein, the Vsync period, i.e., the vertical synchronization period, may also be referred to as the screen refresh period. The Vsync period is the reciprocal of the screen refresh frequency. For example, the Vsync period is 16.6 ms (milliseconds) when the screen refresh frequency is 60 Hz, and 8.3 ms when the screen refresh frequency is 120 Hz.
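As a quick illustration of this relationship (not part of the patented method; on Android, the refresh frequency can be obtained from Display#getRefreshRate()):

```java
// Illustrative helper (assumed name): the Vsync period is the reciprocal
// of the screen refresh frequency.
static double vsyncPeriodMs(double refreshHz) {
    return 1000.0 / refreshHz;   // 60 Hz -> 16.6 ms, 120 Hz -> 8.3 ms
}
```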
The data processing method of the embodiments of the application can be applied to a screen projection scenario. In a screen projection scenario, a first electronic device serves as the source end: it encodes video data and transmits the encoded data to a second electronic device over a network connection such as Wi-Fi or Bluetooth, and the second electronic device serves as the receiving end. After the second electronic device decodes the video data, it sends the decoded video data to the display system for display ("send-to-display" for short).
In conventional schemes, video data is sent for display immediately after being decoded. SurfaceFlinger of the receiving-end electronic device displays the video data frame by frame, one frame per Vsync period. The video frames to be displayed all wait in SurfaceFlinger's buffer: the later a frame is in the queue, the longer it waits and the larger its display delay.
For example, assume there are 5 video frames waiting to be displayed in the buffer. If the 1st video frame must wait 1 Vsync period before being displayed, then the 2nd must wait 2 Vsync periods, the 3rd must wait 3, the 4th must wait 4, and the 5th must wait 5. The later a frame is in the queue, the longer it waits and the larger its display delay. If many frames are waiting, the display delay of the frames at the back of the queue is large, and the user experience is poor.
In one example, the first electronic device may be a PC (Personal Computer) running Windows, and the second electronic device may be a mobile phone, tablet, or similar device running Android.
In another example, the first electronic device may be a mobile phone, tablet, or similar device running Android, and the second electronic device may be a PC running Windows.
In another example, both the first electronic device and the second electronic device may be mobile phones, tablets, or similar devices running Android.
The embodiments of the application provide a data processing method that can reduce the display delay of the second electronic device and improve the user experience.
The data processing method of the embodiment of the application can be applied to the second electronic device. The structure of the second electronic device may be as shown in fig. 1.
Fig. 1 is a schematic structural diagram of an exemplary electronic device 100. It should be understood that the electronic device 100 shown in fig. 1 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
Referring to fig. 1, an electronic device 100 may include: the mobile terminal includes a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, an Android (Android) system with a layered architecture is taken as an example to exemplarily illustrate a software structure of the electronic device 100.
Fig. 2 is a block diagram illustrating a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer, an application framework layer, and a kernel layer, among others.
The application layer may include a series of application packages.
As shown in fig. 2, the application package of the application layer of the electronic device 100 may include modules such as a service, service management, service setting, transmission module, decoding logic module, and display module.
The service module is used to realize specific service functions. The service management module is used to realize the management functions corresponding to the service module. The service setting module is used to realize the setting functions corresponding to the service module.
For example, the service module may be a wireless screen projection module.
The decoding logic module is used for executing the data processing method. For the specific functions of the decoding logic module, please refer to the detailed description in the following embodiments herein.
The display module is used for displaying the video data.
As shown in fig. 2, the application framework layer may include modules such as basic capability, video codec, socket, and the like.
The basic capability module is used to provide the various APIs (Application Programming Interfaces) that may be used when constructing an application program.
The video coding and decoding module is used for realizing the coding and decoding functions of video data. The video codec module may include an encoding module and a decoding module. The encoding module is used for encoding the video data, and the decoding module is used for decoding the video data. The video encoding and decoding module may be a module that encodes and decodes video data in a hardware manner. The coding logic module of the application layer can call a coding module in the video coding and decoding module to code, and the decoding logic module of the application layer can call a decoding module in the video coding and decoding module to decode.
A socket is an abstraction of an endpoint for two-way communication between application processes on different hosts in a network. A socket is the endpoint through which a process communicates over the network and provides a mechanism for application-layer processes to exchange data using a network protocol. Positionally, a socket connects upward to the application process and downward to the network protocol stack: it is the interface through which an application communicates via network protocols and interacts with the protocol stack.
The application framework layer may further include SurfaceFlinger (not shown in fig. 2).
The kernel layer is a layer between hardware and software.
As shown in FIG. 2, the kernel layer may include modules such as a sensor driver, a Wi-Fi driver, a USB driver, and the like.
It is to be understood that the layers in the software structure and the components included in each layer shown in fig. 2 do not constitute a specific limitation to the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer layers than those shown, and may include more or fewer components in each layer, which is not limited in this application.
Fig. 3 is a schematic view of an application scenario of the data processing method according to the embodiment of the present application. Referring to fig. 3, the application scenario of the present embodiment includes a source end and a sink end, both of which are electronic devices. The software architecture of the receiving end can be as shown in fig. 2, and is not described here again. The software architecture of the source may include an application layer, an application framework layer, and a kernel layer. The application program layer of the source end may include modules such as a service, a service management module, a service setting module, a transmission module, a screen capture module, and an encoding logic module. The application framework layer of the source end may have the same structure as the application framework layer of the receiving end, and the kernel layer of the source end may have the same structure as the kernel layer of the receiving end, please refer to the foregoing description about the software structure shown in fig. 2, and details are not repeated here.
Fig. 4 is a schematic diagram illustrating an exemplary data processing procedure in the application scenario shown in fig. 3. Referring to fig. 4, in this embodiment, the data processing process may include the following steps:
s1, after the service of the source end is started, the service module of the source end sends a service starting notice to a screen capturing module of the source end.
In this embodiment, a source end is a mobile phone a with an android system installed, a receiving end is a flat panel B with an android system installed, and the source end projects a video to the receiving end as an example for description.
After a user starts a wireless screen projection function of the mobile phone A, searching equipment capable of receiving screen projection of the mobile phone A in the mobile phone A, and obtaining an available equipment list C capable of receiving screen projection of the mobile phone A, wherein the available equipment list C comprises a panel B. Then, the user can select the tablet B in the searched available device list C as the receiving screen projection device of this time. Then, a wireless connection, such as a Wi-Fi connection or a bluetooth connection, is established between the handset a and the tablet B.
After the wireless connection between the mobile phone A and the panel B is successfully established, the wireless screen projection module in the mobile phone A sends a service starting notice to the screen capture module in the mobile phone A. The service start notification is used for instructing the screen capture module to capture the video data on the screen.
After the wireless connection between the mobile phone a and the tablet B is successfully established, a decoding logic module in the tablet B creates a first thread, a second thread, a third thread and a first cache.
The capacity N of the first buffer needs to be greater than the capacity of an output buffer (OutputBuffer) of the decoding module in the panel B. N represents the maximum number of frames that can be stored in the first buffer.
In practical applications, the capacity of the first buffer may be set empirically. For example, if the decoding speed of the decoding module in the panel B is smaller than the screen refresh frequency of the panel B, a smaller capacity of the first buffer may be set. On the contrary, if the difference between the decoding speed of the decoding module in the panel B and the screen refresh frequency of the panel B is large, a first cache with a large capacity can be set.
S2. The screen capture module of the source end captures video data from the screen of the source end.
For example, after receiving the service start notification, the screen capture module of mobile phone A starts capturing video data from the screen of mobile phone A. The screen capture module can capture video data at a certain frame rate, for example, 60 FPS.
S3. The screen capture module of the source end sends the video data to the encoding logic module of the source end.
For example, the screen capture module of mobile phone A sends the captured video data to the encoding logic module of mobile phone A.
S4. The encoding logic module of the source end calls the encoding module to encode the video data, obtaining encoded data.
Here, the encoding module is a module in the video codec module of the source end.
The encoding module may encode the video data based on the video coding standard H.264. The encoding module may also encode using any one of JPEG (Joint Photographic Experts Group), H.261, MPEG (Moving Picture Experts Group), and the like.
In another example, the encoding module may also perform hybrid encoding using two or more encoding methods.
It should be noted that the foregoing is only a schematic description of the encoding methods the encoding module may adopt and does not exclude other encoding methods; the embodiments of the application do not limit the encoding method adopted by the encoding module.
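As an illustration of one way the source-end encoding module could be realized with Android's MediaCodec API (a sketch only: the resolution, bit rate, and frame rate below are assumed example parameters, not values from the patent):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

final class EncoderFactory {
    // Sketch: configuring a hardware H.264 ("video/avc") encoder. The concrete
    // parameters (1920x1080, 8 Mbps, 60 fps) are assumed for illustration only.
    static MediaCodec createH264Encoder() throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1920, 1080);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 60);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }
}
```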
S5. The encoding logic module of the source end sends the encoded data to the transmission module of the source end.
In one example, the transmission module may be a Wi-Fi module. In another example, the transmission module may be a Bluetooth module.
Assume mobile phone A and tablet B are both equipped with a Wi-Fi module and a Bluetooth module. If the wireless connection between mobile phone A and tablet B for this wireless screen projection is a Wi-Fi connection, the encoding logic module of mobile phone A sends the encoded data to the Wi-Fi module of mobile phone A; if the connection is a Bluetooth connection, the encoding logic module of mobile phone A sends the encoded data to the Bluetooth module of mobile phone A.
S6. The transmission module of the source end sends the encoded data to the transmission module of the receiving end.
For example, if the wireless connection between mobile phone A and tablet B for this wireless screen projection is a Wi-Fi connection, the Wi-Fi module of mobile phone A sends the encoded data to the Wi-Fi module of tablet B.
If the connection is a Bluetooth connection, the Bluetooth module of mobile phone A sends the encoded data to the Bluetooth module of tablet B.
The number of frames received by the receiving end within one Vsync period is not fixed, because network transmission is unstable. For example, the receiving end may receive 1 frame of data in the 1st Vsync period, 2 frames in the 2nd Vsync period, 1 frame in the 3rd Vsync period, and so on.
S7. The transmission module of the receiving end sends the encoded data to the decoding logic module of the receiving end.
For example, after the Wi-Fi module of tablet B receives the encoded data sent by the Wi-Fi module of mobile phone A, it may send the encoded data to the decoding logic module of tablet B.
S8. The decoding logic module of the receiving end receives the encoded data through the created first thread.
S9. The decoding logic module of the receiving end writes the encoded data into the input buffer of the decoding module through the first thread.
Here, the decoding module is a module in the video codec module of the receiving end. The decoding module itself has two buffers: an input buffer (InputBuffer) and an output buffer (OutputBuffer). The aforementioned first buffer does not belong to the decoding module; the first buffer is a buffer created by the decoding logic module.
S10. The decoding module of the receiving end decodes the encoded data to obtain video data, and stores the video data in the output buffer of the decoding module.
After the decoding module decodes video data from the encoded data, it immediately stores the video data in its output buffer (OutputBuffer).
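A minimal sketch of steps S9-S10 using MediaCodec's synchronous API is shown below (decoder, encodedFrame, and ptsUs are assumed names; error handling and output-format changes are omitted):

```java
import android.media.MediaCodec;
import java.nio.ByteBuffer;

final class DecoderIo {
    private static final long TIMEOUT_US = 10_000;

    // Sketch: the first thread feeds encoded data into the decoder's
    // InputBuffer (S9); the decoder writes decoded frames into its
    // OutputBuffer (S10), where the second thread picks them up.
    static void feedAndDrain(MediaCodec decoder, byte[] encodedFrame, long ptsUs) {
        int inIndex = decoder.dequeueInputBuffer(TIMEOUT_US);
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
            inBuf.clear();
            inBuf.put(encodedFrame);               // encoded data from the network
            decoder.queueInputBuffer(inIndex, 0, encodedFrame.length, ptsUs, 0);
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
        if (outIndex >= 0) {
            // A decoded frame now sits in the OutputBuffer; the second thread
            // reads it and applies the send-to-display check of step S11.
        }
    }
}
```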
S11. The decoding logic module of the receiving end obtains video data from the output buffer of the decoding module through the created second thread, and determines, according to a preset send-to-display threshold M, whether to store the obtained video data in the created first buffer.
The second thread may monitor the output buffer (OutputBuffer) of the decoding module: once video data is written into the OutputBuffer, the second thread immediately reads it out. The video data that has been read by the second thread can then be deleted from the OutputBuffer.
The second thread may read one or more frames of video data from the OutputBuffer at a time.
The send-to-display threshold M represents the maximum number of frames sent for display in one Vsync period. M can be preset, with M less than or equal to N. The value of M can be determined according to the delay requirement: when the required delay is small, M can be set to a smaller value; when a larger delay is acceptable, M can be set to a larger value. In this way, the number of frames sent for display in one Vsync period is kept at or below M.
In the embodiment of the application, determining, according to the preset send-to-display threshold M, whether to store the obtained video data in the created first buffer may include the following steps (a minimal code sketch follows these steps):
acquiring the time T1 at which the current video data frame would enter the first buffer;
acquiring the time T2 at which the video data frame M-1 frames before the current video data frame entered the first buffer;
comparing the difference between T1 and T2 with the Vsync period to obtain a comparison result;
if the comparison result indicates that the difference between T1 and T2 is greater than the Vsync period, storing the current video data frame in the first buffer; if the comparison result indicates that the difference between T1 and T2 is less than or equal to the Vsync period, discarding the current video data frame and not storing it in the first buffer.
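The following Java sketch illustrates this check. The class and method names (FrameGate, admit) are assumptions for illustration; the patent publishes no source code, so this is a minimal sketch of the T1/T2 comparison, not the actual implementation. With M = 5 it reproduces the fig. 6 walkthrough below: the fifth frame is dropped and the sixth is admitted.

```java
import java.util.ArrayDeque;

// Minimal sketch (assumed names) of the admission check of step S11: a frame
// enters the first buffer only if more than one Vsync period separates it
// from the admitted frame M-1 positions earlier.
public class FrameGate {
    private final int m;                          // send-to-display threshold M
    private final ArrayDeque<Long> writeTimesMs = new ArrayDeque<>();

    public FrameGate(int m) {
        this.m = m;
    }

    /** Returns true if the frame arriving at time t1Ms may enter the first buffer. */
    public boolean admit(long t1Ms, long vsyncPeriodMs) {
        if (writeTimesMs.size() < m - 1) {
            writeTimesMs.addLast(t1Ms);           // start-up: buffer still filling
            return true;
        }
        long t2Ms = writeTimesMs.peekFirst();     // frame M-1 before the current one
        if (t1Ms - t2Ms <= vsyncPeriodMs) {
            return false;                         // discard: too many frames per Vsync
        }
        writeTimesMs.removeFirst();               // slide the window of write times
        writeTimesMs.addLast(t1Ms);               // record this frame's write time
        return true;
    }
}
```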
S12. The decoding logic module of the receiving end periodically reads video data from the first buffer through the created third thread; the period is one Vsync period.
For example, if the Vsync period of tablet B is 16.6 ms, the third thread reads a frame of video data from the first buffer every 16.6 ms. If the Vsync period of tablet B is 8.3 ms, the third thread reads a frame of video data from the first buffer every 8.3 ms.
It should be noted that the above Vsync period values are only exemplary and do not limit the Vsync period of the embodiments of the application. In practical applications, the actual Vsync period may be determined based on the screen refresh frequency used by SurfaceFlinger in the second electronic device.
S13. The decoding logic module of the receiving end sends the video data to the display module through the third thread according to the Vsync period.
For example, if the Vsync period of tablet B is 16.6 ms, the third thread sends a frame of video data to the display module every 16.6 ms. If the Vsync period of tablet B is 8.3 ms, the third thread sends a frame of video data to the display module every 8.3 ms.
It should be noted that, within the same Vsync period, the time t1 at which the third thread reads a video data frame from the first buffer is earlier than the time t2 at which the third thread sends the video data to the display module.
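A minimal sketch of the third thread's pacing in S12-S13 is shown below. The names (VideoFrame, DisplayModule, firstBuffer) are assumptions; a production implementation would synchronize to real Vsync callbacks (e.g. android.view.Choreographer) rather than sleeping for a fixed period.

```java
import java.util.concurrent.BlockingQueue;

final class VideoFrame { /* decoded frame payload, write time, etc. */ }

interface DisplayModule {
    void send(VideoFrame frame);
}

// Sketch: once per Vsync period, read the earliest-written frame from the
// first buffer and send it to the display module (steps S12-S13).
final class SendToDisplayThread {
    static Thread start(BlockingQueue<VideoFrame> firstBuffer,
                        DisplayModule displayModule, long vsyncPeriodMs) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                VideoFrame frame = firstBuffer.poll();  // earliest write time, or null
                if (frame != null) {
                    displayModule.send(frame);          // frame also leaves the buffer
                }
                try {
                    Thread.sleep(vsyncPeriodMs);        // ~16.6 ms at 60 Hz
                } catch (InterruptedException e) {
                    return;                             // stop when interrupted
                }
            }
        }, "send-to-display");
        t.start();
        return t;
    }
}
```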
After the display module receives a video data frame, it can call the releaseOutputBuffer method of MediaCodec, part of the Android display pipeline; via MediaCodec the frame reaches SurfaceFlinger, and SurfaceFlinger displays the picture corresponding to the video data frame.
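For reference, MediaCodec#releaseOutputBuffer(int index, boolean render) is the real API call here: with render = true, the decoded buffer is queued to the codec's output Surface, from which SurfaceFlinger composites it on a subsequent Vsync (decoder and outIndex continue the decoding sketch above).

```java
// Releasing a decoded output buffer with render = true queues it to the
// output Surface; SurfaceFlinger then composites and displays it.
decoder.releaseOutputBuffer(outIndex, /* render = */ true);
```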
Fig. 5 is a schematic diagram illustrating an exemplary flow of data in the data processing method according to the embodiment of the application. As shown in fig. 5, the first thread receives the encoded data transmitted from the source end and stores the encoded data in the input buffer of the decoding module. The decoding module then decodes the data in the input buffer and stores the decoded video data in the output buffer of the decoding module. Next, the second thread acquires a video data frame from the output buffer of the decoding module: if the difference T0 between the time T1 at which the frame would enter the first buffer and the time T2 at which the frame M-1 frames before it entered the first buffer is greater than the Vsync period, the frame is stored in the first buffer; if T0 is less than or equal to the Vsync period, the frame is discarded and not stored in the first buffer. Next, the third thread reads video data from the first buffer frame by frame according to the Vsync period and, upon receiving the Vsync signal, sends the read video data to SurfaceFlinger for display. Upon receiving the video data, SurfaceFlinger displays the video images frame by frame, one per Vsync period.
In the embodiment of the application, by controlling the number of video data frames buffered for send-to-display within one Vsync period, the time video data frames wait for display can be reduced, which reduces the display delay. This is explained below using the timing of video frame processing.
Fig. 6 is a timing diagram illustrating an exemplary video frame processing procedure. All time axes in fig. 6 are aligned. Referring to fig. 6, during the first Vsync period, the second thread reads the first frame of video data from the output buffer at time t1, the second frame at time t2, the third frame at time t3, the fourth frame at time t4, and the fifth frame at time t5. During the second Vsync period, the second thread reads the sixth frame of video data from the output buffer at time t6.
Referring to fig. 6, the first frame of video data enters the first buffer at time t1', the second frame at time t2', the third frame at time t3', the fourth frame at time t4', the fifth frame at time t5', and the sixth frame at time t6'.
Assume the capacity of the SurfaceFlinger buffer is 6 frames. In the conventional scheme, the first through sixth frames of video data are all sent to the SurfaceFlinger buffer, and SurfaceFlinger then displays them frame by frame according to the Vsync period. In this way, the display time interval between the sixth frame and the first frame is 5 Vsync periods; that is, after the first frame is displayed, the sixth frame must wait 5 Vsync periods before being displayed. Video data after the sixth frame has an even longer display delay, and the delay grows larger as the frame number increases.
Herein, the time at which a video data frame "enters" the first buffer refers to the time at which the frame would be written into the first buffer; whether the frame is actually written still depends on the subsequent judgment of this embodiment. If it is determined that the frame is discarded, the frame is not written into the first buffer; if it is determined that the frame is written, the frame is written into the first buffer at the determined entry time.
With reference to fig. 6, according to the data processing method of the embodiment of the application, assume the send-to-display threshold M = 5. Since the difference between the time t5' at which the fifth frame would enter the first buffer and the time t1' at which the first frame entered the first buffer is less than one Vsync period, the second thread discards the fifth frame, and the fifth frame is not stored in the first buffer. The difference between the time t6' at which the sixth frame would enter the first buffer and the time t1' at which the first frame entered (because the fifth frame was discarded, the frame M-1 frames before the sixth frame is the first frame) is greater than one Vsync period, so the second thread writes the sixth frame into the first buffer. In this case, the sixth frame waits only 4 Vsync periods after the first frame is displayed. The display delay of the sixth frame is therefore reduced compared with the conventional scheme. Moreover, each frame after the sixth frame is written into the first buffer only if it satisfies the condition checked against the send-to-display threshold M (namely, that the difference between the time the frame would enter the first buffer and the time the frame M-1 frames before it entered the first buffer is greater than one Vsync period). Consequently, in any time window one Vsync period long, the number of frames waiting for send-to-display does not exceed the threshold M, and the display delay of video data frames is effectively reduced.
It should be noted that the above data processing method is only one embodiment of the data processing method of the present application, and other embodiments may also be adopted in the data processing method of the present application.
For example, the foregoing embodiment uses 3 processing threads and constructs a new buffer, the first buffer; but other embodiments of the application may use 2 processing threads, without constructing the first buffer described above.
In an embodiment employing 2 processing threads, thread 1 may be the same as the first thread described above, and the other thread, thread 2, may be used to read video data from the output buffer of the decoding module and send it for display according to the Vsync period. For example, assume the send-to-display threshold M = 5: the receiving-end electronic device receives the 1st to 5th frames of video data in the 1st Vsync period, the 6th to 9th frames in the 2nd Vsync period, and so on. Thread 2 constructs a send-to-display period whose duration equals that of the Vsync period.
In the 1st send-to-display period, thread 2 reads the 1st frame of video data from the output buffer of the decoding module and sends it to SurfaceFlinger; in the 2nd send-to-display period, thread 2 reads the 2nd frame and sends it to SurfaceFlinger; and so on. In the 5th send-to-display period, thread 2 reads the 5th frame from the output buffer; if the difference between the time the 5th frame would be sent to SurfaceFlinger and the time the 1st frame was sent to SurfaceFlinger is less than or equal to the Vsync period, thread 2 discards the 5th frame. Next, thread 2 reads the 6th frame from the output buffer; if the difference between the time the 6th frame would be sent to SurfaceFlinger and the time the 1st frame was sent is greater than the Vsync period, thread 2 sends the 6th frame to SurfaceFlinger. Then thread 2 reads the 7th frame from the output buffer; if the difference between the time the 7th frame would be sent and the time the 2nd frame was sent is less than or equal to the Vsync period, thread 2 discards the 7th frame; and so on.
It can be seen that, in the embodiments of the application, the number of frames waiting for send-to-display in each Vsync period does not exceed the send-to-display threshold M. Thus, by controlling the number of video data frames buffered for send-to-display within one Vsync period, the time video data frames wait for display is reduced, and the display delay is reduced.
An embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory is coupled to the processor, and the memory stores program instructions, and when the program instructions are executed by the processor, the electronic device is enabled to execute the data processing method executed by the electronic device.
It will be appreciated that, to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. In combination with the exemplary algorithm steps described for the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present embodiment also provides a computer storage medium in which computer instructions are stored; when the computer instructions run on an electronic device, they cause the electronic device to execute the above related method steps, thereby implementing the data processing method in the above embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to execute the relevant steps described above, thereby implementing the data processing method in the above embodiments.
In addition, an embodiment of the present application further provides an apparatus, which may specifically be a chip, a component, or a module. The apparatus may include a processor and a memory connected to each other, where the memory stores computer-executable instructions. When the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the data processing method of the above method embodiments.
The electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all configured to execute the corresponding method provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method provided above; details are not repeated here.
Through the description of the above embodiments, those skilled in the art will understand that the division into the above functional modules is merely an example used for convenience and simplicity of description. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is only one type of logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another apparatus, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed to a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
Any of the various embodiments of the present application, and any features within the same embodiment, can be freely combined. Any such combination is within the scope of the present application.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The steps of a method or algorithm described in connection with the disclosure of the embodiments of the present application may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC).
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A data processing method, applied to an electronic device and comprising the following steps:
reading a first video data frame from an output buffer of a decoding module of the electronic device, wherein the output buffer is used for storing video data frames decoded by the decoding module from encoded data sent by a source-end electronic device;
determining a second video data frame written into a first cache before the first video data frame, wherein the second video data frame is spaced from the first video data frame by a first number of video data frames, and the first cache is used for storing video data frames to be sent to a display system of the electronic device;
determining a first time difference according to a first time when the first video data frame is written into the first cache and a second time when the second video data frame is written into the first cache;
discarding the first video data frame when the first time difference is less than or equal to a current vertical synchronization period of the display system.
2. The method of claim 1, further comprising:
reading a third video data frame from the output buffer of the decoding module of the electronic device;
determining a fourth video data frame written into the first cache before the third video data frame, wherein the fourth video data frame is spaced from the third video data frame by the first number of video data frames;
determining a second time difference according to a third time when the third video data frame is written into the first cache and a fourth time when the fourth video data frame is written into the first cache;
when the second time difference is greater than the current vertical synchronization period of the display system, writing the third video data frame into the first cache.
3. The method of claim 2, wherein, after the writing of the third video data frame into the first cache when the second time difference is greater than the current vertical synchronization period of the display system, the method further comprises:
recording the time at which the third video data frame is written into the first cache.
4. The method of claim 1, further comprising:
when the time difference between the current time and the time at which a video data frame was last sent to the display system is equal to the current vertical synchronization period, reading, from the first cache, a target video data frame with the earliest write time;
sending the target video data frame to the display system.
5. The method of claim 4, wherein, after the sending of the target video data frame to the display system, the method further comprises:
deleting the target video data frame from the first cache.
6. The method of claim 1, wherein, before the reading of the first video data frame from the output buffer of the decoding module of the electronic device, the method further comprises:
reading a fifth video data frame from the output buffer of the decoding module of the electronic device;
writing the fifth video data frame into the first cache if the number of video data frames written into the first cache is less than or equal to the first number.
7. The method of claim 6, wherein, before the reading of the fifth video data frame from the output buffer of the decoding module of the electronic device, the method further comprises:
receiving encoded data sent by the source-end electronic device, wherein the encoded data is obtained by encoding video data frames;
writing the encoded data into an input buffer of the decoding module, so that the decoding module decodes the encoded data to obtain decoded video data frames;
storing the decoded video data frames into the output buffer of the decoding module.
8. The method according to any one of claims 1 to 7, further comprising, after the discarding of the first video data frame when the first time difference is less than or equal to the current vertical synchronization period of the display system:
deleting the first video data frame from the output buffer.
9. The method of claim 2, wherein, after the writing of the third video data frame into the first cache when the second time difference is greater than the current vertical synchronization period of the display system, the method further comprises:
deleting the third video data frame from the output buffer.
10. The method of claim 1, wherein the first number is less than or equal to a capacity of the first cache.
11. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the data processing method of any one of claims 1 to 10.
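By way of a non-limiting illustration of the decode path recited in claims 6 to 9: on Android, a decode module with input and output buffers is commonly realized with MediaCodec. The sketch below makes that assumption (H.264 payloads, ByteBuffer output, no error handling); it is one possible realization, not one required by the claims.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Illustrative sketch only: a decode module realized with Android's
 * MediaCodec, assuming H.264 input and ByteBuffer output. Error handling,
 * codec-config frames and end-of-stream signalling are omitted.
 */
public final class DecodeModule {
    private static final long TIMEOUT_US = 10_000;
    private final MediaCodec codec;

    public DecodeModule(int width, int height) throws IOException {
        MediaFormat format =
                MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, /* surface = */ null, null, 0);
        codec.start();
    }

    /** Writes received encoded data into the decoder's input buffer. */
    public void queueEncodedData(byte[] encodedData, long presentationTimeUs) {
        int inIndex = codec.dequeueInputBuffer(TIMEOUT_US);
        if (inIndex >= 0) {
            ByteBuffer inBuf = codec.getInputBuffer(inIndex);
            inBuf.clear();
            inBuf.put(encodedData);
            codec.queueInputBuffer(inIndex, 0, encodedData.length, presentationTimeUs, 0);
        }
    }

    /** Reads the next decoded frame from the output buffer; returns its
     *  buffer index, or a negative value if no frame is ready yet. */
    public int dequeueDecodedFrame(MediaCodec.BufferInfo info) {
        return codec.dequeueOutputBuffer(info, TIMEOUT_US);
    }

    /** Deletes a consumed frame from the output buffer (render = false,
     *  since display goes through the first cache in the embodiments above). */
    public void releaseDecodedFrame(int index) {
        codec.releaseOutputBuffer(index, /* render = */ false);
    }
}
```

In such a realization, a receiving thread would call queueEncodedData as packets arrive, while the first thread of the embodiments above would poll dequeueDecodedFrame and move frames toward the first cache.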
CN202210019453.3A 2022-01-07 2022-01-07 Data processing method and electronic equipment Active CN115550709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210019453.3A CN115550709B (en) 2022-01-07 2022-01-07 Data processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210019453.3A CN115550709B (en) 2022-01-07 2022-01-07 Data processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115550709A true CN115550709A (en) 2022-12-30
CN115550709B CN115550709B (en) 2023-09-26

Family

ID=84723977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210019453.3A Active CN115550709B (en) 2022-01-07 2022-01-07 Data processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115550709B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE591124A (en) * 1959-06-15 1960-09-16 Western Electric Co Alternate change coded pulse transmission.
JPH1032821A (en) * 1996-07-15 1998-02-03 Nec Corp Decoding device for mpeg encoding image data
JP2006166034A (en) * 2004-12-07 2006-06-22 Sanyo Electric Co Ltd Video and sound output device
US20110267269A1 (en) * 2010-05-03 2011-11-03 Microsoft Corporation Heterogeneous image sensor synchronization
CN102792682A (en) * 2010-09-26 2012-11-21 联发科技(新加坡)私人有限公司 Method for performing video display control, and associated video processing circuit and display system
US20120081567A1 (en) * 2010-09-30 2012-04-05 Apple Inc. Techniques for synchronizing audio and video data in an image signal processing system
CN108476306A (en) * 2016-12-30 2018-08-31 华为技术有限公司 A kind of method that image is shown and terminal device
CN113225427A (en) * 2016-12-30 2021-08-06 荣耀终端有限公司 Image display method and terminal equipment
JP2020042125A (en) * 2018-09-10 2020-03-19 日本放送協会 Real-time editing system
CN111246178A (en) * 2020-02-05 2020-06-05 浙江大华技术股份有限公司 Video processing method and device, storage medium and electronic device
CN111954067A (en) * 2020-09-01 2020-11-17 杭州视洞科技有限公司 Method for improving video rendering efficiency and user interaction fluency
CN112153082A (en) * 2020-11-25 2020-12-29 深圳乐播科技有限公司 Method and device for smoothly displaying real-time streaming video picture in android system
CN112929741A (en) * 2021-01-21 2021-06-08 杭州雾联科技有限公司 Video frame rendering method and device, electronic equipment and storage medium
CN113364767A (en) * 2021-06-03 2021-09-07 北京字节跳动网络技术有限公司 Streaming media data display method and device, electronic equipment and storage medium
CN113873345A (en) * 2021-09-27 2021-12-31 中国电子科技集团公司第二十八研究所 Distributed ultrahigh-definition video synchronous processing method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117215426A (en) * 2023-01-28 2023-12-12 荣耀终端有限公司 Display method and electronic equipment
CN115834793A (en) * 2023-02-16 2023-03-21 深圳曦华科技有限公司 Image data transmission control method under video mode

Also Published As

Publication number Publication date
CN115550709B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN113556598A (en) Multi-window screen projection method and electronic equipment
CN116055786B (en) Method for displaying multiple windows and electronic equipment
US10237318B2 (en) Electronic device and method for encoding image data thereof
EP3697088A1 (en) Video sending and receiving method, device, and terminal
CN115550709B (en) Data processing method and electronic equipment
CN110024395A (en) Image real time transfer, transmission method and controlling terminal
US12061662B2 (en) Methods, apparatuses and systems for displaying alarm file
WO2020029088A1 (en) Resource allocation indication method and apparatus, and base station and terminal
CN113259729B (en) Data switching method, server, system and storage medium
CN115550708B (en) Data processing method and electronic equipment
CN108462679B (en) Data transmission method and device
CN118233402A (en) Data transmission method and electronic equipment
CN114449200B (en) Audio and video call method and device and terminal equipment
CN115665707A (en) Display device and data transmission method
CN117472304A (en) Screen acquisition method and device, terminal equipment and storage medium
CN117193685A (en) Screen projection data processing method, electronic equipment and storage medium
CN113691815A (en) Video data processing method, device and computer readable storage medium
CN113115039B (en) Working frequency determination method and device and electronic equipment
CN117135299B (en) Video recording method and electronic equipment
CN112154665A (en) Video display method, receiving end, system and storage medium
US20240073415A1 (en) Encoding Method, Electronic Device, Communication System, Storage Medium, and Program Product
CN115515001B (en) Screen mirroring method, device, equipment and storage medium
US20230379434A1 (en) Data transmission method and apparatus
CN118450240A (en) Image processing method, electronic device, and computer-readable storage medium
CN116170622A (en) Audio and video playing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant