CN113473229B - Method for dynamically adjusting frame loss threshold and related equipment - Google Patents

Method for dynamically adjusting frame loss threshold and related equipment

Info

Publication number
CN113473229B
CN113473229B (application CN202110713481.0A)
Authority
CN
China
Prior art keywords
frame
time
electronic device
video
loss threshold
Prior art date
Legal status
Active
Application number
CN202110713481.0A
Other languages
Chinese (zh)
Other versions
CN113473229A
Inventor
贾睿
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202110713481.0A priority Critical patent/CN113473229B/en
Publication of CN113473229A publication Critical patent/CN113473229A/en
Application granted granted Critical
Publication of CN113473229B publication Critical patent/CN113473229B/en
Priority to PCT/CN2022/092369 priority patent/WO2022267733A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Abstract

The application discloses a method for dynamically adjusting a frame loss threshold and related equipment. The method includes the following steps: a second electronic device receives a video frame sent by a first electronic device and records the time at which the video frame is received; the second electronic device determines a frame receiving time difference, which is the difference between the time at which the video frame is received and the time at which an Mth video frame was received, the Mth video frame being a video frame sent by the first electronic device before this video frame; if the frame receiving time difference is not less than the frame loss threshold, the second electronic device may adjust the frame loss threshold; in addition, after the frame loss threshold is adjusted, the second electronic device may also perform a threshold test, that is, determine whether the decoding delay and the display delay after the adjustment have decreased. The method not only copes with video frame backlog under different network conditions and different device states, but also feeds back the effect of the frame loss threshold in time, ensuring that the current frame loss threshold effectively resolves video frame backlog.

Description

Method for dynamically adjusting frame loss threshold and related equipment
Technical Field
The present application relates to the field of mirror screen projection (screen mirroring), and in particular, to a method for dynamically adjusting a frame loss threshold and a related device.
Background
With the continuous development of internet technology, screen projection is widely used and brings great convenience to people's work and life. Screen projection falls into two types, mirror screen projection and non-mirror screen projection. Mirror screen projection means that the picture of one device is projected onto another device for display in real time, which is particularly useful in scenarios such as business conferences.
Video frames are transmitted during mirror screen projection, and this transmission is easily affected by the network. If the network condition is poor, the device that should receive the video frames cannot receive them in time; after the network recovers, it may receive a large number of video frames within a short time, exceeding the decoding capability of its decoder. Video frames then pile up in the decoder, cannot be decoded in time, and the screen projection delay increases.
Therefore, how to effectively resolve video frame backlog under different network conditions and different device states is an urgent problem to be solved.
Disclosure of Invention
The application provides a method for dynamically adjusting a frame loss threshold. An initial frame loss threshold can be set and then adjusted dynamically according to the time difference with which a second electronic device receives video frames; that is, the current network condition is judged from the receiving time difference, the frame loss threshold is adjusted accordingly, and the frame loss operation is then performed, which effectively resolves video frame backlog under different network conditions. In addition, after the frame loss is completed, it is determined whether the decoding delay and the display delay have decreased after the frame loss threshold was adjusted, and whether the frame loss threshold needs to be adjusted again is decided according to that determination, so that the effect of the frame loss threshold can be fed back in time and the current frame loss threshold can effectively resolve video frame backlog.
In a first aspect, the present application provides a method for dynamically adjusting a frame loss threshold. The method may be applied to a second electronic device and may include the following steps: receiving a video frame sent by a first electronic device; determining a frame receiving time difference, where the frame receiving time difference is the difference between the time at which the video frame is received and the time at which an Mth video frame was received, and the Mth video frame is a video frame sent by the first electronic device before this video frame; in a case where the frame receiving time difference is not less than the frame loss threshold, if a first time length has not reached a preset time, decreasing the frame loss threshold, where the first time length is the time elapsed since the frame loss threshold was last adjusted and the frame loss threshold is used to judge whether to discard a video frame; or, in a case where the frame receiving time difference is not less than the frame loss threshold, if the first time length has reached the preset time and the frame loss threshold is less than a threshold upper limit, increasing the frame loss threshold; if the first time length has reached the preset time and the frame loss threshold is not less than the threshold upper limit, decreasing the frame loss threshold, where the threshold upper limit is the maximum value of the frame loss threshold; recording the decoding delay and display delay of N video frames received after this video frame, where the decoding delay is the time from when a video frame arrives at the decoder until decoding is completed, and the display delay is the time from when decoding is completed until the video frame is displayed on the display screen; judging whether N is equal to a preset number of frames; if N is equal to the preset number of frames, judging whether the adjusted frame loss threshold is effective; and if the adjusted frame loss threshold is not effective, decreasing the adjusted frame loss threshold and stopping the threshold test, where the threshold test is used to determine whether the adjustment to the frame loss threshold is effective.
In the solution provided by the application, the second electronic device can adjust the frame loss threshold to adapt to video frame backlog under different network conditions and device states. In addition, after the frame loss is completed, the second electronic device can also perform a threshold test, that is, determine whether the decoding delay and the display delay have decreased after the frame loss threshold was adjusted, and decide according to that determination whether the frame loss threshold needs to be adjusted again, so that the effect of the frame loss threshold is fed back in time and the current frame loss threshold can effectively resolve video frame backlog.
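The branching described above can be summarized as a short Android-style Java sketch. This is an illustrative reading of the first aspect only; the class name, the adjustment step, the preset time, and the VSYNC-based bounds are assumptions and are not identifiers or values taken from the patent.

```java
// Illustrative sketch of the drop decision and threshold adjustment described above.
// All names and constants are assumptions for illustration only.
public class DropThresholdController {
    private static final long VSYNC_DURATION_MS = 1000 / 60;              // assumed 60 FPS source
    private static final long THRESHOLD_LOWER_MS = 1 * VSYNC_DURATION_MS;
    private static final long THRESHOLD_UPPER_MS = 2 * VSYNC_DURATION_MS; // "threshold upper limit"
    private static final long PRESET_TIME_MS = 5_000;                     // assumed "preset time"
    private static final long STEP_MS = 2;                                // assumed adjustment step

    private long threshold = VSYNC_DURATION_MS;   // initial frame loss threshold
    private long lastAdjustTimeMs = 0;

    /** Returns true if the just-received frame should be dropped. */
    public boolean onFrameReceived(long receiveTimeMs, long mthFrameReceiveTimeMs) {
        long receiveDiff = receiveTimeMs - mthFrameReceiveTimeMs;     // frame receiving time difference
        if (receiveDiff < threshold) {
            return true;                                              // drop this video frame
        }
        // Frame receiving time difference is not less than the threshold: adjust the threshold.
        long firstTimeLength = receiveTimeMs - lastAdjustTimeMs;      // time since the last adjustment
        if (firstTimeLength < PRESET_TIME_MS) {
            threshold = Math.max(THRESHOLD_LOWER_MS, threshold - STEP_MS);   // decrease
        } else if (threshold < THRESHOLD_UPPER_MS) {
            threshold = Math.min(THRESHOLD_UPPER_MS, threshold + STEP_MS);   // increase
        } else {
            threshold = Math.max(THRESHOLD_LOWER_MS, threshold - STEP_MS);   // decrease
        }
        lastAdjustTimeMs = receiveTimeMs;
        return false;                                                 // keep the frame, pass it to the decoder
    }
}
```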
With reference to the first aspect, in a possible implementation manner of the first aspect, the video frame is a video frame that is not used for a threshold test.
In the solution provided by the present application, the video frame that the second electronic device receives from the first electronic device may be one that is not used for the threshold test. In this case, the second electronic device may determine whether to adjust the frame loss threshold according to the time at which the video frame is received, which avoids the inaccuracy that would result from adjusting the frame loss threshold while a video frame is being used for the threshold test.
With reference to the first aspect, in a possible implementation manner of the first aspect, the judging whether the adjusted frame loss threshold is effective includes: determining the average decoding delay and the average display delay in the current full time period, where the current full time period is the period from the reception of the first video frame to the moment at which the average decoding delay and the average display delay are determined; and if the decoding delay and the display delay of the N video frames are respectively at least c% lower than the average decoding delay and the average display delay, determining that the adjusted frame loss threshold is effective.
In the solution provided by the application, the second electronic device may judge, from the average decoding delay and average display delay in the current full time period and the average decoding delay and average display delay of the N video frames received after the frame loss threshold was adjusted, whether the adjusted frame loss threshold effectively resolves video frame backlog, that is, whether the decoding delay and the display delay have decreased.
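A minimal sketch of this validity check is given below, assuming the delays are already averaged in milliseconds; the method and parameter names, and the value of c, are illustrative assumptions.

```java
// Hypothetical threshold-test check: the N test frames must improve both delays by at least c%.
final class ThresholdTest {
    static boolean isAdjustmentEffective(double avgDecodeDelayTestMs, double avgDisplayDelayTestMs,
                                         double avgDecodeDelayAllMs, double avgDisplayDelayAllMs,
                                         double cPercent) {
        boolean decodeImproved  = avgDecodeDelayTestMs  <= avgDecodeDelayAllMs  * (1 - cPercent / 100.0);
        boolean displayImproved = avgDisplayDelayTestMs <= avgDisplayDelayAllMs * (1 - cPercent / 100.0);
        return decodeImproved && displayImproved;   // effective only if both delays dropped by c% or more
    }
}
```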
With reference to the first aspect, in a possible implementation manner of the first aspect, the second electronic device may discard the video frame when the frame receiving time difference is smaller than a frame loss threshold.
In the solution provided by the application, when the frame receiving time difference is smaller than the frame loss threshold, the second electronic device judges that a large number of video frames have been received in a short time, so the video frame just received can be discarded directly. This reduces the time spent waiting for decoding and thus relieves video frame backlog.
With reference to the first aspect, in a possible implementation manner of the first aspect, after the second electronic device receives the video frame sent by the first electronic device, the time for receiving the video frame may also be recorded, and the frame loss threshold is initialized. It can be understood that the sequence of recording the time of receiving the video frame by the second electronic device and initializing the frame loss threshold is not limited.
In the solution provided by the present application, the second electronic device may record the time of receiving the video frame, so as to calculate the frame receiving time difference subsequently.
With reference to the first aspect, in a possible implementation manner of the first aspect, after the recording of the time for receiving the video frame, the second electronic device may further store the time for receiving the video frame in a first queue; the first queue stores the time when the second electronic device receives the Mth frame of video frames.
In the scheme provided by the application, a queue can be set to store the time when the second electronic device receives the video frame, so that the time when the second electronic device receives the video frame can be conveniently processed.
With reference to the first aspect, in a possible implementation manner of the first aspect, before recording the decoding delay and display delay of the N video frames received after this video frame, the second electronic device may further remove the element written first into the first queue and write the time at which the video frame was received into the first queue.
In the scheme provided by the application, the second electronic device can adjust the first queue in time, so that the next frame receiving time difference can be calculated conveniently.
In a second aspect, the present application provides an electronic device that may include a display screen, a memory, and one or more processors, where the memory is configured to store a computer program and the processor is configured to invoke the computer program to cause the electronic device to perform: receiving a video frame sent by a first electronic device; determining a frame receiving time difference, where the frame receiving time difference is the difference between the time at which the video frame is received and the time at which an Mth video frame was received, and the Mth video frame is a video frame sent by the first electronic device before this video frame; in a case where the frame receiving time difference is not less than the frame loss threshold, if a first time length has not reached a preset time, decreasing the frame loss threshold, where the first time length is the time elapsed since the frame loss threshold was last adjusted and the frame loss threshold is used to judge whether to discard a video frame; or, in a case where the frame receiving time difference is not less than the frame loss threshold, if the first time length has reached the preset time and the frame loss threshold is less than a threshold upper limit, increasing the frame loss threshold; if the first time length has reached the preset time and the frame loss threshold is not less than the threshold upper limit, decreasing the frame loss threshold, where the threshold upper limit is the maximum value of the frame loss threshold; recording the decoding delay and display delay of N video frames received after this video frame, where the decoding delay is the time from when a video frame arrives at the decoder until decoding is completed, and the display delay is the time from when decoding is completed until the video frame is displayed on the display screen; judging whether N is equal to a preset number of frames; if N is equal to the preset number of frames, judging whether the adjusted frame loss threshold is effective; and if the adjusted frame loss threshold is not effective, decreasing the adjusted frame loss threshold and stopping the threshold test, where the threshold test is used to determine whether the adjustment to the frame loss threshold is effective.
With reference to the second aspect, in a possible implementation manner of the second aspect, the video frame is a video frame that is not used for a threshold test.
With reference to the second aspect, in a possible implementation manner of the second aspect, when the processor invokes the computer program to cause the electronic device to perform the judging of whether the adjusted frame loss threshold is effective, the processor is specifically configured to invoke the computer program to cause the electronic device to perform: determining the average decoding delay and the average display delay in the current full time period, where the current full time period is the period from the reception of the first video frame to the moment at which the average decoding delay and the average display delay are determined; and if the decoding delay and the display delay of the N video frames are respectively at least c% lower than the average decoding delay and the average display delay, determining that the adjusted frame loss threshold is effective.
With reference to the second aspect, in a possible implementation manner of the second aspect, the processor may be further configured to drop the video frame when the frame receiving time difference is smaller than a frame dropping threshold.
With reference to the second aspect, in a possible implementation manner of the second aspect, after the processor is configured to invoke the computer program to make the electronic device execute receiving of a video frame sent by a first electronic device, the processor may be further configured to invoke the computer program to make the electronic device execute: recording the time of receiving the video frame; and initializing the frame loss threshold.
With reference to the second aspect, in a possible implementation manner of the second aspect, the processor, after being configured to invoke the computer program to cause the electronic device to perform recording of the time for receiving the video frame, may be further configured to invoke the computer program to cause the electronic device to perform: storing a time at which the video frame was received in a first queue; the first queue stores the time when the second electronic device receives the Mth frame of video frames.
With reference to the second aspect, in a possible implementation manner of the second aspect, before invoking the computer program to cause the electronic device to record the decoding delay and display delay of the N video frames received after this video frame, the processor may be further configured to invoke the computer program to cause the electronic device to perform: removing the element written first into the first queue, and writing the time at which the video frame was received into the first queue.
In a third aspect, the present application provides a computer storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a chip applied to an electronic device, where the chip includes one or more processors, and the processor is configured to invoke computer instructions to cause the electronic device to execute any one of the possible implementation manners of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product including instructions, which, when run on a device, cause the electronic device to perform any one of the possible implementations of the first aspect.
It is understood that the electronic device provided by the second aspect, the computer storage medium provided by the third aspect, the chip provided by the fourth aspect, and the computer program product provided by the fifth aspect are all used to execute the method provided by the embodiments of the present application. Therefore, the beneficial effects achieved by the method can refer to the beneficial effects in the corresponding method, and are not described herein again.
Drawings
Fig. 1 is a schematic view of a mirror projection screen provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a mirror projection provided in an embodiment of the present application;
fig. 3A is a schematic decoding diagram of a network in a good condition according to an embodiment of the present disclosure;
fig. 3B is a schematic diagram of decoding when the network condition is bad according to an embodiment of the present disclosure;
fig. 3C is a schematic diagram of decoding after frame loss according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for dynamically adjusting a frame loss threshold according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of another method for dynamically adjusting a frame loss threshold according to an embodiment of the present disclosure;
fig. 6 is a flowchart of adjusting a frame loss threshold according to an embodiment of the present application;
fig. 7A is a schematic diagram of adjusting a frame loss threshold according to an embodiment of the present application;
FIG. 7B is a diagram illustrating another example of adjusting a frame loss threshold according to an embodiment of the present disclosure;
fig. 7C is a schematic diagram of another method for adjusting a frame loss threshold according to an embodiment of the present application;
fig. 8 is a schematic flowchart of audio and video synchronization provided in an embodiment of the present application;
FIG. 9 is a flowchart of another method for dynamically adjusting a frame loss threshold according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a software structure of an electronic device 100 according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First, some terms and related technologies referred to in the present application are explained to facilitate understanding by those skilled in the art.
An Application (APP), commonly referred to as mobile phone software, mainly refers to software installed on a smartphone to make up for shortcomings of the original system and to personalize it, improving the phone's functions and giving users a richer experience. Running mobile phone software requires a corresponding mobile operating system; as of June 1, 2017, the main mobile operating systems were Apple's iOS, Google's Android, the Symbian platform, and Microsoft's Windows platform.
The Window class represents a top-level window of a view. It manages the top-level View of the view hierarchy and provides standard UI handling policies for the background, title bar, default buttons, and so on. Each Window has exactly one Surface on which to draw its own content: when an app creates a Window through the WindowManager, the WindowManager creates a Surface for that Window and passes it to the app so that the application can draw content on it. In short, Window and Surface can be considered to have a one-to-one relationship.
The Window Manager Service (WMS) is a system-level service, started by SystemServer, that implements the IWindowManager interface.
The WindowManager controls window objects, which are containers for view objects. A window object is always backed by a Surface object. The WindowManager supervises the lifecycle, input and focus events, screen orientation, transitions, animations, position, transformations, Z-axis order, and many other aspects of windows. The WindowManager sends all window metadata to SurfaceFlinger so that SurfaceFlinger can use the data to compose Surfaces on the screen.
A Surface generally represents the producer side of a buffer queue that is consumed by SurfaceFlinger. When content is rendered onto a Surface, the result goes into the associated buffer, which is then passed to the consumer. In short, a Surface can be regarded as a layer view: an image is drawn onto the Surface, the view data is produced into the buffer queue, and then sent to the consumer, SurfaceFlinger, to be composited with other layers and finally displayed on the screen.
SurfaceFlinger is an Android service responsible for managing the Surfaces of applications and compositing all Surfaces. SurfaceFlinger is a layer between the graphics library and applications whose function is to accept graphics display data from multiple sources, compose it, and send it to the display device. For example, when an application is open there are commonly three layers, the status bar at the top, the navigation bar at the bottom or side, and the application's interface; each layer is updated and rendered separately, and SurfaceFlinger composes them into one refresh that is sent to the hardware display. The display process uses a BufferQueue, with SurfaceFlinger acting as the consumer side; for example, the Surfaces managed by the WindowManager act as the producer side that generates pages, which are then composited by SurfaceFlinger.
The Hardware Composer (HWC) is used to determine the most efficient way to composite buffers with the available hardware. As a HAL, its implementation is device-specific and is usually provided by the display hardware's Original Equipment Manufacturer (OEM).
The Display subsystem (DSS) is a hardware dedicated to image synthesis, and the images on the main screen of the mobile phone are synthesized by DSS.
VirtualDisplay refers to a virtual display, one of the several kinds of screens supported by Android (the others being the main display and external displays). VirtualDisplay has many usage scenarios, such as screen recording and WFD display; its effect is to capture the content displayed on the screen. There are many ways to implement this capture; for example, the API provides an ImageReader that can read the content of a VirtualDisplay.
MediaCodec provides access to Android's low-level multimedia codecs; it can be used for encoding and decoding and is an important part of Android's low-level multimedia infrastructure. MediaCodec processes input data to generate output data: an input buffer is obtained, filled with data, and handed to the codec; the codec processes the input asynchronously; the filled output buffer is then handed to the consumer, and after the consumer has consumed the data the buffer is returned to the codec.
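Since the paragraph above describes MediaCodec's input/output buffer flow, a minimal synchronous-mode fragment is sketched below. It is a hypothetical illustration of that flow, not code from the patent; the timeout value and variable names are assumptions, and the codec is assumed to have been configured and started elsewhere.

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;

// Hypothetical fragment showing the buffer flow described above (synchronous mode).
public final class DecoderSketch {
    public static void decodeOneFrame(MediaCodec decoder, byte[] encodedFrame, long ptsUs) {
        int inIndex = decoder.dequeueInputBuffer(10_000);              // get an empty input buffer
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
            inBuf.clear();
            inBuf.put(encodedFrame);                                   // fill it with encoded data
            decoder.queueInputBuffer(inIndex, 0, encodedFrame.length, ptsUs, 0);
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = decoder.dequeueOutputBuffer(info, 10_000);      // get a filled output buffer
        if (outIndex >= 0) {
            decoder.releaseOutputBuffer(outIndex, true);               // render to the configured Surface and return the buffer
        }
    }
}
```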
The Transmission Control Protocol (TCP) is a connection-oriented, reliable transport layer communication Protocol based on a byte stream, defined by RFC 793 of the IETF. TCP is intended to accommodate layered protocol hierarchies that support multiple network applications. Reliable communication services are provided by means of TCP between pairs of processes in host computers connected to different but interconnected computer communication networks. TCP assumes that it can obtain simple, possibly unreliable, datagram service from lower level protocols. In principle, TCP should be able to operate over a variety of communication systems connected from hard wire to packet switched or circuit switched networks.
Internet Protocol (IP) is the network-layer protocol in the TCP/IP architecture. IP was designed to improve the scalability of the network: first, to solve internetworking and achieve interconnection of large-scale, heterogeneous networks; and second, to decouple top-level network applications from the underlying network technologies so that both can evolve independently. Following the end-to-end design principle, IP provides hosts with only a connectionless, unreliable, best-effort packet transport service.
A Packet (Packet) is a unit of data in TCP/IP protocol communication transmission, and is also generally referred to as a "data Packet".
The Real-time Transport Protocol (RTP/RTTP) is a network Transport Protocol. The RTP protocol specifies a standard packet format for the delivery of audio and video over the internet. It was originally designed as a multicast protocol but was later used in many unicast applications. The RTP protocol is commonly used in streaming media systems (in conjunction with RTCP protocol), video conferencing and Push to Talk (Push to Talk) systems (in conjunction with h.323 or SIP), making it the technological base of the IP telephony industry. The RTP protocol is used together with the RTP control protocol RTCP and is built on the user datagram protocol.
RTP itself does not provide an on-time delivery mechanism or other Quality of Service (QoS) guarantees and relies on lower-level services for this. RTP neither guarantees delivery nor prevents out-of-order delivery, nor does it determine the reliability of the underlying network. RTP sends packets in sequence order, and the sequence numbers allow the receiver to reassemble the sender's packet sequence; they can also be used to determine the proper position of a packet, for example in video decoding, where packets need not be decoded in sequence.
The RTP standard defines two sub-protocols, RTP and RTCP.
And the data transmission protocol RTP is used for transmitting data in real time. The information provided by the protocol includes: time stamps (for synchronization), sequence numbers (for packet loss and reordering detection), and payload format (for specifying the coding format of the data).
The control protocol RTCP is used for QoS feedback and for synchronizing media streams, that is, RTCP monitors the quality of service and conveys information about the participants in an ongoing session. The bandwidth occupied by RTCP is very small relative to RTP, typically only 5%. This second aspect of RTCP's functionality is sufficient for "loosely controlled" sessions, that is, it is not necessary to support all of an application's control communication requests when there is no explicit membership control and organization.
In information transmission between TCP nodes, the content transmitted each time is a structure, so the data in the structure is packed before each transmission, and after one end receives the data it unpacks the data in the received buf parameter.
Frames Per Second (FPS), a term from the imaging field, refers to the number of frames per second that a chip can render or actually renders; in everyday terms, it is the number of frames of animation or video. FPS measures the amount of information used to store and display motion video. The greater the number of frames per second, the smoother the displayed motion appears. Generally, a minimum of 30 is needed to avoid choppy motion; some computer video formats can provide only 15 frames per second.
PTS (Presentation Time Stamp): a presentation timestamp used to tell the player when to display the data of this frame.
Vertical Synchronization (VSYNC), also known as field synchronization, comes from the display principle of CRT (Cathode Ray Tube) displays, in which horizontal scan lines made up of individual pixels are stacked in the vertical direction to form a complete picture. The refresh rate of the display is controlled by the graphics card's DAC (digital-to-analog converter, also called D/A converter); after the DAC finishes scanning one frame, it generates a vertical synchronization signal. Enabling vertical synchronization means sending this signal to the 3D graphics processing part of the graphics card, so that the card is constrained by the vertical synchronization signal when generating 3D graphics.
In daily life, the mirror image projection screen brings great convenience to the work and life of a user, for example, in a meeting scene, the user can project the content on personal electronic equipment such as a computer and a mobile phone to a large screen through the mirror image projection screen, so that other participants can watch the content without carrying out corresponding operation on the large screen, and the user experience is greatly improved.
Fig. 1 is a schematic view of a mirror image screen projection provided in an embodiment of the present application, where the mirror image screen projection process specifically projects content displayed on a first electronic device onto a second electronic device for display. The wireless screen projection option can be found on the setting interface of the first electronic device, the wireless screen projection is opened, and the screen projection can be carried out in a mirror image mode after the first electronic device is in communication connection with other devices (for example, a second electronic device). As shown in fig. 1, the first electronic device is a mobile phone, the second electronic device is a smart screen, and when the mobile phone is in communication with the smart screen, the video image played on the mobile phone can be projected to the smart screen for display. It can be understood that if the interface of the mobile phone (the picture displayed on the display screen of the mobile phone) changes due to some user operations in the subsequent process, the interface on the smart screen also changes accordingly.
It can be understood that the first electronic device may be one of electronic devices such as a mobile phone, a tablet computer, and a PC, and the second electronic device may be one of electronic devices such as a tablet computer, a PC, and a smart screen.
In addition, a communication connection may be established between the first electronic device and the second electronic device in a variety of ways. Optionally, a communication connection may be established between the first electronic device and the second electronic device by using a Wireless communication technology, for example, connecting the first electronic device and the second electronic device through a Wireless Fidelity (Wi-Fi) network; a communication connection may also be established between a first electronic device and a second electronic device using a wired communication technique, for example, connecting the first electronic device and the second electronic device using a medium such as coaxial cable, twisted pair, optical fiber, and the like.
The specific process of mirror projection is described below with reference to fig. 2.
Fig. 2 is a schematic flow chart of a mirror image screen projection provided in an embodiment of the present application, and as shown in fig. 1, the mirror image screen projection process is to mirror screen content displayed on a first electronic device onto a second electronic device.
First, on the first electronic device side, two operations need to be performed:
one is the display screen.
Specifically, an APP on the first electronic device creates windows through the WMS, which creates a Surface for each Window and passes the corresponding Surface to the application program so that the application can draw its graphic data onto the Surface, that is, render the layer. The WMS provides the buffer and window metadata (the rendered Surface) to SurfaceFlinger, which can use this information to composite the Surfaces into a composed image and then display it on the screen of the first electronic device through its own display system (HWC/DSS); this is the image the user sees on the first electronic device.
And secondly, transmitting the picture to be projected to the second electronic equipment.
Specifically, after SurfaceFlinger composites the images, the screen content needs to be captured for virtual display through VirtualDisplay; it can be understood that the captured screen content may be audio-video data, such as H.264, H.265, VP9, AV1, or AAC data. The captured audio and video data is then encoded by an encoder (for example, MediaCodec), encrypted, and packaged in multiple layers, for example RTP packaging and VTP/TCP packaging, and finally the resulting data packets are sent to the second electronic device.
It is understood that the data packet may be transmitted to the second electronic device through a wireless communication means (e.g., WiFi), and may also be transmitted to the second electronic device through a wired communication means.
It should be noted that the data packet sent by the first electronic device to the second electronic device includes audio data or video data, that is, the audio data and the video data are transmitted independently, where the audio data may include audio frames, and the video data may include video frames. Therefore, the process of sending the data packet from the first electronic device to the second electronic device is the process of sending the video frame and the audio frame from the first electronic device to the second electronic device.
Secondly, the second electronic device side also needs to perform related operations to successfully project the screen.
Specifically, after receiving the data packets sent by the first electronic device, the second electronic device performs the corresponding depacketizing (RTP depacketizing and VTP/TCP depacketizing), decrypting, and decoding (MediaCodec decoding), then synchronizes the resulting audio data and video data, sends them for display via MediaCodec, and finally SurfaceFlinger performs layer composition and displays the result on the screen of the second electronic device.
It should be noted that, correspondingly, the process of the second electronic device receiving the data packet sent by the first electronic device is that the second electronic device receives the video frame and the audio frame sent by the first electronic device.
It can be understood that the transmission of data packets is an important link in mirror screen projection, and whether the first electronic device can send data packets to the second electronic device smoothly and in time directly affects the screen projection effect. Since audio data is transmitted relatively stably (even when network conditions are poor), the present application mainly considers the transmission of video data, that is, the transmission of video frames.
If the network condition is not good, for example, the network fluctuates or the network is weak for a short time, the second electronic device may not be able to receive the video frames sent by the first electronic device in time, and after the network recovers to a stable state, the second electronic device may receive many video frames in a short time.
The difference in how the second electronic device processes video frames in different network environments is described below (fig. 3A-3C). As shown in fig. 3A, when the network condition is good, the second electronic device receives the video frames A, B, C, D at a stable rate (the time difference between received video frames is stable) and also decodes them in sequence at that stable rate, and the decoding time equals the actual decoding time. The decoding time is the time from when the second electronic device actually starts decoding a video frame until decoding is completed; the actual decoding time is the time from when the second electronic device receives the video frame until decoding is completed. For example, if the second electronic device takes 30 milliseconds (ms) from receiving a video frame to finishing its decoding, and 10 ms from actually starting to decode it to finishing, then the decoding time is 10 ms and the actual decoding time is 30 ms. After decoding is complete, the second electronic device can synchronize the decoded video frames A, B, C, D with the audio frames at a steady rate and send them for display, that is, transmit the video frames to the display screen to be shown. Generally, the display status of a video frame can be checked through a display callback after it is sent for display. As shown in fig. 3A, when the network condition is good, the video frames can be displayed on the display screen of the second electronic device at a stable rate.
When the network condition is not good, the video frames cannot be transmitted to the second electronic device in time, and when the network is stable, the second electronic device is likely to receive a plurality of video frames in a short time, so that the video frames cannot be decoded in time.
Illustratively, as shown in fig. 3B, the second electronic device receives video frames A, B and C within a short time. Since video frame A is received first, it is decoded first. However, while video frame A is being decoded, the second electronic device receives video frame B; video frame B should be decoded immediately afterwards, but it can only wait because the decoding of video frame A has not yet finished. After video frame A has been decoded, video frame B is decoded. Similarly, video frame C must wait for video frame B to finish decoding before it can be decoded.
In this case, the decoding times of video frames A, B and C are the same, but their actual decoding times are not. Since video frame A is the first frame received by the second electronic device, it can be decoded without waiting, so its decoding time equals its actual decoding time. Video frame B, however, cannot be decoded until video frame A has finished decoding, so the actual decoding time of video frame B is longer than that of video frame A. Similarly, the actual decoding time of video frame C also includes the time spent waiting for decoding. In other words, the actual decoding time is the time required by the electronic device from receiving a video frame until its decoding is completed.
It can be understood that when the network condition is bad, the display delay of the video frame is increased (as shown in fig. 3B), so that the video frame cannot be displayed on the second electronic device in time.
It will be appreciated that waiting for decoding increases the decoding delay, so the display of the video frame is delayed and, overall, the delay of the mirror projection increases. That is, the picture displayed on the first electronic device may not be synchronized with the picture displayed on the second electronic device; for example, when the 5th second of the video is shown on the first electronic device, only the 3rd second is shown on the second electronic device. From the user's perspective, this greatly affects the user experience.
In order to solve the above problem, a frame loss threshold may be set, and frames may be selectively lost by determining a relationship between a time difference of receiving the video frame by the second electronic device and the frame loss threshold, so as to reduce a time of waiting for decoding of the video frame. For example, as shown in fig. 3C, when the time difference between receiving video frame a and video frame B is less than a predetermined threshold, video frame B may be selected to be dropped. In this case, the video frame may also be displayed on the display screen of the second electronic device in time.
The method described above can alleviate video frame backlog, but there is still room for optimization. On one hand, because no feedback mechanism is established, it cannot be determined whether the set frame loss threshold effectively resolves video frame backlog, that is, the gains in decoding delay and display delay brought by frame dropping cannot be quantified; on the other hand, a preset frame loss threshold cannot effectively resolve video frame backlog under different network conditions and different device states.
In view of the above, the present application provides a method and related device for dynamically adjusting a frame loss threshold: an initial frame loss threshold is set and then adjusted dynamically according to the time difference with which the second electronic device receives video frames, that is, the current network condition is judged from the receiving time difference, the frame loss threshold is adjusted accordingly, and the frame loss operation is then performed, which effectively resolves video frame backlog under different network conditions. In addition, after the frame loss is completed, it is determined whether the decoding delay and the display delay have decreased after the frame loss threshold was adjusted, and whether the frame loss threshold needs to be adjusted again is decided according to that determination, so that the effect of the frame loss threshold can be fed back in time and the current frame loss threshold can effectively resolve video frame backlog.
It can be appreciated that the user needs to trigger the mirror projection before proceeding with the mirror projection as shown in fig. 1.
Illustratively, a user triggers a setting application control on the first electronic device, and in response to the user operation, the first electronic device displays a setting interface, which includes a wireless screen projection control. The first electronic device can detect a user operation acting on the wireless screen projection control, and in response to the user operation, the first electronic device can display the wireless screen projection interface. The wireless screen projection interface comprises one or more controls which are used for representing equipment capable of carrying out mirror image screen projection with the first electronic equipment. The first electronic device may detect a user operation acting on the first control, and in response to the user operation, the first electronic device may perform a mirror image screen projection with the second electronic device. The first electronic equipment not only displays the picture on the display screen of the equipment, but also sends the video frame to the second electronic equipment, so that the picture displayed on the first electronic equipment can be displayed on the second electronic equipment.
It should be noted that the second electronic device needs to go through a series of processes to display the video frame sent by the first electronic device on the display screen. The processing of the second electronic device can refer to the following embodiments.
Fig. 4 is a flowchart illustrating a method for dynamically adjusting a frame loss threshold according to an embodiment of the present application.
S401: a video frame is received.
And the second electronic equipment receives the video frame sent by the first electronic equipment, and records the received video frame as A. In one embodiment of the application, a video frame sent by a first electronic device is received by a TCP/VTP module in a second electronic device.
It is understood that the manner of transmitting the video frames between the first electronic device and the second electronic device includes, but is not limited to, transmitting through a Wireless communication manner such as a Wireless Local Area Network (WLAN), for example, a Wireless Fidelity (Wi-Fi) network, and transmitting through a wired communication manner, for example, a medium such as a coaxial cable, a twisted pair, an optical fiber, and the like.
S402: and judging whether frame loss operation needs to be executed or not.
The second electronic device determines a difference in the time of the received video frame (a) and the time of receipt of a previously received video frame. When the difference value is smaller than the frame loss threshold value, the frame loss operation is executed, namely the video frame A is dropped. And when the difference value is not less than the frame loss threshold value, judging whether to adjust the frame loss threshold value.
It is understood that the process of determining whether to perform the frame dropping operation and the process of adjusting the frame dropping threshold will be specifically described in the following embodiments, and will not be described herein first.
S403: decoding and displaying.
And if the received video frame is not discarded, transmitting the video frame to a decoder for decoding, and displaying the video frame after the decoding is finished so that the video frame is displayed on a display screen of the second electronic equipment.
Fig. 5 is a flowchart illustrating another method for dynamically adjusting a frame loss threshold according to an embodiment of the present application.
S501: a video frame is received.
Specifically, the second electronic device receives a video frame transmitted by the first electronic device. In one embodiment of the application, a video frame sent by a first electronic device is received by a TCP/VTP module in a second electronic device.
It is understood that the manner of transmitting the video frames between the first electronic device and the second electronic device includes, but is not limited to, transmitting through a Wireless communication manner such as a Wireless Local Area Network (WLAN), for example, a Wireless Fidelity (Wi-Fi) network, and transmitting through a wired communication manner, for example, a medium such as a coaxial cable, a twisted pair, an optical fiber, and the like.
S502: the time at which the video frame was received is recorded.
In particular, the second electronic device may record the arrival time of the video frame, that is, the time at which the second electronic device receives it. In one embodiment of the application, the second electronic device may further assign a number to the received video frame and record both the number and the arrival time. For example, if the time at which the second electronic device receives the Nth video frame is T, the second electronic device can look up the arrival time T of that video frame from its number N.
It will be appreciated that the time of arrival of a video frame, which as used herein is not necessarily the actual time at which the video frame is received by the second electronic device, may be represented numerically or in other forms.
Illustratively, the second electronic device records that the time for receiving a frame of video is 20210511235643.
For example, the time when the second electronic device receives the first video frame may be set to 1ms, and the time when the video frame is subsequently received may be calculated with reference to the time when the first frame is received.
In an embodiment of the application, the second electronic device may further be configured with a queue to store the time of arrival of the video frame and/or the number of the video frame. I.e. the queue element indicates the time at which the second electronic device receives the video frame. It is understood that the queue may be referred to as a first queue.
It will be appreciated that the queue provided may be a first-in-first-out linear table that allows only insertion at one end of the table, while deleting elements at the other end. Queue elements refer to data elements in a queue or refer to data elements that perform related operations using a queue data structure. The data type of the queue data element can adopt an existing data type or a custom data type. For example, if the time for the second electronic device to receive the first video frame is set to 1ms, the corresponding queue element may be 1.
In addition, the length of the queue may be determined when the queue is set, that is, the maximum number of elements that the queue can accommodate is determined, and when the number of elements in the queue reaches the upper limit that the queue can accommodate (the queue is full), the element that is written first in the queue needs to be removed, so that a new element can be written into the queue.
In one embodiment of the present application, the length of the queue is 3, that is, the number of elements that can be accommodated in the queue is at most 3, that is, the queue can contain the receiving time of three video frames at most.
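The first queue described in S502 can be sketched as follows. The capacity of 3 matches the example above; the class and method names are assumptions for illustration, not identifiers from the patent.

```java
import java.util.ArrayDeque;

// Sketch of the first queue: it keeps at most three arrival times; when full,
// the element written first is removed before the new arrival time is written.
public class ArrivalTimeQueue {
    private static final int CAPACITY = 3;
    private final ArrayDeque<Long> times = new ArrayDeque<>(CAPACITY);

    public boolean isFull() {
        return times.size() >= CAPACITY;
    }

    /** Oldest recorded arrival time, i.e. the receiving time of the "Mth frame" in the description. */
    public Long oldest() {
        return times.peekFirst();
    }

    public void record(long arrivalTimeMs) {
        if (isFull()) {
            times.pollFirst();        // remove the element written first
        }
        times.addLast(arrivalTimeMs); // write the receiving time of the new video frame
    }
}
```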
S503: a frame loss threshold is initialized.
Specifically, the second electronic device initializes a frame loss threshold, which is used to determine whether the video frame is dropped.
In one embodiment of the present application, the initial frame loss threshold may be set to mThreshold, for example mThreshold = 1 × VsyncDuration, where VsyncDuration is the interval between two adjacent video frames sent at the preset frame rate. For example, when the preset frame rate is 60 FPS, the interval between two adjacent video frames is 1000/60 = 16.67 ms.
It is understood that the preset frame rate refers to a rate at which the first electronic device transmits video frames to the second electronic device, that is, the number of video frames transmitted to the second electronic device by the first electronic device per second, and is determined when the first electronic device and the second electronic device establish a communication connection.
In addition, an adjustment range may be set for the frame loss threshold, for example: 1 × VsyncDuration ≤ mThreshold ≤ 2 × VsyncDuration.
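The initialization in S503 and the range above can be expressed as a small sketch; the helper names are assumptions, and 60 FPS is only the example rate used in the description.

```java
// Sketch of S503: initial threshold and the allowed adjustment range.
final class ThresholdInit {
    /** Initial frame loss threshold: mThreshold = 1 * VsyncDuration. */
    static double initialThresholdMs(int presetFrameRate) {
        double vsyncDurationMs = 1000.0 / presetFrameRate;   // 16.67 ms when presetFrameRate == 60
        return 1 * vsyncDurationMs;
    }

    /** Keep 1 * VsyncDuration <= mThreshold <= 2 * VsyncDuration when adjusting later. */
    static double clampThresholdMs(double thresholdMs, double vsyncDurationMs) {
        return Math.max(1 * vsyncDurationMs, Math.min(2 * vsyncDurationMs, thresholdMs));
    }
}
```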
S504: it is determined whether the queue is full.
Specifically, if the second electronic device sets a queue to record the time when the second electronic device receives the video frame (in step S502), the second electronic device further needs to determine whether the queue is full, that is, whether the number of elements contained in the queue reaches the upper limit. If the queue is full, the process continues to step S505, and if the queue is not full, the receiving time of the recorded video frame is written into the queue (step S507).
In other words, the data in the queue is used to determine and adjust the frame loss threshold. In some embodiments, the adjustment of the frame loss threshold is started when the queue is full. In other embodiments, the frame loss threshold may be adjusted based on the elapsed time in the event the queue is not full.
S505: and judging whether the average frame rate is larger than the minimum frame rate.
Specifically, the average frame rate is compared with the minimum frame rate. If the average frame rate is less than the minimum frame rate, the original frame rate is already low or too many frames have been dropped; to avoid affecting the continuity of the screen-projected picture, the received video frame is not discarded but transmitted directly to the decoder to wait for decoding, the element written first into the queue is removed, and the receiving time of the video frame is written into the queue (step S507). If the average frame rate is not less than the minimum frame rate, the second electronic device calculates the difference between the time of receiving the video frame and the time of receiving the Mth video frame, and records the difference as a.
In some embodiments of the present application, the receiving time of the Mth video frame may be the element written first into the first queue.
Generally, the frame rate refers to the number of picture frames refreshed per second. Here it can be understood as follows: when the video frames of the first electronic device are mirrored onto the second electronic device, it is the number of video frames refreshed on the screen of the second electronic device per second; therefore, the frame rate refers to the number of video frames displayed on the screen of the second electronic device per second.
The average frame rate mentioned above refers to the average of the frame rates over a period of time, which is not necessarily 1 s. For example, the average frame rate may be an average rate at which the second electronic device receives video frames transmitted by the first electronic device within 10 s.
In one embodiment of the present application, the average frame rate may be an average frame rate from when the second electronic device receives the first frame of video frames transmitted by the first electronic device to when the second electronic device receives the latest frame of video frames transmitted by the first electronic device. In yet another embodiment of the present application, the average frame rate may also be an average frame rate at which the second electronic device receives N video frames that were recently transmitted by the first electronic device.
It should be noted that, if the second electronic device discards the video frame sent by the first electronic device, the discarded video frame may not be calculated in the calculation process of the average frame rate.
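A minimal sketch of one way to compute such an average frame rate from the receiving times of the frames that were actually kept (dropped frames excluded) is shown below; the exact formula used by the device is not specified here, so this is only an assumption-based illustration:

fun averageFrameRate(keptReceiveTimesMs: List<Long>): Double {
    if (keptReceiveTimesMs.size < 2) return 0.0
    val spanSeconds = (keptReceiveTimesMs.last() - keptReceiveTimesMs.first()) / 1000.0
    return if (spanSeconds > 0) (keptReceiveTimesMs.size - 1) / spanSeconds else 0.0
}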
In addition, the minimum frame rate is set to ensure the continuity of the screen-projected picture: when the frame rate is too low, the picture is not smooth and user experience suffers greatly. The minimum frame rate can therefore be set in advance according to actual needs, and the relationship between the average frame rate and the minimum frame rate is checked before a frame is dropped, which avoids the excessively low frame rate that continuous frame dropping might otherwise cause.
It should be noted that, in order to ensure the continuity of the picture projected onto the second electronic device, the number of consecutively dropped frames should not be too large. After step S505 is executed, it may also be determined, according to the number of consecutively dropped frames, whether the frame dropping operation can be performed. If the number of consecutively dropped frames exceeds the preset number of dropped frames, the frame dropping operation cannot be performed and the frame loss threshold is not adjusted; in this case the received video frame is transmitted directly to the decoder to wait for decoding, the count of consecutively dropped frames is cleared, the element written first into the queue is removed, and the receiving time of the video frame is written into the queue (step S507).
Illustratively, the preset number of dropped frames may be 2, that is, when the number of consecutive dropped frames exceeds 2, the second electronic device cannot drop the received video frame, but directly transmits the video frame to the decoder to wait for decoding, and clears the number of consecutive dropped frames.
Optionally, a parameter may also be set to indicate whether the frame dropping operation is allowed. For example, a parameter Adjust is set, and whether the frame dropping operation can be performed and the frame loss threshold adjusted is decided according to the value of Adjust (step S506). Specifically, after step S505 is performed, the value of Adjust is checked. When Adjust is 0, the frame loss threshold cannot be adjusted and the frame dropping operation cannot be performed; the video frame is transmitted directly to the decoder to wait for decoding, the element written first into the queue is removed, and the receiving time of the video frame is written into the queue (step S507). When Adjust is 1, the subsequent step (step S506) may be executed, and whether to perform the frame dropping operation still requires the specific judgment made in that step. For another example, it may be defined that when Adjust is false, the frame loss threshold cannot be adjusted and the frame dropping operation cannot be performed, and when Adjust is true, the subsequent step (as in step S506) may be performed. It is understood that other determination methods are possible, and the present application is not limited thereto.
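The pre-drop checks described above (minimum frame rate, cap on consecutive drops, and the Adjust flag) could be combined as in the following hedged sketch; MIN_FRAME_RATE, MAX_CONSECUTIVE_DROPS and the function name are example values and assumptions, not values mandated by the patent:

const val MIN_FRAME_RATE = 24.0          // example minimum frame rate, set according to actual needs
const val MAX_CONSECUTIVE_DROPS = 2      // example preset number of consecutively dropped frames

var consecutiveDrops = 0
var adjustAllowed = true                 // the "Adjust" flag: false means no dropping and no adjustment

fun mayDropFrame(averageFrameRate: Double): Boolean {
    if (averageFrameRate < MIN_FRAME_RATE) return false   // frame rate already low: never drop
    if (consecutiveDrops > MAX_CONSECUTIVE_DROPS) {        // too many drops in a row: pass this frame through
        consecutiveDrops = 0                               // and clear the counter, as described above
        return false
    }
    return adjustAllowed
}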
S506: the frame loss threshold is adjusted.
Specifically, the magnitude relationship between a and the frame loss threshold value mThreshold is determined, and the frame loss threshold value is adjusted according to the magnitude relationship, there are 2 cases as follows:
1. a < mThreshold.

In this case, too many video frames have been received in a short time, so the second electronic device directly drops the received video frame, counts the number of consecutively dropped frames, and updates the average frame rate. It is to be appreciated that the average frame rate may be calculated here without counting video frames that were received but dropped by the second electronic device. If the number of consecutively dropped frames exceeds the preset number of dropped frames, the frame dropping operation is not performed on the next received video frame, the frame loss threshold is not adjusted, and that video frame is transmitted directly to the decoder to wait for decoding.
In addition, when a parameter is set to indicate whether the frame dropping operation is allowed, its value may be updated after a < mThreshold is determined and the number of consecutively dropped frames is counted. For example, when the parameter Adjust is used for this purpose, if the number of consecutively dropped frames exceeds the preset number of dropped frames, Adjust is set to 0, indicating that the second electronic device cannot drop the next received video frame.
2. a ≥ mThreshold.
The adjustment of the frame loss threshold in the case where a ≥ mThreshold is explained in detail with reference to fig. 6:
s601: it is determined whether the second electronic device is performing a threshold test.
The threshold test will first be briefly explained. The threshold test means that, after the threshold is adjusted, the second electronic device judges whether the adjustment is effective according to the decoding delay and the display delay of the next N received video frames; if the adjustment is effective, the adjusted frame loss threshold is kept, otherwise the threshold is adjusted again.
If a threshold test is in progress, it is not yet known whether the last threshold adjustment was effective, so the threshold is not adjusted again at this time, to avoid interfering with the evaluation of the last adjustment.
Therefore, if the second electronic device is performing the threshold test, the received video frame is transmitted directly to the decoder to wait for decoding, the element written first into the queue is removed, and the receiving time of the video frame is written into the queue (step S507); if the second electronic device is not performing the threshold test, the process continues with step S602.
S602: and judging whether the frame loss threshold value is not adjusted in the preset time.
If the frame loss threshold has not been adjusted within the preset time, this may be because the frame loss threshold is too large, so that a < mThreshold (case 1) is always satisfied during that time, and decreasing the threshold may be considered. If the frame loss threshold has been adjusted within the preset time, the process continues with step S603.
It should be noted that the preset time may be set according to actual needs, and in an embodiment of the present application, the preset time may be set to 1.5 s.
S603: and judging whether the frame loss threshold is smaller than the upper threshold.
As mentioned in step S503, an adjustment range may be set for the frame loss threshold, that is, an upper limit and/or a lower limit of the adjustment may be set. In an embodiment of the present application, the lower limit of the frame loss threshold is set to 1 × VsyncDuration and the upper limit is set to 2 × VsyncDuration.
If the frame loss threshold is not less than the upper threshold, the frame loss threshold has reached its adjustable upper limit (since the adjustment cannot exceed the adjustment range), so decreasing the frame loss threshold may be considered. If the frame loss threshold is smaller than the upper threshold, the frame dropping range can be enlarged further to alleviate video frame blocking, i.e. the frame loss threshold is increased (as shown in step S604).
It should be noted that after the adjustment of the frame loss threshold (increase or decrease by a certain step size) is completed, it can be understood that the threshold test is started.
S604: the frame loss threshold is increased.
The specific way to adjust the frame loss threshold may be: increase the frame loss threshold by a certain step size, denoted b, so that the adjusted frame loss threshold is mThreshold = mThreshold + b.
It can be understood that the value of b can be set according to the actual situation. For example, if b is set to 0.1 × VsyncDuration, the adjusted frame loss threshold is mThreshold = mThreshold + 0.1 × VsyncDuration. It should be noted that if an adjustment range has been set for the frame loss threshold (as in step S503), the frame loss threshold can only be adjusted within this range.
S605: the frame loss threshold is decreased.
The specific way to adjust the frame loss threshold may be: decrease the frame loss threshold by a certain step size, denoted b, so that the adjusted frame loss threshold is mThreshold = mThreshold − b.
It can be understood that the value of b can be set according to the actual situation. For example, if b is set to 0.1 × VsyncDuration, the adjusted frame loss threshold is mThreshold = mThreshold − 0.1 × VsyncDuration. It should be noted that if an adjustment range has been set for the frame loss threshold (as in step S503), the frame loss threshold can only be adjusted within this range.
The adjustment of the frame loss threshold is illustratively described with reference to fig. 7A-7C. In step S503, the frame loss threshold is initialized to mThreshold = 1 × VsyncDuration; as shown in fig. 7A, when a < mThreshold, the frame dropping process is performed. When the frame loss threshold is adjusted by increasing it by a certain step size (as shown in fig. 7B), for example a step size of 0.1 × VsyncDuration, the updated frame loss threshold is mThreshold = 1.1 × VsyncDuration; as shown in fig. 7C, when a < mThreshold, i.e. when a < 1.1 × VsyncDuration, the frame dropping process is performed. It can be understood that the frame loss threshold can be increased up to at most 2 × VsyncDuration, i.e. mThreshold ≤ 2 × VsyncDuration, as shown in fig. 7A-7C.
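Putting steps S601-S605 together, a possible shape of the adjustment logic is sketched below, with all times in milliseconds; thresholdTestRunning, lastAdjustMs, STEP_MS and the 1.5 s window are illustrative assumptions based on the examples above, not the patented implementation:

var thresholdTestRunning = false
var lastAdjustMs = 0L
val STEP_MS = 0.1 * (1000.0 / 60)         // step size b = 0.1 x VsyncDuration (60 FPS example)
const val NO_ADJUST_WINDOW_MS = 1500L     // preset time without any adjustment (1.5 s example)

fun adjustThreshold(nowMs: Long, thresholdMs: Double, lowerMs: Double, upperMs: Double): Double {
    if (thresholdTestRunning) return thresholdMs             // S601: never adjust during a threshold test
    val adjusted = when {
        nowMs - lastAdjustMs >= NO_ADJUST_WINDOW_MS ->        // S602: no adjustment within the preset time
            (thresholdMs - STEP_MS).coerceAtLeast(lowerMs)     // S605: decrease by one step
        thresholdMs < upperMs ->                               // S603: still below the upper limit
            (thresholdMs + STEP_MS).coerceAtMost(upperMs)      // S604: increase by one step
        else ->
            (thresholdMs - STEP_MS).coerceAtLeast(lowerMs)     // S605: already at the upper limit, decrease
    }
    lastAdjustMs = nowMs
    thresholdTestRunning = true                                // the threshold test starts after adjusting
    return adjusted
}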
S507: and adjusting the queue.
Specifically, if the second electronic device sets a queue to record the time when the second electronic device receives the video frame (as in step S502), after step S506 is executed, the second electronic device may remove the element written first in the queue and write the time when the video frame is received (the time when the video frame is received and recorded in step S502) into the queue.
Illustratively, the queue is {1, 5, 10}, i.e. the receiving times of the three video frames stored in the queue are 1 ms, 5 ms and 10 ms, where 1 is the element written into the queue first and 10 is the element written into the queue last. The element written first, 1, is removed and the receiving time of the newly received video frame is written into the queue; if that receiving time is 16 ms, 16 is written into the queue and the adjusted queue is {5, 10, 16}.
Illustratively, the queue is {3, 4, 5}, i.e., the queue stores the numbers of the 3 rd, 4 th, 5 th frames of video frames received by the second electronic device. Through these numbers, the second electronic device can find the receiving time of the second electronic device for receiving the 3 rd, 4 th and 5 th frames of video frames.
Illustratively, the queue is {20210511235643, 20210511235666, 20210511235733}, and the three strings of numbers stored in the queue indicate the time at which the second electronic device received three frames of video frames.
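The {1, 5, 10} example above can be reproduced with a small runnable snippet (illustration only):

fun main() {
    val queue = ArrayDeque(listOf(1L, 5L, 10L))  // receiving times already stored: 1 ms, 5 ms, 10 ms
    queue.removeFirst()                          // remove the element written first (1)
    queue.addLast(16L)                           // write the receiving time of the new frame
    println(queue)                               // prints [5, 10, 16]
}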
S508: and (6) decoding.
Specifically, the decoder in the second electronic device decodes the received video frame and records the decoding delay. It is understood that the decoding delay refers to the period of time from when a video frame is transmitted to the decoder to when the decoding of the video frame is completed.
S509: and synchronously processing and displaying the audio and video.
Specifically, according to the frame loss condition in the previous step, the audio frame is adjusted accordingly, and whether the video frame is sent to display is judged.
As shown in fig. 8, the second electronic device may first determine whether a frame dropping operation has occurred (step S801). If a video frame received before the video frame to be synchronized was dropped, the same number of audio frames also needs to be dropped (step S802) to obtain the audio frame corresponding to the video frame; otherwise the video frames and audio frames would no longer correspond one to one, and the picture and sound presented on the screen of the second electronic device would be out of sync. If there is no frame dropping operation, or the same number of audio frames has already been dropped (step S802), the average PTS interval of all video frames already sent to display is calculated and denoted c, and the presentation time of the next video frame is then estimated (step S803): the estimated video frame PTS equals the actual PTS of the previous video frame plus c. It can be understood that the actual PTS of the previous video frame refers to its actual presentation time. If the estimated video frame PTS differs from the real presentation time and the difference is smaller than a first preset threshold, the video frame is presented at the estimated video frame PTS. Since a video frame can only be displayed at a VSYNC time point, the VSYNC time point closest to the presentation time point (the estimated video frame PTS) needs to be found (step S804). After the corresponding VSYNC time point is found, it is determined whether the difference between the VSYNC time point and the presentation time point of the audio frame is smaller than a second preset threshold (step S805). If the difference is not smaller than the second preset threshold, the video frame is not displayed (step S806); otherwise, the video frame is displayed (step S807) and finally appears on the screen of the second electronic device.
It is understood that both the first preset threshold and the second preset threshold can be set according to actual needs, and the present application is not limited thereto.
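For illustration, the display decision of step S509 might be sketched as follows, assuming millisecond timestamps; the parameter names are assumptions, and only the VSYNC-versus-audio check of steps S804-S807 is modeled (the first-preset-threshold check is omitted for brevity):

import kotlin.math.abs

fun chooseDisplayVsync(
    prevActualPtsMs: Long,        // actual presentation time of the previous video frame
    avgPtsIntervalMs: Long,       // c: average PTS interval of the frames already sent to display
    audioPtsMs: Long,             // presentation time of the corresponding audio frame
    vsyncPointsMs: List<Long>,    // upcoming VSYNC time points
    secondThresholdMs: Long
): Long? {
    val estimatedPtsMs = prevActualPtsMs + avgPtsIntervalMs                              // S803
    val nearestVsync = vsyncPointsMs.minByOrNull { abs(it - estimatedPtsMs) } ?: return null  // S804
    // S805-S807: display only if the chosen VSYNC point is close enough to the audio presentation time
    return if (abs(nearestVsync - audioPtsMs) < secondThresholdMs) nearestVsync else null
}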
It should be noted that after the rendering is completed, the upper layer application of the second electronic device receives the callback information indicating that the MediaCodec has finished rendering (as shown in fig. 2), updates the decoding delay and the display delay, and updates the average frame rate. It will be appreciated that the display delay refers to the time from when decoding of the video frame is completed until the frame is actually displayed on the screen of the second electronic device.
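The two delays can be captured with a simple per-frame record, for example as below; the field names are assumptions for illustration:

data class FrameTimingMs(
    val sentToDecoderMs: Long,   // when the frame was handed to the decoder
    val decodeDoneMs: Long,      // when decoding of the frame completed
    val displayedMs: Long        // when the frame was actually shown on the screen
) {
    val decodingDelayMs get() = decodeDoneMs - sentToDecoderMs   // decoding delay
    val displayDelayMs get() = displayedMs - decodeDoneMs        // display (send-to-display) delay
}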
S510: and testing whether the frame loss threshold is effective.
If the frame loss threshold was adjusted in the above steps, the second electronic device needs to perform a threshold test to check whether the adjustment was effective. Specifically, the second electronic device monitors the decoding delay and the display delay of the next N received video frames. If the decoding delay and the display delay of the N video frames received after the adjustment are each reduced by at least c% compared with the average decoding delay and the average display delay over the current full period, the adjustment of the frame loss threshold in the above steps is determined to be an effective adjustment, and the adjusted frame loss threshold continues to be used; otherwise, the adjustment is determined to be ineffective, and the frame loss threshold mThreshold is decreased by a certain step size, for which reference may be made to step S506, not repeated here.
It is understood that the current full period refers to a period of time from when the first video frame is received by the second electronic device until the average decoding delay and the average presentation delay are calculated.
It should be noted that c may be adjusted according to actual needs, for example, c may be 10, and under this condition, if the decoding delay and the display delay of the N video frames received by the second electronic device after the adjustment of the frame loss threshold are respectively reduced by at least 10% from the average decoding delay and the average display delay of the current full time period, it is determined that the adjustment of the frame loss threshold is effective adjustment.
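Under the example values N = 60 monitored frames (see fig. 9) and c = 10, the effectiveness check of step S510 could be sketched as follows; the function and parameter names are assumptions:

fun adjustmentEffective(
    monitoredDecodeDelaysMs: List<Double>,    // decoding delays of the N frames after the adjustment
    monitoredDisplayDelaysMs: List<Double>,   // display delays of the same N frames
    fullPeriodAvgDecodeMs: Double,            // average decoding delay over the current full period
    fullPeriodAvgDisplayMs: Double,           // average display delay over the current full period
    c: Double = 10.0
): Boolean {
    if (monitoredDecodeDelaysMs.isEmpty() || monitoredDisplayDelaysMs.isEmpty()) return false
    val factor = 1.0 - c / 100.0
    // effective only if both delays dropped by at least c% versus the full-period averages
    return monitoredDecodeDelaysMs.average() <= fullPeriodAvgDecodeMs * factor &&
           monitoredDisplayDelaysMs.average() <= fullPeriodAvgDisplayMs * factor
}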
Fig. 9 is a flowchart illustrating another method for dynamically adjusting a frame loss threshold according to an embodiment of the present application.
S901: a video frame is received.
The second electronic equipment receives the video frame sent by the first electronic equipment. In one embodiment of the application, a video frame sent by a first electronic device is received by a TCP/VTP module in a second electronic device.
It is understood that the specific implementation manner of step S901 can refer to step S501, and is not described herein again.
S902: the time to receive a video frame is recorded and a frame loss threshold is initialized.
And after the second electronic equipment receives the video frame sent by the first electronic equipment, recording the receiving time of the video frame and initializing a frame loss threshold. It is understood that the specific implementation of step S902 can refer to step S502 and step S503, which are not described herein again.
S903: it is determined whether the queue is full.
The second electronic device may set a queue for storing the receiving time of the video frame, and before writing the receiving time of the video frame into the queue, the second electronic device may determine whether the queue is full, and the specific determination process may refer to step S504, which is not described herein again.
S904: and judging whether the average frame rate is larger than the minimum frame rate.
In order to display the video frame on the second electronic device at a suitable frame rate and ensure consistency of a picture displayed on the second electronic device, the second electronic device may determine whether the average frame rate is greater than the minimum frame rate. It is understood that the specific implementation of step S904 can refer to step S505, and will not be described herein again.
S905: and calculating the difference value between the arrival time of the current video frame and the first element of the queue, and recording the difference value as a.
The second electronic device calculates the difference between the time at which the current video frame is received and the element written first into the queue, and records the difference as a.
S906: it is determined whether the frame loss threshold can be adjusted.
In particular, the second electronic device may set a parameter indicating whether the frame loss threshold may be adjusted. For example, the second electronic device may determine whether the frame loss threshold may be adjusted based on the value of Adjust. When Adjust is 0, the frame dropping operation cannot be performed at this time; the video frame is transmitted directly to the decoder to wait for decoding, the element written first into the queue is removed, and the receiving time of the video frame is written into the queue (as in step S915). When Adjust is 1, the subsequent step may be executed (as in step S506), and whether to perform the frame dropping operation still requires the specific judgment made in the subsequent step (as in step S907).
It is understood that the step S505 can be referred to for specific implementation of the step S906, and the detailed description thereof is omitted here.
S907: and judging whether a is smaller than a frame loss threshold value.
The second electronic device may determine whether a is less than a frame loss threshold. If a is smaller than the frame loss threshold, directly dropping frames and counting the number of continuous frame loss (in step S908); if a is not less than the frame loss threshold, it is determined whether a threshold test is being performed (step S909).
S908: directly dropping frames and counting the number of continuous dropped frames.
It is understood that the step S506 can be referred to for a specific implementation manner of the step S908, and is not described herein again.
S909: it is determined whether a threshold test is being performed.
The second electronic device may determine whether a threshold test is being performed. If the threshold test is being performed, directly execute step S914; if the threshold test is not being performed, the process continues to step S910.
It is understood that the step S909 can be referred to step S601 for specific implementation, and will not be described herein.
S910: and judging whether the frame loss threshold value is not adjusted in the preset time.
The second electronic device may determine whether the frame loss threshold has not been adjusted within a preset time. If the second electronic device adjusts the frame loss threshold within the preset time, directly executing step S911; if the second electronic device does not adjust the frame loss threshold within the preset time, the step S913 continues to be executed.
It is understood that the specific implementation manner of step S910 can refer to step S602, and is not described herein again.
S911: and judging whether the frame loss threshold is smaller than the upper threshold.
The second electronic device may determine whether the frame loss threshold is less than the upper threshold. If the frame loss threshold is smaller than the upper threshold, continue to execute step S912; if the frame loss threshold is not less than the upper threshold, step S913 is executed.
It is understood that the specific implementation of step S911 can refer to step S603, and is not described herein again.
S912: the frame loss threshold is increased by one step and the threshold test is started.
It is understood that the specific implementation manner of step S912 can refer to step S604, and is not described herein again.
S913: the frame loss threshold is decreased by one step and the threshold test is started.
It is understood that the step S913 can be realized in a specific manner with reference to the step S605, and the details are not repeated herein.
S914: the queue head element is removed.
It is understood that the specific implementation of step S914 can refer to step S507, and is not described herein again.
S915: the time of the received video frame is written to the queue.
It is understood that the specific implementation of step S915 may refer to step S507, and is not described herein again.
S916: and decoding and counting decoding time delay.
It is understood that the step S508 can be referred to for the specific implementation of the step S916, and is not described herein again.
S917: and synchronizing and displaying the audio and video.
It is understood that the step S509 may be referred to for specific implementation of the step S917, and is not described herein again.
S918: and displaying and calling back and updating the decoding delay, the displaying delay and the average frame rate.
The second electronic device may check the display condition of the video frame on the second electronic device by sending the display callback, which may specifically refer to step S509 and is not described herein again.
S919: determining whether a threshold test is being performed
It is understood that the details of steps S919-S923 can refer to step S510, and are not described herein.
S920: and counting the decoding time delay and the display time delay of the video frame.
It is appreciated that after adjusting the frame loss threshold, the second electronic device can detect the decoding delay and presentation delay of subsequently received video frames.
S921: it is determined whether 60 video frames have been reached.
It is appreciated that the second electronic device may determine whether the adjustment to the frame loss threshold is a valid adjustment after receiving 60 frames of video frames. Therefore, the second electronic device needs to determine whether the received video frame reaches 60 frames after adjusting the frame loss threshold.
S922: and judging whether the adjustment of the frame loss threshold value is effective adjustment.
It is understood that the specific determination method can refer to step S510, and is not described herein again.
S923: the frame loss threshold is decreased by one step.
It is understood that the specific implementation of step S923 can refer to step S605, and will not be described herein.
The following describes an apparatus according to an embodiment of the present application.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present disclosure.
It is understood that the electronic device 100 may perform the method of dynamically adjusting the frame loss threshold shown in fig. 4, 5, and 9. It is understood that the first electronic device and the second electronic device may be the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management Module 140, a power management Module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication Module 150, a wireless communication Module 160, an audio Module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor Module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the Processor 110 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In an embodiment of the present application, the first electronic device may be the electronic device 100, and the specific process of displaying a picture on the first electronic device is as follows: the processor 110 completes the composition of the plurality of screen layers (WMS layer display, and SurfaceFlinger layer composition), and the result is then sent to the display screen 194 for display (HWC/DSS display, and main screen display). In addition, the processor 110 of the first electronic device completes the encoding and packaging of the video frames (virtual display, VTP/TCP packaging), and the packaged video frames are finally sent to the second electronic device through the wireless communication module 160.
In yet another embodiment of the present application, the second electronic device, i.e. the device receiving the video frames, may be the electronic device 100. The wireless communication module 160 of the second electronic device receives the video frame data from the first electronic device. These video frame data are processed by the processor 110 through a series of reverse unpacking (VTP/TCP unpacking and RTP unpacking) and decoding (MediaCodec) operations to obtain video frame data that can actually be used for display. These video frame data then go through rendering (MediaCodec) and layer composition (SurfaceFlinger layer composition), and are finally sent to the display screen 194 for display (main screen display).
It is understood that the processing flow of the audio data is similar to that of the video data (video frames), and is not described herein again.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The Interface may include an Integrated Circuit (I2C) Interface, an Inter-Integrated Circuit built-in audio (I2S) Interface, a Pulse Code Modulation (PCM) Interface, a Universal Asynchronous Receiver/Transmitter (UART) Interface, a Mobile Industry Processor Interface (MIPI), a General-Purpose Input/Output (GPIO) Interface, a Subscriber Identity Module (SIM) Interface, and/or a Universal Serial Bus (USB) Interface, etc.
The I2C interface is a bi-directional synchronous Serial bus that includes a Serial Data Line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI Interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices 100, such as AR devices and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The Wireless Communication module 160 may provide solutions for Wireless Communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time-Division Code Division Multiple Access (Time-Division Multiple Access, TD-SCDMA), Long Term Evolution (Long Term Evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a Global Navigation Satellite System (GLONASS), a Beidou Navigation Satellite System (BDS), a Quasi-Zenith Satellite System (QZSS), and/or a Satellite Based Augmentation System (SBAS).
In one embodiment of the present application, communication between the first electronic device and the second electronic device may be implemented through the wireless communication module 160. It is understood that the first electronic device and the second electronic device may communicate with each other in a peer-to-peer manner or through a server.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The Display panel may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), an Active Matrix Organic Light-Emitting Diode (Active-Matrix Organic Light-Emitting Diode, AMOLED), a flexible Light-Emitting Diode (FLED), a Mini LED, a Micro-OLED, a Quantum Dot Light-Emitting Diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement the acquisition function via the ISP, camera 193, video codec, GPU, display screen 194, application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image or video visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image or video signal. And the ISP outputs the digital image or video signal to the DSP for processing. The DSP converts the digital image or video signal into image or video signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1. For example, in some embodiments, the electronic device 100 may acquire images of multiple exposure coefficients using the N cameras 193, and further, in video post-processing, the electronic device 100 may synthesize an HDR image by an HDR technique from the images of multiple exposure coefficients.
The digital signal processor is used for processing digital signals, and can process digital images or video signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a Neural-Network (NN) computing processor, which processes input information quickly by using a biological Neural Network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image and video playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The sensor module 180 may include one or more sensors, which may be of the same type or of different types. It is understood that the sensor module 180 shown in fig. 10 is only an exemplary division; other division manners are possible, and the present application is not limited thereto.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected opening or closing state of the holster or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for identifying the posture of the electronic equipment 100, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode.
The ambient light sensor 180L is used to sense the ambient light level.
The fingerprint sensor 180H is used to acquire a fingerprint. The electronic device 100 can utilize the obtained fingerprint characteristics to unlock the fingerprint, access an application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects.
The indicator 192 may be an indicator light, which may be used to indicate a charging state or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be connected to or disconnected from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
Fig. 11 is a schematic diagram of a software structure of an electronic device 100 according to an embodiment of the present application.
In a layered architecture, the software is divided into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the system is divided, from top to bottom, into four layers: an application layer, an application framework layer, a runtime (Runtime) and system library layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 11, the application package may include applications (also referred to as apps) such as camera, gallery, calendar, phone call, map, navigation, WLAN, Bluetooth, music, video, and short message.
In an embodiment of the present application, the application package may further include another application. The user may trigger this application by touch, click, gesture, voice, or the like to perform mirror projection. During mirror projection, the electronic device 100 may serve as the device that transmits the video frames and audio frames (e.g., the first electronic device), or as the device that receives the video frames and audio frames (e.g., the second electronic device). It is understood that the application may be named "wireless screen projection", and the name is not limited in the present application.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 11, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call status (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar and can be used to convey notification-type messages; such notifications can disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog interface. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Runtime (Runtime) includes a core library and a virtual machine. Runtime is responsible for scheduling and management of the system.
The core library comprises two parts: one part consists of the functions that a programming language (for example, the Java language) needs to call, and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), two-dimensional graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provides fusion of two-dimensional (2D) and three-dimensional (3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
The workflow of the software and hardware of the electronic device 100 is exemplarily described below in connection with a mirror projection scenario.
If the electronic device 100 is the device that transmits the video frames and audio frames during mirror projection (e.g., the first electronic device), then when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation), and the raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a click operation whose corresponding control is the wireless screen projection icon as an example, the wireless screen projection application calls the interface of the application framework layer to start the wireless screen projection application, and further calls the kernel layer to start the corresponding driver, so that the video frames and audio frames are transmitted through the wireless communication module 160 to another device (the device that receives the video frames and audio frames during mirror projection, e.g., the second electronic device).
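For ease of understanding only, the following simplified Java sketch mirrors the sender-side flow described above: a raw input event produced by the kernel layer is resolved to an on-screen control, and a click on the wireless screen projection control triggers transmission of video and audio frames through the wireless module. All class and method names here are hypothetical placeholders, not Android framework APIs, and the sketch is not the actual implementation of the first electronic device.

// Hypothetical illustration of the sender-side dispatch flow; all names are invented.
public class SenderFlowSketch {

    // Kernel-layer raw input event: touch coordinates plus a timestamp.
    record RawInputEvent(float x, float y, long timestampMs) {}

    // Placeholder for the wireless communication module that sends frames to the receiver.
    interface WirelessModule {
        void transmit(byte[] videoFrame, byte[] audioFrame);
    }

    // The framework layer would look up which on-screen control the coordinates hit;
    // here we simply assume the wireless screen projection icon was clicked.
    static String resolveControl(RawInputEvent event) {
        return "wireless_screen_projection_icon";
    }

    // Dispatch the event: if the projection icon was clicked, start sending frames.
    static void dispatch(RawInputEvent event, WirelessModule module) {
        if ("wireless_screen_projection_icon".equals(resolveControl(event))) {
            byte[] videoFrame = new byte[0]; // placeholder for an encoded video frame
            byte[] audioFrame = new byte[0]; // placeholder for an encoded audio frame
            module.transmit(videoFrame, audioFrame);
        }
    }
}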
It is understood that the receiving device (e.g., the second electronic device) may turn on the wireless screen projection application by default, or turn it on upon receiving a mirror projection request sent by another device. When starting the wireless screen projection application, the first electronic device can select a second electronic device on which the wireless screen projection application has been started, so that mirror projection can begin after the selection is made and a communication connection is established between the first electronic device and the second electronic device.
It should be noted that the communication connection between the first electronic device and the second electronic device can be established through the wireless communication technology provided by the wireless communication module 160 in fig. 10.
If the electronic device 100 is the device that receives the video frames and audio frames during mirror projection (e.g., the second electronic device), the wireless screen projection application is started accordingly, the video frames and audio frames are received through the wireless communication module 160, and the kernel layer is called to start the display driver and the audio driver, so that the received video frames are displayed through the display screen 194 and the received audio frames are played through the speaker 170A.
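For ease of understanding only, the following Java sketch shows how a receiving device might record frame arrival times in a queue and apply a frame loss threshold to decide whether an incoming video frame should be discarded, in line with the frame-dropping behavior recited in the claims below. The class name, the window size M, and the initial threshold value are illustrative assumptions, and the sketch is not the actual implementation of the second electronic device.

import java.util.ArrayDeque;

// Illustrative sketch only: sliding window of frame arrival times ("first queue")
// and the drop decision based on the frame receiving time difference.
public class FrameDropSketch {
    private final ArrayDeque<Long> receiveTimesMs = new ArrayDeque<>();
    private final int m = 5;                // assumed: compare with the frame received M frames earlier
    private long frameLossThresholdMs = 40; // assumed initial frame loss threshold

    // Returns true if the frame that arrived at nowMs should be discarded.
    public boolean shouldDrop(long nowMs) {
        boolean drop = false;
        if (receiveTimesMs.size() >= m) {
            long mthFrameTimeMs = receiveTimesMs.pollFirst();  // oldest entry, i.e., the Mth-previous frame
            long receiveTimeDiffMs = nowMs - mthFrameTimeMs;   // frame receiving time difference
            // If M frames arrived within less than the threshold, a backlog has built up,
            // so this frame is discarded; otherwise the threshold itself may be adjusted.
            drop = receiveTimeDiffMs < frameLossThresholdMs;
        }
        receiveTimesMs.addLast(nowMs);                         // record the receive time of this frame
        return drop;
    }
}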
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
It should be understood that references herein to "first", "second", "third", "fourth", and other numerical designations are merely for convenience of description and are not intended to limit the scope of the present application.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean that A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, or the portion thereof that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be merged, divided and deleted according to actual needs.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A method for dynamically adjusting a frame loss threshold, applied to a second electronic device, the method comprising:
receiving a video frame sent by a first electronic device;
determining a frame receiving time difference; the frame receiving time difference is the difference between the time of receiving the video frame and the time of receiving an Mth frame video frame; the Mth frame video frame is a video frame sent by the first electronic device before the video frame is sent;
under the condition that the frame receiving time difference is not less than a frame loss threshold, if a first time length does not reach a preset time, reducing the frame loss threshold; the first time length is the time length from the last adjustment of the frame loss threshold to the current time; the frame loss threshold is used for judging whether to discard the video frame;
or, under the condition that the frame receiving time difference is not less than the frame loss threshold, if the first time length reaches the preset time and the frame loss threshold is less than an upper limit of the threshold, increasing the frame loss threshold; if the first time length reaches the preset time and the frame loss threshold is not less than the upper limit of the threshold, reducing the frame loss threshold; the upper limit of the threshold is the maximum value of the frame loss threshold;
recording decoding time delay and display sending time delay of N frames of video frames received after the video frame is received; the decoding time delay is the time from when a video frame arrives at the decoder to when its decoding is completed; the display sending time delay is the time from the completion of decoding to the display of the video frame on the display screen;
judging whether N is equal to a preset frame number;
if N is equal to the preset frame number, determining the average decoding time delay and the average display sending time delay in the current full time period; the current full time period is the period of time from receiving a first video frame until the average decoding time delay and the average display sending time delay are determined;
if the decoding time delay and the display sending time delay of the N frames of video frames are respectively reduced by at least c% compared with the average decoding time delay and the average display sending time delay, determining that the adjusted frame loss threshold is valid; if the decoding time delay and the display sending time delay of the N frames of video frames are not respectively reduced by at least c% compared with the average decoding time delay and the average display sending time delay, determining that the adjusted frame loss threshold is invalid;
if the adjusted frame loss threshold is invalid, reducing the adjusted frame loss threshold and stopping the threshold test; the threshold test is used to determine whether the adjustment to the frame loss threshold is valid.
2. The method of claim 1, wherein the video frame is a video frame not used for the threshold test.
3. The method of claim 2, wherein the method further comprises: under the condition that the frame receiving time difference is smaller than the frame loss threshold, discarding the video frame.
4. The method of any of claims 1-3, wherein after receiving the video frame transmitted by the first electronic device, the method further comprises: recording the time of receiving the video frame; and initializing the frame loss threshold.
5. The method of claim 4, wherein after the recording of the time of receiving the video frame, the method further comprises: storing the time of receiving the video frame in a first queue; wherein the time at which the second electronic device received the Mth frame video frame is also stored in the first queue.
6. The method of claim 5, wherein before the recording of the decoding time delay and the display sending time delay of the N frames of video frames received after the video frame is received, the method further comprises: removing the element that was written first into the first queue, and writing the time of receiving the video frame into the first queue.
7. An electronic device, comprising a display screen, a memory, and one or more processors, wherein the memory is configured to store a computer program, and the one or more processors are configured to invoke the computer program to cause the electronic device to perform the method of any one of claims 1-6.
8. A computer storage medium, comprising: computer instructions; the computer instructions, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-6.
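For ease of understanding only, the following Java sketch illustrates one possible reading of the threshold adjustment and threshold test recited in claim 1. The preset time, the threshold upper limit, the adjustment step, the preset frame number N, the percentage c, and the use of the mean delay over the N frames are illustrative assumptions that the claims do not fix, and the sketch is not the claimed implementation.

// Illustrative sketch only: adjusting the frame loss threshold and testing the adjustment.
public class ThresholdAdjustSketch {
    private long thresholdMs = 40;                        // current frame loss threshold (assumed initial value)
    private long lastAdjustMs = 0;                        // time of the last threshold adjustment
    private static final long PRESET_TIME_MS = 5000;      // assumed "preset time"
    private static final long THRESHOLD_UPPER_MS = 100;   // assumed upper limit of the threshold
    private static final long STEP_MS = 5;                // assumed adjustment step
    private static final int PRESET_FRAME_COUNT = 30;     // assumed preset frame number N
    private static final double C_PERCENT = 10.0;         // assumed c%

    // Called when the frame receiving time difference is not less than the threshold.
    public void adjustThreshold(long nowMs) {
        long firstTimeLengthMs = nowMs - lastAdjustMs;     // time since the last adjustment
        if (firstTimeLengthMs < PRESET_TIME_MS) {
            thresholdMs -= STEP_MS;                        // preset time not reached: reduce the threshold
        } else if (thresholdMs < THRESHOLD_UPPER_MS) {
            thresholdMs += STEP_MS;                        // preset time reached, below the upper limit: increase
        } else {
            thresholdMs -= STEP_MS;                        // preset time reached, at or above the upper limit: reduce
        }
        lastAdjustMs = nowMs;
    }

    // Threshold test: compare the mean decode delay and mean display-sending delay of the
    // N frames received after the adjustment with the averages over the whole period so far.
    // The adjustment counts as valid only if both drop by at least c%; otherwise the
    // adjusted threshold is reduced and the test stops.
    public boolean thresholdTest(double[] decodeDelaysOfN, double[] displayDelaysOfN,
                                 double avgDecodeDelayAll, double avgDisplayDelayAll) {
        if (decodeDelaysOfN.length != PRESET_FRAME_COUNT || displayDelaysOfN.length != PRESET_FRAME_COUNT) {
            throw new IllegalArgumentException("expected delays of exactly N frames");
        }
        boolean valid = mean(decodeDelaysOfN) <= avgDecodeDelayAll * (1 - C_PERCENT / 100.0)
                && mean(displayDelaysOfN) <= avgDisplayDelayAll * (1 - C_PERCENT / 100.0);
        if (!valid) {
            thresholdMs -= STEP_MS;
        }
        return valid;
    }

    private static double mean(double[] values) {
        double sum = 0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }
}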
CN202110713481.0A 2021-06-25 2021-06-25 Method for dynamically adjusting frame loss threshold and related equipment Active CN113473229B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110713481.0A CN113473229B (en) 2021-06-25 2021-06-25 Method for dynamically adjusting frame loss threshold and related equipment
PCT/CN2022/092369 WO2022267733A1 (en) 2021-06-25 2022-05-12 Method for dynamically adjusting frame-dropping threshold value, and related devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110713481.0A CN113473229B (en) 2021-06-25 2021-06-25 Method for dynamically adjusting frame loss threshold and related equipment

Publications (2)

Publication Number Publication Date
CN113473229A CN113473229A (en) 2021-10-01
CN113473229B true CN113473229B (en) 2022-04-12

Family

ID=77873126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110713481.0A Active CN113473229B (en) 2021-06-25 2021-06-25 Method for dynamically adjusting frame loss threshold and related equipment

Country Status (2)

Country Link
CN (1) CN113473229B (en)
WO (1) WO2022267733A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113473229B (en) * 2021-06-25 2022-04-12 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment
CN114025233B (en) * 2021-10-27 2023-07-14 网易(杭州)网络有限公司 Data processing method and device, electronic equipment and storage medium
CN114157902B (en) * 2021-12-02 2024-03-22 瑞森网安(福建)信息科技有限公司 Wireless screen projection method, system and storage medium
CN115550708B (en) * 2022-01-07 2023-12-19 荣耀终端有限公司 Data processing method and electronic equipment
CN116521115A (en) * 2022-01-30 2023-08-01 荣耀终端有限公司 Data processing method and related device
CN114449309B (en) * 2022-02-14 2023-10-13 杭州登虹科技有限公司 Dynamic diagram playing method for cloud guide
CN115102931B (en) * 2022-05-20 2023-12-19 阿里巴巴(中国)有限公司 Method for adaptively adjusting audio delay and electronic equipment
CN117041669B (en) * 2023-09-27 2023-12-08 湖南快乐阳光互动娱乐传媒有限公司 Super-division control method and device for video stream and electronic equipment

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101990087A (en) * 2010-09-28 2011-03-23 深圳中兴力维技术有限公司 Wireless video monitoring system and method for dynamically regulating code stream according to network state
CN104144032A (en) * 2013-05-10 2014-11-12 华为技术有限公司 Frame detection method and device
CN104539917A (en) * 2015-02-03 2015-04-22 成都金本华科技股份有限公司 Method for improving definition of video image
CN105847926A (en) * 2016-03-31 2016-08-10 乐视控股(北京)有限公司 Multimedia data synchronous playing method and device
CN106331835A (en) * 2015-06-26 2017-01-11 成都鼎桥通信技术有限公司 Method of dynamically adjusting data reception cache and video decoding device
CN109155868A (en) * 2016-05-16 2019-01-04 Nec显示器解决方案株式会社 Image display, frame transmission interval control method and image display system
CN109714634A (en) * 2018-12-29 2019-05-03 青岛海信电器股份有限公司 A kind of decoding synchronous method, device and the equipment of live data streams
CN111162964A (en) * 2019-12-17 2020-05-15 国网智能科技股份有限公司 Intelligent station message integrity analysis method and system
CN112073751A (en) * 2020-09-21 2020-12-11 苏州科达科技股份有限公司 Video playing method, device, equipment and readable storage medium
CN112153446A (en) * 2020-09-27 2020-12-29 海信视像科技股份有限公司 Display equipment and streaming media video audio-video synchronization method
CN112154665A (en) * 2019-09-05 2020-12-29 深圳市大疆创新科技有限公司 Video display method, receiving end, system and storage medium
CN112312229A (en) * 2020-10-27 2021-02-02 唐桥科技(杭州)有限公司 Video transmission method and device, electronic equipment and storage medium
CN112822505A (en) * 2020-12-31 2021-05-18 杭州星犀科技有限公司 Audio and video frame loss method, device, system, storage medium and computer equipment

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102368823A (en) * 2011-06-28 2012-03-07 上海盈方微电子有限公司 Video framedropping strategy based on grading mechanism
CN107818789B (en) * 2013-07-16 2020-11-17 华为技术有限公司 Decoding method and decoding device
CN104394421B (en) * 2013-09-23 2018-08-17 贵阳朗玛信息技术股份有限公司 The processing method and processing device of video frame
WO2016207688A1 (en) * 2015-06-26 2016-12-29 Intel Corporation Method and system for improving video quality during call handover
CN105933800A (en) * 2016-04-29 2016-09-07 联发科技(新加坡)私人有限公司 Video play method and control terminal
CN105955688B (en) * 2016-05-04 2018-11-02 广州视睿电子科技有限公司 Play the method and system of PPT frame losings processing
CN106817614B (en) * 2017-01-20 2020-08-04 浙江瑞华康源科技有限公司 Audio and video frame loss device and method
CN106954101B (en) * 2017-04-25 2020-04-28 华南理工大学 Frame loss control method for low-delay real-time video streaming media wireless transmission
CN108737818B (en) * 2018-05-21 2020-09-15 深圳市梦网科技发展有限公司 Frame loss method and device under congestion network and terminal equipment
CN110177308A (en) * 2019-04-15 2019-08-27 广州虎牙信息科技有限公司 Mobile terminal and its audio-video frame losing method in record screen, computer storage medium
CN110351595B (en) * 2019-07-17 2023-08-18 北京百度网讯科技有限公司 Buffer processing method, device, equipment and computer storage medium
CN112087627A (en) * 2020-08-04 2020-12-15 西安万像电子科技有限公司 Image coding control method, device, equipment and storage medium
CN113473229B (en) * 2021-06-25 2022-04-12 荣耀终端有限公司 Method for dynamically adjusting frame loss threshold and related equipment

Also Published As

Publication number Publication date
WO2022267733A1 (en) 2022-12-29
CN113473229A (en) 2021-10-01

Similar Documents

Publication Publication Date Title
CN113473229B (en) Method for dynamically adjusting frame loss threshold and related equipment
US20220229624A1 (en) Screen projection method, system, and related apparatus
CN111316598B (en) Multi-screen interaction method and equipment
WO2020253719A1 (en) Screen recording method and electronic device
CN113726950B (en) Image processing method and electronic equipment
KR102577396B1 (en) Recording frame rate control method and related devices
CN112119641B (en) Method and device for realizing automatic translation through multiple TWS (time and frequency) earphones connected in forwarding mode
WO2022007862A1 (en) Image processing method, system, electronic device and computer readable storage medium
KR102558615B1 (en) Method and electronic device for presenting a video on an electronic device when there is an incoming call
CN112398855A (en) Method and device for transferring application contents across devices and electronic device
WO2022042769A2 (en) Multi-screen interaction system and method, apparatus, and medium
KR102491006B1 (en) Data Transmission Methods and Electronic Devices
JP2022537012A (en) Multi-terminal multimedia data communication method and system
CN114040242A (en) Screen projection method and electronic equipment
US20240045643A1 (en) Codec negotiation and switching method
CN115048012A (en) Data processing method and related device
WO2022156721A1 (en) Photographing method and electronic device
WO2021052388A1 (en) Video communication method and video communication apparatus
WO2022161006A1 (en) Photograph synthesis method and apparatus, and electronic device and readable storage medium
EP4206865A1 (en) Brush effect picture generation method, image editing method and device, and storage medium
CN111131019B (en) Multiplexing method and terminal for multiple HTTP channels
CN114173184A (en) Screen projection method and electronic equipment
WO2022222691A1 (en) Call processing method and related device
WO2022135254A1 (en) Text editing method, electronic device and system
WO2024040990A1 (en) Photographing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant