CN115297291A - Data processing method, data processing device, electronic equipment and medium - Google Patents

Data processing method, data processing device, electronic equipment and medium

Info

Publication number
CN115297291A
CN115297291A
Authority
CN
China
Prior art keywords
frame rate
frame
video
video frames
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210934780.1A
Other languages
Chinese (zh)
Inventor
周文欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202210934780.1A
Publication of CN115297291A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/01 - Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0127 - Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Abstract

The present disclosure provides a data processing method and apparatus, an electronic device, and a medium, relating to the field of computers, and in particular to the fields of video processing and intelligent in-vehicle devices. The data processing method comprises the following steps: obtaining at least two video frames based on a screen picture of a first device, wherein the at least two video frames have a first frame rate, and the first frame rate is within a frame rate range supported by the first device; storing the at least two video frames in a first buffer; and reading the first buffer at the frequency of a second frame rate to obtain at least one of the at least two video frames, wherein the second frame rate is within a frame rate range supported by the second device.

Description

Data processing method, data processing device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
With the development of intelligent devices, users are often faced with scenarios of video transmission between different devices. In particular, a user may need to share a screen between different devices. However, because the data processing performance and the screen refresh frequency of different devices may differ, sharing the display of a screen between different devices may cause delay, stuttering, or other problems.
Disclosure of Invention
The present disclosure provides a data processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a data processing method including: obtaining at least two video frames based on a screen of a first device, wherein the at least two video frames have a first frame rate, and the first frame rate is within a frame rate range supported by the first device; storing the at least two video frames in a first buffer; and reading the first buffer at a frequency of a second frame rate to obtain at least one of the at least two video frames, wherein the second frame rate is within a range of frame rates supported by a second device.
According to another aspect of the present disclosure, there is provided a data processing apparatus including: a first video frame obtaining unit, configured to obtain at least two video frames based on a screen of a first device, where the at least two video frames have a first frame rate, and the first frame rate is within a range of frame rates supported by the first device; the video frame buffer unit is used for storing the at least two video frames into a first buffer; and a second video frame obtaining unit, configured to read the first buffer at a frequency of a second frame rate to obtain at least one video frame of the at least two video frames, where the second frame rate is within a frame rate range supported by a second device.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a data processing method according to one or more embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a first apparatus including: a display; a communication interface; and a controller for performing the data processing method according to one or more embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a second apparatus including: a display; and a communication interface; wherein the communication interface is to receive at least one video frame generated by a data processing method according to one or more embodiments of the present disclosure for display on the display.
According to another aspect of the present disclosure, a vehicle including a second apparatus is provided.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform a data processing method according to one or more embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program, when executed by a processor, implements a data processing method according to one or more embodiments of the present disclosure.
According to one or more embodiments of the present disclosure, frame rate adjustment of video frames can be effectively achieved, thereby facilitating scenarios such as sharing among devices.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of example only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a data processing method according to an embodiment of the present disclosure;
FIG. 3 shows a data flow diagram according to an embodiment of the present disclosure;
FIG. 4 shows a timing diagram of a data processing method according to another embodiment of the present disclosure;
FIG. 5 shows a flow diagram of an inter-device communication process according to one embodiment of the present disclosure;
FIG. 6 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure;
FIG. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable the execution of the data processing method according to the present disclosure.
In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein, and is not intended to be limiting.
A user may use client devices 101, 102, 103, 104, 105, and/or 106 to perform various operations and processes. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems, such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smartphones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. Gaming systems may include a variety of handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 can also run any of a variety of additional server applications and/or mid-tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, the server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of the client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in the cloud computing service system, intended to overcome the drawbacks of difficult management and weak service scalability in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The database 130 may be of different types. In certain embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to the commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with this disclosure.
A data processing method 200 according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2.
At step S201, at least two video frames are obtained based on a screen of a first device, the at least two video frames having a first frame rate, and the first frame rate is within a frame rate range supported by the first device.
At step S202, the at least two video frames are stored into a first buffer.
At step S203, the first buffer is read at a frequency of a second frame rate to obtain at least one of the at least two video frames, where the second frame rate is within a frame rate range supported by a second device.
According to the method of the embodiments of the present disclosure, frame rate adjustment of video frames can be effectively realized, thereby facilitating scenarios such as video or screen sharing among devices.
It is understood that the range of frame rates supported by the first device and the range of frame rates supported by the second device may be different, but this does not mean that the range of frame rates supported by the first device and the range of frame rates supported by the second device need to be completely different or not overlap. Therefore, the relationship between the second frame rate and the first frame rate may be various.
The at least one video frame generated based on the second frame rate may be compatible with and therefore available to the second device. The generated at least one video frame may be transmitted to the second device, or may be stored on the first device or other devices for use as needed, etc., and the disclosure is not limited thereto.
As a specific, non-limiting example, a first device may support frame rates of 30 frames per second (fps) to 80fps, while a second device may support a frame rate range of 15fps to 20fps; in this case the second frame rate is always less than the first frame rate. As another specific, non-limiting example, the second device may support a frame rate range of 15fps to 40fps, or of 40fps to 50fps. In such cases, the second frame rate may be less than the first frame rate during one period and greater than or equal to it during another. In further non-limiting examples, the maximum of the frame rate range supported by the second device may exceed the maximum supported by the first device; for example, the second device may support 30fps to 80fps while the first device supports 15fps to 20fps, 15fps to 40fps, or 40fps to 50fps. Such data processing schemes may include repeatedly fetching or copying the at least two video frames to raise the effective frame rate. It is to be understood that the above numbers are merely examples and the disclosure is not limited thereto.
In the related art, in a scenario of sharing a screen between devices, frame rate adjustment (for example, frame dropping) is often implemented either by system-wide frame dropping on the higher-performance device or by lowering the encoding frame rate configured on the encoder. However, the former degrades the experience on the high-performance device, while the latter is inapplicable or works poorly for data stream forms such as raw (bare) streams. For example, the frame rate of a hardware encoder is determined jointly by factors such as the bit rate, the key frame interval, and encoding differences between devices, so the frame rate configured for a bare stream can differ substantially from the actual frame rate. In contrast, according to embodiments of the present disclosure, a buffer may be used as a relay that is written at a first frequency and read at a second frequency, achieving actual frame dropping in a simple manner.
According to some embodiments, storing the at least two video frames in a first buffer may comprise: and sequentially storing the at least two video frames into a first buffer at the frequency of the first frame rate. For example, the first buffer may record the transmission time, capture time, storage time, or other time stamp of each video frame, thereby forming an ordered sequence of the at least two video frames that increases over time in the first buffer.
According to some further optional embodiments, reading the first buffer at a frequency of a second frame rate may comprise reading a most recently stored video frame in the first buffer at a frequency of the second frame rate.
In this way, by simply pushing frames into the buffer at the first frequency upstream and pulling frames out at the second frequency downstream, the screen frame sequence of the first device is decoupled from the frame sequence to be transmitted to the second device, and the frame rate is converted without consuming excessive computing resources or memory space.
As an example, sequentially storing the at least two video frames into the first buffer at a frequency of the first frame rate comprises: sequentially storing a current video frame of the at least two video frames into the first buffer at a frequency of the first frame rate while discarding the previously stored video frame in the first buffer; and reading the most recently stored video frame in the first buffer comprises reading the currently stored video frame in the first buffer as the most recently stored video frame.
In such an example, the first buffer may be configured to store one of the at least two video frames and, in response to receiving a storage request for that video frame, discard the previously stored video frame; reading the most recently stored video frame in the first buffer then amounts to reading the currently stored video frame.
In such an example, the buffer may be configured to simply hold the current last frame of data, and reading the most recently stored video frame at the frequency of the second frame rate may simply comprise reading the currently stored video frame in the first buffer. In accordance with one or more embodiments of the present disclosure, where the first frame rate is lower than the second frame rate (or in the case of, for example, a first lower frame rate, described in detail below), sequentially reading the first buffer at the frequency of the second frame rate may result in some stored video frames being read more than once, ensuring a more uniform and smooth user experience. This is described in detail below with reference to specific embodiments.
As other examples, reading the latest stored frame data in the buffer may be implemented in other ways as understood by those skilled in the art, for example, using last-in first-out logic, and the disclosure is not limited thereto.
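As a concrete illustration of this single-slot "overwrite on store, read the latest" behavior, the following is a minimal sketch in Java. The class name and the generic FrameData parameter are illustrative, not from the disclosure:

```java
import java.util.concurrent.atomic.AtomicReference;

/**
 * Minimal sketch of the "first buffer": a single slot in which each store
 * overwrites (discards) the previous frame, and each read returns the most
 * recently stored frame. Reading does not consume the frame, so when the
 * producer is slower than the reader the same frame is read more than once,
 * which is the repeat/copy behavior described above.
 */
final class LatestFrameBuffer<FrameData> {
    private final AtomicReference<FrameData> slot = new AtomicReference<>();

    /** Called at the first frame rate: overwrite whatever was stored before. */
    void store(FrameData frame) {
        slot.set(frame);
    }

    /** Called at the second frame rate: the latest frame, or null if none yet. */
    FrameData readLatest() {
        return slot.get();
    }
}
```

A producer calling store() at, e.g., 60fps combined with a consumer calling readLatest() every 50ms yields the 20fps downstream sequence discussed in the examples below.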
According to some embodiments, reading the first buffer at a frequency of a second frame rate to obtain at least one of the at least two video frames may comprise: reading frame data from the first buffer to an encoder at a frequency of a second frame rate, and obtaining the at least one video frame by encoding the read frame data via the encoder.
According to such an embodiment, the frame rate-converted data frame can be output to the encoder. The encoder may be an encoder on the first device. That is, according to the scheme of the present embodiment, the frame rate at the input of the encoder is controlled, instead of controlling the output frame rate of the encoder by the parameter setting of the encoder as in the conventional scheme. By addressing the encoder input frame rate from the source, frame rate control can be achieved with simpler parameter settings.
According to some examples, reading the frame data currently stored in the first buffer to an encoder may include drawing the frame data to a cache rendering component of the encoder. According to such an embodiment, instead of rendering through a component such as a virtual screen, the frame-rate-converted data may be drawn directly to the encoder. The cache rendering component may include a Surface as known to those skilled in the art, or other units that cache or render video, or both. As one non-limiting example, where the encoder is MediaCodec, the EGLSurface created from MediaCodec's input Surface may be drawn to through OpenGL; it is understood that the disclosure is not limited thereto.
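For illustration, here is a hedged sketch of creating such a Surface-input hardware encoder with the standard Android MediaCodec API. The resolution, bit rate, and nominal frame rate values are assumptions, not values from the disclosure:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

final class EncoderSetup {
    // Frames drawn onto the returned encoder's input Surface are consumed by the
    // hardware encoder, so the drawing frequency (the second frame rate) directly
    // controls the encoded frame rate.
    static MediaCodec startSurfaceEncoder(Surface[] inputSurfaceOut) throws IOException {
        MediaFormat format =
                MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 20);  // nominal only for a raw stream
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurfaceOut[0] = encoder.createInputSurface();  // must precede start()
        encoder.start();
        return encoder;
    }
}
```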
According to some embodiments, encoding the read frame data via the encoder may include: and performing hardware compression coding on the read frame data through the coder.
In the field of encoding and decoding, hardware compression encoding is faster, offers higher performance, and occupies less CPU, and is therefore advantageous. However, a hardware codec by itself is ill-suited to frame rate adjustment because of parameter-setting issues and the like.
With the scheme of the present application, frame rate adjustment is decoupled from the encoder, so the advantages of hardware encoding are retained while flexible frame rate adjustment is achieved.
According to some embodiments, the method 200 may further comprise: configuring the encoder based on a configuration request message received from the second device; in response to the encoder configuration being complete, sending a configuration complete message to the second device; and in response to receiving a start message from the second device, transmitting the at least one video frame to the second device.
According to such an embodiment, flexible and controllable video frame transmission is achieved through a communication protocol between the first device and the second device. A configuration request may be received from the second device and the encoder configured based on that request. The configuration request message may also be referred to as an encoder configuration request message or an initialization request message, and may be used to initialize, re-initialize, configure, reconfigure, or adjust the parameters of the encoder. Thereafter, a configuration complete message may be sent; optionally, the configuration complete message may include the resulting encoder configuration, for the second device to check or to use for its decoder initialization or configuration. After receiving the start message as the second device's acknowledgement, the first device starts transmitting video frames. This scheme helps ensure the completeness and compatibility of communication between the first device and the second device, avoiding transmission errors or incompatibility.
According to some embodiments, the configuration request message may indicate that connection establishment of the second device with the first device is complete, and the initiation message may indicate that a decoder has been configured at the second device based on the configuration complete message.
For example, the communication may be initiated by the second device as the recipient of the data, which reduces the number of communication round trips and the number of possible communication failures. In particular, the request message may include support parameters of the second device, which may include, but are not limited to, permissions, frame rate parameters, communication protocol parameters, screen parameters, display parameters, and the like. The second device initiates the request and the first device confirms it, which can improve communication efficiency.
As one example, the second device may be configured to: sending the configuration request message to the first device in response to completion of connection establishment with the first device; creating a decoder at the second device based on the configuration complete message; and in response to the decoder creation being complete, sending the initiation message to the first device.
As one example, the second device may be on the vehicle side and may therefore be subject to more limiting factors. Configuring the communication protocol to be initiated from the car machine (in-vehicle head unit) side may further reduce the number of communication transmissions and possible communication failures.
According to some embodiments, the method 200 may further comprise: determining the second frame rate based on the configuration request message, wherein the configuration request message includes a frame rate limitation parameter of the second device, and the configuration completion message includes the second frame rate.
For example, the configuration request message may include a maximum frame rate limiting parameter or frame rate support range of the second device, and the second frame rate may be determined at least in part from such limiting parameter received. Determining the second frame rate "at least partially" according to such received limiting parameters may for example mean that the determination or adjustment of the second frame rate may also optionally be based on the frame rate range of the first device, user expectations, application scenarios, and other parameters, etc., and the disclosure is not limited thereto.
According to some embodiments, the method 200 may further comprise: determining a size parameter of the at least one video frame based on the configuration request message, wherein the configuration complete message includes the determined size parameter.
For example, the configuration request message may include a screen size, a resolution, or a desired size parameter range of the second device, and through such a communication protocol, the coordination and dynamic adjustment of parameters between the first device and the second device can be achieved, and the compatibility of transmission between devices is improved.
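As a hedged sketch of how such negotiation might look in code (the helper names and fields are hypothetical, not part of the disclosure):

```java
/**
 * Hypothetical negotiation helper: clamps the sender's preferred frame rate
 * and size to the limits announced in the configuration request message.
 */
final class ConfigNegotiator {
    /** Never exceed what the second device declares it can consume. */
    static int negotiateFrameRate(int senderPreferredFps, int receiverMaxFps) {
        return Math.min(senderPreferredFps, receiverMaxFps);
    }

    /** Scale the source down to fit the receiver's bounds, keeping aspect ratio. */
    static int[] negotiateSize(int maxW, int maxH, int srcW, int srcH) {
        double s = Math.min(1.0, Math.min(maxW / (double) srcW, maxH / (double) srcH));
        return new int[] { (int) (srcW * s), (int) (srcH * s) };
    }
}
```

The negotiated values would then be echoed back in the configuration complete message, as described above.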
In addition, the message configuration between the first device and the second device according to one or more embodiments of the present disclosure can also improve the flexibility of transmission between devices. There is no need to configure various parameters in advance or to require specially configured first and second devices, so compatibility between first and second devices of many different types can be achieved, and dynamic adjustment can be performed according to network speed, data frame type, application scenario, and the like, to optimize the user experience.
According to some embodiments, the method 200 may further comprise: acquiring a frame rate updating parameter; adjusting the second frame rate based on the frame rate update parameter; and reading the first buffer based on the adjusted second frame rate.
Obtaining the frame rate update parameter may comprise receiving the frame rate update parameter from the second device, or obtaining a frame rate update parameter generated or received in some other way. For example, in response to determining a change in the usage scenario, such as a change in device occupancy, entering or exiting a power saving mode, or a network change, the first device may generate a frame rate update parameter to adjust the transmission frame rate. As another example, the first device may receive (itself or via the second device, etc.) a frame rate update parameter entered by the user, or an instruction from the user to update the frame rate, or feedback that the current frame rate is unsatisfactory, and generate the frame rate update parameter accordingly. In any such exemplary scenario, the complex process of re-initializing the encoder is not required; instead, dynamic adjustment of the frame rate is achieved by controlling the frequency at which the buffer is read.
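A minimal sketch of such dynamic adjustment, assuming the read loop runs on a scheduled executor (the class and the readBufferAndEncodeOnce callback are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: the read loop's period is just a timer interval, so a frame rate
 * update only reschedules the reader; the encoder keeps its configuration.
 */
final class FrameRateController {
    private final ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();
    private final Runnable readBufferAndEncodeOnce;  // draw-and-encode one frame
    private ScheduledFuture<?> task;

    FrameRateController(Runnable readBufferAndEncodeOnce) {
        this.readBufferAndEncodeOnce = readBufferAndEncodeOnce;
    }

    /** Apply a frame rate update parameter, e.g. moving from 20 to 30 fps. */
    synchronized void setFrameRate(int fps) {
        if (task != null) task.cancel(false);
        long periodMs = 1000L / fps;  // 20 fps -> 50 ms, 30 fps -> 33 ms
        task = exec.scheduleAtFixedRate(
                readBufferAndEncodeOnce, 0, periodMs, TimeUnit.MILLISECONDS);
    }
}
```

Because only the timer period changes, no encoder re-initialization is involved.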
According to some embodiments, obtaining at least two video frames based on a screen of a first device may include obtaining at least three video frames during a first period, at least a portion of the at least three video frames having a first higher frame rate and a remaining portion of the at least three video frames having a first lower frame rate lower than the first higher frame rate. In such embodiments, reading the first buffer at a frequency of a second frame rate may comprise: reading the first buffer based on a constant second frame rate during the first period.
In such embodiments, the second frame rate may be kept constant, thereby ensuring fluency in the user experience and reducing computational resources.
As one example, the first device may refresh its screen at different frame rates in different states. In such an example, the first higher frame rate may be a dynamic-picture frame rate (e.g., 60fps) and the first lower frame rate a still-picture frame rate (e.g., 10fps). To ensure a smooth user experience and reduce computational resources, a constant second frame rate may be used across the different first device frame rates.
This may be accomplished by determining or dynamically adjusting a constant second frame rate based at least in part on a frame rate range or frame rate parameter supported by the first device. For example, where it is known that the first device will refresh at two or more different frequencies, a frequency compatible with those frequencies and within the support range of the second device may be predetermined from that frame rate range to achieve a better user experience.
According to some embodiments, the second frame rate is less than the first higher frame rate and greater than or equal to the first lower frame rate, which helps ensure a fluent user experience. Continuing the example above, the frame callback for a still picture may be only 10fps, while a dynamic picture may run at 60fps. In this case, frame dropping can be achieved by fetching a frame every 50ms. Specifically, at 60fps a new video frame is refreshed roughly every 16ms; reading at the second frame rate of one frame per 50ms means that, after the first available frame in each 50ms interval is acquired, the video frames arriving in the remainder of that interval are discarded by being overwritten in the buffer, completing the frame drop. Since about 20fps is the minimum frame rate at which users perceive motion as smooth, this reduces the amount of data to transmit while preserving fluency. As another example, if the frame callback rate is only 10fps but a smooth 20fps is desired, a fetch frequency of once every 50ms may still be used. In such a case, the first buffer is read at the frequency of the second frame rate, which is higher than the frequency at which the at least two video frames are stored into it (e.g., the first lower frame rate). As a specific example, if the buffer has not yet received an updated frame from upstream when the downstream (e.g., the encoder) fetches for the second time, the last stored frame is still retained in the buffer, so the step of reading the first buffer returns the previous frame again, ensuring a fluent user experience; in other words, the same video frame may be transmitted twice. In this example the second frame rate is twice the first lower frame rate, which makes matching the two rates and the frame-fetching policy straightforward. It is to be understood that the above figures are examples and the disclosure is not limited thereto. As another specific example, when the first device has three or more different first frame rates (high, medium, and low), the second frame rate may be set to a value in the middle of those first frame rate ranges to balance fluency of the user experience against data volume.
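The arithmetic in these examples can be made explicit with a small sketch (the helper names are illustrative):

```java
final class FrameRateMath {
    // 60 fps -> a new frame roughly every 1000/60 ≈ 16.7 ms; reading every 50 ms
    // keeps about one frame in three, i.e. an effective 20 fps output.
    // 10 fps -> a new frame every 100 ms; reading every 50 ms returns each stored
    // frame twice, so the output is still a steady 20 fps.
    static long readPeriodMs(int secondFrameRateFps) {
        return 1000L / secondFrameRateFps;  // 20 fps -> 50 ms
    }

    static double keptFraction(int firstFps, int secondFps) {
        // Fraction of source frames that survive the drop, capped at 1; a second
        // frame rate above the first means frames are repeated, not dropped.
        return Math.min(1.0, (double) secondFps / firstFps);  // 60 -> 20 keeps 1/3
    }
}
```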
According to some embodiments, obtaining at least two video frames based on a screen of the first device comprises: obtaining graphic data corresponding to the at least two video frames respectively by recording the screen of the first device; and converting the graphics data into picture data to obtain the at least two video frames.
By converting the graphic data into picture data, real picture data that is convenient to display can be obtained. As a specific example, converting the graphic data into picture data may be performed by an image parsing and decoding component; the image parsing and decoding component may be an ImageReader, but may also be other components, logic, or units capable of parsing and decoding an image, as understood by those skilled in the art, and the disclosure is not limited thereto.
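As one hedged sketch of this step using the standard Android ImageReader API (the resolution, the latestFrameBuffer single-slot holder from the earlier sketch, and the backgroundHandler are assumptions):

```java
import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

// Convert recorded graphics data into picture data: each callback grabs the
// newest Image, copies out its RGBA pixels, and overwrites the buffer slot.
ImageReader reader = ImageReader.newInstance(1280, 720, PixelFormat.RGBA_8888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();  // skip straight to the newest frame
    if (image == null) return;
    try {
        // Single RGBA plane; copy it out so the Image can be released promptly.
        ByteBuffer pixels = image.getPlanes()[0].getBuffer();
        byte[] copy = new byte[pixels.remaining()];  // row-stride padding ignored for brevity
        pixels.get(copy);
        latestFrameBuffer.store(copy);  // discard the previously stored frame
    } finally {
        image.close();  // release the Image back to the reader
    }
}, backgroundHandler);
```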
The data processing scheme of the present disclosure is described below in conjunction with a specific embodiment. It is to be understood that the following detailed description and code logic are merely examples, and the disclosure is not limited thereto.
In such a non-limiting example, the first device may be a smartphone and the second device may be a car machine (in-vehicle head unit), connected by a data cable or wirelessly. If frame dropping were performed on the mobile phone system itself, the phone's fluency would suffer, degrading both the experience of phone applications and the smoothness of the phone system.
Fig. 3 shows a data flow diagram 300 of an interception-based implementation of a frame dropping scheme according to an embodiment of the present disclosure. As shown in fig. 3, a system screen Activity (e.g., a UI Activity) 301 may be input to a virtual screen (VirtualDisplay) 302 through system screen recording (e.g., MediaProjection). In the conventional approach, after the mobile phone connects to the car machine, the screen data recorded through MediaProjection is displayed on the Surface of the VirtualDisplay, and that data is then drawn to a cache rendering component (Surface) 303' and thus passed to the encoder (e.g., MediaCodec). In the conventional scheme this Surface is created by the MediaCodec encoder, which can be done with the encoder's createInputSurface method. The encoded frame buffer can then be obtained from the encoder, and the video frame data is sent to the car machine via a channel protocol such as CarLife; the car machine decodes and displays the video frames according to their encoding format. However, as described herein, setting a frame rate on the hardware-encoding MediaCodec may not take effect, because the video data recorded by the system is a raw (bare) stream and, at least, the encoder's frame rate is jointly determined by several parameters such as the bit rate, the key frame interval, and the frame rate.
Referring to fig. 3, according to a solution of an embodiment of the present disclosure, instead of going to the encoder-created cache rendering component (Surface) 303', the data in the virtual screen 302 may be transferred to the cache rendering component of an image parsing and decoding unit; for example, the data may be input to the ImageReader's Surface. According to this specific embodiment, the phone displays the screen data recorded by MediaProjection on the VirtualDisplay's Surface; here, the ImageReader intercepts the video stream that would originally be displayed on the encoder's Surface and outputs it on the ImageReader's Surface instead.
The data then enters the image parsing and decoding unit 304 and is passed to the cache 306 by the callback method 305. The data may then be drawn to the cache rendering component 307 of the encoder, such as the encoder's Surface, so that the data is input to the encoder 308. For example, the ImageReader continuously calls back its onImageAvailable method, so the latest video frame data is always maintained. The video frame data is then associated to the encoder's Surface through OpenGL's eglSwapBuffers, and the encoded video frame data is sent to the car machine by MediaCodec, completing the interception flow.
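A hedged sketch of this re-wiring with the standard MediaProjection API follows (size, dpi, and naming are illustrative; mediaProjection is assumed to come from an already-granted projection request, and reader is the ImageReader from the earlier sketch):

```java
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;

// The interception of FIG. 3: point the VirtualDisplay at the ImageReader's
// Surface instead of the encoder's input Surface, so recorded frames land in
// the buffer first and reach MediaCodec only at the second frame rate.
VirtualDisplay virtualDisplay = mediaProjection.createVirtualDisplay(
        "screen-share",                              // display name (illustrative)
        1280, 720, /*densityDpi=*/320,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        reader.getSurface(),                         // not encoder.createInputSurface()
        /*callback=*/null, /*handler=*/null);
```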
A dynamic frame dropping process according to a specific embodiment of the present disclosure is described below in conjunction with fig. 4. Fig. 4 depicts a timing diagram 400 of a dynamic frame dropping process as one specific embodiment of the present disclosure. As shown in fig. 4, at the image parsing and decoding unit 401, the ImageReader's onImageAvailable method may be continuously called back after the system frame picture refreshes, and the data is input or pushed to the buffer 402 at the frequency of the first frame rate to maintain the latest frame buffer (equivalent to one frame of video frame data). Alternatively, the buffer 402 may read data from the image parsing and decoding unit 401 at the frequency of the first frame rate. At this point, the client (e.g., on the first device) may launch a rendering thread (RenderThread) 403. The rendering thread 403 may periodically retrieve available frame buffer data from the cache 402 based on the second frame rate.
As already described in this disclosure, the second frame rate may be the frame rate issued by the car machine to the mobile device, or may be set or adjusted according to other conditions. For example, if the maximum frame rate the car machine can support is 20fps, that frame rate value is delivered to the mobile device through the communication protocol established between them, and the rendering thread started on the mobile device will fetch available video frame buffer data 20 times per second, that is, fetch the latest frame buffer every 50ms. As an optional example, considering that the frame rate of the mobile device system itself may be optimized so that the frame rate callbacks differ between still and dynamic pictures, the second frame rate may be determined according to the different frame rates of the mobile device, as described above. As another alternative, when a different car machine is connected or the performance of the user's car machine has been optimized, for example so that it can support 25fps or 30fps, then once the car machine issues the new frame rate, the rendering thread can directly modify the frequency at which it fetches available video frames, completing the frame rate change quickly without re-initializing the encoder.
After the video frame buffer is obtained, the video frame may be encoded, rendered, and so on, in a manner understood by those skilled in the art. For example, it can be drawn through OpenGL onto the EGLSurface created from MediaCodec's Surface; OpenGL's eglSwapBuffers method may be invoked to associate the rendered frame buffer with MediaCodec's Surface; and a MediaCodec encoder may be employed to perform hardware compression encoding of the video frame data. It will be appreciated that the specific methods and code logic described above are merely examples, and the disclosure is not limited thereto. After encoding and rendering, the transmission of the data can be completed through the communication protocol established between the phone and the car machine.
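One tick of that rendering thread might look like the following hedged sketch; the EGL context/surface setup is elided, and uploadAndDrawTexture is a hypothetical helper standing in for the GLES texture upload and quad draw:

```java
import android.opengl.EGL14;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Draw the latest buffered frame onto the EGLSurface created from the
// encoder's input Surface; eglSwapBuffers hands the rendered buffer to
// MediaCodec, which encodes it as one output frame.
void renderTick(EGLDisplay eglDisplay, EGLSurface encoderEglSurface, byte[] frame) {
    if (frame == null) {
        return;                      // nothing stored yet; keep the fixed cadence
    }
    uploadAndDrawTexture(frame);     // hypothetical helper: texture upload + draw
    EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);
}
```

Calling renderTick at the second frame rate (for example every 50ms via the FrameRateController sketched earlier) completes the frame-dropping loop.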
An example flow diagram 500 of an inter-device communication process according to an example embodiment of the present disclosure is described below in conjunction with fig. 5.
As shown in fig. 5, after connection establishment is complete, the second device 520 may send a configuration request message 501, for example an encoder initialization message MSG_CMD_VIDEO_ENCODER_INIT, to the first device 510 over a command message channel. The message body may contain the required parameters, such as the desired width and height of the video or the frame rate limit that the second device can support. On receiving the message 501, the first device 510 may initialize, configure, or reconfigure the encoder, and on completion reply with a configuration complete message 502, such as MSG_CMD_VIDEO_ENCODER_INIT_DONE. The body of the configuration complete message 502 may carry the video parameters finally determined by the first device 510, such as the width and height and the maximum frame rate limit (e.g., the determined second frame rate). On receiving this message, the second device 520 may create, initialize, configure, or reconfigure its decoder according to those parameters.
After the second device 520 has completed the decoder configuration and any other video-processing preparation, a start message 503, for example MSG_CMD_VIDEO_ENCODER_START, may be sent to the first device 510. Thereafter, the first device 510 may begin transmitting video data to the second device 520. As an alternative example, the first device 510 may start generating video data upon receiving the start message 503, and may transmit it to the second device 520 through a channel established specifically for video data transmission (e.g., a TCP socket channel), although the disclosure is not limited thereto.
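From the second device's side, the handshake of FIG. 5 might be sketched as follows. The Channel and Params types and the helper methods are hypothetical; only the three message names come from the description:

```java
// Second device: send INIT -> wait for INIT_DONE -> configure decoder -> send START.
void onConnectionEstablished(Channel cmdChannel) {
    // Body carries the supported parameters: desired width/height, max frame rate, etc.
    cmdChannel.send("MSG_CMD_VIDEO_ENCODER_INIT", localSupportParams());
}

void onCommandMessage(String type, Params body, Channel cmdChannel) {
    if ("MSG_CMD_VIDEO_ENCODER_INIT_DONE".equals(type)) {
        createAndConfigureDecoder(body);                 // use the final width/height/fps
        cmdChannel.send("MSG_CMD_VIDEO_ENCODER_START");  // first device begins streaming
    }
}
```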
For example, the second device 520 can start a MediaCodec hardware decoder according to the encoder parameter configuration information transmitted from the first device 510, decode the corresponding video frames, and display them on the Surface of the corresponding UI canvas. Alternatively, the second device may employ other decoding and display schemes understood by those skilled in the art, and the present disclosure is not limited thereto.
Taking the example in which the first device 510 is a mobile device or phone and the second device 520 is a car machine, with the optimization scheme of the embodiments of the present disclosure the latency in the sliding scenario is reduced from 531ms to 194ms, and when the map is dragged, the base map follows the finger normally, greatly improving the user experience. It will be understood that the above are examples only, and the disclosure is not limited thereto.
According to one or more embodiments of the present disclosure, the provided dynamic frame rate adjustment scheme can not only solve the problem of sliding delay caused by different frame rate ranges supported between devices, but also dynamically realize frame dropping according to different frame rates of different devices to ensure smoothness, thereby greatly improving the sliding experience after screen projection.
A data processing apparatus 600 according to an embodiment of the present disclosure will now be described with reference to fig. 6. The data processing apparatus 600 may include a first video frame obtaining unit 601, a video frame buffering unit 602, and a second video frame obtaining unit 603. The first video frame obtaining unit 601 may be configured to obtain at least two video frames based on a screen of a first device, where the at least two video frames have a first frame rate, and the first frame rate is within a frame rate range supported by the first device. The video frame buffer unit 602 may be configured to store the at least two video frames in a first buffer. The second video frame obtaining unit 603 may be configured to read the first buffer at a frequency of a second frame rate to obtain at least one of the at least two video frames, where the second frame rate is within a range of frame rates supported by a second device.
According to the apparatus of the embodiments of the present disclosure, frame rate adjustment of video frames can be effectively realized, thereby facilitating scenarios such as sharing among devices.
According to an embodiment of the present disclosure, there is also provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of one or more embodiments of the present disclosure.
There is also provided, in accordance with an embodiment of the present disclosure, a first device, including: a display; a communication interface; and a controller for performing the method of one or more embodiments of the present disclosure.
In some embodiments, the first device may be a smart device, such as a smartphone, a smart television, a game control device, a tablet, a wearable device, or the like, although the disclosure is not limited thereto. For example, the first device may also be a car machine device.
According to an embodiment of the present disclosure, there is also provided a second device including: a display; and a communication interface; wherein the communication interface is to receive at least one video frame generated by a method according to one or more embodiments of the present disclosure for display on the display.
In some embodiments, the second device may be a car machine (in-vehicle head unit) device. When a mobile phone or similar device is connected to a car machine for screen projection, frequently operating a page or sliding on a map page makes the delay noticeably larger, and the map keeps moving after the finger has left the screen. This is because a normal mobile phone system can reach a frame rate of 60fps, whereas the maximum frame rate supported by the car machine is generally only 30fps or even 15fps: production far exceeds consumption, and the vehicle end cannot consume the frames in time. As time accumulates, the operation interface becomes more and more stuttered, and screen-projection sliding exhibits high delay. When the data processing scheme of the present disclosure is applied to the scenario of connecting a mobile phone and a car machine, the problem of sliding delay after screen projection can be effectively resolved.
Optionally, the second device may be configured to perform a communication method according to one or more embodiments of the present disclosure.
According to an embodiment of the present disclosure, there is also provided a vehicle including the second apparatus according to one or more embodiments of the present disclosure.
In the technical solution of the present disclosure, the collection, acquisition, storage, use, processing, transmission, provision, and public application of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 7, a block diagram of an electronic device 700 will now be described; the device may be a server or a client of the present disclosure, and is an example of a hardware device that may be applied to aspects of the present disclosure. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701, which may perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
A plurality of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 708 may include, but is not limited to, magnetic or optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 701 performs the various methods and processes described above, such as the method 200 and its variants. For example, in some embodiments, the method 200 and its variants may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method 200 and its variants described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g., by means of firmware) to perform the method 200 and its variants.
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (21)

1. A method of data processing, comprising:
obtaining at least two video frames based on a screen of a first device, wherein the at least two video frames have a first frame rate, and the first frame rate is within a frame rate range supported by the first device;
storing the at least two video frames in a first buffer; and
reading the first buffer at a frequency of a second frame rate to obtain at least one of the at least two video frames, wherein the second frame rate is within a range of frame rates supported by a second device.
2. The method of claim 1, wherein storing the at least two video frames into a first buffer comprises: sequentially storing the at least two video frames into the first buffer at the frequency of the first frame rate.
3. The method of claim 2, wherein reading the first buffer at a frequency of a second frame rate comprises reading a most recently stored video frame in the first buffer at the frequency of the second frame rate.
4. The method of claim 3, wherein,
sequentially storing the at least two video frames into the first buffer at the frequency of the first frame rate comprises: sequentially storing a current video frame of the at least two video frames into the first buffer at the frequency of the first frame rate and discarding a previously stored video frame in the first buffer; and
reading the most recently stored video frame in the first buffer comprises: reading the currently stored video frame in the first buffer as the most recently stored video frame.
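For illustration only (not part of the claims): the single-slot buffering of claims 2-4 can be sketched in Python as below, with all names and rates being assumptions of this sketch. The producer overwrites the one-slot first buffer at the first frame rate, discarding the previously stored frame, while the consumer reads whatever frame is currently stored at the frequency of the second frame rate.

```python
import threading
import time

class LatestFrameBuffer:
    """Single-slot first buffer: storing a new frame discards the previous one."""

    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def store(self, frame):
        # Storing the current video frame discards the previously stored one.
        with self._lock:
            self._frame = frame

    def read_latest(self):
        # The currently stored frame is read as the most recently stored one.
        with self._lock:
            return self._frame

def producer(buf, first_frame_rate, n_frames):
    # Stands in for capturing the screen picture at the first frame rate.
    for i in range(n_frames):
        buf.store(f"frame-{i}")
        time.sleep(1.0 / first_frame_rate)

def consumer(buf, second_frame_rate, duration_s):
    # Reads the buffer at the frequency of the second frame rate.
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        frame = buf.read_latest()
        if frame is not None:
            print("to second device:", frame)
        time.sleep(1.0 / second_frame_rate)

buf = LatestFrameBuffer()
threading.Thread(target=producer, args=(buf, 60, 120), daemon=True).start()
consumer(buf, 30, 2.0)
```

Because the consumer never waits for the producer, a slower second device simply skips frames rather than accumulating delay.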
5. The method of any of claims 1-4, wherein reading the first buffer at a frequency of a second frame rate to obtain at least one of the at least two video frames comprises:
reading frame data from the first buffer to an encoder at a frequency of a second frame rate; and
obtaining the at least one video frame by encoding the read frame data via the encoder.
6. The method of claim 5, wherein encoding, via the encoder, the read frame data comprises: performing, by the encoder, hardware compression encoding on the read frame data.
7. The method of claim 5 or 6, further comprising:
configuring the encoder based on a configuration request message received from the second device;
sending a configuration completion message to the second device; and
transmitting the at least one video frame to the second device in response to receiving a start message from the second device.
8. The method of claim 7, wherein the configuration request message indicates that connection establishment between the second device and the first device is complete, and the start message indicates that a decoder has been configured at the second device based on the configuration completion message.
9. The method of claim 7 or 8, further comprising:
determining the second frame rate based on the configuration request message, wherein the configuration request message includes a frame rate limitation parameter of the second device, and the configuration completion message includes the second frame rate.
10. The method according to any one of claims 7-9, further comprising:
determining a size parameter of the at least one video frame based on the configuration request message, wherein the configuration completion message includes the determined size parameter.
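As a hedged sketch of the handshake of claims 7-10 (the message names, fields, and in-memory transport below are illustrative assumptions, not the patent's protocol format): the configuration request carries the frame rate limitation and size parameters, the completion message echoes the negotiated values, and transmission starts only once the second device signals that its decoder is ready.

```python
import queue
import threading

# In-memory stand-ins for the transport between the first and second devices.
to_first = queue.Queue()
to_second = queue.Queue()

def first_device():
    msg = to_first.get()
    assert msg["type"] == "config_request"   # implies connection establishment is complete
    # Determine the second frame rate from the frame rate limitation parameter (claim 9).
    second_frame_rate = min(60, msg["frame_rate_limit"])
    # Determine the size parameter of the transmitted frames (claim 10).
    width, height = msg["max_width"], msg["max_height"]
    # (The encoder would be configured here, based on the configuration request.)
    to_second.put({"type": "config_complete", "frame_rate": second_frame_rate,
                   "width": width, "height": height})
    msg = to_first.get()
    if msg["type"] == "start":               # the decoder is configured at the second device
        print(f"start transmitting at {second_frame_rate} fps, {width}x{height}")

def second_device():
    to_first.put({"type": "config_request", "frame_rate_limit": 30,
                  "max_width": 1920, "max_height": 1080})
    msg = to_second.get()
    assert msg["type"] == "config_complete"
    # (The decoder would be configured here, based on the completion message.)
    to_first.put({"type": "start"})

t = threading.Thread(target=first_device)
t.start()
second_device()
t.join()
```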
11. The method according to any one of claims 1-10, further comprising:
obtaining a frame rate update parameter;
adjusting the second frame rate based on the frame rate update parameter; and
reading the first buffer based on the adjusted second frame rate.
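A minimal sketch of claim 11's runtime adjustment, assuming (as in the sketch after claim 4) that the reader sleeps between reads of the first buffer: the frame rate update parameter simply replaces the interval used between successive reads.

```python
import threading
import time

class AdjustableRate:
    """Second frame rate that a frame rate update parameter may change at runtime."""

    def __init__(self, fps):
        self._lock = threading.Lock()
        self._fps = fps

    def update(self, frame_rate_update_parameter):
        # Adjust the second frame rate based on the update parameter.
        with self._lock:
            self._fps = frame_rate_update_parameter

    def interval(self):
        with self._lock:
            return 1.0 / self._fps

rate = AdjustableRate(30)
rate.update(24)               # e.g. the second device reports a lower supported rate
time.sleep(rate.interval())   # the reader waits the adjusted interval between reads
```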
12. The method of any one of claims 1-11,
wherein obtaining at least two video frames based on a screen of a first device comprises obtaining at least three video frames during a first period, at least a portion of the at least three video frames having a first higher frame rate and a remaining portion of the at least three video frames having a first lower frame rate that is lower than the first higher frame rate; and
wherein reading the first buffer at a frequency of a second frame rate comprises: reading the first buffer based on a constant second frame rate during the first period.
13. The method of claim 12, wherein the second frame rate is less than the first higher frame rate and greater than or equal to the first lower frame rate.
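A small self-contained simulation (the timings are assumptions of this sketch, not taken from the disclosure) of claims 12 and 13: frames are stored at 60 fps and then at 20 fps within one period, while reads proceed at a constant 30 fps; surplus frames in the fast phase are dropped, and reads in the slow phase repeat the stored frame.

```python
# Frame store times (s): 60 fps for the first 0.5 s, then 20 fps for the next 0.5 s.
stores = [i / 60 for i in range(30)] + [0.5 + i / 20 for i in range(10)]
# Constant 30 fps reads over the same one-second period.
reads = [i / 30 for i in range(30)]

served = []
for t in reads:
    # The single-slot buffer holds the last frame stored at or before time t.
    candidates = [i for i, ts in enumerate(stores) if ts <= t]
    served.append(candidates[-1] if candidates else None)

unique = {s for s in served if s is not None}
dropped = len(stores) - len(unique)          # frames overwritten before being read
repeated = len(served) - len(set(served))    # reads that returned an already-read frame
print(f"stored={len(stores)} reads={len(reads)} dropped={dropped} repeated={repeated}")
```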
14. The method of any of claims 1-13, wherein obtaining at least two video frames based on a screen of a first device comprises:
obtaining graphics data respectively corresponding to the at least two video frames by recording the screen picture of the first device; and
converting the graphics data into picture data to obtain the at least two video frames.
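Claim 14's conversion step, sketched under the assumption (not stated in the claim) that the recording yields one raw RGBA graphics buffer per frame; Pillow's Image.frombytes turns such a buffer into picture data for one video frame.

```python
from PIL import Image

def graphics_to_picture(raw_rgba: bytes, width: int, height: int) -> Image.Image:
    """Convert one recorded graphics buffer into picture data (one video frame)."""
    return Image.frombytes("RGBA", (width, height), raw_rgba)

# A 2x2 dummy "recording" of the screen picture: four opaque red pixels.
frame = graphics_to_picture(bytes([255, 0, 0, 255] * 4), 2, 2)
print(frame.size, frame.mode)   # (2, 2) RGBA
```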
15. A data processing apparatus comprising:
a first video frame obtaining unit, configured to obtain at least two video frames based on a screen of a first device, where the at least two video frames have a first frame rate, and the first frame rate is within a range of frame rates supported by the first device;
a video frame buffer unit, configured to store the at least two video frames into a first buffer; and
a second video frame obtaining unit, configured to read the first buffer at a frequency of a second frame rate to obtain at least one video frame of the at least two video frames, where the second frame rate is within a frame rate range supported by a second device.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-14.
17. A first device, comprising:
a display;
a communication interface; and
a controller for performing the method of any one of claims 1-14.
18. A second device, comprising:
a display; and
a communication interface;
wherein the communication interface is configured to receive the at least one video frame according to any one of claims 1-14 for display on the display.
19. A vehicle comprising the second apparatus of claim 18.
20. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-14.
21. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-14 when executed by a processor.
CN202210934780.1A 2022-08-04 2022-08-04 Data processing method, data processing device, electronic equipment and medium Pending CN115297291A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210934780.1A CN115297291A (en) 2022-08-04 2022-08-04 Data processing method, data processing device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210934780.1A CN115297291A (en) 2022-08-04 2022-08-04 Data processing method, data processing device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN115297291A (en) 2022-11-04

Family

ID=83825928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210934780.1A Pending CN115297291A (en) 2022-08-04 2022-08-04 Data processing method, data processing device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115297291A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070230898A1 (en) * 2006-03-31 2007-10-04 Masstech Group Inc. Random access searching with frame accuracy and without indexing within windows media video
CN108347580A (en) * 2018-03-27 2018-07-31 聚好看科技股份有限公司 A kind of method and electronic equipment of processing video requency frame data
CN111886864A (en) * 2019-03-01 2020-11-03 阿里巴巴集团控股有限公司 Resolution adaptive video coding
CN111541919A (en) * 2020-05-13 2020-08-14 北京百度网讯科技有限公司 Video frame transmission method and device, electronic equipment and readable storage medium
CN113163260A (en) * 2021-03-09 2021-07-23 北京百度网讯科技有限公司 Video frame output control method and device and electronic equipment
CN113225619A (en) * 2021-04-23 2021-08-06 深圳创维-Rgb电子有限公司 Frame rate self-adaption method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
US10397627B2 (en) Desktop-cloud-based media control method and device
US9549152B1 (en) Application content delivery to multiple computing environments using existing video conferencing solutions
WO2022052773A1 (en) Multi-window screen projection method and electronic device
US9485290B1 (en) Method and system for controlling local display and remote virtual desktop from a mobile device
US10715577B2 (en) Virtual desktop encoding based on user input behavior
CN111221491A (en) Interaction control method and device, electronic equipment and storage medium
KR102078894B1 (en) Updating services during real-time communication and sharing-experience sessions
KR20140044840A (en) Media encoding using changed regions
WO2016197590A1 (en) Method and apparatus for providing screenshot service on terminal device and storage medium and device
CN113393367B (en) Image processing method, apparatus, device and medium
US20180357748A1 (en) System and method for dynamic transparent scaling of content display
US20170371614A1 (en) Method, apparatus, and storage medium
WO2023273562A1 (en) Video playback method and apparatus, electronic device, and medium
US9779466B2 (en) GPU operation
CN113810773B (en) Video downloading method and device, electronic equipment and storage medium
CN115297291A (en) Data processing method, data processing device, electronic equipment and medium
CN113965779A (en) Cloud game data transmission method, device and system and electronic equipment
CN114359017B (en) Multimedia resource processing method and device and electronic equipment
CN113382258B (en) Video encoding method, apparatus, device, and medium
KR102547320B1 (en) Electronic device and method for control thereof
CN114510308A (en) Method, device, equipment and medium for storing application page by mobile terminal
CN109710359B (en) Moving picture display method, moving picture display device, computer-readable storage medium and terminal
CN115334159B (en) Method, apparatus, device and medium for processing stream data
US10826838B2 (en) Synchronized jitter buffers to handle codec switches
CN114265648A (en) Code scheduling method, server, client and system for acquiring remote desktop

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination