CN113728622A - Method and device for wirelessly transmitting image, storage medium and electronic equipment - Google Patents


Info

Publication number
CN113728622A
Authority
CN
China
Prior art keywords
processor
gui data
application processor
frame timing
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980095632.1A
Other languages
Chinese (zh)
Other versions
CN113728622B (en)
Inventor
王晓东 (Wang Xiaodong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Feilai Information Technology Co ltd
Original Assignee
Shanghai Feilai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Feilai Information Technology Co ltd
Publication of CN113728622A
Application granted
Publication of CN113728622B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Digital Computer Display Output (AREA)

Abstract

A method and apparatus for wirelessly transmitting an image, a storage medium, and an electronic device are provided. The method includes: generating GUI data with an application processor and sending the GUI data to an image transmission processor through the application processor (S101); receiving video and/or OSD data with the image transmission processor (S102); merging the received GUI data and the video and/or OSD data with the image transmission processor to form display data (S103); and sending the display data to a display for display with the image transmission processor (S104), wherein the application processor and the image transmission processor are independent of each other and are connected through an interface. The method achieves a low-delay image transmission experience and reduces the computing and drawing resources required by the image transmission chip.

Description

Method and device for wirelessly transmitting image, storage medium and electronic equipment
Technical Field
The present invention relates generally to the field of wireless image transmission technology, and more particularly, to a method and apparatus for wirelessly transmitting an image, a storage medium, and an electronic device.
Background
In a wireless video transmission system, the image transmission chip at the receiving end is responsible for receiving video and generating OSD (on-screen display) data. To obtain the minimum image transmission delay, the image transmission chip must composite the video and the OSD under sub-frame-level timing control and send the result directly to the screen for display. However, such application scenarios typically also require the display and operation of a graphical user interface. Drawing a complex GUI (Graphical User Interface) usually requires substantial computing resources and 2D/3D drawing resources, which a dedicated image transmission chip often cannot provide. If the GUI is to be rendered in the dedicated image transmission chip, the resource and capability requirements on that chip rise, which increases its cost.
At present, a common approach is for the image transmission chip to forward the received image transmission data to an external application processor; the external application processor finishes drawing the GUI, merges the layers, and then sends the result to a display. However, this approach increases the delay of image transmission and display and cannot deliver a good low-delay image transmission experience.
Disclosure of Invention
The present invention has been made to solve at least one of the above problems. The invention provides a method and an apparatus for wirelessly transmitting images, a storage medium, and an electronic device, in which a graphical user interface (GUI) is drawn by an external application processor and then sent to an image transmission processor, the image transmission processor merges the GUI with the other layers for display, and a mechanism synchronizes the timing of the externally input GUI with the frame timing of the image transmission processor's screen-refresh display, thereby reducing the display delay of the GUI and reducing the DDR bandwidth occupied on the target chip.
Specifically, an embodiment of the present invention provides a method for wirelessly transmitting an image, including:
generating GUI data with an application processor, and sending the GUI data to an image transmission processor through the application processor;
receiving video and/or OSD data with the image transmission processor;
merging the received GUI data and the video and/or OSD data with the image transmission processor to form display data; and
sending the display data to a display for display with the image transmission processor,
wherein the application processor and the image transmission processor are independent of each other and are connected through an interface.
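The four steps above amount to one receive-end frame cycle. A minimal sketch follows; the function and parameter names are illustrative stand-ins for the two processors and the display, not taken from the patent:

```python
def receive_end_frame(render_gui, recv_video_osd, merge_layers, show):
    """One display-frame cycle of the receive end (steps S101-S104).

    render_gui stands in for the application processor; the remaining
    callables stand in for the image transmission processor and the display.
    """
    gui = render_gui()                    # S101: AP draws the GUI and sends it over
    video_osd = recv_video_osd()          # S102: image transmission processor receives video/OSD
    frame = merge_layers(gui, video_osd)  # S103: layer merge into display data
    show(frame)                           # S104: display data goes to the screen
    return frame
```

In the real system the merge and display steps run on the image transmission processor under sub-frame timing control; this sketch only shows the data flow.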
An embodiment of the present invention further provides an apparatus for wirelessly transmitting an image, including an application processor and an image transmission processor, wherein:
the application processor is configured to generate GUI data and send the GUI data to the image transmission processor;
the image transmission processor is configured to receive the GUI data sent by the application processor and the video and/or OSD data sent by other devices, and to merge the received GUI data and the video and/or OSD data to form display data;
the image transmission processor is further configured to send the display data to a display for display,
wherein the application processor and the image transmission processor are independent of each other and are connected through an interface.
An embodiment of the present invention further provides a storage medium having a computer program stored thereon, which, when run, performs the method for wirelessly transmitting an image according to the present invention.
An embodiment of the invention further provides an electronic device, which includes the above apparatus for wirelessly transmitting an image and a display, the display being connected to the image transmission processor.
Embodiments of the invention provide a method and an apparatus for wirelessly transmitting images, a storage medium, and an electronic device, in which an external application processor draws the GUI and the image transmission processor merges the GUI data sent by the external application processor with the other layers for display. On the one hand, an image transmission processor with limited 2D/3D drawing capability can rely on the external application processor to draw a complex GUI and still complete wireless image transmission; on the other hand, the image transmission processor does not need the capability or the computing resources to draw a complex GUI itself, which reduces the computing and drawing resources it requires and hence its requirements and cost.
Drawings
Fig. 1 illustrates a schematic structural diagram of an example electronic device for implementing a method and apparatus for wirelessly transmitting an image according to an embodiment of the present invention;
FIG. 2 shows a schematic flow diagram of a method for wireless transmission of images according to an embodiment of the invention;
FIG. 3 shows a schematic flow diagram of a method for implementing synchronization in wireless image transmission according to an embodiment of the invention;
FIG. 4 shows a schematic flow chart of a method for synchronization in wireless image transmission according to another embodiment of the present invention;
FIG. 5 shows a schematic block diagram of an apparatus for wirelessly transmitting an image according to an embodiment of the present invention;
FIG. 6 shows a schematic block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely some, not all, of the embodiments of the invention, and that the invention is not limited to the example embodiments described herein. All other embodiments obtained by a person skilled in the art from the embodiments described herein without inventive effort shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In the following description, for the purposes of thorough understanding of the present invention, detailed procedures and detailed structures are set forth in order to explain the present invention, but the present invention may be embodied in other specific forms besides those detailed description.
First, an example electronic device 100 for implementing the method and apparatus for wirelessly transmitting an image according to an embodiment of the present invention is described with reference to fig. 1. As shown in FIG. 1, electronic device 100 includes one or more processors 102, one or more memory devices 104, input/output devices 106, and a communication interface 108, which are interconnected via a bus system 110 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are merely exemplary and not limiting, and the electronic device may have other components and structures, or may not include some of the aforementioned components, as desired.
The processor 102 generally represents any type or form of processing unit capable of processing data or interpreting and executing instructions. In general, the processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), or another form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions. For example, the processor 102 may include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware Finite State Machines (FSMs), Digital Signal Processors (DSPs), or a combination thereof. In particular embodiments, the processor 102 may receive instructions from a software application or module. These instructions may cause the processor 102 to perform the methods for wirelessly transmitting an image described and/or illustrated herein.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. On which one or more computer program instructions may be stored and executed by processor 102 to implement client-side functionality (implemented by the processor) and/or other desired functionality in embodiments of the invention described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input/output device 106 may be a device used by a user to input instructions and output various information to the outside, for example, the input device may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The output devices may include one or more of a display, speakers, and the like.
Communication interface 108 broadly represents any type or form of adapter or communication device capable of facilitating communication between the example electronic device 100 and one or more additional devices. For example, the communication interface 108 may facilitate communication between the electronic device 100 and front-end or accessory electronic devices as well as back-end servers or clouds. Examples of the communication interface 108 include, but are not limited to, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, and any other suitable interface. In one embodiment, the communication interface 108 provides a direct connection to a remote server or head-end device through a link to a network such as the Internet; in other embodiments, the link may instead be to a private network. The communication interface 108 may also provide such a connection indirectly through any other suitable connection.
Exemplary electronic devices for implementing the method and apparatus for wirelessly transmitting images according to embodiments of the present invention may be implemented as smart phones, tablets, PDAs, remote controllers, computers, and the like.
Fig. 2 shows a schematic flow diagram of a method for wireless transmission of images according to an embodiment of the invention. The method for wirelessly transmitting images according to the embodiment of the present invention is described in detail below with reference to fig. 2.
The method for wirelessly transmitting images disclosed in this embodiment is applied to the receiving end of a wireless image transmission link. The receiving-end device includes an application processor (AP) and an image transmission processor. The application processor has relatively strong 2D/3D processing capability and can draw a complex GUI (graphical user interface); the image transmission processor has a certain image processing capability and can receive images sent by the transmitting end, for example over a wireless link such as WIFI. A wireless transmission chip such as a WIFI chip may be integrated in the image transmission processor or provided separately at the receiving end. The application processor and the image transmission processor are independent of each other and are connected through an interface; that is, they are separate pieces of hardware in the receiving-end device, and the interface is, for example, a Mobile Industry Processor Interface (MIPI).
As shown in fig. 2, the method for wirelessly transmitting an image provided by the present embodiment includes:
step S101, GUI data is generated by the application processor and sent to the image transmission processor.
That is, the required GUI is rendered by the application processor provided in the receiving-end device, and the rendered GUI is transmitted to the image transmission processor.
Further, in this embodiment, after the application processor finishes drawing the required GUI and generates GUI data, the GUI data is stored in the buffer of the application processor. As an example, the buffer is a segment of a contiguous buffer of a memory connected to the application processor, which may be, for example, a part of a DDR of an external memory. As another example, the buffer is integrally disposed in an application processor, such as a static RAM of the application processor.
Illustratively, in this embodiment the GUI data is in an ARGB format; that is, after the application processor finishes drawing the required GUI and generates the GUI data, it stores the GUI data in its buffer in the ARGB format.
Illustratively, in this embodiment the application processor sends the GUI data in its buffer to the image transmission processor via the interface in the RGB888 format. For example, the application processor outputs the GUI data in its own buffer as display data in the RGB888 format through the interface, e.g., a MIPI output interface (MIPI DSI out), for transmission to the image transmission processor.
Further, since the ARGB format uses 4 bytes per pixel and the RGB888 format uses 3 bytes per pixel, assuming the GUI data is m pixels by n pixels (m and n being positive integers), m or n needs to be divisible by 3 so that data stored in the ARGB format can be output as whole RGB888 pixels. Illustratively, assuming m is divisible by 3, the GUI data is output with a size of (m × 4/3) × n.
It should be appreciated that the storage and output format of the GUI data is exemplary, and in other embodiments, other formats may be employed as desired, and are not limited to the ARGB format and the RGB888 format.
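The byte arithmetic behind this repacking is easy to check. A hypothetical helper, assuming line-wise output with m divisible by 3 as in the example above:

```python
def rgb888_carrier_size(m, n):
    """Size of the RGB888 frame that carries an m x n ARGB GUI byte-for-byte.

    Each line of m ARGB pixels occupies 4*m bytes, which is re-labelled as
    (4*m)/3 RGB888 pixels, so m must be divisible by 3 (as in the text).
    """
    if m % 3 != 0:
        raise ValueError("ARGB line width m must be divisible by 3")
    return (m * 4) // 3, n
```

For a 1920 × 1080 GUI this gives a 2560 × 1080 RGB888 carrier; both layouts hold 1920 × 4 = 2560 × 3 = 7680 bytes per line.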
Step S102, receiving video and/or OSD data by the image transmission processor.
That is, the image transmission processor receives the video and/or OSD data sent by the transmitting end, for example via WIFI. As an example, the transmitting end is an unmanned aerial vehicle and the receiving end is a smartphone or a remote controller.
And step S103, merging the received GUI data and the video and/or OSD data into layers by using the image transmission processor to form display data.
Specifically, after steps S101 and S102 are completed, the image transmission processor first receives the GUI data from the application processor through the interface, for example a MIPI input interface (MIPI DSI in). Illustratively, in this embodiment the image transmission processor receives the GUI data in the RGB888 format and stores it in a buffer of the image transmission processor. That is, the data enters the image transmission processor with a size of (m × 4/3) × n (again assuming the GUI data is m pixels × n pixels) and is then stored in a contiguous buffer of the image transmission processor. Illustratively, the buffer is integrated in the image transmission processor, such as its static RAM.
Then, the image transmission processor reads the buffered data back as GUI data in the ARGB format, with a size of m pixels × n pixels, and locally (on the image transmission processor) merges the GUI data with the received video and/or OSD data into layers, generating the display data.
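The patent does not specify the blending rule at this level. A minimal per-pixel sketch, assuming standard source-over alpha compositing of the ARGB GUI layer on top of the video/OSD layer (an illustrative assumption), could look like:

```python
def merge_pixel(gui_argb, video_rgb):
    """Blend one ARGB GUI pixel (a, r, g, b; each 0-255) over one RGB pixel."""
    a, gr, gg, gb = gui_argb
    vr, vg, vb = video_rgb

    def blend(fg, bg):
        # Integer source-over compositing: foreground weighted by a,
        # background by (255 - a).
        return (fg * a + bg * (255 - a)) // 255

    return (blend(gr, vr), blend(gg, vg), blend(gb, vb))
```

A fully opaque GUI pixel (a = 255) replaces the video pixel; a fully transparent one (a = 0) lets the video show through.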
And step S104, sending the display data to a display for displaying by using the image transmission processor.
That is, the image transmission processor transmits the display data to the display through a connection interface, such as a MIPI interface.
In this embodiment, the image transmission processor only needs to merge the layers of GUI data and video and/or OSD data; it does not need to render a complex GUI, so strong computing and drawing capability is not required. In other words, the method of this embodiment lets an image transmission processor with modest computing and drawing capability rely on an external application processor to render a complex GUI, thereby reducing the computing and drawing resources required by the image transmission processor and reducing cost.
Further, in this embodiment, in order to reduce the delay between the application processor inputting GUI content into the image transmission processor and that content appearing on the screen, the timing at which the application processor sends the GUI data can be synchronized with the frame timing of the image transmission processor's screen refresh. The image transmission processor can then forward the GUI data to the LCD controller (LCDC) directly, buffering only a small amount of data in a small on-chip buffer, thereby avoiding writes to and reads from the device memory (DDR), reducing the GUI display delay, and avoiding occupying the image transmission processor's DDR bandwidth.
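Once the two frame timings are synchronized, the pass-through reduces to streaming GUI lines through a small FIFO. A behavioral sketch with illustrative names (not from the patent), assuming a jitter bound of a couple of line times:

```python
from collections import deque

def pass_through(gui_lines, lcdc_write, depth=2):
    """Forward incoming GUI lines to the LCDC through a small line FIFO.

    Only `depth` lines are ever buffered on-chip (absorbing input/output
    jitter); nothing is written to or read from the device DDR.
    """
    fifo = deque(maxlen=depth)      # on-chip buffer, sized to the jitter bound
    for line in gui_lines:
        fifo.append(line)
        if len(fifo) == depth:      # keep `depth` lines of headroom, then stream
            lcdc_write(fifo.popleft())
    while fifo:                     # drain the tail at the end of the frame
        lcdc_write(fifo.popleft())
```

The point of the sketch is the invariant: at no time do more than `depth` lines sit in the buffer, which is why synchronization lets the design avoid a full-frame DDR round trip.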
Methods of synchronizing the timing at which the application processor sends the GUI data with the frame timing of the image transmission processor's screen refresh are described below with reference to fig. 3 and fig. 4.
Fig. 3 shows a schematic flow diagram of a method for synchronization in wireless image transmission according to an embodiment of the invention.
As shown in fig. 3, the method for implementing synchronization in wireless image transmission disclosed in this embodiment includes:
step S201, the image transmission processor obtains a frame header signal of the input GUI data, and latches a counter of the image transmission processor with the frame header signal to obtain a 1st count snapshot of the counter.
Illustratively, the image transmission processor obtains the frame header signal of the input GUI data through an input interface (e.g., the MIPI DSI in interface), and then latches a local counter of the image transmission processor with that frame header signal to obtain the 1st count snapshot of the local counter.
Step S202, the image transmission processor obtains a frame timing signal displayed by a screen refreshing mode, and a counter of the image transmission processor is latched through the frame timing signal displayed by the screen refreshing mode to obtain the 2 nd counting snapshot of the counter.
Namely, the image-transmission processor acquires a frame timing signal of the screen-swiping display of a local Liquid Crystal Display Controller (LCDC), and latches a counter of the image-transmission processor through the frame timing signal of the screen-swiping display to obtain a 2 nd counting snapshot of the counter.
Step S203, determining an adjustment amount required by a frame header of the GUI data according to a difference between the 1 st counted snapshot and the 2 nd counted snapshot and a set advance value of the 1 st counted snapshot relative to the 2 nd counted snapshot, as an adjustment amount of a frame timing when the application processor sends the GUI data.
Illustratively, the advanced value of the 1 st counted snapshot relative to the 2 nd counted snapshot is determined according to a set time length that a frame header of the GUI data needs to be advanced relative to a frame header of the screen-refreshing display. The preset time length needed to be advanced is used for resisting the jitter caused by different timing of the image transmission processor and the application processor in the system, because the frame head timing of the external chip can be adjusted according to the frame, the time can not be very long, and if the time is small, the buffer is less, such as 1 or 2 lines of time.
As an example, to ensure that the frame header of the GUI data is kept at the frame header of the local screen-refreshing display for a preset time, a certain data buffer is needed to resist jitter, for example, L line time needs to be advanced, and the 1 st counting snapshot needs to be advanced by a fixed value, for example, k, from the 2 nd counting snapshot; and counting to obtain the difference value between the 1 st counting snapshot and the 2 nd counting snapshot and the difference value of k, namely the amount of the GUI data frame head required to be adjusted. As an example, for example, it is desirable that the 1 st count snapshot be 2 lines ahead of the 2 nd count snapshot, and the value of that k is the count value of the snapshot counter corresponding to the 2 line time; in fact, the difference between snapshot 1 and snapshot 2, not necessarily equal to k, is how much the adjustment is.
Step S204, sending the frame timing adjustment amount of the GUI data to the application processor so as to adjust the frame timing of the GUI data sent by the application processor.
After the frame timing of sending the GUI data by the application processor and the frame timing of the screen refreshing by the image transmission processor are compared through steps S201 to S203, and the adjustment amount of the frame timing of sending the GUI data by the application processor is determined according to the comparison result, the adjustment amount of the frame timing of sending the GUI data by the application processor is sent to the application processor, so as to adjust the frame timing of sending the GUI data by the application processor. Illustratively, the graph-passing processor feeds back the frame timing adjustment of the GUI data to the application processor through a data path, and the application processor adjusts the frame header position of the output interface (e.g., DSI out interface) directly according to the adjustment.
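Steps S201 to S203 amount to a small wrap-aware computation on the two counter snapshots. The sketch below uses illustrative names and assumes a free-running counter that wraps at `counter_modulus`:

```python
def frame_timing_adjustment(snap_gui, snap_lcdc, k, counter_modulus):
    """Adjustment the application processor should apply to its GUI frame header.

    snap_gui:  1st snapshot, latched at the incoming GUI frame header (S201)
    snap_lcdc: 2nd snapshot, latched at the local screen-refresh header (S202)
    k:         desired lead of the GUI header over the refresh header, in counts
    Returns a signed count: > 0 means the GUI header should arrive earlier,
    < 0 means later, 0 means the two timings are already in sync (S203).
    """
    # How far the GUI header actually led the refresh header, modulo wrap-around.
    actual_lead = (snap_lcdc - snap_gui) % counter_modulus
    return k - actual_lead
```

In the fig. 3 scheme this value is fed back over a data path and the application processor shifts its DSI output frame header by it directly.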
It should be appreciated that the method for achieving synchronization in wireless image transmission disclosed in this embodiment requires the application processor to have the capability of actively adjusting its output interface frame timing.
Fig. 4 shows a schematic flow chart of a method for synchronization in wireless image transmission according to another embodiment of the present invention.
As shown in fig. 4, the method for implementing synchronization in wireless image transmission disclosed in this embodiment includes:
step S301, controlling a clock source of the application processor through a digital-to-analog conversion module of the image transmission processor.
Namely, the clock source of the application processor is controlled by outputting a control voltage through the digital-to-analog conversion module of the image transmission processor.
As an example, the application processor uses a voltage-controlled oscillator (VCXO) as its clock source, and the control voltage for the VCXO is derived from the voltage output by the digital-to-analog conversion module (DAC) of the image transmission processor.
Step S302, comparing the frame timing of the GUI data sent by the application processor with the frame timing of the screen refreshing of the image transmission processor, and determining the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result.
The adjustment amount of the frame timing for the application processor to send the GUI data may be obtained by the method described in the foregoing steps S201 to S203, and is not described herein again.
Step S303, adjusting the control voltage of the clock source of the application processor according to the frame timing adjustment amount of the GUI data, thereby adjusting the frequency of the clock source of the application processor.
That is, the frame timing adjustment amount of the GUI data controls the voltage that the image transmission processor's digital-to-analog conversion module (DAC) outputs to the application processor's clock source (VCXO), finely adjusting the frequency of the application processor's clock so that the input GUI data becomes synchronized in timing with the frame timing of the image transmission processor's screen refresh. In other words, fine-tuning the VCXO output frequency adjusts the application processor's working clock, which fine-tunes the frame header position of the GUI data it outputs until it is synchronized with the local frame header.
Illustratively, when the frame timing adjustment amount requires the frame timing of the GUI data to be advanced, the control voltage of the application processor's clock source is adjusted to raise the clock source frequency;
when the frame timing adjustment amount requires the frame timing of the GUI data to be delayed, the control voltage is adjusted to lower the clock source frequency; and
when the frame timing adjustment amount requires no change to the frame timing of the GUI data, the frequency of the application processor's clock source is kept unchanged.
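The three cases above fold into one DAC update rule. The sketch assumes a 12-bit DAC and that a higher DAC code raises the VCXO frequency; both are illustrative assumptions, not stated in the patent:

```python
def adjust_vcxo_dac(dac_code, adjust, step=1, dac_max=4095):
    """Nudge the DAC code that drives the application processor's VCXO.

    adjust > 0: GUI frame timing must be advanced -> raise the clock frequency.
    adjust < 0: GUI frame timing must be delayed  -> lower the clock frequency.
    adjust == 0: keep the frequency unchanged.
    """
    if adjust > 0:
        return min(dac_code + step, dac_max)  # clamp at full-scale
    if adjust < 0:
        return max(dac_code - step, 0)        # clamp at zero
    return dac_code
```

Applying a small step once per frame gives the gradual frequency pull the text describes, rather than a one-shot jump of the frame header.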
It should be understood that the description above in conjunction with fig. 3 and fig. 4 gives exemplary methods for implementing synchronization in wireless image transmission; however, embodiments of the present invention are not limited to the methods shown in fig. 3 and fig. 4, and other suitable methods may be adopted as needed.
Fig. 5 shows a schematic block diagram of an apparatus for wirelessly transmitting an image according to an embodiment of the present invention. As shown in fig. 5, the apparatus 200 for wirelessly transmitting an image of the present embodiment includes an application processor 210 and an image transmission processor 220, where the application processor 210 and the image transmission processor 220 are independent of each other and are connected through an interface. The interface is, for example, a Mobile Industry Processor Interface (MIPI).
The application processor 210 may be any of various types of application processor chips or SOC (system on chip) chips. Illustratively, as shown in fig. 5, the application processor 210 includes a processor (CPU and/or GPU) 211, a liquid crystal display controller (LCDC) 212, and an output interface 213 (e.g., a MIPI output interface). The application processor 210 is connected to the image transmission processor 220, and transmits data to it, through the output interface 213 and the input interface 221 of the image transmission processor 220. In this embodiment, the application processor 210 is further connected with an external memory, a part of which may be used as the first buffer 214 of the application processor 210. Further, in an example of the present invention, the application processor 210 uses a voltage-controlled crystal oscillator (VCXO) 215 as a clock source, which is controlled by a voltage output by a digital-to-analog conversion module (DAC) 226 of the image transmission processor 220.
In this embodiment, the application processor 210 is configured to generate the GUI data required for wireless image transmission and send the GUI data to the image transmission processor 220. The first buffer 214 is used for storing the GUI data generated by the application processor 210. In the present embodiment, the first buffer 214 is a continuous buffer in the external memory of the application processor 210, such as DDR memory. Of course, in other embodiments, the first buffer 214 may also be integrated in the application processor 210, for example, as a static RAM of the application processor 210.
More specifically, in the present embodiment, the processor 211 (i.e., CPU and/or GPU) of the application processor 210 renders a desired GUI, generating GUI data. After the GUI data is generated, the GUI data is stored in the first buffer 214. Illustratively, in this embodiment, the GUI data is in an ARGB format, i.e., the processor 211 stores the GUI data in the first buffer 214 in the ARGB format.
Illustratively, in this embodiment, the liquid crystal display controller 212 of the application processor 210 outputs the GUI data in the first buffer 214 as display data in the RGB888 format through the output interface 213, for example, a MIPI output interface (MIPI DSI out), so as to transmit it to the image transmission processor 220.
Further, since the ARGB format uses 4 bytes per pixel and the RGB888 format uses 3 bytes per pixel, assuming the GUI data has a size of m pixels × n pixels, where m and n are positive integers, m or n needs to be divisible by 3 so that the ARGB data can be repackaged as whole RGB888 pixels for output. Illustratively, assuming m is divisible by 3, the GUI data is output with a size of (m × 4/3) × n.
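The repackaging arithmetic above can be checked with a short helper (an illustrative sketch; the function name is hypothetical):

```python
def rgb888_output_size(m, n):
    """Size, in RGB888 pixels, of an m x n ARGB frame sent over an RGB888 link.

    ARGB uses 4 bytes per pixel and RGB888 uses 3 bytes per pixel, so the
    m*4 bytes of one ARGB line are reinterpreted as m*4/3 RGB888 pixels.
    m must be divisible by 3 for each line to map onto whole RGB888 pixels.
    """
    assert m % 3 == 0, "m must be divisible by 3"
    bytes_per_line = m * 4               # bytes in one ARGB line
    return (bytes_per_line // 3, n)      # whole RGB888 pixels per line, line count

# e.g. a 720 x 480 ARGB GUI frame travels as 960 x 480 RGB888 pixels
```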
It should be appreciated that the storage and output formats of the GUI data are exemplary; in other embodiments, other formats may be employed as desired, and the formats are not limited to the ARGB format and the RGB888 format.
The image transmission processor 220 may employ any of various types of image transmission chips. As shown in fig. 5, the image transmission processor 220 includes an input interface 221, a second buffer 222, a liquid crystal display controller (LCDC) 223, an output interface 224, a timing processing module 225, and a digital-to-analog conversion module 226. The input interface 221 and the output interface 224 are, for example, MIPI interfaces, through which the image transmission processor 220 is connected to the application processor 210 and the display 230, respectively, and performs data transmission. The display 230 is, for example, an LCD display. The liquid crystal display controller (LCDC) 223 is used to control the display data and the timing of the screen refreshing of the display 230, and to receive the video/OSD data transmitted from the transmitting end. The second buffer 222 is used to store the GUI data transmitted by the application processor 210. Illustratively, the second buffer 222 is a buffer integrated in the image transmission processor 220, such as a static RAM of the image transmission processor.
In this embodiment, the image transmission processor 220 is configured to receive the GUI data sent by the application processor 210 and the video and/or OSD data sent by other devices, combine the received GUI data and the video and/or OSD data to form display data, and send the display data to a display for displaying.
Specifically, in this embodiment, the image transmission processor 220 receives, on the one hand, the video/OSD data transmitted by the transmitting end and, on the other hand, the GUI data transmitted by the application processor 210 through the input interface 221, for example, a MIPI input interface (MIPI DSI in), and then stores the GUI data in the second buffer 222. Illustratively, in the present embodiment, the image transmission processor 220 receives the GUI data in the RGB888 format; that is, the data enters the image transmission processor 220 with a size of (m × 4/3) × n (assuming here that the GUI data has a size of m pixels × n pixels) and is stored in the second buffer 222 of the image transmission processor 220, for example, in a segment of continuous buffer. Next, the image transmission processor 220 reads the GUI data in the ARGB format, with a size of m pixels × n pixels, and locally (at the image transmission processor) merges the GUI data with the video and/or OSD data it has received to generate display data. The display data is transmitted to the display 230 through the liquid crystal display controller 223 and the output interface 224 for display.
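The layer merge described above can be illustrated with a per-pixel sketch. The embodiment does not specify the blend rule; straight-alpha "over" compositing of the ARGB GUI onto the video layer is our assumption, as are the function names:

```python
def merge_pixel(gui_argb, video_rgb):
    """Composite one ARGB GUI pixel over one RGB video pixel (straight alpha)."""
    a, r, g, b = gui_argb
    vr, vg, vb = video_rgb
    blend = lambda fg, bg: (fg * a + bg * (255 - a)) // 255
    return (blend(r, vr), blend(g, vg), blend(b, vb))

def merge_layers(gui_frame, video_frame):
    """Merge a GUI layer (rows of ARGB tuples) over a same-size video layer."""
    return [[merge_pixel(g, v) for g, v in zip(gui_row, video_row)]
            for gui_row, video_row in zip(gui_frame, video_frame)]
```

A fully opaque GUI pixel (alpha 255) replaces the video pixel, while a fully transparent one (alpha 0) lets the video show through, which is why storing the GUI in an alpha-carrying format such as ARGB is convenient for this merge.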
In this embodiment, since the image transmission processor only needs to perform layer merging of the GUI data and the video and/or OSD data, and does not need to render a complicated GUI, it does not require strong computing and drawing capability. In other words, the method for wirelessly transmitting images according to the present embodiment allows an image transmission processor with limited computing and drawing capability to present a complex GUI rendered by an external application processor, thereby reducing the computing and drawing resources required by the image transmission processor and lowering the cost.
Further, in this embodiment, the timing processing module 225 is configured to synchronize the timing at which the application processor 210 sends the GUI data with the frame timing of the screen refreshing of the image transmission processor 220, that is, the screen refreshing timing of the LCDC 223, so as to reduce the display delay of the GUI data.
As an example, the timing processing module 225 is specifically configured to: compare the frame timing of the GUI data sent by the application processor 210 with the frame timing of the screen refreshing of the image transmission processor 220, and determine an adjustment amount of the frame timing of the GUI data sent by the application processor 210 according to the comparison result; and send the adjustment amount of the frame timing of the GUI data to the application processor 210 to adjust the frame timing at which the application processor sends the GUI data.
As another example, the digital-to-analog conversion module 226 is used to control the voltage of the clock source 215 of the application processor 210. The timing processing module 225 is configured to compare the frame timing of the GUI data sent by the application processor 210 with the frame timing of the screen refreshing of the image transmission processor 220, and determine an adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result; the digital-to-analog conversion module 226 is further configured to adjust a control voltage of the clock source of the application processor according to the adjustment amount of the frame timing of the GUI data, so as to adjust a frequency of the clock source of the application processor.
Specifically, when the frame timing adjustment amount of the GUI data requires the frame timing of the GUI data to be advanced, the control voltage of the clock source of the application processor is adjusted to raise the frequency of the clock source of the application processor; or
when the adjustment amount requires the frame timing of the GUI data to be delayed, the control voltage of the clock source of the application processor is adjusted to lower the frequency of the clock source of the application processor; or
when the adjustment amount requires no change to the frame timing of the GUI data, the frequency of the clock source of the application processor is kept unchanged.
Further, in this embodiment, the timing processing module 225 includes a counter, and the timing processing module 225 is configured to: acquire a frame header signal of the input GUI data, and latch the counter with the frame header signal to obtain a 1st counting snapshot of the counter; acquire a frame timing signal of the screen-refresh display, and latch the counter with that frame timing signal to obtain a 2nd counting snapshot of the counter; and determine the adjustment amount required for the frame header of the GUI data according to the difference between the 1st counting snapshot and the 2nd counting snapshot and the set advance value of the 1st counting snapshot relative to the 2nd counting snapshot, this adjustment amount serving as the adjustment amount of the frame timing of the GUI data sent by the application processor.
Illustratively, the advance value of the 1st counting snapshot relative to the 2nd counting snapshot is determined according to a set time length by which the frame header of the GUI data needs to be advanced relative to the frame header of the screen-refresh display.
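The counter-snapshot comparison can be made concrete with a short sketch. The function name, the wrap-around handling of the free-running counter, and the sign convention are our assumptions; the embodiment only specifies the difference of the two snapshots and a set advance value:

```python
def frame_timing_adjustment(snap1, snap2, target_advance, counter_period):
    """Adjustment needed for the GUI frame header, in counter ticks.

    snap1: counter latched at the frame header of the incoming GUI data
    snap2: counter latched at the frame timing of the screen-refresh display
    target_advance: set advance of snap1 relative to snap2, in ticks
    counter_period: value at which the free-running counter wraps around
    """
    # Measured advance of the GUI frame header over the local frame header,
    # folded into [0, counter_period) to handle counter wrap-around.
    measured = (snap2 - snap1) % counter_period
    # Positive result: the GUI frame header must come earlier (advance the
    # frame timing); negative: it must come later (delay the frame timing).
    return target_advance - measured
```

The result would then drive the clock-source adjustment described above: a positive value maps to a higher VCXO frequency, a negative value to a lower one, and zero leaves the frequency unchanged.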
Illustratively, in this embodiment, the timing processing module 225 obtains the frame header signal of the input GUI data through the input interface 221, for example, a MIPI input interface, and obtains the frame timing signal of the screen-refresh display from the liquid crystal display controller 223.
According to the apparatus for wirelessly transmitting an image of this embodiment, the GUI is drawn by the application processor and sent to the image transmission processor through the MIPI interface, and the image transmission processor merges the GUI with the other layers for display. Because a mechanism is designed to synchronize the timing of the externally input GUI with the frame timing of the screen-refresh display of the image transmission processor, the GUI display delay is reduced and the DDR bandwidth occupation of the target chip is lowered; at the same time, a low-delay image transmission experience can be obtained, and the computing and drawing resources required by the image transmission processor are reduced.
FIG. 6 shows a schematic block diagram of an electronic device according to an embodiment of the invention. As shown in fig. 6, electronic device 300 includes one or more processors 310 and one or more memories 320.
The processor 310 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, such as an Application Processor (AP) and a graphics processor, and may control other components in the electronic device 300 to perform desired functions. Illustratively, the memory 320 has stored therein one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors 310 to implement a method for wirelessly transmitting images according to an embodiment of the present invention.
The memory 320 is used for implementing program codes of corresponding steps in the method for wirelessly transmitting an image according to an embodiment of the present invention. The memory 320 may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, a Read Only Memory (ROM), hard disk, flash memory, or other persistent storage. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 310 to implement the method for wirelessly transmitting images of the embodiments of the invention described above (implemented by the processor) and/or other desired functions.
In addition, the electronic device may also include a display that is connected to the processor, such as through a MIPI interface.
In one embodiment, the following steps are performed when the program code is executed by the processor 310:
generating GUI data by using an application processor, and sending the GUI data to a graph transmission processor through the application processor;
receiving video and/or OSD data using the graphics rendering processor;
merging the received GUI data and video and/or OSD data to form display data by using the image transmission processor;
and sending the display data to a display for displaying by using the image transmission processor.
Further, when the program code is executed by the processor 310, the following steps are performed:
comparing the frame timing of the GUI data sent by the application processor with the frame timing of screen refreshing of the image transmission processor, and determining the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result;
and sending the adjustment quantity of the frame timing of the GUI data to the application processor so as to adjust the frame timing of the GUI data sent by the application processor.
Alternatively, controlling a clock source of the application processor through a digital-to-analog conversion module of the image transmission processor;
comparing the frame timing of the GUI data sent by the application processor with the frame timing of screen refreshing of the image transmission processor, and determining the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result;
and adjusting the control voltage of the clock source of the application processor according to the frame timing adjustment amount of the GUI data, so as to adjust the frequency of the clock source of the application processor.
Furthermore, according to an embodiment of the present invention, there is also provided a storage medium on which program instructions are stored, which when executed by a computer or a processor, are used for executing the respective steps of the method for wirelessly transmitting an image of an embodiment of the present invention. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In one embodiment, the computer program instructions, when executed by a computer, perform the steps of: generating GUI data by using an application processor, and sending the GUI data to an image transmission processor through the application processor; receiving video and/or OSD data using the image transmission processor; merging the received GUI data and video and/or OSD data to form display data by using the image transmission processor; and sending the display data to a display for displaying by using the image transmission processor.
The embodiment of the invention provides a method and an apparatus for wirelessly transmitting images, a storage medium, and an electronic device. An external application processor is used for drawing the GUI, and the image transmission processor merges the GUI data sent by the external application processor with the other layers for display. On the one hand, for an image transmission processor with limited 2D/3D drawing capability, the external application processor can be used for drawing a complex GUI to complete wireless image transmission; on the other hand, the image transmission processor does not need the capability and computing resources for drawing a complex GUI, so that the computing and drawing resources required by the image transmission processor are reduced, and the requirements on, and cost of, the image transmission processor are lowered.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiment of the present invention or the description thereof, and the protection scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (34)

  1. A method for wirelessly transmitting an image, comprising:
    generating GUI data by using an application processor, and sending the GUI data to an image transmission processor through the application processor;
    receiving video and/or OSD data using the image transmission processor;
    merging the received GUI data and video and/or OSD data to form display data by using the image transmission processor;
    sending the display data to a display for display by the image transmission processor,
    wherein the application processor and the image transmission processor are independent of each other and are connected through an interface.
  2. The method of claim 1, further comprising:
    storing the GUI data in a buffer of the application processor.
  3. The method of claim 1, wherein the GUI data is in an ARGB format.
  4. The method of claim 1, wherein the GUI data has a size of m pixels by n pixels, wherein m and n are positive integers, and wherein m or n is divisible by 3.
  5. The method of claim 1, wherein said application processor sends said GUI data to said image transmission processor in RGB888 format via said interface.
  6. The method of claim 5, wherein said image transmission processor receives said GUI data in RGB888 format and stores said GUI data in a buffer of said image transmission processor.
  7. The method of claim 6, wherein the image transmission processor reads the GUI data from the image transmission processor's buffer in ARGB format and then performs layer combination with the received video and/or OSD data.
  8. The method of claim 1, wherein the application processor and the image transmission processor have MIPI interfaces, and wherein the application processor and the image transmission processor are connected via a MIPI interface.
  9. The method according to any one of claims 1-8, further comprising:
    synchronizing a timing at which the application processor transmits the GUI data with a frame timing of a screen refreshing of the image transmission processor to reduce a display delay of the GUI data.
  10. The method of claim 9, wherein synchronizing the timing of the application processor sending the GUI data with the frame timing of the image transmission processor's screen refreshing comprises:
    comparing the frame timing of the GUI data sent by the application processor with the frame timing of screen refreshing of the image transmission processor, and determining the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result;
    and sending the adjustment quantity of the frame timing of the GUI data to the application processor so as to adjust the frame timing of the GUI data sent by the application processor.
  11. The method of claim 9, wherein synchronizing the timing of the application processor sending the GUI data with the frame timing of the image transmission processor's screen refreshing comprises:
    controlling a clock source of the application processor through a digital-to-analog conversion module of the image transmission processor;
    comparing the frame timing of the GUI data sent by the application processor with the frame timing of screen refreshing of the image transmission processor, and determining the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result;
    and adjusting the control voltage of the clock source of the application processor according to the frame timing adjustment amount of the GUI data, so as to adjust the frequency of the clock source of the application processor.
  12. The method of claim 11, wherein:
    when the frame timing adjustment amount of the GUI data requires the frame timing of the GUI data to be advanced, the control voltage of the clock source of the application processor is adjusted to raise the frequency of the clock source of the application processor; or
    when the adjustment amount requires the frame timing of the GUI data to be delayed, the control voltage of the clock source of the application processor is adjusted to lower the frequency of the clock source of the application processor; or
    when the adjustment amount requires no change to the frame timing of the GUI data, the frequency of the clock source of the application processor is kept unchanged.
  13. The method of claim 10 or 11, wherein comparing the frame timing of the application processor transmitting the GUI data with the frame timing of the image transmission processor's screen refreshing and determining the amount of adjustment of the frame timing of the application processor transmitting the GUI data based on the comparison comprises:
    the image transmission processor obtains a frame header signal of the input GUI data, and latches a counter of the image transmission processor with the frame header signal to obtain a 1st counting snapshot of the counter;
    the image transmission processor obtains a frame timing signal of the screen-refresh display, and latches the counter of the image transmission processor with that frame timing signal to obtain a 2nd counting snapshot of the counter;
    and determining the adjustment amount required for the frame header of the GUI data according to the difference between the 1st counting snapshot and the 2nd counting snapshot and the set advance value of the 1st counting snapshot relative to the 2nd counting snapshot, this adjustment amount serving as the adjustment amount of the frame timing of the GUI data sent by the application processor.
  14. The method according to claim 13, wherein the advance value of the 1st counting snapshot relative to the 2nd counting snapshot is determined according to a set time length by which a frame header of the GUI data needs to be advanced relative to a frame header of the screen-refresh display.
  15. The method of claim 13, wherein the image transmission processor obtains the frame header signal of the input GUI data through a MIPI receiving interface.
  16. An apparatus for wirelessly transmitting an image, comprising an application processor and an image transmission processor,
    the application processor is used for generating GUI data and sending the GUI data to the image transmission processor;
    the image transmission processor is used for receiving the GUI data sent by the application processor and the video and/or OSD data sent by other devices, and combining the received GUI data and the video and/or OSD data to form display data;
    the image transmission processor is also used for sending the display data to a display for displaying,
    wherein the application processor and the image transmission processor are independent of each other and are connected through an interface.
  17. The apparatus of claim 16, further comprising:
    a first buffer into which the application processor stores the generated GUI data,
    wherein the first buffer is integrally disposed in the application processor or disposed in a memory coupled to the application processor.
  18. The apparatus of claim 16, wherein the GUI data is in an ARGB format.
  19. The apparatus of claim 16, wherein the GUI data has a size of m pixels by n pixels, wherein m and n are positive integers, and wherein m or n is divisible by 3.
  20. The apparatus of claim 16, wherein said application processor sends said GUI data to said image transmission processor in RGB888 format.
  21. The apparatus of claim 20,
    the image transmission processor is integrated with a second buffer, and the image transmission processor receives the GUI data in RGB888 format and stores the GUI data in the second buffer.
  22. The apparatus of claim 21,
    the image transmission processor reads the GUI data from the second buffer in the ARGB format and then merges it with the received video and/or OSD data.
  23. The apparatus of claim 16, wherein the application processor and the image transmission processor have MIPI interfaces, and wherein the application processor and the image transmission processor are connected via a MIPI interface.
  24. The apparatus of any of claims 16-23, wherein the image transmission processor further comprises a timing processing module,
    the timing processing module is used for synchronizing the time sequence of the GUI data sent by the application processor and the frame timing of the screen refreshing of the image transmission processor so as to reduce the display delay of the GUI data.
  25. The apparatus according to claim 24, wherein the timing processing module is configured to:
    compare the frame timing of the GUI data sent by the application processor with the frame timing of the screen refresh of the image transmission processor, and determine the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result; and
    send the adjustment amount of the frame timing of the GUI data to the application processor, so as to adjust the frame timing of the GUI data sent by the application processor.
  26. The apparatus of claim 24, wherein the image transmission processor further comprises a digital-to-analog conversion module configured to control a control voltage of a clock source of the application processor;
    the timing processing module is configured to compare the frame timing of the GUI data sent by the application processor with the frame timing of the screen refresh of the image transmission processor, and to determine the adjustment amount of the frame timing of the GUI data sent by the application processor according to the comparison result; and
    the digital-to-analog conversion module is further configured to adjust the control voltage of the clock source of the application processor according to the adjustment amount of the frame timing of the GUI data, so as to adjust the frequency of the clock source of the application processor.
  27. The apparatus of claim 26, wherein:
    when the adjustment amount of the frame timing of the GUI data requires the frame timing of the GUI data to be advanced, the control voltage of the clock source of the application processor is adjusted to raise the frequency of the clock source of the application processor; or
    when the adjustment amount of the frame timing of the GUI data requires the frame timing of the GUI data to be delayed, the control voltage of the clock source of the application processor is adjusted to lower the frequency of the clock source of the application processor; or
    when the adjustment amount of the frame timing of the GUI data requires no change to the frame timing of the GUI data, the frequency of the clock source of the application processor is kept unchanged.
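The three branches of claim 27 can be sketched as a single decision function. The units are assumptions: the adjustment amount is taken as a signed quantity and the control voltage is expressed in millivolts, and a higher control voltage is assumed to raise the clock frequency, as in a typical voltage-controlled oscillator:

```python
def adjust_clock_control_mv(adjustment, v_ctrl_mv, step_mv=10):
    """Return a new clock-source control voltage (in assumed millivolts)
    for the application processor, given the signed frame-timing
    adjustment amount of the GUI data.

    adjustment > 0: frame timing must be advanced -> raise the voltage
    (assumed to raise the clock frequency);
    adjustment < 0: frame timing must be delayed -> lower the voltage;
    adjustment == 0: keep the voltage, and hence the frequency, unchanged.
    """
    if adjustment > 0:
        return v_ctrl_mv + step_mv   # advance: speed the clock up
    if adjustment < 0:
        return v_ctrl_mv - step_mv   # delay: slow the clock down
    return v_ctrl_mv                 # no change required
```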
  28. The apparatus according to claim 25 or 26, wherein the timing processing module comprises a counter, and the timing processing module is configured to:
    acquire a frame header signal of the input GUI data, and latch the counter with the frame header signal to obtain a 1st count snapshot of the counter;
    acquire the frame timing signal of the screen-refresh display, and latch the counter with the frame timing signal of the screen-refresh display to obtain a 2nd count snapshot of the counter; and
    determine, according to the difference between the 1st count snapshot and the 2nd count snapshot and a set advance value of the 1st count snapshot relative to the 2nd count snapshot, the adjustment required by the frame header of the GUI data, which serves as the adjustment amount of the frame timing of the GUI data sent by the application processor.
  29. The apparatus according to claim 28, wherein the advance value of the 1st count snapshot relative to the 2nd count snapshot is determined according to a set time length by which the frame header of the GUI data needs to be advanced relative to the frame header of the screen-refresh display.
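One possible reading of the counter mechanism of claims 28-29 is sketched below, assuming both snapshots are latched from the same free-running counter that wraps at a known period; the names, the tick units, and the modular arithmetic are assumptions, not stated in the claims:

```python
def frame_timing_adjustment(snap_gui, snap_screen, target_advance, period):
    """Compute the frame-timing adjustment amount from two counter
    snapshots, in counter ticks.

    snap_gui:       counter latched by the frame header of the input GUI data
    snap_screen:    counter latched by the screen-refresh frame timing signal
    target_advance: set advance value of the GUI frame header relative to the
                    screen frame header (claim 29), in ticks
    period:         wrap-around period of the free-running counter

    The measured advance of the GUI frame header over the screen frame
    header is compared with the target; the signed difference is the
    amount by which the application processor's frame timing must shift.
    """
    measured_advance = (snap_screen - snap_gui) % period
    return target_advance - measured_advance
```

A positive result would then call for advancing the GUI frame timing (for example, by raising the clock frequency as in claims 26-27), a negative result for delaying it, and zero for leaving it unchanged.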
  30. The apparatus of claim 28, wherein the timing processing module obtains a frame header signal of the input GUI data through a MIPI receiving interface.
  31. The apparatus of claim 28, wherein the image transmission processor comprises a liquid crystal display control module, and wherein the timing processing module obtains the frame timing signal of the screen-refresh display from the liquid crystal display control module.
  32. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed, performs the method according to any one of claims 1-15.
  33. An electronic device, comprising: the apparatus of any one of claims 16-31 and a display coupled to the image transmission processor.
  34. The electronic device of claim 33, wherein the image transmission processor is coupled to the display via a MIPI interface.
CN201980095632.1A 2019-12-31 2019-12-31 Method and device for wirelessly transmitting images, storage medium and electronic equipment Active CN113728622B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130306 WO2021134397A1 (en) 2019-12-31 2019-12-31 Method and apparatus for wireless transmission of an image, storage medium, and electronic device

Publications (2)

Publication Number Publication Date
CN113728622A true CN113728622A (en) 2021-11-30
CN113728622B CN113728622B (en) 2024-07-30

Family

ID=76686348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980095632.1A Active CN113728622B (en) 2019-12-31 2019-12-31 Method and device for wirelessly transmitting images, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113728622B (en)
WO (1) WO2021134397A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257772B (en) * 2021-11-26 2024-09-10 苏州华兴源创科技股份有限公司 Data transmission adjustment method and device, computer equipment and readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR950016076A (en) * 1993-11-27 1995-06-17 김광호 Frame Timing Signal Extraction Method and System Using Synchronous Signal in Digital Wireless Communication System
US20030231259A1 (en) * 2002-04-01 2003-12-18 Hideaki Yui Multi-screen synthesis apparatus, method of controlling the apparatus, and program for controlling the apparatus
JP2006148227A (en) * 2004-11-16 2006-06-08 Nippon Telegr & Teleph Corp <Ntt> Clock synchronizing apparatus and program
US20070206018A1 (en) * 2006-03-03 2007-09-06 Ati Technologies Inc. Dynamically controlled power reduction method and circuit for a graphics processor
US20080205568A1 (en) * 2007-02-28 2008-08-28 Matsushita Electric Industrial Co., Ltd. Dsrc communication circuit and dsrc communication method
WO2009045245A1 (en) * 2007-09-28 2009-04-09 Thomson Licensing Time-frequency synchronization and frame number detection for dmb-t systems
US20100017717A1 (en) * 2008-07-16 2010-01-21 Kabushiki Kaisha Toshiba Video processing device and control method therefor
US20140335897A1 (en) * 2013-05-09 2014-11-13 KERBspace, Inc. Intelligent urban communications portal and methods
CN105872418A (en) * 2016-03-30 2016-08-17 浙江大华技术股份有限公司 Method and device for superimposing a GUI (Graphical User Interface) image layer on a digital image
US9927809B1 (en) * 2014-10-31 2018-03-27 State Farm Mutual Automobile Insurance Company User interface to facilitate control of unmanned aerial vehicles (UAVs)
CN108132786A (en) * 2017-12-21 2018-06-08 广州路派电子科技有限公司 A GUI design method based on OSD

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005057324A (en) * 2003-08-01 2005-03-03 Pioneer Electronic Corp Picture display device
KR101445603B1 (en) * 2007-08-27 2014-09-29 삼성전자주식회사 Adaptive video processing apparatus and video scaling method based on screen size of display device
CN205453901U (en) * 2015-12-31 2016-08-10 大连捷成科技有限公司 A display control system for video broadcasts
CN105761120A (en) * 2016-03-31 2016-07-13 南京云创大数据科技股份有限公司 Virtual fitting system automatically matching fitting scene and application method

Also Published As

Publication number Publication date
CN113728622B (en) 2024-07-30
WO2021134397A9 (en) 2021-11-04
WO2021134397A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
US8300056B2 (en) Seamless display migration
JP5755333B2 (en) Technology to control display operation
US20160027146A1 (en) Display driver, display system, and method of operating the display driver
US10096304B2 (en) Display controller for improving display noise, semiconductor integrated circuit device including the same and method of operating the display controller
US20150138212A1 (en) Display driver ic and method of operating system including the same
US20140184611A1 (en) Method and apparatus for sending partial frame updates rendered in a graphics processor to a display using framelock signals
JP5079589B2 (en) Display control apparatus and display control method
US20140285505A1 (en) Image processing apparatus and image display system
CN112468863A (en) Screen projection control method and device and electronic device
US20170208219A1 (en) Display controller for generating video sync signal using external clock, an application processor including the controller, and an electronic system including the controller
US20200376375A1 (en) Method and apparatus for performing client side latency enhancement with aid of cloud game server side image orientation control
US11200636B2 (en) Method and apparatus for generating a series of frames with aid of synthesizer to offload graphics processing unit rendering in electronic device
US10504278B1 (en) Blending neighboring bins
KR102155479B1 (en) Semiconductor device
US20140253598A1 (en) Generating scaled images simultaneously using an original image
US11249640B2 (en) Electronic apparatus and controlling method thereof
US11069021B2 (en) Mechanism for providing multiple screen regions on a high resolution display
US20200364926A1 (en) Methods and apparatus for adaptive object space shading
CN113728622B (en) Method and device for wirelessly transmitting images, storage medium and electronic equipment
CN110377534B (en) Data processing method and device
US8675026B2 (en) Image processing apparatus, image processing method, and computer program storage medium
US10785512B2 (en) Generalized low latency user interaction with video on a diversity of transports
CN111199569A (en) Data processing method and device, electronic equipment and computer readable medium
EP2953058A1 (en) Method for displaying images and electronic device for implementing the same
US9135036B2 (en) Method and system for reducing communication during video processing utilizing merge buffering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant