WO2021253141A1 - Image data processing apparatus and method


Info

Publication number: WO2021253141A1
Authority: WIPO (PCT)
Prior art keywords: layer, graphics, video, thread, size
Application number: PCT/CN2020/096018
Other languages: French (fr), Chinese (zh)
Inventor: 张运强 (Zhang Yunqiang)
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority: CN202080102044.9A (published as CN116075804A)
Priority: PCT/CN2020/096018 (published as WO2021253141A1)
Publication of WO2021253141A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Definitions

  • This application relates to the field of display technology, and in particular to an image data processing apparatus and method.
  • The content that a display device can show includes graphics data and video data.
  • Graphics data includes, for example, status bar, navigation bar, and icon data; each of these corresponds to a graphics layer.
  • Video data corresponds to a video layer. When multiple graphics layers and a video layer are displayed at the same time, the picture the user sees on the display is the result of compositing the graphics layers and the video layer.
  • Compositing the graphics layers is relatively time-consuming and usually exceeds one vertical synchronization (Vsync) period, resulting in frame loss during video playback.
  • The embodiments of the present application provide an image data processing device and method, which solve the frame-loss problem caused by time-consuming graphics-layer composition during video playback.
  • The first aspect of the present application provides an image data processing method.
  • The method includes: compositing multiple graphics layers in a first thread and punching a hole in at least one of the graphics layers to obtain a composited graphics layer that includes the punched-out region; processing the video layer in a second thread to obtain a processed video layer, which can be displayed through the punched-out region; and superimposing the composited graphics layer and the processed video layer to obtain the display data.
  • The composited graphics layer data includes the punched-out region, which is usually set to be transparent so that, when the graphics layer and the video layer are superimposed, the video layer shows through it.
  • Composition and hole-punching of the graphics layers and processing of the video layer are performed in parallel in two threads, so video-frame processing is no longer affected by graphics-layer composition; this effectively solves the frame-loss problem caused by time-consuming graphics-layer composition during video playback, as the sketch below illustrates.
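  The following minimal C++ sketch illustrates the two-thread split described above. All names (composeAndPunchHole, processVideo, the Layer structs) are illustrative stand-ins, not APIs from the patent or from Android:

```cpp
#include <thread>
#include <vector>

struct Layer {};            // placeholder for one layer's pixel data
struct ComposedGraphics {}; // composited graphics with a transparent hole
struct ProcessedVideo {};   // scaled/positioned video layer

// Stubs standing in for the real composition / video-processing work.
ComposedGraphics composeAndPunchHole(const std::vector<Layer>&) { return {}; }
ProcessedVideo processVideo(const Layer&) { return {}; }

int main() {
    std::vector<Layer> graphicsLayers(3);
    Layer videoLayer;
    ComposedGraphics graphics;
    ProcessedVideo video;

    // First thread: composite the graphics layers and punch the hole.
    std::thread graphicsThread([&] { graphics = composeAndPunchHole(graphicsLayers); });
    // Second thread: process the video layer in parallel, unaffected by
    // however long graphics composition takes.
    std::thread videoThread([&] { video = processVideo(videoLayer); });

    graphicsThread.join();
    videoThread.join();
    // The display driver would now superimpose `graphics` over `video`.
}
```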
  • In one implementation, before the graphics layers are composited and the hole is punched in the first thread, the method further includes: setting, in the first thread, information related to the size of the video layer. The hole is then punched in the at least one graphics layer according to that size-related information, and, in the second thread, the video layer is processed according to the same size-related information to obtain the processed video layer.
  • The size-related information of the video layer may include the size of the video layer and its position.
  • For example, it may include one vertex coordinate of the video layer plus two lengths indicating its width and height; or the coordinates of two vertices plus one length, from which the size and display position of the video layer can be uniquely determined; or the coordinates of the four vertices of the video layer, from which its size can be derived.
  • Because the processed video layer and the punched-out region are both sized according to this same setting, the punched-out region in the graphics layer exactly matches the size of the video layer: the processed video layer and the punched-out region align completely, so the processed video layer can be displayed through the punched-out region, as in the sketch below.
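  As an illustration of this size matching, the sketch below punches a transparent rectangle, described by one vertex plus width and height, into an RGBA graphics buffer. The VideoLayerInfo name and the RGBA8888 layout are assumptions made for the example:

```cpp
#include <cstdint>
#include <vector>

// Illustrative size-related information: one vertex plus two lengths,
// the first variant described above (names are ours, not the patent's).
struct VideoLayerInfo {
    int x, y;          // top-left vertex of the video layer
    int width, height; // the two lengths: width and height
};

// Punch a fully transparent hole into an RGBA8888 graphics layer so that
// the video layer can show through. Caller guarantees the rectangle fits.
void punchHole(std::vector<uint32_t>& rgba, int stride, const VideoLayerInfo& v) {
    for (int row = v.y; row < v.y + v.height; ++row)
        for (int col = v.x; col < v.x + v.width; ++col)
            rgba[row * stride + col] = 0x00000000; // alpha (top byte) = 0
}

int main() {
    std::vector<uint32_t> layer(8 * 8, 0xFF202020); // opaque gray 8x8 layer
    punchHole(layer, 8, {2, 2, 4, 4});              // 4x4 hole at (2, 2)
}
```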
  • The multiple graphics layers are composited and the hole is punched in the first thread based on a first vertical synchronization signal; the composited graphics layer and the processed video layer are superimposed based on a second vertical synchronization signal to obtain the display data; the first and second vertical synchronization signals are independent of each other.
  • The first vertical synchronization signal and the second vertical synchronization signal are two independent periodic signals and may have different frame rates and different periods.
  • When the active edge of the first vertical synchronization signal arrives, the graphics layers are composited and the hole is punched in the first thread; when the active edge of the second vertical synchronization signal arrives, the composited graphics layer and the processed video layer are superimposed to obtain the display data.
  • the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
  • In one implementation, the size-related information of the video layer is set first, and the graphics layers are then composited and the hole is punched according to that information.
  • Setting the size-related information of the video layer in the first thread waits for the active edge of the vertical synchronization signal; it is carried out after that active edge arrives. A sketch of two such independent Vsync sources follows.
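  The sketch below models the two independent vertical synchronization signals as two periodic tickers with different periods; the chosen rates (roughly 30 fps and 60 fps) and the callback contents are illustrative assumptions only:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>

// Emits an "active edge" callback with a fixed period, like a Vsync source.
void vsyncLoop(std::chrono::milliseconds period, std::atomic<bool>& running,
               const std::function<void()>& onActiveEdge) {
    while (running) {
        std::this_thread::sleep_for(period);
        onActiveEdge();
    }
}

int main() {
    std::atomic<bool> running{true};
    // First Vsync (here ~30 fps) paces graphics composition and hole-punching...
    std::thread vsync1(vsyncLoop, std::chrono::milliseconds(33), std::ref(running),
                       std::function<void()>([] { /* compose + punch hole */ }));
    // ...the second, independent Vsync (here ~60 fps) paces the superposition.
    std::thread vsync2(vsyncLoop, std::chrono::milliseconds(16), std::ref(running),
                       std::function<void()>([] { /* superimpose for display */ }));
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    running = false;
    vsync1.join();
    vsync2.join();
}
```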
  • In one implementation, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread, and the size-related information of the video layer is set in the first thread according to the size of that first Video Buffer. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread, and the size-related information of the video layer is reset in the first thread according to the changed Video Buffer size.
  • Each Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the stored video layer; when the size of the video layer changes, the size of the Video Buffer changes with it.
  • The first notification information notifies the first thread that the size of the video layer has changed; a sketch of this cross-thread notification follows.
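  A possible shape for the first notification information is sketched below: a one-slot mailbox through which the second thread passes the new Video Buffer size to the first thread. The SizeNotifier name and its methods are hypothetical:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <utility>

// One-slot mailbox: the second (video) thread reports the first Video Buffer,
// or a Video Buffer size change, to the first (graphics) thread.
struct SizeNotifier {
    std::mutex m;
    std::condition_variable cv;
    std::optional<std::pair<int, int>> pending; // new buffer width/height

    // Called from the second thread (first buffer obtained, or size changed).
    void notifySizeChanged(int w, int h) {
        {
            std::lock_guard<std::mutex> lock(m);
            pending = std::make_pair(w, h);
        }
        cv.notify_one();
    }

    // Called from the first thread: blocks until a notification arrives, then
    // returns the new size so the size-related information can be (re)set.
    std::pair<int, int> waitForSize() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return pending.has_value(); });
        std::pair<int, int> size = *pending;
        pending.reset();
        return size;
    }
};
```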
  • In one implementation, the multiple graphics layers are composited and the hole is punched in the first thread by the hardware composer (HWC) to obtain the composited graphics layer.
  • Indication information for multiple graphics buffers (Graphic Buffers) is sent to the HWC, where each Graphic Buffer stores one graphics layer's data; in the first thread, the HWC obtains the graphics layer data from those Graphic Buffers.
  • Alternatively, in the first thread, the graphics processor (GPU) composites the multiple graphics layers and punches the hole in the at least one graphics layer to obtain the composited graphics layer.
  • That is, GPU hardware resources can also be used for graphics-layer composition and hole-punching.
  • In one implementation, the method further includes: in the first thread, sending the size-related information of the video layer to the media hardware (Media HW); in the second thread, the Media HW processes the video layer according to that information to obtain the processed video layer.
  • In one case, the Media HW first receives the size-related information of the video layer in the first thread, then receives the first frame of video layer data in the second thread, and processes that first frame according to the size-related information. In the other case, the Media HW first receives the first frame of video layer data in the second thread and only afterwards receives the size-related information in the first thread.
  • In the latter case, the Media HW does not process the first frame of video layer data immediately; it waits until the size-related information of the video layer has been received before processing the first frame, which avoids processing errors on the first frame. A sketch of this ordering guard follows.
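  The ordering guard for the first frame could look like the sketch below: the second thread blocks until the size-related information has arrived from the first thread before processing the first frame. FirstFrameGate and its methods are illustrative, not the patent's interface:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <optional>
#include <vector>

struct Size { int w, h; };
using Frame = std::vector<uint8_t>;

// Holds the first video frame back until the size-related information has
// arrived from the first thread, so the frame is never processed too early.
class FirstFrameGate {
public:
    void onSizeInfo(Size s) {                // runs on the first thread
        {
            std::lock_guard<std::mutex> lock(m_);
            size_ = s;
        }
        cv_.notify_one();
    }

    void onFirstFrame(const Frame& frame) {  // runs on the second thread
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return size_.has_value(); }); // wait for size info
        processFrame(frame, *size_);         // safe: size info is now present
    }

private:
    static void processFrame(const Frame&, Size) { /* scale/position frame */ }
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<Size> size_;
};
```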
  • Superimposing the composited graphics layer and the processed video layer to obtain the display data is performed by the display driver, as the sketch below illustrates.
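  The superposition itself reduces to choosing, per pixel, between the graphics pixel and the video pixel visible through the hole. The sketch below shows the idea in software; on a real device the display hardware performs this blend:

```cpp
#include <cstdint>
#include <vector>

// Per-pixel superposition: where the composited graphics layer is transparent
// (the punched hole), the video pixel shows through; elsewhere the graphics
// pixel wins. Both buffers are RGBA8888 with alpha in the top byte.
std::vector<uint32_t> superimpose(const std::vector<uint32_t>& graphics,
                                  const std::vector<uint32_t>& video) {
    std::vector<uint32_t> display(graphics.size());
    for (std::size_t i = 0; i < graphics.size(); ++i) {
        uint8_t alpha = graphics[i] >> 24;
        display[i] = (alpha == 0) ? video[i]     // hole: video shows through
                                  : graphics[i]; // opaque graphics on top
    }
    return display;
}

int main() {
    std::vector<uint32_t> g = {0xFF0000FF, 0x00000000}; // opaque pixel, hole
    std::vector<uint32_t> v = {0xFF00FF00, 0xFF00FF00};
    auto d = superimpose(g, v); // d[0] is the graphics pixel, d[1] the video's
}
```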
  • The second aspect of the present application provides an image data processing method.
  • The method includes: in a first thread, SurfaceFlinger calls graphics hardware to composite multiple graphics layers and punch a hole in at least one graphics layer, obtaining the composited graphics layer, which includes the punched-out region; in a second thread, SurfaceFlinger calls the media hardware to process the video layer, obtaining the processed video layer, which can be displayed through the punched-out region; the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data.
  • In one implementation, before SurfaceFlinger invokes the graphics hardware resources to composite the graphics layers and punch the hole, the method further includes: in the first thread, SurfaceFlinger sends the calculated size-related information of the video layer to the Media HW. SurfaceFlinger then calls the graphics hardware, in the first thread, to punch the hole in the at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and, in the second thread, SurfaceFlinger calls the Media HW to process the video layer according to the same size-related information, obtaining the processed video layer.
  • When the active edge of the first vertical synchronization signal arrives, SurfaceFlinger calls the hardware resources in the first thread to composite the graphics layers and punch the hole; when the active edge of the second vertical synchronization signal arrives, the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data; the first and second vertical synchronization signals are independent of each other.
  • Specifically, when the active edge of the first vertical synchronization signal arrives, the calculated size-related information of the video layer is first sent to the Media HW in the first thread, and SurfaceFlinger then calls the hardware resources to composite the graphics layers and punch the hole in the at least one graphics layer according to that information.
  • In one implementation, when the second thread obtains the first Video Buffer, it sends the first notification information to the first thread; in the first thread, SurfaceFlinger obtains the size of that first Video Buffer and sets the size-related information of the video layer accordingly. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread, and SurfaceFlinger obtains the changed Video Buffer size and resets the size-related information of the video layer accordingly. A Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  • In one implementation, in the first thread, SurfaceFlinger calls the hardware composer HWC to composite the multiple graphics layers and punch the hole in the at least one graphics layer, obtaining the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the HWC sends the composited graphics layer to the display driver, and the Media HW sends the processed video layer to the display driver.
  • Specifically, the HWC abstraction sends the indication information of the FrameBuffer storing the composited graphics layer to the display driver; after the Media HW obtains the processed video layer, it sends the indication information of the Video Buffer storing the processed video layer to the display driver.
  • Alternatively, in the first thread, SurfaceFlinger calls the graphics processor GPU to composite the multiple graphics layers and punch the hole in the at least one graphics layer, obtaining the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the GPU returns the composited graphics layer to SurfaceFlinger; SurfaceFlinger sends it to the HWC; the HWC sends it to the display driver; and the Media HW sends the processed video layer to the display driver.
  • Unlike the HWC path, the GPU must first return the composited graphics layer to SurfaceFlinger; SurfaceFlinger then sends the indication information of the FrameBuffer storing the composited graphics layer to the HWC abstraction, and the HWC abstraction forwards that indication information to the display driver.
  • In one implementation, SurfaceFlinger creates the first thread and the second thread in the initialization phase. When the buffer SurfaceFlinger receives is a Graphic Buffer, it notifies the first thread to process the graphics layer data in that Graphic Buffer; when the buffer is a Video Buffer, it notifies the second thread to process the video layer data in that Video Buffer. One Graphic Buffer stores one graphics layer's data, and one Video Buffer stores one video layer's data. A sketch of this dispatch follows.
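  A minimal sketch of this dispatch rule follows; the Dispatcher type and BufferKind tag are illustrative and do not correspond to the real SurfaceFlinger types:

```cpp
#include <deque>
#include <iostream>

// Illustrative buffer tag and dispatcher; not the real SurfaceFlinger types.
enum class BufferKind { Graphic, Video };
struct Buffer { BufferKind kind; int id; };

struct Dispatcher {
    std::deque<Buffer> graphicsQueue; // drained by the first thread
    std::deque<Buffer> videoQueue;    // drained by the second thread

    void onBufferReceived(const Buffer& buf) {
        if (buf.kind == BufferKind::Graphic)
            graphicsQueue.push_back(buf); // first thread composites it
        else
            videoQueue.push_back(buf);    // second thread processes it
    }
};

int main() {
    Dispatcher d;
    d.onBufferReceived({BufferKind::Graphic, 1});
    d.onBufferReceived({BufferKind::Video, 2});
    std::cout << d.graphicsQueue.size() << " graphic buffer(s), "
              << d.videoQueue.size() << " video buffer(s)\n";
}
```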
  • In one implementation, the Media HW receives the first frame of video layer data in the second thread, receives the size-related information of the video layer in the first thread, and processes the first frame in the second thread according to that size-related information.
  • If the Media HW receives the first frame of video layer data in the second thread before receiving the size-related information, it processes the first frame in the second thread only after the size-related information of the video layer has been received in the first thread.
  • the third aspect of the present application provides an image data processing device.
  • The device includes a processor on which software instructions run to form a framework layer, a hardware abstraction layer (HAL), and a driver layer.
  • The HAL includes a graphics hardware abstraction and a media hardware (Media HW) abstraction.
  • The driver layer includes graphics hardware drivers, media hardware drivers, and a display driver. The framework layer is used, in the first thread, to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver to composite multiple graphics layers and punch a hole in at least one of them, obtaining a composited graphics layer that includes the punched-out region. The framework layer is used, in the second thread, to call the media hardware (Media HW) through the media hardware abstraction and the media hardware driver to process the video layer, obtaining a processed video layer that can be displayed through the punched-out region. The display driver is used to superimpose the composited graphics layer and the processed video layer to obtain the display data.
  • the device further includes a transmission interface through which the processor receives data sent by other devices or sends data to other devices.
  • the device can be coupled with hardware resources such as a display, media hardware, or graphics hardware through a connector, a transmission line, or a bus.
  • the device can be a processor chip with image or video processing functions.
  • the device, media hardware, and graphics hardware can be integrated on one chip.
  • the device, media hardware, graphics hardware, and display may be integrated on one terminal.
  • The graphics hardware abstraction corresponds to the graphics hardware driver, and the media hardware abstraction corresponds to the media hardware driver.
  • The graphics hardware can be called through the graphics hardware abstraction and the graphics hardware driver, and the media hardware can be called through the media hardware abstraction and the media hardware driver.
  • That is, the graphics hardware abstraction calls the graphics hardware driver, which accesses the graphics hardware; likewise, the media hardware abstraction calls the media hardware driver, which accesses the media hardware.
  • The composited graphics layer is stored in the FrameBuffer.
  • The graphics hardware abstraction sends the indication information of the FrameBuffer to the display driver, and the display driver reads the composited graphics layer data from the corresponding memory space according to that indication information.
  • After the media hardware obtains the processed video layer, the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver, and the display driver reads the processed video layer data from the memory space corresponding to that indication information.
  • The framework layer is also used to set, in the first thread, the size-related information of the video layer and send it to the media hardware abstraction. The framework layer is specifically used to, in the first thread, call the graphics hardware to composite the multiple graphics layers and punch the hole in at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and, in the second thread, call the Media HW to process the video layer according to the same information, obtaining the processed video layer.
  • The framework layer is specifically used to call the graphics hardware, based on the first vertical synchronization signal, to composite the multiple graphics layers and punch the hole in the first thread.
  • The display driver is specifically used to superimpose the composited graphics layer and the processed video layer, based on the second vertical synchronization signal, to obtain the display data; the first and second vertical synchronization signals are independent of each other.
  • When the active edge of the first vertical synchronization signal arrives, the framework layer first sends the size-related information of the video layer to the media hardware abstraction in the first thread, then calls the graphics hardware to composite the graphics layers and punch the hole according to that information.
  • the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
  • When the second thread obtains the first Video Buffer, it sends the first notification information to the first thread; after receiving the first notification information in the first thread, the framework layer obtains the size of the first Video Buffer and sets the size-related information of the video layer accordingly.
  • Alternatively, when the size of the Video Buffer changes, the framework layer, after receiving the first notification information in the first thread, obtains the changed Video Buffer size and sets the size-related information of the video layer accordingly; the Video Buffer stores the data of one video layer, and its size is related to the size of the video layer.
  • the graphics hardware includes HWC and GPU.
  • the hardware abstraction layer includes the HWC abstraction
  • the driver layer includes the HWC driver and the GPU driver.
  • The framework layer is specifically used to, in the first thread, call the HWC through the HWC abstraction and the HWC driver to composite the multiple graphics layers and punch the hole in at least one graphics layer, obtaining the composited graphics layer. The HWC abstraction is used to send the composited graphics layer to the display driver, and the media hardware abstraction is also used to send the processed video layer to the display driver.
  • Alternatively, the framework layer is specifically used to, in the first thread, call the GPU through the GPU driver to composite the multiple graphics layers and punch the hole in at least one graphics layer, obtaining the composited graphics layer. The GPU is also used to return the composited graphics layer to the framework layer; the framework layer sends it to the HWC abstraction; the HWC abstraction sends it to the display driver; and the Media HW abstraction sends the processed video layer to the display driver.
  • The HWC abstraction sends the indication information of the FrameBuffer storing the composited graphics layer to the display driver, and the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver.
  • The SurfaceFlinger of the framework layer is used to create the first thread and the second thread in the initialization phase. SurfaceFlinger is also used to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; a Graphic Buffer stores one graphics layer's data, and a Video Buffer stores one video layer's data.
  • The Media HW abstraction is specifically used to receive the first frame of video layer data in the second thread and to receive the size-related information of the video layer in the first thread; the framework layer is specifically used to, in the second thread, call the Media HW through the Media HW abstraction to process the first frame of video layer data according to the size-related information of the video layer.
  • the fourth aspect of the present application provides an image data processing device.
  • The device includes: a framework layer; graphics hardware; media hardware (Media HW); and a display driver. The framework layer is used to call the graphics hardware in the first thread to composite the multiple graphics layers and punch a hole in at least one of them, obtaining a composited graphics layer that includes the punched-out region. The framework layer is also used to call the Media HW in the second thread to process the video layer, obtaining a processed video layer that can be displayed through the punched-out region. The display driver is used to superimpose the composited graphics layer and the processed video layer to obtain the display data.
  • The framework layer and the display driver are part of the operating system formed by software instructions running on the processor.
  • Graphics hardware and media hardware may be coupled with the processor through connectors, interfaces, transmission lines or buses, etc. These interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces.
  • The software instructions running on the processor are also used to form a hardware abstraction layer, graphics hardware drivers, and media hardware drivers.
  • The hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware and a media hardware abstraction corresponding to the media hardware.
  • The framework layer is specifically used to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
  • The framework layer is also used to set, in the first thread, the size-related information of the video layer and send it to the Media HW. The framework layer is specifically used to, in the first thread, call the graphics hardware to composite the multiple graphics layers and punch the hole in the at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and, in the second thread, call the Media HW to process the video layer according to the same information, obtaining the processed video layer.
  • The framework layer is specifically used to, based on the first vertical synchronization signal, call the graphics hardware in the first thread to composite the multiple graphics layers and punch the hole in the at least one graphics layer; the display driver is specifically used to, when the second vertical synchronization signal arrives, superimpose the composited graphics layer and the processed video layer to obtain the display data; the first and second vertical synchronization signals are independent of each other.
  • When the active edge of the first vertical synchronization signal arrives, the framework layer first sends the size-related information of the video layer to the Media HW in the first thread, then calls the graphics hardware to composite the graphics layers and punch the hole in the at least one graphics layer according to that information.
  • the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
  • When the second thread obtains the first Video Buffer, it sends the first notification information to the first thread; after receiving the first notification information in the first thread, the framework layer obtains the size of the first Video Buffer and sets the size-related information of the video layer accordingly. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread, and the framework layer obtains the changed Video Buffer size and sets the size-related information accordingly. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  • The graphics hardware includes a hardware composer (HWC). The framework layer is specifically used to, in the first thread, call the HWC to composite the multiple graphics layers and punch the hole in at least one graphics layer, obtaining the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the HWC sends the composited graphics layer to the display driver, and the Media HW sends the processed video layer to the display driver.
  • Sending the composited graphics layer to the display driver is implemented by the HWC abstraction of the HWC in the hardware abstraction layer, and sending the processed video layer to the display driver is implemented by the Media HW abstraction.
  • Alternatively, the graphics hardware includes a graphics processor (GPU). The framework layer is specifically used to, in the first thread, call the GPU to composite the multiple graphics layers and punch the hole in at least one graphics layer. The GPU is also used to return the composited graphics layer to the framework layer; the framework layer sends it to the HWC; the HWC sends it to the display driver; and the Media HW sends the processed video layer to the display driver.
  • The framework layer includes SurfaceFlinger, which is used to create the first thread and the second thread in the initialization phase. SurfaceFlinger is also used to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; the Graphic Buffer stores one graphics layer's data, and the Video Buffer stores one video layer's data.
  • The Media HW is specifically used to receive the first frame of video layer data in the second thread and the size-related information of the video layer in the first thread; the framework layer is specifically used to, in the second thread, call the Media HW to process the first frame of video layer data according to the size-related information of the video layer.
  • the fifth aspect of the present application provides an image data processing device.
  • The device includes a processor, graphics hardware, and media hardware.
  • Software instructions run on the processor to form a framework layer and a display driver. The framework layer is used to call the graphics hardware in the first thread to composite multiple graphics layers and punch a hole in at least one of them, obtaining a composited graphics layer that includes the punched-out region; and to call the media hardware in the second thread to process the video layer, obtaining the processed video layer. The display driver is used to superimpose the composited graphics layer and the processed video layer to obtain the display data.
  • graphics hardware and media hardware may be coupled with the processor through connectors, interfaces, transmission lines or buses, etc. These interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces.
  • The software instructions running on the processor are also used to form a hardware abstraction layer, graphics hardware drivers, and media hardware drivers.
  • The hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware and a media hardware abstraction corresponding to the media hardware.
  • The framework layer is specifically used to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
  • The framework layer is also used to set, in the first thread, the size-related information of the video layer and send it to the Media HW. The framework layer is specifically used to, in the first thread, call the graphics hardware to composite the multiple graphics layers and punch the hole in the at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and, in the second thread, call the Media HW to process the video layer according to the same information, obtaining the processed video layer.
  • The framework layer is specifically used to, based on the first vertical synchronization signal, call the graphics hardware in the first thread to composite the multiple graphics layers and punch the hole in the at least one graphics layer; the display driver is specifically used to, when the second vertical synchronization signal arrives, superimpose the composited graphics layer and the processed video layer to obtain the display data; the first and second vertical synchronization signals are independent of each other.
  • When the active edge of the first vertical synchronization signal arrives, the framework layer first sends the size-related information of the video layer to the Media HW in the first thread, then calls the graphics hardware to composite the graphics layers and punch the hole in the at least one graphics layer according to that information.
  • the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
  • When the second thread obtains the first Video Buffer, it sends the first notification information to the first thread; after receiving the first notification information in the first thread, the framework layer obtains the size of the first Video Buffer and sets the size-related information of the video layer accordingly. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread, and the framework layer obtains the changed Video Buffer size and sets the size-related information accordingly. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  • The graphics hardware includes a hardware composer (HWC). The framework layer is specifically used to, in the first thread, call the HWC to composite the multiple graphics layers and punch the hole in at least one graphics layer, obtaining the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the HWC sends the composited graphics layer to the display driver, and the Media HW sends the processed video layer to the display driver.
  • Sending the composited graphics layer to the display driver is implemented by the HWC abstraction of the HWC in the hardware abstraction layer, and sending the processed video layer to the display driver is implemented by the Media HW abstraction.
  • Alternatively, the graphics hardware includes a graphics processor (GPU). The framework layer is specifically used to, in the first thread, call the GPU to composite the multiple graphics layers and punch the hole in at least one graphics layer. The GPU is also used to return the composited graphics layer to the framework layer; the framework layer sends it to the HWC; the HWC sends it to the display driver; and the Media HW sends the processed video layer to the display driver.
  • The framework layer includes SurfaceFlinger, which is used to create the first thread and the second thread in the initialization phase. SurfaceFlinger is also used to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; the Graphic Buffer stores one graphics layer's data, and the Video Buffer stores one video layer's data.
  • The Media HW is specifically used to receive the first frame of video layer data in the second thread and the size-related information of the video layer in the first thread; the framework layer is specifically used to, in the second thread, call the Media HW to process the first frame of video layer data according to the size-related information of the video layer.
  • The framework layer calls the Media HW through the Media HW abstraction and the Media HW driver to process the video layer.
  • The sixth aspect of the present application provides a computer-readable storage medium that stores instructions; when the instructions run on a computer or a processor, they cause the computer or processor to execute the method of the first aspect or any of its possible implementations.
  • The seventh aspect of the present application provides a computer program product containing instructions.
  • When the instructions run on a computer or a processor, the computer or processor executes the method of the above first aspect or any one of its possible implementations.
  • The accompanying drawings are briefly described as follows:
  • FIG. 1 is a schematic diagram of an exemplary terminal architecture provided by an embodiment of the application
  • FIG. 2 is a hardware architecture diagram of an exemplary image processing device provided by an embodiment of the application
  • FIG. 3a is a schematic diagram of an exemplary operating system architecture applicable to an embodiment of this application.
  • FIG. 4 is a schematic diagram of a traditional image processing architecture
  • FIG. 5 is a schematic diagram of an exemplary image processing architecture provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of another exemplary image processing architecture provided by an embodiment of the application.
  • FIG. 8 is a flowchart of another image processing method provided by an embodiment of the application.
  • FIG. 9 is a flowchart of another image processing method provided by an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of an exemplary image processing apparatus provided by an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of the application.
  • FIG. 14 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of the application.
  • "At least one (item)" refers to one or more, and "multiple" refers to two or more.
  • "And/or" describes an association between objects and indicates that three relationships are possible: for example, "A and/or B" can mean only A, only B, or both A and B, where A and B can be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of the following items" or similar expressions refer to any combination of those items, including any combination of single items or plural items.
  • For example, "at least one of a, b, or c" can mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c can each be single or multiple.
  • Graphic Buffer: used to store graphics data or graphics layer data.
  • The graphics layer data can include, for example, bullet-screen comment data, subtitle data, the navigation bar, the status bar, icon layers, floating windows, an application's display interface, or identification information.
  • The graphics layer data can also be data rendered after an application is started.
  • The graphics data in the Graphic Buffer can come from multiple applications.
  • Frame Buffer: used to store the composited graphics layer data.
  • The composited graphics layer data is composited from multiple layers of graphics layer data.
  • Video Buffer: used to store video data, for example the decoded data of a multimedia frame.
  • The video data can come, for example, from video applications such as Tencent Video, iQiyi, or Youku.
  • Vsync signal: used to synchronize the time when an application starts rendering, the time when SurfaceFlinger is woken to composite graphics layers, and the display refresh cycle of the display device.
  • The Vsync signal is periodic: the number of active Vsync signals per unit time is called the Vsync frame rate, the time interval between two adjacent active Vsync signals is called the Vsync period, and the Vsync frame rate is the reciprocal of the Vsync period.
  • The Vsync signal can be active-high or active-low, and it can be level-triggered, rising-edge-triggered, or falling-edge-triggered.
  • Accordingly, the arrival of an active Vsync signal can be understood as: the arrival of the rising edge of the Vsync signal, the arrival of the falling edge, the Vsync signal being at a high level, or the Vsync signal being at a low level.
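  • For example, a Vsync signal with a frame rate of 60 Hz has a period of 1/60 s, about 16.7 ms; as noted above, graphics-layer composition that takes longer than one such period leads to a dropped frame when a single shared Vsync paces everything.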
  • the terminal 100 may include an antenna system 110, a radio frequency (RF) circuit 120, a processor 130, a memory 140, a camera 150, an audio circuit 160, a display screen 170, one or more sensors 180, a wireless transceiver 190, and so on.
  • the antenna system 110 may be one or more antennas, and may also be an antenna array composed of multiple antennas.
  • the radio frequency circuit 120 may include one or more analog radio frequency transceivers, the radio frequency circuit 120 may also include one or more digital radio frequency transceivers, and the RF circuit 120 is coupled to the antenna system 110. It should be understood that in the various embodiments of the present application, coupling refers to mutual connection in a specific manner, including direct connection or indirect connection through other devices, for example, connection through various interfaces, transmission lines, buses, and the like.
  • the radio frequency circuit 120 can be used for various types of cellular wireless communications.
  • the processor 130 may include a communication processor, and the communication processor may be used to control the RF circuit 120 to receive and send signals through the antenna system 110, and the signal may be a voice signal, a media signal, or a control signal.
  • The processor 130 may include various general-purpose processing devices, such as a central processing unit (CPU), a system on chip (SoC), a processor integrated on an SoC, or a separate processor chip or controller; the processor 130 may also include dedicated processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a digital signal processor (DSP).
  • the processor 130 may be a processor group composed of multiple processors, and the multiple processors are coupled to each other through one or more buses.
  • the processor may include an analog-to-digital converter (Analog-to-Digital Converter, ADC) and a digital-to-analog converter (Digital-to-Analog Converter, DAC) to realize signal connection between different components of the device.
  • the processor 130 is used to process media signals such as images, audios, and videos.
  • the memory 140 is coupled to the processor 130. Specifically, the memory 140 may be coupled to the processor 130 through one or more memory controllers.
  • The memory 140 can be used to store computer program instructions, including a computer operating system (OS) and various user application programs.
  • The memory 140 can also be used to store user data, such as graphics data and video data rendered by application programs, audio data, calendar information, contact information, or other media files.
  • the processor 130 may read computer program instructions or user data from the memory 140, or store computer program instructions or user data in the memory 140, so as to implement related processing functions.
  • The memory 140 may be a non-volatile memory, such as an embedded multimedia card (EMMC), universal flash storage (UFS), or read-only memory (ROM), or another type of static storage device that can store static information and instructions; or a volatile memory, such as random access memory (RAM), or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium, or another magnetic storage device, but is not limited to these.
  • The memory 140 may also be independent of the processor 130.
  • the camera 150 is used to collect images or videos.
  • the user can trigger the turning on of the camera 150 through an application program instruction to realize a photographing or camera function, such as taking pictures or videos of any scene.
  • The camera may include components such as a lens, a filter, and an image sensor, and may be located at the front or the back of the terminal device. The specific number and arrangement of cameras can be flexibly determined according to the designer's or manufacturer's requirements, which this application does not limit.
  • the audio circuit 160 is coupled with the processor 130.
  • the audio circuit 160 may include a microphone 161 and a speaker 162.
  • the microphone 161 may receive sound input from the outside, and the speaker 162 may play audio data.
  • The terminal 100 may have one or more microphones and one or more speakers, and the embodiment of the present application does not limit their number.
  • the display screen 170 is used to provide users with various display interfaces or various menu information for selection.
  • the content displayed on the display screen 170 includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys and icons, etc. These display contents are associated with specific internal modules or functions.
  • the display screen 170 may also accept user input.
  • the display screen 170 may also display information input by the user, such as accepting control information such as enabling or disabling.
  • the display screen 170 may include a display panel 171 and a touch panel 172.
  • The display panel 171 may be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a light-emitting diode (LED) display device, or a cathode ray tube (CRT) display.
  • The touch panel 172, also known as a touch screen or touch-sensitive screen, can collect the user's contact or non-contact operations on or near it (for example, operations the user performs on or near the touch panel 172 with a finger, a stylus, or any other suitable object or accessory; operations near the touch panel 172 may also include somatosensory operations; the operations include single-point and multi-point control operations) and drive the corresponding connected device according to a preset program.
  • The touch panel 172 may include two parts: a touch detection device and a touch controller. The touch detection device detects the signals brought by the user's touch operation and transmits them to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into information that the processor 130 can process, sends it to the processor 130, and can receive and execute commands sent by the processor 130.
  • The touch panel 172 can cover the display panel 171, and the user can operate on or near the touch panel 172 according to the content displayed on the display panel 171. After the touch panel 172 detects an operation, the operation is transmitted to the processor 130 through the I/O subsystem 10 to determine the user input, and the processor 130 then provides the corresponding visual output on the display panel 171 through the I/O subsystem 10 according to the user input.
  • In FIG. 1, the touch panel 172 and the display panel 171 are two independent components that implement the input and output functions of the terminal 100, but in some embodiments the touch panel 172 and the display panel 171 are integrated together.
  • the sensor 180 may include an image sensor, a motion sensor, a proximity sensor, an environmental noise sensor, a sound sensor, an accelerometer, a temperature sensor, a gyroscope, or other types of sensors, and various combinations of them.
  • the processor 130 drives the sensor 180 to receive various information such as audio information, image information, or motion information through the sensor controller 12 in the I/O subsystem 10, and the sensor 180 transmits the received information to the processor 130 for processing.
  • the wireless transceiver 190 can provide wireless connection capabilities to other devices.
  • The other devices can be peripherals such as wireless headsets, Bluetooth headsets, wireless mice, or wireless keyboards, or wireless networks, such as a wireless fidelity (WiFi) network, a wireless personal area network (WPAN), or another wireless local area network (WLAN).
  • the wireless transceiver 190 may be a Bluetooth compatible transceiver, which is used to wirelessly couple the processor 130 to peripheral devices such as Bluetooth headsets and wireless mice.
  • The wireless transceiver 190 may also be a WiFi-compatible transceiver for wirelessly coupling the processor 130 to a wireless network or other devices.
  • the terminal 100 may also include other input devices 14 coupled to the processor 130 to receive various user inputs, such as receiving inputted numbers, names, addresses, and media selections.
  • The other input devices 14 may include keyboards, physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click scroll wheels, and optical mice (an optical mouse is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by a touch screen).
  • The terminal 100 may also include the aforementioned I/O subsystem 10, and the I/O subsystem 10 may include other input device controllers 11 for receiving signals from the other input devices 14 or sending signals from the processor 130 to the other input devices 14.
  • the I/O subsystem 10 may also include the aforementioned sensor controller 12 and display controller 13, which are respectively used to implement the exchange of data and control information between the sensor 180 and the display screen 170 and the processor 130.
  • The terminal 100 may further include a power supply 101 to supply power to the other components of the terminal 100, including components 110-190; the power supply may be a rechargeable or non-rechargeable lithium-ion battery or nickel-metal hydride battery.
  • the power supply 101 when the power supply 101 is a rechargeable battery, it can be coupled with the processor 130 through a power management system, so that the management of charging, discharging, and power consumption adjustment can be realized through the power management system.
  • the RF circuit 120, the processor 130, and the memory 140 may be partially or completely integrated on one chip, or may be independent chips.
  • the RF circuit 120, the processor 130, and the memory 140 may include one or more integrated circuits arranged on a printed circuit board (PCB).
• FIG. 2 is a hardware architecture diagram of an exemplary image processing apparatus provided by an embodiment of the present application.
  • the image processing apparatus 200 may be, for example, a processor chip.
• The hardware architecture shown in FIG. 2 may be an exemplary architecture diagram of the processor 130 in FIG. 1, and the image processing method and image processing architecture provided by the embodiments of the present application may be applied to the processor chip.
  • the device 200 includes: at least one CPU, a memory, a microcontroller (Microcontroller Unit, MCU), a GPU, an NPU, a memory bus, a receiving interface, a sending interface, and so on.
  • the device 200 may also include an application processor (Application Processor, AP), a decoder, and a dedicated video or image processor.
• The connectors include various interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces, which is not limited in this embodiment.
• The CPU can be a single-CPU processor or a multi-CPU processor; optionally, the CPU can be a processor group composed of multiple processors that are coupled to each other through one or more buses.
  • the receiving interface may be a data input interface of the processor chip.
  • the receiving interface and the transmitting interface may be High Definition Multimedia Interface (HDMI), V-By-One Interface, Embedded Display Port (eDP), Mobile Industry Processor Interface (MIPI) or Display Port (DP), etc.
• In one optional case, the above-mentioned parts are integrated on the same chip; in another optional case, the CPU, GPU, decoder, receiving interface, and transmitting interface are integrated on one chip, and each part of the chip accesses external memory through a bus.
  • the dedicated video/graphics processor can be integrated with the CPU on the same chip, or it can exist as a separate processor chip.
  • the dedicated video/graphics processor can be a dedicated ISP.
  • the NPU can also be used as an independent processor chip. The NPU is used to implement various neural network or deep learning related operations.
• The chip involved in the embodiments of this application is a system manufactured on the same semiconductor substrate by an integrated circuit process, also called a semiconductor chip; it is a collection of integrated circuits formed, using an integrated circuit process, on the surface of a substrate (usually a semiconductor material such as silicon), and its outer layer is usually encapsulated by a semiconductor packaging material.
• The integrated circuit may include various types of functional devices, each of which includes transistors such as logic gate circuits, Metal-Oxide-Semiconductor (MOS) transistors, bipolar transistors, or diodes, and may also include components such as capacitors, resistors, or inductors. Each functional device can work independently or under the action of necessary driver software, and can realize various functions such as communication, calculation, or storage.
• FIG. 3a is a schematic diagram of an exemplary operating system architecture to which this embodiment of the application is applicable.
• The operating system may run on the processor 130 shown in FIG. 1, and the code corresponding to the operating system may be stored in the memory 140 shown in FIG. 1; alternatively, the operating system may run on the image processing apparatus 200 shown in FIG. 2.
  • the APP layer may include, for example, applications such as WeChat, iQiyi, Tencent Video, Taobao, or Camera.
• The framework layer is the logical scheduling layer of the operating system architecture, and it can perform resource scheduling and policy allocation for the video processing process.
  • the framework layer includes:
  • the Graphics Framework is responsible for the layout of the graphics window and the rendering of graphics data, storing the rendered graphics data in the Graphics Buffer, and sending the graphics layer data in the Graphic Buffer to SurfaceFlinger.
  • the Multimedia Framework is responsible for decoding the video stream and sending the decoded data to SurfaceFlinger.
• The Open Graphics Library (OpenGL) provides an interface for graphics rendering and graphics layer overlay, and can be connected to the GPU driver.
  • HAL is the interface layer between operating system software and audio and video hardware devices.
  • HAL provides an interface for the interaction between upper layer software and lower layer hardware.
  • the HAL layer abstracts the underlying hardware into software that contains the corresponding hardware interfaces. By accessing the HAL layer, the settings of the underlying hardware devices can be realized. For example, the relevant hardware devices can be enabled or disabled at the HAL layer.
  • the driver layer is used to directly control the underlying hardware devices according to the control information input by the HAL.
  • the HAL includes a hardware composer (Hardware Composer, HWC) abstraction and a media hardware (Media Hardware, Media HW) abstraction.
  • the driver layer includes a hardware synthesis driver and a media hardware driver.
  • HWC is used for hardware synthesis of multiple graphics layers, which can provide support for SurfaceFlinger's hardware synthesis, and store the synthesized graphics layer in FrameBuffer and send it to the display driver.
  • Media HW is responsible for processing the video layer data, and informing the display driver of the processed video layer and the position information of the video layer.
  • Media HW is a dedicated hardware circuit that can be used to improve the video display effect. It should be understood that different vendors may refer to media hardware differently.
  • the HWC abstraction of the HAL layer corresponds to the hardware synthesis driver of the driver layer, and the media hardware abstraction corresponds to the media hardware driver.
  • the control of the underlying HWC hardware can be realized through the hardware synthesis driver.
  • the control of the underlying Media HW is achieved by accessing the media hardware abstraction and media hardware driver of the HAL layer.
• FIG. 4 is a schematic diagram of a traditional image processing architecture.
• The graphics framework sends the indication information of multiple graphic buffers (Graphic Buffers) to SurfaceFlinger, and the multimedia framework sends the indication information of the video buffer (Video Buffer) to SurfaceFlinger, where one graphics layer corresponds to one Graphic Buffer; for example, the graphics layer of the navigation bar and the graphics layer of the status bar correspond to different Graphic Buffers.
• SurfaceFlinger binds the Graphic Buffers and the Video Buffer to the corresponding layers: for example, it binds Graphic Buffer 1 to the navigation bar graphics layer, binds Graphic Buffer 2 to the status bar graphics layer, and binds the Video Buffer to the video layer.
  • HWC synthesizes multiple graphics layers in all Graphic Buffers, and digs holes in the graphics layer below the video layer during the synthesis process to display the video layer.
  • the graphics layer data synthesized by HWC is stored in FrameBuffer.
  • the HWC sends the instruction information of the Video Buffer to the Media HW so that the Media HW can read the video data from the Video Buffer and process the video data.
  • the hardware synthesizer and media hardware in Figure 4 both include the corresponding hardware abstraction layer and driver layer.
• In other words, the hardware synthesizer abstraction of the hardware abstraction layer needs to be accessed to call the hardware synthesis driver, thereby implementing the call to the hardware synthesizer; similarly, the media hardware abstraction and the media hardware driver need to be used to implement the call to the Media HW.
• SurfaceFlinger sends the indication information of the Graphic Buffers and the Video Buffer to the HWC abstraction of the hardware abstraction layer in the main thread, and uses the hardware synthesis driver to call the HWC hardware to realize the synthesis of multiple graphics layers and the hole-digging processing of the graphics layer.
• The HWC sends the synthesized graphics layer data to the display driver, and the Media HW sends the processed video image to the display driver.
• When a new Vsync signal arrives, the display driver superimposes the synthesized graphics layer data sent by the HWC with the video data sent by the Media HW, and sends the result to the display device for display.
  • the indication information of the buffer is used to point to a memory area.
  • the indication information may be a file descriptor (fd).
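• For illustration only, the indication information could be modeled as a small structure carrying the file descriptor and the length of the shared memory area; this struct and its fields are hypothetical and not part of the described architecture.

```cpp
#include <cstddef>

// Hypothetical model of buffer indication information: instead of the pixel
// data itself, only a file descriptor identifying the shared memory area
// (plus its length) is passed between components.
struct BufferIndication {
    int fd;             // file descriptor pointing to the memory area
    std::size_t length; // size of the memory area in bytes
};
```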
  • the embodiment of the present application provides an image processing architecture, and the image processing architecture is shown in FIG. 5.
  • the graphics framework sends the instructions of Graphic Buffer to SurfaceFlinger, and the multimedia framework sends the instructions of Video Buffer to SurfaceFlinger.
• The HWC can first dig holes in the graphics layers below the video layer and then synthesize the multiple graphics layers into one graphics layer; the HWC can also first synthesize the multiple graphics layers into one graphics layer and then dig a hole in the synthesized graphics layer.
  • the synthesized graphics layer data obtained by the HWC processing is stored in the FrameBuffer, and the HWC sends the instruction information of the FrameBuffer to the display driver.
  • Media HW stores the processed video layer data in Video Buffer, and sends the instruction information of Video Buffer to the display driver.
• The first thread and the second thread are both cyclic threads.
• When SurfaceFlinger receives the Graphic Buffer sent by the graphics framework, it notifies the first thread, but the first thread does not start to process the graphics layer data in the Graphic Buffer until the graphics Vsync arrives.
• When SurfaceFlinger receives the Video Buffer sent by the multimedia framework, it notifies the second thread, and the second thread starts processing after receiving the notification, without waiting for the vertical synchronization signal.
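• To make this scheduling difference concrete, the following is a minimal sketch in C++, not the actual implementation: the buffer types, queue names, and the waitGraphicVsync() helper are all hypothetical, and error handling is omitted.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

struct GraphicBuffer {};  // hypothetical stand-ins for the real buffer types
struct VideoBuffer {};

std::mutex m;
std::condition_variable cv;
std::queue<GraphicBuffer> graphicQueue;
std::queue<VideoBuffer> videoQueue;

// Assumed helper: blocks until the next Graphic Vsync (platform-specific).
void waitGraphicVsync() { /* ... */ }

// First thread: woken by a notification, but it only starts processing the
// graphics layer data once the Graphic Vsync arrives.
void firstThreadLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !graphicQueue.empty(); });
        lock.unlock();
        waitGraphicVsync();  // gate on the vertical synchronization signal
        lock.lock();
        while (!graphicQueue.empty()) {
            GraphicBuffer gb = graphicQueue.front();
            graphicQueue.pop();
            // ... synthesize the graphics layers and dig the hole here ...
        }
    }
}

// Second thread: starts processing as soon as the notification arrives,
// without waiting for any vertical synchronization signal.
void secondThreadLoop() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !videoQueue.empty(); });
        VideoBuffer vb = videoQueue.front();
        videoQueue.pop();
        lock.unlock();
        // ... send the Video Buffer to the Media HW for processing ...
    }
}
```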
• In the first thread, the set size-related information of the video layer is sent to the Media HW, so that the Media HW can process the video data according to the size-related information of the video layer.
• Similarly, the HWC digs a hole in the graphics layer according to the set size-related information of the video layer, so that the size of the hole-digging area is equal to the size of the video layer. Because setting the size of the video layer and digging the hole are performed in the same thread, the size of the hole is exactly the same as the size of the set video layer, which ensures that the video layer and the graphics layer are displayed synchronously.
• The size-related information of the video layer may include position information and size information of the video layer, etc.
• When the position information includes the coordinates of the four vertices of the video layer, the size of the video layer can be determined according to the position information, and in this case the size-related information of the video layer may include only the position information.
• When the position information includes only one vertex position of the video layer (for example, the vertex position of the upper right corner), the size-related information of the video layer also includes the size information of the video layer.
  • the information related to the size of the video layer is calculated by SurfaceFlinger.
• For example, the multimedia framework sends the initial size of the video layer, set through the system's own application programming interface (API), to SurfaceFlinger; SurfaceFlinger can also capture or perceive the user's operations such as zooming in, zooming out, or rotating the video. SurfaceFlinger integrates the initial size of the video layer sent by the multimedia framework with operations such as zooming in, zooming out, or rotating, and calculates the size-related information of the video layer.
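• Purely as an illustration (the text above does not give a formula), a sketch of how such a calculation might combine the initial size with a user zoom factor and rotation follows; all names and the computation itself are hypothetical.

```cpp
#include <utility>

struct LayerRect { int x, y, width, height; };  // position + size of the video layer

// Hypothetical derivation of the size-related information: start from the
// initial size reported by the multimedia framework and fold in the user's
// zoom factor and rotation captured by SurfaceFlinger.
LayerRect computeVideoLayerRect(int initialW, int initialH, float userScale,
                                int rotationDegrees, int originX, int originY) {
    int w = static_cast<int>(initialW * userScale);
    int h = static_cast<int>(initialH * userScale);
    if (rotationDegrees % 180 == 90) {
        std::swap(w, h);  // a 90/270-degree rotation swaps length and width
    }
    return {originX, originY, w, h};
}
```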
• Since the size of the Video Buffer affects the hole-digging of the graphics layer, and the processing of the Video Buffer is performed in the second thread, when the second thread receives the first Video Buffer or detects that the size of the Video Buffer has changed, it sends a notification message to the first thread of SurfaceFlinger to indicate that the size of the video layer has changed.
  • the notification information may be carried on an identification bit.
  • the size of the Video Buffer may include the size information of the Video Buffer and the rotation information of the Video Buffer.
• Each Video Buffer stores one frame of video data, that is, the data of one video layer, and the size of the Video Buffer is related to the size of the video data (or the size of the video layer).
• For example, the size of the Video Buffer may be equal to the size of the video data.
  • the rotation information of the Video Buffer is used to indicate the rotation information of the video data.
• After SurfaceFlinger receives the notification message, it obtains the updated size of the Video Buffer and recalculates the size-related information of the video data according to the updated size, so that the HWC in the first thread can re-dig the graphics layer according to the changed size. This ensures that the video layer after the size change can still be displayed through the graphics layer, that is, the video layer and the graphics layer can still be displayed synchronously when the size of the Video Buffer changes.
• It should be noted that when the size of the Video Buffer changes, the size-related information of the video data calculated by SurfaceFlinger does not necessarily change.
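• A minimal sketch of this notification mechanism follows, assuming (as suggested above) that the notification is carried on an identification bit; the atomic flag and all names are hypothetical.

```cpp
#include <atomic>
#include <mutex>

struct BufferSize { int width; int height; int rotation; };

std::atomic<bool> videoSizeChanged{false};  // the "identification bit"
BufferSize latestVideoBufferSize;           // written by the second thread
std::mutex sizeMutex;

// Second thread: on the first Video Buffer, or when a size change is
// detected, publish the new size and raise the flag for the first thread.
void notifySizeChange(const BufferSize& s) {
    std::lock_guard<std::mutex> lock(sizeMutex);
    latestVideoBufferSize = s;
    videoSizeChanged.store(true, std::memory_order_release);
}

// First thread: when the flag is set, fetch the updated size, recalculate
// the size-related information of the video layer, and re-dig the hole.
void firstThreadCheckSize() {
    if (videoSizeChanged.exchange(false, std::memory_order_acquire)) {
        BufferSize s;
        {
            std::lock_guard<std::mutex> lock(sizeMutex);
            s = latestVideoBufferSize;
        }
        // ... recalculate the size-related information according to s and
        //     re-dig the graphics layer (the result may be unchanged) ...
    }
}
```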
  • the order of sending the size-related information of the video layer to Media HW in the first thread and sending the first frame of video data to Media HW in the second thread is not limited.
• In one optional case, the Media HW first receives the size-related information of the video layer and then receives the first frame of video data; in another optional case, the Media HW first receives the first frame of video data and then receives the size-related information of the video layer. In either case, the Media HW needs to process the first frame of video data according to the received size-related information of the video layer before sending it to the display driver.
  • SurfaceFlinger needs to refer to the size information of the Video Buffer when calculating the size of the video data.
• For example, when the Media HW receives the first Video Buffer, notification information is sent to the first thread of SurfaceFlinger.
  • SurfaceFlinger receives the notification message, it obtains the size of the first Video Buffer, and calculates the size-related information of the video data according to the size of the first Video Buffer.
  • Graphic Vsync is also used to trigger the application to render the graphics layer data.
  • the graphics layer data in the Graphic Buffer can come from multiple applications.
• The application's rendering of the graphics layer data to fill the Graphic Buffer and the HWC's synthesis of the multiple layers of graphics layer data correspond to two different cycles of the Graphic Vsync signal.
• In the image processing architecture provided by this embodiment, the composition of the graphics layer by the HWC and the processing of the video layer by the Media HW are performed in parallel in two threads. Therefore, the processing of video images is no longer affected by the composition of the graphics layer.
• Video playback is no longer affected by the composition of the graphics layer either, which effectively solves the problem of frame loss caused by the time-consuming composition of the graphics layer during video playback. Furthermore, setting the size of the video layer and digging holes in the graphics layer are completed one after the other in the same thread, and inter-thread communication can be carried out between the first thread and the second thread.
• When the size of the video data changes, the second thread can notify the first thread so that the hole-digging size of the graphics layer remains consistent with the size of the video data, thereby ensuring the synchronized and matched display of the video layer and the graphics layer.
• Since the vertical synchronization signal that controls the synthesis of multiple graphics layers and the vertical synchronization signal that controls the refresh frame rate of the display device are independent of each other, the refresh frame rate of the video can be greater than the refresh frame rate of the graphics, so the image processing architecture can support the playback of high frame rate video whose video frame rate is higher than the graphics refresh frame rate.
  • the graphics hardware included in the graphics processing system includes an HWC and a GPU. If the HWC does not support the overlay of the graphics layer, the hardware resources of the GPU can be called to implement the overlay of the multiple graphics layers.
• FIG. 6 shows another exemplary image processing architecture provided by an embodiment of this application.
• In this architecture, the indication information of the Graphic Buffers is sent to the GPU, and the GPU realizes the synthesis (or overlay) of the multi-layer graphics layer data and the corresponding hole-digging processing; the synthesized graphics layer data is stored in the FrameBuffer.
  • SurfaceFlinger uses the GPU's image processing function to realize the synthesis of multi-layer graphics layer data and the processing of digging holes in the graphics layer by calling the API interface of the GPU.
• The GPU returns the processing result to SurfaceFlinger, SurfaceFlinger sends the FrameBuffer indication information to the hardware synthesizer, and the hardware synthesizer then informs the display driver of the FrameBuffer indication information, so that the display driver can read the processed graphics layer data from the corresponding memory according to the FrameBuffer indication information.
• That is, in the image processing architecture of FIG. 6, the synthesis and hole-digging of the graphics layer are done by the GPU rather than the HWC. Other processing is the same as in the image processing architecture shown in FIG. 5, and reference may be made to the description of the image processing architecture shown in FIG. 5, which will not be repeated here.
• It should be understood that the hardware synthesizer and the media hardware in FIGS. 5 and 6 each include the corresponding hardware abstraction layer and driver layer. The hardware synthesizer needs to be called through the hardware synthesizer abstraction and the hardware synthesis driver; in other words, the hardware synthesizer abstraction of the HAL needs to be accessed to call the hardware synthesis driver, thereby implementing the call to the hardware synthesizer. Similarly, the media hardware abstraction and the media hardware driver need to be used to call the Media HW; in other words, the media hardware abstraction of the HAL is accessed to call the media hardware driver, thereby implementing the call to the media hardware.
  • an embodiment of the present application also provides an image data processing method. As shown in FIG. 7, the method includes:
  • the synthesized graphics layer data includes a digging area, and the digging area is usually set to be transparent so that when the graphics layer and the video layer are synthesized, the video layer can be displayed through the digging area.
  • the processed video layer can be displayed through the excavated area.
  • a certain size difference between the processed video layer and the excavated area is allowed, that is, the sizes of the processed video layer and the excavated area may not be exactly the same.
• The synthesis and hole-digging of the graphics layer and the processing of the video layer are performed in parallel in two threads. Therefore, the processing of video images is no longer affected by the synthesis of the graphics layer, and video playback is also no longer affected by the composition of the graphics layer, which can effectively solve the problem of frame loss caused by the time-consuming composition of the graphics layer during video playback.
  • the method further includes: setting information related to the size of the video layer in the first thread.
• Specifically, the size-related information of the video layer is set in the first thread, and then at least one graphics layer is dug according to the size-related information of the video layer; in the first thread, the set size-related information of the video layer is sent to the Media HW, and then, in the second thread, the Media HW processes the video layer according to the size-related information of the video layer to obtain the processed video layer.
  • the size-related information of the video layer may include the size of the video layer and the position information of the video layer.
• For example, the size-related information of the video layer may include one vertex coordinate and two pieces of length information of the video layer, where the length information indicates the length and width of the video layer.
• The size-related information of the video layer may also include the coordinates of two vertices and one piece of length information of the video layer, and the size and display position of the video layer can be uniquely determined according to the coordinates of the two vertices and the one piece of length information; if the position information of the video layer is the coordinates of the four vertices of the video layer, the size of the video layer can be determined according to the coordinates of the four vertices, and in this case the size-related information of the video layer may include only the position information. The size-related information of the video layer is calculated by SurfaceFlinger.
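• To illustrate how these different encodings determine the same layer rectangle, here is a small sketch; the struct and function names are hypothetical.

```cpp
#include <algorithm>

struct Point { int x, y; };
struct Rect { int left, top, width, height; };

// From one vertex (e.g. the top-left corner) plus two lengths.
Rect fromVertexAndLengths(Point topLeft, int width, int height) {
    return {topLeft.x, topLeft.y, width, height};
}

// From the four vertex coordinates alone: the size follows from the extremes,
// so no separate size information is needed.
Rect fromFourVertices(const Point v[4]) {
    int left = v[0].x, right = v[0].x, top = v[0].y, bottom = v[0].y;
    for (int i = 1; i < 4; ++i) {
        left = std::min(left, v[i].x);
        right = std::max(right, v[i].x);
        top = std::min(top, v[i].y);
        bottom = std::max(bottom, v[i].y);
    }
    return {left, top, right - left, bottom - top};
}
```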
• For example, the multimedia framework sets the initial size of the video layer through the system's own API and sends it to SurfaceFlinger. SurfaceFlinger can also capture or perceive the user's operations such as zooming in, zooming out, or rotating the video. SurfaceFlinger integrates the initial size of the video layer sent by the multimedia framework with operations such as zooming in, zooming out, or rotating, and calculates the size-related information of the video layer.
• Since the sizes of the processed video layer and the hole-digging area are both based on the set size of the video layer, it can be ensured that the size of the hole-digging area in the graphics layer is exactly the same as the size of the video layer. In this way, the processed video layer and the hole-digging area can be completely matched, so that the processed video layer can be displayed synchronously through the hole-digging area.
• Optionally, the multi-layer graphics layer is synthesized in the first thread based on the first vertical synchronization signal, and the at least one graphics layer is dug; the synthesized graphics layer is superimposed with the processed video layer based on the second vertical synchronization signal to obtain the display data; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
• The first vertical synchronization signal and the second vertical synchronization signal are two independent periodic signals, and they may have different frame rates and different periods.
• When the effective signal of the first vertical synchronization signal arrives, the multi-layer graphics layer is synthesized in the first thread and the hole-digging processing is performed on at least one graphics layer; when the effective signal of the second vertical synchronization signal arrives, the synthesized graphics layer and the processed video layer are superimposed to obtain the display data.
  • the first vertical synchronization signal and the second vertical synchronization signal may be high-level active or low-level active, and the first vertical synchronization signal and the second vertical synchronization signal may be level-triggered, rising-edge-triggered, or falling-edge-triggered.
  • the arrival of the effective signal of the Vsync signal can be understood as: the arrival of the rising edge of the Vsync signal, the arrival of the falling edge of the Vsync signal, when the Vsync signal is a high-level signal, or when the Vsync signal is a low-level signal.
• For example, the first vertical synchronization signal may be the Graphic Vsync in the foregoing embodiments, and the second vertical synchronization signal may be the Display Vsync in the foregoing embodiments.
  • the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
• Since the signal that controls the synthesis of the display data and the refresh frame rate of the display device is the second vertical synchronization signal, and the first vertical synchronization signal and the second vertical synchronization signal are independent of each other, the frame rate of the second vertical synchronization signal can be greater than that of the first vertical synchronization signal. Therefore, the image processing architecture can support the playback of high frame rate video whose video frame rate is higher than the graphics refresh frame rate.
• In the first thread, the size-related information of the video layer is set first, and then the multi-layer graphics layer is synthesized and at least one graphics layer is dug according to the size-related information of the video layer. That is, setting the size-related information of the video layer in the first thread also needs to wait for the effective signal of the vertical synchronization signal; it is carried out after the effective signal of the vertical synchronization signal arrives.
• Optionally, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread; in the first thread, the size-related information of the video layer is set according to the size of the first Video Buffer. Or, when the second thread detects that the size of the Video Buffer changes, it sends the first notification information to the first thread; in the first thread, the size-related information of the video layer is reset according to the size of the changed Video Buffer.
• Inter-thread communication can be performed between the first thread and the second thread.
• When the size of the video data changes, the second thread can notify the first thread, so that the size-related information of the video layer is reset in the first thread and the graphics layer is re-dug according to the changed size. This ensures that the video layer after the size change can be displayed through the hole-digging area of the graphics layer, that is, the synchronized and matched display of the video layer and the graphics layer can still be realized when the size of the Video Buffer changes.
• It should be noted that when the size of the Video Buffer changes, the size-related information of the video data calculated by SurfaceFlinger does not necessarily change.
  • the HWC synthesizes multiple graphics layers and performs hole-digging processing on at least one graphics layer in the first thread.
  • the SurfaceFlinger of the framework layer specifically calls the HWC driver by accessing the HWC abstraction of the hardware abstraction layer, so as to realize the call of the HWC hardware resources.
  • the GPU synthesizes multiple graphics layers and performs hole-digging processing on at least one graphics layer to obtain a synthesized graphics layer.
  • GPU hardware resources can be used to perform graphics layer synthesis and graphics layer digging.
  • the method further includes: in the first thread, sending the size-related information of the video layer to Media HW; In the second thread, Media HW processes the video layer according to the size-related information of the video layer to obtain the processed video layer.
• In one case, the Media HW first receives the size-related information of the video layer in the first thread, then receives the first frame of video layer data in the second thread, and then processes the first frame of video layer data according to the size-related information of the video layer.
• In another case, the Media HW first receives the first frame of video layer data in the second thread and then receives the size-related information of the video layer in the first thread. In this case, the Media HW does not process the first frame of video layer data immediately, but waits until the size-related information of the video layer is received before processing the first frame of video layer data, so as to avoid processing errors in the first frame of video layer data.
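• A sketch of this ordering guard follows; the class and member names are hypothetical, and thread synchronization is omitted for brevity.

```cpp
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

struct VideoLayerInfo { int x, y, width, height; };
using Frame = std::vector<std::uint8_t>;  // placeholder for one frame of data

// Synchronization is omitted for brevity; a real implementation would guard
// these members against concurrent access from the two threads.
class MediaHwSketch {
public:
    // Called from the first thread with the size-related information.
    void onLayerInfo(const VideoLayerInfo& info) {
        layerInfo_ = info;
        if (pendingFirstFrame_) {  // the frame arrived before the size info
            process(*pendingFirstFrame_, *layerInfo_);
            pendingFirstFrame_.reset();
        }
    }

    // Called from the second thread with video layer data.
    void onFrame(Frame frame) {
        if (!layerInfo_) {  // hold the first frame until the size info arrives
            pendingFirstFrame_ = std::move(frame);
            return;
        }
        process(frame, *layerInfo_);
    }

private:
    void process(const Frame& frame, const VideoLayerInfo& info) {
        // ... scale/position the frame according to info, then hand the
        //     Video Buffer indication information to the display driver ...
    }

    std::optional<VideoLayerInfo> layerInfo_;
    std::optional<Frame> pendingFirstFrame_;
};
```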
  • the display driver superimposes the synthesized graphics layer and the processed video layer to obtain display data.
  • the method further includes:
• SurfaceFlinger creates the first thread and the second thread in the initialization phase; when SurfaceFlinger receives a Graphic Buffer, it notifies the first thread to process the graphics layer data in the Graphic Buffer; when SurfaceFlinger receives a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer; the Graphic Buffer stores one layer of graphics layer data, and the Video Buffer stores one layer of video layer data.
  • an embodiment of the present application also provides another image data processing method. As shown in FIG. 8, the method includes:
  • the size-related information of the video layer may include the size of the video layer and the position information of the video layer.
  • the size-related information of the video layer is calculated by SurfaceFlinger.
• For example, the multimedia framework sets the initial size of the video layer through the system's own API and sends it to SurfaceFlinger. SurfaceFlinger can also capture or perceive the user's operations such as zooming in, zooming out, or rotating the video. SurfaceFlinger integrates the initial size of the video layer sent by the multimedia framework with operations such as zooming in, zooming out, or rotating, and calculates the size-related information of the video layer. Exemplarily, when an effective signal of the Graphic Vsync arrives, step 801 is executed.
  • SurfaceFlinger calls the hardware resources of the HWC in the first thread to realize the synthesis of the multi-layer graphics layer.
  • the indication information of the Graphic Buffer storing the graphics layer data is sent to the HWC in the first thread.
• The indication information points to a section of memory, and the HWC can obtain the graphics layer data from the corresponding memory according to the indication information and process it.
  • the HWC synthesizes the multi-layer graphics layer data, and performs hole processing on at least one graphics layer according to the size-related information of the video layer to obtain a synthesized graphics layer;
  • the synthesized graphics layer data includes a digging area, and the digging area is usually set to be transparent so that when the graphics layer and the video layer are synthesized, the video layer can be displayed through the digging area.
• The HWC can first dig holes in the graphics layers below the video layer and then combine the multiple graphics layers into one graphics layer; the HWC can also first combine the multiple graphics layers into one graphics layer and then dig a hole in the synthesized graphics layer.
  • the composite graphics layer data obtained by the HWC processing can be stored in the FrameBuffer. It should be understood that step 803 will be executed only when the effective signal of Graphic Vsync arrives, and step 803 will be executed after step 801.
• Optionally, SurfaceFlinger can call the GPU resource in the first thread to implement the overlay of the multi-layer graphics layer, which corresponds to the image processing architecture shown in FIG. 6.
• In this case, in step 802, the indication information of the multiple Graphic Buffers is sent to the GPU, so as to use the GPU's image processing function to realize the synthesis of the multi-layer graphics layer data and the hole-digging processing of the graphics layer.
  • the GPU returns the processing result to SurfaceFlinger, and SurfaceFlinger sends the FrameBuffer instruction information to the hardware synthesizer, and then the hardware synthesizer informs the display driver of the FrameBuffer instruction information.
  • the HWC sends the instruction information of the FrameBuffer to the display driver, so that the display driver can obtain the synthesized graphics layer data from the corresponding memory according to the instruction information.
  • step 805 and step 801 are executed in parallel, that is, step 805 and step 801 can be executed at the same time.
• When the first Video Buffer is received, or when a change in the size of the Video Buffer is detected, the second thread notifies the first thread.
• Exemplarily, the second thread sends first notification information to the first thread, where the first notification information is used to indicate to the first thread (the Main Thread) that the size of the video layer has changed.
  • SurfaceFlinger can obtain the updated Video Buffer size, and calculate the information related to the size of the video layer according to the updated Video Buffer size.
  • the size of the Video Buffer may include size information and rotation information.
• In one case, the Media HW first receives the size-related information of the video layer in the first thread, then receives the first frame of video layer data in the second thread, and then processes the first frame of video layer data according to the size-related information of the video layer.
• In another case, the Media HW first receives the first frame of video layer data in the second thread and then receives the size-related information of the video layer in the first thread. In this case, the Media HW does not process the first frame of video layer data immediately, but waits until the size-related information of the video layer is received before processing the first frame of video layer data, so as to avoid processing errors in the first frame of video layer data.
  • Media HW sends the instruction information of Video Buffer to the display driver so that the display driver can obtain the processed video layer data from the corresponding memory according to the instruction information.
• When the Display Vsync arrives, the display driver superimposes the video layer and the graphics layer to obtain the display data and sends it to the display device for display.
• The composition of the graphics layer by the HWC and the processing of the video layer by the Media HW are performed in parallel in two threads. Therefore, the processing of video images is no longer affected by the composition of the graphics layer, and video playback is no longer affected by the composition of the graphics layer either, which effectively solves the problem of frame loss caused by the time-consuming graphics layer composition during video playback.
  • the second thread can notify the first thread, which can ensure that the size of the hole in the graphics layer is exactly the same as the size of the video layer, so that the video layer and the graphics layer can be synchronized and displayed.
• In addition, the signal that controls the synthesis of multiple graphics layers is the Graphic Vsync, while the signal that controls the synthesis of the display data and the refresh frame rate of the display device is the Display Vsync. The Graphic Vsync and the Display Vsync are independent of each other, and the frame rate of the Display Vsync can be greater than the frame rate of the Graphic Vsync, so this image processing architecture can support the playback of high frame rate videos whose video frame rate is higher than the graphics refresh frame rate.
  • the embodiment of the present application also provides another image data processing method. As shown in FIG. 9, the method includes:
• The Video Thread is a dedicated thread for processing video layer data, and the Video Thread may correspond to the aforementioned second thread. It should be understood that the thread in which SurfaceFlinger receives the Video Buffer and the Video Thread are different threads; illustratively, the thread that receives the Video Buffer can be called the first receiving thread. The Video Thread and the first receiving thread need to communicate between threads so that the first receiving thread can notify the Video Thread that a new Video Buffer is available.
  • the Multimedia Framework sends the Video Buffer to the Buffer queue of the video layer
  • the buffer includes the Usage flag bit, which is used to indicate the type of the buffer.
• When the Usage flag bit is a first indicator value, it indicates that the buffer is a Video Buffer; when the Usage flag bit is a second indicator value, it indicates that the buffer is a Graphic Buffer.
• For example, when the Usage flag bit is not occupied, it indicates that the buffer is a Graphic Buffer; when the Usage flag bit is occupied, it indicates that the buffer is a Video Buffer.
• SurfaceFlinger can use the Usage flag bit to determine whether a received buffer is a Video Buffer. It should be understood that the Video Thread is a cyclic thread: when SurfaceFlinger receives the Video Buffer sent by the media framework, it notifies the Video Thread to process the video layer data, and the Video Thread starts processing after receiving the notification, without waiting for the vertical synchronization signal.
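• A minimal sketch of such a check follows; the bit position and all names are assumptions, since the text does not specify which bit of the Usage field is used.

```cpp
#include <cstdint>

// Hypothetical check of the Usage flag bit: when the assumed video-usage bit
// is set ("occupied"), the buffer is treated as a Video Buffer; otherwise it
// is treated as a Graphic Buffer.
constexpr std::uint64_t kUsageVideoLayer = 1ull << 0;  // assumed bit position

bool isVideoBuffer(std::uint64_t usageFlags) {
    return (usageFlags & kUsageVideoLayer) != 0;
}

void onBufferReceived(std::uint64_t usageFlags) {
    if (isVideoBuffer(usageFlags)) {
        // notify the Video Thread; it starts processing immediately
    } else {
        // notify the Main Thread; it waits for the Graphic Vsync
    }
}
```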
• After the Video Thread receives the notification, it takes the Video Buffer out of the Buffer queue of the video layer and sends it to the Media HW for processing;
• S20: the Video Thread judges whether the received Video Buffer is the first Buffer received; if it is, go to S24; if not, go to S22;
• S22: the Video Thread judges whether the size of the current Video Buffer has changed compared with the previous Video Buffer; if it has changed, go to S24; if it has not changed, no processing is performed, which can also be understood as the end of this branch;
• It should be understood that S20 and S22 are two parallel judgment conditions, and S24 is entered when either condition is met; a sketch of this decision logic follows the description of S24 below.
  • the Video Thread sends first notification information to the Main Thread.
• The first notification information is used to indicate to the Main Thread that the size of the video layer has changed, so that the Main Thread can reset the size-related information of the video layer according to the updated size.
• For example, the first notification information may be carried on an identification bit.
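• The S20/S22/S24 decision can be summarized by the following sketch; the Size struct and function name are hypothetical.

```cpp
#include <optional>

struct Size {
    int width, height, rotation;
    bool operator==(const Size& o) const {
        return width == o.width && height == o.height && rotation == o.rotation;
    }
};

std::optional<Size> previousSize;  // empty until the first Video Buffer arrives

// Returns true when the Main Thread must be notified (S24): either this is
// the first Video Buffer (S20) or its size differs from the previous one (S22).
bool shouldNotifyMainThread(const Size& current) {
    const bool notify = !previousSize || !(*previousSize == current);
    previousSize = current;
    return notify;
}
```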
  • SurfaceFlinger can obtain the size of the updated Video Buffer, and calculate the size of the video layer according to the size of the updated Video Buffer.
  • the Main Thread is also created by SurfaceFlinger in the initialization phase, the Main Thread is a thread used to process graphics layer data, and the Main Thread is also used to set the size of the video layer.
  • Main Thread can correspond to the aforementioned first thread.
  • the Main Thread needs to be notified so that the Main Thread can reset the size of the video layer.
  • Main Thread sends the information related to the size of the re-set video layer to Media HW. It should be understood that the information related to the size of the set video layer is calculated by SurfaceFlinger.
• For example, the multimedia framework sets the initial size of the video layer through the system's own API and transmits it to SurfaceFlinger. SurfaceFlinger can also capture or perceive the user's operations such as zooming in, zooming out, or rotating the video; SurfaceFlinger integrates the initial size of the video layer sent by the multimedia framework with operations such as zooming in, zooming out, or rotating, and calculates the size-related information of the video layer.
  • the Main Thread is a cyclic thread.
• When SurfaceFlinger receives the Graphic Buffer sent by the Graphic Framework, it notifies the Main Thread to process it, but the Main Thread does not start processing the graphics layer data in the Graphic Buffer until the Graphic Vsync arrives.
• The method also includes: in the Main Thread, SurfaceFlinger sends the indication information of the Graphic Buffers to the HWC, so that the HWC can synthesize the multi-layer graphics layer and perform hole-digging processing on the relevant graphics layer.
• If the HWC itself does not support superimposing graphics layers, SurfaceFlinger calls the GPU, and the GPU synthesizes the multi-layer graphics layer and performs hole-digging processing on the relevant graphics layer.
  • the HWC sends the processed graphics layer to the display driver
• It should be noted that S28 and S30 can be performed synchronously, and S28 is also parallel with S20-S24; the order is not limited. For example, when the Media HW obtains the processed video layer in S18, it executes S28 to send the processed video layer to the display driver; and when the first Video Buffer is received, or a change in the size of the Video Buffer is detected, step S20 or S22 is executed. The processed video layer data is stored in the Video Buffer, and the Media HW sends the indication information of the Video Buffer to the display driver so that the display driver can obtain the processed video layer data from the corresponding memory; the processed graphics layer data is stored in the FrameBuffer, and the HWC sends the indication information of the FrameBuffer to the display driver so that the display driver can obtain the processed graphics layer data from the corresponding memory.
• If the GPU performs the synthesis, the GPU returns the FrameBuffer indication information to SurfaceFlinger, SurfaceFlinger then sends the FrameBuffer indication information to the HWC, and the HWC sends the indication information of the FrameBuffer to the display driver.
• S32: when the Display Vsync arrives, the display driver superimposes the processed video layer and the processed graphics layer to obtain the display data and sends it to the display device for display.
• The Display Vsync is also used to control the refresh frame rate of the display device. Since the processed graphics layer data contains the hole-digging area, and the size of the hole-digging area is the same as the size of the processed video layer, after the display driver superimposes the two, the video layer and the hole-digging area can be completely matched, so that the video layer can be displayed synchronously through the hole-digging area.
• In addition, since the signal that controls the synthesis of multiple graphics layers is the Graphic Vsync, while the signal that controls the synthesis of the display data and the refresh frame rate of the display device is the Display Vsync, and the two are independent of each other, the frame rate of the Display Vsync can be greater than the frame rate of the Graphic Vsync; therefore, the image processing architecture can support the playback of high frame rate video whose video frame rate is higher than the graphics refresh frame rate.
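• As a purely illustrative sketch of this superposition (a real display pipeline does this in hardware), a pixel inside the fully transparent hole-digging area lets the video layer show through:

```cpp
#include <cstdint>

struct Pixel { std::uint8_t r, g, b, a; };

// Per-pixel superposition: where the synthesized graphics layer is fully
// transparent (inside the hole-digging area), the processed video layer
// shows through; elsewhere the graphics layer covers the video layer.
Pixel superimpose(Pixel graphics, Pixel video) {
    return (graphics.a == 0) ? video : graphics;
}
```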
  • the method embodiment corresponding to FIG. 9 describes the method in the form of steps, but the sequence number of the steps does not limit the execution order between the steps of the method.
  • the steps performed in the Video Thread and the steps performed in the Main Thread are parallel.
  • FIG. 10 is a structural diagram of an exemplary image data processing apparatus provided by an embodiment of the application.
  • the device includes a processor, and software instructions run on the processor to form a framework layer, a hardware abstraction layer, and a driver layer.
  • the device further includes a transmission interface through which the processor receives data sent by other devices or sends data to other devices.
• The transmission interface may be, for example, an HDMI interface, a V-By-One interface, an eDP interface, a MIPI interface, a DP interface, or a Universal Serial Bus (USB) interface, etc.
  • These interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces, which are not limited in this embodiment.
  • the device can be coupled with hardware resources such as a display, media hardware, or graphics hardware through a connector, a transmission line, or a bus.
  • the device can be a processor chip with image or video processing functions.
  • the device, media hardware, and graphics hardware can be integrated on one chip.
  • the device, media hardware, graphics hardware, and display may be integrated on one terminal. It should be understood that the image processing framework shown in FIGS. 5 and 6 may be run on the device shown in FIG. 10, and the device shown in FIG. 10 may be used to implement the method embodiments corresponding to FIGS. 7-9.
  • the framework layer includes SurfaceFlinger
  • the hardware abstraction layer includes graphics hardware abstraction and media hardware abstraction
  • the driver layer includes media hardware drivers, graphics hardware drivers, and display drivers.
  • the graphics hardware abstraction corresponds to the graphics hardware driver
  • the media hardware abstraction corresponds to the media hardware driver
  • the graphics hardware can be called through the graphics hardware abstraction and the graphics hardware driver
  • the media hardware can be called through the media hardware abstraction and the media hardware driver.
• In other words, the graphics hardware abstraction is accessed to call the graphics hardware driver, thereby realizing the call to the graphics hardware; and the media hardware abstraction is accessed to call the media hardware driver, thereby realizing the call to the media hardware.
• The framework layer is used, in the first thread, to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, to synthesize the multi-layer graphics layer, and to dig holes in at least one of the multi-layer graphics layers to obtain a synthesized graphics layer, where the synthesized graphics layer includes a hole-digging area;
• The framework layer is also used, in the second thread, to call the Media HW through the media hardware abstraction and the media hardware driver to process the video layer to obtain the processed video layer; the processed video layer can be displayed through the hole-digging area;
  • the display driver is used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
  • the framework layer is also used to: in the first thread, set the size-related information of the video layer, and send the size-related information of the video layer to the media hardware abstraction;
  • the framework layer is specifically used to, in the first thread, call the graphics hardware to synthesize the multi-layer graphics layer according to the size-related information of the video layer, and to dig holes for at least one graphics layer to obtain the synthesized graphics layer;
• and, in the second thread, the Media HW is called to process the video layer according to the size-related information of the video layer to obtain the processed video layer.
• Optionally, the framework layer is specifically used to, based on the first vertical synchronization signal, call the graphics hardware in the first thread to synthesize the multi-layer graphics layer and perform hole-digging processing on at least one graphics layer;
  • the display driver is specifically used to superimpose the synthesized graphics layer and the processed video layer based on the second vertical synchronization signal to obtain display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  • the framework layer is specifically used to send the size-related information of the video layer to the media hardware abstraction in the first thread when the effective signal of the first vertical synchronization signal arrives. Then the graphics hardware is called to synthesize the multi-layer graphics layer, and the at least one graphics layer is digged according to the size-related information of the video layer.
  • the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
• Optionally, when the second thread obtains the first video buffer (Video Buffer), it sends the first notification information to the first thread; the framework layer is also used to, after receiving the first notification information in the first thread, obtain the size of the first Video Buffer in the first thread and set the size-related information of the video layer according to the size of the first Video Buffer.
• Or, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread; the framework layer is also used to, after the first thread receives the first notification information, obtain the size of the changed Video Buffer in the first thread and set the size-related information of the video layer according to the size of the changed Video Buffer. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  • the graphics hardware includes an HWC and a GPU
  • FIG. 11 is a schematic structural diagram of another exemplary image processing apparatus provided in an embodiment of the present application.
  • the hardware abstraction layer includes the HWC abstraction
  • the driver layer includes the HWC driver and the GPU driver.
• The framework layer is specifically used to: in the first thread, call the HWC through the HWC abstraction and the HWC driver to synthesize the multi-layer graphics layer and dig at least one graphics layer to obtain the synthesized graphics layer. The HWC abstraction is used to send the synthesized graphics layer to the display driver, and the media hardware abstraction is also used to send the processed video layer to the display driver.
  • the HWC abstraction sends the instruction information of the FrameBuffer storing the synthesized graphics layer to the display driver
  • the media hardware abstraction sends the instruction information of the VideoBuffer storing the processed video layer to the display driver.
  • the SurfaceFlinger of the framework layer is used to create the first thread and the second thread in the initialization phase; SurfaceFlinger is also used to notify the first thread to process the Graphic Buffer when the Graphic Buffer is received.
• SurfaceFlinger is also used to, when the Video Buffer is received, notify the second thread to process the video layer data in the Video Buffer; one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
• The Media HW abstraction is specifically used to: receive the first frame of video layer data in the second thread and receive the size-related information of the video layer in the first thread; the framework layer is specifically used to: in the second thread, call the Media HW through the Media HW abstraction to process the first frame of video layer data according to the size-related information of the video layer.
• FIG. 12 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of this application.
  • the device includes: a framework layer, graphics hardware, media hardware, and a display driver.
  • the framework layer and the display driver are part of the operating system formed by software instructions running on the processor.
  • Graphics hardware and media hardware can be coupled with the processor through connectors, interfaces, transmission lines or buses. These interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces.
  • the graphics hardware may include GPU and HWC.
  • the framework layer is used in the first thread to call the graphics hardware to synthesize the multi-layer graphics layer and to dig holes for at least one of the multi-layer graphics layers to obtain the synthesized graphics layer, the synthesized graphics layer Including the digging area;
  • This framework layer is also used in the second thread to call Media HW to process the video layer to obtain the processed video layer; the processed video layer can be displayed through the digging area;
  • the display driver is used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
  • the software instructions running on the processor are also used to form a hardware abstraction layer, graphics hardware drivers, and media hardware drivers.
• The hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware and a media hardware abstraction corresponding to the media hardware.
• The framework layer is specifically used to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
  • the framework layer is also used to: in the first thread, set the size-related information of the video layer, and send the size-related information of the video layer to Media HW; the framework layer is specifically used Therefore, in the first thread, the graphics hardware is called to synthesize the multi-layer graphics layer according to the information related to the size of the video layer and the at least one graphics layer is digged to obtain the synthesized graphics layer; In the second thread, the Media HW is called to process the video layer according to the size-related information of the video layer to obtain the processed video layer.
  • the framework layer is specifically used to, based on the first vertical synchronization signal, call the graphics hardware in the first thread to synthesize the multi-layer graphics layer and perform the at least one graphics layer Digging processing;
  • the display driver is specifically used to, when the second vertical synchronization signal arrives, superimpose the synthesized graphics layer and the processed video layer to obtain the display data; wherein, the first vertical synchronization signal and The second vertical synchronization signals are independent of each other.
• Optionally, the framework layer is specifically used to, when the effective signal of the first vertical synchronization signal arrives, first send the size-related information of the video layer to the Media HW in the first thread, and then call the graphics hardware to synthesize the multi-layer graphics layer and perform hole-digging processing on the at least one graphics layer according to the size-related information of the video layer.
  • the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
• Optionally, when the second thread obtains the first Video Buffer, it sends the first notification information to the first thread; the framework layer is also used to, after receiving the first notification information in the first thread, obtain the size of the first Video Buffer in the first thread and set the size-related information of the video layer according to the size of the first Video Buffer. Or, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread; the framework layer is also used to, after the first thread receives the first notification information, obtain the size of the changed Video Buffer in the first thread and set the size-related information of the video layer according to the size of the changed Video Buffer. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  • the graphics hardware includes a hardware synthesizer HWC
• The framework layer is specifically used to: in the first thread, call the HWC to synthesize the multi-layer graphics layer and perform hole-digging processing on at least one graphics layer to obtain the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the HWC is also used to send the synthesized graphics layer to the display driver, and the Media HW is also used to send the processed video layer to the display driver.
• the graphics hardware includes a graphics processor (GPU); the framework layer is specifically used to: in the first thread, call the GPU to synthesize the multi-layer graphics layer and perform digging processing on the at least one graphics layer; the GPU is further used to return the synthesized graphics layer to the framework layer; the framework layer is further used to send the synthesized graphics layer to the HWC; the HWC is further used to send the synthesized graphics layer to the display driver; and the Media HW is further used to send the processed video layer to the display driver.
• the framework layer includes SurfaceFlinger, which is used to create the first thread and the second thread in the initialization phase; the SurfaceFlinger is further used to, when a Graphic Buffer is received, notify the first thread to process the graphics layer data in the Graphic Buffer, and when a Video Buffer is received, notify the second thread to process the video layer data in the Video Buffer; wherein the Graphic Buffer stores one layer of graphics layer data and the Video Buffer stores one layer of video layer data.
• the Media HW is specifically used to: receive the first frame of video layer data in the second thread, and receive the size-related information of the video layer in the first thread; the framework layer is specifically used to: in the second thread, call the Media HW to process the first frame of video layer data according to the size-related information of the video layer.
• the embodiments shown in FIGS. 5 and 6 may run on the device shown in FIG. 12, and the device shown in FIG. 12 may be used to implement the method embodiments corresponding to FIGS. 7-9; for relevant details, reference may be made to the descriptions of the foregoing method embodiments.
• FIG. 13 is a schematic structural diagram of another exemplary image processing apparatus provided by an embodiment of this application.
  • the device includes a processor, graphics hardware, and media hardware, and software instructions run on the processor to form a framework layer and a display driver;
• the framework layer is used to, in the first thread, invoke the graphics hardware to synthesize multiple graphics layers and perform digging processing on at least one graphics layer to obtain a synthesized graphics layer, where the synthesized graphics layer includes the digging area;
  • the framework layer is used to call the media hardware to process the video layer in the second thread to obtain the processed video layer;
  • the display driver is used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
• the software instructions running on the processor are also used to form a hardware abstraction layer and a driver layer; the hardware abstraction layer includes a graphics hardware abstraction and a media hardware abstraction, and the driver layer includes a graphics hardware driver and a media hardware driver; the graphics hardware and the media hardware can be coupled with the processor through connectors, interfaces, transmission lines, or buses.
• the graphics hardware includes an HWC and a GPU, as shown in FIG. 14; correspondingly, the hardware abstraction layer includes an HWC abstraction, and the driver layer includes an HWC driver and a GPU driver.
• the embodiments of the present application also provide a computer-readable storage medium that stores instructions; when the instructions run on a computer or a processor, the computer or the processor performs some or all of the functions of the methods provided in the embodiments of the present application.
• the embodiments of the present application also provide a computer program product containing instructions; when the instructions run on a computer or a processor, the computer or the processor is enabled to perform some or all of the functions of the methods provided in the embodiments of the present application.

Abstract

An image data processing method and apparatus. According to the method and apparatus, the compositing and matting processing of the graphics layers and the processing of the video layer are performed in parallel in two threads. The processing of the video layer is therefore no longer affected by the composition of the graphics layers, so the problem of frame loss caused by the time-consuming composition of the graphics layers during video playback can be effectively solved. The method comprises: in a first thread, compositing a multi-layer graphics layer and performing matting processing on at least one graphics layer of the multi-layer graphics layer, so as to obtain a composited graphics layer, wherein the composited graphics layer comprises a matting area; processing a video layer in a second thread so as to obtain a processed video layer; and superimposing the composited graphics layer and the processed video layer so as to obtain display data.

Description

Device and method for image data processing
Technical field
This application relates to the field of display technology, and in particular, to a device and method for image data processing.
Background technique
The content that can be displayed by a display device includes graphics data and video data. The graphics data includes, for example, status bar, navigation bar, and icon data. Generally speaking, the status bar, navigation bar, and icon data each correspond to a graphics layer, and video data corresponds to a video layer. When multiple pieces of graphics data and video data are displayed at the same time, the picture that the user sees on the display is the result of compositing the multiple graphics layers and the video layer.
However, in complex scenarios, the composition of the graphics layers is relatively time-consuming and usually exceeds one vertical synchronization (Vsync) period, resulting in frame loss during video playback.
Summary of the invention
The embodiments of the present application provide an image data processing device and method, which are used to solve the problem of frame loss caused by the time-consuming composition of graphics layers during video playback.
The first aspect of the present application provides an image data processing method. The method includes: synthesizing multiple graphics layers in a first thread and performing digging processing on at least one of the multiple graphics layers to obtain a synthesized graphics layer, where the synthesized graphics layer includes a digging area; processing a video layer in a second thread to obtain a processed video layer, where the processed video layer can be displayed through the digging area; and superimposing the synthesized graphics layer and the processed video layer to obtain display data.
It should be understood that the synthesized graphics layer data includes a digging area, and the digging area is usually set to be transparent, so that when the graphics layer and the video layer are combined, the video layer can be displayed through the digging area. In an optional case, holes may first be dug in the graphics layers below the video layer and the multiple graphics layers then combined into one graphics layer; alternatively, the multiple graphics layers may first be combined into one graphics layer and the hole then dug in the combined graphics layer.
In the embodiments of this application, the composition and digging processing of the graphics layers and the processing of the video layer are performed in parallel in two threads. Therefore, the processing of video images is no longer affected by the composition of the graphics layers, video playback is no longer affected by it either, and the problem of frame loss caused by the time-consuming composition of graphics layers during video playback can be effectively solved.
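To make the two-thread split concrete, the following is a minimal C++ sketch, not the patented implementation: all types, pixel formats, and sizes are simplified placeholders. One thread composites the graphics layers and digs (punches) a fully transparent hole, while a second thread processes the video layer; the display driver stage would then superimpose the two results.

```cpp
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder layer: a flat array of 32-bit ARGB pixels (alpha in the top byte).
struct Layer {
    int width, height;
    std::vector<uint32_t> pixels;
    Layer(int w, int h, uint32_t fill) : width(w), height(h), pixels(w * h, fill) {}
};

// First thread: composite all graphics layers into one and dig a fully
// transparent hole where the video layer must show through.
Layer compositeAndDigHole(const std::vector<Layer>& layers,
                          int hx, int hy, int hw, int hh) {
    Layer out = layers.front();
    for (std::size_t i = 1; i < layers.size(); ++i)        // crude "src over"
        for (std::size_t p = 0; p < out.pixels.size(); ++p)
            if (layers[i].pixels[p] >> 24)                 // non-zero alpha wins
                out.pixels[p] = layers[i].pixels[p];
    for (int y = hy; y < hy + hh; ++y)                     // hole: alpha = 0
        for (int x = hx; x < hx + hw; ++x)
            out.pixels[y * out.width + x] = 0;
    return out;
}

// Second thread: stand-in for the Media HW processing (scale/rotate/convert).
Layer processVideo(const Layer& v) { return v; }

int main() {
    std::vector<Layer> graphics{Layer(320, 240, 0xFF202020),   // status bar, etc.
                                Layer(320, 240, 0x00000000)};  // icon layer
    Layer video(320, 240, 0xFF0000FF);

    Layer composed(1, 1, 0), processed(1, 1, 0);
    std::thread graphicsThread([&] {
        composed = compositeAndDigHole(graphics, 80, 60, 160, 120);
    });
    std::thread videoThread([&] { processed = processVideo(video); });
    graphicsThread.join();
    videoThread.join();
    // The display driver would now superimpose `composed` over `processed`;
    // the video is visible wherever the hole (alpha == 0) was dug.
}
```

Because the two stages run on separate threads, a slow graphics composition no longer blocks the video path, which is the core point of the method.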
In a possible implementation manner, before synthesizing the multiple graphics layers and performing digging processing on at least one graphics layer in the first thread to obtain the synthesized graphics layer, the method further includes: setting the size-related information of the video layer in the first thread. The performing digging processing on at least one graphics layer specifically includes: performing digging processing on the at least one graphics layer according to the size-related information of the video layer. The processing the video layer in the second thread to obtain the processed video layer specifically includes: in the second thread, processing the video layer according to the size-related information of the video layer to obtain the processed video layer.
It should be understood that the size-related information of the video layer may include the size of the video layer and the position information of the video layer. For example, the size-related information may include the coordinates of one vertex of the video layer and two lengths indicating the length and width of the video layer; it may also include the coordinates of two vertices of the video layer and one length, from which the size and playing position of the video layer can be uniquely determined; and if the position information is the coordinates of the four vertices of the video layer, the size of the video layer can be determined from those four vertex coordinates, in which case the size-related information may include only the position information. The size-related information of the video layer is calculated by SurfaceFlinger. For example, the multimedia framework sets the initial size of the video layer through the system's own API and sends it to SurfaceFlinger; SurfaceFlinger can also capture or perceive the user's operations such as zooming in, zooming out, or rotating the video, and combines the initial size of the video layer sent by the multimedia framework with these operations to calculate the size-related information of the video layer.
In the embodiments of this application, since setting the size of the video layer and digging the hole in the graphics layers below the video layer are completed successively in the same thread, that is, the sizes of both the processed video layer and the digging area are derived from the same set video layer size, it can be guaranteed that the size of the digging area in the graphics layer is exactly the same as the size of the video layer. The processed video layer and the digging area therefore match exactly, so that the processed video layer can be displayed synchronously through the digging area.
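As an illustration only, the size-related information could be modeled as a small structure. The names VideoLayerInfo and holeRectFor below are hypothetical; the point is merely that the hole rectangle and the video-processing step are derived from the same record set in the first thread, so they cannot disagree.

```cpp
// Hypothetical encoding of the "size-related information" of a video layer:
// one vertex plus a width and a height (one of the variants named above).
struct VideoLayerInfo {
    int x, y;          // top-left vertex of the video layer on the display
    int width, height; // length and width of the video layer
};

struct Rect { int x, y, w, h; };

// The same VideoLayerInfo, set once in the first thread, drives both the
// hole digging in the graphics layers and the Media HW processing of the
// video layer, so the hole and the processed video always coincide.
Rect holeRectFor(const VideoLayerInfo& info) {
    return Rect{info.x, info.y, info.width, info.height};
}

int main() {
    VideoLayerInfo info{80, 60, 160, 120}; // e.g. from SurfaceFlinger's calculation
    Rect hole = holeRectFor(info);         // identical rectangle for both consumers
    (void)hole;
}
```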
In a possible implementation manner, the multiple graphics layers are synthesized and the digging processing is performed on the at least one graphics layer in the first thread based on a first vertical synchronization signal; the synthesized graphics layer and the processed video layer are superimposed to obtain the display data based on a second vertical synchronization signal; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
It should be understood that the first vertical synchronization signal and the second vertical synchronization signal are two mutually independent periodic signals, and they may have different frame rates and different periods. Specifically, when the effective signal of the first vertical synchronization signal arrives, the multiple graphics layers are synthesized and the digging processing is performed on the at least one graphics layer in the first thread; when the effective signal of the second vertical synchronization signal arrives, the synthesized graphics layer and the processed video layer are superimposed to obtain the display data. Each of the two signals may be active-high or active-low, and may be level-triggered, rising-edge-triggered, or falling-edge-triggered. The arrival of the effective signal of a Vsync signal can be understood as any of: the arrival of its rising edge, the arrival of its falling edge, the signal being at a high level, or the signal being at a low level. For example, the first vertical synchronization signal may be the Graphic Vsync in the foregoing embodiments, and the second vertical synchronization signal may be the Display Vsync in the foregoing embodiments.
In a possible implementation manner, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In the embodiments of this application, the signal that controls the composition of the multiple graphics layers is the first vertical synchronization signal, while the signal that controls the composition of the display data and the refresh rate of the display device is the second vertical synchronization signal. Because the two signals are independent of each other, the frame rate of the second vertical synchronization signal can be greater than that of the first vertical synchronization signal, so this image processing architecture can support the playback of high-frame-rate video whose video frame rate is higher than the graphics refresh rate.
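A rough user-space analogy of two independent Vsync sources with different frame rates is sketched below (30 fps and 60 fps are assumed example values; a real system would derive these signals from display hardware rather than from sleeping threads):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Software stand-in for an independent Vsync source: it runs in its own
// thread at its own frame rate, so the display/video rate can exceed the
// graphics-composition rate.
void runVsync(const char* name, int fps, int ticks) {
    const auto period = std::chrono::microseconds(1000000 / fps);
    for (int i = 0; i < ticks; ++i) {
        std::this_thread::sleep_for(period);
        std::printf("%s: effective signal %d\n", name, i); // trigger work here
    }
}

int main() {
    // Graphic Vsync at 30 fps paces composition + digging in the first
    // thread; Display Vsync at 60 fps paces superimposition and refresh.
    std::thread graphicVsync(runVsync, "GraphicVsync", 30, 30);
    std::thread displayVsync(runVsync, "DisplayVsync", 60, 60);
    graphicVsync.join();
    displayVsync.join();
}
```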
In a possible implementation manner, when the effective signal of the first vertical synchronization signal arrives, in the first thread, the size-related information of the video layer is set first, and then the multiple graphics layers are synthesized and the digging processing is performed on the at least one graphics layer according to the size-related information of the video layer.
In the embodiments of this application, setting the size-related information of the video layer in the first thread is performed only after the effective signal of the vertical synchronization signal arrives; setting the size-related information of the video layer and the composition of the graphics layers are performed successively after the effective signal of the same vertical synchronization signal arrives.
In a possible implementation manner, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread, and in the first thread, the size-related information of the video layer is set according to the size of the first Video Buffer; or, when the second thread detects that the size of the Video Buffer has changed, it sends first notification information to the first thread, and in the first thread, the size-related information of the video layer is reset according to the size of the changed Video Buffer.
It should be understood that each Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the stored video layer. Therefore, when the size of the video layer changes, the size of the Video Buffer also changes. The first notification information is used to inform the first thread that the size of the video layer has changed.
In a possible implementation manner, inter-thread communication can be performed between the first thread and the second thread. When the first Video Buffer is received or the size of the Video Buffer changes, the second thread can notify the first thread, so that the size-related information of the video layer can be reset in the first thread and the hole can be re-dug in the graphics layer according to the changed size. This ensures that the resized video layer can still be displayed through the digging area of the graphics layer, that is, that the video layer and the graphics layer can still be displayed in synchronized match when the size of the Video Buffer changes. In an optional case, when the size of the Video Buffer changes, the size-related information of the video data calculated by SurfaceFlinger does not necessarily change.
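The notification from the second thread to the first could be realized with an ordinary condition-variable mailbox, as in the hedged sketch below; SizeMailbox and its members are invented names, and the real mechanism between the two SurfaceFlinger threads is not specified here.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>
#include <thread>
#include <utility>

// Minimal sketch of the cross-thread notification: the second (video)
// thread posts the new buffer size, and the first (graphics) thread resets
// the video-layer size information and re-digs the hole accordingly.
struct SizeMailbox {
    std::mutex m;
    std::condition_variable cv;
    std::optional<std::pair<int, int>> newSize; // width, height

    void notifySizeChanged(int w, int h) {  // called from the second thread
        {
            std::lock_guard<std::mutex> lk(m);
            newSize = std::make_pair(w, h);
        }
        cv.notify_one();
    }
    std::pair<int, int> waitForSize() {     // called from the first thread
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return newSize.has_value(); });
        std::pair<int, int> s = *newSize;
        newSize.reset();
        return s;
    }
};

int main() {
    SizeMailbox box;
    std::thread second([&] {
        // First Video Buffer obtained, or its size changed: notify thread 1.
        box.notifySizeChanged(1920, 1080);
    });
    std::thread first([&] {
        std::pair<int, int> wh = box.waitForSize();
        // ...reset the video-layer size info and re-dig the hole at wh...
        (void)wh;
    });
    second.join();
    first.join();
}
```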
In a possible implementation manner, the synthesizing the multiple graphics layers and performing digging processing on at least one graphics layer in the first thread to obtain the synthesized graphics layer specifically includes: in the first thread, the hardware composer (HWC) synthesizes the multiple graphics layers and performs digging processing on the at least one graphics layer to obtain the synthesized graphics layer.
It should be understood that when the hardware resources of the HWC are called to perform graphics layer composition and digging processing, specifically, the SurfaceFlinger of the framework layer calls the HWC driver by accessing the HWC abstraction of the hardware abstraction layer, thereby realizing the call to the HWC hardware resources.
In a possible implementation manner, in the first thread, indication information of multiple graphic buffers (Graphic Buffers) is sent to the HWC, where one Graphic Buffer stores one layer of graphics layer data; in the first thread, the HWC obtains the multiple layers of graphics layer data from the Graphic Buffers.
In a possible implementation manner, the synthesizing the multiple graphics layers and performing digging processing on at least one graphics layer in the first thread to obtain the synthesized graphics layer specifically includes: in the first thread, the graphics processor (GPU) synthesizes the multiple graphics layers and performs digging processing on the at least one graphics layer to obtain the synthesized graphics layer.
When the HWC does not support graphics layer composition and graphics layer digging processing, the hardware resources of the GPU can be called to perform the graphics layer composition and digging.
In a possible implementation manner, after the size-related information of the video layer is set in the first thread, the method further includes: in the first thread, sending the size-related information of the video layer to the Media HW; in the second thread, the Media HW processes the video layer according to the size-related information of the video layer to obtain the processed video layer.
In a possible implementation manner, the Media HW first receives the size-related information of the video layer in the first thread, then receives the first frame of video layer data in the second thread, and then processes the first frame of video layer data according to the received size-related information. In another case, the Media HW first receives the first frame of video layer data in the second thread and only afterwards receives the size-related information of the video layer in the first thread; in this case, the Media HW does not process the first frame immediately upon receiving it, but waits until the size-related information of the video layer has been received before processing the first frame of video layer data, so as to avoid errors in processing the first frame.
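This ordering rule for the first frame can be captured in a few lines. The following sketch holds the first frame back until the size-related information has arrived; MediaHwFrontend is a hypothetical wrapper for illustration, not an actual Media HW interface.

```cpp
#include <mutex>
#include <optional>
#include <vector>

struct VideoFrame { std::vector<unsigned char> data; };
struct VideoLayerInfo { int x, y, width, height; };

// If the first frame arrives (second thread) before the size information
// (first thread), it is deferred, not dropped, and is processed as soon as
// the size information is known.
class MediaHwFrontend {
public:
    void onSizeInfo(const VideoLayerInfo& info) {        // first thread
        std::lock_guard<std::mutex> lk(m_);
        info_ = info;
        if (pendingFirstFrame_) {
            process(*pendingFirstFrame_, *info_);
            pendingFirstFrame_.reset();
        }
    }
    void onFrame(const VideoFrame& f) {                  // second thread
        std::lock_guard<std::mutex> lk(m_);
        if (!info_) { pendingFirstFrame_ = f; return; }  // defer, don't drop
        process(f, *info_);
    }
private:
    void process(const VideoFrame&, const VideoLayerInfo&) {
        // scale/position the frame according to the size information
    }
    std::mutex m_;
    std::optional<VideoLayerInfo> info_;
    std::optional<VideoFrame> pendingFirstFrame_;
};

int main() {
    MediaHwFrontend hw;
    hw.onFrame(VideoFrame{});                         // frame first: deferred
    hw.onSizeInfo(VideoLayerInfo{0, 0, 1920, 1080});  // now it is processed
}
```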
In a possible implementation manner, the superimposing the synthesized graphics layer and the processed video layer to obtain the display data specifically includes: the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data.
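At the pixel level, the superimposition can be pictured as follows. This sketch simplifies the display driver's blending to a binary alpha test: the processed video layer shows through exactly where the hole was dug (where the alpha of the synthesized graphics layer is zero). A real driver would typically perform full alpha blending, possibly in dedicated overlay hardware.

```cpp
#include <cstdint>
#include <vector>

// The synthesized graphics layer sits on top; wherever its alpha (top byte)
// is zero, the processed video layer underneath becomes visible.
std::vector<uint32_t> superimpose(const std::vector<uint32_t>& graphics,
                                  const std::vector<uint32_t>& video) {
    std::vector<uint32_t> display(graphics.size());
    for (std::size_t i = 0; i < graphics.size(); ++i) {
        uint32_t g = graphics[i];
        display[i] = (g >> 24) ? g          // opaque graphics pixel wins
                               : video[i];  // transparent hole shows video
    }
    return display;
}

int main() {
    std::vector<uint32_t> graphics{0xFF101010, 0x00000000}; // pixel 1 = hole
    std::vector<uint32_t> video{0xFF0000FF, 0xFF00FF00};
    std::vector<uint32_t> display = superimpose(graphics, video);
    (void)display; // display[1] shows the video pixel through the hole
}
```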
In a possible implementation manner, the method further includes: SurfaceFlinger creates the first thread and the second thread in the initialization phase; when the SurfaceFlinger receives a graphic buffer (Graphic Buffer), it notifies the first thread to process the graphics layer data in the Graphic Buffer; when the SurfaceFlinger receives a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer; where the Graphic Buffer stores one layer of graphics layer data and the Video Buffer stores one layer of video layer data.
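The dispatch rule could look like the sketch below, where BufferKind is an assumed tag distinguishing the two buffer types; a production version would protect each queue with a lock and wake the owning thread, as in the mailbox example earlier.

```cpp
#include <queue>

// Assumed tagged buffer: graphic buffers go to the first (graphics) thread's
// queue, video buffers to the second (video) thread's queue.
enum class BufferKind { Graphic, Video };
struct Buffer { BufferKind kind; /* handle, size, ... */ };

struct Dispatcher {
    std::queue<Buffer> graphicQueue; // drained by the first thread
    std::queue<Buffer> videoQueue;   // drained by the second thread

    void onBufferReceived(const Buffer& b) {
        if (b.kind == BufferKind::Graphic)
            graphicQueue.push(b);    // then notify the first thread
        else
            videoQueue.push(b);      // then notify the second thread
    }
};

int main() {
    Dispatcher d;                              // queues created at initialization
    d.onBufferReceived({BufferKind::Graphic}); // routed to the first thread
    d.onBufferReceived({BufferKind::Video});   // routed to the second thread
}
```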
The second aspect of the present application provides an image data processing method. The method includes: in a first thread, SurfaceFlinger calls graphics hardware to synthesize multiple graphics layers and perform digging processing on at least one graphics layer, to obtain a synthesized graphics layer, where the synthesized graphics layer includes a digging area; in a second thread, SurfaceFlinger calls media hardware to process a video layer, to obtain a processed video layer, where the processed video layer can be displayed through the digging area; and a display driver superimposes the synthesized graphics layer and the processed video layer to obtain display data.
In a possible implementation manner, before the SurfaceFlinger calls graphics hardware resources to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer to obtain the synthesized graphics layer, the method further includes: in the first thread, the SurfaceFlinger sends the calculated size-related information of the video layer to the Media HW. The SurfaceFlinger calling graphics hardware resources to perform digging processing on at least one graphics layer specifically includes: in the first thread, the SurfaceFlinger calls the graphics hardware to perform digging processing on the at least one graphics layer according to the size-related information of the video layer, to obtain the synthesized graphics layer. The SurfaceFlinger calling media hardware in the second thread to process the video layer specifically includes: in the second thread, the SurfaceFlinger calls the Media HW to process the video layer according to the size-related information of the video layer, to obtain the processed video layer.
In a possible implementation manner, when the effective signal of the first vertical synchronization signal arrives, the SurfaceFlinger calls the hardware resources to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer in the first thread; when the effective signal of the second vertical synchronization signal arrives, the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, when the effective signal of the first vertical synchronization signal arrives, in the first thread, the calculated size-related information of the video layer is first sent to the Media HW, and then the SurfaceFlinger calls the hardware resources to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer according to the size-related information of the video layer.
In a possible implementation manner, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In a possible implementation manner, the method further includes: when the second thread obtains the first Video Buffer, sending first notification information to the first thread; in the first thread, SurfaceFlinger obtains the size of the first Video Buffer and sets the size-related information of the video layer according to the size of the first Video Buffer; or, when the second thread detects that the size of the Video Buffer has changed, sending first notification information to the first thread; in the first thread, SurfaceFlinger obtains the size of the changed Video Buffer and resets the size-related information of the video layer according to the size of the changed Video Buffer; where the Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In a possible implementation manner, the SurfaceFlinger calling graphics hardware in the first thread to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer to obtain the synthesized graphics layer specifically includes: in the first thread, the SurfaceFlinger calls the hardware composer (HWC) to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer, to obtain the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the method further includes: the HWC sends the synthesized graphics layer to the display driver, and the Media HW sends the processed video layer to the display driver.
It should be understood that when the HWC obtains the synthesized graphics layer, the HWC abstraction sends the indication information of the FrameBuffer storing the synthesized graphics layer to the display driver; after the Media HW obtains the processed video layer, the Media HW sends the indication information of the Video Buffer storing the processed video layer to the display driver.
In a possible implementation manner, the SurfaceFlinger calling graphics hardware in the first thread to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer to obtain the synthesized graphics layer specifically includes: in the first thread, the SurfaceFlinger calls the graphics processor (GPU) to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer, to obtain the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the method further includes: the GPU returns the synthesized graphics layer to the SurfaceFlinger; the SurfaceFlinger sends the synthesized graphics layer to the HWC; the HWC sends the synthesized graphics layer to the display driver; and the Media HW sends the processed video layer to the display driver.
It should be understood that if the graphics layer composition and digging processing are performed by the GPU, the GPU needs to return the synthesized graphics layer to SurfaceFlinger first; SurfaceFlinger then sends the indication information of the FrameBuffer storing the synthesized graphics layer to the HWC abstraction, and the HWC abstraction in turn sends it to the display driver.
In a possible implementation manner, the method further includes: SurfaceFlinger creates the first thread and the second thread in the initialization phase; when the buffer received by the SurfaceFlinger is a Graphic Buffer, it notifies the first thread to process the graphics layer data in the Graphic Buffer; when the buffer received by the SurfaceFlinger is a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer; where one Graphic Buffer stores one layer of graphics layer data and one Video Buffer stores one layer of video layer data.
In a possible implementation manner, the Media HW receives the first frame of video layer data in the second thread; the Media HW receives the size-related information of the video layer in the first thread; and the Media HW processes the first frame of video layer data in the second thread according to the size-related information of the video layer.
It should be understood that if the Media HW receives the first frame of video layer data in the second thread before receiving the size-related information of the video layer, then after the Media HW receives the size-related information of the video layer in the first thread, the Media HW processes the first frame of video layer data in the second thread according to the size-related information of the video layer.
The third aspect of the present application provides an image data processing device. The device includes a processor on which software instructions run to form a framework layer, a hardware abstraction layer (HAL), and a driver layer; the HAL includes a graphics hardware abstraction and a media hardware (Media HW) abstraction, and the driver layer includes a graphics hardware driver, a media hardware driver, and a display driver. The framework layer is used to, in a first thread, call graphics hardware through the graphics hardware abstraction and the graphics hardware driver to synthesize multiple graphics layers and perform digging processing on at least one of the multiple graphics layers, to obtain a synthesized graphics layer, where the synthesized graphics layer includes a digging area. The framework layer is used to, in a second thread, call the media hardware (Media HW) through the media hardware abstraction and the media hardware driver to process a video layer, to obtain a processed video layer, where the processed video layer can be displayed through the digging area. The display driver is used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
Optionally, the device further includes a transmission interface, through which the processor receives data sent by other devices or sends data to other devices. The device can be coupled with hardware resources such as a display, media hardware, or graphics hardware through connectors, transmission lines, or buses. The device can be a processor chip with image or video processing functions. In an optional case, the device, the media hardware, and the graphics hardware can be integrated on one chip. In another optional case, the device, the media hardware, the graphics hardware, and the display can be integrated in one terminal. The graphics hardware abstraction corresponds to the graphics hardware driver, and the media hardware abstraction corresponds to the media hardware driver; the graphics hardware can be called through the graphics hardware abstraction and the graphics hardware driver, and the media hardware can be called through the media hardware abstraction and the media hardware driver. For example, the graphics hardware driver is called by accessing the graphics hardware abstraction to realize the call to the graphics hardware, and the media hardware driver is called by accessing the media hardware abstraction to realize the call to the media hardware.
It should be understood that the synthesized graphics layer is stored in a FrameBuffer. After the graphics hardware obtains the synthesized graphics layer, the graphics hardware abstraction sends the indication information of the FrameBuffer to the display driver, and the display driver can read the synthesized graphics layer data from the corresponding memory space according to the indication information of the FrameBuffer. After the media hardware obtains the processed video layer, the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver, and the display driver can read the processed video layer data from the corresponding memory space according to the indication information of the Video Buffer.
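The layering can be pictured as three thin wrappers, one per layer; the class names below are illustrative stand-ins, not the real Android interfaces. The framework never touches hardware directly: it goes through a HAL abstraction, which in turn goes through a driver.

```cpp
// Illustrative sketch of the framework -> HAL abstraction -> driver chain.
struct GraphicsDriver {            // driver layer
    void submitComposition() { /* program the HWC / GPU */ }
};
struct GraphicsHalAbstraction {    // hardware abstraction layer
    GraphicsDriver driver;
    void composite() { driver.submitComposition(); }
};
struct FrameworkLayer {            // framework layer (e.g. SurfaceFlinger)
    GraphicsHalAbstraction hal;
    void compositeGraphicsLayers() { hal.composite(); }
};

int main() {
    FrameworkLayer fw;
    fw.compositeGraphicsLayers();  // framework -> HAL abstraction -> driver
}
```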
In a possible implementation manner, the framework layer is further used to: in the first thread, set the size-related information of the video layer and send the size-related information of the video layer to the media hardware abstraction. The framework layer is specifically used to: in the first thread, call the graphics hardware to synthesize the multiple graphics layers according to the size-related information of the video layer and perform digging processing on the at least one graphics layer, to obtain the synthesized graphics layer; and in the second thread, call the Media HW to process the video layer according to the size-related information of the video layer, to obtain the processed video layer.
In a possible implementation manner, the framework layer is specifically used to call the graphics hardware based on the first vertical synchronization signal to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer in the first thread; the display driver is specifically used to superimpose the synthesized graphics layer and the processed video layer based on the second vertical synchronization signal to obtain the display data; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, the framework layer is specifically used to, when the effective signal of the first vertical synchronization signal arrives, in the first thread, first send the size-related information of the video layer to the media hardware abstraction, and then call the graphics hardware to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer according to the size-related information of the video layer.
In a possible implementation manner, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In a possible implementation manner, when the second thread obtains the first Video Buffer, it sends first notification information to the first thread; the framework layer is further used to, after the first thread receives the first notification information, obtain the size of the first Video Buffer in the first thread and set the size-related information of the video layer according to the size of the first Video Buffer.
Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends first notification information to the first thread; the framework layer is further used to, after the first thread receives the first notification information, obtain the size of the changed Video Buffer in the first thread and set the size-related information of the video layer according to the size of the changed Video Buffer; where the Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In a possible implementation manner, the graphics hardware includes an HWC and a GPU; correspondingly, the hardware abstraction layer includes an HWC abstraction, and the driver layer includes an HWC driver and a GPU driver.
The framework layer is specifically used to: in the first thread, call the HWC through the HWC abstraction and the HWC driver to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer, to obtain the synthesized graphics layer. The HWC abstraction is used to send the synthesized graphics layer to the display driver, and the media hardware abstraction is further used to send the processed video layer to the display driver.
When the HWC does not support graphics layer composition and graphics layer digging processing, the framework layer is specifically used to: in the first thread, call the GPU through the GPU driver to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer, to obtain the synthesized graphics layer. The GPU is further used to return the synthesized graphics layer to the framework layer; the framework layer is further used to send the synthesized graphics layer to the HWC abstraction; the HWC abstraction is used to send the synthesized graphics layer to the display driver; and the Media HW abstraction is further used to send the processed video layer to the display driver.
It should be understood that the HWC abstraction sends the indication information of the FrameBuffer storing the synthesized graphics layer to the display driver, and the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver.
In a possible implementation manner, the SurfaceFlinger of the framework layer is used to create the first thread and the second thread in the initialization phase; the SurfaceFlinger is further used to, when a Graphic Buffer is received, notify the first thread to process the graphics layer data in the Graphic Buffer, and when a Video Buffer is received, notify the second thread to process the video layer data in the Video Buffer; where the Graphic Buffer stores one layer of graphics layer data and the Video Buffer stores one layer of video layer data.
In a possible implementation manner, the Media HW abstraction is specifically used to: receive the first frame of video layer data in the second thread, and receive the size-related information of the video layer in the first thread; the framework layer is specifically used to: in the second thread, call the Media HW through the Media HW abstraction to process the first frame of video layer data according to the size-related information of the video layer.
It should be understood that, for the beneficial effects on the device side, reference may be made to the method side; details are not repeated here.
The fourth aspect of the present application provides an image data processing device. The device includes: a framework layer, used to, in a first thread, call graphics hardware to synthesize multiple graphics layers and perform digging processing on at least one of the multiple graphics layers, to obtain a synthesized graphics layer, where the synthesized graphics layer includes a digging area; the graphics hardware; media hardware (Media HW), where the framework layer is further used to, in a second thread, call the Media HW to process a video layer to obtain a processed video layer, and the processed video layer can be displayed through the digging area; and a display driver, used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
It should be understood that the framework layer and the display driver are layers of an operating system formed by software instructions running on a processor. The graphics hardware and the media hardware can be coupled with the processor through connectors, interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or interfaces in other forms.
In a possible implementation manner, the software instructions running on the processor are further used to form a hardware abstraction layer, a graphics hardware driver, and a media hardware driver, where the hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware and a media hardware abstraction corresponding to the media hardware. The framework layer is specifically used to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
In a possible implementation manner, the framework layer is further used to: in the first thread, set the size-related information of the video layer and send the size-related information of the video layer to the Media HW. The framework layer is specifically used to: in the first thread, call the graphics hardware to synthesize the multiple graphics layers according to the size-related information of the video layer and perform digging processing on the at least one graphics layer, to obtain the synthesized graphics layer; and in the second thread, call the Media HW to process the video layer according to the size-related information of the video layer, to obtain the processed video layer.
In a possible implementation manner, the framework layer is specifically used to, based on the first vertical synchronization signal, call the graphics hardware in the first thread to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer; the display driver is specifically used to, when the second vertical synchronization signal arrives, superimpose the synthesized graphics layer and the processed video layer to obtain the display data; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, the framework layer is specifically used to, when the effective signal of the first vertical synchronization signal arrives, in the first thread, first send the size-related information of the video layer to the Media HW, and then call the graphics hardware to synthesize the multiple graphics layers and perform digging processing on the at least one graphics layer according to the size-related information of the video layer.
In a possible implementation manner, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In a possible implementation manner, when the second thread obtains the first Video Buffer, it sends first notification information to the first thread; the framework layer is further used to, after the first thread receives the first notification information, obtain the size of the first Video Buffer in the first thread and set the size-related information of the video layer according to the size of the first Video Buffer. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends first notification information to the first thread; the framework layer is further used to, after the first thread receives the first notification information, obtain the size of the changed Video Buffer in the first thread and set the size-related information of the video layer according to the size of the changed Video Buffer; where the Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In a possible implementation manner, the graphics hardware includes a hardware composer (HWC), and the framework layer is specifically used to: in the first thread, call the HWC to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer, to obtain the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the HWC is further used to send the synthesized graphics layer to the display driver, and the Media HW is further used to send the processed video layer to the display driver.
It should be understood that sending the synthesized graphics layer to the display driver is specifically implemented by the HWC abstraction of the HWC in the hardware abstraction layer, and sending the processed video layer to the display driver is implemented by the Media HW abstraction.
In a possible implementation manner, the graphics hardware includes a graphics processor (GPU), and the framework layer is specifically used to: in the first thread, call the GPU to synthesize the multiple graphics layers and perform digging processing on at least one graphics layer, to obtain the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the GPU is further used to return the synthesized graphics layer to the framework layer; the framework layer is further used to send the synthesized graphics layer to the HWC; the HWC is further used to send the synthesized graphics layer to the display driver; and the Media HW is further used to send the processed video layer to the display driver.
In a possible implementation manner, the framework layer includes SurfaceFlinger, which is used to create the first thread and the second thread in the initialization phase; the SurfaceFlinger is further used to, when a Graphic Buffer is received, notify the first thread to process the graphics layer data in the Graphic Buffer, and when a Video Buffer is received, notify the second thread to process the video layer data in the Video Buffer; where the Graphic Buffer stores one layer of graphics layer data and the Video Buffer stores one layer of video layer data.
In a possible implementation manner, the Media HW is specifically used to: receive the first frame of video layer data in the second thread, and receive the size-related information of the video layer in the first thread; the framework layer is specifically used to: in the second thread, call the Media HW to process the first frame of video layer data according to the size-related information of the video layer.
The fifth aspect of the present application provides an image data processing device. The device includes a processor, graphics hardware, and media hardware, where software instructions run on the processor to form a framework layer and a display driver. The framework layer is used to, in a first thread, call the graphics hardware to synthesize multiple graphics layers and perform digging processing on at least one graphics layer, to obtain a synthesized graphics layer, where the synthesized graphics layer includes a digging area. The framework layer is used to, in a second thread, call the media hardware to process a video layer to obtain a processed video layer. The display driver is used to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
It should be understood that the graphics hardware and the media hardware can be coupled with the processor through connectors, interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or interfaces in other forms.
在一种可能的实施方式中,软件指令运行在处理器上还用于形成硬件抽象层、图形硬件驱动和媒体硬件驱动,其中,硬件抽象层包括与图形硬件对应的图形硬件驱动,以及与媒体硬件对应的媒体硬件驱动。框架层具体用于通过图形硬件抽象和图形硬件驱动调用图形硬件,框架层具体还用于通过媒体硬件抽象和媒体硬件驱动调用媒体硬件。In a possible implementation manner, the software instructions running on the processor are also used to form a hardware abstraction layer, graphics hardware drivers, and media hardware drivers. The hardware abstraction layer includes graphics hardware drivers corresponding to graphics hardware and media Media hardware driver corresponding to the hardware. The framework layer is specifically used to call graphics hardware through graphics hardware abstraction and graphics hardware drivers, and the framework layer is specifically used to call media hardware through media hardware abstraction and media hardware drivers.
In a possible implementation manner, the framework layer is further configured to: in the first thread, set size-related information of the video layer, and send the size-related information of the video layer to the Media HW. The framework layer is specifically configured to: in the first thread, call the graphics hardware to composite the multiple graphics layers according to the size-related information of the video layer and perform hole-punching processing on the at least one graphics layer, to obtain the composited graphics layer; and in the second thread, call the Media HW to process the video layer according to the size-related information of the video layer, to obtain the processed video layer.
In a possible implementation manner, the framework layer is specifically configured to, based on a first vertical synchronization signal, call the graphics hardware in the first thread to composite the multiple graphics layers and perform hole-punching processing on the at least one graphics layer. The display driver is specifically configured to, when a second vertical synchronization signal arrives, superimpose the composited graphics layer and the processed video layer to obtain the display data, where the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, the framework layer is specifically configured to: when an effective signal of the first vertical synchronization signal arrives, in the first thread, first send the size-related information of the video layer to the Media HW, and then call the graphics hardware to composite the multiple graphics layers and perform hole-punching processing on the at least one graphics layer according to the size-related information of the video layer.
In a possible implementation manner, a frame rate of the first vertical synchronization signal is lower than a frame rate of the second vertical synchronization signal.
In a possible implementation manner, when the second thread obtains the first Video Buffer, it sends first notification information to the first thread, and the framework layer is further configured to: after the first thread receives the first notification information, obtain, in the first thread, the size of the first Video Buffer, and set the size-related information of the video layer according to the size of the first Video Buffer. Alternatively, when the second thread detects that the size of the Video Buffer changes, it sends first notification information to the first thread, and the framework layer is further configured to: after the first thread receives the first notification information, obtain, in the first thread, the size of the changed Video Buffer, and set the size-related information of the video layer according to the size of the changed Video Buffer. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In a possible implementation manner, the graphics hardware includes a hardware composer (HWC), and the framework layer is specifically configured to: in the first thread, call the HWC to composite the multiple graphics layers and perform hole-punching processing on at least one graphics layer, to obtain the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the HWC is further configured to send the composited graphics layer to the display driver, and the Media HW is further configured to send the processed video layer to the display driver.
It should be understood that sending the composited graphics layer to the display driver is specifically implemented by the HWC through the HWC abstraction in the hardware abstraction layer, and sending the processed video layer to the display driver is implemented through the Media HW abstraction.
In a possible implementation manner, the graphics hardware includes a graphics processing unit (GPU), and the framework layer is specifically configured to: in the first thread, call the GPU to composite the multiple graphics layers and perform hole-punching processing on at least one graphics layer, to obtain the composited graphics layer. Before the display driver superimposes the composited graphics layer and the processed video layer to obtain the display data, the GPU is further configured to return the composited graphics layer to the framework layer; the framework layer is further configured to send the composited graphics layer to the HWC; the HWC is further configured to send the composited graphics layer to the display driver; and the Media HW is further configured to send the processed video layer to the display driver.
In a possible implementation manner, the framework layer includes SurfaceFlinger, and the SurfaceFlinger is configured to create the first thread and the second thread in an initialization phase. The SurfaceFlinger is further configured to: when a Graphic Buffer is received, notify the first thread to process the graphics layer data in the Graphic Buffer; and when a Video Buffer is received, notify the second thread to process the video layer data in the Video Buffer, where the Graphic Buffer stores one layer of graphics layer data, and the Video Buffer stores one layer of video layer data.
In a possible implementation manner, the Media HW is specifically configured to: receive the first frame of video layer data in the second thread, and receive the size-related information of the video layer in the first thread. The framework layer is specifically configured to: in the second thread, call the Media HW to process the first frame of video layer data according to the size-related information of the video layer.
As described above, the framework layer calls the Media HW through the Media HW abstraction and the Media HW driver to implement the processing of the video layer.
A sixth aspect of this application provides a computer-readable storage medium. The computer-readable storage medium stores instructions, and when the instructions are run on a computer or a processor, the computer or the processor is caused to perform the method according to the foregoing first aspect or any one of its possible implementation manners.
A seventh aspect of this application provides a computer program product containing instructions. When the instructions are run on a computer or a processor, the computer or the processor is caused to perform the method according to the foregoing first aspect or any one of its possible implementation manners.
Description of the Drawings
FIG. 1 is a schematic architecture diagram of an exemplary terminal according to an embodiment of this application;
FIG. 2 is a hardware architecture diagram of an exemplary image processing apparatus according to an embodiment of this application;
FIG. 3a is a schematic diagram of an exemplary operating system architecture to which an embodiment of this application is applicable;
FIG. 3b is a schematic diagram of an exemplary graphics layer composition process according to an embodiment of this application;
FIG. 4 is a schematic diagram of a conventional image processing architecture;
FIG. 5 is a schematic diagram of an exemplary image processing architecture according to an embodiment of this application;
FIG. 6 is a schematic diagram of another exemplary image processing architecture according to an embodiment of this application;
FIG. 7 is a flowchart of an image processing method according to an embodiment of this application;
FIG. 8 is a flowchart of another image processing method according to an embodiment of this application;
FIG. 9 is a flowchart of another image processing method according to an embodiment of this application;
FIG. 10 is a schematic architecture diagram of an exemplary image processing apparatus according to an embodiment of this application;
FIG. 11 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of this application;
FIG. 12 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of this application;
FIG. 13 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of this application;
FIG. 14 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of this application.
Detailed Description
The terms "first", "second", and the like in the specification, the claims, and the accompanying drawings of this application are used to distinguish between similar objects, and are not necessarily used to describe a specific order or sequence. Moreover, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion, for example, the inclusion of a series of steps or units. A method, system, product, or device is not necessarily limited to those steps or units that are expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
It should be understood that in this application, "at least one (item)" means one or more, and "multiple" means two or more. "And/or" is used to describe an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate the three cases where only A exists, only B exists, and both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects before and after it. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c each may be single or multiple.
First, to facilitate understanding of the embodiments of this application, the following introduces terms related to the embodiments of this application.
Graphic Buffer: used to store graphics data or graphics layer data. The graphics layer data may include, for example, bullet-screen comment data, subtitle data, a navigation bar, a status bar, an icon layer, a floating window, an application display interface, or identification information; the graphics layer data may also be data rendered after an application is started. The graphics data in a Graphic Buffer may come from multiple applications.
FrameBuffer: used to store composited graphics layer data, where the composited graphics layer data is obtained by compositing multiple layers of graphics layer data.
Video Buffer: used to store video data. The video data may come from, for example, Tencent Video, iQIYI, or Youku. The Video Buffer is also used, for example, to store data decoded by the multimedia framework.
Surface: represents the graphics data of one window of an application process; one Surface corresponds to one piece of graphics layer data.
Vsync signal: used to synchronize the time at which an application starts rendering, the time at which SurfaceFlinger is woken up to composite graphics layers, and the display refresh cycle of the display device. The Vsync signal is periodic. The number of effective Vsync signals per unit time is referred to as the Vsync frame rate, the time interval between two adjacent effective Vsync signals is referred to as the Vsync period, and the Vsync frame rate is the reciprocal of the Vsync period. For example, a common Vsync period is approximately 16.7 ms, which corresponds to a Vsync frame rate of 1 s ÷ 16.7 ms ≈ 60. It should be understood that the Vsync signal may be active-high or active-low, and may be level-triggered, rising-edge-triggered, or falling-edge-triggered. The arrival of an effective Vsync signal may be understood as any of the following: the arrival of a rising edge of the Vsync signal, the arrival of a falling edge of the Vsync signal, the Vsync signal being a high-level signal, or the Vsync signal being a low-level signal.
As shown in FIG. 1, which is a schematic architecture diagram of an exemplary terminal 100 according to an embodiment of this application, the terminal 100 may include an antenna system 110, a radio frequency (RF) circuit 120, a processor 130, a memory 140, a camera 150, an audio circuit 160, a display screen 170, one or more sensors 180, a wireless transceiver 190, and the like.
The antenna system 110 may be one or more antennas, or may be an antenna array composed of multiple antennas. The RF circuit 120 may include one or more analog RF transceivers, and may further include one or more digital RF transceivers; the RF circuit 120 is coupled to the antenna system 110. It should be understood that in the embodiments of this application, coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example, connection through various types of interfaces, transmission lines, or buses. The RF circuit 120 may be used for various types of cellular wireless communications.
The processor 130 may include a communication processor, and the communication processor may be configured to control the RF circuit 120 to receive and send signals through the antenna system 110; the signals may be voice signals, media signals, or control signals. The processor 130 may include various general-purpose processing devices, for example, a general-purpose central processing unit (CPU), a system on chip (SoC), a processor integrated on an SoC, a separate processor chip, or a controller. The processor 130 may further include dedicated processing devices, for example, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a dedicated video or graphics processor, a graphics processing unit (GPU), or a neural-network processing unit (NPU). The processor 130 may be a processor group composed of multiple processors that are coupled to each other through one or more buses. The processor may include an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC) to implement signal connections between different components of the apparatus. The processor 130 is configured to process media signals such as images, audio, and video.
The memory 140 is coupled to the processor 130; specifically, the memory 140 may be coupled to the processor 130 through one or more memory controllers. The memory 140 may be configured to store computer program instructions, including a computer operating system (OS) and various user applications; the memory 140 may also be configured to store user data, for example, graphics and image data rendered by applications, video data, audio data, calendar information, contact information, or other media files. The processor 130 may read computer program instructions or user data from the memory 140, or store computer program instructions or user data in the memory 140, to implement related processing functions. The memory 140 may be a non-volatile memory, for example, an embedded multimedia card (EMMC), universal flash storage (UFS), a read-only memory (ROM), or another type of static storage device capable of storing static information and instructions; it may also be a volatile memory, for example, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions; it may also be, but is not limited to, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium, or another magnetic storage device. Optionally, the memory 140 may be independent of the processor 130, or the memory 140 may be integrated with the processor 130.
The camera 150 is configured to capture images or videos. For example, the user may trigger, through an application instruction, the camera 150 to be turned on, to implement a photographing or video-recording function, such as capturing pictures or videos of any scene. The camera may include components such as a lens, an optical filter, and an image sensor. The camera may be located at the front or the back of the terminal device; the specific number and arrangement of cameras may be flexibly determined according to the requirements of the designer or the manufacturer's strategy, which is not limited in this application.
The audio circuit 160 is coupled to the processor 130. The audio circuit 160 may include a microphone 161 and a speaker 162; the microphone 161 may receive sound input from the outside, and the speaker 162 may play audio data. It should be understood that the terminal 100 may have one or more microphones and one or more earphones, and the embodiments of this application do not limit the numbers of microphones and earphones.
The display screen 170 is configured to provide the user with various display interfaces or various menu information for selection. For example, the content displayed on the display screen 170 includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, and icons, and such displayed content is associated with specific internal modules or functions. The display screen 170 may also accept user input; optionally, the display screen 170 may also display information input by the user, for example, accept control information such as enabling or disabling. Specifically, the display screen 170 may include a display panel 171 and a touch panel 172. The display panel 171 may be configured using a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a light-emitting diode (LED) display device, a cathode-ray tube (CRT), or the like. The touch panel 172, also referred to as a touchscreen or a touch-sensitive screen, can collect contact or non-contact operations of the user on or near it (for example, operations performed by the user on or near the touch panel 172 using a finger, a stylus, or any other suitable object or accessory; somatosensory operations may also be included; the operations include single-point control operations, multi-point control operations, and the like), and drive a corresponding connection apparatus according to a preset program. Optionally, the touch panel 172 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the signals brought by the user's touch operations and transmits the signals to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into information that the processor 130 can process, and sends the information to the processor 130, and it can also receive and execute commands sent by the processor 130. Further, the touch panel 172 may cover the display panel 171, and the user may perform operations on or near the touch panel 172 according to the content displayed on the display panel 171. After detecting an operation, the touch panel 172 transmits it to the processor 130 through the I/O subsystem 10 to determine the user input, and the processor 130 then provides corresponding visual output on the display panel 171 through the I/O subsystem 10 according to the user input. Although in FIG. 1 the touch panel 172 and the display panel 171 are implemented as two independent components to provide the input and output functions of the terminal 100, in some embodiments the touch panel 172 and the display panel 171 are integrated together.
The sensor 180 may include an image sensor, a motion sensor, a proximity sensor, an ambient noise sensor, a sound sensor, an accelerometer, a temperature sensor, a gyroscope, or other types of sensors, and various combinations thereof. The processor 130 drives the sensor 180 through the sensor controller 12 in the I/O subsystem 10 to receive various information such as audio information, image information, or motion information, and the sensor 180 transmits the received information to the processor 130 for processing.
The wireless transceiver 190 may provide wireless connection capabilities to other devices. The other devices may be peripheral devices such as a wireless headset, a Bluetooth headset, a wireless mouse, or a wireless keyboard, or may be a wireless network, for example, a wireless fidelity (WiFi) network, a wireless personal area network (WPAN), or another wireless local area network (WLAN). The wireless transceiver 190 may be a Bluetooth-compatible transceiver configured to wirelessly couple the processor 130 to peripheral devices such as a Bluetooth headset or a wireless mouse, and the wireless transceiver 190 may also be a WiFi-compatible transceiver configured to wirelessly couple the processor 130 to a wireless network or other devices.
The terminal 100 may further include other input devices 14 coupled to the processor 130 to receive various user inputs, for example, input numbers, names, addresses, and media selections. The other input devices 14 may include a keyboard, physical buttons (push buttons, rocker buttons, and the like), a dial pad, a slide switch, a joystick, a click wheel, and an optical mouse (an optical mouse is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by a touchscreen).
The terminal 100 may further include the aforementioned I/O subsystem 10. The I/O subsystem 10 may include an other-input-device controller 11 configured to receive signals from the other input devices 14 or to send control or drive information of the processor 130 to the other input devices 14. The I/O subsystem 10 may further include the aforementioned sensor controller 12 and display controller 13, which are respectively configured to implement the exchange of data and control information between the sensor 180 and the processor 130 and between the display screen 170 and the processor 130.
The terminal 100 may further include a power supply 101 to supply power to the other components of the terminal 100, including components 110 to 190. The power supply may be a rechargeable or non-rechargeable lithium-ion battery or nickel-metal hydride battery. Further, when the power supply 101 is a rechargeable battery, it may be coupled to the processor 130 through a power management system, so that charging, discharging, and power consumption adjustment are managed through the power management system.
It should be understood that the terminal 100 in FIG. 1 is merely an example and does not constitute a limitation on the specific form of the terminal 100; the terminal 100 may further include existing components, or components that may be added in the future, that are not shown in FIG. 1.
In an optional solution, the RF circuit 120, the processor 130, and the memory 140 may be partially or completely integrated on one chip, or may be chips independent of each other. The RF circuit 120, the processor 130, and the memory 140 may include one or more integrated circuits arranged on a printed circuit board (PCB).
As shown in FIG. 2, which is a hardware architecture diagram of an exemplary image processing apparatus according to an embodiment of this application, the image processing apparatus 200 may be, for example, a processor chip. For example, the hardware architecture diagram shown in FIG. 2 may be an exemplary architecture diagram of the processor 130 in FIG. 1, and the image processing method and image processing architecture provided in the embodiments of this application may be applied to this processor chip.
Referring to FIG. 2, the apparatus 200 includes at least one CPU, a memory, a microcontroller unit (MCU), a GPU, an NPU, a memory bus, a receiving interface, a sending interface, and the like. Although not shown in FIG. 2, the apparatus 200 may further include an application processor (AP), a decoder, and a dedicated video or image processor.
The above parts of the apparatus 200 are coupled through connectors. For example, the connectors include various types of interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or interfaces in other forms, which is not limited in this embodiment.
Optionally, the CPU may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor; optionally, the CPU may be a processor group composed of multiple processors that are coupled to each other through one or more buses. The receiving interface may be a data input interface of the processor chip. In an optional case, the receiving interface and the sending interface may be a High-Definition Multimedia Interface (HDMI), a V-By-One interface, an Embedded DisplayPort (eDP), a Mobile Industry Processor Interface (MIPI), a DisplayPort (DP), or the like. For the memory, reference may be made to the foregoing description of the memory 140.
In an optional case, the above parts are integrated on the same chip; in another optional case, the CPU, the GPU, the decoder, the receiving interface, and the sending interface are integrated on one chip, and the parts inside the chip access an external memory through a bus. The dedicated video/graphics processor may be integrated on the same chip as the CPU, or may exist as a separate processor chip; for example, the dedicated video/graphics processor may be a dedicated ISP. In an optional case, the NPU may also exist as an independent processor chip. The NPU is configured to implement various neural network or deep learning related operations.
The chip involved in the embodiments of this application is a system manufactured on one semiconductor substrate using an integrated circuit process, also called a semiconductor chip. It may be a collection of integrated circuits formed on a substrate (usually a semiconductor material such as silicon) using an integrated circuit process, and its outer layer is usually encapsulated with a semiconductor packaging material. The integrated circuits may include various types of functional devices. Each type of functional device includes transistors such as logic gate circuits, metal-oxide-semiconductor (MOS) transistors, bipolar transistors, or diodes, and may also include other components such as capacitors, resistors, or inductors. Each functional device may work independently or under the action of necessary driver software, and may implement various functions such as communication, computation, or storage.
As shown in FIG. 3a, which is a schematic diagram of an exemplary operating system architecture to which an embodiment of this application is applicable, the operating system may run on the processor 130 shown in FIG. 1, and the code corresponding to the operating system may be stored in the memory 140 shown in FIG. 1; alternatively, the operating system may run on the image processing apparatus 200 shown in FIG. 2.
For example, the operating system may be an Android system, an iOS system, or a Linux system. The operating system architecture includes an application (APP) layer, a framework layer, a hardware abstraction layer (HAL), and a driver layer.
Optionally, the APP layer may include, for example, applications such as WeChat, iQIYI, Tencent Video, Taobao, or Camera.
The framework layer is the logical scheduling layer of the operating system architecture, and it can perform resource scheduling and policy allocation for the video processing process. For example, the framework layer includes the following.
The Graphics Framework is responsible for the layout of graphics windows and the rendering of graphics data, for storing the rendered graphics data in a Graphic Buffer, and for sending the graphics layer data in the Graphic Buffer to SurfaceFlinger.
The Multimedia Framework is responsible for decoding video streams and sending the decoded data to SurfaceFlinger.
SurfaceFlinger is responsible for managing the layers (including graphics layers and video layers), for receiving the Graphic Buffer and Video Buffer of each layer, and for superimposing graphics layers using a graphics processing unit (GPU) or a hardware composer. The graphics layer data superimposed by SurfaceFlinger is stored in a FrameBuffer, and the data in the FrameBuffer can be read and displayed by the display screen. FIG. 3b is a schematic diagram of an exemplary graphics layer composition process. The example shown in FIG. 3b includes three pieces of graphics layer data, namely a status bar graphics layer, an icon graphics layer, and a navigation bar graphics layer. For example, the three pieces of graphics layer data may be rendered by the same application, where each piece of graphics layer data corresponds to one Graphic Buffer. SurfaceFlinger composites the three pieces of graphics layer data into one image frame of data and stores the image frame data in the FrameBuffer; the image frame is the composition result of the status bar graphics layer, the icon graphics layer, and the navigation bar graphics layer, and the image frame data in the FrameBuffer can be read by the display screen and displayed on it. In an optional case, the content to be displayed includes video data in addition to the graphics data, and the image frame data and the video data are composited before being displayed on the display screen.
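As an illustration of the composition process in FIG. 3b, the following minimal C++ sketch copies each layer's pixels into its destination rectangle within one output frame buffer. The Layer structure, the RGBA8888 pixel format, and the simple opaque copy without alpha blending are illustrative assumptions for this sketch, not the actual SurfaceFlinger, GPU, or HWC implementation.

```cpp
#include <cstdint>
#include <vector>

struct Layer {
    int x, y, width, height;       // destination rectangle on the screen
    std::vector<uint32_t> pixels;  // RGBA8888, width * height entries
};

// Composite a list of graphics layers into one frame buffer,
// painting layers in order (later layers cover earlier ones).
void compositeLayers(const std::vector<Layer>& layers,
                     std::vector<uint32_t>& frameBuffer,
                     int screenWidth, int screenHeight) {
    for (const Layer& layer : layers) {
        for (int row = 0; row < layer.height; ++row) {
            for (int col = 0; col < layer.width; ++col) {
                int dstX = layer.x + col;
                int dstY = layer.y + row;
                if (dstX < 0 || dstX >= screenWidth ||
                    dstY < 0 || dstY >= screenHeight) continue;
                frameBuffer[dstY * screenWidth + dstX] =
                    layer.pixels[row * layer.width + col];
            }
        }
    }
}
```

Painting the status bar, icon, and navigation bar layers in order with such a routine would yield the single image frame that is stored in the FrameBuffer.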
The Open Graphics Library (OpenGL) provides interfaces for graphics rendering and graphics layer superimposition, and can interface with the GPU driver.
The HAL is the interface layer between the operating system software and the audio/video hardware devices; it provides an interface for interaction between upper-layer software and lower-layer hardware. The HAL layer abstracts the underlying hardware into software containing the corresponding hardware interfaces, and the underlying hardware devices can be configured by accessing the HAL layer; for example, relevant hardware devices can be enabled or disabled at the HAL layer. The driver layer is used to directly control the underlying hardware devices according to the control information input by the HAL. For example, the HAL includes a hardware composer (HWC) abstraction and a media hardware (Media HW) abstraction; correspondingly, the driver layer includes a hardware composition driver and a media hardware driver. The HWC is used for hardware composition of multiple graphics layers, can provide support for SurfaceFlinger's hardware composition, stores the composited graphics layer in the FrameBuffer, and sends it to the display driver. The Media HW is responsible for processing the video layer data and for notifying the display driver of the processed video layer and the position information of the video layer; the Media HW is a dedicated hardware circuit that can be used to improve the video display effect. It should be understood that different vendors may refer to media hardware by different names. It should also be understood that the HWC abstraction at the HAL layer corresponds to the hardware composition driver at the driver layer, and the media hardware abstraction corresponds to the media hardware driver; by accessing the HWC abstraction at the HAL layer, the underlying HWC hardware can be controlled through the hardware composition driver, and the underlying Media HW is controlled by accessing the media hardware abstraction at the HAL layer and the media hardware driver.
The driver layer further includes a GPU driver and a display driver. The GPU driver is responsible for the rendering and superimposition of graphics, and the display driver is responsible for compositing the video layer and the graphics layer and for sending the composition result to the display for display.
FIG. 4 is a schematic diagram of a conventional image processing architecture. In this architecture, the graphics framework sends the indication information of multiple Graphic Buffers to SurfaceFlinger, and the multimedia framework sends the indication information of the Video Buffer to SurfaceFlinger, where one graphics layer corresponds to one Graphic Buffer; for example, the navigation bar graphics layer and the status bar graphics layer correspond to different Graphic Buffers. SurfaceFlinger binds the Graphic Buffers and the Video Buffer to the corresponding layers, for example, binds Graphic Buffer 1 to the navigation bar graphics layer, binds Graphic Buffer 2 to the status bar graphics layer, and binds the Video Buffer to the video layer. When the Vsync signal arrives, SurfaceFlinger sends the indication information of the Graphic Buffers and the Video Buffer together to the hardware composer HWC in the main thread. The HWC composites the multiple graphics layers in all the Graphic Buffers, and during the composition performs hole-punching processing on the graphics layer below the video layer so that the video layer can be displayed. The graphics layer data composited by the HWC is stored in the FrameBuffer. Then, in the same main thread, the HWC sends the indication information of the Video Buffer to the Media HW, so that the Media HW reads the video data from the Video Buffer and processes it. It should be understood that, although not shown, the hardware composer and the media hardware in FIG. 4 each have a corresponding hardware abstraction layer and driver layer: calling the hardware composer requires accessing the HWC abstraction of the hardware abstraction layer to call the hardware composition driver, and calling the Media HW is implemented through the media hardware abstraction layer and the media hardware driver, in other words, the media hardware driver is called by accessing the media hardware abstraction of the hardware abstraction layer. For example, SurfaceFlinger sends the indication information of the Graphic Buffers and the Video Buffer together to the HWC abstraction in the main thread, and calls the HWC hardware through the hardware composition driver to implement the composition of the multiple graphics layers and the hole-punching processing of the graphics layers. The HWC sends the composited graphics layer data to the display driver, and the Media HW sends the processed video image to the display driver. When a new Vsync signal arrives, the display driver composites the composited graphics layer data sent by the HWC and the video data sent by the Media HW, and sends the composition result to the display device for display. It should be understood that the indication information of a buffer is used to point to a memory area; for example, the indication information may be a file descriptor (fd).
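To make the timing consequence concrete, the following hypothetical C++ sketch serializes the two stages in one thread, as the conventional architecture does. The stage functions and the 20 ms and 2 ms durations are invented for illustration; the point is only that the video stage cannot start until composition has finished.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

// Stand-ins for the real stages; the durations are illustrative only.
void compositeAndPunchHole() {
    std::this_thread::sleep_for(std::chrono::milliseconds(20)); // longer than one ~16.7 ms Vsync period
}
void processVideo() {
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
}

int main() {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    // Conventional flow: both stages run back-to-back in one main thread.
    compositeAndPunchHole();   // HWC work; may exceed one Vsync period
    processVideo();            // Media HW work starts only after composition
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  clock::now() - start).count();
    std::cout << "video ready after " << ms << " ms (past the ~16.7 ms deadline)\n";
}
```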
In this image processing architecture, because the display frame rate of the video frames and the composition of the multiple graphics layers are controlled by the same vertical synchronization signal, the video frame rate that can be supported is limited by the speed of graphics layer composition, so better hardware is needed to support high-frame-rate refresh of the graphics frames. In addition, because the HWC's composition of the multiple graphics layers and the Media HW's processing of the video data are performed sequentially in the same thread, video playback is easily affected by the composition of the multiple graphics layers. Especially in some complex graphics scenarios, graphics rendering and composition are time-consuming, causing some graphics frames and video frames to miss their display deadline; these frames are discarded by the application or by SurfaceFlinger. Even if hardware performance is improved, frame loss may still occur during video playback.
An embodiment of this application provides an image processing architecture, which is shown in FIG. 5.
The graphics framework sends the indication information of the Graphic Buffer to SurfaceFlinger, and the multimedia framework sends the indication information of the Video Buffer to SurfaceFlinger.
When SurfaceFlinger is initialized, two threads are started at the same time: a first thread and a second thread. In the first thread, the indication information of the Graphic Buffers is sent to the HWC, so that the HWC composites the multiple graphics layers and performs hole-punching processing on the graphics layers; in the parallel second thread, the indication information of the Video Buffer is sent to the Media HW, so that the Media HW processes the video data. In this way, because the Media HW's processing of the video data and the HWC's composition of the multiple graphics layers are completed in parallel in two threads, the Media HW's processing of the video data is no longer affected by the progress of graphics layer composition. It should be understood that the HWC may first punch holes in the graphics layers below the video layer and then composite the multiple graphics layers into one graphics layer; alternatively, the HWC may first composite the multiple graphics layers into one graphics layer and then punch a hole in the composited graphics layer. The composited graphics layer data obtained by the HWC is stored in the FrameBuffer, and the HWC sends the indication information of the FrameBuffer to the display driver. The Media HW stores the processed video layer data in a Video Buffer and sends the indication information of the Video Buffer to the display driver.
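Continuing the hypothetical sketch above, launching the same two stages on two threads shows why the video path becomes independent of composition time in the architecture of FIG. 5. The stage functions and durations remain illustrative assumptions.

```cpp
#include <chrono>
#include <iostream>
#include <thread>

void compositeAndPunchHole() { std::this_thread::sleep_for(std::chrono::milliseconds(20)); }
void processVideo()          { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }

int main() {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    // First thread: graphics composition plus hole punching (HWC path).
    std::thread first(compositeAndPunchHole);
    // Second thread: video layer processing (Media HW path) runs in parallel.
    std::thread second([&] {
        processVideo();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      clock::now() - start).count();
        std::cout << "video ready after " << ms << " ms, independent of composition\n";
    });
    first.join();
    second.join();
}
```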
It should be understood that both the first thread and the second thread are loop threads. When SurfaceFlinger receives a Graphic Buffer sent by the graphics framework, it notifies the first thread to process it, but the first thread starts processing the graphics layer data in the Graphic Buffer only when the graphics Vsync arrives. When SurfaceFlinger receives a Video Buffer sent by the multimedia framework, it notifies the second thread to process it, and the second thread starts processing as soon as it receives the notification, without waiting for a vertical synchronization signal.
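The different wake-up conditions of the two loop threads can be modeled with standard condition variables: the first loop proceeds only when both a queued buffer and a graphics Vsync tick are present, whereas the second loop proceeds as soon as a buffer is queued. This is a simplified C++ model of the behavior described here, not SurfaceFlinger's actual event machinery; the struct and member names are invented.

```cpp
#include <condition_variable>
#include <mutex>

struct GraphicsLoop {
    std::mutex m;
    std::condition_variable cv;
    bool bufferPending = false;
    bool vsyncTick = false;

    void onGraphicBuffer() { std::lock_guard<std::mutex> l(m); bufferPending = true; cv.notify_one(); }
    void onGraphicVsync()  { std::lock_guard<std::mutex> l(m); vsyncTick = true;  cv.notify_one(); }

    void run() {
        for (;;) {
            std::unique_lock<std::mutex> l(m);
            // Wait until a buffer is queued AND the graphics Vsync has arrived.
            cv.wait(l, [this] { return bufferPending && vsyncTick; });
            bufferPending = vsyncTick = false;
            // ... composite graphics layers and punch the hole here ...
        }
    }
};

struct VideoLoop {
    std::mutex m;
    std::condition_variable cv;
    bool bufferPending = false;

    void onVideoBuffer() { std::lock_guard<std::mutex> l(m); bufferPending = true; cv.notify_one(); }

    void run() {
        for (;;) {
            std::unique_lock<std::mutex> l(m);
            // Wait only for a queued buffer; no Vsync gating on the video path.
            cv.wait(l, [this] { return bufferPending; });
            bufferPending = false;
            // ... hand the Video Buffer to the Media HW here ...
        }
    }
};
```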
Specifically, when the graphics Vsync arrives, in the first thread, the set size-related information of the video layer is sent to the Media HW, so that the Media HW can process the video data according to the size-related information of the video layer; also in the first thread, the HWC punches a hole in the graphics layer according to the set size-related information of the video layer, so that the size of the hole-punched region is equal to the size of the video layer. Because setting the size of the video layer and punching the hole are performed in the same thread, the hole size is fully consistent with the set size of the video layer, ensuring synchronized, matched display of the video layer and the graphics layer. For example, the size-related information of the video layer may include position information and dimension information of the video layer. Optionally, when the position information includes the positions of the four vertices of the video layer, the dimensions of the video layer can be determined from the position information, and the size-related information may include only the position information; when the position information includes only one vertex position of the video layer (for example, the top-right vertex), the size-related information of the video layer further includes the dimension information of the video. The size-related information of the video layer is calculated by SurfaceFlinger. For example, the multimedia framework sends the initial size of the video layer, set through the system's application programming interface (API), to SurfaceFlinger; SurfaceFlinger can also capture or perceive user operations on the video such as zooming in, zooming out, or rotation, and it combines the initial size of the video layer sent by the multimedia framework with such operations to calculate the size-related information of the video layer.
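The hole punching itself can be pictured as writing fully transparent pixels over the video rectangle in the composited graphics layer, so that the video layer shows through when the display driver superimposes the two. The following C++ sketch uses the same illustrative RGBA8888 assumption as above and assumes the rectangle lies within the screen; it is not the HWC's actual implementation.

```cpp
#include <cstdint>
#include <vector>

struct Rect { int x, y, width, height; };  // video layer rectangle computed by SurfaceFlinger

// Punch a transparent hole matching the video layer rectangle into the
// composited graphics layer (alpha = 0 lets the video layer show through).
void punchHole(std::vector<uint32_t>& graphicsLayer, int screenWidth,
               const Rect& videoRect) {
    for (int row = 0; row < videoRect.height; ++row) {
        for (int col = 0; col < videoRect.width; ++col) {
            graphicsLayer[(videoRect.y + row) * screenWidth + (videoRect.x + col)] = 0;
        }
    }
}
```

Because the same rectangle value is sent to the Media HW and passed to such a hole-punching routine within one thread, the displayed video region and the punched region cannot diverge.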
However, because the size of the Video Buffer affects the hole punching of the graphics layer, and the Video Buffer is processed in the second thread, when the second thread receives the first Video Buffer or detects that the size of the Video Buffer has changed, it sends notification information to the first thread of SurfaceFlinger to indicate that the size of the video layer has changed. For example, the notification information may be carried on a flag bit. It should be understood that the size of the Video Buffer may include dimension information of the Video Buffer and rotation information of the Video Buffer. Each Video Buffer stores one frame of video data, in other words, the data of one video layer, and the size of the Video Buffer is related to the size of the video data (or video layer); for example, the size of the Video Buffer may be equal to the size of the video data, and the rotation information of the Video Buffer is used to indicate the rotation information of the video data. After SurfaceFlinger receives the notification message, it obtains the dimensions of the updated Video Buffer and recalculates the size-related information of the video data according to those dimensions, so that the HWC in the first thread can punch the hole in the graphics layer again according to the changed dimensions. This ensures that the resized video layer can be displayed through the graphics layer, so that synchronized, matched display of the video layer and the graphics layer is still achieved when the dimensions of the Video Buffer change. In an optional case, when the dimensions of the Video Buffer change, the size-related information of the video data calculated by SurfaceFlinger does not necessarily change.
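The cross-thread notification can be modeled with an atomic flag plus the most recently observed buffer dimensions: the second thread raises the flag when the dimensions change, and the first thread, on its next pass, consumes the flag and recomputes the hole rectangle. The class and member names in this C++ model are invented for illustration.

```cpp
#include <atomic>
#include <mutex>

struct BufferSize { int width = 0; int height = 0; };

class VideoSizeTracker {
public:
    // Called from the second (video) thread for every incoming Video Buffer.
    void onVideoBuffer(const BufferSize& size) {
        std::lock_guard<std::mutex> l(m_);
        if (!seenFirst_ || size.width != last_.width || size.height != last_.height) {
            last_ = size;
            seenFirst_ = true;
            sizeChanged_.store(true, std::memory_order_release);  // the "first notification information"
        }
    }

    // Called from the first (graphics) thread on each graphics Vsync; returns
    // true if the hole rectangle must be recomputed from the new dimensions.
    bool consumeChange(BufferSize& out) {
        if (!sizeChanged_.exchange(false, std::memory_order_acq_rel)) return false;
        std::lock_guard<std::mutex> l(m_);
        out = last_;
        return true;
    }

private:
    std::mutex m_;
    BufferSize last_;
    bool seenFirst_ = false;
    std::atomic<bool> sizeChanged_{false};
};
```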
It should be understood that the order between sending the size-related information of the video layer to the Media HW in the first thread and sending the first frame of video data to the Media HW in the second thread is not limited. In one optional case, the Media HW first receives the size-related information of the video layer and then receives the first frame of video data; in another optional case, the Media HW first receives the first frame of video data and then receives the size-related information of the video layer. In either case, the Media HW needs to process the first frame of video data according to the received size-related information of the video layer before sending it to the display driver.
In an optional solution, SurfaceFlinger needs to refer to the dimension information of the Video Buffer when calculating the size-related information of the video data. In this case, when the Media HW receives the first Video Buffer, it sends notification information to the first thread of SurfaceFlinger; after SurfaceFlinger receives the notification message, it obtains the dimensions of the first Video Buffer and calculates the size-related information of the video data according to those dimensions.
In addition, the image processing architecture introduces two vertical synchronization signals, Graphic Vsync and Display Vsync. Graphic Vsync triggers the composition of the multiple graphics layers, while Display Vsync triggers the display driver's overlay of the graphics layer and the video layer as well as the refresh of the display device. It should be understood that Graphic Vsync and Display Vsync are two mutually independent vertical synchronization signals, and their frame rates may differ. For example, the frame rate of Display Vsync may be set higher than that of Graphic Vsync, so that the refresh frame rate of the video can exceed the actual refresh frame rate of the graphics. Exemplarily, Graphic Vsync is also used to trigger the rendering of graphics layer data by applications; the graphics layer data in the Graphic Buffers may come from multiple applications, which render graphics layer data to fill the Graphic Buffers. Filling the Graphic Buffers and the HWC's composition of the multiple graphics layers correspond to two different periods of the Graphic Vsync signal.
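For a rough feel of the rates involved, the following sketch computes the periods of two independent Vsync sources; the 60 Hz and 120 Hz figures are illustrative assumptions only, not values taken from this application.

```cpp
#include <chrono>
#include <cstdio>

int main() {
    using namespace std::chrono;
    // Illustrative rates only: the text merely requires the two signals to
    // be independent, with Display Vsync allowed to run faster.
    const auto graphicPeriod = duration_cast<microseconds>(seconds(1)) / 60;   // 60 Hz graphics
    const auto displayPeriod = duration_cast<microseconds>(seconds(1)) / 120;  // 120 Hz display

    std::printf("Graphic Vsync period: %lld us\n",
                static_cast<long long>(graphicPeriod.count()));
    std::printf("Display Vsync period: %lld us\n",
                static_cast<long long>(displayPeriod.count()));
    // At these rates, graphics composition fires on every other display
    // refresh, while the video layer can be refreshed on every one.
    return 0;
}
```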
In the image processing architecture provided by the embodiments of this application, the HWC's composition of the graphics layers and Media HW's processing of the video layer are performed in parallel on separate threads. Therefore, the processing of video images is no longer affected by graphics layer composition, video playback is no longer affected by graphics layer composition, and the frame-loss problem caused by time-consuming graphics layer composition during video playback is effectively solved. Further, setting the size of the video layer and punching the hole in the graphics layer are completed one after the other in the same thread, and inter-thread communication is possible between the first thread and the second thread: when the size of the Video Buffer changes, the second thread can notify the first thread, so that the hole size in the graphics layer stays consistent with the size of the video data, guaranteeing synchronized, matched display of the video layer and the graphics layer. In addition, because the vertical synchronization signal controlling the composition of the multiple graphics layers and the vertical synchronization signal controlling the refresh frame rate of the display device are independent of each other, the refresh frame rate of the video can be higher than the actual refresh frame rate of the graphics, so this image processing architecture can support the playback of high-frame-rate video whose frame rate exceeds the graphics refresh frame rate.
In an optional case, the graphics hardware included in the graphics processing system comprises an HWC and a GPU. If the HWC does not support overlaying graphics layers, the hardware resources of the GPU can be invoked to overlay the multiple graphics layers.
FIG. 6 shows another exemplary image processing architecture provided by an embodiment of this application.
In the image processing architecture shown in FIG. 6, in the first thread, when the graphics Vsync arrives, the indication information of the Graphic Buffers is sent to the GPU, and the GPU performs the composition (or overlay) of the multiple layers of graphics layer data as well as the hole punching of the graphics layer; the composited graphics layer data is stored in the FrameBuffer. In other words, SurfaceFlinger calls the GPU's API so as to use the GPU's image processing capability to composite the multiple layers of graphics layer data and punch the hole in the graphics layer. The GPU returns the processing result to SurfaceFlinger, SurfaceFlinger sends the indication information of the FrameBuffer to the hardware composer, and the hardware composer then passes the indication information of the FrameBuffer to the display driver, so that the display driver can read the processed graphics layer data from the corresponding memory according to that indication information. Compared with the image processing architecture shown in FIG. 5, in the architecture of FIG. 6 the overlay of the multiple graphics layers and the hole punching of the graphics layer are completed by the GPU rather than by the HWC; the other processing is the same as in the architecture of FIG. 5, for which reference may be made to the description of the architecture of FIG. 5 and which is not repeated here.
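As a loose, CPU-side stand-in for what the GPU path computes (in the real architecture this work is issued to the GPU through its API), the following sketch composites layers into a framebuffer and forces the hole region to fully transparent:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Rect { int x = 0, y = 0, w = 0, h = 0; };

// One RGBA8888 image, row-major, 0xAARRGGBB per pixel.
struct Image {
    int w = 0, h = 0;
    std::vector<uint32_t> px;
};

// Stand-in for GPU composition: naive source-over of each layer in order.
void composite(Image& fb, const std::vector<Image>& layers) {
    for (const auto& layer : layers) {
        const size_t n = std::min(fb.px.size(), layer.px.size());
        for (size_t i = 0; i < n; ++i)
            if ((layer.px[i] >> 24) != 0)  // copy non-transparent pixels
                fb.px[i] = layer.px[i];
    }
}

// Stand-in for hole punching: force the video region fully transparent so
// the display driver can later show the video layer through it.
void punchHole(Image& fb, const Rect& hole) {
    for (int y = hole.y; y < hole.y + hole.h && y < fb.h; ++y)
        for (int x = hole.x; x < hole.x + hole.w && x < fb.w; ++x)
            fb.px[static_cast<size_t>(y) * fb.w + x] = 0x00000000u;
}
```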
It should be understood that, although not shown, the hardware composer and the media hardware in FIG. 5 and FIG. 6 each have a corresponding hardware abstraction layer and driver layer. Calling the hardware composer requires going through the hardware composer abstraction and the hardware composer driver; in other words, the hardware composer abstraction of the HAL is accessed to call the hardware composer driver, thereby calling the hardware composer. Likewise, calling Media HW requires going through the media hardware abstraction layer and the media hardware driver; in other words, the media hardware abstraction of the HAL is accessed to call the media hardware driver, thereby calling the media hardware. It should also be understood that, in FIG. 5 and FIG. 6, the indication information of the Graphic Buffer is sent to the hardware composer abstraction, and the indication information of the Video Buffer is sent to the media hardware abstraction; the hardware composer abstraction sends the indication information of the FrameBuffer to the display driver, and the media hardware abstraction sends the indication information of the Video Buffer to the display driver. That is, the transfer of the relevant indication information can be considered to take place in the hardware abstraction layer and the driver layer, and is not actually sent to the hardware.
Based on the image processing architectures shown in FIG. 5 and FIG. 6, an embodiment of this application further provides an image data processing method. As shown in FIG. 7, the method includes:
701. In a first thread, composite multiple graphics layers and punch a hole in at least one of the graphics layers to obtain a composited graphics layer, where the composited graphics layer includes a hole region.
It should be understood that the composited graphics layer data contains a hole region, which is usually set to be transparent so that, when the graphics layer and the video layer are overlaid, the video layer can be displayed through the hole region. In an optional case, holes may first be punched separately in the graphics layers beneath the video layer and the multiple graphics layers then composited into one graphics layer; alternatively, the multiple graphics layers may first be composited into one graphics layer and the hole then punched in the composited graphics layer.
702. In a second thread, process the video layer to obtain a processed video layer.
It should be understood that the processed video layer can be displayed through the hole region. This method embodiment allows a certain size difference between the processed video layer and the hole region; that is, the sizes of the processed video layer and the hole region need not be exactly the same.
703. Overlay the composited graphics layer and the processed video layer to obtain display data.
In this embodiment of the application, the composition and hole punching of the graphics layers and the processing of the video layer are performed in parallel in two threads. Therefore, the processing of video images is no longer affected by graphics layer composition, video playback is no longer affected by graphics layer composition, and the frame-loss problem caused by time-consuming graphics layer composition during video playback is effectively solved.
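Step 703 can be pictured as the per-pixel rule below, assuming same-sized, aligned graphics and video buffers; this is an illustrative model of the display data, not display driver code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Step 703 as a per-pixel rule: wherever the composited graphics layer is
// transparent (the punched hole), the display data takes the video pixel;
// elsewhere the graphics pixel wins.
std::vector<uint32_t> overlay(const std::vector<uint32_t>& graphics,  // 0xAARRGGBB
                              const std::vector<uint32_t>& video) {
    std::vector<uint32_t> display(graphics.size());
    for (std::size_t i = 0; i < graphics.size(); ++i) {
        const bool transparent = (graphics[i] >> 24) == 0;
        display[i] = (transparent && i < video.size()) ? video[i] : graphics[i];
    }
    return display;
}
```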
In an optional case, before step 701, the method further includes: setting the size-related information of the video layer in the first thread.
In this way, the size-related information of the video layer is first set in the first thread, and the hole is then punched in at least one graphics layer according to that information; the set size-related information of the video layer is sent to Media HW in the first thread, and Media HW then processes the video layer in the second thread according to that information to obtain the processed video layer. It should be understood that the size-related information of the video layer may include the dimensions of the video layer and the position information of the video layer. Exemplarily, the size-related information may include one vertex coordinate of the video layer and two lengths, the two lengths representing the length and width of the video layer; it may also include two vertex coordinates of the video layer and one length, from which the size and display position of the video layer can be uniquely determined; and if the position information consists of the coordinates of the four vertices of the video layer, the dimensions of the video layer can be determined from those four vertex coordinates, in which case the size-related information may include only the position information. The size-related information of the video layer is calculated by SurfaceFlinger. Exemplarily, the multimedia framework sets the initial size of the video layer through the system's built-in API and sends it to SurfaceFlinger; SurfaceFlinger can also capture or perceive user operations on the video such as zooming in, zooming out, or rotation, and combines the initial size sent by the multimedia framework with those operations to calculate the size-related information of the video layer.
In this embodiment of the application, because setting the size of the video layer and punching the hole in the graphics layers beneath the video layer are completed one after the other in the same thread, that is, because both the processed video layer and the hole region are derived from the same set video layer size, the size of the hole region in the graphics layer is guaranteed to be exactly consistent with the size of the video layer; the processed video layer and the hole region match completely, so that the processed video layer can be displayed synchronously through the hole region.
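The equivalent representations discussed above could be modeled as follows; the types are hypothetical, as the application does not prescribe a data layout.

```cpp
#include <cstdint>
#include <variant>

struct Point { int32_t x = 0, y = 0; };

// (a) one vertex plus length and width
struct VertexAndExtent { Point vertex; int32_t width = 0, height = 0; };

// (b) two vertex coordinates plus one length
struct TwoVerticesAndLength { Point a, b; int32_t length = 0; };

// (c) all four vertices; the dimensions are implied by the positions
struct FourVertices { Point v[4]; };

// Any one of the three uniquely determines the video layer's size and
// display position.
using VideoLayerGeometry =
    std::variant<VertexAndExtent, TwoVerticesAndLength, FourVertices>;
```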
In an optional case, the multiple graphics layers are composited and the hole is punched in at least one graphics layer in the first thread based on a first vertical synchronization signal, and the composited graphics layer and the processed video layer are overlaid to obtain the display data based on a second vertical synchronization signal, where the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
It should be understood that the first vertical synchronization signal and the second vertical synchronization signal are two mutually independent periodic signals and may have different frame rates and different periods. Specifically, when the valid signal of the first vertical synchronization signal arrives, the multiple graphics layers are composited and the hole is punched in at least one graphics layer in the first thread; when the valid signal of the second vertical synchronization signal arrives, the composited graphics layer and the processed video layer are overlaid to obtain the display data. The first and second vertical synchronization signals may be active-high or active-low, and may be level-triggered, rising-edge-triggered, or falling-edge-triggered. The arrival of the valid signal of a Vsync signal can be understood as any of the following: the arrival of the rising edge of the Vsync signal, the arrival of the falling edge of the Vsync signal, the Vsync signal being a high-level signal, or the Vsync signal being a low-level signal. Exemplarily, the first vertical synchronization signal may be the Graphic Vsync of the foregoing embodiments, and the second vertical synchronization signal may be the Display Vsync of the foregoing embodiments.
In an optional case, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In this embodiment of the application, the composition of the multiple graphics layers is controlled by the first vertical synchronization signal, while the overlay of the display video data and the refresh frame rate of the display device are controlled by the second vertical synchronization signal. The first and second vertical synchronization signals are independent of each other, and the frame rate of the second vertical synchronization signal can be higher than that of the first, so this image processing architecture can support the playback of high-frame-rate video whose frame rate exceeds the graphics refresh frame rate.
In an optional case, when the valid signal of the first vertical synchronization signal arrives, the first thread first sets the size-related information of the video layer, then composites the multiple graphics layers and punches the hole in at least one graphics layer according to the size-related information of the video layer.
In this embodiment of the application, setting the size-related information of the video layer in the first thread proceeds only after the valid signal of the vertical synchronization signal arrives; setting the size-related information of the video layer and compositing the graphics layers are carried out one after the other following the arrival of the valid signal of the same vertical synchronization signal.
In an optional case, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread, and the first thread sets the size-related information of the video layer according to the size of that first Video Buffer; alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread, and the first thread resets the size-related information of the video layer according to the changed size of the Video Buffer.
It should be understood that each Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the stored video layer. Therefore, when the size of the video layer changes, the size of the Video Buffer also changes. The first notification information is used to inform the first thread that the size of the video layer has changed.
In this embodiment of the application, inter-thread communication is possible between the first thread and the second thread. When the first Video Buffer is received or the size of the Video Buffer changes, the second thread can notify the first thread, so that the size-related information of the video layer can be reset in the first thread and the hole re-punched in the graphics layer according to the changed size. This guarantees that the resized video layer can be displayed through the hole region of the graphics layer; that is, synchronized, matched display of the video layer and the graphics layer is still achieved when the size of the Video Buffer changes. In an optional case, when the size of the Video Buffer changes, the size-related information of the video data calculated by SurfaceFlinger does not necessarily change.
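On the first-thread side, the reaction to the notification might look like this sketch (names hypothetical); note the caveat above that a changed Video Buffer does not always change the computed size-related information.

```cpp
#include <atomic>

struct VideoLayerInfo {
    int x = 0, y = 0, w = 0, h = 0;
    bool operator==(const VideoLayerInfo& o) const {
        return x == o.x && y == o.y && w == o.w && h == o.h;
    }
};

std::atomic<bool> videoLayerSizeChanged{false};  // set by the second thread

// Stub: real code would derive this from the updated Video Buffer size.
VideoLayerInfo computeFromUpdatedBuffer() { return {}; }

// Called at the start of the first thread's per-Vsync work.
void maybeRecomputeVideoLayerInfo(VideoLayerInfo& current) {
    if (videoLayerSizeChanged.exchange(false, std::memory_order_acquire)) {
        const VideoLayerInfo fresh = computeFromUpdatedBuffer();
        // Per the text, a changed buffer does not always change the result.
        if (!(fresh == current))
            current = fresh;  // the hole is re-punched with the new geometry
    }
}
```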
In an optional case, the HWC composites the multiple graphics layers and punches the hole in at least one graphics layer in the first thread.
It should be understood that when the hardware resources of the HWC are invoked to perform graphics layer composition and hole punching, it is specifically the SurfaceFlinger of the framework layer that calls the HWC driver by accessing the HWC abstraction of the hardware abstraction layer, thereby invoking the HWC hardware resources.
In an optional case, the GPU, in the first thread, composites the multiple graphics layers and punches the hole in at least one graphics layer to obtain the composited graphics layer.
When the HWC does not support graphics layer composition and graphics layer hole punching, the hardware resources of the GPU can be invoked to perform graphics layer composition and hole punching.
In an optional case, after the size-related information of the video layer is set in the first thread, the method further includes: in the first thread, sending the size-related information of the video layer to Media HW; in the second thread, Media HW processes the video layer according to the size-related information of the video layer to obtain the processed video layer.
In one optional case, Media HW first receives the size-related information of the video layer in the first thread and then receives the first frame of video layer data in the second thread, whereupon Media HW processes the first frame of video layer data according to the received size-related information. In another case, Media HW first receives the first frame of video layer data in the second thread and then receives the size-related information of the video layer in the first thread; Media HW does not process the first frame of video layer data immediately upon receipt, but waits until it has received the size-related information of the video layer before processing it, so as to avoid errors in processing the first frame of video layer data.
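The wait described in the second case can be expressed with a simple gate, sketched below under the assumption of standard C++ synchronization primitives rather than any particular platform mechanism.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>

struct VideoLayerInfo { int x = 0, y = 0, w = 0, h = 0; };

// Media HW must not process the first frame until the size info has
// arrived, whichever order the two messages come in.
class MediaHwGate {
public:
    void setLayerInfo(const VideoLayerInfo& info) {  // called from the first thread
        {
            std::lock_guard<std::mutex> lk(m_);
            info_ = info;
        }
        cv_.notify_all();
    }

    VideoLayerInfo waitForLayerInfo() {  // called before processing frame 1
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return info_.has_value(); });
        return *info_;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<VideoLayerInfo> info_;
};
```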
In an optional case, the display driver overlays the composited graphics layer and the processed video layer to obtain the display data.
In an optional case, the method further includes:
SurfaceFlinger creates the first thread and the second thread in the initialization phase; when SurfaceFlinger receives a graphics buffer (Graphic Buffer), it notifies the first thread to process the graphics layer data in the Graphic Buffer; when SurfaceFlinger receives a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer; where the Graphic Buffer stores one layer of graphics layer data and the Video Buffer stores one layer of video layer data.
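A minimal sketch of this dispatch, with hypothetical types, might look as follows; the two callbacks stand in for waking the first and second threads.

```cpp
#include <functional>

enum class BufferKind { Graphic, Video };

struct Buffer {
    BufferKind kind;
    // payload omitted
};

// SurfaceFlinger creates both worker threads at initialization, then routes
// each incoming buffer to the thread that owns its layer type.
struct BufferDispatcher {
    std::function<void(const Buffer&)> notifyFirstThread;   // graphics layers
    std::function<void(const Buffer&)> notifySecondThread;  // video layers

    void onBuffer(const Buffer& b) const {
        if (b.kind == BufferKind::Graphic)
            notifyFirstThread(b);
        else
            notifySecondThread(b);
    }
};
```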
Based on the image processing architectures shown in FIG. 5 and FIG. 6, an embodiment of this application further provides another image data processing method. As shown in FIG. 8, the method includes:
801. In the first thread, set the size-related information of the video layer and send it to Media HW.
It should be understood that the size-related information of the video layer may include the dimensions of the video layer and the position information of the video layer, and is calculated by SurfaceFlinger. Exemplarily, the multimedia framework sets the initial size of the video layer through the system's built-in API and sends it to SurfaceFlinger; SurfaceFlinger can also capture or perceive user operations on the video such as zooming in, zooming out, or rotation, and combines the initial size sent by the multimedia framework with those operations to calculate the size-related information of the video layer. Exemplarily, step 801 is executed only when the valid signal of Graphic Vsync arrives.
802. In the first thread, send the indication information of multiple Graphic Buffers to the HWC.
It should be understood that SurfaceFlinger invokes the hardware resources of the HWC in the first thread to composite the multiple graphics layers. In a specific implementation, the indication information of the Graphic Buffers storing the graphics layer data is sent to the HWC in the first thread; the indication information points to a section of memory, from which the HWC can obtain the graphics layer data and process it according to the indication information.
803. In the first thread, the HWC composites the multiple layers of graphics layer data and punches a hole in at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer.
It should be understood that the composited graphics layer data contains a hole region, which is usually set to be transparent so that, when the graphics layer and the video layer are overlaid, the video layer can be displayed through the hole region. In an optional case, the HWC may punch holes separately in the graphics layers beneath the video layer and then composite the multiple graphics layers into one graphics layer; the HWC may also first composite the multiple graphics layers into one graphics layer and then punch the hole in the composited graphics layer. Exemplarily, the composited graphics layer data produced by the HWC may be stored in the FrameBuffer. It should be understood that step 803 is executed only when the valid signal of Graphic Vsync arrives, and step 803 is executed after step 801.
In an optional case, if the HWC does not support overlaying graphics layers, SurfaceFlinger can invoke the resources of the GPU in the first thread to overlay the multiple graphics layers, corresponding to the image processing architecture shown in FIG. 6. In this case, in step 802, the indication information of the multiple Graphic Buffers is sent to the GPU, so that the GPU's image processing capability is used to composite the multiple layers of graphics layer data and punch the hole in the graphics layer. The GPU returns the processing result to SurfaceFlinger, SurfaceFlinger sends the indication information of the FrameBuffer to the hardware composer, and the hardware composer then passes the indication information of the FrameBuffer to the display driver.
804. Send the composited graphics layer to the display driver.
It should be understood that the HWC sends the indication information of the FrameBuffer to the display driver, so that the display driver can obtain the composited graphics layer data from the corresponding memory according to that indication information.
805. In the second thread, send the indication information of the Video Buffer to Media HW.
It should be understood that step 805 and step 801 are executed in parallel; that is, step 805 and step 801 can be executed at the same time.
806. When the first Video Buffer is received, or when a change in the size of the Video Buffer is detected, the second thread notifies the first thread.
Exemplarily, the second thread sends first notification information to the first thread, the first notification information being used to indicate to the Main Thread that the size of the video layer has changed. After the first thread receives the notification information, SurfaceFlinger can obtain the updated size of the Video Buffer and calculate the size-related information of the video layer according to it. Exemplarily, the size of the Video Buffer may include dimension information and rotation information.
807. In the second thread, Media HW processes the video layer data according to the size-related information of the video layer received in the first thread.
In one optional case, Media HW first receives the size-related information of the video layer in the first thread and then receives the first frame of video layer data in the second thread, whereupon it processes the first frame of video layer data according to the received size-related information. In another case, Media HW first receives the first frame of video layer data in the second thread and then receives the size-related information of the video layer in the first thread; Media HW does not process the first frame of video layer data immediately upon receipt, but waits until it has received the size-related information of the video layer before processing it, so as to avoid errors in processing the first frame of video layer data.
808. Send the processed video layer to the display driver.
Media HW sends the indication information of the Video Buffer to the display driver, so that the display driver can obtain the processed video layer data from the corresponding memory according to that indication information.
809. When Display Vsync arrives, the display driver overlays the video layer and the graphics layer to obtain the display data, and sends it to the display device for display.
In this embodiment of the application, the HWC's composition of the graphics layers and Media HW's processing of the video layer are performed in parallel on separate threads. Therefore, the processing of video images is no longer affected by graphics layer composition, video playback is no longer affected by graphics layer composition, and the frame-loss problem caused by time-consuming graphics layer composition during video playback is effectively solved. In addition, because setting the size of the video layer and punching the hole in the graphics layers beneath the video layer are completed one after the other in the same thread, and inter-thread communication is possible between the first thread and the second thread, the second thread can notify the first thread when the size of the Video Buffer changes, guaranteeing that the hole size in the graphics layer is exactly consistent with the size of the video layer and that the video layer and the graphics layer are displayed in synchronized, matched fashion. Further, because the composition of the multiple graphics layers is controlled by Graphic Vsync while the overlay of the display video data and the refresh frame rate of the display device are controlled by Display Vsync, and Graphic Vsync and Display Vsync are independent of each other, the frame rate of Display Vsync can be higher than that of Graphic Vsync; this image processing architecture can therefore support the playback of high-frame-rate video whose frame rate exceeds the graphics refresh frame rate.
An embodiment of this application further provides another image data processing method. As shown in FIG. 9, the method includes:
S10. SurfaceFlinger creates a Video Thread in the initialization phase.
The Video Thread is a dedicated thread for processing video layer data and may correspond to the aforementioned second thread. It should be understood that the thread in which SurfaceFlinger receives Video Buffers is a different thread from the Video Thread; exemplarily, the thread that receives Video Buffers may be called the first receiving thread. The Video Thread and the first receiving thread need to communicate between threads so that the first receiving thread can notify the Video Thread that a new Video Buffer is available.
S12. The Multimedia Framework sends the Video Buffer to the Buffer queue of the video layer.
Exemplarily, a buffer includes a Usage flag bit that indicates the type of the buffer. For example, when the Usage flag bit is a first indicator value, the buffer is a Video Buffer; when the Usage flag bit is a second indicator value, the buffer is a Graphic Buffer. In an optional case, when the Usage flag bit is unoccupied, the buffer is a Graphic Buffer; when the Usage flag bit is occupied, the buffer is a Video Buffer.
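A sketch of such a Usage check follows; the concrete indicator values are illustrative assumptions, since the text leaves the encoding open.

```cpp
#include <cstdint>

// Illustrative encoding only; the text does not fix the concrete values.
constexpr uint32_t kUsageUnset   = 0;  // "flag bit unoccupied"
constexpr uint32_t kUsageVideo   = 1;  // first indicator value
constexpr uint32_t kUsageGraphic = 2;  // second indicator value

struct Buffer { uint32_t usage = kUsageUnset; };

bool isVideoBuffer(const Buffer& b) { return b.usage == kUsageVideo; }

// Under the optional rule, an unoccupied flag also means Graphic Buffer.
bool isGraphicBuffer(const Buffer& b) {
    return b.usage == kUsageGraphic || b.usage == kUsageUnset;
}
```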
S14. SurfaceFlinger receives the buffer in the first receiving thread.
S16. When the received buffer is a Video Buffer, SurfaceFlinger notifies the Video Thread that a new Video Buffer is available.
Exemplarily, SurfaceFlinger can determine from the Usage flag bit whether the received buffer is a Video Buffer. It should be understood that the Video Thread is a looping thread: when SurfaceFlinger receives a Video Buffer sent by the media framework, it notifies the Video Thread to process the video layer data, and the Video Thread begins processing as soon as it receives the notification, without waiting for a vertical synchronization signal.
S18. After receiving the notification, the Video Thread takes the Video Buffer out of the Buffer queue of the video layer and sends it to Media HW for processing.
S20. The Video Thread determines whether the received Video Buffer is the first Buffer received; if so, proceed to S24; if not, proceed to S22.
S22. The Video Thread determines whether the size of the current Video Buffer has changed compared with the previous Video Buffer; if it has changed, proceed to S24; if not, no processing is performed, which can also be understood as the end of this branch.
It should be understood that S20 and S22 are two parallel judgment conditions; if either condition is met, the method proceeds to S24. A combined form is sketched after this paragraph.
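S20 and S22 can be folded into a single predicate, as in the following sketch with hypothetical types:

```cpp
#include <optional>

struct VideoBufferSize {
    int w = 0, h = 0;
    bool operator!=(const VideoBufferSize& o) const { return w != o.w || h != o.h; }
};

// S20 and S22 folded into one predicate: notify the Main Thread if this is
// the first Video Buffer, or if its size differs from the previous one.
bool shouldNotifyMainThread(const VideoBufferSize& current,
                            std::optional<VideoBufferSize>& previous) {
    const bool notify = !previous.has_value() || current != *previous;
    previous = current;
    return notify;
}
```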
S24. Notify the Main Thread that the size of the video layer has changed.
In an optional case, the Video Thread sends first notification information to the Main Thread, the first notification information indicating to the Main Thread that the size of the video layer has changed, so that the Main Thread can reset the size-related information of the video layer according to the updated size; for example, the first notification information may be carried on a flag bit. After the Main Thread receives the notification information, SurfaceFlinger can obtain the updated size of the Video Buffer and calculate the size of the video layer according to it. It should be understood that the Main Thread is also created by SurfaceFlinger in the initialization phase; the Main Thread is the thread used to process graphics layer data, is also used to set the size of the video layer, and may correspond to the aforementioned first thread. Therefore, when the Video Buffer received by the Video Thread is the first Video Buffer, or the size of the Video Buffer has changed, the Main Thread must be notified so that it can reset the size of the video layer. The Main Thread sends the reset size-related information of the video layer to Media HW. It should be understood that the set size-related information of the video layer is calculated by SurfaceFlinger; in an optional case, the multimedia framework sets the initial size of the video layer through the system's built-in API and transmits it to SurfaceFlinger, SurfaceFlinger can also capture or perceive user operations on the video such as zooming in, zooming out, or rotation, and SurfaceFlinger combines the initial size sent by the multimedia framework with those operations to calculate the size-related information of the video layer.
S26. In the Main Thread, when Graphic Vsync arrives, first send the set size-related information of the video layer to Media HW, then have the HWC composite the multiple graphics layers and punch a hole in the graphics layers beneath the video layer according to the set size-related information of the video layer.
It should be understood that sending the set size-related information of the video layer to Media HW can also be understood as setting the size of the video layer through Media HW. The Main Thread is a looping thread: when SurfaceFlinger receives a Graphic Buffer sent by the Graphic Framework, it notifies the Main Thread to process it, but the Main Thread only begins processing the graphics layer data in the Graphic Buffer when Graphic Vsync arrives. The method further includes: in the Main Thread, SurfaceFlinger sends the indication information of the Graphic Buffers to the HWC, so that the HWC composites the multiple graphics layers and punches the hole in the relevant graphics layers. Optionally, if the HWC itself does not support overlaying graphics, then when Graphic Vsync arrives SurfaceFlinger calls the GPU, and the GPU composites the multiple graphics layers and punches the hole in the relevant graphics layers.
Because setting the size of the video layer and punching the hole in the graphics layers beneath the video layer are completed one after the other in the same thread, the hole size in the graphics layer is guaranteed to be exactly consistent with the size of the video layer, enabling synchronized, matched display of the video layer and the graphics layer.
S28. Media HW sends the processed video layer to the display driver.
S30. The HWC sends the processed graphics layer to the display driver.
It should be understood that S28 and S30 can proceed simultaneously, and that S28 and S20-S24 are also parallel, with no fixed order between them; for example, once Media HW obtains the processed video layer in S18, S28 is executed to send the processed video layer to the display driver, while step S20 or S22 is executed when the first Video Buffer is received or a change in the size of the Video Buffer is detected. The processed video layer data is stored in the Video Buffer, and Media HW sends the indication information of the Video Buffer to the display driver so that the display driver can obtain the processed video layer data from the corresponding memory; the processed graphics layer data is stored in the FrameBuffer, and the HWC sends the indication information of the FrameBuffer to the display driver so that the display driver can obtain the processed graphics layer data from the corresponding memory. Optionally, if the HWC itself does not support overlaying graphics, the overlay of the multiple graphics layers is completed by the GPU; the GPU returns the indication information of the FrameBuffer to SurfaceFlinger, SurfaceFlinger sends it to the HWC, and the HWC in turn sends the indication information of the FrameBuffer to the display driver.
S32. When Display Vsync arrives, the display driver overlays the processed video layer and the processed graphics layer to obtain the display data, and sends it to the display device for display.
Display Vsync is also used to control the refresh frame rate of the display device. Because the processed graphics layer data contains the hole region, and the size of the hole region is consistent with the size of the processed video layer, after the display driver combines the two, the video layer and the hole region match completely, so that the video layer can be displayed synchronously through the hole region. In addition, because the composition of the multiple graphics layers is controlled by Graphic Vsync while the overlay of the display video data and the refresh frame rate of the display device are controlled by Display Vsync, and Graphic Vsync and Display Vsync are independent of each other, the frame rate of Display Vsync can be higher than that of Graphic Vsync; this image processing architecture can therefore support the playback of high-frame-rate video whose frame rate exceeds the graphics refresh frame rate.
It should be understood that, for ease of understanding, the method embodiment corresponding to FIG. 9 describes the method in the form of steps, but the step numbers do not restrict the execution order between the steps of the method. The steps executed in the Video Thread and the steps executed in the Main Thread are parallel.
FIG. 10 is an architecture diagram of an exemplary image data processing apparatus provided by an embodiment of this application. The apparatus includes a processor on which software instructions run to form a framework layer, a hardware abstraction layer, and a driver layer. Optionally, the apparatus further includes a transmission interface through which the processor receives data sent by other apparatuses or sends data to other apparatuses; the transmission interface may be, for example, an HDMI interface, a V-By-One interface, an eDP interface, a MIPI interface, a DP interface, or a Universal Serial Bus (USB) interface. These interfaces are usually electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces, which this embodiment does not limit. The apparatus may be coupled to hardware resources such as a display, media hardware, or graphics hardware through connectors, transmission lines, or buses. The apparatus may be a processor chip with image or video processing functionality. In one optional case, the apparatus, the media hardware, and the graphics hardware may be integrated on one chip. In another optional case, the apparatus, the media hardware, the graphics hardware, and the display may be integrated in one terminal. It should be understood that the image processing frameworks shown in the aforementioned FIG. 5 and FIG. 6 can run on the apparatus shown in FIG. 10, and the apparatus shown in FIG. 10 can be used to implement the method embodiments corresponding to the aforementioned FIG. 7 to FIG. 9.
Exemplarily, the framework layer includes SurfaceFlinger, the hardware abstraction layer includes a graphics hardware abstraction and a media hardware abstraction, and the driver layer includes a media hardware driver, a graphics hardware driver, and a display driver. The graphics hardware abstraction corresponds to the graphics hardware driver, and the media hardware abstraction corresponds to the media hardware driver; the graphics hardware can be called through the graphics hardware abstraction and the graphics hardware driver, and the media hardware can be called through the media hardware abstraction and the media hardware driver. For example, the graphics hardware abstraction is accessed to call the graphics hardware driver, thereby calling the graphics hardware, and the media hardware abstraction is accessed to call the media hardware driver, thereby calling the media hardware.
The framework layer is configured to, in the first thread, call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver to composite multiple graphics layers and punch a hole in at least one of the graphics layers, obtaining a composited graphics layer that includes a hole region.
The framework layer is further configured to, in the second thread, call Media HW through the media hardware abstraction and the media hardware driver to process the video layer, obtaining a processed video layer; the processed video layer can be displayed through the hole region.
The display driver is configured to overlay the composited graphics layer and the processed video layer to obtain the display data.
It should be understood that the composited graphics layer is stored in the FrameBuffer. After the graphics hardware obtains the composited graphics layer, the graphics hardware abstraction sends the indication information of the FrameBuffer to the display driver, and the display driver can read the composited graphics layer data from the corresponding memory space according to that indication information. After the media hardware obtains the processed video layer, the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver, and the display driver can read the processed video layer data from the corresponding memory space according to that indication information.
In an optional case, the framework layer is further configured to, in the first thread, set the size-related information of the video layer and send it to the media hardware abstraction.
The framework layer is specifically configured to, in the first thread, call the graphics hardware to composite the multiple graphics layers and punch the hole in at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and, in the second thread, call Media HW to process the video layer according to the size-related information of the video layer, obtaining the processed video layer.
In an optional case, the framework layer is specifically configured to call the graphics hardware, based on the first vertical synchronization signal, to composite the multiple graphics layers and punch the hole in at least one graphics layer in the first thread; the display driver is specifically configured to overlay the composited graphics layer and the processed video layer based on the second vertical synchronization signal to obtain the display data; where the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In an optional case, the framework layer is specifically configured to, when the valid signal of the first vertical synchronization signal arrives, in the first thread, first send the size-related information of the video layer to the media hardware abstraction, then call the graphics hardware to composite the multiple graphics layers and punch the hole in at least one graphics layer according to the size-related information of the video layer.
In an optional case, the frame rate of the first vertical synchronization signal is lower than the frame rate of the second vertical synchronization signal.
In an optional case, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, the size of the first Video Buffer and set the size-related information of the video layer according to it.
Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, the changed size of the Video Buffer and set the size-related information of the video layer according to it; where the Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
在一种可选的情况中,图形硬件包括HWC和GPU,如图11所示为本申请实施例提供的另一种示例性的图像处理装置的架构示意图。对应的,硬件抽象层包括HWC抽象,驱动层包括HWC驱动和GPU驱动。In an optional situation, the graphics hardware includes an HWC and a GPU, as shown in FIG. 11 is a schematic structural diagram of another exemplary image processing apparatus provided in an embodiment of the present application. Correspondingly, the hardware abstraction layer includes the HWC abstraction, and the driver layer includes the HWC driver and the GPU driver.
The framework layer is specifically configured to: in the first thread, invoke the HWC through the HWC abstraction and the HWC driver to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer, obtaining the composited graphics layer. The HWC abstraction is configured to send the composited graphics layer to the display driver; the media hardware abstraction is further configured to send the processed video layer to the display driver.
When the HWC does not support graphics-layer composition and hole-punching, the framework layer is specifically configured to: in the first thread, invoke the GPU through the GPU driver to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer, obtaining the composited graphics layer. The GPU is further configured to return the composited graphics layer to the framework layer; the framework layer is further configured to send the composited graphics layer to the HWC abstraction; the HWC abstraction is configured to send the composited graphics layer to the display driver; and the Media HW abstraction is further configured to send the processed video layer to the display driver.
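The selection between the HWC path and the GPU fallback path can be modeled as a capability check made by the framework layer at composition time. The following C++ sketch is only an illustrative model under that assumption; the Compositor, Hwc, and Gpu types and their methods are invented names, not the actual HWC or GPU driver interfaces.

```cpp
#include <cstdio>

// Invented interfaces modelling the two composition back ends.
struct Compositor {
    virtual ~Compositor() = default;
    virtual bool supportsHolePunch() const = 0;
    virtual void compositeAndPunch() = 0;
};

struct Hwc : Compositor {
    bool capable;
    explicit Hwc(bool c) : capable(c) {}
    bool supportsHolePunch() const override { return capable; }
    void compositeAndPunch() override {
        std::printf("HWC: composite layers + punch hole, send to display driver\n");
    }
};

struct Gpu : Compositor {
    bool supportsHolePunch() const override { return true; }
    void compositeAndPunch() override {
        std::printf("GPU: composite layers + punch hole, return to framework,\n"
                    "     framework forwards result via HWC abstraction -> display driver\n");
    }
};

// Framework-layer dispatch: prefer the HWC, fall back to the GPU when unsupported.
void composeFrame(Hwc& hwc, Gpu& gpu) {
    if (hwc.supportsHolePunch()) hwc.compositeAndPunch();
    else                         gpu.compositeAndPunch();
}

int main() {
    Hwc capableHwc(true), limitedHwc(false);
    Gpu gpu;
    composeFrame(capableHwc, gpu);   // HWC path
    composeFrame(limitedHwc, gpu);   // GPU fallback path
}
```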
It should be understood that the HWC abstraction sends the display driver an indication of the FrameBuffer that stores the composited graphics layer, and the media hardware abstraction sends the display driver an indication of the Video Buffer that stores the processed video layer.
In an optional case, the SurfaceFlinger of the framework layer is configured to create the first thread and the second thread during the initialization phase. SurfaceFlinger is further configured to, when a graphics buffer (Graphic Buffer) is received, notify the first thread to process the graphics-layer data in the Graphic Buffer, and when a Video Buffer is received, notify the second thread to process the video-layer data in the Video Buffer. A Graphic Buffer stores the data of one graphics layer, and a Video Buffer stores the data of one video layer.
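This routing rule can be pictured as two work queues created during initialization, with incoming buffers dispatched by type: Graphic Buffers to the first thread and Video Buffers to the second. The C++ sketch below is an assumed, simplified model of that dispatch and is not SurfaceFlinger code; the BlockingQueue type and the string payloads are placeholders.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A tiny blocking queue; one instance per worker thread.
template <typename T>
class BlockingQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<T> q;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(v)); }
        cv.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        T v = std::move(q.front());
        q.pop();
        return v;
    }
};

int main() {
    BlockingQueue<std::string> graphicQueue;  // consumed by the first thread
    BlockingQueue<std::string> videoQueue;    // consumed by the second thread

    // "Initialization phase": create the two worker threads.
    std::thread first([&] {
        std::printf("thread 1: composite %s\n", graphicQueue.pop().c_str());
    });
    std::thread second([&] {
        std::printf("thread 2: process %s\n", videoQueue.pop().c_str());
    });

    // Arriving buffers are routed by type, as described above.
    graphicQueue.push("Graphic Buffer (one graphics layer)");
    videoQueue.push("Video Buffer (one video layer)");

    first.join();
    second.join();
}
```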
In an optional case, the Media HW abstraction is specifically configured to receive the first frame of video-layer data in the second thread and to receive the size-related information of the video layer in the first thread; the framework layer is specifically configured to, in the second thread, invoke the Media HW through the Media HW abstraction to process the first frame of video-layer data according to the size-related information of the video layer.
FIG. 12 is a schematic architectural diagram of another exemplary image processing apparatus provided by an embodiment of this application. The apparatus includes a framework layer, graphics hardware, media hardware, and a display driver, where the framework layer and the display driver are layers of an operating system formed by software instructions running on a processor. The graphics hardware and the media hardware may be coupled to the processor through connectors, interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, but may also be mechanical or other forms of interfaces. Exemplarily, the graphics hardware may include a GPU and an HWC.
The framework layer is configured to, in the first thread, invoke the graphics hardware to composite multiple graphics layers and to perform hole-punching on at least one of the multiple graphics layers, obtaining a composited graphics layer that includes a hole region;
The framework layer is further configured to, in the second thread, invoke the Media HW to process the video layer, obtaining a processed video layer; the processed video layer can be displayed through the hole region;
The display driver is configured to overlay the composited graphics layer and the processed video layer to obtain display data.
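At the pixel level, hole-punching plus overlay reduces to a simple selection rule: within the hole region the composited graphics layer is fully transparent, so the overlay stage lets the video plane show through, while everywhere else the graphics pixel is kept. The C++ sketch below demonstrates this rule on toy buffers; the ARGB pixel format, the kTransparent sentinel, and the function names are assumptions for the example, not the behavior of any particular display controller.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct Rect { int x, y, w, h; };
constexpr uint32_t kTransparent = 0x00000000;  // alpha == 0 marks the hole

bool inside(const Rect& r, int x, int y) {
    return x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h;
}

// Hole-punch: clear the graphics pixels covering the video rectangle.
void punchHole(std::vector<uint32_t>& gfx, int w, int h, const Rect& hole) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (inside(hole, x, y)) gfx[y * w + x] = kTransparent;
}

// Overlay: transparent graphics pixels let the video plane show through.
std::vector<uint32_t> overlay(const std::vector<uint32_t>& gfx,
                              const std::vector<uint32_t>& video,
                              int w, int h) {
    std::vector<uint32_t> out(w * h);
    for (int i = 0; i < w * h; ++i)
        out[i] = (gfx[i] == kTransparent) ? video[i] : gfx[i];
    return out;
}

int main() {
    const int w = 8, h = 4;
    std::vector<uint32_t> gfx(w * h, 0xFF202020);    // opaque UI layer
    std::vector<uint32_t> video(w * h, 0xFF00FF00);  // decoded video frame
    punchHole(gfx, w, h, {2, 1, 4, 2});              // hole matches the video layer
    auto frame = overlay(gfx, video, w, h);
    for (int y = 0; y < h; ++y, std::puts(""))
        for (int x = 0; x < w; ++x)
            std::printf(frame[y * w + x] == 0xFF00FF00 ? "V" : "G");
}
```

Running it prints a small grid in which V marks pixels taken from the video plane through the hole and G marks graphics pixels.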
In an optional case, the software instructions running on the processor are further used to form a hardware abstraction layer, a graphics hardware driver, and a media hardware driver, where the hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware and a media hardware abstraction corresponding to the media hardware. The framework layer is specifically configured to invoke the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to invoke the media hardware through the media hardware abstraction and the media hardware driver.
In an optional case, the framework layer is further configured to: in the first thread, set the size-related information of the video layer and send the size-related information of the video layer to the Media HW. The framework layer is specifically configured to: in the first thread, invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer according to the size-related information of the video layer, obtaining the composited graphics layer; and in the second thread, invoke the Media HW to process the video layer according to the size-related information of the video layer, obtaining the processed video layer.
In an optional case, the framework layer is specifically configured to, based on the first Vsync signal, invoke the graphics hardware in the first thread to composite the multiple graphics layers and perform hole-punching on at least one graphics layer; the display driver is specifically configured to, when the second Vsync signal arrives, overlay the composited graphics layer and the processed video layer to obtain the display data; the first Vsync signal and the second Vsync signal are independent of each other.
In an optional case, the framework layer is specifically configured to, when an active pulse of the first Vsync signal arrives, first send the size-related information of the video layer to the Media HW in the first thread, and then invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer according to the size-related information of the video layer.
In an optional case, the frame rate of the first Vsync signal is lower than the frame rate of the second Vsync signal.
In an optional case, when the second thread obtains the first video buffer (Video Buffer), it sends first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain the size of the first Video Buffer in the first thread and set the size-related information of the video layer according to the size of the first Video Buffer. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain the changed Video Buffer size in the first thread and set the size-related information of the video layer according to the changed Video Buffer size. A Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In an optional case, the graphics hardware includes a hardware composer (HWC), and the framework layer is specifically configured to: in the first thread, invoke the HWC to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer, obtaining the composited graphics layer. Before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the HWC is further configured to send the composited graphics layer to the display driver, and the Media HW is further configured to send the processed video layer to the display driver.
In an optional case, the graphics hardware includes a graphics processing unit (GPU), and the framework layer is specifically configured to: in the first thread, invoke the GPU to composite the multiple graphics layers and to perform hole-punching on at least one graphics layer, obtaining the composited graphics layer. Before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the GPU is further configured to return the composited graphics layer to the framework layer; the framework layer is further configured to send the composited graphics layer to the HWC; the HWC is further configured to send the composited graphics layer to the display driver; and the Media HW is further configured to send the processed video layer to the display driver.
In an optional case, the framework layer includes SurfaceFlinger, which is configured to create the first thread and the second thread during the initialization phase. SurfaceFlinger is further configured to, when a Graphic Buffer is received, notify the first thread to process the graphics-layer data in the Graphic Buffer, and when a Video Buffer is received, notify the second thread to process the video-layer data in the Video Buffer. A Graphic Buffer stores the data of one graphics layer, and a Video Buffer stores the data of one video layer.
In an optional case, the Media HW is specifically configured to receive the first frame of video-layer data in the second thread and to receive the size-related information of the video layer in the first thread; the framework layer is specifically configured to, in the second thread, invoke the Media HW to process the first frame of video-layer data according to the size-related information of the video layer.
It should be understood that the image processing frameworks shown in FIG. 5 and FIG. 6 can run on the apparatus shown in FIG. 12, and the apparatus shown in FIG. 12 can be used to implement the method embodiments corresponding to FIG. 7 to FIG. 9. For detailed explanations, refer to the descriptions of the foregoing method embodiments.
FIG. 13 is a schematic architectural diagram of another exemplary image processing apparatus provided by an embodiment of this application. The apparatus includes a processor, graphics hardware, and media hardware; software instructions run on the processor to form a framework layer and a display driver.
The framework layer is configured to, in a first thread, invoke the graphics hardware to composite multiple graphics layers and to perform hole-punching on at least one graphics layer, obtaining a composited graphics layer that includes a hole region;
The framework layer is further configured to, in a second thread, invoke the media hardware to process the video layer, obtaining a processed video layer;
The display driver is configured to overlay the composited graphics layer and the processed video layer to obtain display data.
As shown in FIG. 13, the software instructions running on the processor are further used to form a hardware abstraction layer, which includes a graphics hardware abstraction and a media hardware abstraction. Correspondingly, although not shown in FIG. 13, the driver layer further includes a graphics hardware driver and a media hardware driver. The graphics hardware and the media hardware may be coupled to the processor through connectors, interfaces, transmission lines, or buses. In an optional case, the graphics hardware includes an HWC and a GPU, as shown in FIG. 14; correspondingly, the hardware abstraction layer includes an HWC abstraction, and the driver layer includes an HWC driver and a GPU driver.
It should be understood that the image processing frameworks shown in FIG. 5 and FIG. 6 can run on the apparatus shown in FIG. 13 or FIG. 14, and the apparatus shown in FIG. 13 or FIG. 14 can be used to implement the method embodiments corresponding to FIG. 7 to FIG. 9. Details are not repeated here.
An embodiment of this application further provides a computer-readable storage medium storing instructions that, when run on a computer or a processor, cause the computer or the processor to perform some or all of the functions of the methods provided in the embodiments of this application.
An embodiment of this application further provides a computer program product containing instructions that, when run on a computer or a processor, causes the computer or the processor to perform some or all of the functions of the methods provided in the embodiments of this application.
The foregoing embodiments are intended only to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (33)

  1. A method for image data processing, characterized in that the method comprises:
    compositing multiple graphics layers in a first thread and performing hole-punching on at least one of the multiple graphics layers, to obtain a composited graphics layer, wherein the composited graphics layer comprises a hole region;
    processing a video layer in a second thread to obtain a processed video layer, wherein the processed video layer can be displayed through the hole region;
    overlaying the composited graphics layer and the processed video layer to obtain display data.
  2. The method according to claim 1, characterized in that, before the compositing of the multiple graphics layers in the first thread and the hole-punching of at least one graphics layer to obtain the composited graphics layer, the method further comprises:
    setting, in the first thread, information related to a size of the video layer;
    the performing hole-punching on at least one graphics layer specifically comprises:
    performing hole-punching on the at least one graphics layer according to the information related to the size of the video layer;
    the processing of the video layer in the second thread to obtain the processed video layer specifically comprises:
    processing, in the second thread, the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
  3. The method according to claim 1 or 2, characterized in that:
    the multiple graphics layers are composited and the at least one graphics layer is hole-punched in the first thread based on a first vertical synchronization signal;
    the composited graphics layer and the processed video layer are overlaid based on a second vertical synchronization signal, to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  4. The method according to claim 3, characterized in that:
    when an active pulse of the first vertical synchronization signal arrives, in the first thread, the information related to the size of the video layer is set first, and then the multiple graphics layers are composited and the at least one graphics layer is hole-punched according to the information related to the size of the video layer.
  5. The method according to claim 3 or 4, characterized in that a frame rate of the first vertical synchronization signal is lower than a frame rate of the second vertical synchronization signal.
  6. The method according to any one of claims 1 to 5, further comprising:
    when the second thread obtains a first video buffer (Video Buffer), sending first notification information to the first thread;
    setting, in the first thread, the information related to the size of the video layer according to a size of the first Video Buffer; or,
    when the second thread detects that the size of the Video Buffer has changed, sending the first notification information to the first thread;
    resetting, in the first thread, the information related to the size of the video layer according to the changed size of the Video Buffer;
    wherein the Video Buffer stores data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  7. The method according to any one of claims 1 to 6, characterized in that the compositing of the multiple graphics layers in the first thread and the hole-punching of at least one graphics layer to obtain the composited graphics layer specifically comprises:
    compositing, by a hardware composer (HWC) in the first thread, the multiple graphics layers, and performing hole-punching on the at least one graphics layer, to obtain the composited graphics layer.
  8. The method according to any one of claims 1 to 6, characterized in that the compositing of the multiple graphics layers in the first thread and the hole-punching of at least one graphics layer to obtain the composited graphics layer specifically comprises:
    compositing, by a graphics processing unit (GPU) in the first thread, the multiple graphics layers, and performing hole-punching on the at least one graphics layer, to obtain the composited graphics layer.
  9. The method according to any one of claims 1 to 8, characterized in that, after the setting, in the first thread, of the information related to the size of the video layer, the method further comprises:
    sending, in the first thread, the information related to the size of the video layer to media hardware (Media HW);
    the processing of the video layer in the second thread to obtain the processed video layer specifically comprises:
    processing, by the Media HW in the second thread, the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
  10. The method according to any one of claims 1 to 9, characterized in that
    the overlaying of the composited graphics layer and the processed video layer to obtain the display data specifically comprises:
    overlaying, by a display driver, the composited graphics layer and the processed video layer, to obtain the display data.
  11. The method according to any one of claims 1 to 8, further comprising:
    creating, by SurfaceFlinger, the first thread and the second thread during an initialization phase;
    when the SurfaceFlinger receives a graphics buffer (Graphic Buffer), notifying the first thread to process graphics-layer data in the Graphic Buffer;
    when the SurfaceFlinger receives a Video Buffer, notifying the second thread to process video-layer data in the Video Buffer;
    wherein the Graphic Buffer stores one layer of graphics-layer data, and the Video Buffer stores one layer of video-layer data.
  12. An apparatus for image data processing, characterized in that the apparatus comprises a processor on which software instructions run to form a framework layer, a hardware abstraction layer (HAL), and a driver layer, wherein the HAL comprises a graphics hardware abstraction and a media hardware (Media HW) abstraction, and the driver layer comprises a graphics hardware driver, a media hardware driver, and a display driver;
    the framework layer is configured to, in a first thread, invoke graphics hardware through the graphics hardware abstraction and the graphics hardware driver to composite multiple graphics layers and to perform hole-punching on at least one of the multiple graphics layers, to obtain a composited graphics layer, wherein the composited graphics layer comprises a hole region;
    the framework layer is configured to, in a second thread, invoke the Media HW through the media hardware abstraction and the media hardware driver to process a video layer, to obtain a processed video layer, wherein the processed video layer can be displayed through the hole region;
    the display driver is configured to overlay the composited graphics layer and the processed video layer, to obtain display data.
  13. The apparatus according to claim 12, characterized in that the framework layer is further configured to:
    in the first thread, set information related to a size of the video layer and send the information related to the size of the video layer to the Media HW abstraction;
    the framework layer is specifically configured to:
    in the first thread, invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer according to the information related to the size of the video layer, to obtain the composited graphics layer;
    in the second thread, invoke the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
  14. The apparatus according to claim 12 or 13, characterized in that:
    the framework layer is specifically configured to, based on a first vertical synchronization signal, invoke the graphics hardware to composite the multiple graphics layers and perform hole-punching on the at least one graphics layer in the first thread;
    the display driver is specifically configured to overlay the composited graphics layer and the processed video layer based on a second vertical synchronization signal, to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  15. The apparatus according to claim 14, characterized in that:
    the framework layer is specifically configured to, when an active pulse of the first vertical synchronization signal arrives, in the first thread, first send the information related to the size of the video layer to the Media HW abstraction, and then invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer according to the information related to the size of the video layer.
  16. The apparatus according to claim 14 or 15, characterized in that a frame rate of the first vertical synchronization signal is lower than a frame rate of the second vertical synchronization signal.
  17. The apparatus according to any one of claims 12 to 16, characterized in that:
    when the second thread obtains a first video buffer (Video Buffer), it sends first notification information to the first thread;
    the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, a size of the first Video Buffer, and set the information related to the size of the video layer according to the size of the first Video Buffer; or,
    when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread;
    the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, the changed size of the Video Buffer, and set the information related to the size of the video layer according to the changed size of the Video Buffer;
    wherein the Video Buffer stores data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  18. The apparatus according to any one of claims 12 to 17, characterized in that the graphics hardware comprises a hardware composer (HWC), the graphics hardware abstraction comprises an HWC abstraction, the graphics hardware driver comprises an HWC driver, and the framework layer is specifically configured to:
    in the first thread, invoke the HWC through the HWC abstraction and the HWC driver to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer, to obtain the composited graphics layer;
    before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the HWC abstraction is further configured to:
    send the composited graphics layer to the display driver;
    the Media HW abstraction is further configured to send the processed video layer to the display driver.
  19. The apparatus according to any one of claims 12 to 17, characterized in that the graphics hardware comprises a graphics processing unit (GPU), the graphics hardware driver comprises a GPU driver, and the framework layer is specifically configured to:
    in the first thread, invoke the GPU through the GPU driver to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer, to obtain the composited graphics layer;
    before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the GPU is further configured to:
    return the composited graphics layer to the framework layer;
    the framework layer is further configured to send the composited graphics layer to the HWC abstraction;
    the HWC abstraction is further configured to send the composited graphics layer to the display driver;
    the Media HW abstraction is further configured to send the processed video layer to the display driver.
  20. The apparatus according to any one of claims 12 to 19, characterized in that the framework layer comprises SurfaceFlinger, and the SurfaceFlinger is configured to:
    create the first thread and the second thread during an initialization phase;
    the SurfaceFlinger is further configured to, when a graphics buffer (Graphic Buffer) is received, notify the first thread to process graphics-layer data in the Graphic Buffer;
    when a Video Buffer is received, notify the second thread to process video-layer data in the Video Buffer;
    wherein the Graphic Buffer stores one layer of graphics-layer data, and the Video Buffer stores one layer of video-layer data.
  21. The apparatus according to any one of claims 12 to 20, characterized in that the Media HW abstraction is specifically configured to:
    receive a first frame of video-layer data in the second thread;
    receive the information related to the size of the video layer in the first thread;
    the framework layer is specifically configured to:
    in the second thread, invoke the Media HW through the Media HW abstraction to process the first frame of video-layer data according to the information related to the size of the video layer.
  22. An apparatus for image data processing, characterized in that the apparatus comprises:
    a framework layer, configured to, in a first thread, invoke graphics hardware to composite multiple graphics layers and to perform hole-punching on at least one of the multiple graphics layers, to obtain a composited graphics layer, wherein the composited graphics layer comprises a hole region;
    the graphics hardware;
    media hardware (Media HW);
    the framework layer is further configured to, in a second thread, invoke the Media HW to process a video layer, to obtain a processed video layer, wherein the processed video layer can be displayed through the hole region;
    a display driver, configured to overlay the composited graphics layer and the processed video layer, to obtain display data.
  23. The apparatus according to claim 22, characterized in that the framework layer is further configured to:
    in the first thread, set information related to a size of the video layer and send the information related to the size of the video layer to the Media HW;
    the framework layer is specifically configured to:
    in the first thread, invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer according to the information related to the size of the video layer, to obtain the composited graphics layer;
    in the second thread, invoke the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
  24. The apparatus according to claim 22 or 23, characterized in that:
    the framework layer is specifically configured to, based on a first vertical synchronization signal, invoke the graphics hardware in the first thread to composite the multiple graphics layers and perform hole-punching on the at least one graphics layer;
    the display driver is specifically configured to, when a second vertical synchronization signal arrives, overlay the composited graphics layer and the processed video layer, to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  25. The apparatus according to claim 24, characterized in that:
    the framework layer is specifically configured to, when an active pulse of the first vertical synchronization signal arrives, in the first thread, first send the information related to the size of the video layer to the Media HW, and then invoke the graphics hardware to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer according to the information related to the size of the video layer.
  26. The apparatus according to claim 24 or 25, characterized in that a frame rate of the first vertical synchronization signal is lower than a frame rate of the second vertical synchronization signal.
  27. The apparatus according to any one of claims 22 to 26, characterized in that:
    when the second thread obtains a first video buffer (Video Buffer), it sends first notification information to the first thread;
    the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, a size of the first Video Buffer, and set the information related to the size of the video layer according to the size of the first Video Buffer; or,
    when the second thread detects that the size of the Video Buffer has changed, it sends the first notification information to the first thread;
    the framework layer is further configured to, after the first thread receives the first notification information, obtain, in the first thread, the changed size of the Video Buffer, and set the information related to the size of the video layer according to the changed size of the Video Buffer;
    wherein the Video Buffer stores data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
  28. The apparatus according to any one of claims 22 to 27, characterized in that the graphics hardware comprises a hardware composer (HWC), and the framework layer is specifically configured to:
    in the first thread, invoke the HWC to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer, to obtain the composited graphics layer;
    before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the HWC is further configured to:
    send the composited graphics layer to the display driver;
    the Media HW is further configured to send the processed video layer to the display driver.
  29. The apparatus according to any one of claims 22 to 27, characterized in that the graphics hardware comprises a graphics processing unit (GPU), and the framework layer is specifically configured to:
    in the first thread, invoke the GPU to composite the multiple graphics layers and to perform hole-punching on the at least one graphics layer, to obtain the composited graphics layer;
    before the display driver overlays the composited graphics layer and the processed video layer to obtain the display data, the GPU is further configured to:
    return the composited graphics layer to the framework layer;
    the framework layer is further configured to send the composited graphics layer to the HWC;
    the HWC is further configured to send the composited graphics layer to the display driver;
    the Media HW is further configured to send the processed video layer to the display driver.
  30. The apparatus according to any one of claims 22 to 29, characterized in that the framework layer comprises SurfaceFlinger, and the SurfaceFlinger is configured to:
    create the first thread and the second thread during an initialization phase;
    the SurfaceFlinger is further configured to,
    when a graphics buffer (Graphic Buffer) is received, notify the first thread to process graphics-layer data in the Graphic Buffer;
    when a Video Buffer is received, notify the second thread to process video-layer data in the Video Buffer;
    wherein the Graphic Buffer stores one layer of graphics-layer data, and the Video Buffer stores one layer of video-layer data.
  31. The apparatus according to any one of claims 22 to 30, characterized in that the Media HW is specifically configured to:
    receive a first frame of video-layer data in the second thread;
    receive the information related to the size of the video layer in the first thread;
    the framework layer is specifically configured to:
    in the second thread, invoke the Media HW to process the first frame of video-layer data according to the information related to the size of the video layer.
  32. A computer-readable storage medium, wherein the computer-readable storage medium stores instructions that, when run on a computer or a processor, cause the computer or the processor to perform the method according to any one of claims 1 to 11.
  33. A computer program product containing instructions that, when run on a computer or a processor, causes the computer or the processor to perform the method according to any one of claims 1 to 11.
PCT/CN2020/096018 2020-06-15 2020-06-15 Image data processing apparatus and method WO2021253141A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080102044.9A CN116075804A (en) 2020-06-15 2020-06-15 Image data processing device and method
PCT/CN2020/096018 WO2021253141A1 (en) 2020-06-15 2020-06-15 Image data processing apparatus and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096018 WO2021253141A1 (en) 2020-06-15 2020-06-15 Image data processing apparatus and method

Publications (1)

Publication Number Publication Date
WO2021253141A1

Family

ID=79268886

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/096018 WO2021253141A1 (en) 2020-06-15 2020-06-15 Image data processing apparatus and method

Country Status (2)

Country Link
CN (1) CN116075804A (en)
WO (1) WO2021253141A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10257487B1 (en) * 2018-01-16 2019-04-09 Qualcomm Incorporated Power efficient video playback based on display hardware feedback
CN111198735A (en) * 2018-11-20 2020-05-26 深圳市优必选科技有限公司 Layer information acquisition method, layer information acquisition device and terminal equipment
CN109934795A (en) * 2019-03-04 2019-06-25 京东方科技集团股份有限公司 A kind of display methods, device, electronic equipment and computer readable storage medium
CN111124562A (en) * 2019-11-15 2020-05-08 北京经纬恒润科技有限公司 Application program double-screen display method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO, SHENGYANG: "Analysis of the realization principle of Android view SurfaceView", 16 March 2013 (2013-03-16), CN, pages 1 - 15, XP009533127, Retrieved from the Internet <URL:https://blog.csdn.net/Luoshengyang/article/details/8661317> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130766A (en) * 2023-01-17 2023-11-28 荣耀终端有限公司 Thread processing method and electronic equipment

Also Published As

Publication number Publication date
CN116075804A (en) 2023-05-05

Similar Documents

Publication Publication Date Title
WO2022052772A1 (en) Application interface display method under multi-window mirroring scenario, and electronic device
US11927874B2 (en) Mobile camera system
US20160110152A1 (en) Method for sharing screen between devices and device using the same
WO2021129253A1 (en) Method for displaying multiple windows, and electronic device and system
US8477143B2 (en) Buffers for display acceleration
US20130038726A1 (en) Electronic apparatus and method for providing stereo sound
CN112558825A (en) Information processing method and electronic equipment
US20140351729A1 (en) Method of operating application and electronic device implementing the same
JP2013546043A (en) Instant remote rendering
JP2013542515A (en) Redirection between different environments
US20150067555A1 (en) Method for configuring screen and electronic device thereof
AU2014201365A1 (en) Image data processing method and electronic device supporting the same
WO2018161534A1 (en) Image display method, dual screen terminal and computer readable non-volatile storage medium
EP2329401A1 (en) Expandable systems architecture for a handheld device that dynamically generates different user environments for device displays
WO2022083296A1 (en) Display method and electronic device
KR20150037066A (en) Display method of electronic apparatus and electronic appparatus thereof
CN112527174B (en) Information processing method and electronic equipment
EP2892047A2 (en) Image data output control method and electronic device supporting the same
CN112527222A (en) Information processing method and electronic equipment
US20200376375A1 (en) Method and apparatus for performing client side latency enhancement with aid of cloud game server side image orientation control
WO2024041047A1 (en) Screen refresh rate switching method and electronic device
WO2021253141A1 (en) Image data processing apparatus and method
WO2021254113A1 (en) Control method for three-dimensional interface and terminal
US11936928B2 (en) Method, system and device for sharing contents
WO2022151937A1 (en) Interface display method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940930

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940930

Country of ref document: EP

Kind code of ref document: A1