CN116075804A - Image data processing device and method


Info

Publication number
CN116075804A
Authority
CN
China
Prior art keywords
layer
video
thread
graphics
graphic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080102044.9A
Other languages
Chinese (zh)
Inventor
张运强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN116075804A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of this application disclose an image data processing method and apparatus in which the synthesis and hole-digging processing of graphics layers and the processing of the video layer are performed in parallel in two threads, so that processing of the video layer is no longer affected by graphics-layer synthesis; this effectively solves the problem of frame loss during video playback caused by time-consuming graphics-layer synthesis. The method includes: synthesizing multiple graphics layers in a first thread and performing hole-digging processing on at least one of the graphics layers to obtain a synthesized graphics layer, where the synthesized graphics layer includes a hole-digging region; processing the video layer in a second thread to obtain a processed video layer; and superimposing the synthesized graphics layer and the processed video layer to obtain display data.

Description

Image data processing device and method
Technical Field
The present disclosure relates to the field of display technologies, and in particular, to an apparatus and a method for processing image data.
Background
The content that a display device can show includes graphics data, for example status bars, navigation bars, and icon data. Generally, the status bar, the navigation bar, and the icon data each correspond to one graphics layer, and video data corresponds to one video layer. When multiple pieces of graphics data and video data are displayed at the same time, the picture the user sees on the display screen is the result of combining the multiple graphics layers with the video layer.
However, in complex scenarios, graphics-layer synthesis is time-consuming and often exceeds one vertical synchronization (Vsync) period, resulting in dropped frames during video playback.
Disclosure of Invention
The embodiments of this application provide an image data processing device and method for solving the problem of frame loss caused by time-consuming graphics-layer synthesis during video playback.
A first aspect of the present application provides a method of image data processing, the method comprising: synthesizing a plurality of graphic layers in a first thread and digging holes in at least one graphic layer in the plurality of graphic layers to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area; processing the video layer in a second thread to obtain a processed video layer, wherein the processed video layer can be displayed through the hole digging area; and superposing the synthesized graphic layer and the processed video layer to obtain display data.
It will be appreciated that the synthesized graphics layer data includes a hole-digging region, which is typically set to be transparent so that, when the graphics layer and the video layer are combined, the video layer can be displayed through the hole-digging region. In one alternative, holes may first be dug in each of the graphics layers located below the video layer, and the multiple graphics layers may then be combined into one graphics layer; alternatively, the multiple graphics layers may first be combined into one graphics layer, and the hole may then be dug in the combined graphics layer.
In the embodiments of this application, the synthesis and hole-digging processing of the graphics layers and the processing of the video layer are performed in parallel in two threads, so that processing of the video layer is no longer affected by graphics-layer synthesis and video playback is no longer held up by graphics-layer synthesis; this effectively solves the problem of frame loss caused by time-consuming graphics-layer synthesis during video playback.
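The following is a minimal sketch, in C++, of the two-thread split described above; it is not the patented implementation, and all type and function names are assumptions introduced for illustration.

```cpp
// Minimal sketch of the two-thread split: thread 1 synthesizes the graphics layers and
// records the hole-digging region; thread 2 processes the video layer; the display side
// then superimposes the two results, with the video showing through the transparent hole.
#include <mutex>
#include <thread>
#include <vector>

struct Rect { int x = 0, y = 0, w = 0, h = 0; };        // hole region == video-layer bounds
struct GraphicLayer { /* one layer of graphics data */ };
struct VideoLayer   { /* one decoded video frame   */ };

struct SynthesizedGraphics { Rect hole;   /* blended pixels, hole cleared to transparent */ };
struct ProcessedVideo      { Rect bounds; /* scaled / rotated video pixels               */ };

std::mutex g_lock;
SynthesizedGraphics g_graphics;                          // produced by the first thread
ProcessedVideo      g_video;                             // produced by the second thread

void graphicsThread(const std::vector<GraphicLayer>& layers, Rect videoRect) {
    (void)layers;
    SynthesizedGraphics out;
    out.hole = videoRect;                                // dig the hole to match the video layer
    // ... blend all graphics layers, then clear the hole region to fully transparent ...
    std::lock_guard<std::mutex> lk(g_lock);
    g_graphics = out;
}

void videoThread(const VideoLayer& frame, Rect videoRect) {
    (void)frame;
    ProcessedVideo out;
    out.bounds = videoRect;                              // scale / rotate the frame to videoRect
    // ... process the decoded frame independently of graphics synthesis ...
    std::lock_guard<std::mutex> lk(g_lock);
    g_video = out;
}

void superimposeAndDisplay() {
    std::lock_guard<std::mutex> lk(g_lock);
    // ... place g_video beneath g_graphics; the hole region exposes the processed video ...
}
```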
In one possible embodiment, before synthesizing the multiple graphics layers and digging the hole in at least one graphics layer in the first thread to obtain a synthesized graphics layer, the method further includes: setting information related to the size of the video layer in the first thread. The hole-digging processing of at least one graphics layer specifically includes: digging the hole in the at least one graphics layer according to the information related to the size of the video layer. Processing the video layer in the second thread to obtain a processed video layer specifically includes: in the second thread, processing the video layer according to the information related to the size of the video layer to obtain the processed video layer.
It should be appreciated that the size-related information of the video layer may include the size of the video layer and the position information of the video layer. For example, the size-related information may include one vertex coordinate of the video layer together with two lengths representing the length and width of the video layer; it may instead include two vertex coordinates of the video layer and one length, from which the size and playing position of the video layer can be uniquely determined; and if the position information of the video layer consists of the 4 vertex coordinates used to display the video layer, the size of the video layer can be determined from those 4 vertex coordinates, in which case the size-related information may include only the position information. The size-related information of the video layer is calculated by the SurfaceFlinger: the multimedia framework sets the initial size of the video layer through a system API and sends it to the SurfaceFlinger; the SurfaceFlinger can capture or perceive the user's operations of enlarging, shrinking or rotating the video, and calculates the size-related information of the video layer from the initial size of the video layer and those enlarging, shrinking or rotating operations.
In the embodiments of this application, because setting the size of the video layer and digging the hole in the graphics layers below the video layer are completed in the same thread, that is, both the processed video layer and the hole-digging region are derived from the same set video-layer size, the size of the hole-digging region in the graphics layer is exactly consistent with the size of the video layer; the processed video layer and the hole-digging region therefore match completely, so the processed video layer can be displayed synchronously through the hole-digging region.
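As a purely illustrative sketch (the structure and function names are assumptions, not SurfaceFlinger interfaces), the size-related information can be thought of as a rectangle derived from the initial size set through the system API plus the user's scaling operation:

```cpp
// Hypothetical sketch: deriving the video-layer rectangle from the initial size set through
// the system API and a user scale factor, in the spirit of the calculation described above.
struct VideoRect { int x, y, width, height; };

VideoRect computeVideoRect(const VideoRect& initial, float userScale) {
    // Keep the same origin and scale the extents; a real implementation would also apply
    // rotation and clamp the result to the display bounds.
    return { initial.x, initial.y,
             static_cast<int>(initial.width  * userScale),
             static_cast<int>(initial.height * userScale) };
}
```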
In one possible implementation, the synthesizing of the multiple graphic layers and the hole digging of at least one graphic layer are performed in the first thread based on the first vertical synchronization signal; superposing the synthesized graphic layer and the processed video layer based on the second vertical synchronous signal to obtain display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
It should be understood that the first vertical synchronization signal and the second vertical synchronization signal are two periodic signals independent of each other, and the first vertical synchronization signal and the second vertical synchronization signal may have different frame rates and different periods. Specifically, when the effective signal of the first vertical synchronous signal arrives, synthesizing a plurality of graphic layers in a first thread and hole digging treatment is carried out on at least one graphic layer; and when the effective signal of the second vertical synchronous signal arrives, the synthesized graphic layer and the processed video layer are overlapped to obtain display data. The first and second vertical synchronization signals may be active high or active low, and the first and second vertical synchronization signals may be level triggered, rising edge triggered, or falling edge triggered. The valid signal arrival of the Vsync signal can be understood as: the rising edge of the Vsync signal comes, the falling edge of the Vsync signal comes, or the Vsync signal is a high level signal or a low level signal. For example, the first vertical synchronization signal may be Graphic Vsync in the foregoing embodiment, and the second vertical synchronization signal may be Display Vsync in the foregoing embodiment.
In one possible implementation, the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
In the embodiments of this application, the first vertical synchronization signal is used to control the synthesis of the multiple graphics layers, while the signal that controls the display of the video data and the refresh frame rate of the display device is the second vertical synchronization signal. Because the two signals are independent of each other, the frame rate of the second vertical synchronization signal may be greater than that of the first, so the image processing architecture can support playback of high-frame-rate video whose frame rate is higher than the graphics refresh frame rate.
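The sketch below illustrates how two independent Vsync ticks could drive the two stages; the loop structure and the periods (about 33 ms for graphics, about 16.7 ms for display) are assumptions for illustration only.

```cpp
// Two independent, periodic Vsync ticks: Graphic Vsync drives graphics synthesis and hole
// digging in the first thread; Display Vsync drives the final superimposition. The periods
// here are illustrative only (roughly 30 Hz graphics, 60 Hz display).
#include <chrono>
#include <thread>

void graphicVsyncLoop() {
    using namespace std::chrono_literals;
    for (;;) {
        // on each Graphic Vsync: set the video-layer size, synthesize graphics layers, dig hole
        std::this_thread::sleep_for(33ms);
    }
}

void displayVsyncLoop() {
    using namespace std::chrono_literals;
    for (;;) {
        // on each Display Vsync: superimpose the synthesized graphics with the latest video
        std::this_thread::sleep_for(16ms);
    }
}
```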
In one possible implementation, when the valid signal of the first vertical synchronization signal arrives, in the first thread, the information related to the size of the video layer is set first, then the multiple graphics layers are synthesized, and hole-digging processing is performed on at least one graphics layer according to the information related to the size of the video layer.
In this embodiment of the present application, setting information related to the size of the video layer in the first thread needs to be performed after the arrival of the effective signal of the vertical synchronization signal, and setting information related to the size of the video layer and synthesizing the graphics layer are performed sequentially after the arrival of the effective signal of the same vertical synchronization signal.
In one possible implementation, when the second thread acquires a first Video Buffer, sending first notification information to the first thread; setting information related to the size of the Video layer according to the size of the first Video Buffer in the first thread; or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; in the first thread, information about the size of the Video layer is reset according to the changed size of the Video Buffer.
It should be appreciated that one Video layer of data is stored in each Video Buffer, and that the size of the Video Buffer is related to the size of the stored Video layer. Therefore, when the size of the Video layer changes, the size of the Video Buffer also changes. The first notification information is used for notifying the first thread that the size of the video layer is changed.
In a possible implementation, the first thread and the second thread can communicate with each other: when the first Video Buffer is received or the size of the Video Buffer changes, the second thread can notify the first thread, so that the information related to the size of the video layer can be reset in the first thread and the hole can be dug in the graphics layer again according to the changed size. The video layer after the size change can then be displayed through the hole-digging region of the graphics layer; that is, synchronous, matched display of the video layer and the graphics layer is still achieved when the size of the Video Buffer changes. In an alternative case, when the size of the Video Buffer changes, the size-related information of the video data calculated by the SurfaceFlinger does not necessarily change.
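A minimal sketch of this cross-thread notification, assuming a simple condition-variable handshake; the names and the exact mechanism are illustrative, not taken from the patent.

```cpp
// When the second (video) thread receives the first Video Buffer, or detects that the Video
// Buffer size has changed, it posts the "first notification information"; the first (graphics)
// thread then resets the video-layer size information and digs the hole again to match.
#include <condition_variable>
#include <mutex>
#include <optional>

struct BufferSize {
    int width = 0, height = 0;
    bool operator!=(const BufferSize& o) const { return width != o.width || height != o.height; }
};

std::mutex                g_mtx;
std::condition_variable   g_cv;
std::optional<BufferSize> g_pendingResize;      // the pending notification payload

void onVideoBuffer(const BufferSize& size) {    // runs in the second (video) thread
    static BufferSize last{};
    if (size != last) {                         // first buffer, or a size change
        last = size;
        { std::lock_guard<std::mutex> lk(g_mtx); g_pendingResize = size; }
        g_cv.notify_one();                      // notify the first (graphics) thread
    }
}

void waitForResize() {                          // runs in the first (graphics) thread
    std::unique_lock<std::mutex> lk(g_mtx);
    g_cv.wait(lk, [] { return g_pendingResize.has_value(); });
    BufferSize s = *g_pendingResize;
    g_pendingResize.reset();
    (void)s;
    // ... reset the size-related information from s and re-dig the hole in the graphics layer ...
}
```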
In one possible implementation manner, the synthesizing the multiple graphic layers and the hole digging treatment are performed on at least one graphic layer in the first thread to obtain a synthesized graphic layer, which specifically includes: in the first thread, a hardware synthesizer HWC synthesizes the plurality of graphics layers and holes the at least one graphics layer to obtain the synthesized graphics layer.
It should be appreciated that when the hardware resources of the HWC are invoked to perform graphics-layer synthesis and hole digging, the SurfaceFlinger of the framework layer specifically accesses the HWC abstraction of the hardware abstraction layer to drive the HWC, thereby invoking the HWC hardware resources.
In one possible implementation, in the first thread, the indication information of the Graphic Buffers is sent to the hardware synthesizer HWC, where one Graphic Buffer stores one layer of graphics layer data; in the first thread, the HWC obtains the multiple layers of graphics layer data from the Graphic Buffers.
In one possible implementation manner, the synthesizing of the multiple graphic layers and the hole digging of at least one graphic layer in the first thread are performed to obtain a synthesized graphic layer, which specifically includes: in the first thread, the graphics processor GPU synthesizes the plurality of graphics layers and performs hole digging processing on the at least one graphics layer to obtain the synthesized graphics layer.
When the HWC does not support graphics layer synthesis and graphics layer hole digging, hardware resources of the GPU can be called to perform graphics layer synthesis and graphics layer hole digging.
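A minimal sketch of this fallback decision; the capability query below is an assumed placeholder, not a real HWC interface.

```cpp
// Use the HWC for graphics-layer synthesis and hole digging when it supports them,
// otherwise fall back to the GPU.
#include <iostream>

bool hwcSupportsSynthesisAndHoleDigging() { return false; }   // stubbed capability query

void synthesizeWithHWC() { std::cout << "HWC: synthesize layers + dig hole\n"; }
void synthesizeWithGPU() { std::cout << "GPU: synthesize layers + dig hole\n"; }

void synthesizeGraphics() {
    if (hwcSupportsSynthesisAndHoleDigging()) synthesizeWithHWC();
    else                                      synthesizeWithGPU();
}
```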
In one possible implementation, after setting the information about the size of the video layer in the first thread, the method further includes: in the first thread, sending information about the size of the video layer to a Media HW; in the second thread, the Media HW processes the video layer according to the information related to the size of the video layer, and the processed video layer is obtained.
In one possible implementation, the Media HW receives information about the size of the video layer in the first thread, then the Media HW receives first frame video layer data in the second thread, and then the Media HW processes the first frame video layer data according to the received information about the size of the video layer; in another case, the Media HW receives the first frame of video layer data in the second thread, then the Media HW receives the information about the size of the video layer in the first thread, and the Media HW does not immediately process the first frame of video layer data after receiving the first frame of video layer data, but processes the first frame of video layer data after waiting for the information about the size of the video layer to be received, so as to avoid processing errors of the first frame of video layer data.
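The sketch below captures this ordering rule under assumed names: the first frame is buffered until the size information has also arrived, whichever message comes first.

```cpp
// The media hardware path does not process the first frame of video layer data until the
// size-related information has also arrived, regardless of which message is received first.
#include <optional>

struct VideoRect  { int x = 0, y = 0, width = 0, height = 0; };
struct VideoFrame { /* decoded pixels */ };

class MediaPath {
public:
    void onSizeInfo(const VideoRect& r)    { size_ = r;       tryProcess(); } // from the first thread
    void onFirstFrame(const VideoFrame& f) { firstFrame_ = f; tryProcess(); } // from the second thread

private:
    void tryProcess() {
        if (size_ && firstFrame_) {
            // ... scale / rotate *firstFrame_ to *size_ and hand the result to the display driver ...
        }
        // otherwise wait: processing the frame before the size arrives would give a wrong result
    }

    std::optional<VideoRect>  size_;
    std::optional<VideoFrame> firstFrame_;
};
```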
In one possible implementation manner, the method for superposing the synthesized graphic layer and the processed video layer to obtain display data specifically includes: and the display driver superimposes the synthesized graphic layer and the processed video layer to obtain the display data.
In one possible embodiment, the method further includes: the SurfaceFlinger creates the first thread and the second thread in an initialization stage; when the SurfaceFlinger receives a Graphic Buffer, it notifies the first thread to process the graphics layer data in the Graphic Buffer; when the SurfaceFlinger receives a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer; one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
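A minimal sketch of this buffer dispatch, assuming a simple per-thread work queue; the queue type and routing function are illustrative, not the actual SurfaceFlinger code.

```cpp
// Two worker threads are created at initialization; each received Graphic Buffer is routed
// to the first thread and each Video Buffer to the second.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <variant>

struct GraphicBuffer { /* one layer of graphics layer data */ };
struct VideoBuffer   { /* one layer of video layer data    */ };
using Buffer = std::variant<GraphicBuffer, VideoBuffer>;

class WorkQueue {
public:
    void push(Buffer b) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(b)); }
        cv_.notify_one();
    }
    Buffer pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Buffer b = std::move(q_.front());
        q_.pop();
        return b;
    }
private:
    std::queue<Buffer> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

WorkQueue graphicsQueue;   // drained by the first thread (graphics synthesis + hole digging)
WorkQueue videoQueue;      // drained by the second thread (video-layer processing)

void onBufferReceived(Buffer b) {   // called when a buffer arrives
    if (std::holds_alternative<GraphicBuffer>(b)) graphicsQueue.push(std::move(b));
    else                                          videoQueue.push(std::move(b));
}
```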
A second aspect of the present application provides a method of image data processing, the method comprising: in a first thread, the SurfaceFlinger calls graphics hardware to synthesize multiple graphics layers and dig a hole in at least one graphics layer to obtain a synthesized graphics layer, where the synthesized graphics layer includes a hole-digging region; in a second thread, the SurfaceFlinger calls media hardware to process the video layer to obtain a processed video layer, where the processed video layer can be displayed through the hole-digging region; and the display driver superimposes the synthesized graphics layer and the processed video layer to obtain display data.
In one possible implementation, before the SurfaceFlinger calls the graphics hardware resource to synthesize the multiple graphics layers and dig the hole in at least one graphics layer, the method further includes: in the first thread, the SurfaceFlinger sends the calculated information related to the size of the video layer to the Media HW. The SurfaceFlinger calling the graphics hardware resource to perform hole-digging processing on at least one graphics layer specifically includes: in the first thread, the SurfaceFlinger calls the graphics hardware to perform hole-digging processing on the at least one graphics layer according to the information related to the size of the video layer, to obtain the synthesized graphics layer. The SurfaceFlinger calling the media hardware in the second thread to process the video layer and obtain the processed video layer specifically includes: in the second thread, the SurfaceFlinger calls the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
In a possible implementation, when the valid signal of the first vertical synchronization signal arrives, the SurfaceFlinger calls the hardware resources to synthesize the multiple graphics layers and dig the hole in the at least one graphics layer in the first thread; when the valid signal of the second vertical synchronization signal arrives, the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data; the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In one possible implementation, when the valid signal of the first vertical synchronization signal arrives, in the first thread, the calculated information related to the size of the video layer is sent to the Media HW, and then the SurfaceFlinger invokes the hardware resource to synthesize the multiple graphics layers and digs the at least one graphics layer according to the information related to the size of the video layer.
In one possible implementation, the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
In one possible embodiment, the method further includes: when the second thread acquires a first Video Buffer, sending first notification information to the first thread; in the first thread, the SurfaceFlinger acquires the size of the first Video Buffer and sets the information related to the size of the video layer according to the size of the first Video Buffer. Alternatively, when the second thread detects that the size of the Video Buffer has changed, it sends first notification information to the first thread; in the first thread, the SurfaceFlinger acquires the changed size of the Video Buffer and resets the information related to the size of the video layer according to the changed size of the Video Buffer. The Video Buffer stores the data of one video layer, and the size of the Video Buffer is related to the size of the video layer.
In one possible implementation manner, in the first thread, the SurfaceFlinger calls graphics hardware to synthesize a plurality of graphics layers and digs holes into at least one graphics layer to obtain a synthesized graphics layer, which specifically includes: in the first thread, the SurfaceFlinger calls a hardware synthesizer HWC to synthesize the plurality of graphics layers and dig holes of the at least one graphics layer to obtain the synthesized graphics layer; the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the method further includes: the HWC sends the synthesized graphics layer to the display driver; the Media HW sends the processed video layer to the display driver.
It should be appreciated that when the HWC obtains the synthesized graphics layer, the HWC abstraction sends the indication information of the frame buffer storing the synthesized graphics layer to the display driver; after the Media HW obtains the processed video layer, the Media HW sends the indication information of the Video Buffer storing the processed video layer to the display driver.
In one possible implementation, in the first thread, the SurfaceFlinger calling the graphics hardware to synthesize the multiple graphics layers and dig the hole in at least one graphics layer to obtain the synthesized graphics layer specifically includes: in the first thread, the SurfaceFlinger calls the graphics processor GPU to synthesize the multiple graphics layers and dig the hole in the at least one graphics layer, obtaining the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the method further includes: the GPU returns the synthesized graphics layer to the SurfaceFlinger; the SurfaceFlinger sends the synthesized graphics layer to the HWC; the HWC sends the synthesized graphics layer to the display driver; and the Media HW sends the processed video layer to the display driver.
It should be appreciated that if graphics-layer synthesis and graphics-layer hole digging are performed by the GPU, the GPU first needs to return the synthesized graphics layer to the SurfaceFlinger; the SurfaceFlinger then sends the indication information of the frame buffer storing the synthesized graphics layer to the HWC abstraction, and the HWC abstraction sends that indication information to the display driver.
In one possible embodiment, the method further includes: the SurfaceFlinger creates the first thread and the second thread in an initialization stage; when the buffer received by the SurfaceFlinger is a Graphic Buffer, it notifies the first thread to process the graphics layer data in the Graphic Buffer; when the buffer received by the SurfaceFlinger is a Video Buffer, it notifies the second thread to process the video layer data in the Video Buffer. One Graphic Buffer stores one layer of graphics layer data, and one Video Buffer stores one layer of video layer data.
In one possible implementation, the Media HW receives first frame video layer data in the second thread; the Media HW receiving information about the size of the video layer in the first thread; the Media HW processes the first frame of video layer data in the second thread according to information about the size of the video layer.
It should be appreciated that if the Media HW receives the first frame of video layer data in the second thread before receiving the size-related information of the video layer, the Media HW processes the first frame of video layer data in the second thread according to the size-related information of the video layer after receiving the size-related information of the video layer in the first thread.
A third aspect of the present application provides an apparatus for image data processing, the apparatus comprising: a processor on which software instructions run to form a framework layer, a hardware abstraction layer (HAL) and a driver layer, the HAL comprising a graphics hardware abstraction and a media hardware (Media HW) abstraction, and the driver layer comprising a graphics hardware driver, a media hardware driver and a display driver; the framework layer is configured to, in a first thread, call graphics hardware through the graphics hardware abstraction and the graphics hardware driver to synthesize multiple graphics layers and dig a hole in at least one of the graphics layers, obtaining a synthesized graphics layer that includes a hole-digging region; the framework layer is further configured to, in a second thread, call the media hardware (Media HW) through the media hardware abstraction and the media hardware driver to process the video layer, obtaining a processed video layer, where the processed video layer can be displayed through the hole-digging region; and the display driver is configured to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
Optionally, the device further comprises a transmission interface, and the processor receives data sent by other devices or sends data to other devices through the transmission interface. The device may be coupled to hardware resources such as a display, media hardware, or graphics hardware via connectors, transmission lines, buses, or the like. The device may be a processor chip with image or video processing capabilities. In an alternative case, the device, media hardware and graphics hardware may be integrated on one chip. In another alternative, the apparatus, media hardware, graphics hardware and display may be integrated on one terminal. The graphics hardware abstraction corresponds to the graphics hardware driver, the media hardware abstraction corresponds to the media hardware driver, the graphics hardware can be called through the graphics hardware abstraction and the graphics hardware driver, and the media hardware can be called through the media hardware abstraction and the media hardware driver. For example, invoking a graphics hardware driver by accessing a graphics hardware abstraction to effect invocation of graphics hardware, invoking a media hardware driver by accessing a media hardware abstraction to effect invocation of media hardware.
It should be understood that, after the graphics hardware obtains the synthesized graphics layer, the graphics hardware abstraction sends the indication information of the frame buffer to the display driver, and the display driver can read the synthesized graphics layer data from the corresponding memory space according to the indication information of the frame buffer; after the media hardware obtains the processed video layer, the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver, and the display driver can read the processed video layer data from the corresponding memory space according to the indication information of the Video Buffer.
In one possible embodiment, the framework layer is further configured to: set information related to the size of the video layer in the first thread, and send the information related to the size of the video layer to the media hardware abstraction; the framework layer is specifically configured to, in the first thread, call the graphics hardware to synthesize the multiple graphics layers and dig the hole in at least one graphics layer according to the information related to the size of the video layer, to obtain the synthesized graphics layer; and, in the second thread, call the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
In one possible implementation, the framework layer is specifically configured to invoke the graphics hardware to synthesize the multiple graphics layers and hole at least one graphics layer in the first thread based on a first vertical synchronization signal; the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer based on the second vertical synchronization signal to obtain display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In one possible implementation, the framework layer is specifically configured to send, in the first thread, information related to a size of the video layer to the media hardware abstraction when a valid signal of the first vertical synchronization signal arrives, and then invoke the graphics hardware to synthesize the multiple graphics layers and hole at least one graphics layer according to the information related to the size of the video layer.
In one possible implementation, the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
In one possible implementation, when the second thread acquires the first Video Buffer, it sends first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, acquire the size of the first Video Buffer in the first thread and set the information related to the size of the video layer according to the size of the first Video Buffer.
Or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; the framework layer is further used for acquiring the size of the changed Video Buffer in the first thread after the first thread receives the first notification information, and setting information related to the size of the Video layer according to the size of the changed Video Buffer; wherein, the Video Buffer stores the data of a Video layer, and the size of the Video Buffer is related to the size of the Video layer.
In one possible implementation, the graphics hardware includes a HWC and a GPU, and the corresponding hardware abstraction layer includes HWC abstraction, and the driver layer includes HWC driver and GPU driver.
The framework layer is specifically configured to: in the first thread, call the HWC through the HWC abstraction and the HWC driver to synthesize the multiple graphics layers and perform hole-digging processing on at least one graphics layer, obtaining the synthesized graphics layer. The HWC abstraction is configured to send the synthesized graphics layer to the display driver; the media hardware abstraction is also configured to send the processed video layer to the display driver.
When the HWC does not support graphics-layer synthesis and graphics-layer hole digging, the framework layer is specifically configured to: in the first thread, call the GPU through the GPU driver to synthesize the multiple graphics layers and perform hole-digging processing on at least one graphics layer, obtaining the synthesized graphics layer. The GPU is further configured to return the synthesized graphics layer to the framework layer; the framework layer is further configured to send the synthesized graphics layer to the HWC abstraction; the HWC abstraction is configured to send the synthesized graphics layer to the display driver; and the Media HW abstraction is also configured to send the processed video layer to the display driver.
It should be appreciated that the HWC abstraction sends the display driver the indication of the frame Buffer with the synthesized graphics layer stored therein, and the media hardware abstraction sends the display driver the indication of the Video Buffer with the processed Video layer stored therein.
In one possible implementation, the SurfaceFlinger of the framework layer is configured to create the first thread and the second thread in an initialization stage; the SurfaceFlinger is further configured to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; one Graphic Buffer stores one layer of graphics layer data, and one Video Buffer stores one layer of video layer data.
In one possible implementation, the Media HW abstract is used for: receiving the first frame of video layer data in a second thread; receiving information related to the size of a video layer in a first thread; the frame layer is specifically used for: in the second thread, the first frame of video layer data is processed according to the information related to the size of the video layer through the Media HW abstract call Media HW.
It should be understood that the beneficial effects of the device side may refer to the method side, and will not be described in detail herein.
A fourth aspect of the present application provides an apparatus for image data processing, the apparatus comprising: the framework layer is used for calling the graphic hardware to synthesize the multi-layer graphic layers and digging holes of at least one graphic layer in the multi-layer graphic layers in the first thread to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area; the graphics hardware; media hardware Media HW; the framework layer is further used for calling the Media HW to process the video layer in the second thread to obtain a processed video layer; the processed video layer can be displayed through the hole digging area; and the display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
It should be appreciated that the framework layer and display driver are part of the layer of the operating system formed by the execution of software instructions on the processor. Graphics hardware and media hardware may be coupled to the processor through connectors, interfaces, transmission lines or buses, etc., which are typically electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces.
In one possible implementation, the software instructions run on the processor to further form a hardware abstraction layer, a graphics hardware driver, and a media hardware driver, where the hardware abstraction layer includes the graphics hardware abstraction corresponding to the graphics hardware and the media hardware abstraction corresponding to the media hardware. The framework layer is specifically configured to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
In one possible embodiment, the framework layer is further configured to: set information related to the size of the video layer in the first thread, and send the information related to the size of the video layer to the Media HW; the framework layer is specifically configured to, in the first thread, call the graphics hardware to synthesize the multiple graphics layers and dig the hole in the at least one graphics layer according to the information related to the size of the video layer, to obtain the synthesized graphics layer; and, in the second thread, call the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
In one possible implementation, the framework layer is specifically configured to invoke the graphics hardware in the first thread to synthesize the multiple graphics layers and hole the at least one graphics layer based on a first vertical synchronization signal; the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer when the second vertical synchronization signal arrives, so as to obtain the display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, the framework layer is specifically configured to send, in the first thread, information related to the size of the video layer to the Media HW when the valid signal of the first vertical synchronization signal arrives, and then invoke the graphics hardware to synthesize the multiple graphics layers and hole the at least one graphics layer according to the information related to the size of the video layer.
In one possible implementation, the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
In one possible implementation, when the second thread acquires a first Video Buffer, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the first Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the first Video Buffer; or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the changed Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the changed Video Buffer; wherein the Video Buffer stores data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
In a possible implementation, the graphics hardware comprises a hardware synthesizer HWC, the framework layer being specifically configured to: in the first thread, calling the HWC to synthesize the multi-layer graphic layer and digging at least one layer of graphic layer to obtain the synthesized graphic layer; the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the HWC is further configured to: transmitting the synthesized graphic layer to a display driver; the Media HW is also used to send the processed video layer to a display driver.
It should be appreciated that sending the synthesized graphics layer to the display driver is specifically implemented by the HWC abstraction at the hardware abstraction layer, and sending the processed video layer to the display driver is implemented by the Media HW abstraction.
In one possible implementation, the graphics hardware includes a graphics processor GPU, and the framework layer is specifically configured to: in the first thread, call the GPU to synthesize the multiple graphics layers and dig the hole in at least one graphics layer, obtaining the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the GPU is further configured to return the synthesized graphics layer to the framework layer; the framework layer is further configured to send the synthesized graphics layer to the HWC; the HWC is further configured to send the synthesized graphics layer to the display driver; and the Media HW is also configured to send the processed video layer to the display driver.
In one possible implementation, the framework layer includes a SurfaceFlinger configured to create the first thread and the second thread in an initialization phase; the SurfaceFlinger is further configured to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
In one possible embodiment, the Media HW is specifically configured to: receive the first frame of video layer data in the second thread, and receive the information related to the size of the video layer in the first thread; the framework layer is specifically configured to, in the second thread, call the Media HW to process the first frame of video layer data according to the information related to the size of the video layer.
A fifth aspect of the present application provides an apparatus for image data processing, the apparatus comprising: a processor, graphics hardware, and media hardware, where software instructions run on the processor to form a framework layer and a display driver; the framework layer is configured to, in a first thread, call the graphics hardware to synthesize multiple graphics layers and dig a hole in at least one of the graphics layers, obtaining a synthesized graphics layer that includes a hole-digging region; the framework layer is further configured to, in a second thread, call the media hardware to process the video layer, obtaining a processed video layer; and the display driver is configured to superimpose the synthesized graphics layer and the processed video layer to obtain display data.
It should be appreciated that the graphics hardware and media hardware may be coupled to the processor through connectors, interfaces, transmission lines or buses, etc., which are typically electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces.
In one possible implementation, the software instructions run on the processor to further form a hardware abstraction layer, a graphics hardware driver, and a media hardware driver, where the hardware abstraction layer includes the graphics hardware abstraction corresponding to the graphics hardware and the media hardware abstraction corresponding to the media hardware. The framework layer is specifically configured to call the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and to call the media hardware through the media hardware abstraction and the media hardware driver.
In one possible embodiment, the framework layer is further configured to: set information related to the size of the video layer in the first thread, and send the information related to the size of the video layer to the Media HW; the framework layer is specifically configured to, in the first thread, call the graphics hardware to synthesize the multiple graphics layers and dig the hole in the at least one graphics layer according to the information related to the size of the video layer, to obtain the synthesized graphics layer; and, in the second thread, call the Media HW to process the video layer according to the information related to the size of the video layer, to obtain the processed video layer.
In one possible implementation, the framework layer is specifically configured to invoke the graphics hardware in the first thread to synthesize the multiple graphics layers and hole the at least one graphics layer based on a first vertical synchronization signal; the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer when the second vertical synchronization signal arrives, so as to obtain the display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In a possible implementation manner, the framework layer is specifically configured to send, in the first thread, information related to the size of the video layer to the Media HW when the valid signal of the first vertical synchronization signal arrives, and then invoke the graphics hardware to synthesize the multiple graphics layers and hole the at least one graphics layer according to the information related to the size of the video layer.
In one possible implementation, the frame rate of the first vertical synchronization signal is less than the frame rate of the second vertical synchronization signal.
In one possible implementation, when the second thread acquires a first Video Buffer, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the first Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the first Video Buffer; or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the changed Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the changed Video Buffer; wherein the Video Buffer stores data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
In a possible implementation, the graphics hardware comprises a hardware synthesizer HWC, the framework layer being specifically configured to: in the first thread, calling the HWC to synthesize the multi-layer graphic layer and digging at least one layer of graphic layer to obtain the synthesized graphic layer; the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the HWC is further configured to: transmitting the synthesized graphic layer to a display driver; the Media HW is also used to send the processed video layer to a display driver.
It should be appreciated that sending the synthesized graphics layer to the display driver is specifically implemented by the HWC abstraction at the hardware abstraction layer, and sending the processed video layer to the display driver is implemented by the Media HW abstraction.
In one possible implementation, the graphics hardware includes a graphics processor GPU, and the framework layer is specifically configured to: in the first thread, call the GPU to synthesize the multiple graphics layers and dig the hole in at least one graphics layer, obtaining the synthesized graphics layer. Before the display driver superimposes the synthesized graphics layer and the processed video layer to obtain the display data, the GPU is further configured to return the synthesized graphics layer to the framework layer; the framework layer is further configured to send the synthesized graphics layer to the HWC; the HWC is further configured to send the synthesized graphics layer to the display driver; and the Media HW is also configured to send the processed video layer to the display driver.
In one possible implementation, the framework layer includes a SurfaceFlinger configured to create the first thread and the second thread in an initialization phase; the SurfaceFlinger is further configured to notify the first thread to process the graphics layer data in the Graphic Buffer when a Graphic Buffer is received, and to notify the second thread to process the video layer data in the Video Buffer when a Video Buffer is received; one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
In one possible embodiment, the Media HW is specifically configured to: receive the first frame of video layer data in the second thread, and receive the information related to the size of the video layer in the first thread; the framework layer is specifically configured to, in the second thread, call the Media HW to process the first frame of video layer data according to the information related to the size of the video layer.
As previously described, the framework layer invokes the Media HW through the Media HW abstraction and the Media HW driver to implement the processing of the video layer.
A sixth aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer or processor, cause the computer or processor to perform the method as described in the first aspect or any possible implementation thereof.
A seventh aspect of the present application provides a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method as described in the first aspect or any of its possible embodiments.
Drawings
Fig. 1 is a schematic architecture diagram of an exemplary terminal according to an embodiment of the present application;
fig. 2 is a hardware architecture diagram of an exemplary image processing apparatus according to an embodiment of the present application;
FIG. 3a is a schematic diagram of an exemplary operating system architecture suitable for use with embodiments of the present application;
FIG. 3b is a schematic diagram of an exemplary process of graphics layer composition provided in an embodiment of the present application;
FIG. 4 is a diagram of a conventional image processing architecture;
FIG. 5 is a schematic diagram of an exemplary image processing architecture according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another exemplary image processing architecture provided in an embodiment of the present application;
FIG. 7 is a flowchart of a method for image processing according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of another method for image processing according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of another method for image processing according to an embodiment of the present application;
Fig. 10 is a schematic architecture diagram of an exemplary image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an architecture of another exemplary image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an architecture of another exemplary image processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of an architecture of another exemplary image processing apparatus according to an embodiment of the present application;
fig. 14 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of the present application.
Detailed Description
The terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article or apparatus that comprises a series of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not explicitly listed or inherent to such a process, method, article or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" the following items or similar expressions means any combination of those items, including any combination of a single item or plural items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be singular or plural.
First, in order to facilitate understanding of the embodiments of the present application, terms related to the embodiments of the present application are described below.
Graphic Buffer: used to store graphics layer data. The graphics layer data may include, for example, bullet-screen (danmaku) data, subtitle data, a navigation bar, a status bar, an icon layer, a floating window, an application display interface, or identification information, and may be data rendered after an application is started. The graphics data in a Graphic Buffer may come from multiple applications.
Frame buffer: used to store synthesized graphics layer data; the synthesized graphics layer data is synthesized from multiple layers of graphics layer data.
Video Buffer: used to store video data, for example video data from Tencent Video, iQIYI, or Youku; it may also be used, for example, to store data decoded by the multimedia framework.
Surface: represents the graphics data of one window of an application process; one Surface corresponds to one layer of graphics layer data.
Vsync signal: used to synchronize the time at which an application starts rendering, the time at which the SurfaceFlinger is woken up to synthesize the graphics layers, and the refresh period of the display device. Vsync signals are periodic; the number of valid Vsync signals per unit time is called the Vsync frame rate, the time interval between two adjacent valid Vsync signals is called the Vsync period, and the Vsync frame rate is the reciprocal of the Vsync period. For example, a common Vsync period is about 16.7 ms, and the corresponding Vsync frame rate is 1 s / 16.7 ms ≈ 60. It should be appreciated that the Vsync signal may be active high or active low, and may be level triggered, rising-edge triggered, or falling-edge triggered. The arrival of a valid Vsync signal can be understood as: the rising edge of the Vsync signal arrives, the falling edge of the Vsync signal arrives, or the Vsync signal is a high-level or low-level signal.
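As a quick numeric check of the period and frame-rate relationship above:

```cpp
// Frame rate is the reciprocal of the Vsync period: 1 s / 16.7 ms is approximately 60 Hz.
double vsyncFrameRateHz(double periodMs) { return 1000.0 / periodMs; }
// vsyncFrameRateHz(16.7) ≈ 59.9;  vsyncFrameRateHz(33.3) ≈ 30.0
```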
As shown in fig. 1, an architecture diagram of an exemplary terminal 100 according to an embodiment of the present application is provided. The terminal 100 may include an antenna system 110, radio frequency (RF) circuitry 120, a processor 130, a memory 140, a camera 150, audio circuitry 160, a display 170, one or more sensors 180, a wireless transceiver 190, and the like.
Antenna system 110 may be one or more antennas or an antenna array comprised of multiple antennas. The radio frequency circuitry 120 may include one or more analog radio frequency transceivers, the radio frequency circuitry 120 may also include one or more digital radio frequency transceivers, and the RF circuitry 120 is coupled to the antenna system 110. It should be understood that in various embodiments of the present application, coupled is intended to mean interconnected by a particular means, including directly or indirectly through other devices, e.g., through various interfaces, transmission lines, buses, etc. The radio frequency circuit 120 may be used for various types of cellular wireless communications.
Processor 130 may include a communication processor that may be used to control RF circuitry 120 to effect the reception and transmission of signals, which may be voice signals, media signals, or control signals, through antenna system 110. The processor 130 may include various general-purpose processing devices, such as a general-purpose central processing unit (Central Processing Unit, CPU), a System On Chip (SOC), a processor integrated on the SOC, a separate processor Chip or controller, etc.; the processor 130 may also include special purpose processing devices such as an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or digital signal processor (Digital Signal Processor, DSP), a special purpose video or graphics processor, a graphics processing unit (Graphics Processing Unit, GPU), a Neural network processing unit (Neural-network Processing Unit, NPU), and the like. The processor 130 may be a processor group of multiple processors coupled to each other by one or more buses. The processor may include Analog-to-Digital Converter (ADC) and Digital-to-Analog Converter (DAC) to enable the connection of signals between the different components of the device. The processor 130 is used to implement processing of media signals such as images, audio and video.
The memory 140 is coupled to the processor 130, and in particular, the memory 140 may be coupled to the processor 130 by one or more memory controllers. Memory 140 may be used to store computer program instructions, including a computer Operating System (OS) and various user applications, and memory 140 may also be used to store user data, such as graphical image data, video data, audio data, calendar information, contact information, or other media files rendered by the application, and the like. Processor 130 may read computer program instructions or user data from memory 140 or store computer program instructions or user data to memory 140 to implement the relevant processing functions. The memory 140 may be a non-volatile memory, such as an embedded multimedia card (Embedded Multi Media Card, EMMC), a universal flash storage (Universal Flash Storage, UFS), a read-only memory (Read-Only Memory, ROM), or another type of static storage device that can store static information and instructions; or a volatile memory (volatile memory), such as a random access memory (Random Access Memory, RAM) or another type of dynamic storage device that can store information and instructions; or an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media, or other magnetic storage devices, but is not limited thereto. Alternatively, the memory 140 may be separate from the processor 130, or the memory 140 may be integrated with the processor 130.
The camera 150 is used for capturing images or videos. A user can trigger the camera 150 to start through an application program instruction, so as to implement a photographing or video-recording function, for example, shooting and obtaining a picture or a video of any scene. The camera may include a lens, a filter, an image sensor, and the like. The camera may be located on the front of the terminal device or on the back of the terminal device; the specific number and arrangement of cameras can be flexibly determined according to the requirements of the designer or the manufacturer's policy, which is not limited in this application.
Audio circuitry 160 is coupled to processor 130. The audio circuit 160 may include a microphone 161 and a speaker 162; the microphone 161 may receive sound input from the outside, and the speaker 162 may enable playback of audio data. It should be understood that the terminal 100 may have one or more microphones and one or more speakers, and the number of microphones and speakers is not limited in this embodiment of the present application.
The display 170 is used to provide the user with various display interfaces or various menu options. Illustratively, the content displayed by the display 170 includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys and icons, and the like, which are associated with specific internal modules or functions. The display 170 may also accept user input, and optionally the display 170 may also display information entered by the user, for example control information for accepting enabling or disabling, and the like. Specifically, the display 170 may include a display panel 171 and a touch panel 172. The display panel 171 may be configured using a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, a light-emitting diode (Light Emitting Diode, LED) display device, a cathode ray tube (Cathode Ray Tube, CRT), or the like. The touch panel 172, also referred to as a touch screen, a touch-sensitive screen, or the like, may collect contact or non-contact operations of the user on or near it (for example, operations of the user on or near the touch panel 172 using a finger, a stylus, or any other suitable object or accessory, which may also include somatosensory operations; the operations include single-point control operations, multi-point control operations, and the like), and drive the corresponding connection devices according to a predetermined program. Optionally, the touch panel 172 may include two parts: a touch detection device and a touch controller. The touch detection device detects a signal brought by a touch operation of the user and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into information that can be processed by the processor 130, sends the information to the processor 130, and can receive and execute commands sent by the processor 130. Further, the touch panel 172 may cover the display panel 171, and the user may operate on or near the touch panel 172 according to the content displayed on the display panel 171. After the touch panel 172 detects the operation, the operation is transmitted to the processor 130 through the I/O subsystem 10 to determine the user input, and the processor 130 then provides a corresponding visual output on the display panel 171 through the I/O subsystem 10 according to the user input. Although in fig. 1 the touch panel 172 and the display panel 171 are implemented as two separate components to implement the input and output functions of the terminal 100, in some embodiments the touch panel 172 and the display panel 171 may be integrated.
The sensor 180 may include an image sensor, a motion sensor, a proximity sensor, an ambient noise sensor, a sound sensor, an accelerometer, a temperature sensor, a gyroscope, or other type of sensor, as well as various forms and combinations thereof. The processor 130 receives various information such as audio information, image information, or motion information by driving the sensor 180 through the sensor controller 12 in the I/O subsystem 10, and the sensor 180 passes the received information to the processor 130 for processing.
The wireless transceiver 190 may provide wireless connectivity to other devices, such as wireless headsets, Bluetooth headsets, wireless mice or wireless keyboards, and to wireless networks, such as wireless fidelity (Wireless Fidelity, WiFi) networks, wireless personal area networks (Wireless Personal Area Network, WPAN), or other wireless local area networks (Wireless Local Area Network, WLAN), and the like. The wireless transceiver 190 may be a Bluetooth-compatible transceiver for wirelessly coupling the processor 130 to a peripheral device such as a Bluetooth headset or a wireless mouse, or the wireless transceiver 190 may be a WiFi-compatible transceiver for wirelessly coupling the processor 130 to a wireless network or other devices.
The terminal 100 may also include other input devices 14 coupled to the processor 130 to receive various user inputs, such as receiving an entered number, name, address, media selections, etc., other input devices 14 may include a keyboard, physical buttons (push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and optical mice (optical mice are touch sensitive surfaces that do not display visual output, or are extensions of a touch sensitive surface formed by a touch screen), etc.
The terminal 100 may further include the above-described I/O subsystem 10. The I/O subsystem 10 may include an other-input-device controller 11 for receiving signals from the other input devices 14 or sending control or drive information of the processor 130 to the other input devices 14, and the I/O subsystem 10 may further include the above-described sensor controller 12 and display controller 13 for enabling the exchange of data and control information between the sensor 180 and the display 170, respectively, and the processor 130.
Terminal 100 may also include a power supply 101, which may be a rechargeable or non-rechargeable lithium-ion battery or nickel-metal hydride battery, for powering the other components of the terminal 100, including components 110 to 190. Further, when the power supply 101 is a rechargeable battery, it may be coupled to the processor 130 through a power management system, so that charging, discharging, and power consumption adjustment are managed through the power management system.
It should be understood that the terminal 100 in fig. 1 is only an example, and the specific form of the terminal 100 is not limited, and the terminal 100 may further include other components that are not shown in fig. 1 and may be added in the future.
In an alternative, RF circuit 120, processor 130, and memory 140 may be partially or fully integrated on a single chip or may be separate chips. The RF circuitry 120, processor 130, and memory 140 may include one or more integrated circuits disposed on a printed circuit board (Printed Circuit Board, PCB).
As shown in fig. 2, for an exemplary hardware architecture diagram of an image processing apparatus provided in an embodiment of the present application, the image processing apparatus 200 may be, for example, a processor chip, and an exemplary hardware architecture diagram shown in fig. 2 may be an exemplary architecture diagram of the processor 130 in fig. 1, and an image processing method and an image processing architecture provided in an embodiment of the present application may be applied to the processor chip.
Referring to fig. 2, the apparatus 200 includes: at least one CPU, a memory, a microcontroller (Microcontroller Unit, MCU), a GPU, an NPU, a memory bus, a receiving interface, a transmitting interface, and the like. Although not shown in fig. 2, the apparatus 200 may further include an application processor (Application Processor, AP), a decoder, and a dedicated video or image processor.
The various portions of the device 200 are coupled by connectors, which may include, for example, various types of interfaces, transmission lines or buses, etc., which are typically electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces, as the present embodiment is not limited in this regard.
Alternatively, the CPU may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor; alternatively, the CPU may be a processor group of multiple processors coupled to each other via one or more buses. The receiving interface may be an interface for data input of the processor chip, and in an alternative case, the receiving interface and the transmitting interface may be a high definition multimedia interface (High Definition Multimedia Interface, HDMI), a V-By-One interface, an embedded Display Port (Embedded Display Port, eDP), a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), or a Display Port (DP), etc. The memory may refer to the previous description of the portion of memory 140.
In an alternative case, the above parts are integrated on the same chip; in another alternative, the CPU, the GPU, the decoder, the receiving interface and the transmitting interface are integrated on a chip, and parts inside the chip access an external memory via a bus. The dedicated video/graphics processor may be integrated with the CPU on the same chip or may exist as a separate processor chip; for example, the dedicated video/graphics processor may be a dedicated ISP. In an alternative scenario, the NPU may also exist as a stand-alone processor chip. The NPU is used to implement various neural network or deep learning related operations.
The chips referred to in the embodiments of the present application are systems fabricated on the same semiconductor substrate in an integrated circuit process, also referred to as semiconductor chips, which may be a collection of integrated circuits formed on a substrate (typically a semiconductor material such as silicon) using an integrated circuit process, the outer layers of which are typically encapsulated by a semiconductor encapsulation material. The integrated circuit may include various types of functional devices, each of which may include logic gates, metal-Oxide-Semiconductor (MOS) transistors, bipolar transistors, or diodes, and other components such as capacitors, resistors, or inductors. Each functional device can work independently or under the action of necessary driving software, and can realize various functions such as communication, operation, storage and the like.
FIG. 3a is a schematic diagram of an exemplary operating system architecture suitable for use with embodiments of the present application. The operating system may run on the processor 130 shown in fig. 1, the code corresponding to the operating system may be stored in the memory 140 shown in fig. 1, or the operating system may run on the image processing apparatus 200 shown in fig. 2.
By way of example, the operating system may be, for example, an Android system, an iOS system, or a Linux system, etc. The operating system architecture includes an Application (APP) layer, a framework layer, a hardware abstraction layer (Hardware Abstraction Layer, HAL), and a driver layer.
Alternatively, the APP layer may include applications such as WeChat, iQIYI, Tencent Video, Taobao, or Camera.
The framework layer is a logic scheduling layer of the operating system architecture, and can perform resource scheduling and policy allocation for the video processing process. Illustratively, the framework layer includes:
the graphics framework (Graphics Framework) is responsible for the layout of the graphics window and the rendering of the graphics data, stores the rendered graphics data in a graphics Buffer (graphics Buffer), and sends the graphics layer data in the graphics Buffer to the SurfaceFlinger.
The multimedia framework (Multimedia Framework) is responsible for decoding the video stream and delivering the decoded data to the SurfaceFlinger.
The SurfaceFlinger is responsible for managing each layer (including the graphics layers and the video layer), receiving the Graphic Buffer and the Video Buffer of each layer, and superposing the graphics layers by using a graphics processing unit (Graphics Processing Unit, GPU) or a hardware synthesizer. The graphics layer data obtained by the superposition performed by the SurfaceFlinger is stored in a frame buffer, and the data in the frame buffer can be read and displayed by a display screen. Fig. 3b is a schematic diagram illustrating an exemplary process of synthesizing graphics layers. The example shown in fig. 3b includes three layers of graphics layer data, namely a status bar graphics layer, an icon graphics layer, and a navigation bar graphics layer, which may be rendered by the same application program; each layer of graphics layer data corresponds to one Graphic Buffer. The SurfaceFlinger synthesizes the three layers of graphics layer data into one piece of image frame data and stores it in a frame buffer; the image frame is the synthesis result of the status bar graphics layer, the icon graphics layer, and the navigation bar graphics layer, and the image frame data in the frame buffer is available for the display screen to read and display. In an alternative case, the content to be displayed includes video data in addition to the graphics data, and the image frame data and the video data are displayed on the display screen after being synthesized.
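The following C++ sketch is a minimal, hypothetical illustration of what synthesizing several graphics layers into one image frame means; the ARGB pixel format, the back-to-front order and the simple overwrite of non-transparent pixels are assumptions made for illustration and do not reflect the actual SurfaceFlinger, GPU or HWC implementation.

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// One graphics layer: a rectangle of ARGB pixels at a position on the screen.
struct Layer {
    int x, y, w, h;
    std::vector<uint32_t> pixels;   // w * h entries
};

constexpr int kScreenW = 8, kScreenH = 8;   // a tiny "display" for the example

// Composite the layers back-to-front into a single frame buffer.
void Compose(const std::vector<Layer>& layers,
             std::array<uint32_t, kScreenW * kScreenH>& frame_buffer) {
    frame_buffer.fill(0xFF000000);          // opaque black background
    for (const Layer& l : layers) {
        for (int row = 0; row < l.h; ++row) {
            for (int col = 0; col < l.w; ++col) {
                uint32_t p = l.pixels[row * l.w + col];
                if ((p >> 24) != 0) {       // skip fully transparent pixels
                    frame_buffer[(l.y + row) * kScreenW + (l.x + col)] = p;
                }
            }
        }
    }
}

int main() {
    // Three layers as in fig. 3b: status bar, icon, navigation bar.
    Layer status_bar{0, 0, kScreenW, 1, std::vector<uint32_t>(kScreenW, 0xFF2020FF)};
    Layer icon{3, 3, 2, 2, std::vector<uint32_t>(4, 0xFFFF2020)};
    Layer nav_bar{0, kScreenH - 1, kScreenW, 1, std::vector<uint32_t>(kScreenW, 0xFF20FF20)};

    std::array<uint32_t, kScreenW * kScreenH> frame_buffer{};
    Compose({status_bar, icon, nav_bar}, frame_buffer);
    std::cout << "composited one image frame of " << frame_buffer.size() << " pixels\n";
    return 0;
}
```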
The open graphics library (Open Graphics Library, OpenGL) provides an interface for graphics rendering and for the superposition of graphics layers, and may interface with the GPU driver.
The HAL is an interface layer of operating system software and audio/video hardware equipment. The HAL provides an interface for interaction between the upper layer software and the lower layer hardware. The HAL layer abstracts the underlying hardware into software containing corresponding hardware interfaces, and the setting of the underlying hardware devices can be achieved by accessing the HAL layer, for example, related hardware devices can be enabled or disabled at the HAL layer. The driving layer is used for directly controlling the bottom hardware equipment according to the control information input by the HAL. Illustratively, the HAL includes a Hardware synthesizer (Hardware Composer, HWC) abstraction and a Media Hardware (Media HW) abstraction, and the corresponding driver layer includes a Hardware synthesis driver and a Media Hardware driver. The HWC is used for hardware synthesis of a plurality of graphics layers, can provide support for hardware synthesis of SurfaceFlinger, and stores the synthesized graphics layers in a frame buffer and sends the synthesized graphics layers to a display driver. The Media HW is responsible for processing the video layer data and informing the processed video layer and the position information of the video layer to the display driver, and is a special hardware circuit which can be used for improving the video display effect. It should be appreciated that different vendors may refer to media hardware differently. It should be appreciated that the HWC abstraction of the HAL layer corresponds to the hardware composition driver of the driver layer, the media hardware abstraction corresponds to the media hardware driver, and the control of the underlying HWC hardware can be achieved by accessing the HWC abstraction of the HAL layer through the hardware composition driver. Control of the underlying Media HW is achieved by accessing the Media hardware abstraction and Media hardware drivers of the HAL layer.
The driving layer also comprises a GPU driving and a display driving, wherein the GPU driving is responsible for rendering and superposing graphics, the display driving is responsible for synthesizing a video layer and a graphics layer, and the synthesized result is sent to the display for display.
Fig. 4 is a schematic diagram of a conventional image processing architecture. In this architecture, the graphics framework sends indication information of Graphic Buffers to the SurfaceFlinger, and the multimedia framework sends indication information of Video Buffers to the SurfaceFlinger. One graphics layer corresponds to one Graphic Buffer; for example, the navigation bar graphics layer and the status bar graphics layer correspond to different Graphic Buffers. The SurfaceFlinger binds the Graphic Buffers and the Video Buffers to the corresponding layers, for example, binds Graphic Buffer1 to the navigation bar graphics layer, binds Graphic Buffer2 to the status bar graphics layer, binds the Video Buffer to the video layer, and so on. When the Vsync signal arrives, the SurfaceFlinger sends the indication information of the Graphic Buffers and the Video Buffers together to the hardware synthesizer HWC in the main thread. The HWC synthesizes the graphics layers in all the Graphic Buffers, and during the synthesis digs a hole in the graphics layers below the video layer so that the video layer can be displayed. The graphics layer data synthesized by the HWC is stored in the frame buffer. Next, in the main thread, the HWC sends the indication information of the Video Buffer to the Media HW, so that the Media HW reads the video data from the Video Buffer and processes it. It should be understood that, although not shown, the hardware synthesizer and the media hardware in fig. 4 each have a corresponding hardware abstraction layer and driver layer; when the hardware synthesizer is called, the HWC abstraction of the hardware abstraction layer needs to be accessed and the hardware composition driver called in order to invoke the hardware synthesizer. Similarly, a call to the Media HW needs to be made through the media hardware abstraction layer and the media hardware driver, that is, the media hardware driver is called through the media hardware abstraction of the hardware abstraction layer in order to invoke the media hardware. Illustratively, the SurfaceFlinger sends the indication information of the Graphic Buffers and the Video Buffers to the HWC abstraction of the hardware abstraction layer in the main thread, and invokes the HWC hardware through the hardware composition driver to implement the synthesis of the multiple graphics layers and the hole digging processing of the graphics layers. The HWC sends the synthesized graphics layer data to the display driver, and the Media HW sends the processed video image to the display driver. When a new Vsync signal arrives, the display driver synthesizes the synthesized graphics layer data sent by the HWC and the video data sent by the Media HW, and sends the synthesized result to the display device for display. It should be appreciated that the indication information of a buffer is used to point to a block of memory, and may be, for example, a file descriptor (fd).
In this image processing architecture, since the display frame rate of the video frames and the composition of the plurality of graphics layers are controlled by the same vertical synchronization signal, the video frame rate that can be supported is limited by the speed at which the graphics layers are composited, and therefore better hardware is required to support high-frame-rate refresh of the graphics frames. In addition, since the synthesis processing of the HWC and the processing of the video data by the Media HW are performed sequentially in the same thread, video playback is easily affected by the synthesis of the multiple graphics layers. Especially in some complex graphics scenes, graphics rendering and compositing are time-consuming, so that some graphics frames and video frames cannot be sent for display in time; these graphics frames and video frames are then discarded by the application program or the SurfaceFlinger, and even if the hardware performance is improved, frame loss can still occur during video playback.
An embodiment of the present application provides an image processing architecture, which is shown in fig. 5.
The graphics framework sends the indication information of the Graphic Buffer to the SurfaceFlinger, and the multimedia framework sends the indication information of the Video Buffer to the SurfaceFlinger.
When the SurfaceFlinger is initialized, two threads are started simultaneously: a first thread and a second thread. In the first thread, the indication information of the Graphic Buffers is sent to the HWC so that the HWC can synthesize the plurality of graphics layers and dig holes in the graphics layers; in the parallel second thread, the indication information of the Video Buffer is sent to the Media HW so that the Media HW processes the video data. In this way, the processing of the video data by the Media HW and the synthesis processing of the multiple graphics layers by the HWC are performed in parallel in two threads, and the processing of the video data by the Media HW is no longer affected by the progress of the graphics layer synthesis. It should be appreciated that the HWC may dig holes in the graphics layers below the video layer separately and then combine the multiple graphics layers into one graphics layer; the HWC may also first combine the multiple graphics layers into one graphics layer and then dig a hole in the combined graphics layer. The synthesized graphics layer data obtained by the HWC processing is stored in a frame buffer, and the HWC sends indication information of the frame buffer to the display driver. The Media HW stores the processed video layer data in the Video Buffer and sends the indication information of the Video Buffer to the display driver.
It should be understood that both the first thread and the second thread are loop threads. When the SurfaceFlinger receives the Graphic Buffer sent by the graphics framework, it notifies the first thread to process it, but the first thread needs to wait for the arrival of the graphics Vsync before starting to process the graphics layer data in the Graphic Buffer. When the SurfaceFlinger receives the Video Buffer sent by the multimedia framework, it notifies the second thread to process it, and the second thread starts processing after receiving the notification, without waiting for a vertical synchronization signal.
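A minimal C++ sketch of the two loop threads described in the preceding two paragraphs is given below. It models only the scheduling behaviour (the first thread waits for a graphics Vsync tick before compositing, the second thread processes video buffers as soon as it is notified); all names, types and the condition-variable mechanism are assumptions made for illustration, not the actual SurfaceFlinger code.

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> video_buffers;   // stand-in for Video Buffer handles
bool graphic_vsync = false;      // set when a graphics Vsync tick arrives
bool stop = false;

void FirstThread() {             // composites graphics layers on each graphics Vsync
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return graphic_vsync || stop; });
        if (stop) return;
        graphic_vsync = false;
        std::cout << "[first thread] composite graphics layers and dig hole\n";
    }
}

void SecondThread() {            // processes video buffers as soon as notified
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !video_buffers.empty() || stop; });
        if (video_buffers.empty()) return;   // only reached when stopping
        int buf = video_buffers.front();
        video_buffers.pop();
        lock.unlock();
        std::cout << "[second thread] process Video Buffer " << buf << "\n";
    }
}

int main() {
    std::thread t1(FirstThread), t2(SecondThread);
    for (int i = 0; i < 3; ++i) {            // simulate Vsync ticks and incoming buffers
        { std::lock_guard<std::mutex> g(m); graphic_vsync = true; video_buffers.push(i); }
        cv.notify_all();
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
    { std::lock_guard<std::mutex> g(m); stop = true; }
    cv.notify_all();
    t1.join();
    t2.join();
    return 0;
}
```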
Specifically, when the graphics Vsync arrives, in the first thread, the set information related to the size of the video layer is sent to the Media HW, so that the Media HW can process the video data according to the information related to the size of the video layer; also in this first thread, the HWC digs a hole in the graphics layer according to the set information related to the size of the video layer, so that the size of the hole-digging area is equal to the size of the video layer. Because the size of the video layer is set and the hole is dug in the same thread, the hole size is completely consistent with the set size of the video layer, which ensures synchronous, matched display of the video layer and the graphics layer. For example, the information related to the size of the video layer may include position information, size information, and the like of the video layer. Optionally, when the position information includes the four vertex positions of the video layer, the size of the video layer can be determined from the position information, and the information related to the size of the video layer may include only the position information; when the position information includes only one vertex position of the video layer (for example, the vertex position of the upper right corner), the information related to the size of the video layer further includes size information of the video. The information related to the size of the video layer is calculated by the SurfaceFlinger. Illustratively, the multimedia framework sends the initial size of the video layer, set through the application programming interface (Application Programming Interface, API) of the system, to the SurfaceFlinger; the SurfaceFlinger can capture or sense operations such as the user enlarging, shrinking or rotating the video, and calculates the information related to the size of the video layer by combining the initial size of the video layer sent by the multimedia framework with these operations.
However, since the size of the Video Buffer may affect the hole digging of the graphics layer, and the processing of the Video Buffer is performed in the second thread, when the second thread receives the first Video Buffer or detects that the size of the Video Buffer has changed, notification information is sent to the first thread of the SurfaceFlinger to notify it that the size of the video layer has changed. For example, the notification information may be carried on an identification bit. It should be understood that the size of the Video Buffer may include size information of the Video Buffer, rotation information of the Video Buffer, and so on. Each Video Buffer stores one frame of video data, that is, one layer of video layer data, and the size of the Video Buffer is related to the size of the video data (or the video layer); for example, the size of the Video Buffer may be equal to the size of the video data, and the rotation information of the Video Buffer represents the rotation information of the video data. After the SurfaceFlinger receives the notification message, it acquires the updated Video Buffer size and recalculates the information related to the size of the video data according to the updated Video Buffer size, so that the HWC in the first thread can dig a hole in the graphics layer again according to the changed size; in this way the video layer after the size change can be displayed through the graphics layer, and synchronous, matched display of the video layer and the graphics layer can still be achieved when the size of the Video Buffer changes. In an alternative case, when the size of the Video Buffer changes, the information related to the size of the video data calculated by the SurfaceFlinger does not necessarily change.
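The inter-thread notification described above can be pictured with the following hypothetical C++ sketch: the second thread raises a flag when the first Video Buffer arrives or the Video Buffer size changes, and the first thread checks the flag on the next graphics Vsync and recomputes the video layer size and the hole. The structure fields, function names and the atomic-flag mechanism are assumptions made for illustration.

```cpp
#include <atomic>
#include <iostream>
#include <mutex>

struct VideoBufferInfo {
    int width, height, rotation;   // size and rotation of the Video Buffer
};

std::mutex info_mutex;
VideoBufferInfo latest_info{0, 0, 0};
std::atomic<bool> size_changed{false};   // the "identification bit" carrying the notification

// Runs on the second thread: called for the first Video Buffer and whenever the size changes.
void OnVideoBuffer(const VideoBufferInfo& info) {
    std::lock_guard<std::mutex> lock(info_mutex);
    if (info.width != latest_info.width || info.height != latest_info.height ||
        info.rotation != latest_info.rotation) {
        latest_info = info;
        size_changed.store(true);        // notify the first thread
    }
}

// Runs on the first thread at each graphics Vsync, before compositing.
void OnGraphicVsync() {
    if (size_changed.exchange(false)) {
        std::lock_guard<std::mutex> lock(info_mutex);
        std::cout << "recompute video layer size and hole: "
                  << latest_info.width << "x" << latest_info.height
                  << ", rotation " << latest_info.rotation << "\n";
    } else {
        std::cout << "reuse previous video layer size and hole\n";
    }
}

int main() {
    OnVideoBuffer({1920, 1080, 0});   // first Video Buffer arrives
    OnGraphicVsync();                 // hole is computed with the new size
    OnGraphicVsync();                 // nothing changed, hole is reused
    OnVideoBuffer({1080, 1920, 90});  // user rotates the video
    OnGraphicVsync();                 // hole is recomputed
    return 0;
}
```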
It should be appreciated that the order in which the size-related information of the video layer is sent to the Media HW in the first thread and the first frame of video data is sent to the Media HW in the second thread is not limited, and in an alternative case, the Media HW receives the size-related information of the video layer first and then receives the first frame of video data; in another alternative, the Media HW receives the first frame of video data first and then receives information about the size of the video layer. In either case, the Media HW needs to process the first frame of video data according to the received information about the size of the video layer before sending it to the display driver.
In an alternative scheme, the SurfaceFlinger needs to refer to the size information of the Video Buffer when calculating the information related to the size of the video data. In this case, the Media HW sends notification information to the first thread of the SurfaceFlinger when it receives the first Video Buffer; after the SurfaceFlinger receives the notification information, it obtains the size of the first Video Buffer and calculates the information related to the size of the video data according to the size of the first Video Buffer.
In addition, the image processing architecture introduces two vertical synchronization signals, Graphic Vsync and Display Vsync, wherein the Graphic Vsync is used to trigger the composition of the plurality of graphics layers, and the Display Vsync is used to trigger the composition of the graphics layer and the video layer by the display driver and the refresh of the display device. It should be understood that Graphic Vsync and Display Vsync are two vertical synchronization signals independent of each other, and their frame rates may be different. For example, the frame rate of Display Vsync may be set to be greater than the frame rate of Graphic Vsync, so that the refresh frame rate of the video may be greater than the actual refresh frame rate of the graphics. Illustratively, Graphic Vsync is also used to trigger the application program to render the graphics layer data, which may come from multiple applications; the application program renders the graphics layer data and fills the Graphic Buffer, and the filling of the Graphic Buffer and the synthesis of the multiple layers of graphics layer data by the HWC take place in two different periods of the Graphic Vsync signal.
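The relationship between the two vertical synchronization signals can be illustrated with the short C++ sketch below; the 60 Hz Graphic Vsync and 120 Hz Display Vsync rates are example values assumed for illustration, not values required by the architecture.

```cpp
#include <iostream>

// Graphics layers are re-composited only on Graphic Vsync, while the display
// driver overlays the latest graphics result with the latest video frame on
// every Display Vsync, so the video can refresh faster than the graphics.
int main() {
    const int display_hz = 120;   // assumed Display Vsync frame rate
    const int graphic_hz = 60;    // assumed Graphic Vsync frame rate
    const int ticks_to_show = 8;  // simulate 8 Display Vsync periods

    for (int display_tick = 0; display_tick < ticks_to_show; ++display_tick) {
        bool graphic_tick = (display_tick % (display_hz / graphic_hz)) == 0;
        std::cout << "Display Vsync " << display_tick
                  << ": overlay graphics + video"
                  << (graphic_tick ? "  (Graphic Vsync: re-composite layers)" : "")
                  << "\n";
    }
    return 0;
}
```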
According to the image processing architecture provided by the embodiment of the application, the synthesis of the graphics layers by the HWC and the processing of the video layer by the Media HW are performed in parallel in two threads, so that the processing of the video image is no longer affected by the synthesis of the graphics layers, video playback is no longer affected by the synthesis of the graphics layers, and the problem of frame loss caused by the time consumed by graphics layer synthesis during video playback can be effectively solved. Furthermore, setting the size of the video layer and digging the hole in the graphics layer are completed in the same thread, and inter-thread communication can be performed between the first thread and the second thread; when the size of the Video Buffer changes, the second thread can notify the first thread so that the hole size in the graphics layer remains consistent with the size of the video data, ensuring synchronous, matched display of the video layer and the graphics layer. In addition, since the vertical synchronization signal controlling the composition of the plurality of graphics layers and the vertical synchronization signal controlling the refresh frame rate of the display device are independent of each other, the refresh frame rate of the video may be greater than the actual refresh frame rate of the graphics, and thus the image processing architecture can support the playback of high-frame-rate video whose video frame rate is higher than the refresh frame rate of the graphics.
In an alternative scenario, graphics hardware included in a graphics processing system includes a HWC and a GPU, and if the HWC does not support overlaying the graphics layers, hardware resources of the GPU may be invoked to implement the overlaying of the multi-layer graphics layers.
Another exemplary image processing architecture is provided for embodiments of the present application as shown in fig. 6.
In the image processing architecture shown in fig. 6, in the first thread, after the arrival of the graphics Vsync has been waited for, the indication information of the Graphic Buffers is sent to the GPU; the GPU implements the synthesis (or superposition) of the multiple layers of graphics layer data and the hole digging processing of the graphics layer, and the synthesized graphics layer data is stored in the frame buffer. Alternatively, the SurfaceFlinger may use the image processing function of the GPU, by calling the API interface of the GPU, to implement the synthesis of the multiple layers of graphics layer data and the hole digging processing of the graphics layer. The GPU returns the processing result to the SurfaceFlinger, the SurfaceFlinger sends the indication information of the frame buffer to the hardware synthesizer, and the hardware synthesizer then informs the display driver of the indication information of the frame buffer, so that the display driver can read the processed graphics layer data from the corresponding memory according to the indication information of the frame buffer. In contrast to the image processing architecture shown in fig. 5, the superposition of the multiple graphics layers and the hole digging processing on the graphics layers in the image processing architecture of fig. 6 are performed not by the HWC but by the GPU; the other processing is the same as that of the image processing architecture shown in fig. 5, and reference may be made to the description of the image processing architecture shown in fig. 5, which is not repeated here.
It should be understood that, although not shown, the hardware synthesizer and the media hardware in fig. 5 and 6 each have a corresponding hardware abstraction layer and driver layer; a call to the hardware synthesizer needs to be made through the hardware synthesizer abstraction layer and the hardware composition driver, that is, the hardware synthesizer abstraction of the HAL is accessed and the hardware composition driver is called in order to invoke the hardware synthesizer. Likewise, a call to the Media HW needs to be made through the media hardware abstraction layer and the media hardware driver, that is, the media hardware abstraction of the HAL is accessed and the media hardware driver is called in order to invoke the media hardware. It should be understood that in fig. 5 and 6, the indication information of the Graphic Buffer is sent to the hardware synthesizer abstraction, and the indication information of the Video Buffer is sent to the media hardware abstraction; the hardware synthesizer abstraction sends the indication information of the frame buffer to the display driver, and the media hardware abstraction sends the indication information of the Video Buffer to the display driver. That is, the transfer of the related indication information can be considered to occur at the hardware abstraction layer and the driver layer without being actually transmitted to the hardware.
Based on the architecture of image processing shown in fig. 5 and fig. 6, an embodiment of the present application further provides a method for processing image data, as shown in fig. 7, where the method includes:
701. Synthesizing a plurality of graphic layers in a first thread and digging holes in at least one graphic layer in the plurality of graphic layers to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
it will be appreciated that the resultant graphic layer data includes a hole-digging region which is typically arranged to be transparent so that the video layer can be displayed through the hole-digging region when the graphic layer and the video layer are combined. In an alternative case, holes may be drilled in the graphics layers below the video layer, respectively, and then the multiple graphics layers may be combined into one graphics layer; alternatively, a plurality of pattern layers may be first combined into one pattern layer, and then the combined pattern layer may be hollowed out.
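A minimal sketch of the hole digging step, under the assumption of an ARGB pixel layout, is the following C++ fragment: the pixels of the region that the video layer will occupy are made fully transparent in the composited graphics layer, so that the video layer can later show through when the two are superposed. The function and field names are placeholders for illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

struct Rect { int x, y, w, h; };

// Clear the alpha channel of every pixel inside the hole region, making it transparent.
void PunchHole(std::vector<uint32_t>& composited, int frame_w, const Rect& hole) {
    for (int row = hole.y; row < hole.y + hole.h; ++row) {
        for (int col = hole.x; col < hole.x + hole.w; ++col) {
            composited[row * frame_w + col] &= 0x00FFFFFF;   // alpha = 0: transparent
        }
    }
}

int main() {
    const int w = 8, h = 8;
    std::vector<uint32_t> frame(w * h, 0xFFFFFFFF);   // opaque white composited graphics
    Rect video_rect{2, 2, 4, 4};                       // where the video layer will sit
    PunchHole(frame, w, video_rect);
    std::cout << "alpha at (3,3): " << (frame[3 * w + 3] >> 24)
              << " (0 = transparent hole)\n";
    return 0;
}
```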
702. Processing the video layer in the second thread to obtain a processed video layer;
it should be appreciated that the processed video layer can be displayed through the hole-punched area. The method embodiment allows a certain size difference between the processed video layer and the hole digging area, that is, the sizes of the processed video layer and the hole digging area are not necessarily completely consistent.
703. And superposing the synthesized graphic layer and the processed video layer to obtain display data.
In the embodiment of the application, the synthesis and hole digging processing of the graphics layer and the processing of the video layer are performed in parallel in two threads, so that the processing of the video image is not influenced by the synthesis of the graphics layer any more, the video playing is not influenced by the synthesis of the graphics layer any more, and the problem of frame loss caused by time consumption of the synthesis of the graphics layer in the video playing process can be effectively solved.
In an alternative case, before step 701, the method further comprises: information about the size of the video layer is set in the first thread.
In this way, the information related to the size of the video layer is first set in the first thread, and then the hole digging processing is performed on the at least one graphics layer according to the information related to the size of the video layer; the set information related to the size of the video layer is sent to the Media HW in the first thread, and the Media HW then processes the video layer according to the information related to the size of the video layer in the second thread to obtain the processed video layer. It should be appreciated that the information related to the size of the video layer may include the size of the video layer and the position information of the video layer. By way of example, the information related to the size of the video layer may include one vertex coordinate of the video layer and two lengths representing the length and width of the video layer; the information related to the size of the video layer may also include two vertex coordinates and one length of the video layer, and the size and the playing position of the video layer can be uniquely determined from the two vertex coordinates and the one length; if the position information of the video layer is the four vertex coordinates used for displaying the video layer, the size of the video layer can be determined from the four vertex coordinates, and in this case the information related to the size of the video layer may include only the position information. The information related to the size of the video layer is calculated by the SurfaceFlinger. The multimedia framework sets the initial size of the video layer through an API of the system and sends it to the SurfaceFlinger; the SurfaceFlinger can capture or perceive operations such as the user enlarging, shrinking or rotating the video, and calculates the information related to the size of the video layer by combining the initial size of the video layer sent by the multimedia framework with these operations.
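Purely as an illustration of the kinds of size-related information listed above, the following C++ sketch stores one vertex plus a width and a height and derives the four corner coordinates from them; the structure and field names are assumptions, not an actual interface.

```cpp
#include <array>
#include <iostream>
#include <utility>

// One vertex (top-left) plus width and height; the four corners follow from these.
struct VideoLayerGeometry {
    int x, y;
    int width, height;
};

std::array<std::pair<int, int>, 4> Corners(const VideoLayerGeometry& g) {
    return {{{g.x, g.y},
             {g.x + g.width, g.y},
             {g.x, g.y + g.height},
             {g.x + g.width, g.y + g.height}}};
}

int main() {
    // Initial size from the multimedia framework, then enlarged 2x by the user.
    VideoLayerGeometry initial{100, 200, 640, 360};
    VideoLayerGeometry scaled{initial.x, initial.y, initial.width * 2, initial.height * 2};
    for (const auto& corner : Corners(scaled)) {
        std::cout << "(" << corner.first << ", " << corner.second << ") ";
    }
    std::cout << "\n";   // the hole-digging area is set with exactly this geometry
    return 0;
}
```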
In the embodiment of the application, since the setting of the size of the video layer and the hole digging processing of the graphics layer below the video layer are completed in the same thread, that is, the sizes of the processed video layer and the hole digging region are obtained according to the set size of the video layer, the size of the hole digging region in the graphics layer is completely consistent with the size of the video layer, the processed video layer and the hole digging region can be completely matched, and therefore the processed video layer can be synchronously displayed through the hole digging region.
In an alternative case, synthesizing the multiple graphic layers in the first thread and hole digging the at least one graphic layer based on the first vertical synchronous signal; superposing the synthesized graphic layer and the processed video layer based on the second vertical synchronous signal to obtain display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
It should be understood that the first vertical synchronization signal and the second vertical synchronization signal are two periodic signals independent of each other, and the first vertical synchronization signal and the second vertical synchronization signal may have different frame rates and different periods. Specifically, when the effective signal of the first vertical synchronous signal arrives, synthesizing a plurality of graphic layers in a first thread and hole digging treatment is carried out on at least one graphic layer; and when the effective signal of the second vertical synchronous signal arrives, the synthesized graphic layer and the processed video layer are overlapped to obtain display data. The first and second vertical synchronization signals may be active high or active low, and the first and second vertical synchronization signals may be level triggered, rising edge triggered, or falling edge triggered. The valid signal arrival of the Vsync signal can be understood as: the rising edge of the Vsync signal comes, the falling edge of the Vsync signal comes, or the Vsync signal is a high level signal or a low level signal. For example, the first vertical synchronization signal may be Graphic Vsync in the foregoing embodiment, and the second vertical synchronization signal may be Display Vsync in the foregoing embodiment.
In an alternative case, the frame rate of the first vertical synchronization signal is smaller than the frame rate of the second vertical synchronization signal.
In the embodiment of the present application, since the first vertical synchronization signal is used for controlling the synthesis of the plurality of graphics layers, the signal for controlling the synthesis of the display video data and the refresh frame rate of the display device is the second vertical synchronization signal, the first vertical synchronization signal and the second vertical synchronization signal are independent from each other, and the frame rate of the second vertical synchronization signal may be greater than the frame rate of the first vertical synchronization signal, so that the image processing architecture may support the playing of a high-frame-rate video with a video frame rate higher than the graphics refresh frame rate.
In an optional case, when the valid signal of the first vertical synchronization signal arrives, in the first thread, information related to the size of the video layer is set first, then the multi-layer graphics layers are synthesized, and hole digging processing is performed on at least one graphics layer according to the information related to the size of the video layer.
In this embodiment of the present application, setting information related to the size of the video layer in the first thread needs to be performed after the arrival of the effective signal of the vertical synchronization signal, and setting information related to the size of the video layer and synthesizing the graphics layer are performed sequentially after the arrival of the effective signal of the same vertical synchronization signal.
In an optional case, when the second thread acquires the first Video Buffer, sending first notification information to the first thread; setting information related to the size of the Video layer according to the size of the first Video Buffer in the first thread; or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; in the first thread, information about the size of the Video layer is reset according to the changed size of the Video Buffer.
It should be appreciated that one Video layer of data is stored in each Video Buffer, and that the size of the Video Buffer is related to the size of the stored Video layer. Therefore, when the size of the Video layer changes, the size of the Video Buffer also changes. The first notification information is used for notifying the first thread that the size of the video layer is changed.
In this embodiment of the present application, inter-thread communication can be performed between the first thread and the second thread. When the first Video Buffer is received or the size of the Video Buffer changes, the second thread can notify the first thread, so that the information related to the size of the video layer can be reset in the first thread and the hole can be dug in the graphics layer again according to the changed size, ensuring that the video layer after the size change can be displayed through the hole-digging area of the graphics layer, that is, ensuring that synchronous, matched display of the video layer and the graphics layer can still be achieved when the size of the Video Buffer changes. In an alternative case, when the size of the Video Buffer changes, the information related to the size of the video data calculated by the SurfaceFlinger does not necessarily change.
In an alternative case, the synthesis of the multiple graphics layers and the hole digging of at least one graphics layer are performed by the HWC in the first thread.
It should be appreciated that when the hardware resources of the HWC are invoked to perform graphics layer synthesis and hole digging, specifically, the SurfaceFlinger of the framework layer accesses the HWC abstraction of the hardware abstraction layer and calls the HWC through the hardware composition driver, thereby invoking the hardware resources of the HWC.
In an alternative case, the GPU synthesizes the plurality of graphic layers and holes at least one graphic layer in the first thread to obtain synthesized graphic layers.
When the HWC does not support graphics layer synthesis and graphics layer hole digging, hardware resources of the GPU can be called to perform graphics layer synthesis and graphics layer hole digging.
In an alternative case, after setting the information about the size of the video layer in the first thread, the method further includes: in the first thread, sending information about the size of the video layer to a Media HW; in the second thread, the Media HW processes the video layer according to the information related to the size of the video layer, and the processed video layer is obtained.
In an alternative case, the Media HW receives information about the size of the video layer in the first thread, then the Media HW receives the first frame of video layer data in the second thread, and then the Media HW processes the first frame of video layer data according to the received information about the size of the video layer; in another case, the Media HW receives the first frame of video layer data in the second thread, then the Media HW receives the information about the size of the video layer in the first thread, and the Media HW does not immediately process the first frame of video layer data after receiving the first frame of video layer data, but processes the first frame of video layer data after waiting for the information about the size of the video layer to be received, so as to avoid processing errors of the first frame of video layer data.
In an alternative case, the synthesized graphics layer and the processed video layer are superimposed by a display driver to obtain display data.
In an alternative case, the method further comprises:
creating the first thread and the second thread in an initialization stage by the SurfaceFlinger; when the SurfaceFlinger receives a Graphic Buffer, notifying the first thread to process the graphics layer data in the Graphic Buffer; when the SurfaceFlinger receives a Video Buffer, notifying the second thread to process the video layer data in the Video Buffer; wherein one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
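The dispatch just described can be summarized with the following hypothetical C++ sketch: after the two threads have been created at initialization, each incoming buffer is routed by its type, Graphic Buffers to the first thread and Video Buffers to the second thread. The enum, struct and function names are placeholders for illustration.

```cpp
#include <iostream>

enum class BufferType { Graphic, Video };

struct Buffer {
    BufferType type;
    int id;
};

// Stand-ins for posting work to the two already-running worker threads.
void NotifyFirstThread(const Buffer& b)  { std::cout << "first thread  <- Graphic Buffer " << b.id << "\n"; }
void NotifySecondThread(const Buffer& b) { std::cout << "second thread <- Video Buffer   " << b.id << "\n"; }

void OnBufferReceived(const Buffer& b) {
    if (b.type == BufferType::Graphic) {
        NotifyFirstThread(b);    // processed on the next graphics Vsync
    } else {
        NotifySecondThread(b);   // processed immediately, no Vsync wait
    }
}

int main() {
    OnBufferReceived({BufferType::Graphic, 1});   // e.g. status bar layer
    OnBufferReceived({BufferType::Video, 2});     // decoded video frame
    OnBufferReceived({BufferType::Graphic, 3});   // e.g. navigation bar layer
    return 0;
}
```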
Based on the architecture of image processing shown in fig. 5 and 6, another method for processing image data is further provided in the embodiments of the present application, as shown in fig. 8, where the method includes:
801. setting information related to the size of a video layer in a first thread and sending the information to Media HW;
it should be understood that the information related to the size of the video layer may include the size of the video layer and the position information of the video layer. The information related to the size of the video layer is calculated by the SurfaceFlinger: the multimedia framework sets the initial size of the video layer through an API of the system and sends it to the SurfaceFlinger; the SurfaceFlinger can capture or sense operations such as the user enlarging, shrinking or rotating the video, and calculates the information related to the size of the video layer by combining the initial size of the video layer sent by the multimedia framework with these operations. Illustratively, step 801 is performed when a valid signal of Graphic Vsync arrives.
802. Transmitting the indication information of the plurality of Graphic buffers to the HWC in the first thread;
it should be understood that, when the hardware resource of the HWC is called in the first thread to implement the synthesis of the multi-layer graphics layer, in a specific implementation, the instruction information of the Graphic Buffer storing the graphics layer data is sent to the HWC in the first thread, where the instruction information points to a section of memory, and the HWC may acquire and process the graphics layer data from the corresponding memory according to the instruction information.
803. In a first thread, the HWC synthesizes the multi-layer graphic layer data, and performs hole digging treatment on at least one layer of graphic layer according to the information related to the size of the video layer to obtain a synthesized graphic layer;
it will be appreciated that the resultant graphic layer data includes a hole-digging region which is typically arranged to be transparent so that the video layer can be displayed through the hole-digging region when the graphic layer and the video layer are combined. In an alternative case, the HWC may hole the graphics layers below the video layer separately and then combine the multiple graphics layers into one graphics layer; the HWC may also first combine multiple graphics layers into one graphics layer and then hole the combined graphics layer. For example, the synthesized graphics layer data resulting from HWC processing may be stored in a FrameBuffer. It should be appreciated that step 803 is only performed when the valid signal of Graphic Vsync arrives, and that step 803 is performed after step 801.
In an alternative case, if the HWC does not support superposing graphics layers, the SurfaceFlinger may invoke resources of the GPU in the first thread to implement the superposition of the multiple graphics layers, corresponding to the image processing architecture shown in fig. 6. In this case, the indication information of the plurality of Graphic Buffers is sent to the GPU in step 802, so as to implement the synthesis of the multiple layers of graphics layer data and the hole digging processing of the graphics layer by using the image processing function of the GPU. The GPU returns the processing result to the SurfaceFlinger, the SurfaceFlinger sends the indication information of the frame buffer to the hardware synthesizer, and the hardware synthesizer then informs the display driver of the indication information of the frame buffer.
804. Sending the synthesized graphic layer to a display driver;
it should be appreciated that the HWC sends the frame buffer indication information to the display driver so that the display driver may obtain the synthesized graphics layer data from the corresponding memory according to the indication information.
805. Transmitting the indication information of the Video Buffer to the Media HW in the second thread;
it should be understood that step 805 is performed in parallel with step 801, i.e., step 805 and step 801 may be performed simultaneously.
806. When a first Video Buffer is received or when the change of the size of the Video Buffer is detected, the second thread notifies the first thread;
Illustratively, the second thread sends first notification information to the first thread (Main Thread), the first notification information indicating that the size of the video layer has changed. After the first thread receives the notification information, the SurfaceFlinger can acquire the updated Video Buffer size and calculate the information related to the video layer size according to the updated Video Buffer size. The size of the Video Buffer may include, for example, size information and rotation information.
807. In the second thread, the Media HW processes the video layer data according to the information about the size of the video layer received in the first thread;
in an alternative case, the Media HW receives information about the size of the video layer in the first thread, then the Media HW receives the first frame of video layer data in the second thread, and then the Media HW processes the first frame of video layer data according to the received information about the size of the video layer; in another case, the Media HW receives the first frame of video layer data in the second thread, then the Media HW receives the information about the size of the video layer in the first thread, and the Media HW does not immediately process the first frame of video layer data after receiving the first frame of video layer data, but processes the first frame of video layer data after waiting for the information about the size of the video layer to be received, so as to avoid processing errors of the first frame of video layer data.
808. Transmitting the processed video layer to a display driver;
The Media HW sends the indication information of the Video Buffer to the display driver, so that the display driver can obtain the processed video layer data from the corresponding memory according to the indication information.
809. When Display Vsync arrives, the display driver superimposes the video layer and the graphics layer to obtain display data, and sends the display data to the display device for display.
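A minimal sketch of the superposition in step 809, assuming the same CPU-side RGBA model used above; a real display driver would program hardware overlay planes instead of blending pixels on the CPU.

    #include <cstdint>
    #include <vector>

    // Superimpose the synthesized graphics layer over the processed video layer.
    // Where the graphics layer is fully transparent (the hole-digging region),
    // the video pixel is shown; elsewhere the graphics pixel wins.
    std::vector<uint32_t> superimpose(const std::vector<uint32_t>& graphics,
                                      const std::vector<uint32_t>& video) {
        std::vector<uint32_t> display(graphics.size());
        for (size_t i = 0; i < graphics.size(); ++i) {
            uint32_t g = graphics[i];
            display[i] = ((g >> 24) == 0) ? video[i] : g;   // alpha == 0 -> hole
        }
        return display;
    }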
In this embodiment of the application, the synthesis of the graphics layers by the HWC and the processing of the video layer by the Media HW are performed in parallel in separate threads, so that the processing of the video image is no longer affected by the synthesis of the graphics layers, video playing is no longer affected by graphics layer synthesis, and the problem of frame loss caused by time-consuming graphics layer synthesis during video playing can be effectively resolved. In addition, setting the size of the video layer and digging holes in the graphics layers below the video layer are completed in sequence in the same thread, and inter-thread communication can be performed between the first thread and the second thread: when the size of the Video Buffer changes, the second thread can notify the first thread, so that the size of the hole dug in the graphics layer is always consistent with the size of the video layer, and the video layer and the graphics layer are displayed in a synchronized and matched manner. Further, because Graphic Vsync is the signal that controls the synthesis of the plurality of graphics layers, Display Vsync is the signal that controls the superposition of the display data and the refresh frame rate of the display device, and the two signals are independent of each other, the frame rate of Display Vsync may be greater than the frame rate of Graphic Vsync; the image processing architecture can therefore support the playing of high-frame-rate video whose video frame rate is higher than the graphics refresh frame rate.
The embodiment of the application further provides another method for processing image data. As shown in FIG. 9, the method includes:
S10. SurfaceFlinger creates a Video Thread in an initialization stage;
The Video Thread is a dedicated thread for processing video layer data and may correspond to the aforementioned second thread. It should be appreciated that the thread that receives the Video Buffer is a different thread from the Video Thread; for example, the thread receiving the Video Buffer may be referred to as the first receiving thread, and inter-thread communication is required between the two so that the first receiving thread can notify the Video Thread that a new Video Buffer is available.
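As an illustrative sketch only, a dedicated video thread of this kind could be created at initialization time and run a loop that waits for a notification that a new Video Buffer is available, then hands the buffer to the Media HW immediately, without waiting for any vertical synchronization signal. The VideoBuffer and send_to_media_hw names below are assumptions, not part of any real framework API.

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <thread>

    struct VideoBuffer { int width = 0, height = 0; /* handle to a decoded frame */ };

    class VideoThread {
    public:
        VideoThread() : worker_([this] { loop(); }) {}     // created at initialization
        ~VideoThread() {
            { std::lock_guard<std::mutex> lk(m_); quit_ = true; }
            cv_.notify_all();
            worker_.join();
        }
        // Called by the first receiving thread when a new Video Buffer arrives.
        void notify_new_buffer(VideoBuffer buf) {
            { std::lock_guard<std::mutex> lk(m_); queue_.push_back(buf); }
            cv_.notify_one();
        }
    private:
        void loop() {
            for (;;) {
                VideoBuffer buf;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return quit_ || !queue_.empty(); });
                    if (quit_) return;
                    buf = queue_.front();
                    queue_.pop_front();
                }
                send_to_media_hw(buf);   // starts immediately; no Vsync wait
            }
        }
        void send_to_media_hw(const VideoBuffer&) { /* hand off to Media HW */ }

        std::mutex m_;
        std::condition_variable cv_;
        std::deque<VideoBuffer> queue_;
        bool quit_ = false;
        std::thread worker_;
    };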
S12. The Multimedia Framework sends the Video Buffer to the Buffer queue of the video layer;
The Buffer includes a Usage flag bit, where the Usage flag bit is used to indicate the type of the Buffer; for example, when the Usage flag bit takes a first indication value, it indicates that the Buffer is a Video Buffer, and when the Usage flag bit takes a second indication value, it indicates that the Buffer is a Graphic Buffer. In an optional case, when the Usage flag bit is not set, the Buffer is a Graphic Buffer; when the Usage flag bit is set, the Buffer is a Video Buffer.
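A hedged sketch of such a classification check; the bit position and the names kUsageVideoLayer, BufferType and classify are assumptions chosen for illustration, not values defined by any real buffer API.

    #include <cstdint>

    // Hypothetical usage flag: one bit marks a buffer as carrying video layer data.
    constexpr uint64_t kUsageVideoLayer = 1ull << 40;

    enum class BufferType { Graphic, Video };

    BufferType classify(uint64_t usage) {
        // Bit set -> Video Buffer; bit clear -> Graphic Buffer.
        return (usage & kUsageVideoLayer) ? BufferType::Video : BufferType::Graphic;
    }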
S14. SurfaceFlinger receives a Buffer in the first receiving thread;
S16. When the received Buffer is a Video Buffer, SurfaceFlinger notifies the Video Thread that a new Video Buffer is available;
Illustratively, SurfaceFlinger may determine whether the received Buffer is a Video Buffer by checking the Usage flag bit. It should be appreciated that the Video Thread is a loop thread: when SurfaceFlinger receives the Video Buffer sent by the Multimedia Framework, it notifies the Video Thread to process the video layer data, and the Video Thread starts processing as soon as it receives the notification, without waiting for a vertical synchronization signal.
S18. After receiving the notification, the Video Thread takes the Video Buffer out of the Buffer queue of the video layer and sends it to the Media HW for processing;
S20. The Video Thread determines whether the received Video Buffer is the first Buffer received; if so, the method proceeds to S24, and if not, the method proceeds to S22;
S22. The Video Thread determines whether the size of the current Video Buffer has changed compared with the previous Video Buffer; if so, the method proceeds to S24; if there is no change, no processing is performed and this branch ends;
It should be understood that S20 and S22 are two parallel judgment conditions; if either condition is satisfied, the method proceeds to S24, as illustrated in the sketch below.
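The following C++ sketch illustrates the combined check; BufferSize, VideoSizeMonitor and the notification call in the usage comment are hypothetical names used only for this example.

    #include <optional>

    struct BufferSize {
        int width = 0, height = 0;
        bool operator==(const BufferSize& o) const {
            return width == o.width && height == o.height;
        }
    };

    class VideoSizeMonitor {
    public:
        // Returns true when the Main Thread must be notified: either this is the
        // first Video Buffer (S20) or its size differs from the previous one (S22).
        bool should_notify(const BufferSize& current) {
            bool notify = !last_.has_value() || !(*last_ == current);
            last_ = current;
            return notify;
        }
    private:
        std::optional<BufferSize> last_;
    };

    // Usage inside the Video Thread (post_size_changed is hypothetical):
    //   if (monitor.should_notify(size)) main_thread.post_size_changed(size);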
S24. Notify the Main Thread that the size of the video layer has changed;
In an alternative case, the Video Thread sends first notification information to the Main Thread, where the first notification information is used to indicate to the Main Thread that the size of the video layer has changed, so that the Main Thread resets the information related to the size of the video layer according to the updated size; for example, the first notification information may be carried on an identification bit. After the Main Thread receives the notification information, SurfaceFlinger can obtain the updated Video Buffer size and calculate the video layer size according to the updated Video Buffer size. It should be appreciated that the Main Thread is also created by SurfaceFlinger during the initialization stage; it is the thread that processes graphics layer data and is also used to set the size of the video layer. The Main Thread may correspond to the aforementioned first thread. Therefore, when the Video Buffer received by the Video Thread is the first Video Buffer or the size of the Video Buffer has changed, the Main Thread needs to be notified so that the Main Thread resets the size of the video layer. The Main Thread sends the reset information related to the size of the video layer to the Media HW. It should be appreciated that the information related to the set size of the video layer is calculated by SurfaceFlinger. In an optional case, the multimedia framework sets the initial size of the video layer through an API of the system and transmits the initial size to SurfaceFlinger; SurfaceFlinger may also capture or sense user operations such as zooming in, zooming out, or rotating the video, and SurfaceFlinger then combines the initial size of the video layer sent by the multimedia framework with these zoom or rotation operations to calculate the information related to the size of the video layer.
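As a non-authoritative sketch of how such size information might be derived, the fragment below combines the Video Buffer dimensions with a user zoom factor and rotation; the DisplayRect type and compute_video_layer_rect function are assumptions for illustration.

    #include <cmath>

    struct DisplayRect { int left = 0, top = 0, width = 0, height = 0; int rotation = 0; };

    // Combine the Video Buffer size with user zoom / rotation to derive the video
    // layer geometry; the same rectangle is later used as the hole-digging region,
    // which keeps the hole and the video layer exactly the same size.
    DisplayRect compute_video_layer_rect(int buf_w, int buf_h,
                                         float user_scale, int user_rotation_deg,
                                         int origin_x, int origin_y) {
        DisplayRect r;
        bool swap = (user_rotation_deg % 180) != 0;      // 90 / 270 degrees swap w and h
        int w = static_cast<int>(std::lround(buf_w * user_scale));
        int h = static_cast<int>(std::lround(buf_h * user_scale));
        r.width    = swap ? h : w;
        r.height   = swap ? w : h;
        r.left     = origin_x;
        r.top      = origin_y;
        r.rotation = user_rotation_deg;
        return r;
    }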
S26. In the Main Thread, when Graphic Vsync arrives, the information related to the set size of the video layer is first sent to the Media HW, and then the HWC synthesizes the multiple graphics layers and digs holes in the graphics layers below the video layer according to the information related to the set size of the video layer;
It should be understood that sending the information related to the set size of the video layer to the Media HW may also be understood as setting the size of the video layer on the Media HW. The Main Thread is also a loop thread: when SurfaceFlinger receives the Graphic Buffer sent by the Graphic Framework, it notifies the Main Thread, but the Main Thread only starts to process the graphics layer data in the Graphic Buffer when Graphic Vsync arrives. The method further includes: in the Main Thread, SurfaceFlinger sends the indication information of the Graphic Buffers to the HWC, so that the HWC synthesizes the multiple graphics layers and digs holes in the relevant graphics layers. Alternatively, if the HWC itself does not support graphics layer synthesis, when Graphic Vsync arrives, SurfaceFlinger invokes the GPU, and the GPU synthesizes the multiple graphics layers and digs holes in the relevant graphics layers.
Because setting the size of the video layer and digging holes in the graphics layers below the video layer are completed in the same thread, the size of the hole dug in the graphics layer is guaranteed to be completely consistent with the size of the video layer, so that the video layer and the graphics layer are displayed in a synchronized and matched manner.
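The single-thread ordering can be pictured with the following sketch of a Graphic Vsync handler; MediaHw, Hwc, DisplayRect and the two method names are hypothetical and stand in for the real Media HW and HWC interfaces.

    struct DisplayRect { int left = 0, top = 0, width = 0, height = 0; int rotation = 0; };

    struct MediaHw { void set_video_layer_rect(const DisplayRect&) { /* program scaler */ } };
    struct Hwc     { void compose_layers_with_hole(const DisplayRect&) { /* composite + hole */ } };

    // Graphic Vsync handler running on the Main Thread: the size is sent to the
    // Media HW first, then the graphics layers are composed and the hole is dug
    // with the very same rectangle, so the hole and the video layer always match.
    void on_graphic_vsync(MediaHw& media_hw, Hwc& hwc, const DisplayRect& video_rect) {
        media_hw.set_video_layer_rect(video_rect);
        hwc.compose_layers_with_hole(video_rect);
    }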
S28, the Media HW sends the processed video layer to a display driver;
S30. The HWC sends the processed graphics layer to the display driver;
It should be understood that S28 and S30 may be performed synchronously, and that S28 is also parallel to S20–S24 without a fixed order; for example, when the Media HW in S18 obtains the processed video layer, S28 is performed to send the processed video layer to the display driver, and when it is detected that the first Video Buffer has been received or the size of the Video Buffer has changed, S20 or S22 is performed. The processed video layer data is stored in a Video Buffer, and the Media HW sends the indication information of the Video Buffer to the display driver so that the display driver can obtain the processed video layer data from the corresponding memory; the processed graphics layer data is stored in a frame buffer, and the HWC sends the indication information of the frame buffer to the display driver so that the display driver can obtain the processed graphics layer data from the corresponding memory. Optionally, if the HWC does not support graphics layer synthesis, the synthesis of the multiple graphics layers is performed by the GPU; the GPU returns the indication information of the frame buffer to SurfaceFlinger, and SurfaceFlinger then sends it to the HWC, so that the HWC sends the indication information of the frame buffer to the display driver.
S32. When Display Vsync arrives, the display driver superimposes the processed video layer and the processed graphics layer to obtain display data, and sends the display data to the display device for display.
Display Vsync is also used to control the refresh frame rate of the display device. Because the processed graphics layer data contains the "hole digging area", and the size of the "hole digging area" is consistent with the size of the processed video layer, the video layer and the "hole digging area" match exactly after the display driver combines the processed graphics layer data and the processed video layer, and the video layer can be displayed synchronously through the "hole digging area". In addition, because Graphic Vsync is the signal that controls the synthesis of the plurality of graphics layers, Display Vsync is the signal that controls the superposition of the display data and the refresh frame rate of the display device, and the two signals are independent of each other, the frame rate of Display Vsync may be greater than the frame rate of Graphic Vsync; the image processing architecture can therefore support the playing of high-frame-rate video whose video frame rate is higher than the graphics refresh frame rate.
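A minimal sketch of why the independent signals allow a video frame rate above the graphics refresh rate: each Display Vsync picks up the newest processed video frame but may simply reuse the last composed graphics layer. The Frame and FrameSlots types are assumptions for illustration, not part of any real driver interface.

    #include <memory>
    #include <mutex>

    struct Frame { /* buffer handle of a composed graphics or processed video frame */ };

    class FrameSlots {
    public:
        void publish_graphics(std::shared_ptr<Frame> f) {     // at Graphic Vsync rate
            std::lock_guard<std::mutex> lk(m_); graphics_ = std::move(f);
        }
        void publish_video(std::shared_ptr<Frame> f) {        // at the video frame rate
            std::lock_guard<std::mutex> lk(m_); video_ = std::move(f);
        }
        void on_display_vsync() {                              // at Display Vsync rate
            std::shared_ptr<Frame> g, v;
            { std::lock_guard<std::mutex> lk(m_); g = graphics_; v = video_; }
            if (g && v) superimpose_and_scan_out(*g, *v);      // hole region shows the video
        }
    private:
        void superimpose_and_scan_out(const Frame&, const Frame&) { /* display driver */ }
        std::mutex m_;
        std::shared_ptr<Frame> graphics_, video_;
    };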
It should be understood that, for ease of understanding, the method embodiment corresponding to FIG. 9 describes the method in terms of steps, but the sequence numbers of the steps do not limit the order of execution among the steps. The steps performed in the Video Thread and the steps performed in the Main Thread are parallel.
Fig. 10 is a block diagram of an exemplary image data processing apparatus according to an embodiment of the present application. The device comprises: a processor having software instructions running thereon to form a framework layer, a hardware abstraction layer, and a driver layer. Optionally, the device further includes a transmission interface, through which the processor receives data sent by other devices or sends data to other devices; the transmission interface may be, for example, an HDMI interface, a V-By-One interface, an eDP interface, an MIPI interface, a DP interface, or a universal serial bus (Universal Serial Bus, USB) interface. These interfaces are typically electrical communication interfaces, but may also be mechanical interfaces or interfaces of other forms, which is not limited in this embodiment. The device may be coupled to hardware resources such as a display, media hardware, or graphics hardware via connectors, transmission lines, buses, or the like. The device may be a processor chip with image or video processing capabilities. In an alternative case, the device, the media hardware, and the graphics hardware may be integrated on one chip. In another alternative case, the device, the media hardware, the graphics hardware, and the display may be integrated in one terminal. It should be understood that the image processing frameworks shown in FIG. 5 and FIG. 6 described above may run on the device shown in FIG. 10, and that the device shown in FIG. 10 may be used to implement the method embodiments corresponding to FIG. 7 to FIG. 9 described above.
Illustratively, the framework layer includes SurfaceFlinger, the hardware abstraction layer includes a graphics hardware abstraction and a media hardware abstraction, and the driver layer includes a media hardware driver, a graphics hardware driver, and a display driver. The graphics hardware abstraction corresponds to the graphics hardware driver, and the media hardware abstraction corresponds to the media hardware driver; the graphics hardware can be invoked through the graphics hardware abstraction and the graphics hardware driver, and the media hardware can be invoked through the media hardware abstraction and the media hardware driver. For example, the graphics hardware driver is invoked by accessing the graphics hardware abstraction so as to invoke the graphics hardware, and the media hardware driver is invoked by accessing the media hardware abstraction so as to invoke the media hardware.
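The layering can be illustrated with the following C++ sketch; the interface and method names are invented for this example and do not correspond to any real HAL definition.

    #include <memory>

    // Hypothetical driver objects: the lowest software layer talking to hardware.
    struct GraphicsHwDriver { void submit_compose_job() { /* command to graphics hardware */ } };
    struct MediaHwDriver    { void submit_video_job()   { /* command to media hardware   */ } };

    // Hardware abstractions: what the framework layer actually calls.
    class GraphicsHwAbstraction {
    public:
        explicit GraphicsHwAbstraction(std::shared_ptr<GraphicsHwDriver> d) : drv_(std::move(d)) {}
        void compose() { drv_->submit_compose_job(); }    // abstraction -> driver -> hardware
    private:
        std::shared_ptr<GraphicsHwDriver> drv_;
    };

    class MediaHwAbstraction {
    public:
        explicit MediaHwAbstraction(std::shared_ptr<MediaHwDriver> d) : drv_(std::move(d)) {}
        void process_video() { drv_->submit_video_job(); }
    private:
        std::shared_ptr<MediaHwDriver> drv_;
    };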
The framework layer is used for, in a first thread, invoking the graphics hardware through the graphics hardware abstraction and the graphics hardware driver to synthesize the multiple graphics layers and dig holes in at least one of the multiple graphics layers, so as to obtain a synthesized graphics layer, where the synthesized graphics layer includes a hole digging area;
The framework layer is further used for, in a second thread, invoking the Media HW through the media hardware abstraction and the media hardware driver to process the video layer, so as to obtain a processed video layer; the processed video layer can be displayed through the hole digging area;
The display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
It should be understood that, after the graphics hardware obtains the synthesized graphics layer, the graphics hardware abstraction sends the indication information of the frame buffer to the display driver, and the display driver can read the synthesized graphics layer data from the corresponding memory space according to the indication information of the frame buffer; after the media hardware obtains the processed video layer, the media hardware abstraction sends the indication information of the Video Buffer storing the processed video layer to the display driver, and the display driver can read the processed video layer data from the corresponding memory space according to the indication information of the Video Buffer.
In an alternative case, the framework layer is further used to: set information related to the size of the video layer in the first thread, and send the information related to the size of the video layer to the media hardware abstraction;
the framework layer is specifically used for calling graphic hardware to synthesize a plurality of graphic layers and hole digging at least one graphic layer according to the information related to the size of the video layer in the first thread to obtain the synthesized graphic layer; and in the second thread, calling the Media HW to process the video layer according to the information related to the size of the video layer, and obtaining the processed video layer.
In an optional case, the framework layer is specifically configured to invoke the graphics hardware to synthesize the multiple graphics layers and hole at least one graphics layer in the first thread based on a first vertical synchronization signal; the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer based on the second vertical synchronization signal to obtain display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In an optional case, the framework layer is specifically configured to, when a valid signal of the first vertical synchronization signal arrives, send, in the first thread, information related to a size of the video layer to the media hardware abstraction, and then call the graphics hardware to synthesize the multiple graphics layers and hole at least one graphics layer according to the information related to the size of the video layer.
In an alternative case, the frame rate of the first vertical synchronization signal is smaller than the frame rate of the second vertical synchronization signal.
In an optional case, when the second thread acquires the first Video Buffer, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the first Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the first Video Buffer.
Or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; the framework layer is further used for acquiring the size of the changed Video Buffer in the first thread after the first thread receives the first notification information, and setting information related to the size of the Video layer according to the size of the changed Video Buffer; wherein, the Video Buffer stores the data of a Video layer, and the size of the Video Buffer is related to the size of the Video layer.
In an alternative scenario, the graphics hardware includes a HWC and a GPU, as shown in fig. 11, which is a schematic diagram of the architecture of another exemplary image processing apparatus provided in an embodiment of the present application. Correspondingly, the hardware abstraction layer comprises HWC abstraction, and the driving layer comprises HWC driving and GPU driving.
The framework layer is specifically used for: in a first thread, invoking the HWC through the HWC abstraction and the HWC driver to synthesize the multiple graphics layers and dig holes in at least one graphics layer, so as to obtain the synthesized graphics layer; the HWC abstraction is used for: sending the synthesized graphics layer to the display driver; the media hardware abstraction is also used to send the processed video layer to the display driver.
When the HWC does not support graphics layer synthesis and graphics layer hole digging, the framework layer is specifically used for: in a first thread, invoking the GPU through the GPU driver to synthesize the multiple graphics layers and dig holes in at least one graphics layer, so as to obtain the synthesized graphics layer; the GPU is further configured to: return the synthesized graphics layer to the framework layer; the framework layer is further used for sending the synthesized graphics layer to the HWC abstraction; the HWC abstraction is used for sending the synthesized graphics layer to the display driver; the Media HW abstraction is also used to send the processed video layer to the display driver.
It should be appreciated that the HWC abstraction sends the display driver the indication information of the frame buffer storing the synthesized graphics layer, and the media hardware abstraction sends the display driver the indication information of the Video Buffer storing the processed video layer.
In an alternative case, SurfaceFlinger of the framework layer is used to: create a first thread and a second thread in an initialization phase; SurfaceFlinger is further used for notifying the first thread to process graphics layer data in a Graphic Buffer when the Graphic Buffer is received, and for notifying the second thread to process video layer data in a Video Buffer when the Video Buffer is received; the Graphic Buffer stores one layer of graphics layer data, and the Video Buffer stores one layer of video layer data.
In an alternative case, the Media HW abstraction is specifically used to: receive the first frame of video layer data in the second thread, and receive the information related to the size of the video layer in the first thread; the framework layer is specifically used to: in the second thread, invoke the Media HW through the Media HW abstraction to process the first frame of video layer data according to the information related to the size of the video layer.
As shown in fig. 12, a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of the present application is provided. The device comprises: the framework layer, the graphics hardware, the media hardware and the display driver, wherein the framework layer and the display driver are part of an operating system formed by software instructions running on the processor. Graphics hardware and media hardware may be coupled to the processor through connectors, interfaces, transmission lines or buses, etc., which are typically electrical communication interfaces, but may also be mechanical interfaces or other forms of interfaces. By way of example, the graphics hardware may include a GPU and a HWC.
The framework layer is used for calling graphic hardware to synthesize the multi-layer graphic layers and digging holes of at least one graphic layer in the multi-layer graphic layers in a first thread to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
The framework layer is also used for calling Media HW to process the video layer in the second thread to obtain a processed video layer; the processed video layer can be displayed through the hole digging area;
and the display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
In an alternative case, software instructions run on the processor to form a hardware abstraction layer, a graphics hardware driver, and a media hardware driver, where the hardware abstraction layer includes a graphics hardware abstraction corresponding to the graphics hardware driver and a media hardware abstraction corresponding to the media hardware driver. The framework layer is specifically used for calling the graphics hardware through the graphics hardware abstraction and the graphics hardware driver, and for calling the media hardware through the media hardware abstraction and the media hardware driver.
In an alternative case, the framework layer is further used to: set information related to the size of the video layer in the first thread, and send the information related to the size of the video layer to the Media HW; the framework layer is specifically used for, in the first thread, calling the graphics hardware to synthesize the multiple graphics layers and dig holes in the at least one graphics layer according to the information related to the size of the video layer, so as to obtain the synthesized graphics layer; and, in the second thread, calling the Media HW to process the video layer according to the information related to the size of the video layer, so as to obtain the processed video layer.
In an optional case, the framework layer is specifically configured to invoke the graphics hardware in the first thread to synthesize the multiple graphics layers and hole the at least one graphics layer based on a first vertical synchronization signal; the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer when the second vertical synchronization signal arrives, so as to obtain the display data; wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
In an optional case, the framework layer is specifically configured to send, in the first thread, information related to the size of the video layer to the Media HW when the valid signal of the first vertical synchronization signal arrives, and then invoke the graphics hardware to synthesize the multiple graphics layers and hole the at least one graphics layer according to the information related to the size of the video layer.
In an alternative case, the frame rate of the first vertical synchronization signal is smaller than the frame rate of the second vertical synchronization signal.
In an optional case, when the second thread acquires the first Video Buffer, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the first Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the first Video Buffer; or when the second thread detects that the size of the Video Buffer is changed, sending first notification information to the first thread; the framework layer is further configured to, after the first thread receives the first notification information, obtain a size of the changed Video Buffer in the first thread, and set information related to the size of the Video layer according to the size of the changed Video Buffer; wherein the Video Buffer stores data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
In an alternative case, the graphics hardware comprises a hardware synthesizer HWC, the framework layer being specifically adapted to: in the first thread, calling the HWC to synthesize the multi-layer graphic layer and digging at least one layer of graphic layer to obtain the synthesized graphic layer; the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the HWC is further configured to: transmitting the synthesized graphic layer to a display driver; the Media HW is also used to send the processed video layer to a display driver.
In an alternative case, the graphics hardware comprises a graphics processor GPU, and the framework layer is specifically configured to: in a first thread, a GPU is called to synthesize the multi-layer graphic layer and hole digging is carried out on at least one layer of graphic layer, so that the synthesized graphic layer is obtained; the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the GPU is further configured to: returning the synthesized graphic layer to the frame layer; the framework layer is also configured to send the synthesized graphics layer to the HWC; the HWC is also used for sending the synthesized graphic layer to the display driver; the Media HW is also used to send the processed video layer to the display driver.
In an alternative case, the framework layer includes a SurfaceFlinger configured to: create the first thread and the second thread in an initialization phase; SurfaceFlinger is further configured to notify the first thread to process graphics layer data in a Graphic Buffer when the Graphic Buffer is received, and to notify the second thread to process video layer data in a Video Buffer when the Video Buffer is received; one layer of graphics layer data is stored in the Graphic Buffer, and one layer of video layer data is stored in the Video Buffer.
In an alternative case, the Media HW is specifically adapted to: receiving first frame video layer data in the second thread; receiving information related to the size of the video layer in the first thread; the frame layer is specifically used for: in the second thread, the Media HW is invoked to process the first frame of video layer data according to information about the size of the video layer.
It should be appreciated that the image processing framework shown in fig. 5 and 6 described above may be run on the apparatus shown in fig. 12, and that the apparatus shown in fig. 12 may be used to implement the method embodiments corresponding to fig. 7 to 9 described above. For a detailed explanation, reference is made to the description of the method embodiments described above.
Fig. 13 is a schematic architecture diagram of another exemplary image processing apparatus according to an embodiment of the present application. The device comprises: a processor, graphics hardware, and media hardware, the processor having software instructions running thereon to form a frame layer and a display driver;
the framework layer is used for calling the graphic hardware to synthesize the multi-layer graphic layers and digging holes of at least one layer of graphic layers in the first thread to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
the frame layer is used for calling media hardware to process the video layer in the second thread to obtain a processed video layer;
and the display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
As shown in fig. 13, the software instructions running on the processor are also used to form a hardware abstraction layer that includes a graphics hardware abstraction and a media hardware abstraction. Correspondingly, although not shown in FIG. 13, the driver layer also includes graphics hardware drivers and media hardware drivers. Graphics hardware and media hardware may be coupled to the processor through connectors, interfaces, transmission lines or buses, etc. In an alternative case, the graphics hardware includes a HWC and a GPU, as shown in FIG. 14. Correspondingly, the hardware abstraction layer comprises HWC abstraction, and the driving layer comprises HWC driving and GPU driving.
It should be understood that the image processing framework shown in fig. 5 and 6 described above may be run on the apparatus shown in fig. 13 or 14, and that the apparatus shown in fig. 13 or 14 may be used to implement the method embodiments corresponding to fig. 7 to 9 described above. And are not defined in detail herein.
The present embodiments also provide a computer-readable storage medium having instructions stored therein, which when run on a computer or processor, cause the computer or processor to perform some or all of the functions of the methods provided by the embodiments of the present application.
The present embodiments also provide a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform some or all of the functions of the methods provided by the embodiments of the present application.
The above embodiments are only intended to illustrate the technical solutions of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (33)

  1. A method of image data processing, the method comprising:
    synthesizing a plurality of graphic layers in a first thread and digging holes in at least one graphic layer in the plurality of graphic layers to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
    processing the video layer in a second thread to obtain a processed video layer, wherein the processed video layer can be displayed through the hole digging area;
    and superposing the synthesized graphic layer and the processed video layer to obtain display data.
  2. The method of claim 1, wherein, before synthesizing the plurality of graphics layers and hole digging the at least one graphics layer in the first thread to obtain the synthesized graphics layer, the method further comprises:
    setting information related to the size of the video layer in the first thread;
    the hole digging processing is carried out on at least one graphics layer, and the method specifically comprises the following steps:
    digging holes on the at least one graphic layer according to the information related to the size of the video layer;
    the processing the video layer in the second thread to obtain a processed video layer specifically includes:
    And in the second thread, processing the video layer according to the information related to the size of the video layer to obtain the processed video layer.
  3. A method according to claim 1 or 2, characterized in that,
    synthesizing the multi-layer graph layer in the first thread based on a first vertical synchronous signal and hole digging the at least one layer graph layer;
    superposing the synthesized graphic layer and the processed video layer based on a second vertical synchronous signal to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  4. The method of claim 3, wherein,
    when the effective signal of the first vertical synchronization signal arrives, setting information related to the size of the video layer in the first thread, then synthesizing the multi-layer graphic layer, and carrying out hole digging processing on at least one layer of graphic layer according to the information related to the size of the video layer.
  5. The method of claim 3 or 4, wherein a frame rate of the first vertical synchronization signal is less than a frame rate of the second vertical synchronization signal.
  6. The method according to any one of claims 1 to 5, further comprising:
    when the second thread acquires a first Video Buffer, sending first notification information to the first thread;
    setting information related to the size of the Video layer according to the size of the first Video Buffer in the first thread; or alternatively,
    when the second thread detects that the size of the Video Buffer is changed, first notification information is sent to the first thread;
    resetting information related to the size of the Video layer according to the size of the changed Video Buffer in the first thread;
    wherein, the Video Buffer stores the data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
  7. The method according to any one of claims 1 to 6, wherein the synthesizing the multiple graphic layers and the hole digging processing are performed on at least one graphic layer in the first thread to obtain a synthesized graphic layer, specifically including:
    in the first thread, a hardware synthesizer HWC synthesizes the plurality of graphic layers and performs hole digging processing on the at least one graphic layer to obtain the synthesized graphic layer.
  8. The method according to any one of claims 1 to 6, wherein the synthesizing the multiple graphic layers and the hole digging processing are performed on at least one graphic layer in the first thread to obtain a synthesized graphic layer, specifically including:
    and in the first thread, the graphics processor GPU synthesizes the plurality of graphics layers and performs hole digging processing on the at least one graphics layer to obtain the synthesized graphics layer.
  9. The method according to any one of claims 1 to 8, wherein after setting the size-related information of the video layer in the first thread, the method further comprises:
    in the first thread, sending information related to the size of the video layer to the Media hardware Media HW;
    the processing the video layer in the second thread to obtain a processed video layer specifically includes:
    in the second thread, the Media HW processes the video layer according to the information related to the size of the video layer, and the processed video layer is obtained.
  10. The method according to any one of claims 1 to 9, wherein,
    the method comprises the steps of superposing the synthesized graphic layer and the processed video layer to obtain display data, and specifically comprises the following steps:
    And the display driver superimposes the synthesized graphic layer and the processed video layer to obtain the display data.
  11. The method according to any one of claims 1 to 8, further comprising:
    creating the first thread and the second thread in an initialization stage by SurfaceFlinger;
    when the SurfaceFlinger receives a Graphic Buffer, notifying the first thread to process graphic layer data in the Graphic Buffer;
    when the SurfaceFlinger receives Video Buffer, notifying the second thread to process Video layer data in the Video Buffer;
    and the Graphic Buffer stores one layer of Graphic layer data, and the Video Buffer stores one layer of Video layer data.
  12. An apparatus for processing image data, the apparatus comprising: a processor on which software instructions are run to form a framework layer, a hardware abstraction layer HAL, and a driver layer, the HAL comprising a graphics hardware abstraction and a Media hardware Media HW abstraction, the driver layer comprising a graphics hardware driver, a Media hardware driver, and a display driver;
    the framework layer is used for calling the graphic hardware through the graphic hardware abstraction and the graphic hardware drive in a first thread, synthesizing the multi-layer graphic layers and digging holes of at least one graphic layer in the multi-layer graphic layers to obtain a synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
    The framework layer is used for calling Media HW through the Media hardware abstraction and the Media hardware driver in a second thread, and processing the video layer to obtain a processed video layer; the processed video layer can be displayed through the hole digging area;
    and the display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
  13. The apparatus of claim 12, wherein the frame layer is further to:
    setting information related to the size of the video layer in the first thread, and sending the information related to the size of the video layer to the Media HW abstract;
    the framework layer is specifically used for,
    in the first thread, calling the graphics hardware to synthesize the multi-layer graphics layer according to the information related to the size of the video layer and hole digging the at least one layer graphics layer to obtain the synthesized graphics layer;
    and in the second thread, calling the Media HW to process the video layer according to the information related to the size of the video layer, and obtaining the processed video layer.
  14. The device according to claim 12 or 13, wherein,
    The framework layer is specifically configured to invoke the graphics hardware to synthesize the multiple graphics layers and hole the at least one graphics layer in the first thread based on a first vertical synchronization signal;
    the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer based on a second vertical synchronization signal to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  15. The apparatus of claim 14, wherein,
    the framework layer is specifically configured to, when an effective signal of the first vertical synchronization signal arrives, send, in the first thread, information related to a size of the video layer to the Media HW abstraction, and then call the graphics hardware to synthesize the multiple graphics layers and perform hole digging processing on the at least one graphics layer according to the information related to the size of the video layer.
  16. The apparatus of claim 14 or 15, wherein a frame rate of the first vertical synchronization signal is less than a frame rate of the second vertical synchronization signal.
  17. The apparatus according to any one of claims 12 to 16, wherein:
    When the second thread acquires a first Video Buffer, sending first notification information to the first thread;
    the framework layer is further configured to obtain, in the first thread, a size of the first Video Buffer after the first thread receives the first notification information, and set information related to the size of the Video layer according to the size of the first Video Buffer; or alternatively,
    when the second thread detects that the size of the Video Buffer is changed, first notification information is sent to the first thread;
    the framework layer is further configured to obtain, in the first thread after the first thread receives the first notification information, a size of the changed Video Buffer, and set information related to the size of the Video layer according to the size of the changed Video Buffer;
    wherein, the Video Buffer stores the data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
  18. The apparatus of any of claims 12 to 17, wherein the graphics hardware comprises a hardware synthesizer HWC, the graphics hardware abstraction comprises a HWC abstraction, the graphics hardware driver comprises a HWC driver, and the framework layer is specifically configured to:
    In the first thread, the HWC is invoked through the HWC abstraction and the HWC driver, the multi-layer graphics layer is synthesized, and hole digging processing is carried out on at least one graphics layer, so as to obtain the synthesized graphics layer;
    the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the HWC abstraction is further configured to:
    sending the synthesized graphic layer to the display driver;
    the Media HW abstraction is also used to send the processed video layer to the display driver.
  19. The apparatus according to any of the claims 12 to 17, wherein the graphics hardware comprises a graphics processor GPU, the graphics hardware driver comprises a GPU driver, the framework layer is specifically configured to:
    in the first thread, the GPU is invoked through the GPU driver, the multi-layer graphics layers are synthesized, and hole digging processing is carried out on at least one layer of graphics layers, so as to obtain the synthesized graphics layer;
    the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the GPU is further configured to:
    Returning the synthesized graphic layer to the frame layer;
    the framework layer is further configured to send the synthesized graphics layer to the HWC abstraction;
    the HWC abstraction is also used for sending the synthesized graphics layer to the display driver;
    the Media HW abstraction is also used to send the processed video layer to the display driver.
  20. An apparatus as claimed in any one of claims 12 to 19, wherein the framework layer comprises a SurfaceFlinger configured to:
    creating the first thread and the second thread in an initialization phase;
    the SurfaceFlinger is further configured to notify the first thread to process graphics layer data in a graphics Buffer when receiving the graphics Buffer;
    when the Video Buffer is received, notifying the second thread to process Video layer data in the Video Buffer;
    and the Graphic Buffer stores one layer of Graphic layer data, and the Video Buffer stores one layer of Video layer data.
  21. The apparatus of any one of claims 12 to 20, wherein the Media HW abstraction is specifically configured to:
    receiving first frame video layer data in the second thread;
    Receiving information related to the size of the video layer in the first thread;
    the framework layer is specifically used for:
    in the second thread, the Media HW is invoked through the Media HW abstraction to process the first frame video layer data according to the information related to the size of the video layer.
  22. An apparatus for processing image data, the apparatus comprising:
    the framework layer is used for calling the graphic hardware to synthesize the multi-layer graphic layers and digging holes of at least one graphic layer in the multi-layer graphic layers in the first thread to obtain the synthesized graphic layer, wherein the synthesized graphic layer comprises a digging hole area;
    the graphics hardware;
    media hardware Media HW;
    the framework layer is further used for calling the Media HW to process the video layer in a second thread to obtain a processed video layer; the processed video layer can be displayed through the hole digging area;
    and the display driver is used for superposing the synthesized graphic layer and the processed video layer to obtain display data.
  23. The apparatus of claim 22, wherein the frame layer is further configured to:
    Setting information related to the size of the video layer in the first thread, and sending the information related to the size of the video layer to the Media HW;
    the framework layer is specifically used for,
    in the first thread, calling the graphics hardware to synthesize the multi-layer graphics layer according to the information related to the size of the video layer and hole digging the at least one layer graphics layer to obtain the synthesized graphics layer;
    and in the second thread, calling the Media HW to process the video layer according to the information related to the size of the video layer, and obtaining the processed video layer.
  24. The apparatus of claim 22 or 23, wherein,
    the framework layer is specifically configured to invoke the graphics hardware in the first thread to synthesize the multiple graphics layers and perform hole digging processing on the at least one graphics layer based on a first vertical synchronization signal;
    the display driver is specifically configured to superimpose the synthesized graphics layer and the processed video layer when a second vertical synchronization signal arrives, so as to obtain the display data;
    wherein the first vertical synchronization signal and the second vertical synchronization signal are independent of each other.
  25. The apparatus of claim 24, wherein,
    the framework layer is specifically configured to, when an effective signal of the first vertical synchronization signal arrives, send, in the first thread, information related to a size of the video layer to the Media HW, and then call the graphics hardware to synthesize the multiple graphics layers and perform hole digging processing on the at least one graphics layer according to the information related to the size of the video layer.
  26. The apparatus of claim 24 or 25, wherein a frame rate of the first vertical synchronization signal is less than a frame rate of the second vertical synchronization signal.
  27. The apparatus according to any one of claims 22 to 26, wherein:
    when the second thread acquires a first Video Buffer, sending first notification information to the first thread;
    the framework layer is further configured to obtain, in the first thread, a size of the first Video Buffer after the first thread receives the first notification information, and set information related to the size of the Video layer according to the size of the first Video Buffer; or alternatively,
    when the second thread detects that the size of the Video Buffer is changed, first notification information is sent to the first thread;
    The framework layer is further configured to obtain, in the first thread after the first thread receives the first notification information, a size of the changed Video Buffer, and set information related to the size of the Video layer according to the size of the changed Video Buffer;
    wherein, the Video Buffer stores the data of one Video layer, and the size of the Video Buffer is related to the size of the Video layer.
  28. The apparatus according to any of the claims 22 to 27, wherein the graphics hardware comprises a hardware synthesizer HWC, the framework layer being specifically configured to:
    in the first thread, calling the HWC to synthesize the multi-layer graphic layer and hole digging the at least one graphic layer to obtain the synthesized graphic layer;
    the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the HWC is further configured to:
    sending the synthesized graphic layer to the display driver;
    the Media HW is further configured to send the processed video layer to the display driver.
  29. The apparatus according to any one of claims 22 to 27, wherein the graphics hardware comprises a graphics processor GPU, the framework layer being specifically configured to:
    In the first thread, the GPU is called to synthesize the multi-layer graphic layers and hole digging is carried out on at least one layer of graphic layers, so that the synthesized graphic layers are obtained;
    the display driver superimposes the synthesized graphics layer and the processed video layer, and before obtaining display data, the GPU is further configured to:
    returning the synthesized graphic layer to the frame layer;
    the framework layer is further configured to send the synthesized graphics layer to the HWC;
    the HWC is also used for sending the synthesized graphic layer to the display driver;
    the Media HW is further configured to send the processed video layer to the display driver.
  30. An apparatus as claimed in any one of claims 22 to 29, wherein the framework layer comprises a SurfaceFlinger configured to:
    creating the first thread and the second thread in an initialization phase;
    the SurfaceFlinger is further used to,
    when a Graphic Buffer is received, notifying the first thread to process Graphic layer data in the Graphic Buffer;
    when the Video Buffer is received, notifying the second thread to process Video layer data in the Video Buffer;
    And the Graphic Buffer stores one layer of Graphic layer data, and the Video Buffer stores one layer of Video layer data.
  31. The apparatus according to any one of claims 22 to 30, wherein said Media HW is specifically configured to:
    receiving first frame video layer data in the second thread;
    receiving information related to the size of the video layer in the first thread;
    the framework layer is specifically used for:
    and in the second thread, calling the Media HW to process the first frame video layer data according to the information related to the size of the video layer.
  32. A computer readable storage medium having instructions stored therein which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1 to 11.
  33. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1 to 11.
CN202080102044.9A 2020-06-15 2020-06-15 Image data processing device and method Pending CN116075804A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/096018 WO2021253141A1 (en) 2020-06-15 2020-06-15 Image data processing apparatus and method

Publications (1)

Publication Number Publication Date
CN116075804A true CN116075804A (en) 2023-05-05

Family

ID=79268886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080102044.9A Pending CN116075804A (en) 2020-06-15 2020-06-15 Image data processing device and method

Country Status (2)

Country Link
CN (1) CN116075804A (en)
WO (1) WO2021253141A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130766A (en) * 2023-01-17 2023-11-28 荣耀终端有限公司 Thread processing method and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10257487B1 (en) * 2018-01-16 2019-04-09 Qualcomm Incorporated Power efficient video playback based on display hardware feedback
CN111198735A (en) * 2018-11-20 2020-05-26 深圳市优必选科技有限公司 Layer information acquisition method, layer information acquisition device and terminal equipment
CN109934795B (en) * 2019-03-04 2021-03-16 京东方科技集团股份有限公司 Display method, display device, electronic equipment and computer readable storage medium
CN111124562A (en) * 2019-11-15 2020-05-08 北京经纬恒润科技有限公司 Application program double-screen display method and device

Also Published As

Publication number Publication date
WO2021253141A1 (en) 2021-12-23

Similar Documents

Publication Publication Date Title
CN112004086B (en) Video data processing method and device
CN104869305B (en) Method and apparatus for processing image data
CN108762881B (en) Interface drawing method and device, terminal and storage medium
WO2022083296A1 (en) Display method and electronic device
KR20160061133A (en) Method for dispalying image and electronic device thereof
CN115793916A (en) Method, electronic device and system for displaying multiple windows
CN113409427B (en) Animation playing method and device, electronic equipment and computer readable storage medium
EP3828832B1 (en) Display control method, display control device and computer-readable storage medium
CN112348929A (en) Rendering method and device of frame animation, computer equipment and storage medium
CN110673944B (en) Method and device for executing task
KR20150027934A (en) Apparatas and method for generating a file of receiving a shoot image of multi angle in an electronic device
CN110968815B (en) Page refreshing method, device, terminal and storage medium
CN110045958B (en) Texture data generation method, device, storage medium and equipment
CN116166256A (en) Interface generation method and electronic equipment
CN111865630A (en) Topology information acquisition method, device, terminal and storage medium
CN116075804A (en) Image data processing device and method
CN110971840B (en) Video mapping method and device, computer equipment and storage medium
WO2023005751A1 (en) Rendering method and electronic device
WO2022151937A1 (en) Interface display method and electronic device
CN116166255A (en) Interface generation method and electronic equipment
CN116688494B (en) Method and electronic device for generating game prediction frame
CN116708931B (en) Image processing method and electronic equipment
CN114500979B (en) Display device, control device, and synchronization calibration method
WO2024078306A1 (en) Banner notification message display method and electronic device
CN113127130B (en) Page jump method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination