CN117692714A - Video display method and electronic equipment


Info

Publication number: CN117692714A
Application number: CN202310852349.7A
Authority: CN (China)
Prior art keywords: video, electronic device, HDR, OpenGL, memory
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 吴孟函
Current Assignee: Honor Device Co Ltd
Original Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority to CN202310852349.7A
Publication of CN117692714A

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video display method and electronic equipment. The method can be applied to electronic devices with image processing capability, such as smartphones and tablet computers. When a video editing application cannot normally display the video frames of an HDR video, the electronic device uses a SurfaceView as the display control of the preview area of the video editing application and sets the attribute of the layer to which the SurfaceView belongs to BT2020, so that the HDR video can be displayed on the user interface of the video editing application.

Description

Video display method and electronic equipment
Technical Field
The present application relates to the field of terminals, and in particular, to a video display method and an electronic device.
Background
With the development of electronic technology, electronic devices such as mobile phones and tablet computers can support shooting, storing and playing various types of video, for example, standard dynamic range (Standard Dynamic Range, SDR) video and high dynamic range (High Dynamic Range, HDR) video. Compared with SDR video, HDR video can include richer color effects and record more image details, so that HDR video can present a better viewing effect.
For SDR video, in response to a video editing operation on the SDR video, the electronic device can perform video editing on the SDR video, for example, clipping, adding text, and the like. A video editing application on the electronic device can perform video editing on SDR video, but cannot perform video editing on HDR video, and therefore cannot display HDR video in the preview area of the video editing application.
Disclosure of Invention
The application provides a video display method and electronic equipment, which can display HDR video in the preview area of a video editing application and help improve the image processing capability of the electronic device.
In a first aspect, the present application provides a video display method, applied to an electronic device, including: displaying a first user interface, wherein the first user interface comprises a first display window for displaying a first video and an editing control, and the video type of the first video is HDR video; detecting a first operation on the editing control, and in response to the first operation: creating a SurfaceView and applying for a first Surface for the SurfaceView; creating a decoder; applying for a second Surface and binding the first Surface with the second Surface; decoding, by the decoder, the first video into N first video frames; performing format conversion on the N first video frames through OpenGL to obtain N second video frames output by OpenGL; invoking a graphics processor (Graphics Processing Unit, GPU) of the electronic device to perform format conversion on the N second video frames to obtain N third video frames, and outputting the N third video frames to the second Surface; based on the binding between the first Surface and the second Surface, outputting the N third video frames on the second Surface to the first Surface; setting the attribute of the layer to which the SurfaceView belongs to BT2020, and synthesizing a preview video of the first video based on the N third video frames on the first Surface and the attribute of the layer to which the SurfaceView belongs, wherein the video type of the preview video is HDR video; and displaying a second user interface, wherein the second user interface comprises a second display window for displaying the preview video, and the display control of the second display window is the SurfaceView.
By implementing the video display method provided in the first aspect, the electronic device uses the SurfaceView as the display control of the preview area of the video editing application and sets the attribute of the layer to which the SurfaceView belongs to BT2020, so that HDR video can be displayed in the preview area of the video editing application, editing of the HDR video can further be realized, and the image processing capability of the electronic device is improved.
In some embodiments, in combination with the method provided in the first aspect, the format of the first video frame includes: the color coding format is the YUV format, the data type of the color values is integer, and the color gamut is BT2020; the format of the first video frame may be expressed as (YUV, INT, BT2020);
the format of the second video frame includes: the color coding format is the RGB format, the data type of the color values is floating point, and the color gamut is BT2020; the format of the second video frame may be expressed as (RGB, FLOAT, BT2020);
the format of the third video frame includes: the color coding format is the YUV format, the data type of the color values is integer, and the color gamut is BT2020; the format of the third video frame may be expressed as (YUV, INT, BT2020).
Optionally, the first video frame, the second video frame and the third video frame are all video frames encoded with the perceptual quantizer (Perceptual Quantizer, PQ) curve, and accordingly the format of the first video frame may be expressed as (YUV, INT, BT2020, PQ), the format of the second video frame as (RGB, FLOAT, BT2020, PQ), and the format of the third video frame as (YUV, INT, BT2020, PQ).
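As a quick illustration only (the record and constant names below are hypothetical, not part of the application), the three frame formats in the pipeline can be written down as plain Java values:

```java
// Illustrative only: the format tuples described above, expressed as plain Java values.
public final class PipelineFormats {
    // (color encoding, color-value data type, color gamut, transfer function)
    public record FrameFormat(String encoding, String dataType, String gamut, String transfer) {}

    // First video frame: decoder output
    public static final FrameFormat FIRST  = new FrameFormat("YUV", "INT",   "BT2020", "PQ");
    // Second video frame: OpenGL output
    public static final FrameFormat SECOND = new FrameFormat("RGB", "FLOAT", "BT2020", "PQ");
    // Third video frame: GPU output written to the second Surface
    public static final FrameFormat THIRD  = new FrameFormat("YUV", "INT",   "BT2020", "PQ");

    private PipelineFormats() {}
}
```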
In some embodiments, the second user interface further comprises a video type control for indicating that the video type of the preview video is HDR video, so that the user knows that the preview video displayed in the second user interface is HDR video.
In some embodiments, in combination with the method provided in the first aspect, the video type control in the second user interface may not only indicate a video type of the preview video, but also output a video type adjustment window in response to the second operation when the second operation acting on the video type control is detected;
wherein the video type adjustment window includes at least one of the following options: export format, video type, resolution, and frame rate. The export format includes a video format option and a GIF picture format option, the video type includes an HDR video type option and a normal video type option, the resolution includes a 1080P option and a 2K/4K option, and the frame rate includes options of 24, 25, 30, 50, and 60. The normal video type is the SDR video type.
The video type adjustment window provides a plurality of options for a user, so that the user can adaptively adjust the video type, resolution, frame rate and the like of the preview video according to requirements, and the image processing capability of the electronic device can be further improved.
In some embodiments, in combination with the method provided in the first aspect, after the second user interface outputs the video type adjustment window, the method further includes: detecting a switching operation acting on the video type in the video type adjustment window, and in response to the switching operation: converting the preview video from HDR video to SDR video; and displaying the SDR video of the first video in the second display window. In this way, the preview video is switched from HDR video to SDR video, so that the preview area of the video editing application can display both HDR video and SDR video and can switch the display between them. For example, if the HDR video type is selected before the switching operation, the switching operation may be an operation of clicking the normal video type; if the normal video type is selected before the switching operation, the switching operation may be an operation of clicking the HDR video type.
In some embodiments, in combination with the method provided in the first aspect, converting the preview video from HDR video to SDR video includes: converting the HDR nonlinear electrical signal of a third video frame n into an HDR linear optical signal through an electro-optical transfer function (EOTF); performing color space conversion on the HDR linear optical signal; performing tone mapping on the HDR linear optical signal after the color space conversion to obtain an SDR linear optical signal; and converting the SDR linear optical signal into an SDR nonlinear electrical signal through an opto-electrical transfer function, to obtain the SDR nonlinear electrical signal of the third video frame n. The third video frame n is any one of the N third video frames. This process can quickly convert the preview video from HDR video to SDR video, reducing the waiting time of the user.
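The four steps above can be sketched per pixel as follows. This is a minimal illustration, not the application's actual implementation: the SMPTE ST 2084 (PQ) EOTF constants and the BT.2020-to-BT.709 matrix are standard published values, the Reinhard curve stands in for the unspecified tone-mapping operator, and normalizing by the frame's maximum luminance is an assumption about how the luminance parameter enters the EOTF.

```java
/** Minimal per-pixel sketch of HDR (PQ/BT.2020) to SDR (BT.709) conversion. */
public final class HdrToSdr {
    // SMPTE ST 2084 (PQ) constants
    private static final double M1 = 2610.0 / 16384.0;
    private static final double M2 = 2523.0 / 4096.0 * 128.0;
    private static final double C1 = 3424.0 / 4096.0;
    private static final double C2 = 2413.0 / 4096.0 * 32.0;
    private static final double C3 = 2392.0 / 4096.0 * 32.0;

    /** Step 1: EOTF — PQ non-linear signal (0..1) to linear light, normalized by peakNits. */
    static double pqEotf(double e, double peakNits) {
        double p = Math.pow(e, 1.0 / M2);
        double y = Math.pow(Math.max(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1);
        return 10000.0 * y / peakNits;   // scaled by the frame's maximum luminance (assumption)
    }

    /** Step 2: color space conversion, BT.2020 -> BT.709 primaries (ITU-R BT.2087 matrix). */
    static double[] bt2020ToBt709(double r, double g, double b) {
        return new double[] {
             1.6605 * r - 0.5876 * g - 0.0728 * b,
            -0.1246 * r + 1.1329 * g - 0.0083 * b,
            -0.0182 * r - 0.1006 * g + 1.1187 * b
        };
    }

    /** Step 3: tone mapping; a simple Reinhard curve stands in for the unspecified operator. */
    static double toneMap(double linear) {
        return linear / (1.0 + linear);
    }

    /** Step 4: OETF — BT.709 opto-electrical transfer function. */
    static double bt709Oetf(double l) {
        l = Math.min(Math.max(l, 0.0), 1.0);
        return (l < 0.018) ? 4.5 * l : 1.099 * Math.pow(l, 0.45) - 0.099;
    }

    /** Converts one PQ/BT.2020 pixel (r, g, b in 0..1) to an SDR/BT.709 pixel. */
    public static double[] convertPixel(double r, double g, double b, double maxLumNits) {
        double[] lin = { pqEotf(r, maxLumNits), pqEotf(g, maxLumNits), pqEotf(b, maxLumNits) };
        double[] p709 = bt2020ToBt709(lin[0], lin[1], lin[2]);
        double[] sdr = new double[3];
        for (int i = 0; i < 3; i++) sdr[i] = bt709Oetf(toneMap(p709[i]));
        return sdr;
    }
}
```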
In some embodiments, in combination with the method provided in the first aspect, the EOTF is related to a luminance parameter of the third video frame n, where the luminance parameter is the maximum luminance of the third video frame n. That is, the luminance parameter may differ for each third video frame, so the electro-optical conversion can vary from frame to frame. Taking the luminance parameter into account during the electro-optical conversion helps improve the contrast between the HDR video frame and the SDR video frame.
In some embodiments, in combination with the method provided in the first aspect, before converting the HDR nonlinear electrical signal of the third video frame n into the HDR linear optical signal, the method further includes: dividing the third video frame n into L grouping areas, and calculating the maximum luminance of each of the L grouping areas to obtain L luminance maxima, where L is a positive integer; selecting the first K luminance maxima from the L luminance maxima in descending order, where K is an integer greater than 1 and less than L; and calculating the average of the K luminance maxima, where the average is the maximum luminance of the third video frame n. The luminance parameter determined in this way helps improve the contrast between the HDR video frame and the SDR video frame.
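A minimal sketch of this top-K estimate, assuming the per-region maxima have already been computed (class and parameter names are illustrative, not from the application):

```java
/** Sketch of the luminance-parameter estimate described above. */
public final class MaxLuminance {
    /**
     * @param regionMaxNits per-region maximum luminance, one entry per grouping area (L values)
     * @param k             how many of the largest region maxima to average (1 < k < L)
     * @return estimated maximum luminance of the frame, in nits
     */
    public static double estimate(double[] regionMaxNits, int k) {
        double[] sorted = regionMaxNits.clone();
        java.util.Arrays.sort(sorted);                 // ascending order
        double sum = 0.0;
        for (int i = 0; i < k; i++) {
            sum += sorted[sorted.length - 1 - i];      // take the K largest maxima
        }
        return sum / k;                                 // average of the top-K maxima
    }
}
```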
In some embodiments, the electronic device includes a video editing application, an application framework layer, and memory; creating a SurfaceView and applying for a first Surface for the SurfaceView may specifically include: the video editing application sends a first request to the application framework layer, where the first request is used to request creation of the SurfaceView; the application framework layer creates the SurfaceView in response to the first request and sends a second request to the memory, where the second request is used to request the memory to allocate the first Surface; the memory allocates the first Surface in response to the second request and sends a second response to the application framework layer, where the second response includes identification information of the first Surface; and the application framework layer sends a first response to the video editing application, where the first response includes the identification information of the first Surface.
In the process of creating the SurfaceView, the video editing application obtains from the memory the identification information of the Surface in the SurfaceView, namely the identification information of the first Surface, and the first Surface is used for displaying the HDR color gamut, namely the BT2020 color gamut.
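From the application side, this request/response flow is typically hidden behind the SurfaceView and SurfaceHolder APIs; the sketch below (class and field names are illustrative, not from the application) shows one way a video editing application could create a SurfaceView and obtain the Surface it holds:

```java
import android.content.Context;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.FrameLayout;

/** App-level sketch of "create a SurfaceView and obtain its Surface". */
public final class PreviewSurfaceHelper implements SurfaceHolder.Callback {
    private Surface firstSurface;   // the "first Surface" held by the SurfaceView

    public SurfaceView attachPreview(Context context, FrameLayout previewArea) {
        SurfaceView surfaceView = new SurfaceView(context);   // request SurfaceView creation
        surfaceView.getHolder().addCallback(this);            // be told when its Surface exists
        previewArea.addView(surfaceView);
        return surfaceView;
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        // The framework has allocated the SurfaceView's Surface; keep a reference to it.
        firstSurface = holder.getSurface();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}
    @Override public void surfaceDestroyed(SurfaceHolder holder) { firstSurface = null; }

    public Surface getFirstSurface() { return firstSurface; }
}
```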
With reference to the method provided in the first aspect, in some embodiments, the electronic device further includes an open graphics library OpenGL; applying for the second Surface, and binding the first Surface with the second Surface, which may specifically include: the video editing application sends a first instruction to the OpenGL, wherein the first instruction is used for indicating the OpenGL to apply for a second Surface to the memory; the OpenGL responds to the first instruction and sends a third request to the memory, wherein the third request is used for requesting the memory to allocate a second Surface; the memory responds to a third request, allocates a second Surface and sends a third response to the OpenGL, wherein the third response comprises identification information of the second Surface; openGL binds the first Surface with the second Surface.
OpenGL binds the first Surface with the second Surface so that OpenGL controls the GPU to transmit video frames on the second Surface to the first Surface to display HDR video frames or HDR video through the Surface view.
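On Android, this kind of binding is commonly realized by creating an EGL window surface on top of the SurfaceView's Surface, so the sketch below is offered as an assumption about the mechanism rather than a statement of the application's internals. The colorspace attribute values come from the EGL_KHR_gl_colorspace and EGL_EXT_gl_colorspace_bt2020_pq extensions; whether the device's EGL implementation supports them must be checked at run time.

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

/** Sketch: create an EGL window surface (the "second Surface") on the SurfaceView's Surface. */
public final class EglBinder {
    private static final int EGL_GL_COLORSPACE_KHR = 0x309D;            // from EGL_KHR_gl_colorspace
    private static final int EGL_GL_COLORSPACE_BT2020_PQ_EXT = 0x3340;  // from EGL_EXT_gl_colorspace_bt2020_pq

    public static EGLSurface bind(EGLDisplay display, EGLConfig config, Surface firstSurface) {
        int[] attribs = {
                EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_BT2020_PQ_EXT,  // ask for BT2020 PQ output
                EGL14.EGL_NONE
        };
        // The returned EGLSurface plays the role of the "second Surface"; it is backed
        // by (bound to) the SurfaceView's first Surface.
        return EGL14.eglCreateWindowSurface(display, config, firstSurface, attribs, 0);
    }

    public static void makeCurrent(EGLDisplay display, EGLSurface secondSurface, EGLContext context) {
        EGL14.eglMakeCurrent(display, secondSurface, secondSurface, context);
    }
}
```

With such a binding, every buffer swapped on the second Surface is queued to the first Surface, which is what the later rendering steps rely on.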
With reference to the method provided in the first aspect, in some embodiments, the electronic device further includes a MediaCodec for creating a decoder; after creating the decoder, the above method further comprises: the video editing application sends a fourth request to the MediaCodec, wherein the fourth request is used for requesting a third Surface; the third Surface is used for storing N first video frames output by the decoder; the MediaCodec responds to the fourth request and sends a fifth request to the memory, wherein the fifth request is used for requesting the memory to allocate a third Surface; the memory responds to the fifth request, allocates a third Surface and sends a fifth response to the MediaCodec, wherein the fifth response comprises identification information of the third Surface; the MediaCodec sends a fourth response to the video editing application, the fourth response including identification information of the third Surface.
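One common way to wire the decoder to a "third Surface" is to back that Surface with a SurfaceTexture that OpenGL can later sample; the sketch below assumes this arrangement (variable names are illustrative, and the MIME type in the comment is only an example):

```java
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.view.Surface;

/** Sketch of creating a decoder whose output goes to a "third Surface". */
public final class DecoderSetup {
    public static MediaCodec createDecoder(MediaExtractor extractor, int videoTrack,
                                           int oesTextureId) throws java.io.IOException {
        MediaFormat format = extractor.getTrackFormat(videoTrack);
        String mime = format.getString(MediaFormat.KEY_MIME);       // e.g. "video/hevc" for HDR10

        // The "third Surface": decoded frames are queued here for OpenGL to consume.
        SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
        Surface thirdSurface = new Surface(surfaceTexture);

        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(format, thirdSurface, null, 0);           // decode directly to the Surface
        decoder.start();
        return decoder;
    }
}
```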
With reference to the method provided in the first aspect, in some embodiments, the method further includes: the decoder created by the MediaCodec sends the N first video frames to the memory; and the memory writes the N first video frames into the third Surface. Performing format conversion on the N first video frames through OpenGL to obtain the N second video frames output by OpenGL includes: the video editing application sends a second instruction to OpenGL, where the second instruction is used to instruct OpenGL to read the N first video frames from the third Surface; in response to the second instruction, OpenGL reads the N first video frames from the third Surface and performs format conversion on them to obtain the N second video frames output by OpenGL; and OpenGL stores the N second video frames in the first video memory.
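Continuing the assumption that the third Surface is backed by a SurfaceTexture, reading a decoded frame into OpenGL can look like the following sketch; the shader program and the GL thread loop are assumed to exist elsewhere:

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

/** Sketch of "OpenGL reads the first video frames from the third Surface": each decoded
 *  frame queued on the SurfaceTexture is latched into an external OES texture, which the
 *  fragment shader then samples (the driver performs the YUV -> RGB conversion). */
public final class FrameReader implements SurfaceTexture.OnFrameAvailableListener {
    private final SurfaceTexture surfaceTexture;
    private final int oesTextureId;
    private volatile boolean frameAvailable;

    public FrameReader(SurfaceTexture surfaceTexture, int oesTextureId) {
        this.surfaceTexture = surfaceTexture;
        this.oesTextureId = oesTextureId;
        surfaceTexture.setOnFrameAvailableListener(this);
    }

    @Override public void onFrameAvailable(SurfaceTexture st) { frameAvailable = true; }

    /** Call on the GL thread once a frame is available. */
    public void latchFrame() {
        if (!frameAvailable) return;
        frameAvailable = false;
        surfaceTexture.updateTexImage();                              // read the newest decoded frame
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,       // bind it for the shader to sample
                oesTextureId);
    }
}
```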
In combination with the method provided in the first aspect, in some embodiments, invoking a GPU of the electronic device to perform format conversion on the N second video frames to obtain N third video frames, and outputting the N third video frames to the second Surface, where the method specifically includes: the OpenGL sends a third instruction to the GPU, wherein the third instruction is used for instructing the GPU to perform format conversion on N second video frames; the GPU responds to the third instruction to perform format conversion on N second video frames stored on the first video memory to obtain N third video frames; the OpenGL sends a fourth instruction to the GPU, wherein the fourth instruction is used for instructing the GPU to output N third video frames to the second Surface; the fourth instruction comprises identification information of the second Surface; and responding to the fourth instruction by the GPU, and outputting N third video frames to the second Surface based on the identification information of the second Surface.
In some embodiments, the third instruction or the fourth instruction includes identification information of the first Surface bound to the second Surface;
based on the binding between the first Surface and the second Surface, outputting the N third video frames on the second Surface to the first Surface, wherein the method specifically comprises the following steps: the GPU determines the binding of the first Surface and the second Surface based on the third instruction or the fourth instruction, and outputs N third video frames on the second Surface to the first Surface based on the binding of the first Surface and the second Surface.
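A sketch of the final hand-off, under the same assumptions as the earlier EGL sketch: the converted frame is drawn on the EGL window surface (the second Surface), and eglSwapBuffers queues it to the SurfaceView's Surface (the first Surface) it was created from. The shader program and quad geometry are assumed to be set up elsewhere.

```java
import android.opengl.EGL14;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;

/** Sketch of "output the third video frames to the second Surface, which forwards them
 *  to the bound first Surface". */
public final class FrameWriter {
    public static void drawAndQueue(EGLDisplay display, EGLSurface secondSurface,
                                    int program, long presentationTimeNs) {
        GLES20.glUseProgram(program);                          // format-conversion shader
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);   // full-screen quad

        // Tag the frame with its timestamp, then hand the buffer to the bound first Surface.
        android.opengl.EGLExt.eglPresentationTimeANDROID(display, secondSurface, presentationTimeNs);
        EGL14.eglSwapBuffers(display, secondSurface);
    }
}
```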
With reference to the method provided in the first aspect, in some embodiments, the electronic device further includes a hardware abstraction layer; setting the layer to which the SurfaceView belongs to the BT2020 attribute and synthesizing the preview video of the first video based on the N third video frames on the first Surface and the BT2020 attribute may specifically include: the application framework layer monitors that there is a data change on the first Surface and sets the layer to which the SurfaceView belongs to the BT2020 attribute; the application framework layer sends the layer set to the BT2020 attribute to the hardware abstraction layer; and the hardware abstraction layer synthesizes the preview video of the first video based on the N third video frames on the first Surface and the BT2020 attribute. In this way, the second display window of the second user interface can display the HDR video.
In a second aspect, the present application provides an electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories being operable to store computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method as described in the first aspect and any possible implementation of the first aspect.
In a third aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
In a fourth aspect, the present application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation of the first aspect.
It will be appreciated that the electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all configured to perform the method provided in the first aspect of the present application. Therefore, for the advantages they achieve, reference may be made to the advantages of the corresponding method, and details are not described herein again.
Drawings
FIGS. 1A-1F are a set of user interface schematics provided by embodiments of the present application;
fig. 2 is a software architecture diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a flowchart of a video display method provided in an embodiment of the present application;
FIG. 4 is a flowchart of an electronic device initialization video editing environment provided by an embodiment of the present application;
Fig. 5 is a flowchart of an electronic device provided in an embodiment of the present application generating an HDR video frame of an HDR video to be edited;
fig. 6 is a flowchart of an electronic device provided in an embodiment of the present application converting an HDR video frame into an SDR video frame;
fig. 7 is a hardware configuration diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
1. SurfaceView and TextureView
SurfaceView inherits from the View class and is essentially a View, but it has its own Surface, so a SurfaceView has a corresponding WindowState in the window management service (Window Manager Service, WMS) and a corresponding Layer in SurfaceFlinger. Rendering of a SurfaceView can be performed in a separate thread, which can have its own GL context. Because rendering a SurfaceView in an independent thread does not affect the main thread's response to events, and a double-buffering mechanism is used, the picture is smoother when playing video. That is, rendering of the SurfaceView can be performed in a separate thread instead of the main thread.
TextureView also inherits from the View class and is essentially a View; it can project a content stream directly into a View and can be used to implement functions such as live preview. TextureView does not create a separate window in WMS; it is an ordinary View in the View hierarchy and can therefore be moved, rotated, scaled, and animated like other ordinary Views. Because TextureView is drawn in the View hierarchy, it is typically rendered on the main thread.
Because rendering of a SurfaceView can be performed in a separate thread, a SurfaceView can pull out the picture of the video player alone for rendering, whereas TextureView generally renders on the main thread and therefore cannot pull out the picture of the video player alone for rendering.
In view of the advantages of SurfaceView, the embodiments of the present application employ SurfaceView in the application framework layer.
2. Perceptual quantization curve (Perceptual Quantizer, PQ)
PQ is a way to efficiently encode HDR luminance information. Throughout the dynamic range, the difference between each pair of adjacent code values is slightly less than the perceptible difference, so code values are used extremely efficiently. A PQ-encoded signal can be decoded on an HDR-capable device. The HDR videos referred to in the embodiments of the present application are all PQ-encoded HDR videos.
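For reference, the PQ electro-optical transfer function standardized in SMPTE ST 2084 (quoted here for context; the formula is not reproduced in the original application) maps a non-linear code value E' in [0, 1] to a displayed luminance F_D in cd/m²:

```latex
% SMPTE ST 2084 (PQ) reference EOTF
F_D = 10000 \left( \frac{\max\!\left(E'^{\,1/m_2} - c_1,\; 0\right)}{c_2 - c_3\, E'^{\,1/m_2}} \right)^{1/m_1},
\qquad
m_1 = \tfrac{2610}{16384},\;
m_2 = \tfrac{2523}{4096}\times 128,\;
c_1 = \tfrac{3424}{4096},\;
c_2 = \tfrac{2413}{4096}\times 32,\;
c_3 = \tfrac{2392}{4096}\times 32
```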
3. Color gamut
The color gamut represents the range of colors that can be displayed in video encoding. SDR video uses the BT709 color gamut; HDR video uses the BT2020 color gamut. Therefore, compared with SDR video, HDR video can use more color types, has a wider color representation range and a higher brightness range, and can thus support richer image colors and more vivid image details. This also enables HDR video to provide a better viewing effect for the user, thereby enhancing the user's experience.
HDR video and SDR video are not limited to the BT2020 and BT709 color gamuts and may also use other types of color gamuts. In general, however, the color gamut used by HDR video is wider than the color gamut used by SDR video, with more colors and more detail.
In general, the type of video supported by electronic devices such as mobile phones and tablet computers (hereinafter referred to as the electronic device 100) is SDR video. With the development of photographing and image technology, the electronic device 100 supports shooting not only SDR video but also HDR video. Thus, the need for users to edit HDR video also emerges.
Currently, a video editing application used by the electronic device 100 to edit an HDR video is an SDR video editing application, which cannot normally display a color gamut of the HDR video, and cannot perform video editing on the HDR video.
In order to solve the above problems, embodiments of the present application provide a video display method. The method can be applied to an electronic device with image processing capability (namely the electronic device 100), such as a mobile phone or a tablet computer.
By implementing the video display method provided by the embodiment of the present application, the electronic device 100 may use the SurfaceView as a preview area control of the video editing application, and set a layer to which the SurfaceView belongs to be a BT2020 color gamut, so that the electronic device 100 may display an HDR video on a user interface of the video editing application, and in particular, may display the HDR video on a video preview area of the user interface.
Further, the electronic device 100 may add a filter or the like to the HDR video in response to a video editing operation acting on an editing control in the user interface when the video editing operation is detected. Video editing operations include, but are not limited to, adding filters. For example, video editing operations may also include cropping, image inversion, scaling, adding text, adding filters, adding a header (or footer or other page), adding a video watermark or decal, and so forth.
By implementing the video display method provided by the embodiment of the application, the user interface may further include a video type control, the electronic device 100 detects a user operation acting on the video type control, and in response to the user operation, outputs a video type adjustment window on the user interface, where the video type adjustment window includes a video type. The electronic device 100 detects a switching operation acting on the video type, converts the HDR video into the SDR video in response to the switching operation, and switches the HDR video displayed in the video preview area of the user interface into the SDR video.
The electronic device 100 is not limited to a mobile phone or a tablet computer; it may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR) device, a virtual reality (Virtual Reality, VR) device, an artificial intelligence (artificial intelligence, AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device. The embodiments of the application do not particularly limit the specific type of the electronic device.
Fig. 1A to 1F schematically illustrate a set of user interfaces on an electronic device 100, and an application scenario for implementing the video display method provided in the embodiments of the present application is specifically described below with reference to fig. 1A to 1F.
First, fig. 1A illustrates a user interface, i.e., a home page, on the electronic device 100 that presents installed applications. As shown in FIG. 1A, one or more application icons are displayed in the main page, such as a "clock" application icon, a "calendar" application icon, a "weather" application icon, and so forth.
The one or more application icons include a "gallery" application (hereinafter "gallery") icon, i.e., icon 111. The electronic device 100 may detect a user operation on the icon 111, for example, a click operation. In response to the operation, the electronic device 100 may display the user interface shown in fig. 1B.
FIG. 1B illustrates the main interface of the "gallery" when the "gallery" is run on the electronic device 100. The interface may present one or more pictures or videos. The one or more videos include HDR video, LOG video, and other types of video, such as SDR video. LOG video refers to low-saturation, low-brightness video captured in a LOG gray mode, and may also be referred to as LOG gray-scale video.
As shown in fig. 1B, the video indicated by the icon 121 may be LOG video; the video indicated by icon 122 may be HDR video; the video indicated by the icon 123 may be SDR video. When the electronic device 100 presents an HDR video or LOG video, an icon indicating the video may display the type to which the video belongs. In this way, the user can learn the type of video through the information displayed in the icon. For example, the lower left corner of the icon 121 shows a LOG; the lower left corner of the icon 122 shows HDR. The video in fig. 1B that is not marked HDR or LOG is SDR video.
The electronic device 100 may detect a user operation on the icon 122, in response to which the electronic device 100 may display the user interface shown in fig. 1C. Fig. 1C shows a user interface of the electronic device 100 specifically showing a certain picture or video.
As shown in fig. 1C, the user interface may include a window 131. Window 131 may be used to display video that the user selects to browse. For example, in fig. 1B, the video that the user selects to browse is the HDR video indicated by icon 122 ("video M"). Thus, the window 131 may display "video M".
The user interface also includes icons 132, controls 133. Icon 132 may be used to represent the type of video displayed in window 131. For example, "HDR" displayed in the current icon 132 may indicate that "video M" is a video of the HDR type.
The control 133 may be used to receive an operation by a user to edit a video (or picture), and upon detecting the operation, the electronic device 100 may display a user interface for editing the video (or picture) in response to the operation. In general, in the case where the video editing application of the electronic device 100 does not support processing of HDR video, the electronic device 100 does not provide the user with a function of editing HDR video. Accordingly, fig. 1C generally does not include the control 133, i.e., the electronic device 100 does not provide the user with a control to edit the video, because the electronic device 100 cannot output and save the edited HDR video.
In the embodiment of the present application, the electronic device 100 may display the HDR video, and may provide the function of editing the HDR video for the user, so as to meet the editing requirement of the user and improve the use experience of the user. Thus, in the user interface shown in fig. 1C, the electronic device 100 may display the control 133 and may respond to user operations acting on the control 133.
The user interface may also include a control 134, a share control 135, a favorites control 136, a delete control 137, and the like.
Control 134 may be used to present detailed information of the video such as time of capture, location of capture, color coding format, code rate, frame rate, pixel size, and so forth.
The sharing control 135 may be used to send the video M for use by other applications. For example, upon detecting a user operation on the sharing control, in response to the operation, the electronic device 100 may display icons of one or more applications, including an icon of social software 1. Upon detecting a user operation acting on the icon of social software 1, in response to the operation, the electronic device 100 may send the video M to social software 1, through which the user may further share the video with friends.
The favorites control 136 may be used to mark a video. In the user interface shown in fig. 1C, upon detecting a user operation on the favorites control, the electronic device 100 can mark the video M as a video the user likes in response to the operation. The electronic device 100 may generate an album for displaying videos marked as user favorites. In this way, when the video M is marked as a user favorite video, the user can quickly view the video M through the album showing the user's favorite videos.
The delete control 137 may be used to delete video M.
Upon detecting a user operation on control 133, electronic device 100 can display the user interface shown in FIG. 1D. FIG. 1D illustrates a user interface of a video editing application. As shown in fig. 1D, the user interface may include a window 141, a window 142, an operation bar 143, and an operation bar 144.
The window 141 may be used to display a preview video of the video to be edited or of the edited video. Window 141 is the preview area of the user interface of the video editing application. Typically, window 141 displays the cover video frame of the video to be edited. When a user operation on the play button 145 is detected, the window 141 may sequentially display the video frame stream of the video, i.e., play the video. In the present embodiment, the window 141 may be used to display the HDR video to be edited or a preview video of the edited HDR video, i.e., the window 141 may be used to display a preview video of the HDR type.
Window 142 may be used to display a stream of video frames of the edited video. The user may drag window 142 to adjust the video frames displayed in window 141. Specifically, a scale 147 is also shown in fig. 1D. The electronic device 100 may detect a user operation on the window 142 to slide left or right, and in response to the user operation, the position of the video frame stream where the scale 147 is located is different, and at this time, the electronic device 100 may display the video frame where the current scale 147 is located in the window 141.
Icons of a plurality of video editing operations can be displayed in the operation fields 143 and 144. Generally, each icon displayed in the operation field 143 indicates one category of editing operations. The operation field 144 displays the video editing operations belonging to the operation category currently selected in the operation field 143. For example, the operation field 143 includes "clip". The "clip" displayed in bold may indicate that the type of video editing operation currently selected by the user is "clip". At this time, displayed in the operation field 144 are operations belonging to the "clip" category, such as "cut", "volume", "frame", and the like.
For example, the electronic device 100 may detect a user operation on the "split" control, in response to which the electronic device 100 may display one or more operational controls of the split video. The electronic device 100 may record a user's splitting operation, such as a start time and an end time of a first video segment, a start time and an end time of a second video segment, and so on.
For another example, the electronic device 100 may detect a user operation on a "frame" control, in response to which the electronic device 100 may record the size of the video frame set by the user, and then crop the original video frame.
The operation field 144 corresponding to the "clip" operation also includes other editing controls belonging to the "clip" class. In response to a user operation acting on the control, the electronic device 100 may record and execute a video editing operation corresponding to the control, which is not exemplified here.
The user interface also includes an export control 146. When a user operation on the export control 146 is detected, in response to the operation, the electronic device 100 can export the video in its current state to the corresponding file directory. The video in the current state may be a video to which editing operations have been applied, or a video that has not been edited. The export control 146 may also be a save control for saving the video in its current state.
In the present embodiment, the user interface also includes a video type control 148. Upon detecting a user operation on control 133, video type control 148 may display "HDR" by default (e.g., control 148 in fig. 1D displays "HDR"), indicating that window 141 displays a video type that is HDR video. Alternatively, upon detecting a user operation on control 133, video type control 148 may display "hdr|1080P" by default, indicating that window 141 displays video of the type HDR video at 1080P resolution. Alternatively, upon detecting a user operation on control 133, video type control 148 may display "HDR|2K/4K" by default, indicating that window 141 displays video of the type HDR video at a resolution of 2K/4K.
Video type control 148 may provide options for export format, video type, resolution, frame rate, and the like.
The electronic device 100 can detect a user operation on the video type control 148, in response to which the electronic device 100 can output the video type adjustment window shown in fig. 1E in a user interface of the video editing application. Fig. 1E illustrates the electronic device 100 outputting a video type adjustment window at a user interface of a video editing application. The video type adjustment window, window 149 in fig. 1E, may completely cover window 141, or may cover a portion of the content of window 141, the size of which depends on the content being presented. For example, the window may present content of derived format, video type, resolution, and frame rate, thereby providing the user with the option to adjust the content.
The export format may be divided into a video format and a GIF picture format; fig. 1E takes the selection of the video format as an example.
The video types may be classified into an HDR video type and a normal video type, i.e., an SDR video type, with fig. 1E taking the selection of the HDR video type as an example. Alternatively, in the case where the video type is an HDR video type, window 149 may also display text of "high dynamic range"; in the case where the video type is a normal video type, window 149 may also display text of "standard dynamic range" so that the user selects the video type according to the need. The specific text display position may be located above the "normal video" text, flush with the "video type" text.
The resolution can be divided into 1080P and 2K/4K, with 2K/4K selected as an example in FIG. 1E. Optionally, in the case where the resolution is 1080P, the window 149 may also display prompt text describing the picture-quality and storage tradeoff of 1080P; in the case where the resolution is 2K/4K, the window 149 may also display prompt text indicating a clearer picture that occupies more storage space, so that the user can select the resolution as desired. The specific text may be displayed above the "2K/4K" text, flush with the "resolution" text.
The frame rate (Frames Per Second, FPS) can be divided into 24, 25, 30, 50 and 60; fig. 1E takes the selection of 30 as an example. FPS is a term from the imaging field that refers to the number of frames transmitted per second, colloquially the number of pictures of an animation or video. The higher the frame rate, the smoother the playback. Optionally, the window 149 may display the text "the higher the frame rate, the smoother the playback" so that the user can select the frame rate as desired. The specific text may be displayed above the "50" text, flush with the "frame rate (FPS)" text.
Window 149 may also display the size of the video, which is related to the selected export format, video type, resolution, and frame rate.
The content displayed by video type control 148 in FIG. 1E may change as the video type and resolution change. For example, the video type is an HDR video type and the resolution is 2K/4K, then video type control 148 may display "HDR|2K/4K". For another example, if the video type is a normal video type and the resolution is 1080P, then video type control 148 may display "sdr|1080P".
The electronic device 100 may detect a switching operation (or described as a video type switching operation) acting on the "video type" in the window 149, i.e., switching from the HDR video type to the normal video type, in response to which the electronic device 100 may convert the HDR video to the SDR video and display the user interface shown in fig. 1F. Fig. 1F illustrates a user interface for editing an "sdr|2k/4K" video. The video type displayed by window 141 in fig. 1F is SDR video, and the content displayed by video type control 148 is "sdr|2k/4K".
The electronic device 100 may detect a switching operation of "video type" acting in the window 149, and a switching operation of "resolution", i.e., switching from the HDR video type to the normal video type and from 2K/4K to 1080P, and in response to both switching operations, the electronic device 100 may convert the HDR video to the SDR video and adjust the resolution to 1080P, so that the content displayed by the video type control 148 is "sdr|1080P".
The electronic device 100 can detect a user operation on the video type control 148 in fig. 1F, and in response to the operation, output a video type adjustment window in a user interface of the video editing application. If the electronic device 100 detects a switching operation acting on "video type" in the video type adjustment window, i.e., switching from the normal video type to the HDR video type, in response to the operation, the SDR video may be converted to the HDR video and the content displayed by the video type control 148 may be adjusted to "hdr|2k/4K".
With the embodiment of the application, the electronic device 100 may implement the user interfaces shown in fig. 1D to 1F, and the preview area in the user interface of the video editing application on the electronic device 100 may display the HDR video, and may further switch the HDR video displayed in the preview area into the SDR video according to the video type switching operation. The preview region may thus support the display of HDR video in order to enable video editing of the HDR video. The electronic device 100 also supports converting HDR video to SDR video, thereby displaying the SDR video.
The specific process by which the electronic device 100 implements the user interface shown in fig. 1D-1F is described in detail below.
First, fig. 2 exemplarily shows a software architecture of the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
The layered architecture divides the software into several layers, each with clear roles and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into five layers, from top to bottom: an application layer, an application framework layer, Android Runtime and system libraries, a hardware abstraction layer (Hardware Abstraction Layer, HAL), and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application package may include camera, gallery, video, music, navigation, calendar, map, WLAN, etc. applications. In the embodiment of the application, the application program layer further comprises a video editing application. The video editing application has video data processing capability and can provide video editing functions for users, including video data processing such as clipping, rendering, and the like. The video editing application in the embodiment of the application can realize the editing of the HDR video and also can realize the editing of the SDR video. The user interfaces shown in fig. 1D-1F may be viewed as user interfaces provided for the video editing application described above. The video editing application may also be described as a video editor.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build a display interface for an application. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
In the embodiments of the present application, the view system creates the SurfaceView, and the SurfaceView can pull out the picture of the video player alone for rendering. The view system may also apply to the memory for a Surface, which is used for displaying the color gamut of the HDR video, such as the BT2020 color gamut. When detecting that there is a data change on the Surface, the view system may further set the layer to which the SurfaceView belongs to the BT2020 attribute and send the set layer to the hardware abstraction layer, so that the hardware abstraction layer synthesizes the HDR video frames. The telephony manager is used to provide the communication functions of the electronic device 100, such as management of call status (including connected, hung up, and the like). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
In an embodiment of the present application, the application framework layer further includes a media framework. A plurality of tools for editing video and audio are provided in the media frame. Wherein the tool comprises MediaCodec. MediaCodec is a class provided by Android for encoding and decoding audio and video. It includes encoder and decoder.
Wherein an encoder may convert one form of video or audio input to the encoder into another form by a compression technique, and a decoder performs a reverse process of encoding, and may convert one form of video or audio input to the decoder into another form by a decompression technique.
For example, the video input to the encoder may be HDR video. The above HDR video is composed of N HDR video frames with a color gamut of BT 2020. The above N is an integer greater than 1. After receiving the HDR video, the decoder may split the video composed of the N HDR video frames with the color gamut of BT2020 into N independent HDR video frames for the subsequent electronic device 100 to perform image processing on each HDR video frame.
Android Runtime includes a core library and a virtual machine, and is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL or OpenGL ES, etc.), 2D graphics engines (e.g., SGL), etc. The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications. Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The open graphics library (Open Graphics Library, openGL) is provided with a plurality of image rendering functions that can be used to draw three-dimensional scenes from simple graphics to complex. The OpenGL provided by the system library may include a GLES rendering module for executing OpenGL ES pipeline rendering procedures. In the OpenGL ES pipeline rendering flow, the slave CPU may input vertex coordinates, texture coordinates, and texture images to the GPU, while also outputting parameters to the GPU. The whole rendering process is to sequentially execute input data (namely vertex coordinates, texture coordinates and texture images) from a CPU to a GPU in each component module of the GPU, and finally output a block of cache data. In an embodiment of the present application, the GLES rendering module may output a video frame in a PQ-encoded (BT 2020, RGB) format.
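As a small illustration of the CPU-side inputs mentioned above (the attribute names and helper class are hypothetical, not from the application), a full-screen quad's vertex and texture coordinates can be handed to the GPU like this:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

/** Sketch of uploading vertex and texture coordinates for a full-screen quad. */
public final class QuadGeometry {
    // x, y pairs covering the whole viewport, and matching u, v texture coordinates
    private static final float[] POSITIONS  = { -1f, -1f,   1f, -1f,  -1f, 1f,   1f, 1f };
    private static final float[] TEX_COORDS = {  0f,  0f,   1f,  0f,   0f, 1f,   1f, 1f };

    private static FloatBuffer asBuffer(float[] data) {
        FloatBuffer buffer = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        buffer.put(data).position(0);
        return buffer;
    }

    /** Feed the quad to the GPU for the given shader program. */
    public static void bind(int program) {
        int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
        int aTexCoord = GLES20.glGetAttribLocation(program, "aTexCoord");
        GLES20.glEnableVertexAttribArray(aPosition);
        GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 0, asBuffer(POSITIONS));
        GLES20.glEnableVertexAttribArray(aTexCoord);
        GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, 0, asBuffer(TEX_COORDS));
    }
}
```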
In an embodiment of the present application, the system library further includes an EGL. EGL is an interface, i.e., bridge, between a graphics rendering API (e.g., openGL ES) and a local platform (e.g., android) view system. EGL may provide a mechanism to create rendering surfaces and may also provide synchronization functionality of local platform rendering (e.g., window rendering) and OpenGL ES rendering.
In the embodiments of the application, the system library further comprises SurfaceFlinger. A Surface represents a producer in the buffer queue, and SurfaceFlinger is the consumer: SurfaceFlinger receives data buffers from multiple sources, composites them, and sends them to the hardware display layer.
A Hardware Abstraction Layer (HAL) provides a standard interface that can be used to control the actions of the hardware. The HAL comprises a plurality of library modules, each implementing a set of interfaces for a particular type of hardware component, e.g. WLAN module, bluetooth module, etc.
In the embodiments of the application, the HAL also includes an hwcomposer module and an OpenGL ES module. The hwcomposer is the HAL module used for layer composition and display in Android and provides hardware support for the SurfaceFlinger service. The OpenGL ES module is used to provide hardware support for OpenGL ES.
The kernel layer is the basis of the android system, for example, ART relies on the kernel layer to perform underlying functions, such as thread and low-level memory management, etc. The kernel layer is a layer between hardware and software. The kernel layer at least comprises display drive, camera drive, audio drive, sensor drive, GPU drive and the like.
Fig. 3 illustrates a flowchart of the electronic device 100 displaying HDR video. In conjunction with the user interfaces shown in fig. 1A-1F and the software architecture of the electronic device 100 shown in fig. 2, the flow in which the electronic device 100 displays HDR video is specifically described below.
S101, the electronic device 100 detects a user operation acting on an editing control in a user interface of the HDR video to be edited.
The color coding format adopted by the HDR video is YUV format, the data type of the color value of the color channel is Integer (INT), and the color gamut is BT2020. The HDR video to be edited refers to an HDR video that the user wants the electronic device 100 to edit. The user interface for the HDR video to be edited refers to a user interface for presenting the HDR video to be edited, which may include a browse window for the HDR video to be edited, an editing control, and a video type icon that may be used to indicate that the video type in the browse window is the HDR video, i.e. that the video to be edited is the HDR video. The user interface may also include other operational controls, such as a share control, a favorites control, a delete control, and more controls.
When the electronic device 100 detects a click operation on a certain HDR video in the gallery user interface, the user interface of the HDR video may be displayed in response to the click operation, and the electronic device 100 may treat the HDR video as an HDR video to be edited. For example, when the electronic device 100 detects a click operation on the icon 122 in fig. 1B, the user interface shown in fig. 1C may be displayed in response to the click operation, where the user interface is a user interface of the HDR video indicated by the icon 122, that is, a user interface of the HDR video to be edited.
The electronic device 100 detects a user operation acting on an editing control in the user interface of the HDR video to be edited, for example, detects a click operation acting on the editing control 133 in the user interface shown in fig. 1C. In other words, the electronic device 100 detects, in the case of outputting a user interface of an HDR video to be edited, whether a touch instruction for an editing control in the user interface is received.
S102, the electronic device 100 detects the user operation, and initializes the video editing environment in response to the user operation.
The electronic device 100 detects a user operation on the editing control, and in response to the user operation, may initialize the video editing environment. Initializing a video editing environment refers to creating or applying for tools and storage space required to edit a video so that the electronic device 100 can perform data processing of the edited video.
Initializing the video editing environment includes: creating an encoder, creating a decoder, setting up OpenGL rendering, and applying for memory used to cache video frames as well as video memory provided by the GPU. The decoder can be used to split the video to be edited into a sequence of video frames; the encoder can be used to combine the edited video frames into a video. OpenGL can be used to adjust video frames and/or modify pixels in video frames to change the image content of the video, i.e., to render the video frames. Adjusting the video frames includes adding or removing video frames and modifying the size of the video frames. In the embodiments of the present application, initializing the video editing environment may not include creating an encoder; the encoder may be created after the HDR video is displayed in the preview area of the video editing application, in order to combine the edited HDR video frames into an HDR video. Before creating the decoder, the embodiments of the present application may include creating a SurfaceView, so that the preview area control of the video editing application is implemented through the SurfaceView.
The memory comprises a Surface and a buffer queue. Surface may be used to buffer rendered video frames output by the GPU. The BufferQueue may be used to cache video to be edited that is input by a video editing application. The decoder may split the video to be edited stored in the BufferQueue into a sequence of video frames to be edited.
In particular, FIG. 4 illustrates a flow chart for the electronic device 100 to initialize a video editing environment. As shown in fig. 4, APP may be used to represent a video editing application.
First, step (1) the electronic device 100 may detect a user operation acting on an editing control in a user interface of the HDR video to be edited. Referring to the user interface shown in FIG. 1C, user operations acting on the editing control 133 may be referred to as user operations clicking on the editing control.
In response to the user operation in step (1), the APP performs interface initialization and requests the Framework layer to create a SurfaceView. Because the APP knows that the video to be edited is an HDR video, the APP requests the Framework layer to create a SurfaceView. Further, in step (2), the APP sends a request to create a SurfaceView to the Framework layer; specifically, the APP sends the request to the view system in the Framework layer. In step (3), the Framework layer creates a SurfaceView in response to the request in step (2). When the SurfaceView is created, dataSpace and/or transfer information may be generated in the corresponding SurfaceFlinger layer. The dataSpace and/or transfer information may indicate that the layer has the attribute BT2020, or (BT2020, PQ). The SurfaceView can render the video player's picture on a separate layer, and can also be used to generate a layer with the BT2020 attribute, i.e., to set the Surface in the SurfaceView to BT2020. In the embodiment of the present application, the SurfaceView is used as the preview area window of the video editing application.
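As a hedged illustration only, the following Kotlin sketch shows how a video editing application might create a SurfaceView for its preview area and obtain the underlying Surface once the Framework has created it. Apart from the public Android classes (SurfaceView, SurfaceHolder), the helper class name PreviewController and the onSurfaceReady callback are assumptions introduced for this sketch.

```kotlin
import android.content.Context
import android.view.Surface
import android.view.SurfaceHolder
import android.view.SurfaceView
import android.view.ViewGroup

// Hypothetical helper: wires a SurfaceView into the preview area of the
// video editing application and hands its Surface to the rest of the pipeline.
class PreviewController(context: Context, previewArea: ViewGroup) {

    // Invoked with the Surface (Surface C in the flow above) once it exists.
    var onSurfaceReady: ((Surface) -> Unit)? = null

    private val surfaceView = SurfaceView(context).apply {
        holder.addCallback(object : SurfaceHolder.Callback {
            override fun surfaceCreated(holder: SurfaceHolder) {
                // The Framework layer has created the SurfaceView's Surface;
                // it can now be used as the preview window of the editor.
                onSurfaceReady?.invoke(holder.surface)
            }

            override fun surfaceChanged(
                holder: SurfaceHolder, format: Int, width: Int, height: Int
            ) { /* size/format changes are ignored in this sketch */ }

            override fun surfaceDestroyed(holder: SurfaceHolder) {
                /* release GL/codec resources here */
            }
        })
    }

    init {
        previewArea.addView(surfaceView)
    }
}
```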
After creating the SurfaceView, the Framework layer applies for a block of memory, e.g., Surface C, from the memory. The memory may provide a plurality of Surfaces. Each Surface carries an identification (ID) indicating that Surface. For any Surface, the Surface ID is in one-to-one correspondence with the Surface address. For example, assume that the ID of Surface-01 is 01 and its addresses are 0011-0100. When identifying that the ID of a Surface is 01, the electronic device 100 may determine that the Surface is Surface-01 and that its address is 0011-0100; conversely, when the address used by a Surface is identified as 0011-0100, the electronic device 100 may determine that the Surface is Surface-01.
In step (4), the Framework layer sends a request to the memory to apply for a Surface. In step (5), the memory responds to the request in step (4) and allocates Surface C, which is a block of memory without a format. In step (6), the memory returns the ID and/or address of Surface C to the Framework layer. The Framework layer receives the ID and/or address of Surface C and may bind Surface C to the SurfaceView based on the ID and/or address. That is, the Surface in the SurfaceView is Surface C. In step (7), the Framework layer then returns a creation-success response to the APP in reply to the request in step (2); the response may carry the ID and/or address of Surface C. That is, the Framework layer applies for a block of memory from the memory, and executes a creation-success callback when the memory allocates Surface C.
Step (2) -step (7) can be understood as a process of interface initialization.
Step (8) the APP may send a request to create a decoder to MediaCodec. The request may carry attribute information of the HDR video. The attribute information can be used to indicate the color gamut and color coding format of the video to be edited, and because the video to be edited is HDR video, the attribute information carried by the request is BT2020 color gamut and YUV color coding format.
In step (9), in response to the request in step (8), MediaCodec may create a decoder. When creating the decoder, MediaCodec does not need to specify the type of video that the decoder supports decoding; the decoder can determine the type of the video to be decoded after receiving the video to be decoded input by the APP. In the embodiment of the present application, the video to be edited is HDR video, and therefore the decoder may be used to decode HDR video. In step (10), after MediaCodec creates the decoder, the decoder may send a request to the memory to apply for a block of memory (a BufferQueue). The BufferQueue may be used to receive the video to be decoded input by the APP. In step (11), in response to the request in step (10), the memory may allocate a block of memory, i.e., the BufferQueue, for the decoder. In step (12), the memory then returns the address of the BufferQueue to the decoder.
After receiving the address of the BufferQueue returned by the memory, the decoder can locate the available BufferQueue in the memory according to that address. Subsequently, in step (13), the decoder may return a response to the APP indicating that the decoder was created successfully.
Upon receiving the response indicating that the decoder was created successfully, in step (14) the APP may send a request to the decoder to create a Surface. A Surface is a memory storage space with a specific data structure; typically, a Surface is dedicated to buffering decoded video frames. In step (15), in response to the request in step (14), the decoder may send a request to the memory to apply for a Surface, such as a request to apply for Surface A. In step (16), in response to the application in step (15), the memory may partition a block of memory, such as Surface A, for use by the decoder. Surface A is used to receive and store video frames in BT2020, YUV format, i.e., HDR video frames.
In step (17), after the allocation of Surface A for the decoder is completed, the memory may return the ID and/or address of Surface A to the decoder. After receiving the returned information, the decoder may confirm that the Surface application was successful and determine the ID and/or address of the usable Surface A. Further, in step (18), the decoder may return the ID and/or address of Surface A to the APP. Thus, the APP can determine that the decoder has completed applying for a Surface from the memory, and can determine the ID and/or address of the usable Surface A that the decoder applied for.
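A minimal hedged sketch of the decoder setup described in steps (8) to (18): in the public MediaCodec API, the Surface that receives decoded frames (Surface A above) is passed in at configure() time, and the BT2020/PQ attribute information corresponds to the color-standard and color-transfer keys of MediaFormat. The HEVC MIME type and the outputSurface parameter are assumptions for illustration; a real implementation would take the track format from MediaExtractor.

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat
import android.view.Surface

// Creates a video decoder for the HDR video to be edited and attaches the
// output Surface (Surface A in the flow above). Assumes an HEVC track.
fun createHdrDecoder(width: Int, height: Int, outputSurface: Surface): MediaCodec {
    val format = MediaFormat.createVideoFormat(
        MediaFormat.MIMETYPE_VIDEO_HEVC, width, height
    ).apply {
        // Attribute information of the HDR video: BT2020 color gamut, PQ transfer.
        setInteger(MediaFormat.KEY_COLOR_STANDARD, MediaFormat.COLOR_STANDARD_BT2020)
        setInteger(MediaFormat.KEY_COLOR_TRANSFER, MediaFormat.COLOR_TRANSFER_ST2084)
    }
    val decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_HEVC)
    // Decoded (YUV, BT2020) frames will be written to outputSurface.
    decoder.configure(format, outputSurface, null, 0)
    decoder.start()
    return decoder
}
```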
In step (19), the APP may then send an initialization request to OpenGL. The initialization request may carry information of Surface A. The Surface A information may be used to indicate to OpenGL the ID and/or address of Surface A, which the decoder uses to receive decoded video frames.
In step (20), according to the Surface A information carried in the initialization request, OpenGL may determine the Surface A used by the decoder; that is, OpenGL can determine into which buffer (Surface) the video frames are written once they have been processed and output. In the embodiment of the present application, writing to Surface A is taken as an example.
In addition, in step (21), after receiving the initialization request, OpenGL applies for a block of video memory from the GPU, denoted video memory a. Video memory a can be used to cache the video frames to be edited. Video memory a may be a texture in OpenGL or a Frame Buffer Object (FBO).
In response to step (21), the GPU may partition a block of video memory for OpenGL as the video memory (video memory a) applied for by OpenGL. In step (22), the GPU may return the address of video memory a to OpenGL. After receiving the address of video memory a, OpenGL can locate video memory a through the address and then use it. In step (23), OpenGL switches the video memory format to 10 bits, so that OpenGL retains the precision of the HDR video frames during data transmission and the HDR video can be displayed in the preview area. In step (24), OpenGL may then return initialization completion information to the APP, indicating to the APP that OpenGL has completed initialization.
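By way of a hedged sketch, allocating video memory a and switching it to a 10-bit format could look like the following, using a GL_RGB10_A2 texture attached to a framebuffer object. Whether a given device uses GL_RGB10_A2, a 16-bit float format, or some other layout is an assumption here.

```kotlin
import android.opengl.GLES20
import android.opengl.GLES30

// Allocates a 10-bit texture and binds it to a framebuffer object so that
// intermediate HDR frames keep their precision (the "video memory a" above).
// Returns (framebuffer id, texture id).
fun createHdrRenderTarget(width: Int, height: Int): Pair<Int, Int> {
    val tex = IntArray(1)
    GLES20.glGenTextures(1, tex, 0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0])
    // GL_RGB10_A2 keeps 10 bits per color channel; GL_RGBA16F would also work.
    GLES30.glTexStorage2D(GLES30.GL_TEXTURE_2D, 1, GLES30.GL_RGB10_A2, width, height)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)

    val fbo = IntArray(1)
    GLES20.glGenFramebuffers(1, fbo, 0)
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0])
    GLES20.glFramebufferTexture2D(
        GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0
    )
    check(GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER) == GLES20.GL_FRAMEBUFFER_COMPLETE) {
        "framebuffer incomplete"
    }
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0)
    return fbo[0] to tex[0]
}
```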
After confirming that OpenGL has completed initialization, in step (25) the APP may send an instruction to OpenGL instructing OpenGL to apply for a Surface, such as Surface B, from the memory. The instruction may carry the ID and/or address of Surface C. In step (26), in response to the instruction in step (25), OpenGL applies for a Surface from the memory. In step (27), in response to the application in step (26), the memory may partition a storage space, such as Surface B, for receiving the YUV output of OpenGL, that is, the picture data after OpenGL rendering is completed. In other words, Surface B is used to receive video frames in YUV color coding format output by OpenGL. In step (28), the memory may return the ID and/or address of Surface B to OpenGL. Further, in step (29), OpenGL may bind Surface B to Surface C. Surface B may also be described as an EGLSurface.
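A hedged sketch of steps (25) to (29): in EGL terms, binding Surface B to Surface C corresponds to creating an EGL window surface (EGLSurface) on top of the SurfaceView's Surface. The BT2020-PQ colorspace attribute comes from the EGL_EXT_gl_colorspace_bt2020_pq extension; the numeric constants below are the values defined by that extension and should be treated as assumptions if the device does not advertise it.

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLDisplay
import android.opengl.EGLSurface
import android.view.Surface

// Constants from EGL_KHR_gl_colorspace / EGL_EXT_gl_colorspace_bt2020_pq
// (not exposed as symbols by EGL14, so they are spelled out here).
private const val EGL_GL_COLORSPACE_KHR = 0x309D
private const val EGL_GL_COLORSPACE_BT2020_PQ_EXT = 0x3340

// Creates the EGL window surface ("Surface B"/EGLSurface) on top of the
// SurfaceView's Surface ("Surface C"), so that eglSwapBuffers() pushes
// rendered HDR frames straight into the SurfaceView layer.
fun createPreviewEglSurface(
    display: EGLDisplay, config: EGLConfig, surfaceViewSurface: Surface
): EGLSurface {
    val attribs = intArrayOf(
        EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_BT2020_PQ_EXT,
        EGL14.EGL_NONE
    )
    val eglSurface = EGL14.eglCreateWindowSurface(display, config, surfaceViewSurface, attribs, 0)
    check(eglSurface != EGL14.EGL_NO_SURFACE) { "eglCreateWindowSurface failed" }
    return eglSurface
}
```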
The process shown in steps (1) through (29) of fig. 4 illustrates a process in which the electronic device 100 initializes a video editing environment. After the editing environment initialization is completed, the electronic device 100 may display the HDR video in the preview area of the video editing application.
S103, the electronic device 100 generates a preview video of the HDR video to be edited, the preview video being the HDR video.
After completing the process of initializing the video editing environment, the electronic device 100 may generate a preview video of the HDR video to be edited using the video editing environment described above.
Steps (1) to (14) in fig. 5 show a specific flow of generating an HDR video frame of an HDR video to be edited by the electronic device 100.
First, step (1) APP may input the HDR video to be edited into the decoder. Specifically, according to the editing environment initialization process, the APP may determine an address of a BufferQueue applied by the decoder for buffering a video to be decoded. After determining the address, APP may write the HDR video to be edited to the BufferQueue. At this time, the color coding format adopted by the HDR video to be edited is YUV format, the data type of the color value of the color channel is Integer (INT), and the color gamut is BT2020.
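The following hedged Kotlin sketch shows one way step (1) can look with the public API: MediaExtractor reads encoded samples of the HDR video to be edited and queues them into the decoder's input buffers (the BufferQueue applied for above). The function name and the simplified end-of-stream handling are assumptions for illustration; the extractor is assumed to already have the video track selected.

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor

// Feeds one encoded sample of the HDR video into the decoder's input buffers.
// Returns false once the end of the stream has been signaled.
fun feedOneSample(extractor: MediaExtractor, decoder: MediaCodec): Boolean {
    val inputIndex = decoder.dequeueInputBuffer(10_000L)
    if (inputIndex < 0) return true // no input buffer available right now

    val inputBuffer = decoder.getInputBuffer(inputIndex) ?: return true
    val sampleSize = extractor.readSampleData(inputBuffer, 0)
    return if (sampleSize < 0) {
        // No more samples: signal end of stream to the decoder.
        decoder.queueInputBuffer(inputIndex, 0, 0, 0L, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
        false
    } else {
        decoder.queueInputBuffer(inputIndex, 0, sampleSize, extractor.sampleTime, 0)
        extractor.advance()
        true
    }
}
```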
Step (2) when it is detected that the video is written in the BufferQueue, the decoder may decode the video stored in the BufferQueue, thereby obtaining a video frame sequence of the video. Thus, after writing the HDR video to be edited into the BufferQueue, the decoder may output the video frames of the HDR video to be edited described above, i.e., N HDR video frames (HDR video frames to be edited). At this point, the color coding format, data type, and color gamut of the HDR video frame to be edited remain YUV, INT, BT2020.
It will be appreciated that a video may also include audio. Thus, the decoder also includes an audio decoder. In the video display method provided in the embodiment of the present application, the processing related to audio is the prior art, and will not be described herein again.
In the case of including audio data, after decoding by the decoder, the electronic device 100 may obtain N HDR video frames of the HDR video to be edited and the audio data, respectively. It will be appreciated that when the HDR video to be edited does not include audio data, the electronic apparatus 100 does not need to perform audio decoding on the HDR video to be edited, and thus, the decoded data does not include audio data.
After decoding is completed in step (3), the decoder may sequentially send the decoded HDR video frames to be edited (YUV, INT, BT2020) to the memory. Accordingly, in step (4), the memory may sequentially receive the N HDR video frames to be edited (YUV, INT, BT2020) and write the received HDR video frames to be edited onto Surface A.
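Steps (3) and (4) roughly correspond, in the public MediaCodec API, to draining the decoder's output and releasing each buffer with render set to true, which hands the (YUV, BT2020) frame to the Surface attached at configure() time (Surface A above). A hedged sketch:

```kotlin
import android.media.MediaCodec

// Drains decoded HDR frames and sends them to the decoder's output Surface
// (Surface A). Returns false once the end-of-stream flag has been seen.
fun drainToSurface(decoder: MediaCodec): Boolean {
    val info = MediaCodec.BufferInfo()
    while (true) {
        val outIndex = decoder.dequeueOutputBuffer(info, 10_000L)
        when {
            outIndex >= 0 -> {
                // render = true: the frame is queued onto the output Surface
                // instead of being copied back to the application.
                decoder.releaseOutputBuffer(outIndex, true)
                if (info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) return false
            }
            outIndex == MediaCodec.INFO_TRY_AGAIN_LATER -> return true
            // Format/buffer changes are ignored in this sketch.
            else -> { /* e.g. INFO_OUTPUT_FORMAT_CHANGED */ }
        }
    }
}
```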
After decoding is completed, the APP can learn from MediaCodec that the HDR video to be edited has been decoded. In step (5), the APP sends instruction 1 to OpenGL; instruction 1 is used to instruct OpenGL to read the HDR video frames to be edited (YUV, INT, BT2020) from Surface A.
In step (6), OpenGL reads the HDR video frames to be edited (YUV, INT, BT2020) from Surface A and may change their color coding format and the data type of the color values in the color channels. In the embodiment of the present application, OpenGL sets the color coding format of the HDR video frames to be edited to RGB and sets the data type of the color values in the color channels to float, that is, changes the HDR video frames to be edited from the original (YUV, INT) format to the (RGB, float) format. This is because OpenGL supports drawing and/or rendering operations on video frames whose color coding format is RGB and whose color channel values are of floating-point type.
Therefore, after the N HDR video frames to be edited are input to OpenGL, their color coding format is changed to RGB and the data type of the color values in the color channels is changed to floating point. At this point, the color gamut of the HDR video frames remains BT2020. That is, OpenGL converts the HDR video frames to be edited from the (YUV, INT, BT2020) format to the (RGB, float, BT2020) format; the OpenGL rendering output is therefore an HDR video frame to be edited in (RGB, float, BT2020) format, which may be a PQ-encoded video frame. Then, OpenGL may store the HDR video frames to be edited in (RGB, float, BT2020) format on video memory a. Video memory a is the video memory obtained by OpenGL from the GPU through step (21) and step (22) shown in fig. 4.
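The YUV-to-RGB conversion performed here can be illustrated with the BT.2020 (non-constant-luminance) luma coefficients. The sketch below assumes full-range, normalized float inputs and is not the exact conversion or shader used on the device.

```kotlin
// Converts one full-range BT.2020 YCbCr sample (floats in [0,1], Cb/Cr centered
// at 0.5) to RGB floats, mirroring the (YUV, INT) -> (RGB, float) step above.
fun bt2020YuvToRgb(y: Float, cb: Float, cr: Float): FloatArray {
    val kr = 0.2627f          // BT.2020 luma coefficients
    val kb = 0.0593f
    val kg = 1f - kr - kb
    val u = cb - 0.5f
    val v = cr - 0.5f
    val r = y + 2f * (1f - kr) * v
    val b = y + 2f * (1f - kb) * u
    val g = (y - kr * r - kb * b) / kg
    return floatArrayOf(r.coerceIn(0f, 1f), g.coerceIn(0f, 1f), b.coerceIn(0f, 1f))
}
```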
In step (7), OpenGL sends instruction 2 to the GPU, where instruction 2 is used to instruct the GPU to convert the format of the HDR video frames to be edited, specifically to convert the HDR video frames to be edited in (RGB, float, BT2020) format stored on video memory a into HDR video frames to be edited in (YUV, INT, BT2020) format. In step (8), the GPU, in response to instruction 2, performs this conversion. Steps (7) and (8) can be understood as OpenGL instructing the GPU to convert the HDR video frames to be edited in (RGB, float, BT2020) format of the OpenGL rendering output stored on video memory a into HDR video frames to be edited in (YUV, INT, BT2020) format.
Step (9) OpenGL sends instruction 3 to the GPU, where the instruction 3 is used to instruct the GPU to output the HDR video frame to be edited onto Surface B, and specifically instruct the GPU to output the HDR video frame to be edited in (YUV, INT, BT 2020) format onto Surface B. Step (10) the GPU outputs the HDR video frame to be edited in the format of (YUV, INT, BT 2020) onto Surface B in response to instruction 3. Wherein instruction 3 may carry the ID and/or address of Surface B to instruct the GPU to output the HDR video frame to be edited onto Surface B. Instruction 3 may also carry the ID and/or address of Surface C bound to Surface B for the GPU to perform step (11).
Step (11) the GPU outputs the HDR video frame to be edited in the format of (YUV, INT, BT 2020) on Surface B to Surface C of Surface view. The GPU may output the HDR video frame to be edited in the format of (YUV, INT, BT 2020) on Surface B to Surface C of Surface view based on the binding relationship between Surface B and Surface C.
Steps (9) to (11) may be understood as follows: based on the binding relationship between Surface B and Surface C in the SurfaceView, Surface B is used as the rendering output position of OpenGL; when OpenGL calls SwapBuffer, the rendered HDR video frame to be edited is swapped from the back buffer of Surface B to the SurfaceView, so that the attribute of Surface C is (BT2020, PQ).
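As a short hedged illustration, the SwapBuffer call described here corresponds to eglSwapBuffers on the EGL window surface created earlier; after drawing into it, the rendered frame is handed from the back buffer of Surface B to the SurfaceView's Surface:

```kotlin
import android.opengl.EGL14
import android.opengl.EGLDisplay
import android.opengl.EGLSurface

// Publishes the rendered HDR frame: the back buffer of the EGL window surface
// ("Surface B") is swapped onto the SurfaceView's Surface ("Surface C").
fun publishFrame(display: EGLDisplay, eglSurface: EGLSurface) {
    if (!EGL14.eglSwapBuffers(display, eglSurface)) {
        error("eglSwapBuffers failed: 0x${Integer.toHexString(EGL14.eglGetError())}")
    }
}
```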
Further, in step (12), the Framework layer monitors a data change on Surface C of the SurfaceView and sets the layer to which the SurfaceView belongs to the BT2020 attribute. That is, the Framework layer monitors that the data in the backBuffer of Surface B has been swapped onto Surface C, and the data format on Surface C is (YUV, INT, BT2020), so the Framework layer sets the layer to which the SurfaceView belongs to the BT2020 attribute. Optionally, the layer to which the SurfaceView belongs is set to the (BT2020, PQ) attribute.
The Framework layer sends the layer set as the BT2020 attribute to the hardware abstraction layer in step (13).
Step (14) the hardware abstraction layer synthesizes the HDR video frame. The hardware abstraction layer may integrate the layers of the preview area of the video editing application together in response to receiving the layer having the attribute BT2020, and place the layer having the attribute BT2020 at the uppermost layer to cover the underlying layer, and then calculate the brightness displayed on the screen according to the attribute BT2020, and perform tone mapping (ToneMapping). The hardware abstraction layer may synthesize the preview video from a plurality of HDR video frames.
The process shown in steps (1) to (14) in fig. 5 shows a process in which the electronic device 100 generates a preview video. After generating the preview video, the electronic device 100 may display the HDR video in a preview area of the video editing application.
S104, the electronic device 100 displays a preview video of the HDR video to be edited, which is the HDR video, on the user interface of the video editing application.
The electronic device 100 displays a preview video of the HDR video to be edited in a user interface of the video editing application. Specifically, the electronic device 100 may display the preview video in the preview area in the user interface of the video editing application, for example, referring to fig. 1D, display the cover video frame of the preview video in the window 141, and when detecting the user operation acting on the play button 145, the window 141 may sequentially display the video frame stream of the preview video.
In the embodiment of the present application, the video type of the preview video in S104 is HDR video. That is, when detecting a user operation acting on an editing control in a user interface of an HDR video to be edited, the electronic apparatus 100 switches to the user interface of the video editing application in response to the user operation, and displays a preview video, which is an HDR video, in the user interface of the video editing application so as to realize editing of the HDR video.
The operations of the electronic device 100 to display the preview video of the HDR video on the user interface of the video editing application are briefly described in S101 to S104 from the perspective of the electronic device 100 and its functional modules. The user interface may also display a video type control, upon detecting a user operation on the control, the electronic device 100 may execute S105.
S105, when the electronic device 100 detects a user operation on the video type control in the user interface of the video editing application, the video type adjustment window is output in the user interface of the video editing application in response to the operation.
Wherein the video type adjustment window may provide the user with the option of deriving format, video type, resolution, and frame rate. For example, the video type adjustment window may refer to window 149 shown in FIG. 1E.
The export format may be divided into a video format and a GIF map format.
Video types can be classified into an HDR video type and a normal video type, i.e., an SDR video type. In the case where the video type is an HDR video type, the preview video is represented as an HDR video; in the case where the video type is a normal video type, the preview video is represented as an SDR video. Optionally, in the case where the video type is an HDR video type, the video type adjustment window may also display text of "high dynamic range"; the video type adjustment window may also display text of "standard dynamic range" in the case where the video type is a normal video type, so that the user selects the video type according to the need.
The resolution can be divided into 1080P and 2K/4K. Optionally, when the resolution is 1080P, the video type adjustment window may also display text indicating that picture restoration is ordinary; when the resolution is 2K/4K, the window may also display text indicating that picture restoration is excellent but occupies more storage space, so that the user can select the resolution as needed.
The frame rate may be divided into 24, 25, 30, 50, and 60 FPS. FPS (frames per second) is a term in the imaging field that refers to the number of frames transmitted per second, colloquially the number of pictures of an animation or video. The higher the frame rate, the smoother the playback. Optionally, the video type adjustment window may also display text stating that "the higher the frame rate, the smoother the playback", so that the user can select the frame rate as needed.
The video type adjustment window may also display the size of the preview video, which is related to the selected export format, video type, resolution, and frame rate.
S106, when the electronic device 100 detects a switching operation acting on the video type in the video type adjustment window, the preview video is converted from the HDR video to the SDR video in response to the operation.
The switching operation acting on the video type in the video type adjustment window is used to switch the video type from HDR video to SDR video. When the electronic device 100 detects a switching operation acting on the video type in the video type adjustment window, the preview video is converted from the HDR video to the SDR video in response to the operation. For example, referring to the user interface shown in fig. 1E, when the electronic device 100 detects a click operation on the normal video option in the window 149, the video type of the preview video is adjusted to SDR video in response to the click operation, and the preview video is converted from HDR video to SDR video.
The electronic device 100 converts the preview video from HDR video to SDR video to display the SDR video of the HDR video to be edited in the preview area of the video editing application.
For one HDR video frame, the electronic device 100 converts the HDR video frame into an SDR video frame, and steps (1) to (4) in fig. 6 show a specific flow of converting the HDR video frame into the SDR video frame by the electronic device 100.
Step (1) converts the HDR nonlinear electrical signal of the HDR video frame into an HDR linear optical signal by an Electro-optical transfer function (Electro-Optical Transfer Function, EOTF).
In the present embodiment, the EOTF may be expressed as F_D = EOTF[E', L] or F_D = EOTF[E'] + L, where F_D represents the linear optical signal (i.e., the HDR linear optical signal), E' represents the nonlinear electrical signal (i.e., the HDR nonlinear electrical signal), and L represents the luminance parameter. The above formula can be understood as taking the luminance parameter as one of the calculation parameters in the process of converting the nonlinear electrical signal into the linear optical signal. The luminance parameter represents the picture luminance of the current HDR video frame. The luminance of different HDR video frames in the same HDR video may be different, e.g., a dark HDR video frame may be 300 nit while a bright HDR video frame may be 4000 nit, so different HDR video frames in the same HDR video may have different luminance parameters.
In one implementation, for an HDR video frame, the luminance parameter is the maximum luminance of the HDR video frame. Using the maximum luminance as the luminance parameter makes the contrast of the converted SDR video frame noticeable (e.g., bright portions are brighter, dark portions are darker). For example, the electronic device 100 partitions the HDR video frame into a 50 x 50 grid of regions, calculates the luminance maximum of each region to obtain 2500 luminance maxima, selects the 20 largest of these 2500 maxima in descending order, and calculates the average of those 20 values, which is the maximum luminance of the HDR video frame, as illustrated in the sketch below. Step (2) performs a color space conversion (Color Space Converting) on the HDR linear optical signal, i.e., converts the color gamut of the HDR linear optical signal from BT2020 to BT709.
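The maximum-luminance calculation described above can be sketched as follows. The 50 x 50 grid and the top-20 average come from the example in this embodiment; the assumption that per-pixel linear luminance in nits is already available (for instance from a BT.2020 luma weighting) is introduced for illustration.

```kotlin
// Computes the luminance parameter of one HDR frame: split the frame into a
// 50 x 50 grid, take each region's maximum luminance, then average the 20
// largest maxima. `luma` holds per-pixel linear luminance in nits.
fun frameLuminanceParameter(luma: FloatArray, width: Int, height: Int): Float {
    val grid = 50
    val regionMax = FloatArray(grid * grid) { Float.MIN_VALUE }
    for (yPix in 0 until height) {
        val gy = (yPix * grid / height).coerceAtMost(grid - 1)
        for (xPix in 0 until width) {
            val gx = (xPix * grid / width).coerceAtMost(grid - 1)
            val idx = gy * grid + gx
            val v = luma[yPix * width + xPix]
            if (v > regionMax[idx]) regionMax[idx] = v
        }
    }
    // Average of the 20 largest of the 2500 region maxima.
    return regionMax.sortedDescending().take(20).average().toFloat()
}
```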
And (3) performing Tone Mapping (Tone Mapping) on the HDR linear optical signal after the color space conversion, and Mapping the HDR linear optical signal into an SDR linear optical signal.
Step (4) converts the SDR linear optical signal into an SDR nonlinear electrical signal through an optical-electro transfer function (Optical-Electro Transfer Function, OETF), thereby completing the conversion of the HDR video frame into an SDR video frame.
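Steps (1) to (4) can be illustrated per pixel as in the sketch below. The PQ EOTF constants are the standard SMPTE ST 2084 values and the BT.2020-to-BT.709 matrix is the commonly published one; the tone-mapping operator (a simple Reinhard curve scaled by the frame's luminance parameter) and the 1/2.2 gamma used as the SDR OETF are stand-ins chosen for illustration, not necessarily what the device implements.

```kotlin
import kotlin.math.max
import kotlin.math.pow

// Step (1): PQ (SMPTE ST 2084) EOTF -- nonlinear electrical signal in [0,1]
// to linear light in nits (peak 10000 nit).
fun pqEotf(e: Float): Float {
    val m1 = 0.1593017578125; val m2 = 78.84375
    val c1 = 0.8359375; val c2 = 18.8515625; val c3 = 18.6875
    val p = e.toDouble().coerceIn(0.0, 1.0).pow(1.0 / m2)
    val y = (max(p - c1, 0.0) / (c2 - c3 * p)).pow(1.0 / m1)
    return (10000.0 * y).toFloat()
}

// Step (2): BT.2020 -> BT.709 gamut conversion on linear RGB.
fun bt2020ToBt709(r: Float, g: Float, b: Float): FloatArray = floatArrayOf(
    1.6605f * r - 0.5876f * g - 0.0728f * b,
    -0.1246f * r + 1.1329f * g - 0.0083f * b,
    -0.0182f * r - 0.1006f * g + 1.1187f * b
)

// Steps (3)-(4): tone-map linear light into the SDR range and apply a simple
// gamma OETF. Reinhard + gamma 1/2.2 are illustrative stand-ins only.
fun toneMapAndEncode(linearNits: Float, frameMaxNits: Float): Float {
    val x = linearNits / frameMaxNits            // normalize by the luminance parameter
    val sdrLinear = x / (1f + x)                 // Reinhard tone mapping
    return sdrLinear.coerceIn(0f, 1f).toDouble().pow(1.0 / 2.2).toFloat()
}
```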
In one implementation, the electronic device 100 may employ an FFmpeg command line to implement steps (1) -step (4) in fig. 6.
In another implementation, openGL controls the GPU to perform steps (1) -step (4) in fig. 6. For example, for step (1), openGL sends instructions to GPU that instruct GUP to convert the HDR nonlinear electrical signal to an HDR linear optical signal through EOTF.
In yet another implementation, the electronic device 100 decodes the HDR video into a texture (Texture), OpenGL reads the YUV 10-bit data from the texture, and then performs steps (1)-(4) in fig. 6.
S107, the electronic device 100 displays the SDR video of the HDR video to be edited in the user interface of the video editing application.
After the electronic device 100 converts the preview video from the HDR video to the SDR video, the electronic device 100 may display the SDR video of the HDR video to be edited in a user interface of a video editing application. Specifically, the electronic device 100 may display the SDR video in a preview area in a user interface of a video editing application, for example, referring to fig. 1F, a cover video frame of the SDR video may be displayed in the window 141, and when a user operation acting on the play button 145 is detected, the window 141 may sequentially display a video frame stream of the SDR video.
The electronic device 100 also displays the video type as SDR video in a user interface of a video editing application. In particular, the electronic device 100 may indicate that the video type is SDR video in a video editing application's user interface, for example, referring to fig. 1F, the "SDR" displayed in the video type control 148 indicates that the video type is SDR video.
The operations of the electronic device 100 to convert the preview video from the HDR video to the SDR video and display the SDR video in the preview area are briefly described in S105 to S107 above from the perspective of the electronic device 100 and its functional modules.
For displaying SDR video in the preview area, if the electronic device 100 detects a user operation on a video type control in a user interface of a video editing application, in response to the operation, a video type adjustment window is output in the user interface of the video editing application, where the video type in the video type adjustment window is a normal video type. When the electronic device 100 detects a switching operation acting on a video type in the video type adjustment window (i.e., the switching operation is used to instruct switching of a normal video type to an HDR video type), in response to the operation, a video editing environment is initialized, a preview video of the video type being the HDR video is generated, and the HDR video is displayed in a preview area of a user interface of the video editing application. The initializing video editing environment may refer to the specific description of S102 in fig. 3, and the generating of the preview video with the video type of HDR video may refer to the specific description of S103 in fig. 3, which is not described herein.
Fig. 7 exemplarily shows a hardware configuration diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (derail clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement a touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In an embodiment of the present application, the electronic device 100 displaying the user interface shown in fig. 1A-1F may be accomplished through a GPU, encoder, decoder, openGL, and display 194.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
In the embodiment of the present application, the HDR video to be edited may be obtained by the electronic device 100 from other electronic devices through a wireless communication function, or may be obtained by shooting the electronic device 100 through an ISP, a camera 193, a video codec, a GPU, and a display screen 194.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: dynamic picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (dynamic random access memory, DRAM), synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM, e.g., fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc. The nonvolatile memory may include a disk storage device, a flash memory (flash memory).
The flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc., divided according to the operating principle; may include single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), quad-level cells (QLC), etc., divided according to the potential level of the storage cell; and may include universal flash storage (universal flash storage, UFS), embedded multimedia memory cards (embedded multi media Card, eMMC), etc., divided according to the storage specification.
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like.
The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
In the embodiment of the present application, the internal memory 121 may support the electronic device 100 to apply Surface, bufferQueue to the memory, and so on.
The external memory interface 120 may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory. In embodiments of the present application, sound may be captured by microphone 170C when electronic device 100 captures HDR video. During the playing of the video, speakers connected to speaker 170A or headphone interface 170D may support the playing of audio in the video.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 170C through the mouth, inputting a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be a USB interface 130 or a 3.5mm open mobile electronic device platform (open mobile terminal platform, OMTP) standard interface, a american cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a capacitive pressure sensor comprising at least two parallel plates with conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: and executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon. And executing an instruction for newly creating the short message when the touch operation with the touch operation intensity being greater than or equal to the first pressure threshold acts on the short message application icon.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip machine, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. And then according to the detected opening and closing state of the leather sheath or the opening and closing state of the flip, the characteristics of automatic unlocking of the flip and the like are set.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The electronic equipment gesture recognition method can also be used for recognizing the gesture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may use the collected fingerprint features to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a location different from that of the display screen 194.
In the embodiments of the present application, the electronic device 100 may detect, through the touch sensor 180K, whether there is a user operation acting on the display screen 194 of the electronic device 100. After the touch sensor 180K detects such a user operation, the electronic device 100 may perform the image processing indicated by the user operation, for example, displaying an HDR video in a preview area of a user interface of a video editing application.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone of the vocal part obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light and may be used to indicate the charging status and battery level changes, or to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
By implementing the embodiments of the present application, the electronic device can use the SurfaceView as the display control of the preview area of the video editing application, set the attribute of the layer to which the SurfaceView belongs to BT2020, and bind Surface C in the SurfaceView with Surface B that receives the rendering output of OpenGL, so that the video frames in Surface B can be transmitted to Surface C and the HDR video can be displayed in the preview area of the user interface of the video editing application. Furthermore, the electronic device may switch the HDR video displayed in the preview area to an SDR video according to a video type switching operation.
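As an illustration of this idea only (not of the exact implementation in the embodiments), the following Java sketch shows one way an application-side renderer could attach an OpenGL rendering target to the Surface obtained from the preview SurfaceView and request a BT.2020 PQ color space through standard EGL extensions. The class name HdrPreviewRenderer and the way the EGL display, config, and context are obtained are assumptions made for the sketch; whether the BT.2020 PQ window color space is available depends on the device.

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

// Hypothetical helper (not part of the patent): attaches an EGL window surface
// to the Surface taken from the preview SurfaceView so that OpenGL rendering
// output lands on that Surface, and asks EGL for a BT.2020 PQ color space.
public class HdrPreviewRenderer {
    // Values from the EGL_KHR_gl_colorspace / EGL_EXT_gl_colorspace_bt2020_pq
    // extensions; they are not all exposed as named constants in android.opengl.
    private static final int EGL_GL_COLORSPACE_KHR = 0x309D;
    private static final int EGL_GL_COLORSPACE_BT2020_PQ_EXT = 0x3340;

    private EGLSurface eglWindowSurface = EGL14.EGL_NO_SURFACE;

    // eglDisplay, config and context are assumed to have been created elsewhere
    // (eglGetDisplay / eglInitialize / eglChooseConfig / eglCreateContext).
    public void attachToPreview(EGLDisplay eglDisplay, EGLConfig config,
                                EGLContext context, Surface surfaceFromSurfaceView) {
        int[] attribs = {
                EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_BT2020_PQ_EXT,
                EGL14.EGL_NONE
        };
        eglWindowSurface = EGL14.eglCreateWindowSurface(
                eglDisplay, config, surfaceFromSurfaceView, attribs, 0);
        if (eglWindowSurface == null || eglWindowSurface.equals(EGL14.EGL_NO_SURFACE)) {
            // The device does not support the BT.2020 PQ color space for window
            // surfaces; fall back to the default color space.
            eglWindowSurface = EGL14.eglCreateWindowSurface(
                    eglDisplay, config, surfaceFromSurfaceView,
                    new int[]{EGL14.EGL_NONE}, 0);
        }
        // Subsequent GL draw calls now render into the SurfaceView's Surface;
        // eglSwapBuffers(eglDisplay, eglWindowSurface) posts a frame to it.
        EGL14.eglMakeCurrent(eglDisplay, eglWindowSurface, eglWindowSurface, context);
    }
}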
In the embodiments of the present application:
1. In a user interface that presents a video, the user's operation of clicking an edit control in the user interface may be referred to as a first operation, such as the operation of clicking control 133 in FIG. 1C. The user interface that presents the video may be referred to as a first user interface, such as the user interface shown in FIG. 1C. The window used to display the video in the user interface that presents the video may be referred to as a first window, such as window 131 in FIG. 1C. Upon detecting the first operation, the video currently displayed by the electronic device, i.e., the video to be edited selected by the user, may be referred to as a first video, such as video M displayed in window 131 in FIG. 1C. The series of video frames resulting from the decoder decoding the first video may be referred to as first video frames. There are N first video frames, and the specific value of N is determined by the duration of the first video.
2. The Surface in the SurfaceView may be referred to as a first Surface, such as Surface C in FIG. 4. The identification information of the first Surface is the ID and/or address of the first Surface. The Surface that OpenGL, invoked by the video editing application, requests from the memory may be referred to as a second Surface, for example, Surface B requested from the memory in steps (25)-(28) in FIG. 4 by OpenGL invoked by the video editing application. The Surface used to store the video frames output by the decoder may be referred to as a third Surface, such as Surface A in FIG. 4.
3. The video frame obtained after OpenGL converts the format of the first video frame may be referred to as a second video frame, for example, an HDR video frame to be edited in the format of (RGB, float type, BT2020) in step (6) in FIG. 5. The video frame obtained after the GPU converts the format of the second video frame may be referred to as a third video frame, for example, an HDR video frame to be edited in the format of (YUV, INT type, BT2020) in step (8) in FIG. 5. An illustrative code sketch of this kind of color-format conversion is given after this list.
4. The user interface of the video editing application displaying HDR video may be referred to as a second user interface, such as the user interface shown in fig. 1D. The video preview window in the user interface may be referred to as a second display window, such as window 141 shown in FIG. 1D.
5. The operation of the user clicking the video type control may be referred to as a second operation, such as the operation of clicking control 148 in FIG. 1D. The video type adjustment window output in response to clicking control 148 may be window 149 shown in FIG. 1F.
6. When the video type adjustment window is output, if the HDR video type is selected in the window (i.e., the video currently displayed in the preview area is an HDR video), the operation of the user clicking the normal video type may be referred to as a switching operation, which is used to instruct the electronic device to switch the video type from the HDR video type to the normal video type. If the normal video type is selected in the window (i.e., the video currently displayed in the preview area is an SDR video), the operation of the user clicking the HDR video type may be referred to as a switching operation, which is used to instruct the electronic device to switch the video type from the normal video type to the HDR video type.
7. In FIG. 4: the request in step (2) may be referred to as a first request; the request in step (4) may be referred to as a second request; step (6) may be described as the memory sending a second response to the application framework layer, the second response including identification information of the first Surface; step (7) may be described as the application framework layer sending a first response to the video editing application, the first response including identification information of the first Surface; the instruction in step (25) may be referred to as a first instruction; the request in step (26) may be referred to as a third request; step (28) may be described as the memory sending a third response to OpenGL, the third response including identification information of the second Surface; the request in step (14) may be referred to as a fourth request; the request in step (15) may be referred to as a fifth request; step (17) may be described as the memory sending a fifth response to MediaCodec, the fifth response including identification information of the third Surface; step (18) may be described as MediaCodec sending a fourth response to the video editing application, the fourth response including identification information of the third Surface.
8. In fig. 5: instruction 1 in step (5) may be referred to as a second instruction; instruction 2 in step (7) may be referred to as a third instruction; instruction 3 in step (9) may be referred to as a fourth instruction.
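For illustration only, the following Java snippet carries a GLSL fragment shader of the kind that could perform the conversion referred to in item 3 above, turning BT.2020 YUV samples read from the decoder output into floating-point RGB values. The plane layout (a separate luma texture and an interleaved chroma texture, full-range samples) and the uniform names are assumptions for the sketch, not necessarily the format actually used in FIG. 5.

// Illustrative only: a GLSL fragment shader, carried as a Java string, that
// converts full-range BT.2020 YCbCr samples into floating-point RGB. Rendering
// its output into a float color attachment (e.g. RGBA16F) yields an
// "RGB, float type, BT2020" intermediate of the kind described above.
public final class Bt2020YuvToRgbShader {
    public static final String FRAGMENT_SHADER =
            "#version 300 es\n"
            + "precision highp float;\n"
            + "uniform sampler2D uPlaneY;\n"      // luma plane (assumed layout)
            + "uniform sampler2D uPlaneCbCr;\n"   // interleaved chroma plane
            + "in vec2 vTexCoord;\n"
            + "out vec4 outColor;\n"
            + "void main() {\n"
            + "    float y = texture(uPlaneY, vTexCoord).r;\n"
            + "    vec2 cbcr = texture(uPlaneCbCr, vTexCoord).rg - vec2(0.5);\n"
            + "    // BT.2020 non-constant-luminance YCbCr -> RGB (full range)\n"
            + "    float r = y + 1.4746 * cbcr.y;\n"
            + "    float g = y - 0.16455 * cbcr.x - 0.57135 * cbcr.y;\n"
            + "    float b = y + 1.8814 * cbcr.x;\n"
            + "    outColor = vec4(r, g, b, 1.0);\n"
            + "}\n";

    private Bt2020YuvToRgbShader() {}
}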
The term "User Interface (UI)" in the description and claims of the present application and in the drawings is a media interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. The user interface of the application program is source code written in a specific computer language such as java, extensible markup language (extensible markup language, XML) and the like, the interface source code is analyzed and rendered on the terminal equipment, and finally the interface source code is presented as content which can be identified by a user, such as a picture, characters, buttons and the like. Controls (controls), also known as parts (widgets), are basic elements of a user interface, typical controls being toolbars (toolbars), menu bars (menu bars), text boxes (text boxes), buttons (buttons), scroll bars (scrollbars), pictures and text. The properties and content of the controls in the interface are defined by labels or nodes, such as XML passing < Textview >, < ImgView >, XML passing,
Nodes such as < VideoView > specify controls included in the interface. One node corresponds to a control or attribute in the interface, and the node is rendered into visual content for a user after being analyzed and rendered. In addition, many applications, such as the interface of a hybrid application (hybrid application), typically include web pages. A web page, also referred to as a page, is understood to be a special control embedded in an application program interface, and is source code written in a specific computer language, such as hypertext markup language (hyper text markup language, GTML), cascading style sheets (cascading style sheets, CSS), java script (JavaScript, JS), etc., and the web page source code may be loaded and displayed as user-recognizable content by a browser or web page display component similar to the browser function. The specific content contained in a web page is also defined by tags or nodes in the web page source code, such as GTML defines elements and attributes of the web page by < p >, < img >, < video >, < canvas >.
A commonly used presentation form of the user interface is a graphical user interface (graphic user interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As used in the specification and the appended claims of this application, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to determining …" or "in response to detecting …", depending on the context. Similarly, the phrase "upon determining …" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined …" or "in response to determining …" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)", depending on the context.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk), etc.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the processes of the above method embodiments may be performed. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Claims (18)

1. A video display method applied to an electronic device, the method comprising:
displaying a first user interface comprising a first display window for displaying a first video and an editing control; the video type of the first video is a high dynamic range HDR video;
detecting a first operation acting on the editing control, in response to the first operation:
creating a SurfaceView, and applying for a first Surface for the SurfaceView;
creating a decoder;
applying for a second Surface and binding the first Surface with the second Surface;
decoding, by the decoder, the first video into N first video frames;
performing format conversion on the N first video frames through an open graphics library OpenGL to obtain N second video frames output by the OpenGL;
invoking a graphics processing unit (GPU) of the electronic device to perform format conversion on the N second video frames to obtain N third video frames, and outputting the N third video frames to the second Surface;
outputting the N third video frames on the second Surface to the first Surface based on the binding between the first Surface and the second Surface;
setting the attribute of the layer to which the SurfaceView belongs to a BT2020 attribute, and synthesizing a preview video of the first video based on the N third video frames on the first Surface and the attribute of the layer to which the SurfaceView belongs, wherein the video type of the preview video is HDR video;
and displaying a second user interface, wherein the second user interface comprises a second display window, the second display window is used for displaying the preview video, and a display control of the second display window is the SurfaceView.
2. The method of claim 1, wherein:
the format of the first video frame includes: the color coding format is YUV format, the data type of the color value is integer, and the color gamut is BT2020;
the format of the second video frame includes: the color coding format is RGB format, the data type of the color value is floating point type, and the color gamut is BT2020;
The format of the third video frame includes: the color coding format is YUV format, the data type of the color value is integer, and the color gamut is BT2020.
3. The method of claim 1 or 2, wherein the second user interface further comprises a video type control for indicating that the video type of the preview video is HDR video.
4. The method of claim 3, wherein the method further comprises:
detecting a second operation acting on the video type control, and in response to the second operation, outputting a video type adjustment window on the second user interface;
wherein the video type adjustment window includes at least one of the following options: export format, video type, resolution, frame rate; the export format includes a video format option and a GIF format option, the video type includes an HDR video type option and a normal video type option, the resolution includes a 1080P option and a 2K/4K option, and the frame rate includes options of 24, 25, 30, 50, and 60.
5. The method according to claim 4, wherein the method further comprises:
detecting a switching operation acting on the video type in the video type adjustment window, and in response to the switching operation:
converting the preview video from HDR video to SDR video;
and displaying the SDR video of the first video on the second display window.
6. The method of claim 5, wherein the converting the preview video from HDR video to SDR video comprises:
converting the HDR nonlinear electrical signal of the third video frame n into an HDR linear optical signal through an electro-optical transfer function (EOTF);
performing color space conversion on the HDR linear optical signal;
performing tone mapping on the HDR linear optical signal after the color space conversion to obtain an SDR linear optical signal;
converting the SDR linear optical signal into an SDR nonlinear electrical signal through an opto-electrical transfer function, to obtain the SDR nonlinear electrical signal of the third video frame n;
wherein the third video frame N is any one of the N third video frames.
7. The method of claim 6, wherein the EOTF is related to a luminance parameter of the third video frame n, the luminance parameter of the third video frame n being a maximum luminance of the third video frame n.
8. The method of claim 7, wherein the method further comprises:
dividing the third video frame n into L grouping areas, and calculating the brightness maximum value of each of the L grouping areas to obtain L brightness maximum values; L is a positive integer;
selecting the first K brightness maximum values from the L brightness maximum values in descending order, wherein K is an integer greater than 1 and less than L;
and calculating an average value of the K brightness maximum values, wherein the average value is the maximum brightness of the third video frame n.
9. The method of any one of claims 1-8, wherein the electronic device comprises a video editing application, an application framework layer, and a memory;
the creating a SurfaceView and applying for a first Surface for the SurfaceView specifically comprises:
the video editing application sends a first request to the application framework layer, wherein the first request is used for requesting to create the SurfaceView;
the application framework layer responds to the first request, creates the SurfaceView, and sends a second request to the memory, wherein the second request is used for requesting the memory to allocate the first Surface;
the memory responds to the second request, allocates the first Surface, and sends a second response to the application framework layer, wherein the second response comprises identification information of the first Surface;
the application framework layer sends a first response to the video editing application, wherein the first response comprises identification information of the first Surface.
10. The method of claim 9, wherein the electronic device further comprises OpenGL;
the applying for the second Surface and binding the first Surface with the second Surface specifically includes:
the video editing application sends a first instruction to the OpenGL, wherein the first instruction is used for instructing the OpenGL to apply to the memory for the second Surface;
the OpenGL responds to the first instruction and sends a third request to the memory, wherein the third request is used for requesting the memory to allocate the second Surface;
the memory responds to the third request, allocates the second Surface, and sends a third response to the OpenGL, wherein the third response comprises identification information of the second Surface;
the OpenGL binds the first Surface with the second Surface.
11. The method of claim 9, wherein the electronic device further comprises a MediaCodec; the MediaCodec is used to create the decoder;
the method further comprises the steps of:
the video editing application sends a fourth request to the MediaCodec, wherein the fourth request is used for requesting a third Surface; the third Surface is used for storing the N first video frames output by the decoder;
the MediaCodec responds to the fourth request and sends a fifth request to the memory, wherein the fifth request is used for requesting the memory to allocate the third Surface;
the memory responds to the fifth request, allocates the third Surface, and sends a fifth response to the MediaCodec, wherein the fifth response comprises identification information of the third Surface;
the MediaCodec sends a fourth response to the video editing application, the fourth response including identification information of the third Surface.
12. The method of claim 11, wherein the method further comprises:
the decoder created by the MediaCodec sends the N first video frames to the memory;
the memory writes the N first video frames into the third Surface;
The performing format conversion on the N first video frames through OpenGL to obtain N second video frames output by OpenGL includes:
the video editing application sends a second instruction to the OpenGL, wherein the second instruction is used for instructing the OpenGL to read the N first video frames from the third Surface;
the OpenGL responds to the second instruction, reads the N first video frames from the third Surface, and performs format conversion on the N first video frames to obtain the N second video frames output by the OpenGL;
the OpenGL stores the N second video frames on a first video memory.
13. The method according to claim 12, wherein the invoking the GPU of the electronic device to format convert the N second video frames to obtain N third video frames, and outputting the N third video frames to the second Surface specifically includes:
the OpenGL sends a third instruction to the GPU, wherein the third instruction is used for instructing the GPU to perform format conversion on the N second video frames;
the GPU responds to the third instruction to perform format conversion on the N second video frames stored on the first video memory to obtain N third video frames;
The OpenGL sends a fourth instruction to the GPU, wherein the fourth instruction is used for instructing the GPU to output the N third video frames to the second Surface; the fourth instruction comprises identification information of the second Surface;
and the GPU responds to the fourth instruction and outputs the N third video frames to the second Surface based on the identification information of the second Surface.
14. The method according to claim 13, wherein the third instruction or the fourth instruction comprises identification information of the first Surface bound to the second Surface;
the outputting the N third video frames on the second Surface to the first Surface based on the binding between the first Surface and the second Surface specifically includes:
and the GPU determines the binding of the first Surface and the second Surface based on the third instruction or the fourth instruction, and outputs the N third video frames on the second Surface to the first Surface based on the binding of the first Surface and the second Surface.
15. The method of any of claims 9-14, wherein the electronic device further comprises a hardware abstraction layer;
the setting the attribute of the layer to which the SurfaceView belongs to a BT2020 attribute, and synthesizing a preview video of the first video based on the N third video frames on the first Surface and the BT2020 attribute specifically comprises:
the application framework layer monitors that the data on the first Surface has changed, and sets the attribute of the layer to which the SurfaceView belongs to a BT2020 attribute;
the application framework layer sends the layer whose attribute is set to BT2020 to the hardware abstraction layer;
the hardware abstraction layer synthesizes the preview video of the first video based on the N third video frames on the first Surface and the BT2020 attribute.
16. An electronic device comprising one or more processors and one or more memories; wherein the one or more memories are coupled to the one or more processors, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the method of any of claims 1-15 to be performed.
17. A computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-15.
18. A computer readable storage medium comprising instructions which, when run on an electronic device, cause the method of any one of claims 1-15 to be performed.
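Purely as an illustrative sketch of the kind of processing described in claims 6-8, and not as the claimed method itself, the following Java code estimates a per-frame maximum luminance by splitting the frame into regions and averaging the K largest regional maxima, then maps linear HDR luminance into the SDR range and applies a BT.709 OETF. The Reinhard-style tone curve, the region layout, and all method names are assumptions made for this sketch.

import java.util.Arrays;

// Illustrative sketch only: per-frame maximum-luminance estimation (split the
// frame into regions, average the top-K regional maxima) and a simple mapping
// of linear HDR luminance down to SDR, followed by the BT.709 OETF.
public final class HdrToSdrSketch {

    // luminance: per-pixel linear luminance of one video frame, row-major.
    public static float estimateFrameMaxLuminance(float[] luminance,
                                                  int width, int height,
                                                  int regionsPerSide, int topK) {
        int regions = regionsPerSide * regionsPerSide;
        float[] regionMax = new float[regions];
        int regionW = (width + regionsPerSide - 1) / regionsPerSide;
        int regionH = (height + regionsPerSide - 1) / regionsPerSide;
        for (int yPix = 0; yPix < height; yPix++) {
            for (int xPix = 0; xPix < width; xPix++) {
                int r = (yPix / regionH) * regionsPerSide + (xPix / regionW);
                float v = luminance[yPix * width + xPix];
                if (v > regionMax[r]) regionMax[r] = v;
            }
        }
        // Average of the K largest regional maxima (K is assumed to be
        // greater than 1 and not larger than the number of regions).
        Arrays.sort(regionMax); // ascending order
        float sum = 0f;
        for (int i = 0; i < topK; i++) {
            sum += regionMax[regions - 1 - i];
        }
        return sum / topK;
    }

    // Reinhard-style tone mapping of linear HDR luminance into [0, 1], scaled
    // by the estimated frame maximum (an assumed curve, not the claimed one).
    public static float toneMap(float linearHdr, float frameMax) {
        float x = linearHdr / frameMax;
        return x / (1.0f + x);
    }

    // BT.709 OETF applied to a linear SDR value in [0, 1].
    public static float bt709Oetf(float linear) {
        return linear < 0.018f
                ? 4.5f * linear
                : 1.099f * (float) Math.pow(linear, 0.45) - 0.099f;
    }

    private HdrToSdrSketch() {}
}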
CN202310852349.7A 2023-07-11 2023-07-11 Video display method and electronic equipment Pending CN117692714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310852349.7A CN117692714A (en) 2023-07-11 2023-07-11 Video display method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117692714A true CN117692714A (en) 2024-03-12

Family

ID=90128907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310852349.7A Pending CN117692714A (en) 2023-07-11 2023-07-11 Video display method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117692714A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination