CN117692723A - Video editing method and electronic equipment - Google Patents

Video editing method and electronic equipment

Info

Publication number
CN117692723A
Authority
CN
China
Prior art keywords
video
video frames
electronic device
color gamut
opengl
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310858956.4A
Other languages
Chinese (zh)
Inventor
吴孟函
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202310858956.4A priority Critical patent/CN117692723A/en
Publication of CN117692723A publication Critical patent/CN117692723A/en
Pending legal-status Critical Current


Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video editing method and an electronic device. The method can be applied to electronic devices with image processing capability, such as smartphones and tablet computers. When a video editor is used to edit an HDR video and SDR material is added, the electronic device converts the color gamut of the SDR material into the same color gamut as that of the HDR video, so that the edited video that is output is still an HDR video, thereby avoiding quality degradation of the edited HDR video.

Description

Video editing method and electronic equipment
Technical Field
The application relates to the field of terminals, in particular to a video editing method and electronic equipment.
Background
Currently, most intelligent electronic devices support capturing high dynamic range (HDR) video. HDR video uses a wider range of colors and brightness than ordinary video (e.g., standard dynamic range (SDR) video), so it can present more realistic pictures and provide a better viewing experience for the user.
However, when the user edits a captured HDR video and uses SDR material such as stickers, text, or special effects, the video output after editing is an SDR video, and the quality of the edited HDR video is degraded.
Disclosure of Invention
Embodiments of the present application provide a video editing method and an electronic device. When a video editor is used to edit an HDR video and SDR material is used, the electronic device can convert the SDR material in the BT709 color gamut into HDR material in the BT2020 color gamut, so that the edited video that is output is still an HDR video, thereby avoiding quality degradation of the edited HDR video.
In a first aspect, an embodiment of the present application provides a video editing method applied to an electronic device. The method includes: detecting a first operation acting on a first material, the first operation being used to indicate that the first material is to be added to one or more of N first video frames, where the color gamut of the first material is a first color gamut, the color gamut of the N first video frames is a second color gamut, and the first color gamut and the second color gamut are different; in response to the first operation, performing color gamut conversion on the first material to obtain a second material, the color gamut of the second material being the second color gamut; and superimposing the second material onto one or more of the N first video frames to obtain N second video frames, the color gamut of the N second video frames being the second color gamut.
By implementing the method provided in the first aspect, the electronic device can convert the first material in the first color gamut into material in the second color gamut and then superimpose the converted material onto video frames in the second color gamut. The resulting video frames are still in the second color gamut, so the electronic device can output video frames in the second color gamut, and the quality of the edited video in the second color gamut is not degraded.
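As an illustration only — the class and method names, the float-array pixel representation, and the simple "source over" blend below are assumptions made for this sketch, not an implementation defined by the present application — the flow of the first aspect can be expressed as follows:

```java
// Illustrative sketch of the first aspect: convert first-gamut material,
// then overlay it onto second-gamut frames so the output stays in the second gamut.
// All names and the blend formula are assumptions, not the patent's implementation.
import java.util.List;

public class GamutAwareOverlaySketch {

    /** Converts first-gamut material (e.g. BT.709) and overlays it onto second-gamut frames (e.g. BT.2020). */
    public static void addMaterial(float[][] materialRgba, List<float[][]> firstFrames) {
        float[][] secondMaterial = convertGamut(materialRgba);  // first color gamut -> second color gamut
        for (float[][] frame : firstFrames) {
            overlay(frame, secondMaterial);                     // frame remains in the second color gamut
        }
    }

    // Identity stub; a real conversion maps BT.709 primaries to BT.2020 primaries
    // (see the matrix sketch later in this description).
    private static float[][] convertGamut(float[][] rgba) {
        return rgba.clone();
    }

    // Per-pixel "source over" blend; each pixel is [r, g, b, a] in linear light.
    private static void overlay(float[][] dst, float[][] src) {
        int n = Math.min(dst.length, src.length);
        for (int i = 0; i < n; i++) {
            float a = src[i][3];
            for (int c = 0; c < 3; c++) {
                dst[i][c] = src[i][c] * a + dst[i][c] * (1f - a);
            }
        }
    }
}
```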
With reference to the first aspect, in an alternative embodiment, the color range that can be represented by the first color gamut is smaller than the color range that can be represented by the second color gamut.
With reference to the first aspect, in an optional implementation manner, before detecting the first operation on the first material, the method further includes: detecting a second operation acting on the first video, wherein the second operation corresponds to an editing control and is used for triggering a service for editing the video; in response to the second operation, decoding the first video into N first video frames, the color gamuts of the N first video frames being a second color gamut; any one of the N first video frames is displayed in a first interface for receiving a first operation on the first material.
With reference to the first aspect, in an optional implementation manner, the method further includes: any one of the N second video frames is displayed on the second interface.
With reference to the first aspect, in an optional implementation manner, the method further includes: detecting a third operation acting on the save control in the second interface; and responding to a third operation, storing the N second video frames as second videos, wherein the second videos are videos obtained by adding second materials to the first videos.
After implementing the method provided by the embodiment, the electronic device can package the edited video frame into a video according to the detected user operation of storing the video by the user, and store the video frame into the local storage space for the user to browse, forward and the like at any time.
With reference to the first aspect, in an optional implementation manner, after any one of the N second video frames is displayed on the second interface, the method further includes: detecting a fourth operation acting on the N second video frames, the fourth operation being an editing operation for changing the display effects of the N second video frames; in response to the fourth operation, display effects of the N second video frames are updated.
With reference to the first aspect, in an optional implementation manner, an editing operation for changing display effects of the N second video frames includes at least one of: adding a first material and deleting a second material; the operations of adding the first material and deleting the second material are used for updating the color value of the pixel point of one or more frames of the N second video frames.
In combination with the first aspect, in an alternative embodiment, the first color gamut is BT709 and the second color gamut is BT2020.
In this way, the electronic device can convert the material with the color gamut BT709 into the material with the color gamut BT 2020. When the video editor is used for editing the video to be edited with the color gamut of BT2020, the electronic device may superimpose the material with the color gamut of BT2020 obtained after the color gamut conversion on the video to be edited with the color gamut of BT2020, so that the color gamut of the video after editing is BT2020, and quality degradation of the video after editing is avoided, thereby improving use experience of a user.
With reference to the first aspect, in an optional implementation manner, the first material is standard dynamic range SDR material, the second material is high dynamic range HDR material, and the first video and the second video are HDR video.
With reference to the first aspect, in an optional implementation manner, the electronic device includes a video editing application (APP), an open graphics library (OpenGL), and a third-party software development kit (SDK). Performing color gamut conversion on the first material to obtain the second material includes: in response to a first request message from the SDK, OpenGL changes the color coding format of the first material into an RGB format, where the first request message is sent to OpenGL by the SDK after receiving a second request message from the APP, the first request message includes the first material, and the second request message is used to request that the SDK be called to perform color gamut conversion on the first material to obtain the second material; OpenGL sends the first material in the RGB format to the SDK; the APP receives, from the SDK, the first material in the RGB format and a callback for performing color gamut conversion on the first material, and sends the first material in the RGB format to OpenGL; and OpenGL performs color gamut conversion on the first material in the RGB format to obtain the second material, the color coding format of the second material being the RGB format.
With reference to the first aspect, in an optional implementation manner, superimposing the second material onto one or more of the N first video frames to obtain N second video frames includes: the APP sends the second material and the N first video frames to the SDK, where the second material is sent to the APP by OpenGL; the SDK sends a third request message to OpenGL, where the third request message is used to request that the second material be superimposed onto the N first video frames and includes the second material and the N first video frames; and OpenGL superimposes the second material onto the N first video frames to obtain the N second video frames.

With reference to the first aspect, in an optional implementation manner, the method further includes: OpenGL sends the N second video frames to the SDK; the SDK sends the N second video frames to the APP; the APP sends the N second video frames to OpenGL; and OpenGL stores the N second video frames in a first video memory.
With reference to the first aspect, in an optional implementation manner, displaying any one of the N second video frames on the second interface includes: OpenGL outputs the N second video frames stored in the first video memory to a second video memory, where the second video memory is applied for from the memory by the FrameWork; the FrameWork acquires the N second video frames from the second video memory and sends a callback for rendering the N second video frames to the APP; and the APP displays any one of the N second video frames on the second interface.
In a second aspect, an embodiment of the present application provides an electronic device, including: the touch screen, the camera, one or more processors and one or more memories; the one or more processors are coupled with the touch screen, the camera, the one or more memories for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of the first aspect or any of the alternative implementations of the first aspect.
In a third aspect, the present application provides a chip system for application to a device, the chip system comprising one or more processors adapted to invoke computer instructions to cause the device to perform a method according to the first aspect or any of the alternative embodiments of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a device, cause the electronic device to perform a method according to the first aspect or any of the alternative embodiments of the first aspect.
In a fifth aspect, the present application provides a computer readable storage medium comprising instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect or any of the alternative embodiments of the first aspect.
Drawings
FIGS. 1A-1I are a set of user interface diagrams provided by embodiments of the present application;
fig. 2 is a schematic software architecture of an electronic device 100 according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a video editing method according to an embodiment of the present application;
FIG. 4 is a flowchart of an electronic device 100 initializing a video editing environment provided by an embodiment of the present application;
fig. 5 is a flowchart of an electronic device 100 editing HDR video provided in an embodiment of the present application;
fig. 6 is a schematic hardware structure of an electronic device 100 according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of protection of the present disclosure.
Currently, most electronic devices are capable of supporting the capture of HDR video. The HDR video adopts a wider color and brightness range than the common video, so that a more real picture effect can be presented in the HDR video, and a better viewing effect can be brought to a user.
In most HDR video, the bit depth representing color is 10 bits. Bit depth refers to the number of bits a computer uses to record the color of a digital image. A bit depth of 10 bits means each color component is recorded with 10 bits, which can represent 2^10 = 1024 values (0-1023). In SDR video and SDR images, the bit depth representing color is 8 bits, with which the computer can represent 2^8 = 256 values (0-255).
The color gamut represents the range of colors that can be displayed under a given video encoding. HDR video uses the BT2020 color gamut, while SDR video and SDR images use the BT709 color gamut. Therefore, compared with SDR video and SDR material, HDR video can use more colors, covers a wider color range, and supports a higher display luminance range. As a result, HDR video can support richer image colors and more vivid image detail, which enables HDR video to present a more realistic picture effect and thus improves the user experience.
HDR video and SDR video are not limited to the BT2020 and BT709 color gamuts; other types of color gamut may also be used. In general, however, the color gamut used by HDR video is wider than that used by SDR video, providing more colors and more detail.
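For reference, a common way to map colors between two such gamuts is a 3x3 matrix applied to linear-light RGB. The present application does not specify a conversion formula; the sketch below uses the BT.709-to-BT.2020 primaries matrix from ITU-R BT.2087 and is offered only as an illustration.

```java
// Linear-light BT.709 RGB -> BT.2020 RGB, using the ITU-R BT.2087 matrix.
// This is background math, not a formula taken from the present application.
public final class Bt709ToBt2020 {

    private static final float[][] M = {
            {0.6274f, 0.3293f, 0.0433f},
            {0.0691f, 0.9195f, 0.0114f},
            {0.0164f, 0.0880f, 0.8956f},
    };

    /** Maps one linear-light BT.709 pixel {r, g, b} to BT.2020. */
    public static float[] convert(float[] rgb709) {
        float[] rgb2020 = new float[3];
        for (int row = 0; row < 3; row++) {
            rgb2020[row] = M[row][0] * rgb709[0]
                         + M[row][1] * rgb709[1]
                         + M[row][2] * rgb709[2];
        }
        return rgb2020;
    }
}
```

In an OpenGL-based pipeline such as the one described below, this multiplication would typically run in a fragment shader over every texel of the material.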
After capturing the video, the user generally performs editing operation on the captured video, so that the edited video can more meet the personalized requirements of the user. For example, after capturing the HDR video (denoted as HDR video 1), in response to the operation of adding the special effects by the user, the electronic device 100 may add the image content of the special effects (such as the decal, the text, the animation special effects, etc.) to the HDR video 1, so that the edited HDR video has the corresponding image effects described above.
Typically, the image content used in editing (stickers, text, animated special effects, and the like) is an SDR image (also referred to as SDR material). In this case, after SDR material is used in the process of editing the HDR video (10 bits), the edited video finally output by the electronic device 100 is an SDR video (8 bits), so that the quality of the edited HDR video is reduced (the bit depth changes from 10 bits to 8 bits, and the color gamut changes from BT2020 to BT709), which in turn degrades the user experience.
In order to solve the above problems, embodiments of the present application provide a video editing method. The video editing method can be applied to the electronic equipment with the image processing capability.
By implementing the video editing method provided by the embodiment of the present application, the electronic device 100 may convert the SDR material with the color gamut of BT709 into the HDR material with the color gamut of BT2020, so that after the SDR material is added in the process of editing the HDR video by the electronic device 100, the output edited video is still the HDR video.
Specifically, after receiving an operation to edit HDR video in 10-bit (BT2020, YUV) format, the electronic device 100 may use the capability provided by the open graphics library (OpenGL) to convert the HDR video data from 10-bit (BT2020, YUV) format into 10-bit (BT2020, RGB) format, and then render the HDR video data in 10-bit (BT2020, RGB) format so that the video to be edited is displayed in the main interface of the video editor.
After the HDR video data to be edited is displayed in the main interface of the video editor, if the electronic device 100 receives an operation of adding three-party SDR material from the user, it may first notify the three-party software development kit (SDK) that the video being edited is an HDR video and that the three-party SDR material needs to be converted into HDR material; the three-party SDK then sends the three-party SDR material in (BT709, RGB) format to the video editor and requests invocation of the SDR-to-HDR conversion capability. Next, the electronic device 100 sends the (BT709, RGB) three-party SDR material to OpenGL through the video editor, so that the capability provided by OpenGL can be used to convert it into three-party HDR material in (BT2020, RGB) format. The electronic device 100 then sends the (BT2020, RGB) three-party HDR material back to the video editor via OpenGL. Further, the HDR video data being edited and the (BT2020, RGB) three-party HDR material are sent to the three-party SDK through the video editor. The three-party SDK superimposes the (BT2020, RGB) three-party HDR material on the HDR video data being edited, thereby obtaining video data with the material added. Finally, the electronic device 100 renders the video data with the material added, so that the HDR video data with the material added is displayed in the main interface of the video editor.
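The per-pixel math behind this SDR-to-HDR upgrade is not spelled out in the application. One plausible path is sketched below under explicit assumptions: an sRGB-like 2.2 gamma for the BT.709 input, an SDR reference white of 203 nits, PQ (SMPTE ST 2084) output, and the ITU-R BT.2087 primaries matrix. Treat it as one possible realization, not the claimed method.

```java
// Hypothetical per-pixel SDR (BT.709) -> HDR (BT.2020, PQ) upgrade.
// Transfer functions, reference white, and the matrix are assumptions; the
// application only states that the material's gamut is converted to BT2020.
public final class SdrToHdrUpgradeSketch {

    // BT.709 -> BT.2020 primaries matrix (ITU-R BT.2087), applied in linear light.
    private static final float[][] M = {
            {0.6274f, 0.3293f, 0.0433f},
            {0.0691f, 0.9195f, 0.0114f},
            {0.0164f, 0.0880f, 0.8956f},
    };

    private static final float SDR_WHITE_NITS = 203f;   // assumed SDR reference white
    private static final float PQ_PEAK_NITS = 10000f;   // PQ normalization constant

    /** Input: non-linear BT.709 R'G'B' in [0,1]. Output: PQ-encoded BT.2020 R'G'B'. */
    public static float[] upgrade(float[] rgb709) {
        // 1. Undo the SDR transfer function (approximated with a 2.2 gamma).
        float[] lin709 = new float[3];
        for (int i = 0; i < 3; i++) {
            lin709[i] = (float) Math.pow(Math.max(rgb709[i], 0f), 2.2);
        }
        // 2. Re-express the colors with BT.2020 primaries.
        float[] lin2020 = new float[3];
        for (int row = 0; row < 3; row++) {
            lin2020[row] = M[row][0] * lin709[0] + M[row][1] * lin709[1] + M[row][2] * lin709[2];
        }
        // 3. Map SDR white to an absolute luminance and apply the PQ inverse EOTF.
        float[] pq = new float[3];
        for (int i = 0; i < 3; i++) {
            pq[i] = pqEncode(lin2020[i] * SDR_WHITE_NITS / PQ_PEAK_NITS);
        }
        return pq;
    }

    // SMPTE ST 2084 inverse EOTF; input is luminance normalized to 10000 nits.
    private static float pqEncode(float y) {
        double m1 = 2610.0 / 16384.0, m2 = 2523.0 / 4096.0 * 128.0;
        double c1 = 3424.0 / 4096.0, c2 = 2413.0 / 4096.0 * 32.0, c3 = 2392.0 / 4096.0 * 32.0;
        double yp = Math.pow(Math.max(y, 0.0), m1);
        return (float) Math.pow((c1 + c2 * yp) / (1.0 + c3 * yp), m2);
    }
}
```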
By implementing the video editing method, the electronic device 100 not only meets the personalized requirement of editing the HDR video (10 bits) by a user, but also ensures that the video output after editing is still the HDR video (10 bits) without reducing the quality of the edited video.
Alternatively, the electronic device 100 may include, but is not limited to, a cell phone, a tablet, a desktop computer, a laptop, a handheld computer, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device. The graphics processor of the electronic device has the capability of directly editing and saving 10-bit-depth video. The embodiment of the application does not particularly limit the specific type of the electronic device.
Fig. 1A to fig. 1I are a set of user interface diagrams provided in an embodiment of the present application, and an application scenario for implementing the video editing method provided in the embodiment of the present application is specifically described below with reference to fig. 1A to fig. 1I.
Fig. 1A is a user interface, i.e., home page, of an electronic device 100 displaying an icon of an installed application provided in an embodiment of the present application. As shown in fig. 1A, the main page displays a plurality of application icons, such as a "clock" application icon, a "calendar" application icon, a "weather" application icon, and the like.
The plurality of application icons in the home page includes a "gallery" application (hereinafter "gallery") icon, i.e., icon 111. The electronic device 100 may detect a user operation on the icon 111. Such as a click operation, a long press operation, and the like. In response to the above, the electronic apparatus 100 may display the user interface shown in fig. 1B.
Fig. 1B is a main interface of a "gallery" when the "gallery" is running on the electronic device 100 according to an embodiment of the present application. The interface may be presented with one or more pictures or videos. Wherein the one or more videos include HDR video, LOG video, and other types of video, such as SDR video. The LOG video refers to a low-saturation, low-brightness video captured in LOG mode, and may also be referred to as a LOG gray scale. The bit depth of the HDR video and the LOG video is 10 bits; the bit depth of SDR video is 8 bits.
As shown in fig. 1B, the video indicated by the icon 121 may be LOG video; the video indicated by icon 122 may be HDR video; the video indicated by the icon 123 may be SDR video. When the electronic device 100 presents an HDR video or LOG video, an icon indicating the video may display the type to which the video belongs. In this way, the user can learn the type of video through the information displayed in the icon. For example, the lower left corner of the icon 121 shows a LOG; the lower left corner of the icon 122 shows HDR. The video in fig. 1B that is not marked HDR or LOG is SDR video.
The electronic device 100 may detect a user operation on the icon 121, and in response to the operation, the electronic device 100 may display the user interface shown in fig. 1C.
Fig. 1C shows a user interface of the electronic device 100 specifically showing a certain picture or video. As shown in fig. 1C, the user interface may include a window 131. Window 131 may be used to display a picture or video that the user selects to view. For example, in fig. 1B, the user selects the picture or video browsed to be the HDR video indicated by icon 122 (denoted as video a). Thus, video a may be displayed in window 131.
The user interface also includes icons 132, controls 133. Icon 132 may be used to represent the type of video displayed in window 131. For example, "HDR" displayed in the current icon 132 may indicate that "video a" is a video of the HDR type. Referring to fig. 1B, when the video selected by the user is a LOG video (e.g., the video indicated by the selected icon 121), a "LOG" typeface may be displayed in the icon 132; when the user selected video is SDR video (e.g., video indicated by selecting icon 123), an "SDR" typeface or the like may be displayed in icon 132.
The control 133 may be used to receive user operations to edit a video (or picture) and display a user interface to edit the video (or picture).
The user interface may also include a control 134, a share control 135, a favorites control 136, a delete control 137, and the like. Control 134 may be used to present detailed information of the video such as time of capture, location of capture, color coding format, code rate, frame rate, pixel size, and so forth.
The sharing control 135 may be used to send video a to other applications for use. For example, upon detecting a user operation on the sharing control, in response to the operation, the electronic device 100 may display icons of one or more applications, including an icon of social software a (e.g., QQ, WeChat, etc.). Upon detecting a user operation acting on the application icon of social software a, in response to the operation, the electronic device 100 may send video a to social software a, through which the user may further share the video with friends.
The collection control 136 may be used to mark video. In the user interface shown in fig. 1C, upon detecting a user operation on the favorites control, in response to the operation, the electronic device 100 can mark video a as a favorite video of the user. The electronic device 100 may generate an album for displaying videos that are marked as user likes. In this way, in the case where the video a is marked as a user's favorite video, the user can quickly view the video a through the album in which the user's favorite video is shown.
The delete control 137 may be used to delete video a.
When the electronic device 100 detects a user operation on the control 133, a user interface shown in fig. 1D may be displayed in response to the operation.
Fig. 1D is a user interface for editing a video (or picture) by a user provided in an embodiment of the present application. As shown in fig. 1D, the user interface may include a window 141, a window 142, an operation bar 143, and an operation bar 144.
Window 141 may be used to display a preview image of the edited HDR video. Typically, window 141 will display a cover video frame of the video. When a user operation on the play button 145 is detected, the window 141 may sequentially display a video frame stream of the video, i.e., play the video.
Window 142 may be used to display a stream of video frames of the edited video. The user may drag window 142 to adjust the video frames displayed in window 141. Specifically, a scale 147 is also shown in fig. 1D. The electronic device 100 may detect a user operation on the window 142 to slide left or right, and in response to the user operation, the position of the video frame stream where the scale 147 is located is different, and at this time, the electronic device 100 may display the video frame where the current scale 147 is located in the window 141.
Icons of a plurality of video editing operations can be displayed in the operation fields 143 and 144. Generally, one icon displayed in the operation field 143 indicates one edit manipulation category. The operation field 144 may display video editing operations belonging to the selected operation category in the current operation field 143 according to the selected operation category. For example, the operation field 143 includes "clip". The "clip" displayed in bold may indicate that the type of video editing operation currently selected by the user is "clip". At this time, displayed in the operation field 144 are some operations belonging to the "clip" class, such as "cut", "volume", "frame", and the like.
For example, the electronic device 100 may detect a user operation on the "split" control, in response to which the electronic device 100 may display one or more operational controls of the split video. The electronic device 100 may record a user's splitting operation, such as a start time and an end time of a first video segment, a start time and an end time of a second video segment, and so on.
The operation field 143 also includes "text", "sticker", "animated special effect", and the like. One or more of the images for "text", "sticker", and "animated special effect" are SDR images (also referred to as SDR material), and the color gamut of these SDR materials is BT709.
The user interface also includes a save control 146. When a user operation on save control 146 is detected, in response to the operation, the electronic device 100 can save the video in its current state. The video in the current state may be a video to which editing operations have been applied, or one to which no editing operation has been applied.
The electronic device 100 may detect a user operation acting on the "sticker" control in the operation field 143, and in response to the operation, the electronic device 100 may display the user interface shown in fig. 1E.

Fig. 1E is a diagram of a user interface, displayed by the electronic device 100, for adding a sticker to a video according to an embodiment of the present application. As shown in fig. 1E, a plurality of sticker options are included under "sticker". Each sticker option corresponds to a sticker with a different picture display effect. The user may select one of the plurality of stickers provided by the electronic device 100. In response to the user operation of selecting a sticker, the electronic device 100 may perform, on the edited video, the image processing indicated by the sticker selected by the user, so that the picture of the processed video has a display effect consistent with that of the selected sticker.

As shown in fig. 1E, a plurality of sticker controls may be displayed in the interface for selecting a sticker, such as sticker control 151, sticker control 152, sticker control 153, sticker control 154, sticker control 155, and so forth. Each of the above sticker controls indicates an image processing method of rendering an image using a sticker.

First, when the user interface shown in fig. 1E is displayed, the electronic device 100 by default sets the currently used sticker to "none", i.e., no sticker is added. Then, when an operation by the user on a certain sticker control is detected, in response to the operation, the electronic device 100 may convert the color gamut of the sticker selected by the user to the same color gamut as that of the HDR video displayed in the window 141 (i.e., convert the color gamut of the selected sticker from BT709 to BT2020), and thereafter display the selected sticker in the window 141. For example, the electronic device 100 detects a user operation on the sticker control 152; in response to the operation, the electronic device 100 may convert the color gamut of the heart-shaped sticker in that control to BT2020, after which the electronic device 100 may display the user interface shown in fig. 1F.

Fig. 1F is a diagram illustrating a user interface of the electronic device 100 after a sticker has been added to the video, according to an embodiment of the present application. As shown in fig. 1F, upon detecting the user operation on the sticker control 152, the electronic device 100 may highlight the sticker control 152, e.g., enlarge the sticker control 152, bold it, or set it to a highlighted state, which is not limited in the embodiments of the present application.

It will be appreciated that the electronic device 100 does not necessarily render the entire video using "sticker 2" at this point. Generally, to save computing resources, the electronic device 100 may only render the video frame displayed in the current window, or in some embodiments, the electronic device 100 may also process the cover video frame using other simple image processing means, so that the processed image shows the effect of "sticker 2" when previewed.

The user interface also includes a confirmation control 148 ("v") and a cancel control 149 ("x"). Upon determining that the currently selected sticker meets his or her needs, the user may click the confirmation control 148.

Of course, when determining that the currently selected sticker does not meet his or her needs, the user may click other sticker controls to select other stickers. In response to a user operation acting on any sticker control, the electronic device 100 may display, in window 141, the video after the sticker indicated by that sticker control has been added. When none of the stickers provided by the electronic device 100 meets the user's needs, or when the user gives up adding a sticker, the user may click the cancel control 149. In response to the above user operation, the electronic device 100 may display the user interface shown in fig. 1D.
The electronic device 100 may detect a user operation acting on the "text" control in the operation field 143, in response to which the electronic device 100 may display the user interface shown in fig. 1G.
Fig. 1G is a diagram of an electronic device 100 displaying a user interface that provides a user with text added to a video, as provided in an embodiment of the present application. As shown in fig. 1G, the "text" control in the operation field 143 may be bolded to indicate that the type of editing operation currently selected by the user is "text". Meanwhile, the editing controls displayed in the operation field 144 are replaced with corresponding operation controls under the "text" operation, including "head" and "tail". Wherein, the "head" and "tail" include multiple text templates.
As shown in fig. 1G, first, the electronic device 100 may display a text template of "title" such as "none" 151, "title 1," "title 2," "title 3," "title 4," "title 5," and so forth. The electronic device 100 may detect a user operation acting on any of the templates described above, for example, a user operation acting on "title 5", and in response to the above operation, the electronic device 100 may convert the color gamut of "title 5" selected by the above user into the same color gamut as that of the HDR video displayed in the window 141 (i.e., convert the color gamut of "title 5" from BT709 to BT 2020), and thereafter display the head effect of "title 5" in the window 141. Meanwhile, the electronic device 100 may display the user interface shown in fig. 1H.
The electronic device 100 can then detect a user operation on the confirmation control 148. At this time, the electronic apparatus 100 may confirm that the user selects the editing operation using the title shown as "title 5".
When the electronic device 100 detects that the user edits the "tail", the process of adding the tail to the edited video may refer to the process of adding the head described above, and is not repeated here. In addition, the electronic device 100 may provide more editing capabilities, which are not exemplified here one by one.
Fig. 1E-1H are operational processes of an electronic device 100 provided in an embodiment of the present application to receive an addition of SDR material (decals, text, etc.) in an HDR video by a user. After detecting the operation of adding the SDR material, the electronic device 100 may convert the SDR material into an HDR material, and then, the electronic device 100 superimposes the HDR material on the HDR video displayed in the window 141, to obtain an HDR video after adding the material, and further, store the HDR video after adding the material.
As shown in fig. 1H, the electronic device 100 may detect a user operation on the save control 146, in response to which the electronic device 100 may perform a calculation to save the HDR video after adding the material. After the save is completed, the electronic device 100 may display the user interface shown in fig. 1I. In contrast to the user interface shown in fig. 1C, the video shown in window 131 is HDR video after adding material.
Alternatively, the electronic device 100 may save the HDR video after adding the material as a new HDR video. In this way, the electronic device 100 may provide both pre-editing HDR video and post-editing personalized HDR video (i.e., HDR video after adding material) to the user.
By implementing the method described in fig. 1A-1I, a user may add SDR material during editing of the photographed 10bit HDR video, and after the addition of the material is completed, save the video after the addition of the material as the 10bit HDR video. Compared with a general method for saving 10-bit HDR video added with SDR materials as 8-bit SDR video, the video editing method provided by the embodiment of the application can ensure that the video quality of the edited video is not reduced, so that the use experience of a user is not reduced.
The specific process by which the electronic device 100 implements the video editing capabilities shown in fig. 1A-1I is described below.
First, referring to fig. 2, fig. 2 is a schematic software architecture of an electronic device 100 according to an embodiment of the present application.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
The layered architecture divides the software into several layers, each with clear roles and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime (Android Runtime) and system libraries, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application package may include camera, gallery, video, music, navigation, calendar, map, WLAN, etc. applications. In the embodiment of the application, the application program layer further comprises a video editing application. The video editing application has video data processing capability, and can provide video editing functions for users, including video data processing such as cutting, rendering, adding materials and the like. The user interfaces shown in fig. 1D-1I may be viewed as user interfaces provided for the video editing application described above.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
In an embodiment of the present application, the application framework layer further includes a media framework. A plurality of tools for editing video and audio are provided in the media frame. Wherein the tool comprises MediaCodec. MediaCodec is an Android-supplied class for encoding and decoding audio and video, and includes an encoder and a decoder.
Wherein an encoder may convert one form of video or audio input to the encoder into another form by a compression technique, and a decoder performs a reverse process of encoding, and may convert one form of video or audio input to the decoder into another form by a decompression technique.
For example, the video input to the decoder may be HDR video, which is composed of N video frames of color gamut BT2020, where N is an integer greater than 1. After receiving the HDR video, the decoder may split the video composed of the N video frames with the color gamut BT2020 into N independent video frames for the subsequent electronic device 100 to perform image processing on each video frame.
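As a rough sketch of how such a decoder might be set up with Android's MediaCodec and MediaExtractor classes (the BufferQueue handling, output loop, and error handling described later are omitted, and the file-path parameter is a placeholder):

```java
// Minimal decoder-creation sketch using Android's MediaCodec; only the track
// selection and configuration steps are shown, not the full decode loop.
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;

public final class HdrDecoderSketch {

    public static MediaCodec createDecoder(String videoPath) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(videoPath);

        // Find the video track (e.g. HEVC for 10-bit HDR recordings).
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i);
                MediaCodec decoder = MediaCodec.createDecoderByType(mime);
                // A Surface (e.g. one backed by an OpenGL texture) could be passed
                // here so decoded BT2020 YUV frames go straight to the GPU.
                decoder.configure(format, /* surface */ null, null, 0);
                decoder.start();
                return decoder;
            }
        }
        throw new IllegalArgumentException("no video track in " + videoPath);
    }
}
```

In the flow of fig. 4 and fig. 5, a decoder created in this way plays the role of splitting the HDR video written into the BufferQueue into individual (BT2020, YUV) video frames.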
The Android Runtime includes a core library and virtual machines. The Android Runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc. The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications. Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The open graphics library (OpenGL) provides a plurality of image rendering functions that can be used to draw scenes ranging from simple graphics to complex three-dimensional scenes. In the embodiment of the present application, the OpenGL provided by the system library may be used to provide graphic image editing operations for the video editing application, for example, the operation of adding a sticker and the operation of adding text described in the foregoing embodiments.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
Referring to fig. 3, fig. 3 is a flowchart of a video editing method according to an embodiment of the present application. The following describes the flow of the video editing method provided in the embodiment of the present application with reference to the user interfaces shown in fig. 1A-1I and the software architecture of the electronic device 100 shown in fig. 2.
S301, the electronic device 100 determines the HDR video to be edited selected by the user.
When displaying image resources such as pictures, videos, etc. stored in the gallery for viewing by a user, the electronic device 100 may display an edit control. The editing control may provide a service for the user to edit the currently displayed image asset. The video editing method provided by the embodiment of the application is mainly applied to video image resources. The following embodiments will take video as an example, and describe a video editing method provided in the embodiments of the present application.
Referring to the user interface shown in fig. 1B, the electronic device 100 may detect a user operation on the icon 122, in response to which it is determined that the user has selected to edit the HDR video indicated by the icon 122. At the same time, the electronic device 100 may display the user interface described in FIG. 1C.
S302, the electronic device 100 initializes the video editing environment.
Upon detecting a user operation on the editing control, the electronic device 100 can initialize a video editing environment. Initializing a video editing environment refers to creating or applying for tools, storage space required to edit a video so that the electronic device 100 can perform data processing of the edited video.
Initializing the video editing environment includes: creating a decoder, creating OpenGL, and applying for video memory used to cache video frames. The decoder can be used to split the video to be edited into a sequence of video frames; OpenGL can be used to adjust video frames and/or modify pixels in video frames to change the image content included in the video, i.e., to render the video frames. Adjusting the video frames includes adding or removing video frames and modifying the size of video frames.
The video memory includes SurfaceA, SurfaceB, and a BufferQueue. SurfaceA may be used to display the HDR video in its color gamut; SurfaceB may be used by OpenGL to cache rendered video frames. The BufferQueue may be used to cache the video to be edited that is input by the video editing application. The decoder may split the video to be edited stored in the BufferQueue into a sequence of video frames to be edited.
Specifically, referring to fig. 4, fig. 4 is a flowchart of initializing a video editing environment by the electronic device 100 according to an embodiment of the present application. As shown in fig. 4, the VideoEditor may be used to represent a video editing Application (APP).
First, (1) the electronic device 100 may detect a user's operation to click on an edit control for HDR video (BT 2020, 10 bit). Referring to the user interface shown in fig. 1C, a user operation on the edit control 133 can be referred to as a user operation to click on the edit control for HDR video.
(2) In response to the above, the VideoEditor may determine the type of video to be edited. In the embodiment of the present application, the video to be edited is HDR video. The color coding format adopted by the HDR video is YUV format, the color gamut is BT2020, and the bit depth is 10 bits.
Then, (3) the VideoEditor may send an interface initialization request to the application framework (FrameWork), requesting creation of a SurfaceView. SurfaceView inherits from the View class; it is essentially a View, but it has its own Surface, so a SurfaceView has a corresponding WindowState in the window management service (WindowManagerService, WMS) and a corresponding Layer in SurfaceFlinger. Rendering of the SurfaceView can be performed on a separate thread, which can have its own GL context. Because SurfaceView does not affect the main thread's response to events, it can draw in an independent thread without affecting the main thread, and with a double-buffering mechanism the picture is smoother when playing video. That is, the rendering of the SurfaceView may be put on a separate thread instead of the main thread, so the SurfaceView can render the video player's picture on its own. In addition, the developer can control interface properties of the Surface in the SurfaceView, such as its format and size, and can ensure the correct position of the interface on the screen. The SurfaceView may be used to generate a BT2020 layer.
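As an aside, a minimal SurfaceView set-up on Android looks roughly like the sketch below; the class name is illustrative and the render-thread body is omitted, so this is context for the description above rather than the patent's own code.

```java
// Minimal SurfaceView sketch: its SurfaceHolder exposes the Surface ("SurfaceA"
// in Fig. 4) that a separate render thread with its own EGL context can draw into.
import android.content.Context;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class EditorSurfaceView extends SurfaceView implements SurfaceHolder.Callback {

    public EditorSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // holder.getSurface() can now be handed to the render thread / EGL setup.
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Stop the render thread before the underlying Surface goes away.
    }
}
```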
(4) In response to the above request to create a SurfaceView, the FrameWork may apply to the memory for SurfaceA. SurfaceA may be used to display the HDR video in its color gamut.

In response to the above application, the memory may partition a block of storage space for the FrameWork as the Surface applied for by the FrameWork (i.e., SurfaceA).
The memory may provide multiple surfaces. Each Surface carries an Identity (ID) indicating the Surface. For any Surface, the Surface ID is in one-to-one correspondence with the Surface address. For example, assume that the Surface-01 ID is 01; addresses 0011-0100. When identifying that the ID of a Surface is 01, the electronic device 100 may determine that the Surface is Surface-01, and may also determine that the address of the Surface is 0011-0100; conversely, when an address used by a Surface is identified as 0011-0100, electronic device 100 may determine that the Surface is Surface-01.
(5) The memory may then return the ID and/or address of SurfaceA to the FrameWork. (6) After receiving the ID and/or address of SurfaceA, the FrameWork may send a Surface-creation-success callback to the VideoEditor. Then, (7) the VideoEditor may send a request to the FrameWork to obtain the Surface.

(8) In response to the request to acquire the Surface, the FrameWork may send the ID and/or address of SurfaceA to the VideoEditor.
(9) After receiving the ID and/or address of SurfaceA, the VideoEditor may initialize a 10-bit EGL environment and send a request to OpenGL to create an EglWindowSurface (also referred to as SurfaceB), requesting that SurfaceB be associated with SurfaceA. The request may also carry the ID and/or address of SurfaceA.

SurfaceB is used by OpenGL to cache rendered video frames, and the color gamut of the rendered video frames is BT2020.
(10) In response to the request to create SurfaceB, OpenGL may apply to the memory for SurfaceB. In response to the application, the memory may partition a block of storage space for OpenGL as the Surface applied for by OpenGL (i.e., SurfaceB). SurfaceB may be a texture in OpenGL.

(11) The memory may then return the ID and/or address of SurfaceB to OpenGL. (12) After receiving the ID and/or address of SurfaceB, OpenGL may associate SurfaceB with SurfaceA. In this way, OpenGL can output rendered video frames stored in SurfaceB into SurfaceA, so that the rendered video frames are displayed through SurfaceA.

(13) After creating SurfaceB and binding the association between SurfaceB and SurfaceA, OpenGL may return a creation-and-association-success message to the VideoEditor.
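The EGL side of steps (9) to (13) could look roughly like the following sketch using Android's EGL14 API. The config attributes for a true 10-bit / BT2020 PQ window surface are device-dependent and omitted here, so this shows only the generic pattern of creating an EglWindowSurface (SurfaceB) on top of SurfaceA, not the exact initialization used in the application.

```java
// Generic EGL14 setup sketch: create a context and a window surface ("SurfaceB")
// bound to the SurfaceView's Surface ("SurfaceA"). Attributes for a 10-bit /
// BT2020 PQ surface are device-dependent and intentionally omitted.
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

public final class EglWindowSurfaceSketch {

    public static EGLSurface bindToSurface(Surface surfaceA) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        int[] configAttribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

        int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(display, configs[0],
                EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

        // The "EglWindowSurface" (SurfaceB) rendered into by OpenGL, backed by SurfaceA.
        EGLSurface surfaceB = EGL14.eglCreateWindowSurface(display, configs[0],
                surfaceA, new int[] { EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, surfaceB, surfaceB, context);

        // After drawing a frame, EGL14.eglSwapBuffers(display, surfaceB) hands the
        // rendered buffer over for display (the "EglSwapBuffer" step in Fig. 5).
        return surfaceB;
    }
}
```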
Thereafter, (14) the VideoEditor may send a request to create a decoder to the media framework (MediaCodec). The decoder may split the video to be edited stored in the BufferQueue into a sequence of video frames to be edited.
(15) In response to the above request to create a decoder, MediaCodec may create a decoder for decoding (BT2020, YUV) video. Alternatively, MediaCodec may not need to specify, when creating the decoder, the type of video that the decoder supports decoding; the decoder may determine the type of the video to be decoded after receiving the video to be decoded that is input by the VideoEditor.

After MediaCodec creates the decoder, (16) MediaCodec can apply to the memory for a block of storage space (the BufferQueue). The BufferQueue may receive the video to be decoded that is input by the VideoEditor. Then, (17) in response to the application, the memory may allocate a BufferQueue for the decoder. (18) The memory may then return the address of the BufferQueue to MediaCodec.

After receiving the address of the BufferQueue returned by the memory, the decoder can locate the available BufferQueue in the memory according to the address. Thereafter, (19) MediaCodec may return a message to the VideoEditor indicating that the decoder was created successfully.
The processes shown in steps (1) to (19) in fig. 4 illustrate a process in which the electronic device 100 initializes a video editing environment. After the video editing environment initialization is completed, the electronic device 100 may begin performing user-selected editing operations on the video to be edited.
S303, the electronic device 100 displays any one of the HDR video frames to be edited in the first interface.
Referring to the user interface shown in fig. 1C, window 131 may provide the user with a view of the image resources stored in the electronic device 100; control 133, an editing control, may provide the user with a service for editing the video. At this time, the video to be edited is displayed in the window 131.
After detecting a user operation on the editing control, in response to the operation, the electronic device 100 may execute different editing policies on the HDR video to be edited to meet the personalized needs of the user.
Specifically, referring to fig. 5, fig. 5 is a flowchart of editing an HDR video by the electronic device 100 according to an embodiment of the present application. Steps (1) to (9) in fig. 5 show a flow in which the electronic device 100 displays any one of the HDR video frames to be edited on the first interface.
First, the electronic device 100 may detect a user operation acting on the icon 122 in the user interface shown in fig. 1B, and in response to the operation, (1) the VideoEditor may transmit the HDR video to be edited to MediaCodec. Specifically, according to the video editing environment initialization process, the VideoEditor may determine the address of the BufferQueue, applied for by MediaCodec, that is used to cache the video to be decoded. After determining the address, the VideoEditor may write the HDR video to be edited to the BufferQueue. At this time, the HDR video to be edited uses the BT2020 color gamut and the YUV color coding format. Alternatively, the HDR video to be edited may also be represented as HDR video to be edited (BT2020, YUV).
(2) When the video written into the BufferQueue is detected, MediaCodec may decode the HDR video to be edited stored in the BufferQueue by using the created decoder, so as to obtain a video frame sequence corresponding to the HDR video to be edited, which may be referred to as HDR video frames to be edited (or as N first video frames). Thus, after the HDR video to be edited is written into the BufferQueue, MediaCodec can output the above HDR video frames to be edited using the created decoder. At this time, the HDR video frames to be edited use the BT2020 color gamut and the YUV color coding format. The HDR video frame (BT2020, YUV) in fig. 5 may be used to represent the HDR video frames to be edited.
After decoding is completed, (3) MediaCodec may return the HDR video frame to be edited (BT 2020, YUV) to the VideoEditor. Accordingly, the VideoEditor receives an HDR video frame (BT 2020, YUV) to be edited.
Then, (4) the VideoEditor may send the received HDR video frame to be edited (BT 2020, YUV) to OpenGL. Accordingly, openGL receives an HDR video frame to be edited (BT 2020, YUV).
(5) After receiving the HDR video frames to be edited (BT2020, YUV) sent by the VideoEditor, OpenGL may change their color coding format. In the embodiment of the present application, OpenGL sets the color coding format of the HDR video frames to be edited to RGB, that is, changes the HDR video frames to be edited from (BT2020, YUV) format to (BT2020, RGB) format.
Therefore, after the HDR video frame to be edited is input to OpenGL, the color coding format of the HDR video frame to be edited is changed to RGB. At this point, the gamut of the HDR video frame is still BT2020, i.e., the above process does not involve a gamut change. That is, openGL may convert HDR video frames to be edited in (BT 2020, YUV) format to HDR video frames to be edited in (BT 2020, RGB) format.
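The exact YUV-to-RGB formula is not given in the application; in practice this step would run as an OpenGL fragment shader. The sketch below writes the same math in plain Java for readability, assuming full-range Y'CbCr and the BT.2020 non-constant-luminance coefficients (limited-range video would need an additional offset and scale).

```java
// BT.2020 (non-constant-luminance) Y'CbCr -> R'G'B' for one pixel, assuming
// full-range input. Illustrative only; the real conversion runs on the GPU.
public final class Bt2020YuvToRgbSketch {

    /** y in [0, 1], cb and cr in [-0.5, 0.5]; returns R'G'B' clamped to [0, 1]. */
    public static float[] convert(float y, float cb, float cr) {
        float r = y + 1.4746f * cr;
        float g = y - 0.16455f * cb - 0.57135f * cr;
        float b = y + 1.8814f * cb;
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float v) {
        return Math.max(0f, Math.min(1f, v));
    }
}
```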
(6) After converting the HDR video frames to be edited from (BT2020, YUV) format into (BT2020, RGB) format, OpenGL may store the HDR video frames to be edited in (BT2020, RGB) format in SurfaceB. In steps (10) to (11) shown in fig. 4, SurfaceB is the Surface that OpenGL applied for from the memory. In this application, SurfaceB may also be referred to as the first video memory.
(7) After the storage is successful, OpenGL may output the HDR video frames to be edited in (BT 2020, RGB) format stored in SurfaceB into SurfaceA. SurfaceA is the Surface that the FrameWork applied for to the memory in steps (4) to (5) shown in fig. 4. The effect of eglSwapBuffers in (7) is to exchange the data in SurfaceB into SurfaceA. This is because SurfaceB is only used by OpenGL to buffer rendered video frames and cannot display them, whereas SurfaceA can be used to display the HDR video frames.
In step (12) shown in fig. 4, OpenGL has bound SurfaceB and SurfaceA to each other. Therefore, OpenGL needs to output the HDR video frames to be edited in (BT 2020, RGB) format stored in SurfaceB into SurfaceA. In this application, SurfaceA may also be referred to as a second video memory.
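For illustration, the exchange performed in step (7) may be sketched as follows; the class and method names are assumptions, and only the eglSwapBuffers call itself reflects the step described above.

```java
import android.opengl.EGL14;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Minimal sketch of step (7): after rendering into the OpenGL-side surface (SurfaceB),
// eglSwapBuffers posts the rendered frame so that the FrameWork-side surface (SurfaceA)
// can consume and display it.
public final class FrameSwapper {
    public static void publishFrame(EGLDisplay display, EGLSurface windowSurface) {
        // Drawing commands issued before this call render into the back buffer;
        // swapping makes the result visible to the consumer of the surface.
        if (!EGL14.eglSwapBuffers(display, windowSurface)) {
            throw new RuntimeException("eglSwapBuffers failed: 0x"
                    + Integer.toHexString(EGL14.eglGetError()));
        }
    }
}
```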
(8) The FrameWork may obtain the HDR video frame to be edited in (BT 2020, RGB) format from SurfaceA and return the HDR video frame to be edited to the VideoEditor. (9) The VideoEditor may display the HDR video frame to be edited in the preview area of the main interface. At this time, the electronic device 100 may display any one of the HDR video frames to be edited in the above-described (BT 2020, RGB) format. The HDR video frame to be edited is recorded using a PQ curve, and thus may also be represented as the HDR video frame to be edited (BT 2020, RGB, PQ).
Referring to the user interface shown in fig. 1D, after implementing the method shown in steps (1) to (9) in fig. 5, the electronic device 100 may display any one of the above-described HDR video frames to be edited in the window 141.
S304, the electronic device 100 detects a first operation acting on the SDR material in the first interface, and performs color gamut conversion on the SDR material to obtain an HDR material in response to the first operation.
Steps (10) to (16) in fig. 5 show a flow of performing color gamut conversion on the first material by the electronic device 100 to obtain the second material. Next, the processing procedure of the electronic device 100 for performing color gamut conversion on the first material to obtain the second material will be described with reference to steps (10) to (16) in fig. 5. The three-party SDR material in fig. 5 may also be referred to as an SDR material, and the upgraded three-party SDR material may also be referred to as an HDR material.
First, (10) the electronic device 100 may detect a first operation on the three-party SDR material (e.g., the material indicated by icon 152) in the user interface shown in fig. 1E. (11) In response to this operation, the VideoEditor may send a message to the three-party SDK requesting invocation of the three-party SDK rendering capability, notify the three-party SDK that the video to be edited is an HDR video, and request upgrading of the three-party SDR material to an HDR material. In this application, the request to upgrade the three-party SDR material to the HDR material may also be referred to as a second request message.
In response to the request, (12) the three-party SDK sends a request to OpenGL to change the color-coded format of the three-party SDR material to RGB format. This is because the three-party SDR material stored in the three-party SDK is a picture, and when performing color gamut conversion on the three-party SDR material, the three-party SDR material in RGB format is required. In this application, the request for changing the color coding format of the three-party SDR material to the RGB format may also be referred to as a first request message.
After OpenGL receives the request to change the color coding format of the three-party SDR material to the RGB format, the three-party SDR material in picture format may be converted into the three-party SDR material in RGB format. Then, (13) OpenGL returns the three-party SDR material in RGB format to the three-party SDK. At this time, the color gamut of the three-party SDR material is BT709, and the color coding format is RGB. Alternatively, the three-party SDR material in RGB format may also be represented as the three-party SDR material (BT 709, RGB).
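As a non-limiting sketch of steps (12) to (13), the three-party SDR material stored as a picture may be decoded to RGB and uploaded as an OpenGL texture roughly as follows; the class name and the file path are assumptions made for illustration.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Sketch of steps (12)-(13): picture-format material -> three-party SDR material (BT709, RGB).
public final class SdrMaterialLoader {
    public static int loadMaterialAsRgbTexture(String picturePath) {
        Bitmap bitmap = BitmapFactory.decodeFile(picturePath);   // decode picture to RGB(A) pixels
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);  // upload as (BT709, RGB) texture
        bitmap.recycle();
        return tex[0];
    }
}
```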
After receiving the three-party SDR material (BT 709, RGB), (14) the three-party SDK may send the three-party SDR material (BT 709, RGB) to the VideoEditor together with a callback message for upgrading the three-party SDR material to an HDR material. Accordingly, the VideoEditor receives the three-party SDR material (BT 709, RGB) and the callback message from the three-party SDK. This is because the three-party SDK does not have the capability of performing color gamut conversion; if color gamut conversion is required, the three-party SDK needs to send the three-party SDR material (BT 709, RGB) to the VideoEditor first, and the VideoEditor then sends the three-party SDR material (BT 709, RGB) to OpenGL, so that the color gamut conversion of the three-party SDR material is achieved using the rendering capability of OpenGL.
Then, (15) the VideoEditor may send the three-party SDR material (BT 709, RGB) to OpenGL. (16) After receiving the three-party SDR material (BT 709, RGB), OpenGL may perform color gamut conversion on the three-party SDR material (BT 709, RGB) to convert its color gamut to the same color gamut as that of the HDR video frames to be edited. Since no conversion of the color coding format is involved at this point, the material after the color gamut conversion has a color gamut of BT2020 and a color coding format of RGB. In this case, the upgraded three-party SDR material has a color gamut of BT2020, a color coding format of RGB, and a recording standard of the PQ curve. That is, OpenGL may convert (or upgrade) the three-party SDR material in (BT 709, RGB) format into material in (BT 2020, RGB, PQ) format. Alternatively, the material in (BT 2020, RGB, PQ) format may also be referred to as the upgraded three-party SDR material or the HDR material.
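For illustration only, the color gamut conversion (upgrade) of step (16) may be sketched with the following fragment shader, embedded as a Java string. The BT.1886-style linearization, the 203-nit reference white for SDR content, and the exact matrix values are assumptions made for this sketch; the embodiment only requires that the output share the BT2020 gamut and the PQ recording curve of the HDR video frames to be edited.

```java
// Sketch of step (16): upgrade three-party SDR material (BT709, RGB) to (BT2020, RGB, PQ).
public final class GamutUpgradeShader {
    public static final String FRAGMENT_SHADER =
        "#version 300 es\n" +
        "precision highp float;\n" +
        "uniform sampler2D uSdrMaterial;     // three-party SDR material (BT709, RGB)\n" +
        "in vec2 vTexCoord;\n" +
        "out vec4 outColor;\n" +
        "const mat3 BT709_TO_BT2020 = mat3(  // column-major primaries conversion matrix\n" +
        "    0.6274, 0.0691, 0.0164,\n" +
        "    0.3293, 0.9195, 0.0880,\n" +
        "    0.0433, 0.0114, 0.8956);\n" +
        "vec3 eotf709(vec3 e) { return pow(e, vec3(2.4)); }       // approximate BT.1886 display EOTF\n" +
        "vec3 oetfPq(vec3 nits) {                                  // SMPTE ST 2084 (PQ) inverse EOTF\n" +
        "    const float m1 = 0.1593017578125, m2 = 78.84375;\n" +
        "    const float c1 = 0.8359375, c2 = 18.8515625, c3 = 18.6875;\n" +
        "    vec3 y = clamp(nits / 10000.0, 0.0, 1.0);\n" +
        "    vec3 ym = pow(y, vec3(m1));\n" +
        "    return pow((c1 + c2 * ym) / (1.0 + c3 * ym), vec3(m2));\n" +
        "}\n" +
        "void main() {\n" +
        "    vec4 sdr = texture(uSdrMaterial, vTexCoord);\n" +
        "    vec3 linear709  = eotf709(sdr.rgb);                   // linearize the SDR material\n" +
        "    vec3 linear2020 = BT709_TO_BT2020 * linear709;        // change primaries to BT2020\n" +
        "    vec3 pq = oetfPq(linear2020 * 203.0);                  // map SDR white to ~203 nits, encode with PQ\n" +
        "    outColor = vec4(pq, sdr.a);\n" +
        "}\n";
}
```

The separation into linearization, a 3x3 primaries matrix, and a PQ encoding step mirrors the description above: only the color gamut and recording curve are changed, while the RGB color coding format is preserved.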
After steps (10) to (16) above, the SDR material indicated by the sticker control 152 will have been converted into HDR material.
It is understood that the electronic device 100 may detect a plurality of first operations acting on the first material. For example, in addition to detecting the operation on the sticker indicated by the sticker control 152 in the user interface shown in fig. 1E, the electronic device 100 may also detect an operation acting on one or more of the sticker control 153, the sticker control 154, the sticker control 155, and the like, or detect an operation acting on "title 5" in the user interface shown in fig. 1G (adding "title 5" at the head), and the like, and implement the method shown in steps (10) to (16) in fig. 5 to perform color gamut conversion on the first material and obtain the second material.
S305, the electronic device 100 superimposes the HDR material on one or more frames of the HDR video frames to be edited, to obtain an HDR video frame after the HDR material is superimposed.
Referring to fig. 5, steps (17) to (20) in fig. 5 show a flow in which the electronic device 100 superimposes the second material on one or more frames of the N first video frames to obtain N second video frames.
First, (17) OpenGL sends the upgraded three-party SDR material (BT 2020, RGB, PQ) to the VideoEditor. Then, (18) the VideoEditor may send the upgraded three-party SDR material (BT 2020, RGB, PQ) and the HDR video frame to be edited (BT 2020, RGB, PQ) obtained in step (9) to the three-party SDK.
After receiving the upgraded three-party SDR material (BT 2020, RGB, PQ) and the HDR video frame to be edited (BT 2020, RGB, PQ), (19) the three-party SDK may send the upgraded three-party SDR material (BT 2020, RGB, PQ) and the HDR video frame to be edited (BT 2020, RGB, PQ) to OpenGL and request that they be superimposed. In this application, the request for superimposing the upgraded three-party SDR material (BT 2020, RGB, PQ) on the HDR video frame to be edited (BT 2020, RGB, PQ) may also be referred to as a third request message.
In response to the request, (20) OpenGL superimposes the upgraded three-party SDR material (BT 2020, RGB, PQ) on the HDR video frame to be edited (BT 2020, RGB, PQ) to obtain the HDR video frame after superimposing the material (BT 2020, RGB, PQ). Alternatively, the HDR video frames after the material is superimposed may also be referred to as the N second video frames.
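As a hedged sketch of the superimposition in step (20), the upgraded material may be alpha-blended over the HDR video frame as follows. The QuadDrawer callback and the texture handles are hypothetical stand-ins for whatever draw routine OpenGL uses in the embodiment; since both inputs already share the BT2020 gamut and PQ curve, no further conversion is needed here.

```java
import android.opengl.GLES20;

// Sketch of step (20): superimpose upgraded material (BT2020, RGB, PQ) on an HDR video frame (BT2020, RGB, PQ).
public final class MaterialCompositor {
    /** Hypothetical drawing callback: draws a full-screen textured quad for the given texture. */
    public interface QuadDrawer { void draw(int textureId); }

    public static void drawMaterialOverFrame(int frameTextureId, int materialTextureId, QuadDrawer quad) {
        // 1. Draw the HDR video frame as the background.
        GLES20.glDisable(GLES20.GL_BLEND);
        quad.draw(frameTextureId);
        // 2. Draw the upgraded material on top, weighted by its alpha channel.
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        quad.draw(materialTextureId);
        GLES20.glDisable(GLES20.GL_BLEND);
    }
}
```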
S306, the electronic device 100 displays any one of the HDR video frames after the superimposed material on the second interface.
In connection with steps (21) to (27) in fig. 5, a process in which the electronic device 100 displays any one of N second video frames on the second interface will be described.
First, after obtaining the HDR video frame after superimposing the material (BT 2020, RGB, PQ), (21) OpenGL may send the HDR video frame after superimposing the material (BT 2020, RGB, PQ) to the three-party SDK.
Then, (22) the three-party SDK may send the HDR video frame after superimposing the material (BT 2020, RGB, PQ) to the VideoEditor. After receiving the HDR video frame after superimposing the material (BT 2020, RGB, PQ), (23) the VideoEditor may send the HDR video frame after superimposing the material (BT 2020, RGB, PQ) to OpenGL.
(24) OpenGL may store the received HDR video frame after superimposing the material (BT 2020, RGB, PQ) in SurfaceB. Then, (25) OpenGL may output the HDR video frame after superimposing the material (BT 2020, RGB, PQ) stored in SurfaceB into SurfaceA. SurfaceA is the Surface that the FrameWork applied for to the memory in steps (4) to (5) shown in fig. 4.
(26) The FrameWork may obtain the HDR video frame after superimposing the material in (BT 2020, RGB, PQ) format from SurfaceA and return it to the VideoEditor. (27) The VideoEditor may then display the HDR video frame after superimposing the material in the preview area of the main interface. At this time, the electronic device 100 may display any one of the above-described HDR video frames after superimposing the material (BT 2020, RGB, PQ).
Referring to the user interface shown in fig. 1F, after implementing the method shown in steps (10) to (27) in fig. 5, the electronic device 100 may display the first video frame of the above-mentioned HDR video frame (BT 2020, RGB, PQ) after superimposing the materials in the window 141.
Optionally, after displaying any one of the HDR video frames after the material is superimposed on the second interface, the electronic device 100 may further detect a fourth operation acting on the HDR video frames after the material is superimposed, where the fourth operation is an editing operation for changing the display effect of the HDR video frames after the material is superimposed; in response to the fourth operation, the electronic device 100 updates the display effect of the HDR video frames after the material is superimposed.
Optionally, the editing operation for changing the display effect of the HDR video frames after the material is superimposed may include at least one of: adding SDR material, and deleting HDR material in the HDR video frames after the material is superimposed. The operations of adding SDR material and deleting HDR material in the HDR video frames after the material is superimposed may be used to update the color values of the pixels of one or more frames of the HDR video frames after the material is superimposed.
For example, referring to fig. 1F, after the electronic device 100 displays the first video frame of the HDR video frames after superimposing the material (the sticker indicated by the sticker control 152) in the window 141, an operation of adding the sticker indicated by the sticker control 153 may also be detected, and in response to this operation, the electronic device 100 may perform the operations of steps S304 to S306 to obtain the HDR video frames after superimposing the materials (including the sticker indicated by the sticker control 152 and the sticker indicated by the sticker control 153).
As another example, referring to fig. 1F, after the electronic device displays the first video frame of the HDR video frames after superimposing the material (the sticker indicated by the sticker control 152) in the window 141, the electronic device may further detect an operation of deleting the HDR material (the sticker indicated by the sticker control 152), and in response to the operation, the electronic device 100 deletes the sticker indicated by the sticker control 152. At this time, the electronic device 100 may display the user interface shown in fig. 1E.
Optionally, S307, the electronic device 100 generates the second video according to the HDR video frame after the material is superimposed.
The second video is obtained by adding a second material to the first video.
In an alternative embodiment, the electronic device 100 may further detect a third operation on the save control in the second interface, and save the N second video frames as the second video in response to the third operation.
The user interface for editing video also includes a save control, such as the save control 146 in fig. 1D. The electronic device 100 may detect a user operation on the save control, such as the user operation on the save control 146 shown in fig. 1H. In response to the user operation, the electronic device 100 may generate the second video according to the HDR video frames after the material is superimposed (the N second video frames), and store the second video in a storage device such as a memory card or a hard disk for subsequent browsing by the user. The color gamut of the second video is BT2020.
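For illustration only, encoding the N second video frames into the second video may be sketched with the Android MediaCodec encoder configuration below. The choice of HEVC, the bitrate, frame rate, and Main10 profile are assumptions; the essential point reflected from the embodiment is that the color standard remains BT2020 and the transfer function remains PQ (ST 2084).

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// Sketch of step S307: encode the N second video frames back into an HDR video (BT2020, PQ).
public final class HdrVideoEncoder {
    public static MediaCodec createHevcHdrEncoder(int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_HEVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);   // frames arrive via a Surface
        format.setInteger(MediaFormat.KEY_PROFILE, MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 20_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        // Record the second video with the same gamut and transfer curve as the HDR source.
        format.setInteger(MediaFormat.KEY_COLOR_STANDARD, MediaFormat.COLOR_STANDARD_BT2020);
        format.setInteger(MediaFormat.KEY_COLOR_TRANSFER, MediaFormat.COLOR_TRANSFER_ST2084);
        format.setInteger(MediaFormat.KEY_COLOR_RANGE, MediaFormat.COLOR_RANGE_LIMITED);
        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_HEVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }
}
```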
In this embodiment of the present application, when the video editor is used to edit the HDR video, if the electronic device 100 detects a user operation acting on the three-party SDR material, the color gamut (BT 709) of the three-party SDR material may be converted into the color gamut identical to the color gamut (BT 2020) of the HDR video, so that the output edited video is the HDR video, which avoids the quality degradation of the edited HDR video, and further improves the use experience of the user.
Fig. 6 is a schematic hardware structure of an electronic device 100 according to an embodiment of the present application.
As shown in fig. 6, the electronic device 100 may include a processor 110, an external memory interface 120A, an internal memory 120B, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 140A, a battery 140B, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown in FIG. 6, or may combine certain components, or split certain components, or a different arrangement of components. The components shown in fig. 6 may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through an I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also provide power to the electronic device through the power management module 140A while charging the battery 140B.
The power management module 140A is configured to connect the battery 140B, and the charge management module 140 and the processor 110. The power management module 140A receives input from the battery 140B and/or the charge management module 140 to power the processor 110, the internal memory 120B, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 140A may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 140A may also be disposed in the processor 110. In other embodiments, the power management module 140A and the charge management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In an embodiment of the present application, the electronic device 100 displaying the user interface shown in fig. 1A-1I may be accomplished through the decoder, openGL, frameWork, the three-party SDK, and the display 194.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
In the embodiment of the present application, the HDR video to be edited may be obtained by the electronic device 100 from other electronic devices through a wireless communication function, or may be obtained by shooting the electronic device 100 through an ISP, a camera 193, a video codec, a GPU, and a display screen 194.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The internal memory 120B may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (NVM).
The random access memory may include a static random-access memory (SRAM), a dynamic random-access memory (dynamic random access memory, DRAM), a synchronous dynamic random-access memory (synchronous dynamic random access memory, SDRAM), a double data rate synchronous dynamic random-access memory (double data rate synchronous dynamic random access memory, DDR SDRAM, such as fifth generation DDR SDRAM is commonly referred to as DDR5 SDRAM), etc.;
the nonvolatile memory may include a disk storage device, a flash memory (flash memory).
The FLASH memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc. divided according to the operation principle; may include single-level memory cells (single-level cell, SLC), multi-level memory cells (multi-level cell, MLC), triple-level memory cells (triple-level cell, TLC), quad-level memory cells (quad-level cell, QLC), etc. divided according to the level of the memory cell; and may include universal flash storage (universal flash storage, UFS), embedded multimedia memory cards (embedded multi media card, eMMC), etc. divided according to the storage specification.
The random access memory may be read directly from and written to by the processor 110, may be used to store executable programs (e.g., machine instructions) for an operating system or other on-the-fly programs, may also be used to store data for users and applications, and the like.
The nonvolatile memory may store executable programs, store data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 110 to directly read and write.
In the embodiment of the present application, the internal memory 120B may support the electronic device 100 to apply for Surface and BufferQueue to the memory.
The external memory interface 120A may be used to connect external non-volatile memory to enable expansion of the memory capabilities of the electronic device 100. The external nonvolatile memory communicates with the processor 110 through the external memory interface 120A to implement a data storage function. For example, files such as music and video are stored in an external nonvolatile memory. In embodiments of the present application, sound may be captured by microphone 170C when electronic device 100 captures HDR video. During the playing of the video, speakers connected to speaker 170A or headphone interface 170D may support the playing of audio in the video.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 170C through the mouth, inputting a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A is of various types, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a capacitive pressure sensor comprising at least two parallel plates with conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the touch operation intensity according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location, but at different touch operation strengths, may correspond to different operation instructions. For example: and executing an instruction for checking the short message when the touch operation with the touch operation intensity smaller than the first pressure threshold acts on the short message application icon. And executing an instruction for newly creating the short message when the touch operation with the touch operation intensity being greater than or equal to the first pressure threshold acts on the short message application icon.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through the reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flip opening according to the detected open or closed state of the leather case or of the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to applications such as switching between landscape and portrait modes and pedometers.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it may be determined that there is an object in the vicinity of the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object in the vicinity of the electronic device 100. The electronic device 100 can detect that the user holds the electronic device 100 close to the ear by using the proximity light sensor 180G, so as to automatically extinguish the screen for the purpose of saving power. The proximity light sensor 180G may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is for detecting temperature. In some embodiments, the electronic device 100 performs a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by temperature sensor 180J exceeds a threshold, electronic device 100 performs a reduction in the performance of a processor located in the vicinity of temperature sensor 180J in order to reduce power consumption to implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 140B to avoid the low temperature causing the electronic device 100 to be abnormally shut down. In other embodiments, when the temperature is below a further threshold, the electronic device 100 performs boosting of the output voltage of the battery 140B to avoid abnormal shutdown caused by low temperatures.
The touch sensor 180K, also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
In the embodiment of the present application, the electronic device 100 may detect whether there is a user operation acting on the display 194 of the electronic device 100 through the touch sensor 180K. After the touch sensor 180K detects the user operation, the electronic device 100 may perform the image processing indicated by the user operation, and implement the corresponding processing.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, bone conduction sensor 180M may acquire a vibration signal of a human vocal tract vibrating bone pieces. The bone conduction sensor 180M may also contact the pulse of the human body to receive the blood pressure pulsation signal. In some embodiments, bone conduction sensor 180M may also be provided in a headset, in combination with an osteoinductive headset. The audio module 170 may analyze the voice signal based on the vibration signal of the sound portion vibration bone block obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may analyze the heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to enable contact with and separation from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. The same SIM card interface 195 may be used to insert multiple cards simultaneously. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to realize functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
When the video editing method provided by the embodiment of the application is implemented, if the SDR material is used when the video editor is used for editing the HDR video, the electronic device 100 can convert the SDR material with the color gamut of BT709 into the HDR material with the color gamut of BT2020, so that after the SDR material is added in the process of editing the HDR video by the electronic device 100, the finally output edited video is still the HDR video, thereby avoiding the quality degradation of the edited video, and further improving the use experience of a user.
In the embodiments of the present application:
1. the operation of the user clicking on the edit control for triggering the service of editing the video may be referred to as a second operation, such as the operation of clicking the control 133 in fig. 1C. Upon detecting the second operation, the electronic device 100 may display the current video, i.e., the HDR video selected by the user to be edited, may be referred to as a first video, such as the video displayed in window 131 in fig. 1C. The series of video frames resulting from decoding the first video by the MediaCodec created decoder may be referred to as HDR video frames to be edited, or N first video frames.
2. The three-way SDR material, indicated by the decal control 152-decal control 155 shown in fig. 1D-1H, and in the text templates of "title 1", "title 2", "title 3", "title 4", "title 5", etc., may be referred to as SDR material, or first material. The editing operation acting on the sticker control 152-155 shown in fig. 1D to 1H, the editing operation acting on "title 1", "title 2", "title 3", "title 4", "title 5", etc. in the text template of "title" may be referred to as an editing operation selected by the user to change the first video display effect, i.e., a first operation. After the electronic device 100 performs the operation shown in step S304, the obtained material may be referred to as an upgraded three-party SDR material, or an HDR material, or a second material.
3. Among one or more frames of the N first video frames, the video frame on which the second material is superimposed may be referred to as an HDR video frame on which the material is superimposed, or N second video frames, and a video composed of the N second video frames may be referred to as a second video.
4. BT709 may be referred to as a first gamut; BT2020 may be referred to as a second gamut.
5. The SurfaceB applied to the memory by OpenGL may be referred to as a first video memory; the surface a of the frame work to memory application may be referred to as a second video memory.
The term "User Interface (UI)" in the description and claims of the present application and in the drawings is a media interface for interaction and information exchange between an application program or an operating system and a user, which enables conversion between an internal form of information and a form acceptable to the user. The user interface of the application program is source code written in a specific computer language such as java, extensible markup language (extensible markup language, XML) and the like, the interface source code is analyzed and rendered on the terminal equipment, and finally the interface source code is presented as content which can be identified by a user, such as a picture, characters, buttons and the like. Controls (controls), also known as parts (widgets), are basic elements of a user interface, typical controls being toolbars (toolbars), menu bars (menu bars), text boxes (text boxes), buttons (buttons), scroll bars (scrollbars), pictures and text. The properties and content of the controls in the interface are defined by labels or nodes, such as XML specifies the controls contained in the interface by nodes of < Textview >, < ImgView >, < VideoView >, etc. One node corresponds to a control or attribute in the interface, and the node is rendered into visual content for a user after being analyzed and rendered. In addition, many applications, such as the interface of a hybrid application (hybrid application), typically include web pages. A web page, also referred to as a page, is understood to be a special control embedded in an application program interface, and is source code written in a specific computer language, such as hypertext markup language (hyper text markup language, GTML), cascading style sheets (cascading style sheets, CSS), java script (JavaScript, JS), etc., and the web page source code may be loaded and displayed as user-recognizable content by a browser or web page display component similar to the browser function. The specific content contained in a web page is also defined by tags or nodes in the web page source code, such as GTML defines elements and attributes of the web page by < p >, < img >, < video >, < canvas >.
A commonly used presentation form of the user interface is a graphical user interface (graphic user interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As used in the specification and the appended claims, the singular forms "a," "an," "the," and "the" are intended to include the plural forms as well, unless the context clearly indicates to the contrary. It should also be understood that the term "and/or" as used in this application refers to and encompasses any or all possible combinations of one or more of the listed items. As used in the above embodiments, the term "when …" may be interpreted to mean "if …" or "after …" or "in response to determination …" or "in response to detection …" depending on the context. Similarly, the phrase "at the time of determination …" or "if detected (a stated condition or event)" may be interpreted to mean "if determined …" or "in response to determination …" or "at the time of detection (a stated condition or event)" or "in response to detection (a stated condition or event)" depending on the context.
The terms "first", "second", and the like in the description, the claims, and the drawings of the present application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprising", "including", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a series of steps or elements is not limited to the listed steps or elements, but may optionally further include steps or elements that are not listed, or other steps or elements inherent to such process, method, article, or apparatus.
Only some, but not all, of the matters relevant to the present application are shown in the accompanying drawings. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present application, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital subscriber line), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
Those of ordinary skill in the art will appreciate that all or part of the flows of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the flows of the above method embodiments may be included. The aforementioned storage medium includes: a ROM, a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.

Claims (15)

1. A video editing method applied to an electronic device, the method comprising:
detecting a first operation acting on a first material, the first operation being used to instruct adding the first material to one or more of N first video frames, wherein the color gamut of the first material is a first color gamut, the color gamut of the N first video frames is a second color gamut, and the first color gamut and the second color gamut are different;
in response to the first operation, performing color gamut conversion on the first material to obtain a second material, wherein the color gamut of the second material is the second color gamut; and
superimposing the second material onto one or more of the N first video frames to obtain N second video frames, wherein the color gamut of the N second video frames is the second color gamut.
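As a non-limiting illustration (not part of the claim), the sequence recited above, converting the material's color gamut and then superimposing the converted material onto the selected frames, can be sketched in a few lines of Python/NumPy; the conversion matrix, the alpha mask, and all function names below are assumptions introduced purely for illustration.

    import numpy as np

    def convert_gamut(first_material: np.ndarray, matrix: np.ndarray) -> np.ndarray:
        # Map linear RGB pixels (H x W x 3) of the first material into the second color gamut.
        return first_material @ matrix.T

    def superimpose(frame: np.ndarray, second_material: np.ndarray, alpha: np.ndarray) -> np.ndarray:
        # Alpha-blend the converted material over one first video frame of the same size.
        a = alpha[..., None]
        return a * second_material + (1.0 - a) * frame

    def edit(first_video_frames, first_material, matrix, alpha):
        # The material is converted once, then added to each selected frame.
        second_material = convert_gamut(first_material, matrix)
        return [superimpose(frame, second_material, alpha) for frame in first_video_frames]

In this reading, the output frames already carry the second (wider) color gamut, so saving them does not degrade the edited video.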
2. The method of claim 1, wherein the first color gamut represents a smaller range of colors than the second color gamut.
3. The method of claim 1, wherein prior to detecting the first operation on the first material, the method further comprises:
detecting a second operation acting on a first video, wherein the second operation corresponds to an editing control and is used to trigger a service for editing the video;
in response to the second operation, decoding the first video into the N first video frames, wherein the color gamut of the N first video frames is the second color gamut; and
displaying any one of the N first video frames in a first interface, wherein the first interface is used to receive the first operation acting on the first material.
4. The method according to claim 1, wherein the method further comprises:
displaying any one of the N second video frames on a second interface.
5. The method according to claim 4, wherein the method further comprises:
detecting a third operation acting on a save control in the second interface;
in response to the third operation, saving the N second video frames as a second video, wherein the second video is a video obtained by adding the second material to the first video.
6. The method of claim 5, wherein after displaying any one of the N second video frames at the second interface, the method further comprises:
detecting a fourth operation acting on the N second video frames, the fourth operation being an editing operation that changes the display effect of the N second video frames; and
in response to the fourth operation, updating the display effect of the N second video frames.
7. The method of claim 6, wherein the editing operation that changes the display effect of the N second video frames comprises at least one of:
adding the first material, or deleting the second material;
wherein the operations of adding the first material and deleting the second material are used to update color values of pixels in one or more of the N second video frames.
8. The method of any of claims 1 to 7, wherein the first color gamut is BT709 and the second color gamut is BT2020.
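For reference only, conversion between these two gamuts is typically performed on linear-light RGB with a 3x3 primaries matrix; the sketch below uses approximate coefficients in the spirit of ITU-R BT.2087, and the helper name and the clipping are assumptions rather than a disclosed implementation.

    import numpy as np

    # Approximate linear-light RGB conversion matrix from BT.709 primaries to
    # BT.2020 primaries (coefficients in the spirit of ITU-R BT.2087).
    BT709_TO_BT2020 = np.array([
        [0.6274, 0.3293, 0.0433],
        [0.0691, 0.9195, 0.0114],
        [0.0164, 0.0880, 0.8956],
    ])

    def bt709_to_bt2020(rgb709_linear: np.ndarray) -> np.ndarray:
        # Convert linear-light BT.709 RGB pixels (..., 3) to BT.2020 primaries.
        return np.clip(rgb709_linear @ BT709_TO_BT2020.T, 0.0, 1.0)

Note that in practice any non-linear transfer function (gamma or PQ/HLG encoding) would have to be removed before this matrix is applied and re-applied afterwards; that step is omitted here for brevity.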
9. The method of any of claims 1 to 7, wherein the first material is standard dynamic range (SDR) material, the second material is high dynamic range (HDR) material, and the first video and the second video are HDR videos.
10. The method according to any of claims 1 to 7, wherein the electronic device comprises a video editing application (APP), an open graphics library (OpenGL), and a third-party software development kit (SDK), and wherein performing color gamut conversion on the first material to obtain the second material comprises:
the OpenGL changes, in response to a first request message from the SDK, the color coding format of the first material to an RGB format, wherein the first request message is sent to the OpenGL after the SDK receives a second request message from the APP, the first request message comprises the first material, and the second request message is used to request invoking the SDK to perform color gamut conversion on the first material to obtain the second material;
the OpenGL sends the first material in the RGB format to the SDK;
the APP receives, from the SDK, the first material in the RGB format and a callback for performing color gamut conversion on the first material, and sends the first material in the RGB format to the OpenGL; and
the OpenGL performs color gamut conversion on the first material in the RGB format to obtain the second material, wherein the color coding format of the second material is the RGB format.
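The message exchange recited in this claim can be pictured with the schematic Python sketch below; the class and method names (OpenGlWrapper, ThirdPartySdk, VideoEditingApp, handle_first_request, and so on) and the to_rgb placeholder are invented for illustration and do not correspond to any API disclosed by the application.

    import numpy as np

    BT709_TO_BT2020 = np.array([[0.6274, 0.3293, 0.0433],
                                [0.0691, 0.9195, 0.0114],
                                [0.0164, 0.0880, 0.8956]])

    def to_rgb(first_material):
        # Placeholder for the real color-format change (e.g. YUV to RGB) performed in OpenGL.
        return np.asarray(first_material, dtype=np.float64)

    class OpenGlWrapper:
        def handle_first_request(self, first_material):
            # First request message: change the color coding format of the material to RGB.
            return to_rgb(first_material)
        def convert_gamut(self, rgb_material):
            # Map the RGB material from the first gamut into the second gamut.
            return rgb_material @ BT709_TO_BT2020.T

    class ThirdPartySdk:
        def __init__(self, opengl):
            self.opengl = opengl
        def handle_second_request(self, app, first_material):
            # The SDK forwards the material to OpenGL for re-encoding as RGB ...
            rgb_material = self.opengl.handle_first_request(first_material)
            # ... and returns it to the APP together with a conversion callback.
            app.on_material_ready(rgb_material, callback=self.opengl.convert_gamut)

    class VideoEditingApp:
        def __init__(self, sdk):
            self.sdk = sdk
            self.second_material = None
        def request_conversion(self, first_material):
            # Second request message: ask the SDK to perform the color gamut conversion.
            self.sdk.handle_second_request(self, first_material)
        def on_material_ready(self, rgb_material, callback):
            # The APP hands the RGB material back to OpenGL via the callback.
            self.second_material = callback(rgb_material)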
11. The method of claim 10, wherein superimposing the second material onto one or more of the N first video frames to obtain the N second video frames comprises:
the APP sends the second material and the N first video frames to the SDK; the second material is sent to the APP by the OpenGL;
the SDK sends a third request message to the OpenGL, wherein the third request message is used to request superimposing the second material onto the N first video frames, and the third request message comprises the second material and the N first video frames; and
the OpenGL superimposes the second material onto one or more of the N first video frames to obtain the N second video frames.
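Continuing the same hypothetical sketch, the third request message and the superimposition step of this claim could be modeled as follows; handle_third_request, superimpose, and the alpha mask are invented names and assumptions, not a disclosed interface.

    import numpy as np

    class OpenGl:
        def handle_third_request(self, second_material, first_video_frames, alpha):
            # Third request message: superimpose the second material onto each frame.
            a = alpha[..., None]
            return [a * second_material + (1.0 - a) * frame for frame in first_video_frames]

    class Sdk:
        def __init__(self, opengl):
            self.opengl = opengl
        def superimpose(self, second_material, first_video_frames, alpha):
            # The SDK packages the second material and the N first video frames
            # into the third request message and forwards it to OpenGL.
            return self.opengl.handle_third_request(second_material, first_video_frames, alpha)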
12. The method of claim 11, wherein the method further comprises:
the OpenGL sends the N second video frames to the SDK;
the SDK sends the N second video frames to the APP;
the APP sends the N second video frames to the OpenGL;
the OpenGL stores the N second video frames in a first video memory.
13. The method of claim 12, wherein displaying any one of the N second video frames on the second interface comprises:
the OpenGL outputs the N second video frames stored in the first video memory to a second video memory, wherein the second video memory is memory applied for by a FrameWork;
the FrameWork acquires the N second video frames from the second video memory, and sends, to the APP, a callback for rendering the N second video frames; and
the APP displays any one of the N second video frames on the second interface.
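The two video memories and the render callback of claims 12 and 13 can be pictured with the toy model below; the FrameWork, OpenGl, and App classes and their methods are illustrative names only and are not asserted to match any real Android framework API.

    class App:
        def render(self, frame):
            # Display one of the N second video frames on the second interface.
            print("displaying second video frame:", frame)

    class FrameWork:
        def __init__(self, app):
            self.app = app
            self.second_video_memory = []     # memory the FrameWork applied for
        def on_frames_available(self):
            frames = list(self.second_video_memory)
            # Callback to the APP to render one of the N second video frames.
            self.app.render(frames[0])

    class OpenGl:
        def __init__(self, framework):
            self.framework = framework
            self.first_video_memory = []      # where the N second video frames are stored
        def store(self, second_video_frames):
            self.first_video_memory = list(second_video_frames)
        def output_to_second_memory(self):
            # Copy from the first video memory into the FrameWork-managed memory.
            self.framework.second_video_memory = list(self.first_video_memory)
            self.framework.on_frames_available()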
14. An electronic device, comprising a memory, a processor, and a touch screen, wherein:
the touch screen is used for displaying content;
the memory is used for storing a computer program, and the computer program comprises program instructions;
the processor is configured to invoke the program instructions to cause the electronic device to perform the method of any of claims 1 to 13.
15. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1 to 13.