WO2023016014A1 - Video editing method and electronic device - Google Patents

Video editing method and electronic device

Info

Publication number
WO2023016014A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
video frames
color
electronic device
opengl
Application number
PCT/CN2022/093042
Other languages
French (fr)
Chinese (zh)
Inventor
吴孟函 (Wu Menghan)
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Publication of WO2023016014A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/64: Circuits for processing colour signals
    • H04N9/67: Circuits for processing colour signals for matrixing

Definitions

  • The present application relates to the field of terminals, and in particular to a video editing method and an electronic device.
  • High dynamic range (HDR) video includes richer color effects and can record more image details, so that the video can present an excellent viewing effect.
  • However, many smart terminals cannot edit HDR video. As a result, after shooting an HDR video, the user cannot edit it, for example to add a video filter, which degrades the user experience.
  • The present application provides a video editing method and an electronic device. With this method, when the video to be edited is an HDR video, the electronic device can convert the HDR video in the BT2020 color gamut into an SDR video in the BT709 color gamut, so that the video to be edited can still be displayed normally when an SDR video editor is used to edit it.
  • In a first aspect, the present application provides a video editing method applied to an electronic device. The method includes: detecting a first user operation, where the first user operation corresponds to an editing control and is used to trigger a video editing service; in response to the first user operation, decoding a first video into N first video frames, where the color gamut of the N first video frames is a first color gamut; performing color gamut conversion on the N first video frames to obtain N second video frames, where the color gamut of the N second video frames is a second color gamut and the first color gamut is different from the second color gamut; and displaying any one of the N second video frames on a first interface.
  • the electronic device can convert the video in the first color gamut to the video in the second color gamut, and then display the video in the second color gamut.
  • Before displaying the video frames of the video to be edited, the electronic device can convert the color gamut of the video to be edited from the first color gamut to the second color gamut, so that the editor can normally display the video frames of the video to be edited, without problems such as an unclear display that affect the user experience.
  • the range of colors that can be represented by the second color gamut is smaller than the range of colors that can be represented by the first color gamut.
  • In a possible implementation manner, the color gamut conversion is performed on the N first video frames specifically by using a first color table, where the first color table contains a number of color values used to change the color gamut of a video frame.
  • the electronic device can use the first color table that provides a color conversion relationship to convert a video in the first color gamut into a video in the second color gamut.
  • Generally, any color table whose color values fall within the range of the second color gamut can be used to convert video in the first color gamut to video in the second color gamut. Therefore, the first color table can take various forms, and there are likewise various ways of using the first color table to map color values and thereby realize the color gamut conversion.
  • In a possible implementation manner, the method further includes: detecting a second user operation, where the second user operation is an editing operation selected by the user to change the display effect of the video; in response to the second user operation, increasing or decreasing the number of second video frames, and/or changing the number of pixels of one or more second video frames in the N second video frames, and/or changing the color values of the pixels of one or more second video frames in the N second video frames, so as to obtain M third video frames, where M and N are equal or not equal; and displaying any one of the M third video frames on the first interface.
  • In this way, the electronic device can also detect editing operations selected by the user to change the display effect of the first video, such as splitting the video, deleting video frames, adding a title or credits, cropping the picture size, adding filters, and so on.
  • The electronic device can perform the above-mentioned editing operation and display the edited video frames. In this way, whenever the user selects an editing operation, the user can immediately see how the video looks after the editing operation is performed.
  • In a possible implementation manner, the method further includes: detecting a third user operation, where the third user operation corresponds to a save control; in response to the third user operation, saving the M third video frames as a second video, where the second video is an edited video obtained by editing the first video; and displaying the second video on a second interface.
  • In this way, the electronic device can detect the user operation of saving the video and, in response, package the edited video frames into a video and save it in local storage, so that the user can browse or forward it at any time.
  • In a possible implementation manner, the editing operations selected by the user to change the display effect of the first video include one or more of: splitting the video, deleting video frames, adding a title or credits, cropping the picture size, adding a filter, and adding text or graphics. Among them, the operations of splitting the video, deleting video frames, and adding a title or credits are used to increase or decrease the number of second video frames; the operation of cropping the picture size is used to change the number of pixels of one or more second video frames in the N second video frames; and the operations of adding a filter, text, or graphics are used to change the color values of the pixels of one or more second video frames in the N second video frames.
  • When the second user operation includes an operation that increases or decreases the number of second video frames, M and N are not equal; when the second user operation does not include such an operation, M and N are equal.
  • In a possible implementation manner, the first color gamut is BT2020 and the second color gamut is BT709.
  • the electronic device can convert the video to be edited with a color gamut of BT2020 into a video with a color gamut of BT709.
  • In this way, the electronic device can display the converted video with the BT709 color gamut, so that it can normally display the video frames of the video to be edited, without problems such as an unclear display that affect the user experience.
  • In a possible implementation manner, the first video is a high dynamic range (HDR) video and the second video is a standard dynamic range (SDR) video.
  • the electronic device can convert the HDR video to be edited into the SDR video.
  • In this way, the electronic device can display the above-mentioned converted SDR video, so that it can normally display the video frames of the video to be edited, without problems such as an unclear display that affect the user experience.
  • In a possible implementation manner, the electronic device includes a video editing application (APP), a decoder, and a first memory, where the first memory is a storage space for buffering the video that the APP inputs to the decoder. Decoding the first video into N first video frames specifically includes: the APP sends the first video to the first memory; the decoder reads the first video from the first memory and decomposes the first video into N first video frames.
  • In a possible implementation manner, the electronic device further includes an open graphics library (OpenGL) and a graphics processing unit (GPU). Using the first color table to perform color gamut conversion on the N first video frames to obtain N second video frames specifically includes: OpenGL receives the N first video frames sent by the decoder, where the color coding format of the N first video frames is the YUV format and the data type of the data representing the color values is integer; OpenGL changes the color coding format of the N first video frames to the RGB format and the data type of the data representing the color values to floating point; OpenGL calls the first color table from the GPU and uses the color value conversion relationship provided by the first color table to modify the color values of the pixels of the N first video frames to obtain N second video frames; the color coding format of the N second video frames is the RGB format, and the data type of the data representing the color values is floating point.
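The claim does not spell out the YUV-to-RGB step numerically. For reference, and assuming the video uses the standard BT.2020 non-constant-luminance coefficients, the conversion on normalized floating-point values (with $Y'$ in $[0, 1]$ and $C_B$, $C_R$ centered on zero) is:

$$
\begin{aligned}
R &= Y' + 1.4746\,C_R \\
G &= Y' - 0.1646\,C_B - 0.5714\,C_R \\
B &= Y' + 1.8814\,C_B
\end{aligned}
$$

This matches the integer-to-floating-point change described above: the integer code values are first normalized before the transform is applied.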
  • In a possible implementation manner, before OpenGL calls the first color table from the GPU, the method further includes: OpenGL obtains the first color table from the APP and loads the first color table into the GPU.
  • In a possible implementation manner, using the color value conversion relationship provided by the first color table to modify the color values of the pixels of the N first video frames specifically includes: OpenGL determines the current color value Q1 of a first pixel, where the data type of Q1 is integer and the first pixel is any pixel in any one of the N first video frames; OpenGL takes the position of Q1 in the three-dimensional color space as the origin and determines the 7 auxiliary color values that, together with Q1, form a cube in that space; OpenGL determines the respective index values of Q1 and the 7 auxiliary color values in the first color table; OpenGL queries, according to the index values, the target color values corresponding to Q1 and the 7 auxiliary color values in the first color table; OpenGL interpolates the target color values of Q1 and the 7 auxiliary color values to obtain a changed color value, and sets the color value of the first pixel to the changed color value.
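The following self-contained sketch illustrates this 8-corner (trilinear) lookup; it is an illustration under assumptions (a flattened size^3 RGB table and inputs normalized to [0, 1]), not the patent's code:

```java
// Hypothetical 3D-LUT sampler: Q1's cell origin plus 7 auxiliary corners are
// fetched from the color table by index and trilinearly interpolated.
public final class Lut3dSampler {
    private final float[] lut; // flattened size^3 table of RGB triples (assumed layout)
    private final int size;

    public Lut3dSampler(float[] lut, int size) {
        this.lut = lut;
        this.size = size;
    }

    /** r, g, b are the pixel's color values normalized to [0, 1]. */
    public float[] sample(float r, float g, float b) {
        float x = r * (size - 1), y = g * (size - 1), z = b * (size - 1);
        int x0 = (int) x, y0 = (int) y, z0 = (int) z; // index of Q1, the cube origin
        int x1 = Math.min(x0 + 1, size - 1);          // indices of the auxiliary corners
        int y1 = Math.min(y0 + 1, size - 1);
        int z1 = Math.min(z0 + 1, size - 1);
        float fx = x - x0, fy = y - y0, fz = z - z0;  // fractional position inside the cube

        float[] out = new float[3];
        for (int c = 0; c < 3; c++) {
            // Target color values of Q1 and the 7 auxiliary corners, looked up by index.
            float c000 = at(x0, y0, z0, c), c100 = at(x1, y0, z0, c);
            float c010 = at(x0, y1, z0, c), c110 = at(x1, y1, z0, c);
            float c001 = at(x0, y0, z1, c), c101 = at(x1, y0, z1, c);
            float c011 = at(x0, y1, z1, c), c111 = at(x1, y1, z1, c);
            // Trilinear interpolation of the 8 target values yields the changed color value.
            float cx00 = lerp(c000, c100, fx), cx10 = lerp(c010, c110, fx);
            float cx01 = lerp(c001, c101, fx), cx11 = lerp(c011, c111, fx);
            out[c] = lerp(lerp(cx00, cx10, fy), lerp(cx01, cx11, fy), fz);
        }
        return out;
    }

    private float at(int x, int y, int z, int channel) {
        return lut[((z * size + y) * size + x) * 3 + channel];
    }

    private static float lerp(float a, float b, float t) {
        return a + (b - a) * t;
    }
}
```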
  • In a possible implementation manner, displaying any one of the N second video frames on the first interface specifically includes: the GPU sends a first video memory address to OpenGL, where the first video memory address is the video memory address storing the N second video frames; OpenGL sends the first video memory address to the APP; the APP obtains the N second video frames according to the first video memory address; and the APP displays on the first interface any one of the N second video frames obtained through the first video memory address.
  • In a possible implementation manner, the APP sends the second user operation to OpenGL; OpenGL determines, according to the second user operation, the computation logic that realizes the video display effect indicated by the second user operation; OpenGL sends the computation logic to the GPU; the GPU, according to the computation logic, increases or decreases the number of second video frames, and/or changes the number of pixels of one or more second video frames in the N second video frames, and/or changes the color values of the pixels of one or more second video frames in the N second video frames, so as to obtain M third video frames; the video composed of the M third video frames has the display effect indicated by the second user operation, and the color coding format of the M third video frames is the RGB format.
  • In a possible implementation manner, displaying any one of the M third video frames on the first interface specifically includes: the GPU sends a second video memory address to OpenGL, where the second video memory address is the video memory address storing the M third video frames; OpenGL sends the second video memory address to the APP; the APP obtains the M third video frames according to the second video memory address; and the APP displays on the first interface any one of the M third video frames obtained through the second video memory address.
  • In a possible implementation manner, the electronic device further includes an encoder and a second memory. Saving the M third video frames as the second video specifically includes: the APP sends a request to OpenGL to call the C2D engine; in response to the request, OpenGL calls the C2D engine to obtain the M third video frames from the GPU, where the color encoding format of the M third video frames obtained by OpenGL is the YUV format and the data type representing the color values is integer; OpenGL sends the M third video frames to the second memory; the encoder reads the M third video frames from the second memory and encapsulates the M third video frames into the second video.
  • In a possible implementation manner, before the APP sends the first video to the first memory, the method further includes: the APP calls MediaCodec to create the encoder; the encoder applies to the memory for the second memory; OpenGL receives the memory identifier (ID) and/or address of the second memory sent by the APP, and determines, according to the ID and/or address, the second memory for receiving the video frames output by OpenGL; the APP calls MediaCodec to create the decoder; and the decoder applies to the memory for the first memory.
  • In a second aspect, the present application provides an electronic device, which includes one or more processors and one or more memories, where the one or more memories are coupled with the one or more processors and are used to store computer program code. The computer program code includes computer instructions which, when executed by the one or more processors, cause the electronic device to execute the method described in the first aspect and any possible implementation manner of the first aspect.
  • In a third aspect, the present application provides a computer-readable storage medium including instructions. When the instructions are run on an electronic device, the electronic device executes the method described in the first aspect and any possible implementation manner of the first aspect.
  • In a fourth aspect, the present application provides a computer program product containing instructions. When the computer program product is run on an electronic device, the electronic device executes the method described in the first aspect and any possible implementation manner of the first aspect.
  • The electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all used to execute the method provided in the first aspect of the present application. Therefore, for the beneficial effects they can achieve, refer to the beneficial effects of the corresponding method, which will not be repeated here.
  • FIG. 1A-FIG. 1K are a set of schematic diagrams of user interfaces provided by an embodiment of the present application;
  • FIG. 2 is a software architecture diagram of an electronic device provided by an embodiment of the present application;
  • FIG. 3 is a flowchart of the video editing method provided by an embodiment of the present application;
  • FIG. 4 is a flowchart of an electronic device initializing a video editing environment, provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of an electronic device converting the color gamut of the video to be edited and processing the video to be edited according to the editing operation selected by the user, provided by an embodiment of the present application;
  • FIG. 6 is a flowchart of an electronic device saving an edited video, provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of video color gamut conversion implemented by an electronic device using LUT resources, provided by an embodiment of the present application;
  • FIG. 8 is a hardware structure diagram of an electronic device provided by an embodiment of the present application.
  • The color gamut represents the range of colors that can be displayed when a video is encoded.
  • Standard dynamic range (SDR) video uses the BT709 color gamut; high dynamic range (HDR) video uses the BT2020 color gamut. Compared with SDR video, HDR video can therefore use more colors, represent a wider range of colors, and reach a higher display brightness range. Further, HDR video supports richer image color performance and more vivid image detail. This enables HDR video to provide users with better viewing effects, thereby improving the user experience.
  • In other embodiments, HDR video and SDR video may also use other color gamuts. In general, however, the color gamut used by HDR video is wider than that used by SDR video, with richer color and detail.
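The embodiments below realize the mapping between these gamuts with a LUT resource rather than a formula. For context only, ITU-R BT.2087 gives the conversion between the two sets of primaries, for linear (non-gamma-encoded) RGB, as approximately:

$$
\begin{pmatrix} R_{709} \\ G_{709} \\ B_{709} \end{pmatrix}
=
\begin{pmatrix}
1.6605 & -0.5876 & -0.0728 \\
-0.1246 & 1.1329 & -0.0083 \\
-0.0182 & -0.1006 & 1.1187
\end{pmatrix}
\begin{pmatrix} R_{2020} \\ G_{2020} \\ B_{2020} \end{pmatrix}
$$

Because BT2020 colors outside the BT709 gamut map to negative or greater-than-one values, a practical converter must also clamp or tone-map the result, which is one reason a lookup table is used instead of a bare matrix.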
  • Electronic devices such as mobile phones and tablet computers (hereinafter referred to as the electronic device 100 for short) support shooting SDR video. With the development of shooting technology and image technology, the electronic device 100 not only supports shooting SDR video, but also supports shooting HDR video. With this, the user's demand for editing HDR video also emerges.
  • At present, the editor used by the electronic device 100 to edit HDR video is still an SDR video editor.
  • When an SDR video editor is used to edit HDR video, since the HDR color gamut (BT2020) covers a wider color range than the SDR color gamut (BT709), the SDR editor cannot properly display the HDR video to be edited; for example, the display is unclear, contains noise, and so on.
  • an embodiment of the present application provides a video editing method.
  • The method can be applied to electronic devices capable of image processing, such as mobile phones and tablet computers (i.e., the electronic device 100).
  • By implementing the method, the electronic device 100 can convert an HDR video with the BT2020 color gamut into an SDR video with the BT709 color gamut, so that the electronic device 100 can normally process and display the video frames of the video to be edited when using an editor to edit the video.
  • The conversion of the HDR video to be edited into a corresponding SDR video by the electronic device 100 may be implemented by rendering with a LUT filter.
  • the above-mentioned LUT filter is a special filter based on a color lookup table (look up table, LUT) algorithm.
  • This specific LUT filter can be used to convert video frames with a color gamut of BT2020 to video frames with a color gamut of BT709, thereby realizing the function of converting HDR video to SDR video.
  • the aforementioned LUT filter can be adjusted accordingly so that it can perform adaptive color gamut conversion.
  • For example, when the color gamut of the HDR video is BT2020 and the color gamut of the SDR video is sRGB, the above LUT filter can be adjusted to convert video frames in the BT2020 color gamut to video frames in the sRGB color gamut.
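As an illustration of what such a LUT filter can look like in practice (a hypothetical sketch, not taken from the patent), an OpenGL ES 3.0 fragment shader can treat each source color as a 3D coordinate into a lookup-table texture; the uniform names uFrame and uLut are assumptions:

```java
// Hypothetical LUT-filter shader, stored as a Java string constant: the source
// color itself indexes the LUT texture, which returns the converted color.
public final class LutFilterShader {
    public static final String FRAGMENT_SHADER = ""
            + "#version 300 es\n"
            + "precision mediump float;\n"
            + "uniform sampler2D uFrame;           // decoded BT2020 video frame\n"
            + "uniform mediump sampler3D uLut;     // color lookup table\n"
            + "in vec2 vTexCoord;\n"
            + "out vec4 outColor;\n"
            + "void main() {\n"
            + "    vec3 src = texture(uFrame, vTexCoord).rgb;\n"
            + "    outColor = vec4(texture(uLut, src).rgb, 1.0);\n"
            + "}\n";
}
```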
  • Video editing operations include, but are not limited to, adding filters.
  • the above editing operations also include cropping, image inversion, zooming, adding text, adding filters, adding headers (or endings or other pages), adding video watermarks or stickers, and so on.
  • In addition, the electronic device 100 can also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device whose graphics processor is not capable of editing and saving the edited video as HDR video.
  • the embodiment of the present application does not specifically limit the specific type of the electronic device.
  • FIGS. 1A-1K schematically show a group of user interfaces on the electronic device 100.
  • the application scenarios for implementing the video editing method provided by the embodiment of the present application will be described in detail below with reference to FIGS. 1A-1K.
  • FIG. 1A exemplarily shows a user interface displaying installed application programs on the electronic device 100, that is, a home page.
  • one or more application program icons are displayed on the main page, such as a "clock” application program icon, a “calendar” application program icon, a "weather” application program icon, and the like.
  • the above-mentioned one or more application program icons include an icon of a “Gallery” application program (hereinafter referred to as “Gallery”), that is, an icon 111 .
  • the electronic device 100 may detect a user operation acting on the icon 111 .
  • the above operation is, for example, a click operation or the like.
  • the electronic device 100 may display the user interface shown in FIG. 1B .
  • FIG. 1B exemplarily shows the main interface of the "Gallery” when the "Gallery” is running on the electronic device 100 .
  • This interface can display one or more pictures or videos.
  • the above-mentioned one or more videos include HDR video, LOG video, and other types of video, such as SDR video.
  • the above-mentioned LOG video refers to a low-saturation, low-brightness video shot in the LOG gray mode, and may also be called a LOG gray film.
  • the video indicated by the icon 121 may be a LOG video; the video indicated by the icon 122 may be an HDR video; the video indicated by the icon 123 may be an SDR video.
  • the icon indicating the video can display the type of the video. In this way, the user can know the type of the video through the information displayed in the icon. For example, LOG is displayed in the lower left corner of the icon 121 ; HDR is displayed in the lower left corner of the icon 122 . Videos not labeled HDR or LOG in Figure 1B are SDR videos.
  • the electronic device 100 may detect a user operation acting on the icon 122, and in response to the operation, the electronic device 100 may display the user interface shown in FIG. 1C.
  • FIG. 1C is a user interface of the electronic device 100 specifically displaying a certain picture or video.
  • the user interface may include a window 131 .
  • the window 131 can be used to display the videos that the user chooses to browse.
  • the video that the user selects to browse is the HDR video indicated by the icon 122 ("Video A"). Then, "Video A" can be displayed in the window 131 .
  • the user interface also includes icons 132 and controls 133 .
  • Icons 132 may be used to represent the type of video displayed in window 131 .
  • "HDR" displayed in the current icon 132 may indicate that "Video A" is an HDR type video.
  • the control 133 can be used to receive a user's operation of editing a video (or picture), and display a user interface for editing a video (or picture).
  • In general, the user interface shown in FIG. 1C would not include the control 133; that is, the electronic device 100 would not provide the user with a control for editing video, because the electronic device 100 cannot output and save an edited HDR video.
  • the electronic device 100 can convert the HDR video into an SDR video, and then provide the user with the function of editing the SDR video, so as to meet the user's editing needs and improve the user experience. Therefore, in the user interface shown in FIG. 1C , the electronic device 100 can display the control 133 and can respond to a user operation acting on the control 133 .
  • In addition, the user interface may also include a control 134, a share control 135, a favorite control 136, a delete control 137, and the like.
  • the control 134 can be used to display detailed information of the video, such as shooting time, shooting location, color coding format, bit rate, frame rate, pixel size and so on.
  • The share control 135 can be used to send video A to other applications for use.
  • the electronic device 100 may display icons of one or more applications, and the icons of the one or more applications include an icon of social software A.
  • the electronic device 100 can send the video A to the social software A, and further, the user can share the video to friends through the social software.
  • a favorite control can be used to tag videos.
  • the electronic device 100 may mark video A as the user's favorite video.
  • the electronic device 100 can generate an album for displaying videos marked as favorite by the user. In this way, in the case that video A is marked as the user's favorite video, the user can quickly view video A through the above-mentioned photo album showing the user's favorite videos.
  • The delete control 137 can be used to delete video A.
  • FIG. 1D exemplarily shows a user interface for a user to edit a video (or picture).
  • the user interface may include a window 141 , a window 142 , an operation bar 143 , and an operation bar 144 .
  • the window 141 may be used to display a preview image of the edited HDR video. Generally, window 141 will display the cover video frame of the video. When a user operation acting on the play button 145 is detected, the window 141 may sequentially display the video frame stream of the video, that is, play the video.
  • Window 142 may be used to display a stream of video frames of the video being edited.
  • the user can drag the window 142 to adjust the video frame displayed in the window 141 .
  • a scale 147 is also shown in FIG. 1D .
  • The electronic device 100 can detect a user operation of sliding left or right on the window 142; in response, the position of the scale 147 within the video frame stream changes, and the electronic device 100 can display in the window 141 the video frame at which the scale 147 is currently located.
  • a plurality of icons for video editing operations may be displayed in the operation bar 143 and the operation bar 144 .
  • an icon displayed in the operation bar 143 indicates a category of editing operations.
  • the operation bar 144 can display video editing operations belonging to the selected operation category in the current operation bar 143 .
  • "Clip" is included in the operation bar 143, for example.
  • the "clip” displayed in bold may indicate that the type of video editing operation currently selected by the user is "clip”.
  • some operations belonging to the "Clip” category are displayed in the operation bar 144, such as "Split", “Crop”, “Volume”, “Frame” and so on.
  • the electronic device 100 may detect a user operation on the "split" control, and in response to the operation, the electronic device 100 may display one or more operation controls for splitting the video.
  • the electronic device 100 can record the segmentation operation of the user, such as the start time and end time of the first video segment, the start time and end time of the second video segment, and so on.
  • the electronic device 100 may detect a user operation acting on the "frame" control, and in response to the operation, the electronic device 100 may record the size of the video frame set by the user, and then crop the original video frame.
  • the operation bar 144 corresponding to the "clip” operation also includes other editing controls belonging to the "clip” category.
  • the electronic device 100 may record and execute video editing operations corresponding to the above-mentioned controls, which will not be exemplified here.
  • the user interface also includes a save control 146 .
  • the electronic device 100 may save the video in the current state.
  • the video in the current state may be a video with editing operations added, or a video without editing operations.
  • the electronic device 100 may detect a user operation on the "filter” control in the operation bar 143, and in response to the operation, the electronic device 100 may display the user interface shown in FIG. 1E.
  • FIG. 1E exemplarily shows a user interface where the electronic device 100 displays a filter provided for a user to adjust the color of a video image.
  • "Filter" includes several filter options. Each filter option corresponds to an image processing method for adjusting the display effect of the video picture.
  • a user may select one of a plurality of filters provided by the electronic device 100 .
  • the electronic device 100 may perform image processing indicated by the above-mentioned filter selected by the user on the edited video, so that the screen of the processed video has a display consistent with the display effect of the above-mentioned filter Effect.
  • multiple filter controls may be displayed in the filter selection interface, such as filter control 151 , filter control 152 , filter control 153 , filter control 154 , filter control 155 and so on.
  • Each of the filter controls above indicates an image processing method that uses a filter to render an image.
  • the electronic device 100 may set the currently used filter as the filter 151 (the filter indicated by the filter control 151 ) by default. Then, when a user's operation on a certain filter control is detected, in response to the operation, the electronic device 100 may display a user interface for editing a video using a filter indicated by the above-mentioned filter control.
  • For example, the electronic device 100 may detect a user operation on the filter control 155, and in response to the operation, the electronic device 100 may display the user interface shown in FIG. 1F. As shown in FIG. 1F, the electronic device 100 can highlight the selected filter control, for example by enlarging the filter control, thickening the control's border, or highlighting the control; the embodiments are not limited thereto.
  • the electronic device 100 will also display in the window 141 a preview image after using the filter control 155 to render the video to be edited.
  • At this time, the picture color of "Video A" displayed in the window 141 differs from the picture color in the window 141 in FIG. 1E, and "Video A" in the window 141 in FIG. 1F has a display effect consistent with that of the selected filter.
  • In practice, the electronic device 100 often renders only the video frame displayed in the current window; or, in some embodiments, the electronic device 100 can use other simple image processing means to process the above-mentioned cover video frame, so that the processed image previews the effect of the above filter.
  • In addition, the user interface also displays a confirmation control 147 ("√") and a cancel control 148 ("x").
  • the user may click other filter controls to select other filters.
  • the electronic device 100 may display in the window 141 a video rendered using the filter indicated by any of the above filter controls.
  • the user can click the cancel control 148 .
  • the electronic device 100 may display the user interface shown in FIG. 1D .
  • the electronic device 100 may detect a user operation acting on the "music" control in the operation bar 143, and in response to the above operation, the electronic device 100 may display the user interface shown in FIG. 1G.
  • the "Music" control in the operation bar 143 can be thickened to indicate that the type of editing operation currently selected by the user is “Music”.
  • the editing controls displayed in the operation bar 144 will be replaced with corresponding operation controls under the "music” operation, such as "add music” and "extract music”.
  • the electronic device 100 may display a user interface for adding music.
  • the user interface may display multiple music options, such as “Music 1", “Music 2", “Music 3", “Music 4" and so on.
  • Each music option is followed by playback controls and usage controls.
  • the playback control can be used to play the music corresponding to the music option.
  • Use controls can be used to apply the above music to the edited video.
  • the "Extract Music” control can be used to extract audio from the video to be edited.
  • the electronic device 100 may extract the audio from "Video A".
  • the operation bar 144 may also include more music operation controls. The embodiment of the present application does not limit this.
  • the electronic device 100 may detect a user operation acting on the "text" control in the operation bar 143, and in response to the above operation, the electronic device 100 may display the user interface shown in FIG. 1I.
  • the "text" control in the operation bar 143 can be thickened to indicate that the type of editing operation currently selected by the user is “text”.
  • At this time, the editing controls displayed in the operation bar 144 will be replaced with the corresponding operation controls under the "Text" operation, including "Title" and "Credits".
  • Both "Title" and "Credits" include multiple text templates.
  • the electronic device 100 can display a text template of "Title”, such as “None” 151, “Title 1", “Title 2", “Title 3", “Title 4", “Title 5" etc.
  • The electronic device 100 can detect a user operation acting on any of the above templates, for example on "Title 5", and in response to the operation, the electronic device 100 can display the effect of "Title 5" in the window 141.
  • the electronic device 100 may detect a user operation on the confirmation control 161 . At this time, the electronic device 100 may confirm that the user selects an editing operation using the title shown in "Title 5". Meanwhile, the electronic device 100 may display the user interface described in FIG. 1K.
  • For the process in which the electronic device 100 detects that the user adds closing credits to the edited video, refer to the above process of adding a title, which will not be repeated here.
  • the electronic device 100 can also provide more editing capabilities, which will not be listed one by one here.
  • FIGS. 1D-1J exemplarily show a process in which the electronic device 100 receives a user's operations of editing a video. After detecting the operation of saving the video, the electronic device 100 can convert the HDR video displayed in the window 131 in FIG. 1C into an SDR video, perform the editing operations shown in FIGS. 1D-1J, and then save the edited SDR video.
  • The electronic device 100 may perform the computations for editing and saving the SDR video. After saving is complete, the electronic device 100 may display the user interface shown in FIG. 1K. Compared with the user interface shown in FIG. 1C, the video displayed in the window 131 is now the edited SDR video. For example, the cover of the video shown in FIG. 1K is the title added in the editing operation.
  • the electronic device 100 may save the edited video as a new video.
  • the electronic device 100 can not only provide the user with an HDR video before editing, but also provide the user with an edited personalized SDR video.
  • the electronic device 100 when the electronic device 100 does not support the output of HDR video, the electronic device 100 can convert the HDR video to be edited into an SDR video. Then, on the basis that the electronic device 100 supports outputting the SDR video, the electronic device 100 may provide the user with a function of editing the SDR video. In this way, from the user's point of view: the user can edit the above-mentioned HDR video, and save the edited video.
  • In this case, the electronic device 100 can convert the above-mentioned HDR video into a corresponding SDR video and then provide the user with the video editing service, thereby meeting the user's need to edit the video.
  • FIG. 2 exemplarily shows the software architecture of the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the software structure of the electronic device 100 is exemplarily described by taking an Android system with a layered architecture as an example.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • In some embodiments, the Android system is divided into four layers which are, from top to bottom: the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can consist of a series of application packages. As shown in Figure 2, the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application program layer also includes a video editing application.
  • the video editing application has video data processing capabilities, and can provide users with video editing functions, including cropping, rendering, and other video data processing.
  • the user interfaces shown in FIGS. 1D-1J can be regarded as the user interfaces provided by the above-mentioned video editing application.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions. As shown in Figure 2, the application framework layer can include window managers, content providers, view systems, phone managers, resource managers, notification managers, and so on.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications. Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 100 . For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also be a notification that appears on the top status bar of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • For example, the notification manager may prompt text information in the status bar, issue a prompt sound, vibrate the electronic device, flash the indicator light, and so on.
  • the application framework layer also includes a media framework.
  • the media framework provides multiple tools for editing video and audio.
  • the above tools include MediaCodec.
  • MediaCodec is a class provided by Android for encoding and decoding audio and video. It includes an encoder and a decoder.
  • The encoder can convert video or audio input to it from one form into another through compression; the decoder performs the reverse process of encoding, converting video or audio input to it from one form into another through decompression.
  • For example, the video input to the decoder may be an HDR video. The above HDR video is composed of N video frames whose color gamut is BT2020, where N is an integer greater than 1. The decoder can split the above HDR video into N independent video frames, so that the electronic device 100 can subsequently perform image processing on each video frame.
  • the Android Runtime includes core library and virtual machine.
  • the Android runtime is responsible for the scheduling and management of the Android system.
  • The core library consists of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • The Open Graphics Library (OpenGL) provides multiple image rendering functions, which can be used to draw anything from simple graphics to complex three-dimensional scenes.
  • the OpenGL provided by the system library can be used to provide graphics and image editing operations for video editing applications, such as video cropping operations and filter adding operations described in the foregoing embodiments.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • FIG. 3 exemplarily shows a flow chart of editing HDR video by the electronic device 100 .
  • The following embodiments of the present application will specifically introduce the process by which the electronic device 100 provides users with the ability to edit HDR video.
  • S101 The electronic device 100 determines that the user chooses to edit an HDR video.
  • the electronic device 100 may display editing controls.
  • the edit control can provide users with the service of editing the currently displayed image resource.
  • the video editing method provided in the embodiment of the present application is mainly applied to video image resources. Subsequent embodiments will use video as an example to introduce the video editing method provided by the embodiment of the present application.
  • the window 131 can provide the user with browsing image resources stored in the electronic device 100 ; the control 133 , that is, the editing control, can provide the user with the service of editing video.
  • the edited video is the video displayed in the window 131 .
  • The electronic device 100 may first determine the type of the video to be edited, so as to determine whether the chip platform can process and output it.
  • the electronic device 100 may determine that when editing the HDR video, the electronic device 100 cannot correctly display the HDR video frame. This is because the color gamut adopted by the HDR video is BT2020, and the chip platform used by the electronic device 100 does not support displaying the HDR video frame whose color gamut is BT2020 in the editing scene.
  • Otherwise, the electronic device 100 would display abnormal HDR video frames, thereby affecting the user experience.
  • the abnormal display of the HDR video frame above includes unclear display, wrong pixel color, and so on.
  • the electronic device 100 may determine that: when editing the SDR video, the electronic device 100 can normally display the SDR video to be edited.
  • the electronic device 100 may implement different editing strategies according to the type of the video, so as to meet the user's editing needs.
  • When the video to be edited is an SDR video, the electronic device 100 may determine to call the processing and output capabilities provided by the chip to provide the user with the video editing service.
  • When the video to be edited is an HDR video, the electronic device 100 cannot display the HDR video frames normally. At this time, the electronic device 100 can determine to use the image editing method provided in the embodiment of this application: use LUT resources to convert the HDR video frames into corresponding SDR video frames, and then the electronic device 100 can display the above SDR video frames, so as to avoid abnormal display.
  • the above LUT resource is a data resource established based on a color lookup table (look up table, LUT) algorithm for converting a video frame with a color gamut of BT2020 into a video frame with a color gamut of BT709.
  • the electronic device 100 can provide users with the ability to edit HDR videos.
  • the electronic device 100 may execute the image editing method provided in the embodiment of the present application to provide the user with a video editing service, so as to meet the user's demand for editing personalized videos.
  • the electronic device 100 may first determine the type of the displayed video (“Video A”) in the window 131 . At this time, "Video A” is an HDR video, therefore, the electronic device 100 may determine that the video to be edited (“Video A”) is an HDR video. Subsequently, the electronic device 100 may determine to use the image editing method provided in the embodiment of the present application to provide the user with the ability to edit the HDR video.
  • S102 The electronic device 100 initializes a video editing environment.
  • Initializing the editing environment refers to creating or applying for tools and storage space required for editing a video, so that the electronic device 100 can perform data processing for editing the video.
  • Initializing the video editing environment includes: creating the encoder, the decoder, and OpenGL, applying for memory used to cache video frames, and applying for video memory provided by the GPU.
  • a decoder can be used to split a video to be edited into a sequence of video frames; an encoder can be used to combine edited video frames into a video.
  • OpenGL can be used to adjust the video frame, and/or modify the pixels in the video frame, so as to change the image content included in the video, that is, to render the video frame.
  • The aforementioned adjustment of video frames includes increasing or decreasing the number of video frames and modifying the size of video frames.
  • The above memory includes a Surface and a BufferQueue.
  • a Surface can be used to cache rendered video frames output by the GPU.
  • the encoder can encapsulate the sequence of video frames stored in the Surface into a video.
  • BufferQueue can be used to cache the video to be edited input by the video editing application.
  • the decoder can split the video to be edited stored in the BufferQueue into a sequence of video frames to be edited.
  • FIG. 4 exemplarily shows a flow chart of electronic device 100 initializing a video editing environment.
  • APP can be used to represent a video editing application.
  • the electronic device 100 may detect a user operation of clicking an edit control.
  • a user operation acting on the edit control 133 may be referred to as a user operation of clicking the edit control.
  • the APP can determine the type of the video to be edited and the type of the edited video.
  • Here, the video to be edited is an HDR video: the color encoding format adopted by the HDR video is the YUV format, the data type of the color values of the color channels is integer (INT), and the color gamut is BT2020.
  • Accordingly, the electronic device 100 can determine to convert the HDR video into an SDR video with the BT709 color gamut, so as to ensure that the electronic device 100 can display the video frames of the HDR video to be edited in the editing environment (that is, display the SDR video corresponding to the HDR video) while providing the user with the ability to edit the HDR video.
  • When the video to be edited is an SDR video, the APP can determine that the edited output video is also an SDR video.
  • the above output video type information may be used to indicate the type of the edited video output by the encoder.
  • the above output video type information may include: the color gamut and color coding format of the edited video.
  • the output video type information is specifically BT709 and YUV.
  • the above BT709 may be used to indicate that the color gamut of the edited video is the BT709 color gamut; the above YUV may indicate that the color coding format of the edited video is YUV.
  • the above output video type information may also include other more parameters describing the output video type.
  • the video encoded by the encoder is the SDR video.
  • Upon receiving the request, MediaCodec may create the encoder indicated by the above output video type information. For example, when recognizing the output video type information (BT709, YUV) carried in the above request, MediaCodec can create an encoder for encoding SDR video (BT709, YUV). This encoder can encapsulate input SDR video frames with the BT709 color gamut and the YUV color coding format into an SDR video.
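For orientation, the following is a minimal sketch (an assumption, not the patent's implementation) of creating such an SDR encoder through MediaCodec; the H.264 MIME type, resolution, bitrate, and frame rate are illustrative choices:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Hypothetical factory for an encoder that outputs SDR (BT.709) video.
public final class SdrEncoderFactory {
    public static MediaCodec createSdrEncoder() throws Exception {
        MediaFormat format =
                MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
        // Frames arrive through a Surface rather than through ByteBuffers.
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        // Tag the output stream as BT.709 (available since API 24).
        format.setInteger(MediaFormat.KEY_COLOR_STANDARD, MediaFormat.COLOR_STANDARD_BT709);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }

    // The Surface below plays the role of the buffer the encoder "applies for" in the
    // following steps: OpenGL renders edited frames into it and the encoder consumes them.
    public static Surface inputSurfaceOf(MediaCodec encoder) {
        return encoder.createInputSurface(); // call after configure(), before start()
    }
}
```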
  • MediaCodec may return confirmation information indicating the creation to the APP.
  • the above confirmation information indicating the completion of creation is, for example, the confirmation character ACK and the like.
  • the APP may send a request to the above-mentioned encoder to create a surface.
  • Surface is a memory storage space with a specific data structure. Generally, a surface is dedicated to buffering video frames to be encoded.
  • the encoder can apply for a surface from the memory.
  • That is, a piece of storage space in the memory can be allocated as the Surface requested by the encoder, for the encoder's use.
  • Generally, the memory can provide multiple surfaces. Each surface carries an identifier (ID) indicating that surface, and for any surface, the ID corresponds one-to-one with the surface's address. For example, suppose the ID of surface-01 is 01 and its address is 0011-0100. When recognizing that the ID of a certain surface is 01, the electronic device 100 can determine that the surface is surface-01 and that its address is 0011-0100; conversely, when recognizing that the address used by a certain surface is 0011-0100, the electronic device 100 can determine that the surface is surface-01.
  • the memory may return the ID and/or address of the above-mentioned surface to the encoder.
  • the encoder can confirm that the application for the surface is successful, and determine the ID and/or address of the surface that can be used. Further, (10) the encoder can return the ID and/or address of the above-mentioned surface to the APP. In this way, the APP can determine that the encoder has completed the process of applying for a surface from the memory, and can determine the ID and/or address of the usable surface requested by the encoder.
  • (11) APP can send an initialization request to OpenGL.
  • the above request may carry surface information.
  • The above surface information may be used to indicate to OpenGL the ID and/or address of the surface used by the encoder to receive the edited video frames.
  • In this way, OpenGL can determine the surface used by the encoder; that is, when outputting an edited video frame after computing and processing it, OpenGL can determine which buffer (surface) the edited video frame should be written to.
  • (12) OpenGL can also apply to the GPU for video memory (video memory A). Video memory A can be used to cache the video frames to be edited.
  • The above-mentioned video memory A can be a texture (Texture) or a frame buffer object (FBO) in OpenGL.
  • (13) Upon receiving the request, the GPU will allocate a storage space for OpenGL as the requested video memory (video memory A). Then, (14) the GPU can return the address of video memory A to OpenGL. After receiving the address of video memory A, OpenGL can locate video memory A through the above address and then use it. Then, (15) OpenGL can return confirmation information to the APP.
  • the confirmation information may indicate to the APP: OpenGL has been initialized.
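The texture/frame-buffer pair can be allocated with standard OpenGL ES calls. The sketch below is an assumption (class and parameter names are illustrative, and an EGL context is presumed current), not the patent's code:

```java
import android.opengl.GLES20;

// Hypothetical allocation of "video memory A": an RGBA texture bound to a
// framebuffer object (FBO) so that OpenGL can render video frames into it.
public final class VideoMemoryA {
    /** Returns { textureHandle, fboHandle }. */
    public static int[] allocate(int width, int height) {
        int[] tex = new int[1];
        int[] fbo = new int[1];

        GLES20.glGenTextures(1, tex, 0); // texture that backs the cached frames
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        GLES20.glGenFramebuffers(1, fbo, 0); // FBO so shaders can render to the texture
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, tex[0], 0);
        return new int[] { tex[0], fbo[0] }; // the handles stand in for "the address"
    }
}
```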
  • APP can send a request for creating a decoder to MediaCodec.
  • MediaCodec may create a decoder.
  • the decoder can determine the type of the video to be decoded after receiving the video to be decoded from the APP.
  • the video to be edited is an HDR video, therefore, the above decoder can be used to decode the HDR video.
  • the decoder can send a request for a storage space (BufferQueue) to the memory.
  • BufferQueue can be used to receive the video to be decoded input by APP.
  • the memory can allocate a BufferQueue for the decoder.
  • the memory will return the address of the above-mentioned BufferQueue to the decoder.
  • the decoder After receiving the address of the BufferQueue returned by the memory, the decoder can locate the usable BufferQueue in the memory according to the above address. Subsequently, (21) the decoder may return confirmation information indicating successful creation of the decoder to the APP.
  • the process of APP calling MediaCodec to create a decoder may also occur before creating an encoder. This embodiment of the present application does not limit this.
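  • A minimal Java sketch of steps (16)-(21), assuming the HDR video sits at an illustrative file path; MediaCodec's own input buffers play the role of the BufferQueue requested in steps (18)-(20):

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;

public final class DecoderSetup {
    // Creates and starts a decoder for the first video track of the file.
    public static MediaCodec createDecoder(String path) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path); // e.g. "/sdcard/DCIM/hdr_video.mp4" (assumed path)
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat trackFormat = extractor.getTrackFormat(i);
            String mime = trackFormat.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                extractor.selectTrack(i);
                MediaCodec decoder = MediaCodec.createDecoderByType(mime); // e.g. video/hevc for HDR
                // A null output surface keeps decoded frames in ByteBuffers
                // so they can be handed to OpenGL for further processing.
                decoder.configure(trackFormat, null, null, 0);
                decoder.start();
                return decoder;
            }
        }
        throw new IllegalArgumentException("no video track found");
    }
}
```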
  • steps (1) to (21) in FIG. 4 show the process of the electronic device 100 initializing the video editing environment. After the initialization of the editing environment is completed, the electronic device 100 can start to perform the editing operations selected by the user on the video to be edited.
  • S103 The electronic device 100 converts the HDR video frame into an SDR video frame.
  • the electronic device 100 can use the above-mentioned video editing environment to split the HDR video (color space BT2020) to be edited into SDR video frames (color space BT709).
  • the electronic device 100 can first use a decoder to split the above-mentioned HDR video (color gamut BT2020) to be edited into HDR video frames (color gamut BT2020); then, the electronic device 100 can use the color gamut conversion filter resource (LUT resource) to convert the above HDR video frames (color gamut BT2020) into SDR video frames (color gamut BT709).
  • Steps (1) to (15) in FIG. 5 show a specific process for the electronic device 100 to convert HDR video frames into SDR video frames.
  • the APP can send an instruction to OpenGL to load the LUT resource into the GPU memory.
  • the APP can determine the storage address of the LUT resource.
  • the above indication may carry the above storage address, such as 0100-1000. In this way, the APP can instruct OpenGL to obtain the storage space of the LUT resource.
  • OpenGL may first locate the storage space for storing the LUT resource according to the storage address carried in the instruction. Then, OpenGL can read the LUT resources from the above storage space, and write the above LUT resources into the GPU.
  • the address of the video memory storing the LUT resource in the GPU may be specified by the APP or by OpenGL.
  • When specified by the APP, the above instruction also carries the address of the video memory for storing the LUT resource.
  • OpenGL can return the writing success indication information to the APP.
  • the APP can confirm that OpenGL has completed the operation of loading the LUT resources required to convert the HDR video to the SDR video into the GPU memory.
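  • A minimal sketch of steps (1)-(3) in Java, assuming the 33*33*33 LUT resource has already been read into a FloatBuffer of RGB triples (B stepping fastest); it uploads the table into GPU memory as an OpenGL ES 3.0 3D texture:

```java
import java.nio.FloatBuffer;
import android.opengl.GLES30;

public final class LutUpload {
    // Uploads a 33*33*33 RGB float LUT into a GL_TEXTURE_3D and returns its id.
    public static int uploadLut(FloatBuffer lutData) {
        int[] tex = new int[1];
        GLES30.glGenTextures(1, tex, 0);
        GLES30.glBindTexture(GLES30.GL_TEXTURE_3D, tex[0]);
        // Linear filtering lets the GPU interpolate between neighbouring entries.
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D, GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D, GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
        GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D, GLES30.GL_TEXTURE_WRAP_R, GLES30.GL_CLAMP_TO_EDGE);
        GLES30.glTexImage3D(GLES30.GL_TEXTURE_3D, 0, GLES30.GL_RGB16F,
                33, 33, 33, 0, GLES30.GL_RGB, GLES30.GL_FLOAT, lutData);
        return tex[0];
    }
}
```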
  • APP can input the HDR video to be edited into the decoder.
  • the APP can determine the address of the BufferQueue requested by the decoder for caching the video to be decoded. After determining the above address, the APP can write the HDR video to be edited to the above BufferQueue.
  • the color encoding format adopted by the HDR video to be edited is YUV, the data type of the color values in the color channels is integer (INT), and the color gamut is BT2020.
  • When detecting that a video is written into the BufferQueue, the decoder can decode the video stored in the BufferQueue, so as to obtain the video frame sequence of the video. Therefore, after the HDR video to be edited is written into the BufferQueue, the decoder can output the video frames of the HDR video to be edited, that is, N HDR video frames (HDR video frames to be edited). At this time, the color coding format, data type and color gamut of the HDR video frames are still YUV, INT, and BT2020.
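  • A minimal Java sketch of step (5), assuming the decoder created above is being drained after its compressed input has been fed; each dequeued buffer is one (YUV, INT, BT2020) frame:

```java
import java.nio.ByteBuffer;
import android.media.MediaCodec;

public final class DecoderDrain {
    // Drains decoded frames one by one until end of stream.
    // Feeding compressed input to the decoder is omitted for brevity.
    public static void drain(MediaCodec decoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int outIndex = decoder.dequeueOutputBuffer(info, 10_000); // 10 ms timeout
            if (outIndex >= 0) {
                ByteBuffer frame = decoder.getOutputBuffer(outIndex); // one HDR video frame
                // ... hand the frame to OpenGL for the (RGB, FLOAT) change and LUT conversion ...
                decoder.releaseOutputBuffer(outIndex, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
            }
        }
    }
}
```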
  • a video may also include audio; therefore, the decoder also includes an audio decoder.
  • the processing related to audio is prior art and will not be repeated here.
  • the electronic device 100 can respectively obtain N HDR video frames and audio data of the HDR video to be edited. It can be understood that when the HDR video to be edited does not include audio data, the electronic device 100 does not need to perform audio decoding on the HDR video to be edited, and therefore, the decoded data does not include audio data.
  • the decoder can sequentially send the decoded HDR video frames (YUV, INT, BT2020) to be edited to OpenGL.
  • OpenGL can successively receive N HDR video frames (YUV, INT, BT2020) to be edited.
  • OpenGL can change the color coding format of the HDR video frame to be edited and the data type of the color value in the color channel.
  • OpenGL sets the color coding format of the HDR video frame to be edited to RGB and the data type of the color values in the color channels to FLOAT, that is, changes the HDR video frame to be edited from the original (YUV, INT) format into the (RGB, FLOAT) format. This is because the color coding format supported by OpenGL when drawing and/or rendering video frames is RGB, and the supported data type of color values is floating point.
  • the color gamut of the HDR video frame is still BT2020, that is, the above process does not involve converting the HDR video frame into an SDR video frame.
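  • A minimal per-pixel sketch of steps (6)-(7) in Java. It assumes full-range 8-bit YUV input and uses the standard BT.2020 non-constant-luminance coefficients (Kr = 0.2627, Kb = 0.0593); the color gamut itself is unchanged:

```java
public final class YuvToRgb {
    // Converts one (YUV, INT) pixel into an (RGB, FLOAT) pixel, BT.2020 matrix.
    public static float[] yuvToRgbBt2020(int y, int u, int v) {
        float yf = y / 255f;
        float cb = u / 255f - 0.5f;
        float cr = v / 255f - 0.5f;
        float r = yf + 1.4746f * cr;                  // 2 * (1 - Kr)
        float g = yf - 0.16455f * cb - 0.57135f * cr; // derived from Kr and Kb
        float b = yf + 1.8814f * cb;                  // 2 * (1 - Kb)
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float x) {
        return Math.max(0f, Math.min(1f, x));
    }
}
```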
  • OpenGL can write the changed HDR video frame to be edited into the GPU. Specifically, in steps (13) and (14) shown in FIG. 4, OpenGL applied to the GPU for video memory A. At this time, OpenGL can write the changed HDR video frame (RGB, FLOAT, BT2020) to be edited into video memory A.
  • OpenGL can call the LUT resource written in the GPU in advance.
  • the above LUT resources are used to convert HDR video frames to SDR video frames.
  • OpenGL can determine the calculation logic for converting the HDR video frame to be edited into an SDR video frame (RGB, FLOAT, BT709) using the above-mentioned LUT resource. (11) Then, OpenGL can send the above calculation logic to the GPU to instruct the GPU to use the above LUT resource to convert the HDR video frame to be edited into an SDR video frame (RGB, FLOAT, BT709).
  • the GPU can sequentially modify the color value of each pixel in the HDR video frame to be edited, so as to convert pixels with a color gamut of BT2020 into pixels with a color gamut of BT709, thereby converting the HDR video frame into an SDR video frame. Subsequent embodiments will introduce in detail how the GPU converts pixels with a color gamut of BT2020 into pixels with a color gamut of BT709 according to the calculation logic issued by OpenGL, so as to realize the conversion of HDR video frames into SDR video frames; this will not be expanded here.
  • the GPU can send the address of the video memory B storing the above SDR video frame to OpenGL.
  • OpenGL may return the address of the above-mentioned video memory B to the APP.
  • the video memory B above can be the same as or different from the video memory A.
  • the APP can confirm that OpenGL has completed the conversion of the HDR video frame to be edited into an SDR video frame.
  • the electronic device 100 can display the above-mentioned SDR video frame (RGB, FLOAT, BT709). In this way, when the HDR video frame cannot be displayed normally, the electronic device 100 can display the SDR video frame converted according to the HDR video frame, thereby avoiding the situation that the electronic device 100 cannot normally display the HDR video frame to be edited.
  • the electronic device 100 can obtain the SDR video frames (BT709) converted according to the HDR video frames (BT2020). Then, the electronic device 100 can display the above SDR video frames in the preview window 141. At this time, when the preview window 141 cannot normally display the original HDR video frames, the SDR video frames displayed in the preview window 141 can provide a better preview effect for the user.
  • S104 The electronic device 100 modifies the SDR video frame to be edited according to the detected editing operation.
  • the electronic device 100 may display a user interface for editing the video.
  • the user interface shown in FIGS. 1D-1J may be referred to as a user interface for editing a video.
  • the preview window 141 displays the SDR video frame to be edited (BT709) converted according to the HDR video frame to be edited (BT2020)
  • multiple controls for editing video can be displayed in the user interface, such as the operation bar 143 and the operation bar 144, which provide multiple editing controls and the like for the user to edit the video.
  • the electronic device 100 may detect a user operation acting on the editing control, and then perform image processing on the SDR video frame to be edited, so that the edited SDR video frame has the display effect indicated by the editing control.
  • the electronic device 100 can detect the user operation acting on the filter 155, and then execute the image processing indicated by the filter 155 on the SDR video frame to be edited (BT709) to generate an edited SDR video frame (BT709).
  • the edited SDR video frame has the display effect indicated by the filter 155.
  • the electronic device 100 may detect a user operation acting on a certain editing control.
  • For example, in the user interface shown in FIG. 1F, the electronic device 100 may detect a user operation acting on the edit control filter 155.
  • the above-mentioned user operations acting on a certain edit processing control may be referred to as edit operations selected by the user.
  • the APP can send the editing operation to OpenGL.
  • the APP may send the editing operation of the filter 155 to OpenGL.
  • After receiving the editing operation sent by the APP, OpenGL can determine the calculation logic for performing the editing operation on the SDR video frame to be edited. (19) Then, after the above-mentioned calculation logic is determined, OpenGL can send it to the GPU to instruct the GPU to perform calculation processing on the SDR video frame to be edited, so that the edited SDR video frame has the display effect indicated by the editing operation selected by the user. At this time, the color gamut of the edited SDR video frame is still BT709; that is, the editing object selected by the user is the SDR video frame, and the edited video frame is still an SDR video frame.
  • the GPU can modify the video frame size and/or pixel color value of the SDR video frame to be edited according to the above calculation logic.
  • the SDR video frame that has been processed according to the calculation logic may be called an edited SDR video frame.
  • Taking adding a filter as an example, the calculation logic received by the GPU instructs the GPU to modify the color values of the pixels in the video frame according to the color conversion formula of the added filter.
  • Taking a cut as an example, the calculation logic received by the GPU instructs the GPU to modify the number of pixels in the video frame according to the applied cut operation.
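  • The two kinds of calculation logic can be illustrated with a hypothetical Java sketch: applyFilter modifies color values by an assumed darkening formula, and cutFrame changes the number of pixels per row; neither is the patent's actual formula.

```java
import java.util.Arrays;

public final class EditLogic {
    // Filter example: scales each (RGB, FLOAT) channel by an assumed strength.
    public static float[] applyFilter(float[] rgb) {
        float strength = 0.7f; // assumed filter parameter
        return new float[] { rgb[0] * strength, rgb[1] * strength, rgb[2] * strength };
    }

    // Cut/crop example: keeps only the first newWidth pixels of each row,
    // where a frame is rows of packed-int pixels (assumed layout).
    public static int[][] cutFrame(int[][] frame, int newWidth) {
        int[][] out = new int[frame.length][];
        for (int row = 0; row < frame.length; row++) {
            out[row] = Arrays.copyOf(frame[row], newWidth);
        }
        return out;
    }
}
```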
  • the GPU may send the address of the video memory C storing the edited SDR video frame to OpenGL.
  • the above-mentioned video memory C may be the same as or different from video memory A or video memory B.
  • OpenGL can send the address of the above-mentioned video memory C to the APP.
  • After receiving the video memory address for storing the edited SDR video frame returned by OpenGL, the APP can obtain the edited SDR video frame according to the above address; further, the APP can display the above-mentioned edited SDR video frame. In combination with the user interface shown in FIG. 1F, the APP can display, in the preview window 141, the above SDR video frame with the display effect indicated by the filter 155.
  • the electronic device 100 can detect multiple editing operations selected by the user. For example, in addition to the operation of adding a filter shown in FIG. 1F, the electronic device 100 may also detect the user's operation of adding a title/trailer as shown in FIG. 1I. Whenever an editing operation selected by the user is detected, the APP can send the above-mentioned editing operation to OpenGL, and then OpenGL can instruct the GPU to perform the corresponding calculations, so as to add different display effects to the SDR video frames to be edited.
  • the electronic device 100 will also execute the editing processing of the audio data.
  • the processing of audio data includes audio clipping, adding audio, deleting audio, merging audio, etc., which will not be repeated here.
  • S105 The electronic device 100 generates an edited SDR video according to the edited SDR video frames.
  • the user interface for editing video also includes a save control, such as save control 146 in FIG. 1D .
  • the electronic device 100 may detect a user operation on the save control, such as the user operation on the save control 146 shown in FIG. 1J .
  • the electronic device 100 can generate an SDR video according to the edited SDR video frame, and save it to a storage device such as a memory card or a hard disk for subsequent viewing by the user.
  • FIG. 6 exemplarily shows a specific process for the electronic device 100 to generate an edited SDR video according to edited SDR video frames.
  • (1) APP can detect the user's save operation. For example, a user operation acting on the save control 146 as shown in FIG. 1J .
  • the APP may send a request to OpenGL to call the C2D engine to output the edited SDR video.
  • the C2D engine is a function provided by the GPU to output image data stored in video memory. At this point, the C2D engine can be used to output edited SDR video frames stored in the GPU.
  • the GPU may call the C2D engine to read the edited SDR video frame from the GPU.
  • the color coding format of the edited SDR video frame stored in the GPU is RGB, the data type of the color values in the color channels is floating point, and the color gamut is BT709 (RGB, FLOAT type, BT709).
  • the GPU may output the above edited SDR video frame to OpenGL.
  • the C2D engine can perform format conversion on the above-mentioned edited SDR video frame, including changing the color coding format of the video frame and the data type of the color values, so that the color coding format and the color-value data type of the final output video are the same as those of the video before editing.
  • the C2D engine may set the color encoding format of the edited SDR video frame stored in the GPU to YUV, and set the data type of the color value to integer.
  • the color encoding format of the edited SDR video frame obtained from the GPU by OpenGL calling the C2D engine is YUV, and the data type of the color values is integer (YUV, INT type, BT709).
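  • A per-pixel Java sketch of this format change, assuming full-range output and the standard BT.709 luma coefficients (Kr = 0.2126, Kb = 0.0722):

```java
public final class RgbToYuv {
    // Converts one (RGB, FLOAT, BT709) pixel back into a (YUV, INT) pixel.
    public static int[] rgbToYuvBt709(float r, float g, float b) {
        float y  = 0.2126f * r + 0.7152f * g + 0.0722f * b;
        float cb = (b - y) / 1.8556f + 0.5f; // 1.8556 = 2 * (1 - Kb)
        float cr = (r - y) / 1.5748f + 0.5f; // 1.5748 = 2 * (1 - Kr)
        return new int[] {
            Math.round(y * 255f),
            Math.round(cb * 255f),
            Math.round(cr * 255f)
        };
    }
}
```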
  • OpenGL can input the above-mentioned (YUV, INT type, BT709) edited SDR video frame into the surface requested by the encoder for caching video frames. Specifically, with reference to step (11) in FIG. 4, OpenGL can determine the ID and/or memory address of the surface used to cache edited video frames in the encoder, and can locate the memory space of that surface according to the ID and/or memory address. Then, OpenGL can write the above-mentioned edited SDR video frame (YUV, INT type, BT709) obtained from the GPU into the above-mentioned surface.
  • the encoder can detect in real time whether there is a video frame, that is, an edited SDR video frame, in the surface.
  • the encoder combines and encapsulates the video frames stored in the surface, so that the sequence of video frames split by the decoder and processed by OpenGL and GPU is repackaged into a video.
  • the electronic device 100 can obtain an SDR video composed of M edited SDR video frames, which is recorded as an edited SDR video.
  • M may be the same as or different from N (the number of video frames obtained after decoding by the decoder) in step (5) in FIG. 5 .
  • the encoder can return the edited SDR video obtained after encoding to the storage space specified by the APP. Further, the APP can obtain and display the edited video from the above storage space for the user to browse or use.
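  • On Android, the packaging of steps (5)-(6) is commonly done with MediaMuxer; a minimal Java sketch, with an illustrative output path and with the encoded frames assumed to be already collected:

```java
import java.nio.ByteBuffer;
import java.util.List;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;

public final class SaveVideo {
    // Writes already-encoded samples into an MP4 container.
    public static void mux(MediaFormat encodedFormat,
                           List<ByteBuffer> samples,
                           List<MediaCodec.BufferInfo> infos) throws Exception {
        MediaMuxer muxer = new MediaMuxer("/sdcard/DCIM/edited_sdr.mp4", // assumed path
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        int track = muxer.addTrack(encodedFormat);
        muxer.start();
        for (int i = 0; i < samples.size(); i++) {
            muxer.writeSampleData(track, samples.get(i), infos.get(i));
        }
        muxer.stop();
        muxer.release();
    }
}
```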
  • the electronic device 100 can execute the method shown in the above steps (1) to (6), and obtain the edited SDR video. Then, the APP can display the above-mentioned edited SDR video on the user interface shown in FIG. 1K . In this way, the user can obtain an edited and personalized video, so as to meet the user's demand for editing HDR video.
  • the edited SDR video is a video with a user-specified personalized display effect.
  • the aforementioned personalized display effect refers to the display effect indicated by the editing operation selected by the aforementioned user, for example, the dark gray display effect indicated by the filter control 155, the opening effect indicated by the opening title "Title 5", and so on.
  • the electronic device 100 will also encode the processed audio data at the same time.
  • the edited SDR video that is output also includes the processed audio data.
  • the above-mentioned processed audio data may or may not be consistent with the pre-processed audio data.
  • When the editing operation selected by the user involves the processing of audio, such as the operation of adding audio shown in FIG. 1G and FIG. 1H, the above-mentioned processed audio data is inconsistent with the pre-processing audio data; otherwise, the processed audio data is the same as the audio data before processing.
  • the above steps S104 to S105 briefly introduce the method for the electronic device 100 to perform editing operations on the HDR video to be edited and save it as an SDR video.
  • The following describes how OpenGL instructs the GPU to use LUT resources to convert HDR video frames with a color gamut of BT2020 into SDR video frames with a color gamut of BT709, as shown in FIG. 7.
  • Table 1 exemplarily shows LUT resources.
  • the LUT resource is a data resource, established based on a color lookup table (LUT) algorithm, for converting a video frame with a color gamut of BT2020 into a video frame with a color gamut of BT709, and it can be expressed as a color lookup table.
  • the LUT resource shown in Table 1 records the value of each color channel of 33*33*33 (35937) colors.
  • the LUT resources are not limited to the 33*33*33 LUT resources shown in Table 1, and there are other forms of LUT resources, such as 64*64*64 specifications, which are not limited in this embodiment of the present application.
  • Color number 1 means that the step number of R is 0, the step number of G is 0, and the step number of B is 0, that is, the step-index triple is (0,0,0). The color number corresponding to the color value (0,0,0) in Table 1 is 1, and its converted color value is (0.0333104, 0.0306401, 0.0307927).
  • Color number 2 means that the step number of R is 0, the step number of G is 0, and the step number of B is 1.
  • Color number 3 means that the step number of R is 0, the step number of G is 0, and the step number of B is 2.
  • Color number 33 means that the step number of R is 0, the step number of G is 0, and the step number of B is 32.
  • Color number 34 means that the step number of R is 0, the step number of G is 1, and the step number of B is 0. No more examples are given here.
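  • The numbering rule above can be written as a small Java helper (a sketch derived from the examples, with B stepping fastest, then G, then R):

```java
public final class LutIndex {
    // Returns the 1-based color number W of step indices (r, g, b) in a
    // 33*33*33 color lookup table.
    public static int colorNumber(int r, int g, int b) {
        return r * 33 * 33 + g * 33 + b + 1;
    }
    // colorNumber(0,0,0) == 1, colorNumber(0,0,1) == 2,
    // colorNumber(0,0,32) == 33, colorNumber(0,1,0) == 34.
}
```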
  • the following describes in detail how the GPU uses LUT resources to convert the HDR video with a color gamut of BT2020 into an SDR video frame with a color gamut of BT709.
  • The pixel point Q is any pixel point in the HDR video frame. The values of the color channels of pixel point Q are Q1 (0.1, 0.2, 0.1), that is, Q1 is the color value of pixel point Q at this time, where 0.1, 0.2, and 0.1 are floating-point values.
  • the GPU can multiply the value of each color channel in the color value Q1 of pixel Q by 256 to determine the position of each color channel of Q1 in the range 0-256, and obtain the color value Q2 (25.6, 51.2, 25.6). After rounding, the color value Q2 actually obtained by the GPU is (26, 51, 26).
  • Each channel of Q2 can then be divided by the step size 8 (256/32) and rounded, giving the step-index triple Q3 (3, 6, 3). The GPU may determine the color value corresponding to Q3 in the LUT resource according to the look-up formula. Specifically, the GPU may determine the color number W of Q3 in the color lookup table (Table 1) according to the table lookup formula.
  • Based on the numbering rule of Table 1, the color lookup formula is: W = R_step x 33 x 33 + G_step x 33 + B_step + 1. For Q3 (3, 6, 3), W = 3 x 1089 + 6 x 33 + 3 + 1 = 3469.
  • the GPU can determine that the color value of Q 3 obtained through mapping in Table 1 is (1, 0.98204, 0).
  • the GPU can determine another 7 color values that form a cube with Q 3 in a three-dimensional space according to Q 3 , which are denoted as S 1 -S 7 .
  • the GPU can determine the color numbers of S 1 -S 7 in the color look-up table (Table 1) according to the look-up formula, and then determine the converted color values of S 1 -S 7 in Table 1.
  • the GPU can determine the color values of each color channel corresponding to the above-mentioned Q 3 , S 1 -S 7 in Table 1, as shown in Table 2:
  • the GPU may use an interpolation method to determine Q 4 (0, 0.174277, 0.035676) from the above Q 3 and the above S 1 -S 7 .
  • Q 4 is the final color value of the pixel point Q.
  • the above-mentioned interpolation methods include but are not limited to trilinear interpolation methods, tetrahedral interpolation methods, etc., which will not be repeated here.
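  • As an example of one such method, a trilinear interpolation sketch in Java; the corner colors c[0..7] are the looked-up values of Q3 and S1-S7 (indexed so that bit 2 = R, bit 1 = G, bit 0 = B), and f holds the fractional offsets of the pixel inside the cube:

```java
public final class Trilinear {
    // Weighted average of the 8 cube-corner colors, one weight per corner.
    public static float[] interpolate(float[][] c, float[] f) {
        float[] out = new float[3];
        for (int ch = 0; ch < 3; ch++) {
            for (int i = 0; i < 8; i++) {
                float w = ((i & 4) != 0 ? f[0] : 1 - f[0])   // R axis
                        * ((i & 2) != 0 ? f[1] : 1 - f[1])   // G axis
                        * ((i & 1) != 0 ? f[2] : 1 - f[2]);  // B axis
                out[ch] += w * c[i][ch];
            }
        }
        return out;
    }
}
```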
  • FIG. 7 takes a pixel point Q in the HDR video frame to be edited as an example and shows the specific process in which the GPU uses LUT resources to convert a pixel point with a color gamut of BT2020 into a pixel point with a color gamut of BT709.
  • the GPU can perform the above processing on the other pixels in the HDR video frame to be edited, so as to convert the color gamut of each pixel in the HDR video frame to be edited from BT2020 to BT709, and then convert the HDR video frame to be edited into the SDR video frame to be edited.
  • FIG. 8 exemplarily shows a schematic diagram of a hardware structure of the electronic device 100 .
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (subscriber identification module, SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be disposed in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication technology (NFC), infrared technology (IR), etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the electronic device 100 may display the user interface shown in FIG. 1A-FIG. 1K through a GPU, an encoder, a decoder, OpenGL, and a display screen 194 .
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the HDR video to be edited may be obtained by the electronic device 100 from other electronic devices through the wireless communication function, or may be shot by the electronic device 100 through the ISP, the camera 193, the video codec, the GPU, and the display screen 194.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (non-volatile memory, NVM).
  • Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), etc.
  • Non-volatile memory may include magnetic disk storage devices and flash memory.
  • flash memory can include NOR FLASH, NAND FLASH, 3D NAND FLASH, etc.
  • it can include single-level storage cells (single-level cell, SLC), multi-level storage cells (multi-level cell, MLC), triple-level cell (TLC), quad-level cell (QLC), etc.
  • According to storage specifications, flash memory may include universal flash storage (UFS), embedded multimedia cards (eMMC), etc.
  • the random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
  • the non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
  • the internal memory 121 can support the electronic device 100 in applying for a surface and a BufferQueue from the memory.
  • the external memory interface 120 can be used to connect an external non-volatile memory, so as to expand the storage capacity of the electronic device 100 .
  • the external non-volatile memory communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, files such as music and video are stored in an external non-volatile memory.
  • the microphone 170C may collect sound.
  • the speaker 170A or the speaker connected to the earphone interface 170D can support playing the audio in the video.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B, also called the "earpiece", is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • The microphone 170C, also called the "mike" or "mic", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • pressure sensors 180A such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material.
  • the electronic device 100 determines the intensity of pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a clamshell device, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • Further, features such as automatic unlocking of the flip cover can be set according to the detected opening and closing state of the leather case or the clamshell.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature.
  • In some other embodiments, when the temperature is lower still, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by the low temperature.
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the electronic device 100 detects whether there is a user operation on the display screen 194 of the electronic device 100 through the touch sensor 180K. After the touch sensor 180K detects the above-mentioned user operation, the electronic device 100 may execute the image processing indicated by the above-mentioned user operation to realize video color gamut conversion.
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 .
  • Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 is also compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • Implementing the video editing method provided by the embodiments of the present application, the electronic device 100 can convert an HDR video with a color gamut of BT2020 into an SDR video with a color gamut of BT709, and the SDR video with a color gamut of BT709 can be displayed normally by the above-mentioned SDR video editor.
  • In this way, the electronic device 100 can display the above-mentioned converted SDR video instead of directly displaying the HDR video, thereby avoiding display problems that affect the user experience, such as unclear display of HDR video frames, when the SDR editor is used to display HDR video directly.
  • the user's operation of clicking the edit control for triggering editing of the video service may be referred to as a first user operation, such as the operation of clicking the control 133 in FIG. 1C .
  • when the first user operation is detected, the video currently displayed on the electronic device, that is, the video the user has selected for editing, may be referred to as the first video, such as video A displayed in window 131 in FIG. 1C.
  • a series of video frames obtained by decoding the first video by the decoder may be referred to as first video frames.
  • there are N first video frames, where N is determined by the duration of the first video.
  • a video frame obtained by changing the color values of the pixels of a first video frame according to the color correspondences provided by the LUT resource may be called a second video frame; the number of second video frames is also N.
  • the user interface shown in FIG. 1D may be referred to as a first interface.
  • the BT2020 color gamut can be called the first color gamut; the BT709 color gamut can be called the second color gamut.
  • the LUT resource in the format of 33*33*33 shown in Table 1 may be called the first color table.
  • Editing operations such as "clip", "filter", "music", and "text" shown in FIG. 1D-FIG. 1J can be referred to as editing operations selected by the user to change the display effect of the first video, that is, second user operations.
  • the video frame obtained after the above editing operation may be referred to as the third video frame.
  • the third video frames may be more or fewer in number than the second video frames, and/or the third video frames may be equal in number to the second video frames but have different display effects (filters, text stickers, image stickers).
  • the operation on the save control 146 in FIG. 1J may be referred to as a third user operation.
  • the saved video shown in FIG. 1K may be referred to as a second video.
  • the user interface shown in FIG. 1K may be referred to as a second interface.
  • the operation acting on the "split" control in Figure 1D can be called the operation of splitting the video; the operation acting on the "delete" control can be called the operation of deleting video frames; the operation acting on the "frame" control can be called the operation of cropping the picture size.
  • the operation of selecting the filter control 155 shown in FIGS. 1E-1F may be referred to as an operation of adding a filter.
  • the operation of adding the "Title 5" title shown in FIGS. 1I-1J may be called the operation of adding an opening or ending.
  • the buffer BufferQueue applied for by the decoder can be called the first memory; the buffer surface applied for by the encoder can be called the second memory.
  • Pixel Q in Figure 7 can be called the first pixel; Q1 can be called the current color value of the first pixel; the color values S1 to S7, which form a cube in the color space together with Q1, can be called the 7 auxiliary color values; the color number W can be called the index value; the color values corresponding to Q1 and S1 to S7 in Table 1 can be called the target color values; and Q4, obtained after interpolation, can be called the changed color value.
  • the address of video memory B in Figure 4 can be called the first video memory address; the address of video memory C in Figure 5 can be called the second video memory address.
  • the term "user interface (UI)" in the specification, claims and drawings of this application is a medium interface for interaction and information exchange between an application program or an operating system and a user, and it realizes the internal form of information Conversion to and from a form acceptable to the user.
  • the user interface of an application program is source code written in specific computer languages such as java and extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and is finally presented as content the user can recognize, such as pictures, text, buttons, and other controls.
  • a control is also known as a widget.
  • Typical controls include toolbars, menu bars, text boxes, buttons, scroll bars, images, and text.
  • the properties and contents of the controls in the interface are defined through tags or nodes.
  • XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>.
  • a node corresponds to a control or property in the interface, and after the node is parsed and rendered, it is presented as the content visible to the user.
  • the interfaces of many applications, such as hybrid applications, usually include web pages.
  • a web page, also called a page, can be understood as a special control embedded in the application program interface.
  • a web page is source code written in a specific computer language, such as hyper text markup language (HTML), cascading style sheets (CSS), JavaScript (JS), etc.
  • the specific content contained in the webpage is also defined by the tags or nodes in the source code of the webpage.
  • HTML defines the elements and attributes of the webpage through nodes such as <p>, <img>, <video>, and <canvas>.
  • all or part of the above embodiments may be implemented by software, hardware, firmware, or any combination thereof.
  • when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, DSL) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., solid state drive), etc.
  • all or part of the processes of the foregoing method embodiments can be completed by a computer program instructing related hardware.
  • the program can be stored in a computer-readable storage medium.
  • when the program is executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage medium includes various media that can store program code, such as ROM, random access memory (RAM), magnetic disks, or optical discs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A video editing method. The method can be applied to electronic devices capable of image processing, such as smartphones and tablet computers. When an HDR video is edited using an SDR video editor and the SDR video editor cannot display the video frames of the HDR video normally, for example, the display is not clear, an electronic device can use color correspondences constructed on the basis of a color lookup table to convert the color gamut of the HDR video being edited into a color gamut that the SDR video editor can display normally, so that the SDR video editor can display the HDR video frames properly.

Description

Video editing method and electronic device
This application claims priority to Chinese patent application No. 202110927488.2, entitled "A Method for Converting HDR Video to SDR Video for Video Editing", filed with the China Patent Office on August 12, 2021, and to Chinese patent application No. 202111329478.5, entitled "Video Editing Method and Electronic Device", filed with the China Patent Office on November 10, 2021, both of which are incorporated herein by reference in their entireties.
Technical Field
This application relates to the field of terminals, and in particular, to a video editing method and an electronic device.
Background
At present, most smart terminals such as mobile phones and tablet computers support shooting high dynamic range (HDR) video. HDR video includes richer color effects and can record more image details, so that the video can present an excellent viewing effect. However, due to limitations in the graphics processing capability of the chip platform, many smart terminals cannot support editing HDR video. As a result, after shooting an HDR video, the user cannot edit the shot video, for example, add a video filter, which affects the user experience.
Summary of the Invention
This application provides a video editing method and an electronic device. By implementing the video editing method, when the video to be edited is an HDR video, the electronic device can convert the HDR video in the BT2020 color gamut into an SDR video in the BT709 color gamut, so that the video to be edited can be displayed normally even when an SDR video editor is used to edit the HDR video.
In a first aspect, this application provides a video editing method, applied to an electronic device. The method includes: detecting a first user operation, where the first user operation corresponds to an editing control and is used to trigger the video editing service; in response to the first user operation, decoding a first video into N first video frames, where the color gamut of the N first video frames is a first color gamut; performing color gamut conversion on the N first video frames to obtain N second video frames, where the color gamut of the N second video frames is a second color gamut, and the first color gamut is different from the second color gamut; and displaying any one of the N second video frames on a first interface.
By implementing the video editing method provided in the first aspect, the electronic device can convert a video in the first color gamut into a video in the second color gamut and then display the video in the second color gamut. In this way, when the video editor of the electronic device cannot normally display a to-be-edited video in the first color gamut, the electronic device can convert the color gamut of the to-be-edited video from the first color gamut to the second color gamut, so that the editor can normally display the video frames of the to-be-edited video without problems, such as unclear display, that affect the user experience.
With reference to the method provided in the first aspect, in some embodiments, the range of colors that can be represented by the second color gamut is smaller than the range of colors that can be represented by the first color gamut.
With reference to the method provided in the first aspect, in some embodiments, performing color gamut conversion on the N first video frames specifically includes: performing color gamut conversion on the N first video frames by using a first color table, where the first color table includes a plurality of color values used to change the color gamut of video frames.
By implementing the method provided in the above embodiment, the electronic device can use the first color table, which provides a color conversion relationship, to convert a video in the first color gamut into a video in the second color gamut. Any color table whose color values lie within the range of the second color gamut can be used to convert a video in the first color gamut into a video in the second color gamut. Therefore, there are various kinds of first color tables, and correspondingly various methods of using a first color table to establish color value correspondences and thereby implement color gamut conversion.
With reference to the method provided in the first aspect, in some embodiments, after the N second video frames are displayed on the first interface, the method further includes: detecting a second user operation, where the second user operation is an editing operation selected by the user to change the display effect of the first video; in response to the second user operation, increasing or decreasing the number of second video frames, and/or changing the number of pixels of one or more of the N second video frames, and/or changing the color values of the pixels of one or more of the N second video frames, to obtain M third video frames, where M and N are equal or unequal; and displaying any one of the M third video frames on the first interface.
By implementing the method provided in the above embodiment, the electronic device can also detect an editing operation selected by the user to change the display effect of the first video, for example, splitting the video, deleting video frames, adding an opening or ending, cropping the picture size, adding a filter, and so on. Meanwhile, the electronic device can perform the editing operation and display the video frames processed by the editing operation. In this way, whenever the user selects an editing operation, the user can immediately see what the video looks like after that editing operation is applied.
With reference to the method provided in the first aspect, in some embodiments, the method further includes: detecting a third user operation, where the third user operation corresponds to a save control; in response to the third user operation, saving the M third video frames as a second video, where the second video is the edited video obtained by processing the first video with the editing operations; and displaying the second video on a second interface.
By implementing the method provided in the above embodiment, upon detecting the user operation of saving the video, the electronic device can, in response to the operation, encapsulate the edited video frames into a video and save it in the local storage space for the user to browse, forward, and so on at any time.
With reference to the method provided in the first aspect, in some embodiments, the editing operation selected by the user to change the display effect of the first video includes one or more of the following operations: splitting the video, deleting video frames, adding an opening or ending, cropping the picture size, adding a filter, and adding text or graphics. The operations of splitting the video, deleting video frames, and adding an opening or ending are used to increase or decrease the number of second video frames; the operation of cropping the picture size is used to change the number of pixels of one or more of the N second video frames; and the operations of adding a filter and adding text or graphics are used to change the color values of the pixels of one or more of the N second video frames.
With reference to the method provided in the first aspect, in some embodiments, when the second user operation includes an operation of increasing or decreasing the number of second video frames, M and N are unequal; when the second user operation does not include an operation of increasing or decreasing the number of second video frames, M and N are equal.
With reference to the method provided in the first aspect, in some embodiments, the first color gamut is BT2020 and the second color gamut is BT709.
In this way, the electronic device can convert a to-be-edited video whose color gamut is BT2020 into a video whose color gamut is BT709. When the video editor used by the electronic device does not support displaying a to-be-edited video whose color gamut is BT2020, the electronic device can display the converted video whose color gamut is BT709, so that the electronic device can normally display the video frames of the to-be-edited video without problems, such as unclear display, that affect the user experience.
With reference to the method provided in the first aspect, in some embodiments, the first video is a high dynamic range (HDR) video, and the second video is a standard dynamic range (SDR) video.
In this way, the electronic device can convert a to-be-edited HDR video into an SDR video. When the video editor used by the electronic device does not support displaying HDR video, the electronic device can display the converted SDR video, so that the electronic device can normally display the video frames of the to-be-edited video without problems, such as unclear display, that affect the user experience.
With reference to the method provided in the first aspect, in some embodiments, the electronic device includes a video editing application (APP), a decoder, and a first memory, where the first memory is the storage space in the decoder used to buffer the video input by the APP. Decoding the first video into N first video frames specifically includes: the APP sends the first video to the first memory; and the decoder reads the first video from the first memory and decomposes the first video into N first video frames.
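As a concrete illustration of this decode step, the sketch below uses Android's MediaCodec API (the "mediacodec" referred to in a later embodiment) to decode a video file frame by frame. This is a minimal sketch under stated assumptions, not the patent's implementation: it assumes track 0 is the video track, omits error handling, and simply releases each decoded frame where the described method would hand it to OpenGL.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import java.nio.ByteBuffer;

public class FrameDecoder {
    public void decode(String path) throws Exception {
        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(path);
        MediaFormat format = extractor.getTrackFormat(0); // assume video is track 0
        extractor.selectTrack(0);

        MediaCodec decoder = MediaCodec.createDecoderByType(
                format.getString(MediaFormat.KEY_MIME));
        decoder.configure(format, /* surface */ null, null, 0);
        decoder.start();

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        boolean inputDone = false, outputDone = false;
        while (!outputDone) {
            if (!inputDone) {
                int in = decoder.dequeueInputBuffer(10_000);
                if (in >= 0) {
                    ByteBuffer buf = decoder.getInputBuffer(in);
                    int size = extractor.readSampleData(buf, 0);
                    if (size < 0) { // no more samples: signal end of stream
                        decoder.queueInputBuffer(in, 0, 0, 0,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        decoder.queueInputBuffer(in, 0, size,
                                extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
            }
            int out = decoder.dequeueOutputBuffer(info, 10_000);
            if (out >= 0) {
                // One decoded frame (a "first video frame") is available here;
                // the described method would pass it to OpenGL at this point.
                decoder.releaseOutputBuffer(out, /* render */ false);
                outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            }
        }
        decoder.stop();
        decoder.release();
        extractor.release();
    }
}
```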
With reference to the method provided in the first aspect, in some embodiments, the electronic device further includes the Open Graphics Library (OpenGL) and a graphics processing unit (GPU). Performing color gamut conversion on the N first video frames by using the first color table to obtain N second video frames specifically includes: OpenGL receives the N first video frames sent by the decoder, where the color coding format of the N first video frames is the YUV format and the data type of the data representing color values is integer; OpenGL changes the color coding format of the N first video frames to the RGB format, and the data type of the data representing color values is changed to floating point; OpenGL calls the first color table from the GPU and uses the color value conversion relationship provided by the first color table to modify the color values of the pixels of the N first video frames to obtain the N second video frames, where the color coding format of the N second video frames is the RGB format and the data type of the data representing color values is floating point.
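To illustrate the YUV-to-RGB, integer-to-floating-point step, the sketch below converts a single 8-bit limited-range YUV pixel to normalized floating-point RGB. The patent does not state which conversion matrix is used (an HDR BT2020 source would normally use BT.2020 coefficients), and in the described method this arithmetic runs in an OpenGL shader on whole frames; the BT.709 coefficients and CPU form shown here are illustrative only.

```java
public final class YuvToRgb {
    /** Converts one 8-bit limited-range YUV pixel to normalized float RGB. */
    public static float[] convert(int y, int u, int v) {
        float yf = (y - 16) / 219f;   // luma normalized to [0, 1]
        float uf = (u - 128) / 224f;  // chroma normalized to about [-0.5, 0.5]
        float vf = (v - 128) / 224f;
        // BT.709 matrix coefficients (illustrative; BT.2020 differs slightly)
        float r = yf + 1.5748f * vf;
        float g = yf - 0.1873f * uf - 0.4681f * vf;
        float b = yf + 1.8556f * uf;
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float x) {
        return Math.max(0f, Math.min(1f, x));
    }
}
```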
With reference to the method provided in the first aspect, in some embodiments, before OpenGL calls the first color table from the GPU, the method further includes: OpenGL obtains the first color table from the APP and loads the first color table into the GPU.
With reference to the method provided in the first aspect, in some embodiments, using the color value conversion relationship provided by the first color table to modify the color values of the pixels of the N first video frames specifically includes: OpenGL determines the current color value Q1 of a first pixel, where the data type of Q1 is integer and the first pixel is any pixel in any one of the N first video frames; taking the position of Q1 in the three-dimensional color space as the origin, OpenGL determines the 7 auxiliary color values that form a cube in the color space together with Q1; OpenGL determines the respective index values of Q1 and the 7 auxiliary color values in the first color table; OpenGL looks up, in the first color table according to the index values, the target color values corresponding to Q1 and the 7 auxiliary color values; and OpenGL interpolates the target color values of Q1 and the 7 auxiliary color values to obtain the changed color value, and sets the color value of the first pixel to the changed color value.
With reference to the method provided in the first aspect, in some embodiments, the index value W of a color value (R, G, B) in the first color table is calculated as: W = R*33² + G*33¹ + B*33⁰.
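The following CPU-side sketch ties the two embodiments above together: the index formula locates a lattice point in the 33*33*33 first color table, and the output color is interpolated from the corners of the cube surrounding the input color (Q1 plus the 7 auxiliary color values). The trilinear weighting shown is one common choice; the patent does not spell out its exact interpolation weights, and the real computation runs on the GPU, so treat this as illustrative.

```java
public final class ColorTable {
    private final float[] table; // 33*33*33 entries, 3 floats (RGB) per entry

    public ColorTable(float[] table) { this.table = table; }

    /** Index of lattice point (r, g, b), each in [0, 32]: W = R*33^2 + G*33^1 + B*33^0. */
    static int indexOf(int r, int g, int b) {
        return r * 33 * 33 + g * 33 + b;
    }

    /** Maps one 8-bit RGB pixel through the table, interpolating the cube corners. */
    public float[] map(int r8, int g8, int b8) {
        float rf = r8 / 255f * 32f, gf = g8 / 255f * 32f, bf = b8 / 255f * 32f;
        int r0 = (int) rf, g0 = (int) gf, b0 = (int) bf;  // Q1's lattice corner
        int r1 = Math.min(r0 + 1, 32), g1 = Math.min(g0 + 1, 32), b1 = Math.min(b0 + 1, 32);
        float fr = rf - r0, fg = gf - g0, fb = bf - b0;   // position inside the cube
        float[] out = new float[3];
        for (int c = 0; c < 3; c++) {
            // Target color values at the 8 cube corners (Q1 + 7 auxiliary values).
            float c000 = table[indexOf(r0, g0, b0) * 3 + c], c001 = table[indexOf(r0, g0, b1) * 3 + c];
            float c010 = table[indexOf(r0, g1, b0) * 3 + c], c011 = table[indexOf(r0, g1, b1) * 3 + c];
            float c100 = table[indexOf(r1, g0, b0) * 3 + c], c101 = table[indexOf(r1, g0, b1) * 3 + c];
            float c110 = table[indexOf(r1, g1, b0) * 3 + c], c111 = table[indexOf(r1, g1, b1) * 3 + c];
            // Trilinear blend to obtain the changed color value.
            float c00 = c000 + (c100 - c000) * fr, c01 = c001 + (c101 - c001) * fr;
            float c10 = c010 + (c110 - c010) * fr, c11 = c011 + (c111 - c011) * fr;
            float c0 = c00 + (c10 - c00) * fg, c1 = c01 + (c11 - c01) * fg;
            out[c] = c0 + (c1 - c0) * fb;
        }
        return out;
    }
}
```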
With reference to the method provided in the first aspect, in some embodiments, displaying any one of the N second video frames on the first interface specifically includes: the GPU sends a first video memory address to OpenGL, where the first video memory address is the video memory address at which the N second video frames are stored; OpenGL sends the first video memory address to the APP; the APP obtains the N second video frames according to the first video memory address; and the APP displays, on the first interface, any one of the N second video frames obtained through the first video memory address.
With reference to the method provided in the first aspect, in some embodiments, in response to the second user operation, increasing or decreasing the number of second video frames, and/or changing the number of pixels of one or more of the N second video frames, and/or changing the color values of the pixels of one or more of the N second video frames, to obtain M third video frames, specifically includes: the APP sends the second user operation to OpenGL; OpenGL determines, according to the second user operation, the computation logic that achieves the video display effect indicated by the second user operation; OpenGL sends the computation logic to the GPU; and the GPU, according to the computation logic, increases or decreases the number of second video frames, and/or changes the number of pixels of one or more of the N second video frames, and/or changes the color values of the pixels of one or more of the N second video frames, to obtain the M third video frames. The video composed of the M third video frames has the display effect indicated by the second user operation; the color coding format of the M third video frames is the RGB format, and the data type of the data representing color values is floating point.
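As one concrete instance of such computation logic, the sketch below implements a crop (picture-size) operation, which changes the number of pixels per frame. Frames are modeled as packed float RGB arrays; this is only an illustration of the kind of logic involved, since in the described method OpenGL expresses it as GPU work rather than CPU loops.

```java
public final class CropOp {
    /**
     * Crops a width*height packed-RGB frame to the newW*newH rectangle whose
     * top-left corner is (left, top). Coordinates are assumed to be in range.
     */
    public static float[] crop(float[] frame, int width, int height,
                               int left, int top, int newW, int newH) {
        float[] out = new float[newW * newH * 3];
        for (int y = 0; y < newH; y++) {
            int src = ((top + y) * width + left) * 3; // start of the source row slice
            int dst = y * newW * 3;                   // start of the destination row
            System.arraycopy(frame, src, out, dst, newW * 3);
        }
        return out;
    }
}
```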
With reference to the method provided in the first aspect, in some embodiments, displaying any one of the M third video frames on the first interface specifically includes: the GPU sends a second video memory address to OpenGL, where the second video memory address is the video memory address at which the M third video frames are stored; OpenGL sends the second video memory address to the APP; the APP obtains the M third video frames according to the second video memory address; and the APP displays, on the first interface, any one of the M third video frames obtained through the second video memory address.
With reference to the method provided in the first aspect, in some embodiments, the electronic device further includes an encoder and a second memory. Saving the M third video frames as the second video specifically includes: the APP sends OpenGL a request to call the C2D engine; in response to the request, OpenGL calls the C2D engine to obtain the M third video frames from the GPU, where the color coding format of the M third video frames obtained by OpenGL is the YUV format and the data type of the data representing color values is integer; OpenGL sends the M third video frames to the second memory; and the encoder reads the M third video frames from the second memory and encapsulates the M third video frames into the second video.
With reference to the method provided in the first aspect, in some embodiments, before the APP sends the first video to the first memory, the method further includes: the APP calls mediacodec to create the encoder, and the encoder applies for the second memory; OpenGL receives the memory identification ID and/or address of the second memory sent by the APP, and determines, according to the ID and/or address, the second memory used to receive the video frames output by OpenGL; the APP calls mediacodec to create the decoder, and the decoder applies for the first memory.
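The two embodiments above map naturally onto Android's MediaCodec and MediaMuxer APIs. The sketch below is one plausible realization, not the patent's code: the encoder's input Surface plays the role of the surface described above, OpenGL would render the third video frames into it, and a MediaMuxer encapsulates the encoded output into the second video. Frame rendering and end-of-stream signaling are elided, and the codec parameters are illustrative.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;

public class VideoSaver {
    public void save(String outPath, int width, int height) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface input = encoder.createInputSurface(); // OpenGL renders frames here
        encoder.start();
        // ... render each edited frame into `input`, then call
        // encoder.signalEndOfInputStream() to finish (elided here) ...

        MediaMuxer muxer = new MediaMuxer(outPath,
                MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int track = -1;
        boolean done = false;
        while (!done) {
            int out = encoder.dequeueOutputBuffer(info, 10_000);
            if (out == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                track = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
            } else if (out >= 0) {
                if (info.size > 0 && track >= 0) {
                    muxer.writeSampleData(track, encoder.getOutputBuffer(out), info);
                }
                done = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
                encoder.releaseOutputBuffer(out, false);
            }
        }
        muxer.stop(); muxer.release();
        encoder.stop(); encoder.release();
    }
}
```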
In a second aspect, this application provides an electronic device. The electronic device includes one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are configured to store computer program code. The computer program code includes computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method described in the first aspect or in any possible implementation of the first aspect.
In a third aspect, this application provides a computer-readable storage medium including instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect or in any possible implementation of the first aspect.
In a fourth aspect, this application provides a computer program product containing instructions that, when run on an electronic device, cause the electronic device to perform the method described in the first aspect or in any possible implementation of the first aspect.
It can be understood that the electronic device provided in the second aspect, the computer storage medium provided in the third aspect, and the computer program product provided in the fourth aspect are all configured to perform the method provided in the first aspect of this application. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, and details are not repeated here.
Description of the Drawings
FIG. 1A-FIG. 1K are schematic diagrams of a group of user interfaces provided by an embodiment of this application;
FIG. 2 is a software architecture diagram of an electronic device provided by an embodiment of this application;
FIG. 3 is a flowchart of a video editing method provided by an embodiment of this application;
FIG. 4 is a flowchart of an electronic device initializing a video editing environment provided by an embodiment of this application;
FIG. 5 is a flowchart of an electronic device transforming the color gamut of a to-be-edited video and processing the to-be-edited video according to an editing operation selected by the user, provided by an embodiment of this application;
FIG. 6 is a flowchart of an electronic device saving an edited video provided by an embodiment of this application;
FIG. 7 is a schematic diagram of an electronic device using a LUT resource to implement video color gamut conversion provided by an embodiment of this application;
FIG. 8 is a hardware structure diagram of an electronic device provided by an embodiment of this application.
Detailed Description
The terms used in the following embodiments of this application are only for the purpose of describing specific embodiments, and are not intended to limit this application.
The color gamut represents the range of colors that can be displayed when a video is encoded. Standard dynamic range (SDR) video uses the BT709 color gamut; high dynamic range (HDR) video uses the BT2020 color gamut. Therefore, compared with SDR video, HDR video can use more kinds of colors, represent a wider range of colors, and display a wider brightness range. Further, HDR video can support richer image colors and more vivid image detail. This enables HDR video to provide users with an excellent viewing experience.
HDR video and SDR video are not limited to the BT2020 and BT709 color gamuts and may also use other types of color gamuts. In general, however, the color gamut used by HDR video covers a wider range than that used by SDR video, with richer colors and detail.
Usually, electronic devices such as mobile phones and tablet computers (hereinafter referred to as the electronic device 100) support shooting SDR video. With the development of shooting technology and image technology, the electronic device 100 supports shooting not only SDR video but also HDR video. Accordingly, users' demand for editing HDR video has emerged as well.
However, the editor that the electronic device 100 uses to edit HDR video is still an SDR video editor. Because the color gamut of HDR video (BT2020) covers a wider range of colors than that of SDR video (BT709), when the SDR editor is used to edit an HDR video, the SDR editor cannot display the to-be-edited HDR video normally; for example, the display is unclear or contains noise.
To solve the above problem, an embodiment of this application provides a video editing method. The method can be applied to an electronic device having this image processing capability (that is, the electronic device 100), such as a mobile phone or a tablet computer.
By implementing the video editing method provided in the embodiments of this application, the electronic device 100 can convert an HDR video whose color gamut is BT2020 into an SDR video whose color gamut is BT709, so that the electronic device 100 can normally process and display the video frames of the to-be-edited video when editing the video with the editor.
The electronic device 100 can convert the to-be-edited HDR video into the corresponding SDR video by rendering with a LUT filter. The LUT filter is a special filter built on a color lookup table (LUT) algorithm. This specific LUT filter can be used to convert video frames whose color gamut is BT2020 into video frames whose color gamut is BT709, thereby realizing the function of converting HDR video into SDR video.
Optionally, when the HDR video and/or the SDR video uses another color gamut mode, the LUT filter can be adjusted accordingly so that it performs the adapted color gamut conversion. For example, when the color gamut used by the HDR video is BT2020 and the color gamut used by the SDR video is sRGB, the LUT filter can be used to convert video frames whose color gamut is BT2020 into video frames whose color gamut is sRGB.
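For reference, one common way to apply such a LUT filter on the GPU is an OpenGL ES 3.0 fragment shader that samples the color table as a 3D texture; with GL_LINEAR filtering, the hardware performs the interpolation between neighboring table entries. The patent does not show its shader, so the code below, held in a Java string as is typical on Android, is purely illustrative.

```java
public final class LutShader {
    public static final String FRAGMENT_SHADER =
            "#version 300 es\n" +
            "precision mediump float;\n" +
            "precision mediump sampler3D;\n" +
            "in vec2 vTexCoord;\n" +
            "uniform sampler2D uFrame; // decoded video frame\n" +
            "uniform sampler3D uLut;   // 33x33x33 color lookup table\n" +
            "out vec4 fragColor;\n" +
            "void main() {\n" +
            "    vec3 src = texture(uFrame, vTexCoord).rgb;\n" +
            "    // Rescale so samples land on LUT texel centers.\n" +
            "    vec3 coord = src * (32.0 / 33.0) + (0.5 / 33.0);\n" +
            "    fragColor = vec4(texture(uLut, coord).rgb, 1.0);\n" +
            "}\n";
}
```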
In this way, the user can edit the SDR video converted from the HDR video, for example, add filters to the video. Video editing operations include but are not limited to adding filters; for example, the editing operations also include cropping, image inversion, scaling, adding text, adding filters, adding an opening (or an ending or other pages), adding video watermarks or stickers, and so on.
The electronic device 100 is not limited to a mobile phone or tablet computer; it may also be a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device, where the graphics processor of the electronic device is not capable of editing a video and saving the edited video as an HDR video. The embodiments of this application place no special limitation on the specific type of the electronic device.
FIG. 1A-FIG. 1K exemplarily show a group of user interfaces on the electronic device 100. The following describes, with reference to FIG. 1A-FIG. 1K, the application scenarios of the video editing method provided in the embodiments of this application.
First, FIG. 1A exemplarily shows the user interface on the electronic device 100 that displays the installed applications, that is, the home page. As shown in FIG. 1A, one or more application icons are displayed on the home page, for example, a "Clock" application icon, a "Calendar" application icon, a "Weather" application icon, and so on.
The one or more application icons include the icon of the "Gallery" application (hereinafter referred to as "Gallery"), that is, the icon 111. The electronic device 100 can detect a user operation, for example, a tap, acting on the icon 111. In response to the operation, the electronic device 100 can display the user interface shown in FIG. 1B.
FIG. 1B exemplarily shows the main interface of "Gallery" when "Gallery" runs on the electronic device 100. The interface can display one or more pictures or videos. The one or more videos include HDR videos, LOG videos, and other types of videos, such as SDR videos. A LOG video is a low-saturation, low-brightness video shot in the LOG gray mode, and may also be called a LOG gray film.
As shown in FIG. 1B, the video indicated by the icon 121 may be a LOG video, the video indicated by the icon 122 may be an HDR video, and the video indicated by the icon 123 may be an SDR video. When the electronic device 100 displays an HDR video or a LOG video, the icon indicating the video can display the type of the video, so that the user can learn the type of the video from the information displayed in the icon. For example, LOG is displayed in the lower left corner of the icon 121, and HDR is displayed in the lower left corner of the icon 122. The videos not labeled HDR or LOG in FIG. 1B are SDR videos.
The electronic device 100 can detect a user operation acting on the icon 122, and in response to the operation, the electronic device 100 can display the user interface shown in FIG. 1C. FIG. 1C is a user interface on which the electronic device 100 displays a specific picture or video.
As shown in FIG. 1C, the user interface may include a window 131. The window 131 can be used to display the video that the user chooses to browse. For example, in FIG. 1B, the video that the user chooses to browse is the HDR video indicated by the icon 122 ("video A"). Then, "video A" can be displayed in the window 131.
The user interface also includes an icon 132 and a control 133. The icon 132 can be used to indicate the type of the video displayed in the window 131. For example, "HDR" currently displayed in the icon 132 can indicate that "video A" is an HDR video.
The control 133 can be used to receive a user operation of editing the video (or picture) and to display a user interface for editing the video (or picture). Generally, when the chip platform used by the electronic device 100 does not support processing HDR video, the electronic device 100 does not provide the user with the function of editing HDR video. In that case, FIG. 1C would generally not include the control 133; that is, the electronic device 100 would not provide the user with a control for editing the video, because the electronic device 100 could not output and save the edited HDR video.
In the embodiments of this application, however, the electronic device 100 can convert the HDR video into an SDR video and then provide the user with the function of editing the SDR video, so as to meet the user's editing needs and improve the user experience. Therefore, in the user interface shown in FIG. 1C, the electronic device 100 can display the control 133 and respond to a user operation acting on the control 133.
The user interface may also include a control 134, a share control (135), a favorite control (136), a delete control (137), and so on.
The control 134 can be used to display detailed information about the video, such as the shooting time, shooting location, color coding format, bit rate, frame rate, pixel size, and so on.
The share control (135) can be used to send video A to other applications. For example, after detecting a user operation acting on the share control, in response to the operation, the electronic device 100 can display the icons of one or more applications, including the icon of social software A. After detecting an operation acting on the application icon of social software A, in response to the operation, the electronic device 100 can send video A to social software A; further, the user can share the video with friends through the social software.
The favorite control (136) can be used to mark a video. In the user interface shown in FIG. 1C, after detecting a user operation acting on the favorite control, in response to the operation, the electronic device 100 can mark video A as a video the user likes. The electronic device 100 can generate an album for displaying the videos marked as the user's favorites. In this way, when video A is marked as a favorite video, the user can quickly view video A through the album displaying the user's favorite videos.
The delete control (137) can be used to delete video A.
After detecting a user operation acting on the control 133, the electronic device 100 can display the user interface shown in FIG. 1D. FIG. 1D exemplarily shows a user interface for editing a video (or picture). As shown in FIG. 1D, the user interface may include a window 141, a window 142, an operation bar 143, and an operation bar 144.
The window 141 can be used to display a preview image of the HDR video being edited. Generally, the window 141 displays the cover video frame of the video. After a user operation acting on the play button 145 is detected, the window 141 can display the video frame stream of the video in sequence, that is, play the video.
The window 142 can be used to display the video frame stream of the video being edited. The user can drag the window 142 to adjust the video frame displayed in the window 141. Specifically, a ruler 147 is also shown in FIG. 1D. The electronic device 100 can detect a user operation of sliding left or right on the window 142; in response to the user operation, the position of the ruler 147 in the video frame stream changes, and the electronic device 100 can display, in the window 141, the video frame at the current position of the ruler 147.
The operation bar 143 and the operation bar 144 can display the icons of multiple video editing operations. Generally, each icon displayed in the operation bar 143 indicates a category of editing operations. The operation bar 144 can display the video editing operations belonging to the category currently selected in the operation bar 143. For example, the operation bar 143 includes "clip". "Clip" displayed in bold can indicate that the type of video editing operation currently selected by the user is "clip". At this time, the operation bar 144 displays operations belonging to the "clip" category, such as "split", "crop", "volume", and "frame".
For example, the electronic device 100 can detect a user operation acting on the "split" control; in response to the operation, the electronic device 100 can display one or more operation controls for splitting the video. The electronic device 100 can record the user's split operations, for example, the start time and end time of the first video segment, the start time and end time of the second video segment, and so on.
For another example, the electronic device 100 can detect a user operation acting on the "frame" control; in response to the operation, the electronic device 100 can record the size of the video picture set by the user and then crop the original video frames.
The operation bar 144 corresponding to the "clip" operation also includes other editing controls belonging to the "clip" category. In response to user operations acting on these controls, the electronic device 100 can record and perform the video editing operations corresponding to these controls; they are not enumerated here one by one.
The user interface also includes a save control 146. When a user operation acting on the save control 146 is detected, in response to the operation, the electronic device 100 can save the video in its current state. The video in the current state may be a video to which editing operations have been applied, or a video that has not been edited.
The electronic device 100 can detect a user operation acting on the "filter" control in the operation bar 143; in response to the operation, the electronic device 100 can display the user interface shown in FIG. 1E. FIG. 1E exemplarily shows a user interface on which the electronic device 100 displays the filters provided for the user to adjust the colors of the video picture.
"Filter" includes multiple filter options. Each filter option corresponds to an image processing method for adjusting the display effect of the video picture. The user can select one of the multiple filters provided by the electronic device 100. In response to the user operation of selecting a filter, the electronic device 100 can perform, on the video being edited, the image processing indicated by the filter selected by the user, so that the picture of the processed video has a display effect consistent with that of the filter.
As shown in FIG. 1E, multiple filter controls can be displayed in the filter selection interface, for example, a filter control 151, a filter control 152, a filter control 153, a filter control 154, a filter control 155, and so on. Each of the filter controls indicates an image processing method that renders the image with a filter.
First, when displaying the user interface shown in FIG. 1E, the electronic device 100 can by default set the currently used filter to filter 151 (the filter indicated by the filter control 151). Then, when detecting a user operation acting on a certain filter control, in response to the operation, the electronic device 100 can display the user interface for editing the video with the filter indicated by that filter control.
For example, the electronic device 100 can detect a user operation acting on the filter control 155; in response to the operation, the electronic device 100 can display the user interface shown in FIG. 1F. As shown in FIG. 1F, the electronic device 100 can highlight the display effect of the selected filter control, for example, by enlarging the filter control, thickening the border of the control, or highlighting the control; the embodiments of this application place no limitation on this.
Meanwhile, the electronic device 100 also displays, in the window 141, a preview image of the to-be-edited video rendered with the filter control 155. For example, the picture color of "video A" displayed in the window 141 at this time is different from the picture color in the window 141 in FIG. 1E, and "video A" in the window 141 in FIG. 1F has a display effect consistent with the filter indicated by the filter control 155.
Generally, to save computing resources, the electronic device 100 often renders only the video frame displayed in the current window. Alternatively, in some embodiments, the electronic device 100 can also process the cover video frame with other simple image processing means, so that the processed image has the effect of the filter in the preview.
The user interface also displays a confirmation control 147 ("√") and a cancel control 148 ("ⅹ"). When determining that the currently selected filter (filter 155) meets the user's needs, the user can tap the confirmation control 147.
Of course, when determining that the currently selected filter does not meet the user's needs, the user can tap other filter controls to select another filter. In response to a user operation acting on any filter control, the electronic device 100 can display, in the window 141, the video rendered with the filter indicated by that filter control. When none of the filters provided by the electronic device 100 meets the user's needs, or when the user pauses editing the filter, the user can tap the cancel control 148. In response to the user operation, the electronic device 100 can display the user interface shown in FIG. 1D.
The electronic device 100 can detect a user operation acting on the "music" control in the operation bar 143; in response to the operation, the electronic device 100 can display the user interface shown in FIG. 1G.
At this time, the "music" control in the operation bar 143 can be displayed in bold to indicate that the type of editing operation currently selected by the user is "music". Meanwhile, the editing controls displayed in the operation bar 144 are replaced with the operation controls corresponding to the "music" operation, for example, "add music" and "extract music".
When a user operation acting on the "add music" control is detected, in response to the operation, the electronic device 100 can display a user interface for adding music. As shown in FIG. 1H, the user interface can display multiple music options, for example, "music 1", "music 2", "music 3", "music 4", and so on. A play control and a use control are displayed after each music option. The play control can be used to play the music corresponding to the music option. The use control can be used to apply the music to the video being edited.
The "extract music" control can be used to extract the audio from the to-be-edited video. For example, in the scenario shown in FIG. 1G, after detecting a user operation acting on the control, in response to the operation, the electronic device 100 can extract the audio from "video A". The operation bar 144 is not limited to controls such as "add music" and "extract music" and may also include more music operation controls; the embodiments of this application place no limitation on this.
The electronic device 100 may detect a user operation acting on the "Text" control in the operation bar 143, and in response to the above operation, the electronic device 100 may display the user interface shown in FIG. 1I.
At this time, the "Text" control in the operation bar 143 may be displayed in bold to indicate that the type of editing operation currently selected by the user is "Text". Meanwhile, the editing controls displayed in the operation bar 144 are replaced with the operation controls corresponding to the "Text" operation, including "Opening title" and "Closing credits", each of which in turn includes multiple text templates.
As shown in FIG. 1I, the electronic device 100 may first display the text templates of the "Opening title", for example "None" 151, "Title 1", "Title 2", "Title 3", "Title 4", "Title 5", and so on. The electronic device 100 may detect a user operation acting on any of the above templates, for example a user operation acting on "Title 5". In response to the above operation, the electronic device 100 may display the opening-title effect of "Title 5" in the window 141.
Then, the electronic device 100 may detect a user operation acting on the confirmation control 161. At this time, the electronic device 100 may confirm the editing operation in which the user selects the opening title shown by "Title 5". Meanwhile, the electronic device 100 may display the user interface shown in FIG. 1K.
For the process in which the electronic device 100 detects a user operation of editing the "Closing credits" and adds closing credits to the video being edited, reference may be made to the above process of adding an "Opening title", and details are not repeated here. In addition, the electronic device 100 may further provide more editing capabilities, which are not enumerated one by one here.
FIGS. 1D-1J exemplarily show a process in which the electronic device 100 receives user operations of editing a video. After detecting an operation of saving the video, the electronic device 100 may convert the HDR video displayed in the window 131 in FIG. 1C into a corresponding SDR video. Then, the electronic device 100 may sequentially perform the editing operations shown in FIGS. 1D-1J on the converted SDR video, and then save the edited SDR video.
Referring to FIG. 1J and FIG. 1K, in response to a user operation acting on the save control 146, the electronic device 100 may perform the computation of editing and saving the SDR video. After the saving is completed, the electronic device 100 may display the user interface shown in FIG. 1K. Compared with the user interface shown in FIG. 1C, the video displayed in the window 131 at this time is the edited SDR video. For example, the cover of the video shown in FIG. 1K is the opening title added during the editing operation.
Optionally, the electronic device 100 may save the edited video as a new video. In this way, the electronic device 100 can provide the user with both the HDR video as it was before editing and the edited, personalized SDR video.
By implementing the methods described in FIGS. 1A-1K, when the electronic device 100 does not support outputting HDR video, the electronic device 100 can convert the HDR video to be edited into an SDR video. Then, on the basis that the electronic device 100 supports outputting SDR video, the electronic device 100 can provide the user with the function of editing the SDR video. In this way, from the user's perspective, the user can edit the above HDR video and save the edited video.
In a scenario where the chip platform used by the electronic device 100 does not support outputting HDR video, when the user has shot an HDR video and wants to edit it, the electronic device 100 can convert the above HDR video into a corresponding SDR video and then provide the user with a video editing service, thereby first of all meeting the user's need to edit the video.
The following specifically describes the process by which the electronic device 100 implements the video editing capability shown in FIGS. 1A-1K.
First, FIG. 2 exemplarily shows the software architecture of the electronic device 100.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. In this embodiment of the present invention, the software structure of the electronic device 100 is exemplarily described by taking an Android system with a layered architecture as an example.
The layered architecture divides the software into several layers, each of which has a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages. As shown in FIG. 2, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages. In this embodiment of the present application, the application layer further includes a video editing application. The video editing application has video data processing capabilities and can provide the user with video editing functions, including video data processing such as cropping and rendering. The user interfaces shown in FIGS. 1D-1J may be regarded as user interfaces provided by the above video editing application.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions. As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and the like. The view system includes visual controls, for example controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100, for example management of the call state (including connected, hung up, and so on). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which may automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and so on. The notification manager may also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text, for example notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. Examples include prompting text information in the status bar, issuing a prompt tone, vibrating the electronic device, and blinking the indicator light.
In this embodiment of the present application, the application framework layer further includes a media framework. The media framework provides multiple tools for editing video and audio, including MediaCodec. MediaCodec is a class provided by Android for encoding and decoding audio and video, and it includes encoders and decoders.
An encoder can convert video or audio input to it from one form into another through compression, while a decoder performs the reverse process of encoding and can convert video or audio input to it from one form into another through decompression.
For example, the video input to the decoder may be an HDR video. The above HDR video is composed of N video frames whose color gamut is BT2020, where N is an integer greater than 1. After receiving the above HDR video, the decoder can split the video composed of the N BT2020 video frames into N independent video frames, so that the electronic device 100 can subsequently perform image processing on each video frame.
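By way of illustration only, the following minimal Java sketch shows how a decoder of this kind is typically obtained through the MediaCodec and MediaExtractor classes on Android. The file path, the track-selection logic, and the omission of error handling are assumptions made for the example, not part of the claimed embodiment.

    // Hypothetical sketch: select the video track of a file and create a
    // MediaCodec decoder for it (exception handling omitted for brevity).
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource("/sdcard/DCIM/video_a.mp4");   // assumed path
    MediaFormat videoFormat = null;
    int videoTrackIndex = -1;
    for (int i = 0; i < extractor.getTrackCount(); i++) {
        MediaFormat f = extractor.getTrackFormat(i);
        String mime = f.getString(MediaFormat.KEY_MIME);
        if (mime != null && mime.startsWith("video/")) {
            extractor.selectTrack(i);
            videoFormat = f;
            videoTrackIndex = i;
            break;
        }
    }
    // The decoder is created from the track's MIME type (e.g. "video/hevc"
    // for a typical HDR recording) and started; decoded frames can then be
    // fetched one by one for subsequent image processing.
    MediaCodec decoder = MediaCodec.createDecoderByType(
            videoFormat.getString(MediaFormat.KEY_MIME));
    decoder.configure(videoFormat, /*surface*/ null, /*crypto*/ null, /*flags*/ 0);
    decoder.start();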
The Android runtime includes a core library and a virtual machine, and is responsible for the scheduling and management of the Android system. The core library consists of two parts: one part is the function library that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example a surface manager, media libraries, a three-dimensional graphics processing library (for example OpenGL ES), and a 2D graphics engine (for example SGL). The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of multiple commonly used audio and video formats, as well as still-image files, and can support multiple audio and video encoding formats, for example MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The Open Graphics Library (OpenGL) provides multiple image rendering functions, which can be used to draw anything from simple graphics to complex three-dimensional scenes. In this embodiment of the present application, the OpenGL provided by the system libraries can be used to provide graphics and image editing operations for the video editing application, for example the video cropping operation and the filter-adding operation described in the foregoing embodiments.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
FIG. 3 exemplarily shows a flow chart of the electronic device 100 implementing the editing of an HDR video. With reference to the user interfaces shown in FIGS. 1A-1K and the software architecture of the electronic device 100 shown in FIG. 2, the following embodiment of the present application specifically describes the process by which the electronic device 100 provides the user with the editing of an HDR video.
S101: The electronic device 100 determines that the user chooses to edit an HDR video.
When displaying image resources such as pictures and videos stored in the gallery for the user to browse, the electronic device 100 may display an edit control. The edit control can provide the user with a service of editing the currently displayed image resource. The video editing method provided in this embodiment of the present application is mainly applied to video-type image resources. Subsequent embodiments use video as an example to describe the video editing method provided in this embodiment of the present application.
Referring to the user interface shown in FIG. 1C, the window 131 can provide the user with browsing of the image resources stored in the electronic device 100, and the control 133, that is, the edit control, can provide the user with a video editing service. At this time, the video being edited is the video displayed in the window 131.
After detecting a user operation acting on the edit control, in response to the above operation, the electronic device 100 may first determine the type of the video being edited, so as to determine whether the chip platform supports processing and outputting the video. In this embodiment of the present application, when detecting that the video to be edited is an HDR video, the electronic device 100 can determine that, when editing the HDR video, the electronic device 100 cannot correctly display the HDR video frames. This is because the color gamut adopted by the HDR video is BT2020, and in the editing scene the chip platform used by the electronic device 100 does not support displaying HDR video frames whose color gamut is BT2020. As a result, in the editing scene the electronic device 100 displays the HDR video frames abnormally, thereby affecting the user experience. The above abnormal display of HDR video frames includes unclear display, wrong pixel colors, and so on.
When detecting that the video to be edited is an SDR video, the electronic device 100 can determine that, when editing the SDR video, the electronic device 100 can normally display the SDR video to be edited.
After determining the type of the video to be edited, the electronic device 100 can execute different editing strategies according to the type of the video, so as to meet the user's editing needs.
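As an illustrative sketch only (the embodiment does not mandate any particular API), such a type check can be expressed on Android by inspecting the color-standard field of the track format obtained in the sketch above; whether that field is populated depends on the recording, so this is an assumption made for the example.

    // Hypothetical sketch: choose the editing strategy from the track format.
    MediaFormat f = extractor.getTrackFormat(videoTrackIndex);
    boolean isHdr = f.containsKey(MediaFormat.KEY_COLOR_STANDARD)
            && f.getInteger(MediaFormat.KEY_COLOR_STANDARD)
                    == MediaFormat.COLOR_STANDARD_BT2020;
    if (isHdr) {
        // BT2020 input: take the LUT-based BT2020 -> BT709 path described below.
    } else {
        // SDR input: edit and display directly via the chip platform.
    }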
Specifically, when the video being edited is an SDR video, the electronic device 100 can determine to invoke the processing and output capabilities provided by the chip to provide the user with a service of editing the above video. When the video being edited is an HDR video, that is, when the electronic device 100 cannot normally display the HDR video frames, the electronic device 100 can determine to use the image editing method provided in this embodiment of the present application: use a LUT resource to convert the HDR video frames into corresponding SDR video frames, so that the electronic device 100 can display the above SDR video frames and abnormal display is avoided. The above LUT resource is a data resource, established based on a color lookup table (LUT) algorithm, for converting video frames whose color gamut is BT2020 into video frames whose color gamut is BT709.
Further, based on the existing capability of editing SDR video, the electronic device 100 provides the user with a service of editing the above HDR video. Here, what the user actually edits is the SDR video corresponding to the above HDR video. In this way, the electronic device 100 can provide the user with the ability to edit HDR video.
Thus, after detecting a user operation of editing an HDR video, the electronic device 100 can execute the image editing method provided in this embodiment of the present application to provide the user with a video editing service, so as to meet the user's need to edit personalized videos.
Referring to the user interface shown in FIG. 1C, after detecting a user operation acting on the edit control 133, in response to the above operation, the electronic device 100 may first determine the type of the video displayed in the window 131 ("Video A"). Here, "Video A" is an HDR video, and therefore the electronic device 100 can determine that the video to be edited ("Video A") is an HDR video. Subsequently, the electronic device 100 can determine to use the image editing method provided in this embodiment of the present application to provide the user with the ability to edit the HDR video.
S102: The electronic device 100 initializes the video editing environment.
After detecting a user operation acting on the edit control, the electronic device 100 may initialize the editing environment. Initializing the editing environment refers to creating or applying for the tools and storage space required for editing a video, so that the electronic device 100 can perform the data processing of editing the video.
Initializing the video editing environment includes: creating an encoder, a decoder, and OpenGL, and applying for memory used to cache video frames and for display memory provided by the GPU. The decoder can be used to split the video to be edited into a sequence of video frames; the encoder can be used to combine the edited video frames into a video. OpenGL can be used to adjust the video frames and/or modify the pixels in the video frames, thereby changing the image content contained in the video, that is, rendering the video frames. The above adjustment of video frames includes adding or removing video frames and modifying the size of video frames.
The above memory includes a surface and a BufferQueue. The surface can be used to cache the rendered video frames output by the GPU; the encoder can encapsulate the sequence of video frames stored in the surface into a video. The BufferQueue can be used to cache the video to be edited that is input by the video editing application; the decoder can split the video to be edited stored in the BufferQueue into a sequence of video frames to be edited.
Specifically, FIG. 4 exemplarily shows a flow chart of the electronic device 100 initializing the video editing environment. As shown in FIG. 4, APP can be used to represent the video editing application.
First, (1) the electronic device 100 may detect a user operation of tapping the edit control. Referring to the user interface shown in FIG. 1C, the user operation acting on the edit control 133 may be referred to as the user operation of tapping the edit control.
(2) In response to the above user operation, the APP can determine the type of the video to be edited and the type of the edited video. In this embodiment of the present application, the video to be edited is an HDR video, in which the color encoding format is the YUV format, the data type of the color values of the color channels is integer (INT), and the color gamut is BT2020. In this case, the electronic device 100 can determine to convert the HDR video into an SDR video whose color gamut is BT709, thereby ensuring that in the editing environment the electronic device 100 can support displaying the video frames of the HDR video to be edited (that is, displaying the SDR video corresponding to the HDR video), while providing the user with the function of editing the HDR video.
If the video to be edited is an SDR video, the APP can determine that the video to be edited is an SDR video and that the edited video is also an SDR video.
Then, (3) the APP may send MediaCodec a request to create an encoder. The request may carry output video type information. The above output video type information can be used to indicate the type of the edited video output by the encoder. Specifically, the above output video type information may include the color gamut and color encoding format of the edited video. In this embodiment of the present application, the output video type information is specifically BT709 and YUV. The above BT709 can be used to indicate that the color gamut of the edited video is the BT709 color gamut; the above YUV can indicate that the color encoding format of the edited video is YUV. The output video type information is not limited to the color gamut and color encoding format, and may further include other parameters describing the output video type. When the above output video type information is specifically BT709 and YUV, the video obtained by the encoder's encoding is an SDR video.
(4) In response to the above request, MediaCodec can create the encoder indicated by the above output video type information. For example, when recognizing the output video type information (BT709, YUV) carried in the above request, MediaCodec can create an encoder for encoding SDR video (BT709, YUV). The encoder can encapsulate input SDR video frames whose color gamut is BT709 and whose color encoding format is YUV into one SDR video.
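Purely as an illustrative sketch (the concrete calls and parameter values are assumptions, not a limitation of this embodiment), an encoder carrying output video type information of this kind can be created on Android roughly as follows:

    // Hypothetical sketch: create an H.264 encoder whose output is tagged as
    // BT709, fed from a Surface; resolution and bitrate are assumed values.
    MediaFormat fmt = MediaFormat.createVideoFormat(
            MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
    fmt.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    fmt.setInteger(MediaFormat.KEY_COLOR_STANDARD,
            MediaFormat.COLOR_STANDARD_BT709);
    fmt.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000);
    fmt.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    fmt.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    MediaCodec encoder = MediaCodec.createEncoderByType(
            MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);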
(5) After the encoder is created, MediaCodec may return to the APP confirmation information indicating that the creation is complete, for example an acknowledgment character such as ACK.
(6) After receiving the above confirmation information, the APP may send the above encoder a request to create a surface. A surface is a memory storage space with a specific data structure; generally, a surface is dedicated to caching video frames to be encoded. (7) In response to the above request, the encoder can apply to the memory for a surface. (8) In response to the above application, the memory can partition a block of storage space as the surface applied for by the encoder, for the encoder's use.
The memory can provide multiple surfaces. Each surface carries an identity (ID) indicating that surface. For any surface, the ID of the surface corresponds one-to-one with the address of the surface. For example, suppose the ID of surface-01 is 01 and its address is 0011-0100. When recognizing that the ID of a certain surface is 01, the electronic device 100 can determine that the surface is surface-01 and can also determine that the address of the surface is 0011-0100; conversely, when recognizing that the address used by a certain surface is 0011-0100, the electronic device 100 can determine that the surface is surface-01.
(9) After finishing allocating the surface to the encoder, the memory may return the ID and/or address of the above surface to the encoder. After receiving the above returned information, the encoder can confirm that the application for the surface succeeded and determine the ID and/or address of the usable surface. (10) Further, the encoder may return the ID and/or address of the above surface to the APP. In this way, the APP can determine that the encoder has completed the process of applying to the memory for a surface, and can determine the ID and/or address of the usable surface applied for by the encoder.
Then, (11) the APP may send an initialization request to OpenGL. The above request may carry surface information. The above surface information can be used to indicate to OpenGL the ID and/or address of the encoder's surface used to receive the edited video.
(12) According to the surface information carried in the above request, OpenGL can determine the surface used by the encoder; that is, OpenGL can determine into which cache (surface) the edited video frames are to be written when they are output after the computation on the video frames has been performed.
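A minimal sketch of steps (6) to (12), under the assumption that they are realized with the standard Android EGL interface (eglDisplay, eglConfig, and eglContext are assumed to have been created beforehand): the encoder hands out its input surface, and OpenGL binds that surface as its render target, so that every frame OpenGL draws is queued directly to the encoder.

    // Hypothetical sketch: the encoder's input Surface becomes OpenGL's output.
    Surface inputSurface = encoder.createInputSurface();   // steps (6)-(10)
    // Steps (11)-(12): bind OpenGL rendering onto that Surface via EGL, so a
    // frame rendered and presented with eglSwapBuffers() lands in the encoder.
    EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
            eglDisplay, eglConfig, inputSurface, new int[]{EGL14.EGL_NONE}, 0);
    EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);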
In addition, after receiving the above initialization request, (13) OpenGL also applies to the GPU for a block of display memory, denoted display memory A. Display memory A can be used to cache the video frames to be edited. The above display memory A may be a texture in OpenGL or a frame buffer object (FBO).
In response to the above application, the GPU allocates a block of storage space for OpenGL as the display memory applied for by OpenGL (display memory A). Then, (14) the GPU may return the address of display memory A to OpenGL. After receiving the address of display memory A, OpenGL can locate display memory A through the above address, and OpenGL can then use display memory A. Subsequently, (15) OpenGL may return confirmation information to the APP. The confirmation information can indicate to the APP that OpenGL has completed its initialization.
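As an illustrative sketch only, display memory A of this kind can be allocated through the standard OpenGL ES object-creation calls; whether a texture, a frame buffer object, or both are used, and the frame size (width, height), are implementation choices assumed for the example.

    // Hypothetical sketch: allocate a texture and attach it to a frame buffer
    // object (FBO) that caches the video frame to be edited.
    int[] tex = new int[1];
    int[] fbo = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
            GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);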
After confirming that OpenGL has completed its initialization, (16) the APP may send MediaCodec a request to create a decoder. (17) In response to the above request, MediaCodec can create the decoder. When creating the decoder, MediaCodec does not need to specify the type of video the decoder supports decoding; the decoder can determine the type of the video to be decoded after receiving the video to be decoded input by the APP. In this embodiment of the present application, the video to be edited is an HDR video, and therefore the above decoder can be used to decode the HDR video.
After MediaCodec creates the decoder, (18) the decoder may send the memory a request to apply for a block of storage space (a BufferQueue). The BufferQueue can be used to receive the video to be decoded that is input by the APP. (19) In response to the above request, the memory can allocate a BufferQueue for the decoder. Subsequently, (20) the memory returns the address of the above BufferQueue to the decoder.
After receiving the address of the BufferQueue returned by the memory, the decoder can locate the usable BufferQueue in the memory according to the above address. Subsequently, (21) the decoder may return to the APP confirmation information indicating that the decoder was created successfully.
In other embodiments, the process of the APP calling MediaCodec to create the decoder (step (16) to step (21)) may also occur before the encoder is created. This embodiment of the present application imposes no limitation on this.
The process shown in steps (1) to (21) in FIG. 4 shows the process of the electronic device 100 initializing the video editing environment. After the initialization of the editing environment is completed, the electronic device 100 can begin to perform the editing operations selected by the user on the video to be edited.
S103: The electronic device 100 converts HDR video frames into SDR video frames.
After completing the process of initializing the video editing environment, the electronic device 100 can use the above video editing environment to split the HDR video to be edited (color gamut BT2020) into SDR video frames (color gamut BT709).
Specifically, the electronic device 100 can first use the decoder to split the above HDR video to be edited (color gamut BT2020) into HDR video frames (color gamut BT2020), and then the electronic device 100 uses the color-gamut-conversion filter resource (the LUT resource) to convert the above HDR video frames (color gamut BT2020) into SDR video frames. Steps (1) to (15) in FIG. 5 show the specific process by which the electronic device 100 converts HDR video frames into SDR video frames.
First, (1) the APP may send OpenGL an instruction to load the LUT resource into GPU display memory. The APP can determine the storage address of the LUT resource, and the above instruction may carry the above storage address, for example 0100-1000. In this way, the APP can indicate to OpenGL the storage space from which to obtain the LUT resource.
(2) After receiving the above instruction, OpenGL can first locate the storage space storing the LUT resource according to the storage address carried in the instruction. Then, OpenGL can read the LUT resource from the above storage space and write the above LUT resource into the GPU.
The address of the display memory in the GPU that stores the LUT resource may be specified by the APP or by OpenGL. When the address of the display memory storing the LUT resource is specified by the APP, the above instruction should also carry the address of the display memory storing the LUT resource.
(3) After the writing succeeds, OpenGL may return write-success indication information to the APP. After receiving the above write-success indication information, the APP can confirm that OpenGL has completed the operation of loading into GPU display memory the LUT resource required for converting the HDR video into the SDR video.
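Under the assumption that the LUT resource is held as a 3D texture (a common representation for color lookup tables, not a limitation of this embodiment), steps (1) to (3) can be sketched as follows; readLutFile and the file path are hypothetical names introduced only for the example.

    // Hypothetical sketch: load a 33x33x33 RGB LUT from its storage location
    // into GPU display memory as an OpenGL ES 3.0 3D texture.
    ByteBuffer lutData = readLutFile("/sdcard/lut/bt2020_to_bt709.bin");
    int[] lutTex = new int[1];
    GLES30.glGenTextures(1, lutTex, 0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_3D, lutTex[0]);
    GLES30.glTexImage3D(GLES30.GL_TEXTURE_3D, 0, GLES30.GL_RGB,
            33, 33, 33, 0, GLES30.GL_RGB, GLES30.GL_UNSIGNED_BYTE, lutData);
    // Linear filtering lets the GPU interpolate between the 35937 entries.
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D,
            GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_3D,
            GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);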
Subsequently, (4) the APP can input the HDR video to be edited into the decoder. Specifically, according to the editing-environment initialization process, the APP can determine the address of the BufferQueue applied for by the decoder for caching the video to be decoded. After determining the above address, the APP can write the HDR video to be edited into the above BufferQueue. At this time, the color encoding format adopted by the HDR video to be edited is the YUV format, the data type of the color values of the color channels is integer (INT), and the color gamut is BT2020.
(5) When detecting that a video has been written into the BufferQueue, the decoder can decode the video stored in the BufferQueue to obtain the video frame sequence of the video. Thus, after the HDR video to be edited is written into the BufferQueue, the decoder can output the video frames of the above HDR video to be edited, that is, N HDR video frames (the HDR video frames to be edited). At this time, the color encoding format, data type, and color gamut of the HDR video frames are still YUV, INT, and BT2020.
It can be understood that a video may also include audio. Therefore, the decoders also include an audio decoder. In the video editing method provided in this embodiment of the present application, the processing involving audio is the prior art and is not repeated here.
In the case where audio data is included, after the decoder's decoding, the electronic device 100 can respectively obtain the N HDR video frames and the audio data of the HDR video to be edited. It can be understood that, when the HDR video to be edited does not include audio data, the electronic device 100 does not need to perform audio decoding on the HDR video to be edited, and therefore the data obtained after decoding does not include audio data either.
(6) After completing the decoding, the decoder can send the decoded HDR video frames to be edited (YUV, INT, BT2020) to OpenGL in sequence. Correspondingly, OpenGL successively receives the N HDR video frames to be edited (YUV, INT, BT2020).
(7) After receiving the HDR video frames to be edited (YUV, INT, BT2020) sent by the decoder, OpenGL first changes the color encoding format of the above HDR video frames to be edited and the data type of the color values in the color channels. In this embodiment of the present application, OpenGL sets the color encoding format of the above HDR video frames to be edited to RGB and sets the data type of the color values in the color channels to FLOAT; that is, it changes the HDR video frames to be edited from the original (YUV, INT) format into HDR video frames to be edited in (RGB, FLOAT) format. This is because the color encoding format that OpenGL supports operating on when drawing and/or rendering video frames is RGB, with the color values of the color channels in floating-point form.
Therefore, after the above N HDR video frames are input into OpenGL, their color encoding format is changed to RGB and the data type of the color values in the color channels is changed to floating point. At this time, the color gamut of the HDR video frames is still BT2020; that is, the above process does not yet involve converting the HDR video frames into SDR video frames.
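For reference, a conversion of this kind can be sketched with the published BT.2020 luma coefficients (Kr = 0.2627, Kb = 0.0593); the normalization of the input samples is an assumption of the example, and the actual conversion used by the electronic device 100 may differ.

    // Hypothetical sketch: convert one BT2020 YUV sample to floating-point
    // RGB. y is assumed normalized to [0,1]; cb and cr are assumed already
    // centered on zero, i.e. in [-0.5, 0.5].
    static float[] yuvToRgbBt2020(float y, float cb, float cr) {
        float r = y + 1.4746f * cr;
        float g = y - 0.16455f * cb - 0.57135f * cr;
        float b = y + 1.8814f * cb;
        return new float[]{r, g, b};
    }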
(8) After converting the HDR video frames to be edited from the (YUV, INT, BT2020) format into the (RGB, FLOAT, BT2020) format, OpenGL can write the changed HDR video frames to be edited into the GPU. Specifically, in steps (13) and (14) shown in FIG. 4, OpenGL applied to the GPU for display memory A. At this point, OpenGL can write the above changed HDR video frames to be edited (RGB, FLOAT, BT2020) into the GPU.
(9) After the writing succeeds, OpenGL can call the LUT resource written into the GPU in advance. The above LUT resource is used to convert HDR video frames into SDR video frames.
(10) After obtaining the above LUT resource, OpenGL can determine the computing logic for using the above LUT resource to convert the HDR video frames to be edited into SDR video frames (RGB, FLOAT, BT709). (11) Then, OpenGL can issue the above computing logic to the GPU, so as to instruct the GPU to complete, using the above LUT resource, the conversion of the HDR video frames to be edited into SDR video frames (RGB, FLOAT, BT709).
(12) In response to the computing logic issued by OpenGL, the GPU can modify the color value of each pixel in the HDR video frames to be edited in turn, so as to convert pixels whose color gamut is BT2020 into pixels whose color gamut is BT709, thereby converting the HDR video frames into SDR video frames. A subsequent embodiment will describe in detail the specific process by which the GPU, in accordance with the computing logic issued by OpenGL, converts pixels whose color gamut is BT2020 into pixels whose color gamut is BT709 so as to convert HDR video frames into SDR video frames, which is not expanded here.
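As a minimal sketch of computing logic of this kind (the shader below is an assumption made for illustration, not the claimed implementation), a fragment shader can look up each pixel's converted color in the 3D LUT uploaded earlier:

    // Hypothetical GLSL ES 3.0 fragment shader, held as a Java string: each
    // BT2020 RGB pixel indexes the 3D LUT, and the sampled value is taken as
    // the corresponding BT709 color.
    static final String LUT_FRAGMENT_SHADER =
            "#version 300 es\n"
            + "precision mediump float;\n"
            + "precision mediump sampler3D;\n"
            + "uniform sampler2D uFrame;\n"   // HDR frame (RGB, BT2020)
            + "uniform sampler3D uLut;\n"     // 33x33x33 conversion LUT
            + "in vec2 vTexCoord;\n"
            + "out vec4 outColor;\n"
            + "void main() {\n"
            + "    vec3 c = texture(uFrame, vTexCoord).rgb;\n"
            + "    // Map color to LUT texel centers: idx = c*(N-1)/N + 1/(2N), N = 33.\n"
            + "    vec3 idx = c * (32.0 / 33.0) + (0.5 / 33.0);\n"
            + "    outColor = vec4(texture(uLut, idx).rgb, 1.0);\n"
            + "}\n";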
(13) After using the LUT resource to convert the HDR video frames to be edited into SDR video frames (RGB, FLOAT, BT709), the GPU can send OpenGL the address of display memory B in which the above SDR video frames are stored. (14) Further, OpenGL can return the address of the above display memory B to the APP. The above display memory B may be the same as display memory A, or may be different.
(15) After receiving the address of the above display memory B, the APP can confirm that OpenGL has completed converting the HDR video frames to be edited into SDR video frames. At this time, the electronic device 100 can display the above SDR video frames (RGB, FLOAT, BT709). In this way, when the HDR video frames cannot be displayed normally, the electronic device 100 can display the SDR video frames obtained by conversion from the HDR video frames, thereby avoiding the situation in which the electronic device 100 cannot normally display the HDR video frames to be edited.
Referring to the user interface shown in FIG. 1D, after implementing the conversion method shown in steps (1) to (14) in FIG. 5, the electronic device 100 can obtain the SDR video frames (BT709) converted from the HDR video frames (BT2020). The electronic device 100 can then display the above SDR video frames in the preview window 141. In this way, in a situation where the preview window 141 cannot normally display the original HDR video frames, the SDR video frames displayed in the preview window 141 can provide the user with a better preview effect.
S104: The electronic device 100 modifies the SDR video frames to be edited according to the detected editing operation.
After detecting a user operation of editing the video, the electronic device 100 can display the user interface for editing the video. The user interfaces shown in FIGS. 1D-1J may be referred to as the user interface for editing the video. While the preview window 141 displays the SDR video frames to be edited (BT709) converted from the HDR video frames to be edited (BT2020), multiple controls for editing the video may be displayed in the user interface, for example the multiple editing controls provided by the operation bar 143 and the operation bar 144 in FIG. 1D, for the user to edit the video. The electronic device 100 can detect a user operation acting on one of the above editing controls and then perform image processing on the SDR video frames to be edited, so that the edited SDR video frames have the display effect indicated by the above editing control.
Taking the operation of adding the filter 155 shown in FIG. 1F as an example, the following describes, with reference to steps (16) to (20) in FIG. 5, the process by which the electronic device 100 modifies the SDR video frames to be edited according to the detected editing operation.
After converting the HDR video frames to be edited (BT2020) into the SDR video frames to be edited (BT709), the electronic device 100 can detect a user operation acting on the filter 155 and then perform, on the SDR video frames to be edited (BT709), the image processing indicated by the filter 155, generating edited SDR video frames (BT709). The edited SDR video frames have the display effect shown by the filter 155.
Specifically, referring to steps (16) to (20) in FIG. 5, first, (16) the electronic device 100 may detect a user operation acting on a certain editing control. For example, in the user interface shown in FIG. 1F, the electronic device 100 may detect a user operation acting on the editing control of the filter 155. The above user operation acting on a certain edit-processing control may be referred to as the editing operation selected by the user.
(17) After detecting the editing operation selected by the user, the APP can send the above editing operation to OpenGL. For example, the APP can send the editing operation of the filter 155 to OpenGL.
(18) After receiving the editing operation sent by the APP, OpenGL can determine the computing logic for performing the above editing operation on the SDR video frames to be edited. (19) Then, after determining the above computing logic, OpenGL can issue the above computing logic to the GPU, so as to instruct the GPU to perform the computation on the SDR video frames to be edited, so that the edited SDR video frames have the display effect of the editing operation selected by the user. At this time, the color gamut of the edited SDR video frames is still BT709; that is, the object edited by the editing operation selected by the user is the SDR video frames, and the edited video frames are still SDR video frames.
(20) In response to the computing logic issued by OpenGL, the GPU can modify the video frame size and/or pixel color values of the SDR video frames to be edited according to the above computing logic. The SDR video frames that have been processed according to the computing logic may be referred to as edited SDR video frames.
When the editing operation selected by the user is an editing operation of modifying pixel color values, for example the operation of adding a filter, the computing logic received by the GPU instructs the GPU to modify the color values of the pixels in the video frames according to the color conversion formula of the added filter. When the editing operation selected by the user is an editing operation of modifying the video frame size, for example adding a cropping operation, the computing logic received by the GPU instructs the GPU to modify the number of pixels in the video frames according to the video frame region on which the cropping operation acts.
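For instance, a dark-gray style filter such as the filter 155 could be expressed by a per-pixel color conversion formula of the following form; the weights and strength values below are illustrative assumptions, not the actual formula used by the electronic device 100.

    // Hypothetical sketch of a filter's color conversion formula: desaturate
    // each pixel toward its luma and darken it slightly.
    static float[] darkGrayFilter(float r, float g, float b) {
        float luma = 0.2126f * r + 0.7152f * g + 0.0722f * b; // BT.709 weights
        float amount = 0.6f;   // assumed desaturation strength
        float scale = 0.85f;   // assumed darkening factor
        return new float[]{
                scale * (r + amount * (luma - r)),
                scale * (g + amount * (luma - g)),
                scale * (b + amount * (luma - b))
        };
    }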
(21) After obtaining the edited SDR video frames, the GPU can send OpenGL the address of display memory C in which the above edited SDR video frames are stored. The above display memory C may be the same as display memory A or display memory B, or may be different. (22) Subsequently, OpenGL can send the address of the above display memory C to the APP.
(23) After receiving the display memory address, returned by OpenGL, at which the edited SDR video frames are stored, the APP can obtain the edited SDR video frames according to the above address; further, the APP can display the above edited SDR video frames. With reference to the user interface shown in FIG. 1F, the APP can display, in the preview window 141, the above SDR video frames to which the display effect shown by the filter 155 has been added.
It can be understood that the electronic device 100 can detect multiple editing operations selected by the user. For example, in addition to the operation of adding a filter shown in FIG. 1F, the electronic device 100 can also detect the user operation of adding an opening title/closing credits shown in FIG. 1I, and so on. Whenever an editing operation selected by the user is detected, the APP can send the above editing operation to OpenGL, and OpenGL can then instruct the GPU to perform the corresponding computation, thereby attaching different video display effects to the SDR video frames to be edited.
If the editing operation selected by the user further includes processing of the audio data in the HDR video to be edited, the electronic device 100 also performs the editing processing on the above audio data. The processing of the audio data includes audio cropping, adding audio, deleting audio, merging audio, and so on, which is not repeated here.
S105: The electronic device 100 generates an edited SDR video according to the edited SDR video frames.
The user interface for editing the video further includes a save control, for example the save control 146 in FIG. 1D. The electronic device 100 can detect a user operation acting on the save control, for example the user operation acting on the save control 146 shown in FIG. 1J. In response to the above user operation, the electronic device 100 can generate an SDR video according to the edited SDR video frames and save it to a storage device such as a memory card or hard disk, for the user's subsequent browsing.
Specifically, FIG. 6 exemplarily shows the specific process by which the electronic device 100 generates the edited SDR video according to the edited SDR video frames.
First, (1) the APP can detect the user's save operation, for example the user operation acting on the save control 146 shown in FIG. 1J. (2) In response to the operation, the APP can send OpenGL a request to call the C2D engine to output the edited SDR video. The C2D engine is a function provided by the GPU for outputting image data stored in display memory. Here, the C2D engine can be used to output the edited SDR video frames stored in the GPU.
(3) In response to the above request, the GPU can call the C2D engine to read the edited SDR video frames from the GPU. At this time, the color encoding format of the edited SDR video frames stored in the GPU is RGB, the data type of the color values in the color channels is floating point, and the color gamut is BT709 (RGB, FLOAT, BT709).
In response to the above call, the GPU can output the above edited SDR video frames to OpenGL. During the output process, the C2D engine can perform format conversion on the above edited SDR video frames, including changing the color encoding format of the video frames and the data type of the color values, so that the color encoding format and color-value data type of the finally output video are consistent with those of the video before editing.
Specifically, the C2D engine can set the color encoding format of the edited SDR video frames stored in the GPU to YUV and set the data type of the color values to integer. In this way, the color encoding format of the edited SDR video frames that OpenGL obtains from the GPU by calling the C2D engine is YUV, and the data type of the color values is integer (YUV, INT, BT709).
(4) Then, OpenGL can input the above edited SDR video frames (YUV, INT, BT709) into the surface that the encoder applied for to cache video frames. Specifically, referring to step (11) in FIG. 4, OpenGL can determine the ID and/or memory address of the encoder's surface used to cache the edited video frames, and according to the ID and/or memory address OpenGL can locate the memory space of the above surface used to cache the edited video frames. Then, OpenGL can write the above edited SDR video frames (YUV, INT, BT709) obtained from the GPU into the above surface.
(5) The encoder can detect in real time whether there are video frames, that is, edited SDR video frames, in the surface. When detecting video frames in the surface, the encoder combines and encapsulates the video frames stored in the surface, thereby re-encapsulating into one video the sequence of video frames that was obtained by the decoder's splitting and then processed by OpenGL and the GPU. In this way, through the encoder's encoding operation, the electronic device 100 can obtain an SDR video composed of M edited SDR video frames, denoted the edited SDR video. The above M may be the same as or different from N (the number of video frames obtained after the decoder's decoding) in step (5) in FIG. 5.
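A minimal sketch of step (5), assuming the standard MediaCodec drain loop with a MediaMuxer as the container writer (the output path and timeout are assumptions made for the example):

    // Hypothetical sketch: drain encoded SDR frames from the encoder and
    // write them into an MP4 container.
    MediaMuxer muxer = new MediaMuxer("/sdcard/Movies/edited_sdr.mp4",
            MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int track = -1;
    boolean done = false;
    while (!done) {
        int index = encoder.dequeueOutputBuffer(info, 10_000 /* us */);
        if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            track = muxer.addTrack(encoder.getOutputFormat());
            muxer.start();
        } else if (index >= 0) {
            ByteBuffer encoded = encoder.getOutputBuffer(index);
            if (info.size > 0 && track >= 0) {
                muxer.writeSampleData(track, encoded, info);
            }
            done = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
            encoder.releaseOutputBuffer(index, false);
        }
    }
    muxer.stop();
    muxer.release();

In Surface input mode, the end of the stream would typically be signaled with encoder.signalEndOfInputStream() once the last edited frame has been rendered.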
最后,(6)编码器可将编码后得到的编辑后SDR视频传回APP指定的存储空间。进一步的,APP可从上述存储空间获取并显示编辑后的视频,以供用户浏览或使用。Finally, (6) the encoder can return the edited SDR video obtained after encoding to the storage space specified by the APP. Further, the APP can obtain and display the edited video from the above storage space for the user to browse or use.
结合图1J、图1K所示的用户界面,在检测到作用于图1J中保存控件146的保存操作后,电子设备100可执行上述步骤(1)~(6)所示的方法,得到编辑后的SDR视频。然后,APP可在参考图1K所示的用户界面显示上述编辑后的SDR视频。这样,用户可以得到经过编辑的个性化的视频,从而满足用户编辑HDR视频的需求。Combined with the user interface shown in Figure 1J and Figure 1K, after detecting the save operation acting on the save control 146 in Figure 1J, the electronic device 100 can execute the method shown in the above steps (1) to (6), and obtain the edited SDR video. Then, the APP can display the above-mentioned edited SDR video on the user interface shown in FIG. 1K . In this way, the user can obtain an edited and personalized video, so as to meet the user's demand for editing HDR video.
The edited SDR video is a video with the personalized display effect specified by the user, i.e., the display effect indicated by the editing operations the user selected, such as the dark gray display effect indicated by filter control 155 and the opening effect indicated by the title "Title 5".
It can be understood that, when S105 and S106 also include processing of the decoded audio data, the electronic device 100 encodes the processed audio data at the same time as it encodes the edited SDR video frames. In that case, the edited SDR video that is output also includes the processed audio data. The processed audio data may or may not be identical to the audio data before processing: when the editing operations selected by the user involve audio processing, such as the add-audio operations shown in FIG. 1G and FIG. 1H, the processed audio data differs from the audio data before processing; otherwise, the processed audio data is identical to the audio data before processing.
S104 to S105 above briefly describe, from the perspective of the electronic device 100 and its functional modules, how the electronic device 100 performs editing operations on the HDR video to be edited and saves the result as an SDR video. The specific process by which OpenGL instructs the GPU to use LUT resources to convert HDR video frames with the BT2020 color gamut into SDR video frames with the BT709 color gamut is described with reference to FIG. 7.
First, Table 1 shows an exemplary LUT resource. As shown in Table 1, a LUT resource is a data resource, built on a color look-up table (LUT) algorithm, for converting video frames with the BT2020 color gamut into video frames with the BT709 color gamut; it can be represented as a color look-up table.
The LUT resource shown in Table 1 records the values of each color channel for 33*33*33 (35937) colors. LUT resources are not limited to the 33*33*33 form shown in Table 1; other forms exist, such as the 64*64*64 specification, and this embodiment of the present application places no restriction on the form.
Table 1

color number    R(33)        G(33)        B(33)
1               0.0333104    0.0306401    0.0307927
2               0.0477607    0.0295567    0.0306401
3               0.0808118    0.027039     0.0302739
4               1            2            3
…               …            …            …
3468            1            0.98204      0
…               …            …            …
35935           1            1            1
35936           1            1            1
35937           1            1            1
Color number 1 means that R is at its first step (step number 0), G is at its first step (step number 0), and B is at its first step (step number 0). That is, when the step-quantized color value is (0, 0, 0), that color value corresponds to color number 1 in Table 1, whose color value is (0.0333104, 0.0306401, 0.0307927).
Color number 2 means R at step number 0, G at step number 0, and B at step number 1. Color number 3 means R at step number 0, G at step number 0, and B at step number 2. By analogy, color number 33 means R at step number 0, G at step number 0, and B at step number 32; and color number 34 means R at step number 0, G at step number 1, and B at step number 0. The remaining entries follow the same pattern and are not enumerated here.
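This numbering scheme can be captured in a few lines of C++. The sketch below stores a 33*33*33 LUT as a flat array and derives the W index from the step indices; treating Table 1's 1-based color number as W + 1 is an assumption made so that the flat array can be 0-based.

```cpp
#include <array>
#include <vector>

// A 33*33*33 LUT stored flat, enumerated the way Table 1 numbers colors:
// R is the most significant step index, then G, then B.
struct Lut33 {
    static constexpr int kSteps = 33;
    std::vector<std::array<float, 3>> entries;  // 33^3 = 35937 RGB triples

    // W from the look-up formula: W = R*33^2 + G*33^1 + B*33^0.
    static constexpr int W(int r, int g, int b) {
        return (r * kSteps + g) * kSteps + b;
    }

    // Mapped color for step indices (r, g, b), each in [0, 32]. Assumption:
    // the array is 0-based, so Table 1's 1-based color number is W + 1.
    const std::array<float, 3>& at(int r, int g, int b) const {
        return entries[W(r, g, b)];
    }
};

// The enumeration matches the examples in the text:
static_assert(Lut33::W(0, 0, 0) == 0);    // Table 1 color number 1
static_assert(Lut33::W(0, 0, 1) == 1);    // Table 1 color number 2
static_assert(Lut33::W(0, 1, 0) == 33);   // Table 1 color number 34
static_assert(Lut33::W(3, 6, 3) == 3468); // the W computed in the worked example below
```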
The following describes in detail the process by which the GPU uses LUT resources to convert HDR video frames with the BT2020 color gamut into SDR video frames with the BT709 color gamut.
As shown in FIG. 7, pixel Q is an arbitrary pixel in the HDR video frame. Assume the values of the color channels of pixel Q are Q1 = (0.1, 0.2, 0.1); this Q1 is the current color value of pixel Q, where 0.1, 0.2, and 0.1 are floating-point values.
First, the GPU can multiply the value of each color channel in the color value Q1 of pixel Q by 256 to determine each channel's position on the scale from 0 to 256, obtaining the color value Q2 = (25.6, 51.2, 25.6). After rounding to the nearest integer, the color value the GPU actually obtains is Q2 = (26, 51, 26).
With a step size of 8, the value of each color channel in Q2 = (26, 51, 26) is divided by 8 and rounded to the nearest integer, giving Q3 = (3, 6, 3). The step size is 8 because the LUT resource is of the 33*33*33 format: to map 256 channel values onto 32 steps, every 8 values must be treated as one, i.e., 256/32 = 8.
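The two rounding steps can be condensed into a small helper. The sketch below reproduces the worked numbers for Q1 = (0.1, 0.2, 0.1), with `ToStepIndex` as an assumed name.

```cpp
#include <cmath>
#include <cstdio>

// Quantize one float channel value to a LUT step index, following the two
// rounding steps in the text: scale to the 0-to-256 range, round, then
// divide by the step size 8 and round again.
static int ToStepIndex(float channel) {
    int q2 = static_cast<int>(std::lround(channel * 256.0f)); // e.g. 25.6 -> 26
    return static_cast<int>(std::lround(q2 / 8.0));           // e.g. 26/8 -> 3
}

int main() {
    // Pixel Q with Q1 = (0.1, 0.2, 0.1) yields Q3 = (3, 6, 3).
    std::printf("%d %d %d\n",
                ToStepIndex(0.1f), ToStepIndex(0.2f), ToStepIndex(0.1f));
}
```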
Then, the GPU can determine the color value corresponding to Q3 = (3, 6, 3) in the LUT resource according to the look-up formula. Specifically, the GPU can determine the color number W of Q3 in the color look-up table (Table 1) using the following look-up formula, where (R3, G3, B3) are the stepped channel values of Q3:

W = R3 * 33^2 + G3 * 33^1 + B3 * 33^0

For Q3 = (3, 6, 3), the corresponding color number W in Table 1 is:

W = 3 * 33^2 + 6 * 33^1 + 3 * 33^0 = 3468
By then looking up Table 1, the GPU can determine that the color value obtained by mapping Q3 through Table 1 is (1, 0.98204, 0).
Further, as shown in FIG. 7, the GPU can determine from Q3 the other 7 color values that together with Q3 form a cube in three-dimensional space, denoted S1 to S7. Similarly, the GPU can determine the color numbers of S1 to S7 in the color look-up table (Table 1) according to the look-up formula, and thereby determine the Table 1 mapped color values of S1 to S7.
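One way to enumerate the 7 cube-corner neighbors S1 to S7 in step space is sketched below. The corner ordering and the clamping at the table border are assumptions, since the text does not specify either.

```cpp
#include <algorithm>
#include <array>

// Given the stepped point Q3 = (r, g, b), enumerate the other 7 corners of
// the unit cube it spans in step space (offsets from {0,1}^3, skipping the
// origin), which play the role of S1..S7. std::min keeps indices inside
// [0, 32] at the table border.
static std::array<std::array<int, 3>, 7> CubeNeighbors(int r, int g, int b) {
    std::array<std::array<int, 3>, 7> s{};
    int n = 0;
    for (int dr = 0; dr <= 1; ++dr)
        for (int dg = 0; dg <= 1; ++dg)
            for (int db = 0; db <= 1; ++db) {
                if (dr == 0 && dg == 0 && db == 0) continue;  // Q3 itself
                s[n++] = { std::min(r + dr, 32), std::min(g + dg, 32),
                           std::min(b + db, 32) };
            }
    return s;
}
```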
The GPU can then determine the color values of the color channels corresponding to Q3 and S1 to S7 in Table 1, as shown in Table 2:
Table 2

      W       mapped color value
Q3    3468    (0, 0.172259, 0.0564126)
S1    4557    (0, 0.171084, 0.0907149)
S2    3501    (0, 0.210559, 0.0516823)
S3    3466    (0, 0.1897, 0.0623789)
S4    4590    (0, 0.209201, 0.0870527)
S5    4558    (0, 0.163119, 0.0860914)
S6    4591    (0, 0.200137, 0.0819867)
S7    3502    (0, 0.20148, 0.0486458)
Then, the GPU can use interpolation to determine Q4 = (0, 0.174277, 0.035676) from Q3 and S1 to S7. Q4 is the final color value of pixel Q. Applicable interpolation methods include, but are not limited to, trilinear interpolation, tetrahedral interpolation, and the like, which are not described in detail here.
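For concreteness, a minimal trilinear-interpolation sketch over the 8 corner colors (Q3 and S1 to S7) is shown below. The fractional weights (fr, fg, fb) would come from the remainders discarded during the step quantization; how those fractions are obtained is an assumption, and trilinear is only one of the interpolation schemes the text allows.

```cpp
#include <array>

using Color = std::array<float, 3>;

// Trilinear interpolation over the 8 cube-corner colors c[dr][dg][db]
// fetched from the LUT (Q3 and S1..S7), weighted by the fractional position
// (fr, fg, fb) of the pixel inside the cube, each in [0, 1].
static Color Trilinear(const Color c[2][2][2], float fr, float fg, float fb) {
    Color out{};
    for (int dr = 0; dr <= 1; ++dr)
        for (int dg = 0; dg <= 1; ++dg)
            for (int db = 0; db <= 1; ++db) {
                float w = (dr ? fr : 1 - fr) * (dg ? fg : 1 - fg) *
                          (db ? fb : 1 - fb);
                for (int k = 0; k < 3; ++k) out[k] += w * c[dr][dg][db][k];
            }
    return out;  // the final color value, Q4 in the text's example
}
```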
Taking one pixel Q in the HDR video frame to be edited as an example, FIG. 7 shows the specific process by which the GPU uses the LUT resource to convert one pixel from the BT2020 color gamut to the BT709 color gamut. By analogy, the GPU can perform the same processing on the other pixels of the HDR video frame to be edited, converting the color gamut of every pixel from BT2020 to BT709 and thereby converting the HDR video frame to be edited into an SDR video frame to be edited.
图8示例性示出了电子设备100的硬件结构示意图。FIG. 8 exemplarily shows a schematic diagram of a hardware structure of the electronic device 100 .
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (subscriber identification module, SIM) card interface 195 and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或 数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. This memory may hold instructions or data that processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 100.
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160 . For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以 用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 . The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。 Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。A modem processor may include a modulator and a demodulator. Wherein, the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is passed to the application processor after being processed by the baseband processor. The application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves through the antenna 2 for radiation.
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC , FM, and/or IR techniques, etc. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi -zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
在本申请实施例中,电子设备100显示图1A-图1K所示的用户界面可通过GPU、编码器、解码器、OpenGL,显示屏194完成。In this embodiment of the present application, the electronic device 100 may display the user interface shown in FIG. 1A-FIG. 1K through a GPU, an encoder, a decoder, OpenGL, and a display screen 194 .
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。The display screen 194 is used to display images, videos and the like. The display screen 194 includes a display panel. The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc. In some embodiments, the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。The electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
In this embodiment of the present application, the HDR video being edited may be obtained by the electronic device 100 from another electronic device through the wireless communication function, or may be shot by the electronic device 100 using the ISP, the camera 193, the video codec, the GPU, and the display screen 194.
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193 中。The ISP is used for processing the data fed back by the camera 193 . For example, when taking a picture, open the shutter, the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 193 .
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。Camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects it to the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. DSP converts digital image signals into standard RGB, YUV and other image signals. In some embodiments, the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。The NPU is a neural-network (NN) computing processor. By referring to the structure of biological neural networks, such as the transmission mode between neurons in the human brain, it can quickly process input information and continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
内部存储器121可以包括一个或多个随机存取存储器(random access memory,RAM)和一个或多个非易失性存储器(non-volatile memory,NVM)。The internal memory 121 may include one or more random access memories (random access memory, RAM) and one or more non-volatile memories (non-volatile memory, NVM).
Random access memory may include static random-access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM; for example, fifth-generation DDR SDRAM is generally called DDR5 SDRAM), and the like. Non-volatile memory may include magnetic disk storage devices and flash memory.
By operating principle, flash memory may include NOR FLASH, NAND FLASH, 3D NAND FLASH, and the like; by the number of potential levels per storage cell, it may include single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), and the like; and by storage specification, it may include universal flash storage (UFS), embedded multimedia card (eMMC), and the like.
随机存取存储器可以由处理器110直接进行读写,可以用于存储操作系统或其他正在运行中的程序的可执行程序(例如机器指令),还可以用于存储用户及应用程序的数据等。The random access memory can be directly read and written by the processor 110, and can be used to store executable programs (such as machine instructions) of an operating system or other running programs, and can also be used to store data of users and application programs.
非易失性存储器也可以存储可执行程序和存储用户及应用程序的数据等,可以提前加载到随机存取存储器中,用于处理器110直接进行读写。The non-volatile memory can also store executable programs and data of users and application programs, etc., and can be loaded into the random access memory in advance for the processor 110 to directly read and write.
In this embodiment of the present application, the internal memory 121 enables the electronic device 100 to request a surface and a BufferQueue from memory.
外部存储器接口120可以用于连接外部的非易失性存储器,实现扩展电子设备100的存储能力。外部的非易失性存储器通过外部存储器接口120与处理器110通信,实现数据 存储功能。例如将音乐,视频等文件保存在外部的非易失性存储器中。在本申请实施例中,电子设备100拍摄HDR视频时可通过麦克风170C采集声音。在播放视频的过程中,扬声器170A或耳机接口170D连接的扬声器可支持播放视频中的音频。The external memory interface 120 can be used to connect an external non-volatile memory, so as to expand the storage capacity of the electronic device 100 . The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, files such as music and video are stored in an external non-volatile memory. In the embodiment of the present application, when the electronic device 100 shoots the HDR video, the microphone 170C may collect sound. In the process of playing the video, the speaker 170A or the speaker connected to the earphone interface 170D can support playing the audio in the video.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。The audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。 Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。 Receiver 170B, also called "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 receives a call or a voice message, the receiver 170B can be placed close to the human ear to receive the voice.
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。The microphone 170C, also called "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can put his mouth close to the microphone 170C to make a sound, and input the sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In some other embodiments, the electronic device 100 may be provided with two microphones 170C, which may also implement a noise reduction function in addition to collecting sound signals. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。The earphone interface 170D is used for connecting wired earphones. The earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。The pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 180A may be disposed on display screen 194 . There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may be comprised of at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the intensity of pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed. When a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the icon of the short message application, the instruction of creating a new short message is executed.
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。The gyro sensor 180B can be used to determine the motion posture of the electronic device 100 . In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake. The gyro sensor 180B can also be used for navigation and somatosensory game scenes.
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case. In some embodiments, when the electronic device 100 is a clamshell machine, the electronic device 100 can detect opening and closing of the clamshell according to the magnetic sensor 180D. Furthermore, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, features such as automatic unlocking of the flip cover are set.
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure the distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes. The light emitting diodes may be infrared light emitting diodes. The electronic device 100 emits infrared light through the light emitting diode. Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 . The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。The ambient light sensor 180L is used for sensing ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent the electronic device 100 from being shut down abnormally due to the low temperature. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。The touch sensor 180K is also called "touch device". The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”. The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194 . In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
在本申请实施例中,电子设备100检测是否有作用于电子设备100显示屏194的用户操作可通过触摸传感器180K完成。在触摸传感器180K检测到上述用户操作后,电子设备100可执行上述用户操作指示的图像处理,实现视频色域转换。In the embodiment of the present application, the electronic device 100 detects whether there is a user operation on the display screen 194 of the electronic device 100 through the touch sensor 180K. After the touch sensor 180K detects the above-mentioned user operation, the electronic device 100 may execute the image processing indicated by the above-mentioned user operation to realize video color gamut conversion.
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获 取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone. The audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 180M, so as to realize the voice function. The application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。The keys 190 include a power key, a volume key and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。The motor 191 can generate a vibrating reminder. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects for touch operations acting on different areas of the display screen 194 . Different application scenarios (for example: time reminder, receiving information, alarm clock, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect can also support customization.
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。The SIM card interface 195 is used for connecting a SIM card. The SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 . The electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards. The SIM card interface 195 is also compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 adopts an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
By implementing the video editing method provided in this embodiment of the present application, when an SDR video editor is used to edit an HDR video and the editor cannot properly display the HDR video, the electronic device 100 can convert the HDR video with the BT2020 color gamut into an SDR video with the BT709 color gamut, which the SDR video editor can display normally. The electronic device 100 can then display the converted SDR video instead of directly displaying the HDR video, thereby avoiding display problems that degrade the user experience, such as unclear rendering of HDR video frames when they are displayed directly in an SDR editor.
在本申请实施例中:In the embodiment of this application:
1. The user's operation of tapping the edit control that triggers the video-editing service, such as tapping control 133 in FIG. 1C, may be referred to as a first user operation. The video displayed by the electronic device when the first user operation is detected, i.e., the video the user has selected for editing, may be referred to as a first video, such as video A displayed in window 131 in FIG. 1C. The series of video frames obtained by the decoder decoding the first video may be referred to as first video frames; there are N first video frames, where N is determined by the duration of the first video. The video frames obtained after changing the color values of the pixels of the first video frames according to the color-value correspondence provided by the LUT resource may be referred to as second video frames; there are likewise N second video frames. The user interface shown in FIG. 1D may be referred to as a first interface.
2、BT2020色域可称为第一色域;BT709色域可称为第二色域。2. The BT2020 color gamut can be called the first color gamut; the BT709 color gamut can be called the second color gamut.
3、表1所示的33*33*33格式的LUT资源可称为第一颜色表。3. The LUT resource in the format of 33*33*33 shown in Table 1 may be called the first color table.
4、图1D-图1J所示的“剪辑”、“滤镜”、“音乐”、“文本”等编辑操作可称为用户选定的改变所述第一视频显示效果的编辑操作,即第二用户操作。经过上述编辑操作处理后得到的视频帧可称为第三视频帧。第三视频帧的数量可以比第二视频帧包括更多或更少视频帧,和/或,第三视频帧的数量与第二视频帧的数量相同但具有不同显示效果(滤镜、文本贴纸、图像贴纸)。4. Editing operations such as "editing", "filter", "music" and "text" shown in Figure 1D-Figure 1J can be referred to as editing operations selected by the user to change the display effect of the first video, that is, the first video Two user operations. The video frame obtained after the above editing operation may be referred to as the third video frame. The third video frame may include more or fewer video frames than the second video frame, and/or the third video frame may have the same number as the second video frame but have different display effects (filters, text stickers , image stickers).
5、图1J中作用于保存控件146的操作可称为第三用户操作。图1K中显示的保存后的视频可称为第二视频。图1K所示的用户界面可称为第二界面。5. The operation on the save control 146 in FIG. 1J may be referred to as a third user operation. The saved video shown in FIG. 1K may be referred to as a second video. The user interface shown in FIG. 1K may be referred to as a second interface.
6、图1D中作用于“分割”控件的操作可称为分割视频的操作;作用于“删除”控件的操作可称为删除视频帧的操作;作用于“画幅”控件的操作可称为裁剪画面尺寸的操作。图1E-图1F所示的选择滤镜控件155的操作可称为添加滤镜的操作。图1I-图1J所示的添加“标题5”片头操作可称为添加片头或片尾。6. The operation acting on the "split" control in Figure 1D can be called the operation of splitting the video; the operation acting on the "delete" control can be called the operation of deleting video frames; the operation acting on the "frame" control can be called cropping Manipulation of screen size. The operation of selecting the filter control 155 shown in FIGS. 1E-1F may be referred to as an operation of adding a filter. The operation of adding the "Title 5" title shown in FIGS. 1I-1J may be called adding a title or trailer.
7、解码器申请的缓存BufferQueue可称为第一内存;编码器申请的缓存surface可称为第二内存。7. The cache BufferQueue applied by the decoder can be called the first memory; the cache surface applied by the encoder can be called the second memory.
8、图7中像素点Q可称为第一像素点,Q 1可称为第一像素点的当前颜色值Q1,颜色值S 1~S 7的可称为与Q1构成空间正方体的7个辅助颜色值;颜色编号W可称为索引值;Q1、S 1~S 7在表1中各自对应的颜色值可称为目标颜色值;插值后得到的Q 4可称为变更后颜色值。 8. Pixel Q in Figure 7 can be called the first pixel, Q 1 can be called the current color value Q1 of the first pixel, and the color values S 1 to S 7 can be called the 7 space cubes that form a space cube with Q1 The auxiliary color value; the color number W can be called the index value; the corresponding color values of Q1, S 1 to S 7 in Table 1 can be called the target color value; Q 4 obtained after interpolation can be called the changed color value.
9、图4中显存B的地址可称为第一显存地址;图5中显存C的地址可称为第二显存地址。9. The address of video memory B in Figure 4 can be called the first video memory address; the address of video memory C in Figure 5 can be called the second video memory address.
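A minimal sketch of the color-table lookup and interpolation summarized in items 3 and 8 follows. It assumes the 33*33*33 table is stored as a flat array indexed by W = R×33² + G×33 + B with components quantized to the 0-32 grid, and it blends the eight cube corners (Q1 plus the 7 auxiliary values) with trilinear weights, which is one common choice; class and method names are invented for illustration, and a real implementation would run per pixel in a GPU shader.

```java
public final class Lut3d {
    private final float[][] table; // table[W] = {R', G', B'}; 33*33*33 entries

    public Lut3d(float[][] table) { this.table = table; }

    // The index value W for grid coordinates r, g, b in 0..32.
    private static int index(int r, int g, int b) {
        return r * 33 * 33 + g * 33 + b;
    }

    // Maps an input color with components in [0, 1] through the table.
    public float[] map(float r, float g, float b) {
        float fr = r * 32f, fg = g * 32f, fb = b * 32f;
        int r0 = Math.min((int) fr, 31);
        int g0 = Math.min((int) fg, 31);
        int b0 = Math.min((int) fb, 31);
        float dr = fr - r0, dg = fg - g0, db = fb - b0;
        float[] out = new float[3];
        // Blend the 8 corners of the cube spanned by Q1 and the 7 auxiliary values.
        for (int i = 0; i < 8; i++) {
            int cr = r0 + ((i >> 2) & 1), cg = g0 + ((i >> 1) & 1), cb = b0 + (i & 1);
            float w = ((i & 4) != 0 ? dr : 1f - dr)
                    * ((i & 2) != 0 ? dg : 1f - dg)
                    * ((i & 1) != 0 ? db : 1f - db);
            float[] c = table[index(cr, cg, cb)];
            out[0] += w * c[0];
            out[1] += w * c[1];
            out[2] += w * c[2];
        }
        return out; // the changed color value, cf. Q4 in item 8
    }
}
```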
The term "user interface (UI)" in the specification, claims, and drawings of this application refers to a medium interface for interaction and information exchange between an application or the operating system and the user; it converts between the internal form of information and a form the user can accept. The user interface of an application is source code written in a specific computer language such as Java or the Extensible Markup Language (XML); the interface source code is parsed and rendered on the terminal device and is finally presented as content the user can recognize, such as pictures, text, buttons, and other controls. A control, also called a widget, is the basic element of a user interface; typical controls include toolbars, menu bars, text boxes, buttons, scroll bars, pictures, and text. The attributes and content of the controls in an interface are defined by tags or nodes; for example, XML specifies the controls an interface contains through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or an attribute in the interface, and after parsing and rendering, a node is presented as content visible to the user. In addition, the interfaces of many applications, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in an application interface. A web page is source code written in a specific computer language, such as the Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), or JavaScript (JS); web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component whose function is similar to that of a browser. The specific content a web page contains is likewise defined by tags or nodes in the web page source code; for example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
The commonly used form of a user interface is the graphical user interface (GUI), which refers to a user interface, related to computer operation, that is displayed graphically. It may be an icon, a window, a control, or another interface element displayed on the display screen of an electronic device, where controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets.
As used in the specification and appended claims of this application, the singular forms "a", "an", "said", "the above", "the", and "this" are intended to also include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the listed items. As used in the above embodiments, depending on the context, the term "when" may be interpreted to mean "if", "after", "in response to determining", or "in response to detecting". Similarly, depending on the context, the phrase "when it is determined" or "if (a stated condition or event) is detected" may be interpreted to mean "if it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), or semiconductor media (for example, solid-state drives).
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be accomplished by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. The aforementioned storage media include ROM, random access memory (RAM), magnetic disks, optical discs, and various other media that can store program code.

Claims (22)

  1. A video editing method, applied to an electronic device, characterized in that the method comprises:
    detecting a first user operation, where the first user operation corresponds to an editing control and is used to trigger a video editing service;
    in response to the first user operation, decoding a first video into N first video frames, where the color gamut of the N first video frames is a first color gamut;
    performing color gamut conversion on the N first video frames to obtain N second video frames, where the color gamut of the N second video frames is a second color gamut and the first color gamut is different from the second color gamut;
    displaying any one of the N second video frames on a first interface.
  2. The method according to claim 1, characterized in that the range of colors that the second color gamut can represent is smaller than the range of colors that the first color gamut can represent.
  3. The method according to claim 1 or 2, characterized in that performing color gamut conversion on the N first video frames specifically comprises: performing color gamut conversion on the N first video frames using a first color table, where the first color table includes a plurality of color values used to change the color gamut of video frames.
  4. The method according to any one of claims 1-3, characterized in that, after the N second video frames are displayed on the first interface, the method further comprises:
    detecting a second user operation, where the second user operation is an editing operation selected by the user to change the display effect of the first video;
    in response to the second user operation, increasing or decreasing the number of the second video frames, and/or changing the number of pixels of one or more of the N second video frames, and/or changing the color values of the pixels of one or more of the N second video frames, to obtain M third video frames, where M is equal to or different from N;
    displaying any one of the M third video frames on the first interface.
  5. The method according to claim 4, characterized in that the method further comprises:
    detecting a third user operation, where the third user operation corresponds to a save control;
    in response to the third user operation, saving the M third video frames as a second video, where the second video is the edited video obtained by applying the editing operation to the first video;
    displaying the second video on a second interface.
  6. The method according to claim 4, characterized in that the editing operation selected by the user to change the display effect of the first video includes one or more of: splitting the video, deleting video frames, adding a title or trailer, cropping the picture size, adding a filter, and adding text or graphics;
    where the operations of splitting the video, deleting video frames, and adding a title or trailer are used to increase or decrease the number of the second video frames; the operation of cropping the picture size is used to change the number of pixels of one or more of the N second video frames; and the operations of adding a filter and adding text or graphics are used to change the color values of the pixels of one or more of the N second video frames.
  7. The method according to claim 6, characterized in that,
    when the second user operation includes an operation that increases or decreases the number of the second video frames, M is different from N; when the second user operation does not include such an operation, M is equal to N.
  8. The method according to any one of claims 1-7, characterized in that the first color gamut is BT2020 and the second color gamut is BT709.
  9. The method according to any one of claims 1-8, characterized in that the first video is a high-dynamic-range (HDR) video and the second video is a standard-dynamic-range (SDR) video.
  10. The method according to any one of claims 3-8, characterized in that the electronic device includes a video editing application (APP), a decoder, and a first memory, where the first memory is the storage space in the decoder used to buffer the video input by the APP, and decoding the first video into N first video frames specifically comprises:
    the APP sends the first video to the first memory;
    the decoder reads the first video from the first memory and decomposes the first video into the N first video frames.
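For illustration only, the claim-10 handoff (the APP feeding the first video into the first memory and the decoder draining decoded frames) can be sketched with the real Android MediaExtractor and MediaCodec APIs as below; the single-step structure, the timeout value, and the class name are assumptions of this sketch, and error handling is omitted.

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;
import java.nio.ByteBuffer;

public final class DecodeStep {
    // Queues one compressed sample into the decoder's input buffers and,
    // if a decoded frame is ready, releases it toward the output surface.
    public static void step(MediaExtractor extractor, MediaCodec decoder) {
        int in = decoder.dequeueInputBuffer(10_000L); // timeout in microseconds
        if (in >= 0) {
            ByteBuffer buf = decoder.getInputBuffer(in);
            int size = extractor.readSampleData(buf, 0);
            if (size >= 0) {
                decoder.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            } else { // no more samples: signal end of stream
                decoder.queueInputBuffer(in, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM);
            }
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int out = decoder.dequeueOutputBuffer(info, 10_000L);
        if (out >= 0) {
            decoder.releaseOutputBuffer(out, true); // hand the frame onward
        }
    }
}
```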
  11. The method according to claim 10, characterized in that the electronic device further includes the Open Graphics Library (OpenGL) and a graphics processing unit (GPU), and performing color gamut conversion on the N first video frames using the first color table to obtain N second video frames specifically comprises:
    the OpenGL receives the N first video frames sent by the decoder, where the color coding format of the N first video frames is the YUV format and the data type of the data representing color values is integer;
    the OpenGL changes the color coding format of the N first video frames to the RGB format and changes the data type of the data representing color values to floating point;
    the OpenGL calls the first color table from the GPU and modifies the color values of the pixels of the N first video frames using the color value conversion relationship provided by the first color table, to obtain the N second video frames, where the color coding format of the N second video frames is the RGB format and the data type of the data representing color values is floating point.
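The format change recited in claim 11 (integer YUV in, floating-point RGB out) can be illustrated on the CPU as follows. The matrix coefficients shown are the BT.2020 values (Kr = 0.2627, Kb = 0.0593) under a full-range assumption; a real pipeline would perform this per fragment in a shader and would also honor the limited or full range the stream declares.

```java
public final class YuvToRgb {
    // Converts one full-range 8-bit YCbCr pixel to normalized RGB floats.
    public static float[] bt2020FullRange(int y, int u, int v) {
        float yf = y / 255f;
        float cb = u / 255f - 0.5f;
        float cr = v / 255f - 0.5f;
        float r = yf + 1.4746f * cr;
        float g = yf - 0.16455f * cb - 0.57135f * cr;
        float b = yf + 1.8814f * cb;
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float x) {
        return Math.max(0f, Math.min(1f, x));
    }
}
```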
  12. The method according to claim 11, characterized in that, before the OpenGL calls the first color table from the GPU, the method further comprises: the OpenGL obtains the first color table from the APP and loads the first color table into the GPU.
  13. The method according to claim 11 or 12, characterized in that modifying the color values of the pixels of the N first video frames using the color value conversion relationship provided by the first color table specifically comprises:
    the OpenGL determines the current color value Q1 of a first pixel, where the data type of Q1 is integer and the first pixel is any pixel in any one of the N first video frames;
    the OpenGL takes the position of Q1 in the three-dimensional color space as the origin and determines the 7 auxiliary color values that form a cube in that space with Q1;
    the OpenGL determines the respective index values of Q1 and the 7 auxiliary color values in the first color table;
    the OpenGL looks up in the first color table, according to the index values, the target color values corresponding to Q1 and to the 7 auxiliary color values;
    the OpenGL interpolates the target color values of Q1 and the 7 auxiliary color values to obtain a changed color value, and sets the color value of the first pixel to the changed color value.
  14. The method according to claim 13, characterized in that the index value W of a color value (R, G, B) in the first color table is calculated as:
    W = R×33² + G×33¹ + B×33⁰.
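As a worked example of this formula, take an assumed grid-coordinate color (R, G, B) = (10, 20, 30), each component lying on the 0-32 grid of the 33*33*33 table: W = 10×33² + 20×33¹ + 30×33⁰ = 10890 + 660 + 30 = 11580, so the target color value for this input is the entry at index 11580 of the first color table.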
  15. The method according to any one of claims 11-14, characterized in that displaying any one of the N second video frames on the first interface specifically comprises:
    the GPU sends a first video memory address to the OpenGL, where the first video memory address is the video memory address at which the N second video frames are stored, and the OpenGL sends the first video memory address to the APP;
    the APP obtains the N second video frames according to the first video memory address;
    the APP displays, on the first interface, any one of the N second video frames obtained through the first video memory address.
  16. The method according to claim 15, characterized in that, in response to the second user operation, increasing or decreasing the number of the second video frames, and/or changing the number of pixels of one or more of the N second video frames, and/or changing the color values of the pixels of one or more of the N second video frames, to obtain M third video frames, specifically comprises:
    the APP sends the second user operation to the OpenGL;
    the OpenGL determines, according to the second user operation, the computation logic that implements the video display effect indicated by the second user operation;
    the OpenGL sends the computation logic to the GPU;
    the GPU, according to the computation logic, increases or decreases the number of the second video frames, and/or changes the number of pixels of one or more of the N second video frames, and/or changes the color values of the pixels of one or more of the N second video frames, to obtain the M third video frames, where the video composed of the M third video frames has the display effect indicated by the second user operation, the color coding format of the M third video frames is the RGB format, and the data type of the data representing color values is floating point.
  17. The method according to claim 16, characterized in that displaying any one of the M third video frames on the first interface specifically comprises:
    the GPU sends a second video memory address to the OpenGL, where the second video memory address is the video memory address at which the M third video frames are stored, and the OpenGL sends the second video memory address to the APP;
    the APP obtains the M third video frames according to the second video memory address;
    the APP displays, on the first interface, any one of the M third video frames obtained through the second video memory address.
  18. The method according to claim 17, characterized in that the electronic device further includes an encoder and a second memory, and saving the M third video frames as the second video specifically comprises:
    the APP sends the OpenGL a request to call the C2D engine;
    in response to the request, the OpenGL calls the C2D engine to obtain the M third video frames from the GPU, where the color coding format of the M third video frames obtained by the OpenGL is the YUV format and the data type of the data representing color values is integer;
    the OpenGL sends the M third video frames to the second memory;
    the encoder reads the M third video frames from the second memory and encapsulates the M third video frames into the second video.
  19. The method according to claim 18, characterized in that, before the APP sends the first video to the first memory, the method further comprises:
    the APP calls mediacodec to create the encoder, and the encoder requests the second memory;
    the OpenGL receives the memory identifier (ID) and/or address of the second memory sent by the APP, and determines, according to the ID and/or address, the second memory used to receive the video frames output by the OpenGL;
    the APP calls mediacodec to create the decoder, and the decoder requests the first memory.
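The mediacodec calls recited in claim 19 can be sketched with the real Android MediaCodec API as below; the MIME handling and the class name are assumptions of this sketch, and error handling is omitted. After configuration, encoder.createInputSurface() yields the surface that OpenGL renders into, which is what the claims call the second memory, and both codecs are then started with start().

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public final class CodecSetup {
    // Creates and configures the decoder for the first video's track format;
    // its input buffers play the role of the first memory.
    public static MediaCodec createDecoder(MediaFormat trackFormat, Surface output)
            throws IOException {
        String mime = trackFormat.getString(MediaFormat.KEY_MIME);
        MediaCodec decoder = MediaCodec.createDecoderByType(mime);
        decoder.configure(trackFormat, output, null, 0);
        return decoder;
    }

    // Creates and configures the encoder that will encapsulate the edited frames.
    public static MediaCodec createEncoder(MediaFormat encodeFormat) throws IOException {
        String mime = encodeFormat.getString(MediaFormat.KEY_MIME);
        MediaCodec encoder = MediaCodec.createEncoderByType(mime);
        encoder.configure(encodeFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }
}
```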
  20. An electronic device, characterized in that it includes one or more processors and one or more memories, where the one or more memories are coupled to the one or more processors and are used to store computer program code, the computer program code including computer instructions that, when executed by the one or more processors, cause the method according to any one of claims 1-19 to be performed.
  21. A computer program product containing instructions, characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1-19.
  22. A computer-readable storage medium including instructions, characterized in that, when the instructions run on an electronic device, the method according to any one of claims 1-19 is caused to be performed.
Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
PCT/CN2022/093042 | 2021-08-12 | 2022-05-16 | Video editing method and electronic device (WO2023016014A1)
CN202110927488.2 | 2021-08-12 | |
CN202110927488 | | |
CN202111329478.5 | 2021-11-10 | |
CN202111329478.5A (CN114222187B) | 2021-08-12 | 2021-11-10 | Video editing method and electronic equipment

Publications (1)

Publication Number
WO2023016014A1 (en)

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/CN2022/093042 (WO2023016014A1) | Video editing method and electronic device | 2021-08-12 | 2022-05-16

Country Status (2)

Country | Link
CN (1) | CN114222187B (en)
WO (1) | WO2023016014A1 (en)

Families Citing this family (1)

Publication number | Priority date | Publication date | Assignee | Title
CN114222187B (en) | 2021-08-12 | 2023-08-29 | 荣耀终端有限公司 | Video editing method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108769677A (en) * | 2018-05-31 | 2018-11-06 | 宁波大学 | A kind of high dynamic range video dynamic range scalable encoding based on perception
US20200265561A1 (en) * | 2019-02-19 | 2020-08-20 | Samsung Electronics Co., Ltd. | Electronic device for processing image and image processing method thereof
CN112581575A (en) * | 2020-12-05 | 2021-03-30 | 西安翔腾微电子科技有限公司 | Texture system for external video
CN113206971A (en) * | 2021-04-13 | 2021-08-03 | 聚好看科技股份有限公司 | Image processing method and display device
CN114222187A (en) * | 2021-08-12 | 2022-03-22 | 荣耀终端有限公司 | Video editing method and electronic equipment

Family Cites Families (21)

Publication number | Priority date | Publication date | Assignee | Title
US5243447A (en) | 1992-06-19 | 1993-09-07 | Intel Corporation | Enhanced single frame buffer display system
WO2002101646A2 (en) | 2001-06-08 | 2002-12-19 | University Of Southern California | High dynamic range image editing
CN1584935A (en) | 2003-08-19 | 2005-02-23 | 刘磊 | Method for realizing differential broadcasting content zoned broadcasting in displaying screen and displaying apparatus thereof
JP2006211095A (en) | 2005-01-26 | 2006-08-10 | Matsushita Electric Ind Co Ltd | High luminance compression circuit
US9973723B2 (en) | 2014-02-24 | 2018-05-15 | Apple Inc. | User interface and graphics composition with high dynamic range video
JP6439418B2 (en) | 2014-03-05 | 2018-12-19 | ソニー株式会社 | Image processing apparatus, image processing method, and image display apparatus
JP6386244B2 (en) | 2014-03-27 | 2018-09-05 | 株式会社メガチップス | Image processing apparatus and image processing method
CN104104897B (en) | 2014-06-27 | 2018-10-23 | 北京奇艺世纪科技有限公司 | A kind of video editing method and device of mobile terminal
CN104601971B (en) | 2014-12-31 | 2019-06-14 | 小米科技有限责任公司 | Color adjustment method and device
EP3107300A1 (en) | 2015-06-15 | 2016-12-21 | Thomson Licensing | Method and device for encoding both a high-dynamic range frame and an imposed low-dynamic range frame
EP3113496A1 (en) | 2015-06-30 | 2017-01-04 | Thomson Licensing | Method and device for encoding both a hdr picture and a sdr picture obtained from said hdr picture using color mapping functions
US10536695B2 (en) | 2015-09-09 | 2020-01-14 | Qualcomm Incorporated | Colour remapping information supplemental enhancement information message processing
CN105262957B (en) | 2015-09-23 | 2018-10-02 | 新奥特(北京)视频技术有限公司 | The treating method and apparatus of video image
US10101895B2 (en) | 2015-10-15 | 2018-10-16 | Lenovo (Singapore) Pte. Ltd. | Presentation of images on display based on user-specific color value(s)
CN106791865B (en) | 2017-01-20 | 2020-02-28 | 杭州当虹科技股份有限公司 | Self-adaptive format conversion method based on high dynamic range video
CN108053381B (en) | 2017-12-22 | 2022-04-22 | 深圳创维-Rgb电子有限公司 | Dynamic tone mapping method, mobile terminal and computer-readable storage medium
CN110149507B (en) | 2018-12-11 | 2021-05-28 | 腾讯科技(深圳)有限公司 | Video processing method, data processing apparatus, and storage medium
CN109660740A (en) | 2018-12-25 | 2019-04-19 | 成都索贝数码科技股份有限公司 | A kind of video editing method based on three code rates
CN110730339A (en) | 2019-11-05 | 2020-01-24 | 上海网仕科技有限公司 | SDR video signal processing method and device and video coding equipment
CN112087637A (en) | 2020-09-09 | 2020-12-15 | 中国电子科技集团公司第五十八研究所 | High-pixel bit depth video image data coding and decoding processing method
CN112261442B (en) | 2020-10-19 | 2022-11-11 | 上海网达软件股份有限公司 | Method and system for real-time transcoding of HDR and SDR video

Also Published As

Publication number | Publication date
CN114222187A (en) | 2022-03-22
CN114222187B (en) | 2023-08-29

Legal Events

Code | Title | Details
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22855001; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE