WO2023035882A1 - Video processing method, device, storage medium and program product - Google Patents
Video processing method, device, storage medium and program product
- Publication number
- WO2023035882A1 (PCT/CN2022/112858)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- window
- frame
- target
- displayed
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/36—Monitoring, i.e. supervising the progress of recording or reproducing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/22—Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
Definitions
- the present application relates to the field of computer technology, in particular to a video processing method, device, storage medium and program product.
- Video generally refers to various technologies that capture, record, process, store, transmit, and reproduce a series of still images in the form of electrical signals. When successive images change at more than a certain number of frames per second, the human eye can no longer distinguish individual still images and instead perceives a smooth, continuous visual effect; such a sequence of continuous images is called a video. In related technologies, in order to meet the visual requirements of different users, users may also be allowed to edit and process videos.
- the present application provides a video processing method, device, storage medium, and program product to help solve the problem in the prior art that users cannot intuitively see the differences between different filters or special effects applied to videos, which results in a poor user experience.
- the embodiment of the present application provides a video processing method applied to an electronic device, and the method includes:
- a first preview interface is displayed, and the first preview interface includes a preview frame; the target video is displayed in the preview frame; the target video is the video obtained by decoding the target video file;
- the second preview interface includes a preview frame, a first window, and a second window;
- the target video is displayed in the preview frame
- the first window displays the i-th frame video image of the first video
- the second window displays the i-th frame video image of the second video
- the The first video is a video rendered by using a first filter to render the first sampled video, which contains m frames of video images
- the second video is rendered by using a second filter to render the second sampled video
- Video which comprises m frames of video images
- the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file, and i is greater than 0, and an integer less than m; m is an integer greater than 1;
- the target video is displayed in the preview frame
- the first window displays the i+1th frame video image of the first video
- the second window displays the i+1th frame video image of the second video.
- the decoded video of the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the differences between different filters applied to the decoded video, which makes it convenient for users to choose the desired editing type and improves the user experience.
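The lockstep display described above (both windows advance from frame i to frame i+1 together) can be sketched as follows. This is only an illustrative model, not the patent's implementation; the frame labels, `make_filter`, and `preview_frames` names are assumptions.

```python
# Minimal sketch of the two-window lockstep preview: one sampled video,
# two filters, and at each display moment both windows show frame i.

def make_filter(tag):
    """Toy 'filter' that tags each frame; a real filter would remap RGB values."""
    return lambda frame: f"{tag}({frame})"

def preview_frames(sampled_video, filter_1, filter_2):
    """Render the sampled video with two filters and yield, per moment,
    the pair of frames shown by the first and second windows."""
    first_video = [filter_1(f) for f in sampled_video]   # m frames
    second_video = [filter_2(f) for f in sampled_video]  # m frames
    for i in range(len(sampled_video)):                  # moment i -> frame i in both windows
        yield first_video[i], second_video[i]

sampled = ["frame0", "frame1", "frame2"]
pairs = list(preview_frames(sampled, make_filter("f1"), make_filter("f2")))
# at moment 1 both windows show their frame 1: ("f1(frame1)", "f2(frame1)")
```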
- the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file, including:
- the target video file is decoded once to obtain a third video, and m frames of video images are sampled in the third video to form a first sampling video and a second sampling video respectively.
- the electronic device only needs to decode the target video file once to obtain the third video, without decoding the target video file once for each filter type, which avoids the redundant overhead of repeated decoding, improves the processing speed of the electronic device, and reduces resource usage.
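A sketch of the decode-once strategy under stated assumptions: `decode()` is a stand-in for a real video decoder, and the call counter only demonstrates that a single decode feeds both sampled videos.

```python
# Decode-once sketch: the target video file is decoded a single time into the
# third video, and both sampled videos are formed from that one decode.

decode_calls = 0

def decode(video_file):
    """Pretend decoder (assumption): returns one labeled frame per source frame."""
    global decode_calls
    decode_calls += 1
    return [f"frame{i}" for i in range(video_file["n_frames"])]

def build_sampled_videos_once(video_file, m):
    third_video = decode(video_file)        # decoded exactly once
    sampled = third_video[:m]               # sample m frames (uniform sampling also works)
    return list(sampled), list(sampled)     # first and second sampled videos share the frames

first, second = build_sampled_videos_once({"n_frames": 30}, m=10)
# decode_calls == 1: one decode serves both filter pipelines
```

The alternative described later in this document (decoding once per filter) would simply call `decode()` once for each sampled video, doubling the decode cost.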
- the value of m is smaller than the number of frames of video images included in the third video.
- sampling m frames of video images in the third video to respectively form the first sampling video and the second sampling video includes:
- m frames of video images are sampled to form a first sampling video and a second sampling video in a manner of sampling 1 frame of video images in every 3 frames of video images.
- one frame of video image can be sampled from every three frames of video images in the third video, so that m frames of video images are sampled to form the first sampling video and the second sampling video, which reduces the resource consumption of the electronic device and improves its processing speed without affecting the user's viewing experience.
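The 1-in-3 sampling rule above reduces to keeping the first frame of every group of three. A minimal sketch (frame labels are illustrative):

```python
# Sample 1 frame out of every 3 frames of the third video.

def sample_one_in_three(third_video):
    return third_video[::3]   # extended slice: every 3rd frame, starting at frame 0

third_video = [f"frame{i}" for i in range(9)]
m_frames = sample_one_in_three(third_video)   # frames 0, 3, 6 -> m == 3
```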
- the resolutions of the first video and the second video are smaller than the resolution of the target video.
- the frame rates of the first video displayed in the first window and the second video displayed in the second window are lower than the frame rate of the target video displayed in the preview frame.
- because the display sizes of the first window and the second window are smaller than the display size of the preview frame, reducing the frame rate of the video images displayed in these windows prevents playback in a small window from appearing too fast, which would make it difficult for users to clearly watch the first video displayed in the first window and the second video displayed in the second window; in addition, adjusting the frame rate at which the first window displays the first video and the second window displays the second video can reduce the resource consumption of the electronic device and improve its processing speed.
- the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file, including:
- the target video file is decoded twice to obtain two third videos; m frames of video images are sampled from one third video to form the first sampling video, and m frames of video images are sampled from the other third video to form the second sampling video.
- the electronic device can decode the target video file once for each type of filter to obtain the third video, which is easy to implement.
- the second preview interface further includes a progress display box whose display size is smaller than that of the preview box; the progress display box displays video images of a fourth video, and the fourth video contains the same video images as the target video.
- the user can adjust the video image displayed in the preview frame by adjusting the video image in the progress display frame, which facilitates the user to adjust the video image displayed in the preview frame and improves the editing experience of the user.
- the resolution of the fourth video is smaller than the resolution of the target video.
- the display sizes of the first window and the second window are the same.
- the display sizes of the first window and the second window displayed in the preview interface may be set to be the same size.
- the display size of the first window and the second window is smaller than the display size of the preview frame.
- the display size of the first window and the second window is smaller than the display size of the preview frame, which can reduce the possibility of affecting the display effect of the preview frame due to the large display size of the first window and the second window.
- the displaying the first video in the first window includes: displaying the first video in a cycle in the first window;
- the displaying the second video in the second window includes: displaying the second video in a cycle in the second window.
- playing the first video and the second video in a loop lets the user watch the first video displayed in the first window and the second video displayed in the second window more clearly, and ensures that the user can watch them at any time, which improves the user experience.
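Looped playback in a window amounts to wrapping the display index with a modulo over the m rendered frames. A hedged sketch (the `looped_frame` helper and frame values are illustrative assumptions, not from the patent):

```python
# Looped playback: the window's frame index wraps so the m-frame
# filtered video repeats indefinitely.

def looped_frame(video, tick):
    """Return the frame a window shows at display tick `tick` when the
    video is played in a cycle."""
    return video[tick % len(video)]

first_video = ["a", "b", "c"]    # m == 3 rendered frames (illustrative)
shown = [looped_frame(first_video, t) for t in range(7)]
# shown cycles: a, b, c, a, b, c, a
```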
- the above method further includes:
- the second operation is used to indicate the target filter selected by the user
- the third preview interface includes a preview frame, a first window, and a second window;
- a fifth video is displayed in the preview frame, the first window displays the first video, and the second window displays the second video; the fifth video is a video obtained by rendering the target video with the target filter.
- the user can select the target filter, and the target video rendered with the target filter is displayed in the preview frame, so that the user can watch, in the larger preview frame, the target video rendered with the selected target filter, which improves the user experience.
- an embodiment of the present application provides an electronic device, including a memory for storing computer program instructions and a processor for executing the program instructions, wherein when the computer program instructions are executed by the processor, the electronic device is triggered to execute the method described in any one of the first aspect.
- an embodiment of the present application provides a computer-readable storage medium, the computer-readable storage medium includes a stored program, wherein when the program runs, the device where the computer-readable storage medium is located is controlled to execute the method of any one of the first aspect.
- an embodiment of the present application provides a computer program product, the computer program product includes executable instructions, and when the executable instructions are executed on a computer, the computer executes the method described in any one of the first aspect.
- the m frames of video images obtained after decoding the target video file can be used as the first sampled video and the second sampled video; the first sampled video is rendered with the first filter to obtain the first video, and the second sampled video is rendered with the second filter to obtain the second video; the first video is displayed in the first window and the second video in the second window. At the first moment, the first window displays the i-th frame video image of the first video and the second window displays the i-th frame video image of the second video; at the second moment, the first window displays the i+1th frame video image of the first video and the second window displays the i+1th frame video image of the second video.
- the decoded video of the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the differences between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
- FIG. 1 is an example diagram of different filter rendering effects provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a video processing scene provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of another video processing scene provided by an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a video processing method provided in an embodiment of the present application.
- FIG. 5 is a schematic diagram of another video processing scene provided by the embodiment of the present application.
- FIG. 6 is a schematic flow diagram of another video processing method provided in the embodiment of the present application.
- FIG. 7a is a schematic diagram of another video processing scene provided by the embodiment of the present application.
- FIG. 7b is a schematic diagram of another video processing scene provided by the embodiment of the present application.
- FIG. 8 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
- FIG. 9 is a schematic diagram of another video processing scenario provided by the embodiment of the present application.
- FIG. 10 is a schematic flowchart of another video processing method provided in the embodiment of the present application.
- FIG. 11 is a schematic diagram of another video processing scenario provided by the embodiment of the present application.
- FIG. 12 is a schematic flowchart of another video processing method provided in the embodiment of the present application.
- FIG. 13 is a schematic diagram of another video processing scenario provided by the embodiment of the present application.
- FIG. 14 is a software structural block diagram of an electronic device provided by an embodiment of the present application.
- FIG. 15 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
- FIG. 16 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
- FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- UX (user experience): refers to the user's feeling when using the electronic device during the shooting process.
- filter: mainly used to achieve various special effects on an image. Filters generally adjust the relevant data of the image, including pixel values, brightness, saturation, contrast, and so on, to achieve a better look and feel.
- the pixels in the original image are represented by RGB (red, green, blue) values, and a filter replaces the RGB values of the pixels in the original image with new RGB values, so that the image processed by the filter has a special effect; images processed with different styles of filters have different effects.
- filter styles include black-and-white and nostalgia for image tone adjustment, soft focus for focus adjustment, and watercolor, pencil, ink, oil painting, etc. for image style adjustment; some filter styles, such as fresh, Japanese, scenery, and food, can also be customized by users or professionals.
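The per-pixel RGB replacement described above can be sketched with a black-and-white (grayscale) filter as the example style. This is a hedged illustration, not the patent's filter: the luminance weights are the common ITU-R BT.601 coefficients, and the function name is an assumption.

```python
# Black-and-white filter sketch: replace each pixel's (R, G, B) values
# with a gray value computed from its luminance.

def black_and_white(pixels):
    out = []
    for r, g, b in pixels:
        y = int(0.299 * r + 0.587 * g + 0.114 * b)   # BT.601 luma weights
        out.append((y, y, y))                         # new RGB: equal channels -> gray
    return out

image = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]       # one red, green, blue pixel
filtered = black_and_white(image)
# red -> (76, 76, 76), green -> (149, 149, 149), blue -> (29, 29, 29)
```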
- filter 1, filter 2 and filter 3 are three different filters.
- rendering an image with filter 1 produces the image 101 shown in FIG. 1; rendering it with filter 2 produces the image 102 shown in FIG. 1; rendering it with filter 3 produces the image 103 shown in FIG. 1. Comparing the image 101, the image 102, and the image 103 shown in FIG. 1 shows that they have different image effects or styles.
- the electronic devices involved in the embodiments of the present application may also be tablet computers, personal computers (PCs), personal digital assistants (PDAs), smart watches, netbooks, wearable electronic devices, augmented reality (AR) devices, virtual reality (VR) devices, in-vehicle devices, smart cars, smart speakers, robots, smart glasses, smart TVs, etc.
- an electronic device may also be called a terminal device, a user equipment (User Equipment, UE), etc., which is not limited in this embodiment of the present application.
- in the following, a mobile phone is taken as an example of the electronic device for illustration.
- the display interface of the mobile phone shows the home screen interface of the mobile phone, as shown in (1) in FIG. 2.
- the mobile phone displays the interface 202 shown in (2) in FIG. 2 .
- the interface 202 includes the target video 203, images and other videos shot by the mobile phone.
- the mobile phone displays the interface 204 shown in (3) in FIG. 2 .
- the interface 204 is the playback interface of the target video 203 .
- An edit control 205 is included in the interface 204 .
- in response to the user's operation on the editing control 205, the mobile phone displays the interface 206 shown in (4) in FIG. 2.
- the interface 206 is the editing interface of the target video 203 , and the mobile phone enters the editing interface of the target video 203 through the user operating the editing control 205 , for editing the target video 203 .
- the interface 206 includes a preview frame 207 , and the target video 203 is displayed in the preview frame 207 .
- Also included within interface 206 is a filter control 208 . If the user wants to add a filter effect to the target video 203, the filter control 208 can be operated. In response to the user operating the filter control 208, the mobile phone displays an interface 301 as shown in FIG. 3 .
- the interface 301 includes a preview frame 302 , a first window 303 and a second window 304 .
- the target video 203 is displayed in the preview frame 302
- the first video image is displayed in the first window 303
- the second video image is displayed in the second window 304
- the first video image is the image obtained by rendering the first frame video image of the target video 203 with filter 1
- the second video image is the image obtained by rendering the first frame video image of the target video 203 with filter 2.
- a display window is correspondingly set for each type of filter in the interface 301 , and the image rendered by the corresponding filter is displayed in the display window.
- the embodiment of the present application does not limit the number of filter types included in the mobile phone.
- the images displayed in the first window 303 and the second window 304 are only pictures of the filter effects of different filter types on one frame of video image, rather than the filter effects on multiple frames of video images of the target video.
- from the single frame of image displayed in the first window and the second window, it is impossible to determine the overall filter effect of a certain filter applied to the target video. If the user needs to watch the overall filter effect, the filter type must be applied to the target video, and only one filter type's overall effect can be viewed at a time; it is not possible to view the overall filter effects of multiple filter types applied to the target video at the same time.
- the above method makes it impossible for the user to intuitively see the difference between different filters or special effects applied to the video, which is inconvenient for the user to choose and reduces the user experience.
- a new video processing method is proposed.
- m frames of video images obtained after decoding the target video file can be used as the first sampling video and the second sampling video.
- the first sampled video is rendered with the first filter to obtain the first video, and the second sampled video is rendered with the second filter to obtain the second video; the first video is displayed in the first window and the second video in the second window. At the first moment, the first window displays the i-th frame video image of the first video and the second window displays the i-th frame video image of the second video; at the second moment, the first window displays the i+1th frame video image of the first video and the second window displays the i+1th frame video image of the second video.
- the decoded video of the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the differences between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
- FIG. 4 is a schematic flowchart of a video processing method provided by an embodiment of the present application.
- the method is applied in electronic equipment. As shown in Figure 4, the method includes:
- Step S401 receiving an editing operation of a target video.
- a filter effect can be added to the target video.
- for example, when the video content is a person, in order to beautify the captured video content, a portrait-blur filter effect can be superimposed on the captured video so that the captured person is highlighted.
- for another example, when the video content is person A singing, a dynamic strobe filter effect can be added to the captured video content to achieve the effect of simulating a concert.
- a user looks for thumbnails of videos and captured images stored in a gallery application of an electronic device.
- the video selected by the user is determined as the target video.
- the electronic device can find the corresponding target video file for the thumbnail, and decode it to obtain the desired target video.
- the user can send an editing operation for the target video to the electronic device, and at this time, the electronic device receives the editing operation of the target video.
- Step S402 displaying a first preview interface in response to the editing operation of the target video.
- the first preview interface includes a preview frame.
- a target video is displayed in the preview frame, and the target video is a video obtained by decoding the target video file.
- after receiving the editing operation of the target video, the mobile phone displays a preview interface for editing the target video, which is the first preview interface.
- a preview frame is included in the first preview interface, and the target video is displayed in the preview frame.
- a video is a continuous image sequence consisting of consecutive frames of video images, and one frame of video image is one image. Due to the persistence-of-vision effect of the human eye, when the frames of the sequence are played at a certain rate, what the user sees is a continuous video. Because consecutive frames of video images are highly similar, in order to facilitate storage and transmission, electronic devices can encode the original video to obtain a video file, removing redundancy in the spatial and temporal dimensions and reducing the storage space occupied by the video. Therefore, when a video needs to be played, the electronic device decodes the video file to obtain the desired video.
- Step S403 receiving a first operation on the first preview interface.
- the first operation is an operation of starting a filter function.
- after the electronic device displays the first preview interface, the first preview interface also includes a filter control.
- when the user needs to add a filter effect to the target video, he can operate the filter control to send the first operation.
- the electronic device receives a first operation on the first preview interface.
- Step S404 displaying a second preview interface in response to the first operation.
- the second preview interface includes a preview frame, a first window and a second window.
- the target video is displayed in the preview frame
- the first window displays the i-th frame video image of the first video
- the second window displays the i-th frame video image of the second video
- the first video is a video obtained by using the first filter to render the first sampled video, and it contains m frames of video images
- the second video is a video obtained by using a second filter to render the second sampled video, which contains m frames of video images.
- Both the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file, i is an integer greater than 0 and less than m; m is an integer greater than 1.
- the target video is displayed in the preview frame
- the first window displays the i+1th frame of video image of the first video
- the second window displays the i+1th frame of video image of the second video.
- when the user edits the target video, he enters the editing preview interface, which is the first preview interface. If he wants to add a filter effect to the target video, the electronic device receives the first operation, activates the filter function, and displays the second preview interface on the display.
- the second preview interface includes a preview frame, a first window and a second window.
- the video images displayed in the first window and the second window are video images from the videos obtained by sampling the video decoded from the target video file and then rendering the sampled videos with the filters.
- the first window displays the video images in the first video
- the second window displays the video images in the second video
- both the first video and the second video contain at least two frames of video images, so the first window and the second window each display at least two frames of video images. That is, at the first moment, the target video is displayed in the preview frame, the first window displays the i-th frame of the first video, and the second window displays the i-th frame of the second video.
- the first video is a video rendered by using the first filter to render the first sampling video, and the first video includes m frames of video images.
- the second video is a video obtained by rendering the second sampling video by using the second filter, and the second video includes m frames of video images.
- Both the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file. i is an integer greater than 0 and less than m.
- the target video is displayed in the preview frame
- the first window displays the i+1th frame of video image of the first video
- the second window displays the i+1th frame of video image of the second video. That is, the first video is displayed in the first window, and the second video is displayed in the second window.
- the first window displays the first video
- the second window displays the second video.
- the electronic device sequentially displays each frame of video image of the first video in the first window according to the frame sequence of the video images in the first video, and sequentially displays each frame of video image of the second video in the second window according to the frame sequence of the video images in the second video.
- the electronic device may contain one, two or more types of filters, and the number of windows in the preview interface of the electronic device that display filter-rendered video images is the same as the number of filter types contained in the electronic device.
- Each window corresponds to a filter type.
- the video displayed in each window is the video obtained by performing filter rendering processing on the sampled video according to the filter type corresponding to the window.
- the filter rendering effect of the video displayed in different windows is different.
- Each window only displays the video after one filter rendering effect.
- an electronic device including the first filter and the second filter is taken as an example for illustration.
- there are two windows displayed in the preview interface of the electronic device, namely a first window and a second window; the first video, obtained by rendering the first sampled video with the first filter, is displayed through the first window, and the second video, obtained by rendering the second sampled video with the second filter, is displayed through the second window.
- the electronic device is a mobile phone, and the mobile phone includes two types of filters as an example for illustration.
- when the user needs to edit the target video, he can enter the editing interface of the target video, as shown in FIG. 2.
- the editing interface includes a filter control 208; if the user needs to add a filter effect to the target video, then in response to the user operating the filter control 208, the mobile phone displays an interface 501 as shown in (1) in FIG. 5.
- the interface 501 includes a preview frame 502, a first window 503, and a second window 504.
- the target video 203 is displayed in the preview frame 502 , the first video is displayed in the first window 503 , and the second video is displayed in the second window 504 .
- the first video is a video rendered by using a first filter for the first sampled video
- the second video is a video rendered by using a second filter for the second sampled video.
- Both the first sampled video and the second sampled video are videos formed by sampling m frames of video images from a video obtained after decoding the target video file.
- the interface 501 shown in (1) includes a preview frame 502 , a first window 503 and a second window 504 .
- the first window 503 in the interface 501 displays the video image 505 of the first frame of the first video
- the second window 504 displays the video image 506 of the first frame of the second video.
- the interface 507 includes a preview frame 502, a first window 503, and a second window 504.
- the first window 503 displays the video image 508 of the second frame of the first video
- the second window 504 displays the video image 509 of the second frame of the second video.
- the interface 510 includes a preview frame 502, a first window 503, and a second window 504.
- the first window 503 displays the video image 511 of the third frame of the first video
- the second window 504 displays the video image 512 of the third frame of the second video.
- the mobile phone contains two types of filters, so the interface 501, the interface 507, and the interface 510 all include the first window 503 and the second window 504; the first window 503 correspondingly displays the video rendered by the first filter, and the second window 504 correspondingly displays the video rendered by the second filter. If three or more types of filters are included in the mobile phone, then the interface 501, the interface 507, and the interface 510 include a corresponding number of windows, with a video rendered by one filter displayed in each window; each window only displays the video processed by one filter.
- the display size of the first window is the same as that of the second window.
- the display sizes of the first window and the second window displayed in the preview interface can be set to be the same size.
- the display sizes of the first window and the second window are smaller than the display size of the preview frame.
- the displaying the first video in the first window includes: displaying the first video in a loop in the first window.
- Displaying the second video in the second window includes: displaying the second video in a loop on the second window.
- since the first video and the second video each contain m frames of video images, after the m-th frame of the first video has been displayed in the first window, the first video can be redisplayed, that is, the first frame to the m-th frame of the first video are displayed again in the first window according to the time sequence of the video images in the first video.
- similarly, after the m-th frame of the second video has been displayed, the second video can be redisplayed, that is, the second video is displayed again in the second window according to the timing of each frame of video image in the second video.
- the first window cyclically displays the first video and the second window cyclically displays the second video, which makes it convenient for the user to watch, at any time, the first video displayed in the first window and the second video displayed in the second window, improving the user experience.
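The cyclic display described above can be sketched as a simple wrap-around over the frame index. This is an illustrative sketch only, assuming the window is refreshed once per display "tick"; the function name is hypothetical, not from the source.

```python
# Minimal sketch of cyclic display: after the m-th frame, the window
# starts again from the first frame (index 0).
def looped_frame_index(tick, m):
    """0-based index of the frame shown in the window at a given tick."""
    return tick % m

m = 4  # the video shown in the window contains m frames
shown = [looped_frame_index(t, m) for t in range(10)]
print(shown)  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
```

With m = 4, the window shows frames 1-4 and then wraps back to frame 1, matching the looping behaviour of the first and second windows.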
- the interface 510 shown in (3) in FIG. 5 includes a preview frame 502 , a first window 503 , and a second window 504 .
- the first window 503 displays the video image 505 of the first frame of the first video
- the second window 504 displays the video image 506 of the first frame of the second video.
- both the first sampled video and the second sampled video being videos formed by sampling m frames of video images from the decoded video of the target video file includes: decoding the target video file once to obtain a third video, and sampling m frames of video images from the third video to respectively form the first sampled video and the second sampled video.
- both the first sampled video and the second sampled video are sampled and obtained from the decoded video of the target video file.
- the electronic device may decode the target video file once to obtain a decoded third video and a frame sequence of each frame of video images included in the third video.
- the electronic device samples m frames of video images according to the frame sequence of the video images, and uses the sampled m frames of video images as the video images in the first sampling video and the second sampling video to form the first sampling video and the second sampling video.
- the electronic device renders the first sampled video using a first filter to form a first video, and displays the first video in a first window in a loop. Render the second sampled video with a second filter to form a second video, and display the second video in a loop in the second window, as shown in FIG. 6 .
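The single-decode flow above can be sketched as follows. This is a hedged illustration: the decode, sample, and filter functions are stand-ins for a real codec and renderer, and all names and values are hypothetical.

```python
# Sketch: decode the target video file once into a third video, sample it
# once, and render the same sampled frames with each filter for its own
# window. Frames are represented here as plain integers.
def decode(video_file):
    return list(video_file)  # stand-in for real decoding

def sample(frames, step):
    return frames[::step]    # keep one frame out of every `step` frames

def render(frames, filter_fn):
    return [filter_fn(f) for f in frames]  # stand-in for filter rendering

target_file = [10, 20, 30, 40, 50, 60]
third_video = decode(target_file)                # decoded once
sampled = sample(third_video, 2)                 # shared sampled video
first_video = render(sampled, lambda f: f + 1)   # "first filter"
second_video = render(sampled, lambda f: f * 2)  # "second filter"
print(first_video, second_video)  # [11, 31, 51] [20, 60, 100]
```

The key point the sketch shows is that decoding and sampling happen once, and only the cheap per-filter rendering step is duplicated per window.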
- the third video is a video directly decoded from the target video file, which is the same as the target video.
- the electronic device may directly use the third video as the first sampled video and the second sampled video; at this time, the value of m is the number of frames of video images included in the third video.
- the electronic device can directly use the third video as the first sampled video and the second sampled video, use the first filter to render the first sampled video and display the result in the first window, and use the second filter to render the second sampled video and display the result in the second window.
- the electronic device samples m frames of video images in the third video, where the value of m is smaller than the number of frames of video images included in the third video.
- the m frames of sampled video images are used as video images in the first sampled video and the second sampled video to form the first sampled video and the second sampled video.
- the electronic device may sample one frame of video image in every n frames of video images according to the frame sequence of the video images contained in the third video, and sample m frames of video images from the third video in this way, thus forming the first sampled video and the second sampled video; it then uses the first filter to render the first sampled video and displays the result in the first window, and uses the second filter to render the second sampled video and displays the result in the second window.
- sampling m frames of video images in the third video to respectively form the first sampling video and the second sampling video includes: in the third video, sampling 1 frame of video images in every 3 frames of video images, The m frames of video images are sampled to form a first sampled video and a second sampled video respectively.
- the electronic device can sample one frame of video images in every 3 frames of video images according to the frame sequence of the video images contained in the third video, and sample m frames of video images in the third video in this way , so as to form the first sampled video and the second sampled video.
- the first window displays the first frame, the fourth frame, the seventh frame and the other sampled video images of the third video rendered by the first filter.
- the second window displays the first frame, the fourth frame, the seventh frame and the other sampled video images of the third video rendered by the second filter.
- the third video and the target video are the same. Assuming that the third video and the target video contain 10 frames of video images, one frame of video image is extracted from every 3 frames of the third video to form the first sampled video and the second sampled video. That is, both the first sampled video and the second sampled video include the first frame, the fourth frame, the seventh frame and the tenth frame of video image of the third video.
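The 10-frame example above can be verified with a short sketch; frames are represented only by their numbers, and the function name is hypothetical.

```python
# Sample one frame out of every 3 frames of the third video: for a
# 10-frame video numbered 1..10, this keeps frames 1, 4, 7 and 10.
def sample_every_n(frames, n=3):
    return frames[::n]

third_video = list(range(1, 11))  # frame numbers 1..10
sampled = sample_every_n(third_video)
print(sampled)  # [1, 4, 7, 10]
```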
- the first sampled video is rendered by using the first filter to obtain the first video
- the second sampled video is rendered by the second filter to obtain the second video. Display the first video in the first window, and display the second video in the second window.
- the mobile phone displays an interface 701 as shown in (1) in Figure 7a.
- the interface 701 includes a preview frame 702 .
- a target video is displayed in the preview frame 702 .
- the interface 701 also includes a filter control 703 . If the user needs to add a filter effect to the target video, the filter control 703 can be operated.
- the mobile phone displays an interface 704 as shown in (2) in FIG. 7a.
- the interface 704 includes a preview frame 702 , a first window 705 , a second window 706 , and a playback control 707 .
- the target video is displayed in the preview frame 702 , and at this moment, only the first frame video image 708 of the target video is displayed in the preview frame 702 .
- the preview frame 702 only displays the first frame of video image 708 of the target video and does not display other video images of the target video; for this situation, reference can be made to FIG. 5 above, and details are not repeated here.
- the first frame of video image 709 of the first video is displayed in the first window 705
- the first frame of video image 710 of the second video is displayed in the second window 706 .
- the mobile phone displays an interface 711 as shown in (3) in FIG. 7a.
- an interface 715 as shown in (4) in FIG. 7 a includes a preview frame 702 , a first window 705 and a second window 706 .
- the 3rd frame video image 716 of the target video is displayed in the preview frame 702
- the 3rd frame video image 717 of the first video is displayed in the first window 705
- the 3rd frame video image 718 of the second video is displayed in the second window 706 .
- an interface 723 as shown in (2) in FIG. 7 b includes a preview frame 702 , a first window 705 and a second window 706 .
- an interface 725 as shown in (3) in FIG. 7 b includes a preview frame 702 , a first window 705 and a second window 706 .
- the 6th frame video image 726 of the target video is displayed in the preview frame 702
- the first frame video image 713 of the first video is displayed in the first window 705
- the first frame video image 714 of the second video is displayed in the second window 706 . That is, in the mobile phone, the first video is cyclically displayed in the first window 705 , and the second video is cyclically displayed in the second window 706 .
- the target video shown in the preview frame is the video that has not been rendered with a filter.
- the resolutions of the first video and the second video are smaller than the resolution of the target video.
- the sizes of the first window and the second window in the second preview interface are both smaller than the size of the preview frame.
- the similarity of video image content between two adjacent frames is extremely high, and the display space of the first window and the second window is small, so the resolution of the videos displayed in these windows can be reduced; that is, the resolutions of the first video and the second video can be reduced, which reduces the display details of the first video and the second video.
- the electronic device can adjust the resolutions of the first video and the second video according to the sizes of the first window and the second window. For example, the resolution of the target video displayed in the preview frame is 1080*720. The electronic device can adjust the resolutions of the first video displayed in the first window and the second video displayed in the second window to 325*288.
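Reducing resolution for the small windows can be sketched with a simple nearest-neighbour scaler. This is a hedged stand-in for whatever scaler the device actually uses; the function name and the tiny "frame" are hypothetical, and a frame is modelled as a row-major grid of pixel values.

```python
# Nearest-neighbour downscale: map each target pixel back to a source
# pixel, discarding detail the small window cannot show anyway.
def downscale(frame, new_w, new_h):
    old_h, old_w = len(frame), len(frame[0])
    return [
        [frame[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

frame = [[x + 10 * y for x in range(8)] for y in range(4)]  # 8x4 "pixels"
small = downscale(frame, 4, 2)  # half the width and height
print(small)  # [[0, 2, 4, 6], [20, 22, 24, 26]]
```

The same idea applies to shrinking a 1080*720 target video down to the window resolution: each output pixel simply samples one input pixel.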
- the mobile phone displays an interface 501 as shown in (1) in FIG. 5.
- the target video 203 is displayed in the preview frame 502
- the first video is displayed in the first window 503
- the second video is displayed in the second window 504 .
- the electronic device may first reduce the resolutions of the first video and the second video, and then display the reduced-resolution first video in the first window 503 and the reduced-resolution second video in the second window 504 .
- the value to which the electronic device adjusts the resolution may be preset according to actual needs, which is not limited in this application.
- the resolution of the video is the resolution of the video image contained therein, and the resolution of the video image is the width and height pixel values of the video image.
- Video image resolution is a measure of the amount of data within a video image, usually expressed in pixels per inch.
- the resolution of video image A is 3200*180, which refers to its effective pixels in the horizontal and vertical directions.
- the electronic device can reduce the resolution of the first video and the second video by reducing the effective pixels of the video images in the first video and the second video.
- the electronic device can adjust the resolutions of the first video and the second video by adjusting the resolutions of the first sampled video and the second sampled video.
- the resolutions of the first video and the second video may also be adjusted by directly adjusting the resolution of the third video, as shown in FIG. 8, or by directly adjusting the resolutions of the first video and the second video themselves, which is not limited in this application.
- the frame rate of the first video displayed in the first window and the frame rate of the second video displayed in the second window are the same as the frame rate of the target video displayed in the preview frame.
- the frame rate of the target video displayed in the preview frame can be set equal to the frame rates of the first video displayed in the first window and the second video displayed in the second window. That is, the number of frames of the target video displayed per second in the preview frame is equal to the number of frames of the first video displayed per second in the first window and the number of frames of the second video displayed per second in the second window.
- the preview box displays 30 frames of video images per second
- the first window also displays 30 frames of first video images per second
- the second window also displays 30 frames of the second video's video images per second. That is to say, the image refresh frequency in the preview frame is the same as the image refresh frequency in the first window and the second window, as shown in FIG. 7a and FIG. 7b.
- the electronic device can render the first sampled video through the first filter to form the first video, and display it in the first window.
- the second sampled video is rendered and processed by the second filter to form the second video, and the second video is displayed in the second window, which is simple to implement.
- the display sizes of the first window and the second window are smaller than the display size of the preview frame; if the frame rates of the first video and the second video are the same as the frame rate of the target video, the videos play quickly while the display sizes of the first window and the second window are relatively small, making it difficult for the user to clearly watch the first video displayed in the first window and the second video displayed in the second window.
- therefore, the frame rate of the first video displayed in the first window and the frame rate of the second video displayed in the second window may be reduced, that is, the number of frames of the first video's video images displayed per second in the first window and the number of frames of the second video's video images displayed per second in the second window may be reduced.
- the frame rate of the target video displayed in the preview frame is three times the frame rate of the first video displayed in the first window and the second video displayed in the second window.
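Under the 3:1 ratio stated above, the small windows advance one frame for every three preview frames. The sketch below is illustrative only; the tick model and function name are assumptions, not from the source.

```python
# Sketch of the 3:1 frame-rate relationship: the preview refreshes on
# every display tick, while the first and second windows only advance
# one frame for every three preview frames.
PREVIEW_FPS = 30
RATIO = 3
WINDOW_FPS = PREVIEW_FPS // RATIO  # 10 window frames per second

def window_frame_index(preview_tick, ratio=RATIO):
    """0-based frame index shown in the small windows at a preview tick."""
    return preview_tick // ratio

shown = [window_frame_index(t) for t in range(9)]
print(shown)  # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

Each window frame is thus held on screen for three preview refreshes, slowing the apparent playback in the small windows without touching the preview frame rate.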
- the mobile phone displays an interface 901 as shown in (1) in FIG. 9 .
- the interface 901 includes a preview frame 902 , a first window 903 and a second window 904 .
- the first frame video image 905 of the target video is displayed in the preview frame 902, the first frame video image 906 of the first video is displayed in the first window 903, and the first frame video image 907 of the second video is displayed in the second window 904.
- the mobile phone displays an interface 908 as shown in (2) in FIG. 9 .
- the interface 908 includes a preview frame 902 , a first window 903 and a second window 904 .
- the mobile phone displays an interface 910 as shown in (3) in FIG. 9 .
- the interface 910 includes a preview frame 902 , a first window 903 and a second window 904 .
- the mobile phone displays an interface 912 as shown in (4) in FIG. 9 .
- the interface 912 includes a preview frame 902 , a first window 903 and a second window 904 .
- the first sampled video and the second sampled video being videos formed by sampling m frames of video images from the decoded video of the target video file may also include: decoding the target video file once for each filter type to obtain multiple third videos; sampling m frames of video images from each third video to form a corresponding sampled video; and using each type of filter to render its sampled video, obtaining multiple videos that are displayed in the corresponding windows respectively.
- the electronic device can decode the target video file twice to obtain two third videos, and sample m frames of video images in one third video to form the first sampling video , and sample m frames of video images in another third video to form a second sampled video.
- the electronic device renders the first sampled video by using a first filter to obtain the first video, and displays it in the first window.
- the second sampled video is rendered with a second filter to obtain a second video, which is displayed in a second window, as shown in FIG. 10 .
- the second preview interface further includes a progress display frame whose display size is smaller than that of the preview frame, and the video images in the fourth video are displayed in the progress display frame.
- the fourth video is the same as the target video
- the progress display frame contains a progress control for controlling which video image of the target video is displayed in the preview frame
- the video image of the fourth video that the progress control corresponds to in the progress display frame is the video image of the target video displayed in the preview frame.
- the second preview interface further includes a progress display box.
- video images of the fourth video are displayed in the progress display frame. Since only the target video is displayed in the preview frame, the user cannot directly control the playback content of the target video there; for this reason, a progress display frame is added. The user can adjust the video image of the target video displayed in the preview frame by adjusting the video image of the fourth video that the progress control in the progress display frame corresponds to.
- the display size of the progress display frame is smaller than the display size of the preview frame.
- the resolution of the fourth video is smaller than the resolution of the target video.
- based on the fact that the electronic device displays the video images in a video frame by frame, the content similarity between two adjacent frames is extremely high, and the display space of the progress display box is small, so the resolution of the video displayed in the progress display box can be reduced; that is, the resolution of the fourth video can be reduced, thereby reducing its display details. In view of the small display space of the progress display box, even if the resolution of the fourth video is reduced, there is almost no difference in user experience, while the resource consumption of the electronic device can be reduced and its processing speed improved. Therefore, the electronic device can adjust the resolution of the fourth video according to the display space of the progress display frame.
- the display sizes of the first window, the second window and the progress display frame are the same, which can make the display interface tidy and provide users with better visual effects.
- the resolution of the first video, the resolution of the second video and the resolution of the fourth video are the same.
- the electronic device can decode the target video file once to obtain the third video, and then reduce the resolution of the third video, so that one path of the reduced-resolution third video is used as the fourth video and transmitted for display in the progress display box, while another path is used as the sampled video, on which the corresponding filter rendering is performed before being displayed in the first window and the second window respectively.
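The single-decode, two-path pipeline above can be sketched as follows. All transforms here are hypothetical stand-ins (integer frames, integer arithmetic), not a real decoder or scaler.

```python
# Sketch: decode and downscale once; one path of the result feeds the
# progress box (fourth video), the other is sampled and filter-rendered
# for the windows.
def decode_and_downscale(video_file):
    frames = list(video_file)        # stand-in for decoding
    return [f // 2 for f in frames]  # stand-in for resolution reduction

third_video = decode_and_downscale([2, 4, 6, 8])
fourth_video = third_video                      # path 1: progress display box
sampled_video = third_video[::2]                # path 2: sampled video
first_video = [f + 100 for f in sampled_video]  # "first filter" rendering
print(fourth_video, first_video)  # [1, 2, 3, 4] [101, 103]
```

Sharing one decoded, reduced-resolution stream between both paths is what lets the device avoid decoding or scaling the target video file more than once.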
- the mobile phone displays an interface 1101 as shown in (1) in FIG. 11 .
- the interface 1101 displays a preview frame 1102 , a first window 1103 , a second window 1104 and a progress display frame 1105 .
- the first frame video image 1106 of the target video is displayed in the preview frame 1102, the first frame video image 1107 of the first video is displayed in the first window 1103, and the first frame video image 1108 of the second video is displayed in the second window 1104; the video image of the fourth video is displayed in the progress display box 1105, and the progress control 1109 corresponds to the first frame video image 1110 of the fourth video.
- the mobile phone displays an interface 1111 as shown in (2) in FIG. 11 .
- the interface 1111 includes a preview frame 1102 , a first window 1103 , a second window 1104 and a progress display frame 1105 .
- FIG. 12 it is a schematic flowchart of another video processing method provided by an embodiment of the present invention.
- the method is applied in electronic equipment. As shown in Figure 12, the method includes:
- Step S1201 receiving an editing operation of a target video.
- for details, refer to step S401, which will not be repeated here.
- Step S1202 displaying a first preview interface in response to the editing operation of the target video.
- the first preview interface includes a preview frame; the target video is displayed in the preview frame; the target video is a video obtained by decoding the target video file.
- for details, refer to step S402, which will not be repeated here.
- Step S1203 receiving a first operation on the first preview interface.
- for details, refer to step S403, which will not be repeated here.
- Step S1204 displaying a second preview interface in response to the first operation.
- the second preview interface includes a preview frame, a first window and a second window.
- the target video is displayed in the preview frame
- the first window displays the i-th frame video image of the first video
- the second window displays the i-th frame video image of the second video
- the first video is a video obtained by using the first filter to render the first sampled video, and it contains m frames of video images
- the second video is a video obtained by using a second filter to render the second sampled video, which contains m frames of video images.
- Both the first sampled video and the second sampled video are videos formed by sampling m frames of video images from the decoded video of the target video file, i is an integer greater than 0 and less than m; m is an integer greater than 1.
- the target video is displayed in the preview frame
- the first window displays the i+1th frame of video image of the first video
- the second window displays the i+1th frame of video image of the second video.
- for details, refer to step S404, which will not be repeated here.
- Step S1205 receiving a second operation on the second preview interface.
- the second operation is used to indicate the target filter selected by the user.
- the user may select the target filter in the second preview interface, and send the second operation to the electronic device.
- Step S1206 displaying a third preview interface in response to the second operation.
- the third preview interface includes a preview frame, a first window and a second window.
- the fifth video is displayed in the preview frame, the first video is displayed in the first window, and the second video is displayed in the second window; the fifth video is the video obtained after the target video is rendered by the target filter.
- after receiving the second operation, the electronic device can learn from it the target filter selected by the user. The electronic device then renders the target video using the target filter to obtain the fifth video, and displays the fifth video in the preview frame so that the user can watch it.
- if the user wants to add a filter effect to the target video, the filter control 703 can be operated.
- the mobile phone displays an interface 1301 as shown in (1) in FIG. 13 .
- the interface 1301 includes a preview frame 1302 , a first window 1303 , a second window 1304 , a playback control 1305 , and a progress display frame 1306 .
- the target video is displayed in the preview frame 1302 , and at this moment, only the first frame video image 1307 of the target video is displayed in the preview frame 1302 .
- the first frame 1308 of the first video is displayed in the first window 1303
- the first frame 1309 of the second video is displayed in the second window 1304 .
- the progress display box 1306 includes a progress control 1310
- the progress display box 1306 displays the video image of the fourth video
- the progress control 1310 corresponds to the first frame video image 1311 of the fourth video.
- the mobile phone displays an interface 1312 as shown in (2) in FIG. 13 .
- the interface 1312 includes a preview frame 1302 , a first window 1303 , a second window 1304 , a playback control 1305 , and a progress display frame 1306 .
- the fifth video is displayed in the preview frame 1302 , and at this time, only the first frame video image 1313 of the fifth video is displayed in the preview frame 1302 .
- the second frame of video image 1314 of the first video is displayed in the first window 1303
- the second frame of video image 1315 of the second video is displayed in the second window 1304 .
- the video image of the fourth video is displayed in the progress display box 1306, and the progress control 1310 corresponds to the video image 1311 of the first frame of the fourth video.
- the fifth video is a video in which the target video is rendered by using the first filter.
- the mobile phone displays an interface 1316 as shown in (3) in FIG. 13 .
- the second frame video image 1317 of the fifth video is displayed in the preview frame 1302 .
- a video image 1318 of the third frame of the first video is displayed in the first window 1303
- a video image 1319 of the third frame of the second video is displayed in the second window 1304 .
- the video image of the fourth video is displayed in the progress display box 1306, and the progress control 1310 corresponds to the video image 1320 of the second frame of the fourth video.
- the m frames of video images obtained by decoding the target video file can be used as the first sampled video and the second sampled video; the first sampled video is rendered with the first filter to obtain the first video, and the second sampled video is rendered with the second filter to obtain the second video. The first video is displayed in the first window and the second video in the second window: at the first moment, the first window displays the i-th frame of video image of the first video and the second window displays the i-th frame of video image of the second video; at the second moment, the first window displays the (i+1)-th frame of video image of the first video and the second window displays the (i+1)-th frame of video image of the second video.
- in this way, the decoded video of the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the difference between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
- FIG. 14 is a software structural block diagram of an electronic device provided by an embodiment of the present application.
- the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
- the Android system is divided into four layers, which are, from top to bottom, an application layer, a framework layer, a hardware abstraction layer, and a hardware layer.
- the application layer may include a series of application packages.
- the application package may include a camera application.
- the application layer can be divided into application interface (user interface, UI) and application logic.
- the UI layer includes camera, gallery and other applications.
- the application logic includes a data framework and camera management.
- the data framework includes a data acquisition module, a rendering processing module, a data processing module, and a video decoding module.
- the data acquisition module is used to acquire the target video file.
- the data processing module is used to control the display of videos with different filter rendering effects on the display interface.
- the rendering processing module is used for performing rendering processing on video images.
- the video decoding module is used to decode the video file and obtain the video.
- Camera management includes device management module, Surface management module, session management module, etc. In the Android system, Surface corresponds to a screen buffer, which is used to save the pixel data of the current window.
- the framework layer provides application programming interface (application programming interface, API) and programming framework for the application program of the application layer, including some predefined functions.
- the framework layer includes the camera access interface (Camera2 API).
- the Camera2 API is a set of interfaces introduced by Android to access the camera device. It adopts a pipeline design to make the data stream flow from the camera to the Surface.
- Camera2 API includes camera management (CameraManager) and camera device (CameraDevice).
- CameraManager is the management class of the Camera device, through which the camera device information of the device can be queried to obtain the CameraDevice object.
- CameraDevice provides a series of fixed parameters related to Camera devices, such as basic settings and output formats.
- the hardware abstraction layer is an interface layer between the operating system kernel and the hardware circuit, and its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform, provides a virtual hardware platform for the operating system, makes it hardware-independent, and can be transplanted on various platforms.
- the HAL includes the camera hardware abstraction layer (Camera HAL), and the Camera HAL includes device (Device) 1, device (Device) 2, device (Device) 3, etc. It can be understood that the Device1, Device2 and Device3 are abstract devices.
- the hardware layer (HardWare, HW) is the hardware at the bottom of the operating system.
- the HW includes a camera device (CameraDevice) 1, a camera device (CameraDevice) 2, a camera device (CameraDevice) 3, and the like.
- CameraDevice1, CameraDevice2 and CameraDevice3 may correspond to multiple cameras on the electronic device.
- FIG. 15 is a schematic flowchart of another video processing method provided by an embodiment of the present application.
- in this embodiment, the video decoding module decodes the target video file once for each type of filter to obtain one third video per filter, so at least two third videos are decoded for at least two types of filters.
- the electronic device includes 2 types of filters as an example for illustration, and the electronic device may also include 3 or more types of filters, which is not limited in the present application.
- This method can be applied to the software structure shown in Fig. 14, and it mainly includes the following steps.
- a gallery application of an electronic device receives an editing operation of a target video.
- the user may send the editing operation of the target video to the gallery application of the electronic device.
- the image gallery application of the electronic device triggers the data acquisition module to acquire the target video file.
- after receiving the editing operation of the target video, the gallery application of the electronic device triggers the data acquisition module to acquire the target video file corresponding to the editing operation of the target video.
- the data acquisition module of the electronic device acquires the target video file.
- the user can send an editing-mode selection operation to the electronic device.
- the data acquisition module of the electronic device sends the acquired target video file to the video decoding module.
- the video decoding module decodes the target video file to obtain the target video, and sends the target video to a preview frame of the display interface for display.
- the gallery application of the electronic device receives the start operation of the filter.
- the gallery application of the electronic device triggers the data acquisition module to acquire the target video file, sends an instruction to simultaneously decode the target video file twice to the video decoding module, and sends a filter rendering instruction to the filter rendering module.
- the filter rendering instruction is used to instruct the filter rendering module to use each type of filter therein to perform rendering processing on the received video respectively.
- the data acquisition module of the electronic device acquires the target video file.
- if the data acquisition module cached the target video file in the storage unit without deleting it when acquiring the target video file in the above step 1503, then at this time the data acquisition module only needs to acquire the target video file from its storage unit.
- the data acquisition module of the electronic device transmits the target video file to the video decoding module.
- the video decoding module of the electronic device simultaneously decodes the target video file twice to obtain two third videos.
- the video decoding module of the electronic device transmits the two third videos to the data processing module.
- the data processing module of the electronic device samples m frames of video images from one third video to form the first sampled video, samples m frames of video images from the other third video to form the second sampled video, and sends the first sampled video and the second sampled video to the filter rendering module.
- the data processing module of the electronic device samples m frames of video images from each third video to obtain two sampled videos, namely the first sampled video and the second sampled video.
- m is greater than 0 and not greater than the total number of frames of video images included in the third video.
- the filter rendering module of the electronic device uses the first filter to render the first sampled video to obtain the first video, and uses the second filter to render the second sampled video to obtain the second video.
- the filter rendering module of the electronic device sends the first video and the second video to the display interface, so that the first video is displayed in the first window of the display interface, and the second video is displayed in the second window.
- FIG. 16 is a schematic flowchart of another video processing method provided by an embodiment of the present application.
- in this embodiment, the video decoding module decodes the target video file only once, obtaining a single third video.
- the electronic device includes 2 types of filters as an example for illustration, and the electronic device may also include 3 or more types of filters, which is not limited in the present application.
- This method can be applied to the software structure shown in Fig. 14, and it mainly includes the following steps.
- a gallery application of an electronic device receives an editing operation of a target video.
- the user may send the editing operation of the target video to the gallery application of the electronic device.
- the image gallery application of the electronic device triggers the data acquisition module to acquire the target video file.
- after receiving the editing operation of the target video, the gallery application of the electronic device triggers the data acquisition module to acquire the target video file corresponding to the editing operation of the target video.
- the data acquisition module of the electronic device acquires the target video file.
- the user can send an editing-mode selection operation to the electronic device.
- the data acquisition module of the electronic device sends the acquired target video file to the video decoding module.
- the video decoding module decodes the target video file to obtain the target video, and sends the target video to a preview frame of the display interface for display.
- the gallery application of the electronic device receives the start operation of the filter.
- the image gallery application of the electronic device triggers the data acquisition module to acquire the target video file, sends an instruction to decode the target video file once to the video decoding module, and sends a filter rendering instruction to the filter rendering module.
- the data acquisition module of the electronic device acquires the target video file.
- if the data acquisition module cached the target video file in the storage unit without deleting it when acquiring the target video file in the above step 1603, then at this time the data acquisition module only needs to acquire the target video file from its storage unit.
- the data acquisition module of the electronic device transmits the target video file to the video decoding module.
- the video decoding module of the electronic device decodes the target video file once to obtain a third video.
- the video decoding module of the electronic device transmits the third video to the data processing module.
- if the resolution and/or frame rate of the third video do not need to be adjusted, step S1613 is performed directly; if they need to be adjusted, step S1612 is performed.
- the data processing module of the electronic device adjusts the resolution and/or frame rate of the third video.
- the data processing module of the electronic device samples m frames of video images from the third video to form a first sampled video and a second sampled video, and sends the first sampled video and the second sampled video to the filter rendering module.
- the data processing module of the electronic device may sample m frames of video images in the third video, and form the m video images into the first sampling video and the second sampling video respectively.
- m is greater than 0 and not greater than the total number of frames of video images included in the third video.
- the filter rendering module of the electronic device uses the first filter to render the first sampled video to obtain the first video, and uses the second filter to render the second sampled video to obtain the second video.
- the filter rendering module of the electronic device sends the first video and the second video to the display interface, so that the first video is displayed in the first window of the display interface, and the second video is displayed in the second window.
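The single-decode flow above (decode the target video file once, sample m frames, then render the same samples with each filter) can be sketched as follows. This is a minimal illustration under simplifying assumptions: frames are list elements, and `decode`, `sample_every_n`, `render`, and the filter functions are hypothetical stand-ins for the modules named in FIG. 16, not actual implementations:

```python
# Sketch of the single-decode pipeline: decode once, sample m frames,
# then render the same samples with each filter.

def decode(video_file):
    # Stand-in for the video decoding module: one decode, one third video.
    return list(video_file)

def sample_every_n(frames, n=3):
    # Stand-in for the data processing module: keep 1 frame out of every n.
    return frames[::n]

def render(frames, filter_fn):
    # Stand-in for the filter rendering module: filter every sampled frame.
    return [filter_fn(f) for f in frames]

def build_preview_videos(video_file, first_filter, second_filter):
    third_video = decode(video_file)             # decoded only once
    sampled = sample_every_n(third_video, 3)     # shared m-frame sample
    first_video = render(sampled, first_filter)  # shown in the first window
    second_video = render(sampled, second_filter)  # shown in the second window
    return first_video, second_video
```

For a 10-frame file, `sample_every_n` keeps the 1st, 4th, 7th, and 10th frames, and both windows receive videos built from the same decode and the same samples, which is exactly what avoids the redundant overhead of repeated decoding.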
- in this way, the decoded video of the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the difference between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
- the present application also provides an electronic device, which includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the electronic device is triggered to execute some or all of the steps in the above method embodiments.
- the electronic device 1700 may include: a processor 1701 , a memory 1702 and a communication unit 1703 . These components communicate through one or more buses.
- the structure shown in the figure does not constitute a limitation on the embodiments of the present application; the electronic device may have a bus structure or a star structure, may include more or fewer components than shown, may combine certain components, or may have a different arrangement of components.
- the communication unit 1703 is configured to establish a communication channel so that the storage device can communicate with other devices, receiving user data from other devices or sending user data to other devices.
- the processor 1701 connects the various parts of the entire electronic device through various interfaces and lines, runs or executes software programs and/or modules stored in the memory 1702, and calls data stored in the memory to perform the various functions of the electronic device and/or process data.
- the processor may be composed of integrated circuits (ICs), for example a single packaged IC, or multiple packaged ICs with the same or different functions connected together.
- the processor 1701 may only include a central processing unit (central processing unit, CPU).
- the CPU may be a single computing core, or may include multiple computing cores.
- the memory 1702 is used to store the execution instructions of the processor 1701.
- the memory 1702 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- when the execution instructions in the memory 1702 are executed by the processor 1701, the electronic device 1700 is enabled to execute some or all of the steps in the embodiment shown in FIG. 12 .
- the present application also provides a computer storage medium, which can store a program, wherein, when the program is running, the device where the computer-readable storage medium is located is controlled to execute some or all of the steps in the above embodiments.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (English: read-only memory, abbreviated: ROM) or a random access memory (English: random access memory, abbreviated: RAM), etc.
- an embodiment of the present application also provides a computer program product, which includes executable instructions; when the executable instructions are executed on a computer, the computer is caused to execute some or all of the steps in the above method embodiments.
- "at least one” means one or more, and “multiple” means two or more.
- “And/or” describes the association relationship of associated objects, indicating that there may be three kinds of relationships, for example, A and/or B may indicate that A exists alone, A and B exist simultaneously, or B exists alone. Among them, A and B can be singular or plural.
- the character “/” generally indicates that the contextual objects are an “or” relationship.
- “At least one of the following” and similar expressions refer to any combination of these items, including any combination of single items or plural items.
- At least one of a, b, and c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple.
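The seven combinations listed above can be enumerated mechanically; the throwaway check below uses Python's standard library and is included only to make the enumeration concrete:

```python
from itertools import combinations

# "At least one of a, b, and c": every non-empty combination of the items.
items = ["a", "b", "c"]
combos = [set(c) for r in range(1, len(items) + 1)
          for c in combinations(items, r)]
print(len(combos))  # 7: a, b, c, a-b, a-c, b-c, a-b-c
```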
- if any function is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Abstract
Embodiments of the present application provide a video processing method, a device, a storage medium, and a program product. The method includes: in response to a video editing operation, acquiring a video file; receiving an editing-mode selection operation, and determining, according to the editing-mode selection operation, at least one editing type under the selected editing mode; decoding the video file to obtain multiple frames of target video images; rendering the multiple frames of target video images according to the at least one editing type, and displaying the rendered multiple frames of target video images in the display window of the corresponding editing type within a display interface, where the display interface contains at least one display window, and the at least one display window corresponds one-to-one with the at least one editing type. This enables the user to intuitively see the difference between different editing types applied to the video file, makes it convenient for the user to select the desired editing type, and improves the user experience.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on September 10, 2021, with application number 202111062379.5 and the title "Video processing method, device, storage medium and program product", the entire contents of which are incorporated herein by reference.
The present application relates to the field of computer technology, and in particular to a video processing method, device, storage medium, and program product.
With the development of the Internet and of mobile communication networks, and with the rapid development of the processing and storage capabilities of electronic devices, a massive number of applications have spread and come into use rapidly, especially video applications.
Video generally refers to the various technologies that capture, record, process, store, transmit, and reproduce a series of static images in the form of electrical signals. When continuous image changes exceed a certain number of frames per second, the human eye cannot distinguish individual static images, and what is seen is a smooth, continuous visual effect; such a sequence of continuous images is called a video. In the related art, in order to meet the visual needs of different users, users may also be allowed to edit videos.
When editing a video, filters can be added to the video in order to beautify it. At present, video editing on electronic devices cannot provide a real-time preview of the effect of a filter applied to a video; an effect picture is used instead. Even when an electronic device can preview the effect of a filter applied to a video, a particular filter must be applied to the video before its effect can be seen, and it is impossible to watch multiple filter effects of the video at the same time. In the above manner, the user cannot intuitively see the difference between different filters applied to the video, which is inconvenient for selection and degrades the user experience.
Summary
In view of this, the present application provides a video processing method, device, storage medium, and program product, so as to help solve the problem in the prior art that the user cannot intuitively see the difference between different filters or special effects applied to a video, which results in a poor user experience.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device, the method including:
receiving an editing operation on a target video;
in response to the editing operation on the target video, displaying a first preview interface, where the first preview interface contains a preview frame, the target video is displayed in the preview frame, and the target video is a video obtained by decoding a target video file;
receiving a first operation on the first preview interface;
in response to the first operation, displaying a second preview interface, where the second preview interface contains a preview frame, a first window, and a second window;
at a first moment, displaying the target video in the preview frame, displaying the i-th frame of video image of a first video in the first window, and displaying the i-th frame of video image of a second video in the second window, where the first video is a video obtained by rendering a first sampled video with a first filter and contains m frames of video images, the second video is a video obtained by rendering a second sampled video with a second filter and contains m frames of video images, the first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file, i is an integer greater than 0 and less than m, and m is an integer greater than 1;
at a second moment, displaying the target video in the preview frame, displaying the (i+1)-th frame of video image of the first video in the first window, and displaying the (i+1)-th frame of video image of the second video in the second window.
In the embodiments of the present application, the video decoded from the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the difference between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
In a possible implementation, that the first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file includes:
decoding the target video file once to obtain a third video, and sampling m frames of video images from the third video to form the first sampled video and the second sampled video respectively.
In this way, the electronic device only needs to decode the target video file once to obtain the third video, without decoding the target video file once for each filter type, which avoids the redundant overhead of repeated decoding, increases the processing speed of the electronic device, and reduces the resources occupied.
In a possible implementation, the value of m is smaller than the number of frames of video images contained in the third video.
In this way, the first sampled video and the second sampled video only need to be formed from part of the video images contained in the third video, which can reduce the resource consumption of the electronic device and increase its processing speed without affecting the user's viewing experience.
In a possible implementation, sampling m frames of video images from the third video to form the first sampled video and the second sampled video respectively includes:
in the third video, sampling m frames of video images in the manner of sampling 1 frame of video image out of every 3 frames of video images, to form the first sampled video and the second sampled video respectively.
In this way, 1 frame of video image can be sampled out of every 3 frames of video images in the third video, and after m frames of video images have been sampled, the first sampled video and the second sampled video are formed, which can reduce the resource consumption of the electronic device and increase its processing speed without affecting the user's viewing experience.
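With the 1-in-3 sampling described above, the frames kept from the third video can be listed directly; for example, a 10-frame third video yields its 1st, 4th, 7th, and 10th frames. The check below uses 1-based frame numbers, as in the text:

```python
# Which frame numbers survive 1-in-3 sampling of a 10-frame third video?
frame_numbers = list(range(1, 11))   # frames 1..10 of the third video
sampled = frame_numbers[::3]         # keep the 1st of every 3 frames
print(sampled)                       # [1, 4, 7, 10], so m = 4
```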
In a possible implementation, the resolutions of the first video and the second video are smaller than the resolution of the target video.
This can reduce the detail in the video images displayed in the first window and the second window. Since the display sizes of the video images in the first window and the second window are smaller than the display size of the preview frame, even if the detail in the displayed video images is reduced, the user will hardly experience any difference, while the resource consumption of the electronic device can be reduced and its processing speed increased.
In a possible implementation, the frame rate at which the first window displays the first video and the second window displays the second video is smaller than the frame rate at which the preview frame displays the target video.
In this way, since the display sizes of the first window and the second window are smaller than the display size of the preview frame, reducing the frame rate at which video images are displayed in the first window and the second window can prevent the situation where, because the display sizes of the two windows are small and the video images play too fast, the user cannot easily and clearly watch the first video displayed in the first window and the second video displayed in the second window. Moreover, by adjusting the frame rate at which the first window displays the first video and the second window displays the second video, the resource consumption of the electronic device can be reduced and its processing speed increased.
In a possible implementation, that the first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file includes:
decoding the target video file twice to obtain two third videos, sampling m frames of video images from one third video to form the first sampled video, and sampling m frames of video images from the other third video to form the second sampled video.
In this way, the electronic device can decode the target video file once for each type of filter to obtain a third video, which is simple to implement.
In a possible implementation, the second preview interface further includes a progress display frame whose display size is smaller than the display size of the preview frame, where video images of a fourth video are displayed in the progress display frame, and the fourth video is the same as the target video.
In this way, the user can adjust the video image displayed in the preview frame by adjusting the video image in the progress display frame, which makes it convenient for the user to adjust the video image displayed in the preview frame and improves the user's editing experience.
In a possible implementation, the resolution of the fourth video is smaller than the resolution of the target video.
This can reduce the resource consumption of the electronic device and increase its processing speed.
In a possible implementation, the display sizes of the first window and the second window are the same.
In the embodiments of the present application, in order to provide the user with a better visual effect and make the display interface tidy, the display sizes of the first window and the second window displayed in the preview interface can be set to the same size.
In a possible implementation, the display sizes of the first window and the second window are smaller than the display size of the preview frame.
In the embodiments of the present application, the display sizes of the first window and the second window are smaller than the display size of the preview frame, which can reduce the possibility that the display effect of the preview frame is affected by the first window and the second window being too large.
In a possible implementation, the first window displaying the first video includes: the first window cyclically displaying the first video;
and the second window displaying the second video includes: the second window cyclically displaying the second video.
In the embodiments of the present application, since the display sizes of the first window and the second window are small, playing the first video and the second video in a loop allows the user to watch more clearly the first video displayed in the first window and the second video displayed in the second window, and ensures that the user can watch them at any time, which improves the user experience.
In a possible implementation, the above method further includes:
receiving a second operation on the second preview interface, where the second operation is used to indicate a target filter selected by the user;
in response to the second operation, displaying a third preview interface, where the third preview interface contains a preview frame, a first window, and a second window;
where a fifth video is displayed in the preview frame, the first video is displayed in the first window, the second video is displayed in the second window, and the fifth video is a video obtained by rendering the target video with the target filter.
In the embodiments of the present application, the user can select a target filter, and the target video rendered with the target filter is displayed in the preview frame, so that the user can watch, in the larger preview frame, the target video rendered with the selected target filter, which improves the user experience.
In a second aspect, an embodiment of the present application provides an electronic device, including a memory for storing computer program instructions and a processor for executing the program instructions, where, when the computer program instructions are executed by the processor, the electronic device is triggered to execute the method described in any one of the first aspect.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium, which includes a stored program, where, when the program is running, the device where the computer-readable storage medium is located is controlled to execute the method described in any one of the above first aspect.
In a fourth aspect, an embodiment of the present application provides a computer program product, which contains executable instructions; when the executable instructions are executed on a computer, the computer is caused to execute the method described in any one of the above first aspect.
With the technical solution provided by the embodiments of the present application, during video editing, the m frames of video images obtained by decoding the target video file can be used as the first sampled video and the second sampled video; the first sampled video is rendered with the first filter to obtain the first video, and the second sampled video is rendered with the second filter to obtain the second video; the first video is displayed in the first window and the second video in the second window, where at the first moment the first window displays the i-th frame of video image of the first video and the second window displays the i-th frame of video image of the second video, and at the second moment the first window displays the (i+1)-th frame of video image of the first video and the second window displays the (i+1)-th frame of video image of the second video. In this way, the video decoded from the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the difference between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is an example diagram of different filter rendering effects provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a video processing scenario provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of a video processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 6 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 7a is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 7b is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 10 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 12 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of another video processing scenario provided by an embodiment of the present application;
FIG. 14 is a software structural block diagram of an electronic device provided by an embodiment of the present application;
FIG. 15 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 16 is a schematic flowchart of another video processing method provided by an embodiment of the present application;
FIG. 17 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
In order to better understand the technical solutions of the present application, the embodiments of the present application are described in detail below with reference to the drawings.
It should be clear that the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The terms used in the embodiments of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. The singular forms "a", "said", and "the" used in the embodiments of the present application and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein only describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
For ease of understanding, the terms involved in the embodiments of the present application are introduced here:
1) User experience (UX): also called UX characteristics, refers to the user's experience when using an electronic device during shooting.
2) Filter: mainly used to achieve various special effects on images. A filter generally adjusts the relevant data of an image to give the image a better look, including adjusting pixel values, brightness, saturation, contrast, and so on. For example, the pixels in the original image are represented by RGB (red, green, blue) values, and a filter replaces the RGB values of the pixels in the original image with new RGB values, so that the filtered image has a special effect; images processed with filters of different styles have different effects. There are many kinds of filter styles, such as black-and-white and nostalgia for adjusting image tone, soft focus for adjusting focus, and watercolor, pencil, ink-wash, oil-painting and the like for adjusting picture style; users or professionals can also customize filter styles, such as fresh, Japanese-style, landscape, food, etc.
It should be noted that when the same image is processed with different filters, image effects of different styles can be obtained. For example, filter 1, filter 2, and filter 3 are three different filters. Processing the original image 100 captured by the camera with filter 1 yields the image 101 shown in FIG. 1. Processing the original image 100 captured by the camera with filter 2 yields the image 102 shown in FIG. 1. Processing the original image 100 captured by the camera with filter 3 yields the image 103 shown in FIG. 1. Comparing the image 101, the image 102, and the image 103 shown in FIG. 1, it can be seen that their image effects or styles are different.
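As a concrete illustration of the per-pixel RGB replacement described above, the sketch below applies two simple filter-style mappings to an RGB pixel. The formulas are common illustrative examples (a luma-weighted grayscale and a negative), not the actual filters 1 to 3 of FIG. 1:

```python
# Two toy filters, each mapping an (R, G, B) pixel to new RGB values.

def grayscale(pixel):
    # Luma-weighted average: a common black-and-white style mapping.
    r, g, b = pixel
    y = int(0.299 * r + 0.587 * g + 0.114 * b)
    return (y, y, y)

def invert(pixel):
    # Negative style: replace each channel with its complement.
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

def apply_filter(image, filter_fn):
    # An "image" here is just a list of RGB pixels.
    return [filter_fn(p) for p in image]

original = [(200, 100, 50)]
print(apply_filter(original, grayscale))  # [(124, 124, 124)]
print(apply_filter(original, invert))     # [(55, 155, 205)]
```

The same original pixel yields visibly different results under the two mappings, which is exactly why images processed with different filters have different styles.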
In addition to mobile phones, the electronic devices involved in the embodiments of the present application may also be tablet computers, personal computers (PCs), personal digital assistants (PDAs), smart watches, netbooks, wearable electronic devices, augmented reality (AR) devices, virtual reality (VR) devices, vehicle-mounted devices, smart cars, smart speakers, robots, smart glasses, smart TVs, and so on.
It should be pointed out that, in some possible implementations, an electronic device may also be called a terminal device, user equipment (UE), etc., which is not limited in the embodiments of the present application.
In a practical application scenario, a mobile phone is taken as an example of the electronic device. When the user needs to edit a target video, as shown in FIG. 2, after the user turns on the mobile phone, the display of the mobile phone shows the home screen interface; refer to (1) in FIG. 2. In response to the user operating the icon 201 of the "Gallery" application on the home screen interface, the mobile phone displays the interface 202 shown in (2) in FIG. 2. The interface 202 contains the target video 203 shot by the mobile phone, as well as images and other videos. In response to the user's operation of selecting the target video 203, the mobile phone displays the interface 204 shown in (3) in FIG. 2. The interface 204 is the playback interface of the target video 203 and contains an editing control 205. In response to the user operating the editing control 205, the mobile phone displays the interface 206 shown in (4) in FIG. 2. The interface 206 is the editing interface of the target video 203; by operating the editing control 205, the user brings the mobile phone into the editing interface for the target video 203, in order to edit the target video 203. The interface 206 contains a preview frame 207, in which the target video 203 is displayed, and also includes a filter control 208. If the user wants to add a filter effect to the target video 203, the filter control 208 can be operated. In response to the user operating the filter control 208, the mobile phone displays the interface 301 shown in FIG. 3. The interface 301 contains a preview frame 302, a first window 303, and a second window 304. The target video 203 is displayed in the preview frame 302, a first video image is displayed in the first window 303, and a second video image is displayed in the second window 304; the first video image is an image obtained by rendering the first frame of video image of the target video 203 with filter 1, and the second video image is an image obtained by rendering the first frame of video image of the target video 203 with filter 2.
It should be noted that the mobile phone contains multiple types of filters; in this example, only two types of filters are used for illustration. In the interface 301, a display window is set for each type of filter, and the image rendered with the corresponding filter is displayed in that display window. The embodiments of the present application do not limit the number of filter types contained in the mobile phone.
In the above example, the pictures displayed in the first window 303 and the second window 304 are only filter-effect pictures of a single frame of video image under different filter types, rather than the filter effects of multiple frames of video images of the target video. From the single frame displayed in the first window and the second window, the overall effect of applying a certain filter to the target video cannot be determined. If the user wants to watch the overall filter effect of the target video, the filter type must be applied to the target video before the overall effect can be watched, and only one filter type applied to the target video can be watched at a time; it is impossible to simultaneously watch the overall filter effects of multiple filter types applied to the target video. In the above manner, the user cannot intuitively see the difference between different filters or special effects applied to the video, which is inconvenient for selection and degrades the user experience.
Therefore, the embodiments of the present application propose a new way of video processing. During video editing, the m frames of video images obtained by decoding the target video file can be used as the first sampled video and the second sampled video; the first sampled video is rendered with the first filter to obtain the first video, and the second sampled video is rendered with the second filter to obtain the second video; the first video is displayed in the first window and the second video in the second window, where at the first moment the first window displays the i-th frame of video image of the first video and the second window displays the i-th frame of video image of the second video, and at the second moment the first window displays the (i+1)-th frame of video image of the first video and the second window displays the (i+1)-th frame of video image of the second video. In this way, the video decoded from the target video file can be rendered with different filter types and displayed in the corresponding windows, so that the user can intuitively see the difference between different filters applied to the decoded video of the target video file, which makes it convenient for the user to select the desired editing type and improves the user experience.
Referring to FIG. 4, which is a schematic flowchart of a video processing method provided by an embodiment of the present invention, the method is applied to an electronic device. As shown in FIG. 4, the method includes:
Step S401: receiving an editing operation on the target video.
In the embodiments of the present application, when the user plays a video on the electronic device, a filter effect can usually be added to the target video for the purpose of adding interest or beautifying the video. For example, if the video content is a person, the shot video can be overlaid with a portrait-blurring filter effect in order to beautify the shot content, so that the person stands out. Or, if the video content is a person a singing, a dynamic strobe filter effect can be added to the shot video content to add interest and achieve the effect of simulating a concert.
The user looks through the thumbnails of the saved videos and shot images in the gallery application of the electronic device. The video selected by the user is determined as the target video. When the user selects the thumbnail of the desired video, the electronic device can find the corresponding target video file for the thumbnail and decode it to obtain the desired target video. When the user needs to edit the target video, the user can send an editing operation of the target video to the electronic device; at this time, the electronic device receives the editing operation of the target video.
Step S402: in response to the editing operation on the target video, displaying a first preview interface.
The first preview interface contains a preview frame. The target video is displayed in the preview frame, and the target video is a video obtained by decoding the target video file.
In the embodiments of the present application, after receiving the editing operation of the target video, the mobile phone displays the preview interface for editing the target video, namely the first preview interface. The first preview interface contains a preview frame, and the target video is displayed in the preview frame.
It should be noted that a video is a continuous image sequence composed of continuous frames of video images, and one frame of video image is one image. Due to the persistence of vision of the human eye, when the frames of a frame sequence are played at a certain rate, what the user sees is a continuous video. Since the similarity between continuous frames of video images is extremely high, in order to facilitate storage and transmission, the electronic device can encode the original video to obtain a video file, so as to remove redundancy in the spatial and temporal dimensions and reduce the storage space occupied by the video. Therefore, when the video needs to be played, the electronic device decodes the video file to obtain the desired video.
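The temporal redundancy mentioned above is why encoding pays off: consecutive frames are nearly identical, so storing differences instead of full frames is cheap. The toy sketch below illustrates the idea on a single pixel value; it is only a teaching aid and bears no resemblance to a real video codec:

```python
# Toy delta "encoder": store the first frame, then per-frame differences.
# Consecutive frames are nearly identical, so the differences are small.

def encode(frames):
    return [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

def decode(deltas):
    frames = [deltas[0]]
    for d in deltas[1:]:
        frames.append(frames[-1] + d)  # reconstruct by accumulating deltas
    return frames

frames = [10, 11, 11, 12, 12]          # one pixel's value over five frames
print(encode(frames))                  # [10, 1, 0, 1, 0]
```

Decoding simply accumulates the differences back, recovering the original sequence exactly, which mirrors the decode step the electronic device performs before playback.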
Step S403: receiving a first operation on the first preview interface.
The first operation is an operation for starting the filter function.
In the embodiments of the present application, after the electronic device displays the first preview interface, the first preview interface also includes a filter control. When the user needs to add a filter effect to the target video, the user can send the first operation on the first preview interface, and the electronic device receives the first operation on the first preview interface.
Step S404: in response to the first operation, displaying a second preview interface.
The second preview interface contains a preview frame, a first window, and a second window. At the first moment, the target video is displayed in the preview frame, the first window displays the i-th frame of video image of the first video, and the second window displays the i-th frame of video image of the second video. The first video is a video obtained by rendering the first sampled video with the first filter and contains m frames of video images; the second video is a video obtained by rendering the second sampled video with the second filter and contains m frames of video images. The first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file; i is an integer greater than 0 and less than m; m is an integer greater than 1.
At the second moment, the target video is displayed in the preview frame, the first window displays the (i+1)-th frame of video image of the first video, and the second window displays the (i+1)-th frame of video image of the second video.
In the embodiments of the present application, when the user edits the target video, the user enters the editing preview interface, namely the first preview interface. If a filter effect is to be added to the target video, the electronic device can, after receiving the first operation for starting the filter function, start the filter function and display the second preview interface on the display.
The second preview interface contains a preview frame, a first window, and a second window. The video images displayed in the first window and the second window are the video images of the sampled videos obtained after the target video file has been decoded and then rendered with filters. The first window displays the video images of the first video and the second window displays the video images of the second video, and since the first video and the second video each contain at least two frames of video images, at least two frames of video images are displayed in each of the first window and the second window. That is, at the first moment, the target video is displayed in the preview frame, the first window displays the i-th frame of video image of the first video, and the second window displays the i-th frame of video image of the second video. The first video is a video obtained by rendering the first sampled video with the first filter and contains m frames of video images; the second video is a video obtained by rendering the second sampled video with the second filter and contains m frames of video images. The first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file; i is an integer greater than 0 and less than m. At the second moment, the target video is displayed in the preview frame, the first window displays the (i+1)-th frame of video image of the first video, and the second window displays the (i+1)-th frame of video image of the second video. That is, the first window displays the first video and the second window displays the second video.
It should be noted that the first window displaying the first video and the second window displaying the second video means that the electronic device displays, in the first window, the frames of video images of the first video in order according to the frame sequence of the video images in the first video, and displays, in the second window, the frames of video images of the second video in order according to the frame sequence of the video images in the second video.
It should be noted that the electronic device may contain at least one, two, or more types of filters, and the number of windows in the preview interface that display filter-rendered video images is the same as the number of filter types contained in the electronic device. Each window corresponds to one filter type, and the video displayed in each window is the video obtained by rendering the sampled video with the filter type corresponding to that window. The filter rendering effects of the videos displayed in different windows are different, and each window displays a video with only one filter rendering effect. In the embodiments of the present application, an electronic device containing a first filter and a second filter is taken as an example for illustration. In this case, two windows are displayed in the preview interface of the electronic device, namely the first window and the second window; the first window displays the first video obtained by rendering the first sampled video with the first filter, and the second window displays the video obtained by rendering the second sampled video with the second filter. The embodiments of the present application do not limit the number of filter types contained in the electronic device.
In the embodiments of the present application, a mobile phone containing two types of filters is taken as an example of the electronic device. When the user needs to edit the target video, the user can enter the editing interface of the target video; refer to FIG. 2. The editing interface of the target video contains the filter control 208. If the user needs to add a filter effect to the target video, then in response to the user operating the filter control 208, the mobile phone displays the interface 501 shown in (1) in FIG. 5. The interface 501 contains a preview frame 502, a first window 503, and a second window 504. The target video 203 is displayed in the preview frame 502, the first video is displayed in the first window 503, and the second video is displayed in the second window 504. The first video is a video obtained by rendering the first sampled video with the first filter, and the second video is a video obtained by rendering the second sampled video with the second filter. The first sampled video and the second sampled video are both videos formed by sampling m frames of video images from the video decoded from the target video file. When the mobile phone displays the interface 501, the first window 503 and the second window 504 automatically play the first video and the second video. Assuming that the first video and the second video each contain 3 frames of video images, the automatic playback is specifically as follows: in response to the user operating the filter control 208, the mobile phone displays the interface 501 shown in (1) in FIG. 5, which contains the preview frame 502, the first window 503, and the second window 504. At the first moment, the first window 503 in the interface 501 displays the 1st frame of video image 505 of the first video, and the second window 504 displays the 1st frame of video image 506 of the second video. At the second moment, as shown in the interface 507 in (2) in FIG. 5, the interface 507 contains the preview frame 502, the first window 503, and the second window 504; the first window 503 displays the 2nd frame of video image 508 of the first video, and the second window 504 displays the 2nd frame of video image 509 of the second video. At the third moment, as shown in the interface 510 in (3) in FIG. 5, the interface 510 contains the preview frame 502, the first window 503, and the second window 504; the first window 503 displays the 3rd frame of video image 511 of the first video, and the second window 504 displays the 3rd frame of video image 512 of the second video.
It should be noted that, in the above example, the mobile phone contains two types of filters, so the interfaces 501, 507, and 510 each contain the first window 503 and the second window 504, where the first window 503 displays the video rendered by the first filter and the second window 504 displays the video rendered by the second filter. If the mobile phone contains three or more types of filters, the interfaces 501, 507, and 510 contain a corresponding number of windows, each window displays the video rendered by one filter, and each window displays the video rendered by only one filter.
进一步地,第一窗口与第二窗口的显示尺寸相同。在本申请中,为了给用户提供更好的视觉效果,使得显示界面整齐化,可以将预览界面内显示的第一窗口及第二窗口的显示尺寸设置为相同的尺寸。
进一步地,为了不影响预览框的显示效果,第一窗口与第二窗口的显示尺寸小于预览框的显示尺寸。
进一步地,第一窗口显示第一视频包括:第一窗口循环显示第一视频。第二窗口显示第二视频包括:第二窗口循环显示第二视频。
在本申请实施例中,由于第一视频及第二视频中包含有m帧视频图像,在第一窗口内显示了第一视频的m帧视频图像后,可以重新显示第一视频,即为按照第一视频内各帧视频图像的时序,重新在第一窗口内显示第一视频的第1帧视频图像至第m帧视频图像。同理,在第二窗口内显示了第二视频的m帧视频图像后,可以重新显示第二视频,即为按照第二视频内各帧视频图像的时序,重新在第二窗口内显示第二视频的第1帧视频图像至第m帧视频图像。这样一来,在电子设备的第二预览界面内,第一窗口循环显示第一视频, 第二窗口循环显示第二视频,方便用户随时观看到第一窗口内显示的第一视频,第二窗口内显示的第二视频,提高了用户体验。
在一些实施例中,若第一视频中包含有3帧视频图像,第二视频中包含有3帧视频图像,参考图5中(3)所示的界面510,在界面510中第一窗口503显示第一视频的第3帧视频图像511,第二窗口504显示第二视频的第3帧视频图像512。在下一时刻,如图5中(4)所示的界面513,在界面513中包含有预览框502,第一窗口503,第二窗口504。第一窗口503显示第一视频的第1帧视频图像505,第二窗口504显示第二视频的第1帧视频图像506。
在一些实施例中,第一采样视频及第二采样视频均是从目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:对目标视频文件进行一次解码得到第三视频,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频。
在本申请实施例中,第一采样视频及第二采样视频均是从目标视频文件解码后的视频中采样获取的。电子设备为了避免重复解码的冗余开销,可以对目标视频文件进行一次解码,得到解码后的第三视频,以及第三视频内包含的各帧视频图像的帧序列。电子设备在第三视频中按照视频图像的帧序列,采样m帧视频图像,将采样的m帧视频图像作为第一采样视频及第二采样视频内的视频图像,形成第一采样视频及第二采样视频。电子设备将第一采样视频采用第一滤镜进行渲染处理,形成第一视频,将第一视频在第一窗口内循环显示;将第二采样视频采用第二滤镜进行渲染处理,形成第二视频,将第二视频在第二窗口内循环显示,如图6所示。
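该"一次解码、一次采样、多路滤镜渲染"的流程可以用如下Python草图示意(帧与滤镜均用占位数据表示,函数名为本文假设,并非实际模块接口):

```python
def render_pipeline(frames, filters, step=3):
    """对一次解码得到的帧序列 frames 按 step 采样一次,
    再分别用每种滤镜渲染,返回各窗口循环显示的视频。"""
    sampled = frames[::step]  # 只采样一次,第一/第二采样视频共用同一帧序列
    return {name: [fn(f) for f in sampled] for name, fn in filters.items()}

frames = [f"frame{i}" for i in range(1, 11)]  # 第三视频:10 帧
videos = render_pipeline(frames, {
    "第一滤镜": lambda f: f + "+warm",         # 占位的滤镜函数
    "第二滤镜": lambda f: f + "+mono",
})
# videos["第一滤镜"] == ['frame1+warm', 'frame4+warm', 'frame7+warm', 'frame10+warm']
```

只解码、采样一次,再按滤镜类型分路渲染,即可避免重复解码的冗余开销。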
需要说明的是,第三视频是直接从目标视频文件中解码得到的视频,其与目标视频相同。
进一步地,电子设备可以将第三视频直接作为第一采样视频及第二采样视频,此时,m的值即为第三视频内包含的视频图像的帧数。
此时,电子设备在解码出第三视频后,可以将第三视频直接作为第一采样视频及第二采样视频,采用第一滤镜对第一采样视频进行渲染处理,并在第一窗口显示,采用第二滤镜对第二采样视频进行渲染处理,并在第二窗口显示。
或者,电子设备在第三视频中采样m帧视频图像,其中m的值小于第三视频包含的视频图像的帧数。将采样的m帧视频图像作为第一采样视频及第二采样视频内的视频图像,形成第一采样视频及第二采样视频。
由于第二预览界面内第一窗口及第二窗口的显示尺寸均小于预览框的显示尺寸,且相邻两帧间的预览图像内容相似性极高,若减少第一窗口及第二窗口内每秒显示图像的帧数,用户几乎体验不到区别,且可以降低电子设备的资源损耗,提高电子设备的处理速度。因此,电子设备可以在第三视频中,按照第三视频内包含的视频图像的帧序列,每n帧视频图像中采样一帧视频图像,按照该方式在第三视频中采样m帧视频图像,从而形成第一采样视频及第二采样视频,采用第一滤镜对第一采样视频进行渲染处理,并在第一窗口显示,采用第二滤镜对第二采样视频进行渲染处理,并在第二窗口显示。
在一些实施例中,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频包括:在第三视频中,按照每3帧视频图像中采样1帧视频图像的方式,采样m帧视频图像分别形成第一采样视频及第二采样视频。
即为,电子设备可以在第三视频中,按照第三视频内包含的视频图像的帧序列,每3帧视频图像中采样一帧视频图像,按照该方式在第三视频中采样m帧视频图像,从而形成第一采样视频及第二采样视频。此时,第一窗口显示第三视频的第1帧视频图像,第4帧视频图像,第7帧视频图像等经第一滤镜渲染处理后的视频图像。第二窗口显示第三视频的第1帧视频图像,第4帧视频图像,第7帧视频图像等经第二滤镜渲染处理后的视频图像。
在一些实施例中,第三视频及目标视频相同,假设第三视频及目标视频内包含有10帧视频图像,在第三视频中每3帧抽取一帧视频图像,形成第一采样视频及第二采样视频。即为第一采样视频及第二采样视频中均包含有第三视频的第1帧视频图像,第4帧视频图像,第7帧视频图像及第10帧视频图像。采用第一滤镜对第一采样视频进行渲染处理,得到第一视频,采用第二滤镜对第二采样视频进行渲染处理,得到第二视频。将第一视频在第一窗口中显示,将第二视频在第二窗口中显示。如图7a及7b所示,手机显示如图7a中(1)所示的界面701。在界面701中包含有预览框702。在预览框702内显示有目标视频。界面701内还包括有滤镜控件703。若用户需要对目标视频添加滤镜效果,则可以对滤镜控件703进行操作。响应于用户操作滤镜控件703,手机显示如图7a中(2)所示的界面704。界面704中包含有预览框702,第一窗口705,第二窗口706,播放控件707。其中,预览框702中显示有目标视频,此时,预览框702内仅显示目标视频的第1帧视频图像708。在用户未操作播放控件707时,预览框702内仅显示目标视频的第1帧视频图像708,并不会显示目标视频的其他视频图像,该情况可以参考上图5所示,在此不再赘述。第一窗口705中显示有第一视频的第1帧视频图像709,第二窗口706中显示有第二视频的第1帧视频图像710。响应于用户对播放控件707的操作,手机显示如图7a中(3)所示的界面711,在界面711中包含有预览框702,第一窗口705及第二窗口706。其中,预览框702中显示目标视频的第2帧视频图像712,第一窗口705显示第一视频的第2帧视频图像713,第二窗口706中显示有第二视频的第2帧视频图像714。在下一时刻,如图7a中(4)所示的界面715,在界面715中包含有预览框702,第一窗口705及第二窗口706。其中,预览框702中显示目标视频的第3帧视频图像716,第一窗口705显示第一视频的第3帧视频图像717,第二窗口706中显示有第二视频的第3帧视频图像718。在下一时刻,如图7b中(1)所示的界面719,在界面719中包含有预览框702,第一窗口705及第二窗口706。其中,预览框702中显示目标视频的第4帧视频图像720,第一窗口705显示第一视频的第4帧视频图像721,第二窗口706中显示有第二视频的第4帧视频图像722。在下一时刻,如图7b中(2)所示的界面723,在界面723中包含有预览框702,第一窗口705及第二窗口706。其中,预览框702中显示目标视频的第5帧视频图像724,第一窗口705显示第一视频的第1帧视频图像709,第二窗口706中显示有第二视频的第1帧视频图像710。在下一时刻,如图7b中(3)所示的界面725,在界面725中包含有预览框702,第一窗口705及第二窗口706。其中,预览框702中显示目标视频的第6帧视频图像726,第一窗口705显示第一视频的第2帧视频图像713,第二窗口706中显示有第二视频的第2帧视频图像714。即为,在手机中,第一窗口705内循环显示第一视频,第二窗口706内循环显示第二视频。预览框内显示的目标视频是未经滤镜渲染的视频。
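图7a及图7b所示的播放对应关系可以用如下Python代码示意(假设预览框与窗口帧率相同,窗口循环播放4帧采样视频;函数均为说明而设):

```python
SAMPLED = [1, 4, 7, 10]  # 每 3 帧采 1 帧后,采样视频对应第三视频的帧号

def preview_frame(tick: int) -> int:
    """预览框在第 tick 个时刻(从 0 计)显示目标视频的帧号。"""
    return tick + 1

def window_frame(tick: int) -> int:
    """第一/第二窗口循环显示采样视频,返回采样视频的帧序号(1..4)。"""
    return tick % len(SAMPLED) + 1

# 预览框显示第 5 帧时(tick=4),窗口回到采样视频的第 1 帧
pairs = [(preview_frame(t), window_frame(t)) for t in range(6)]
# pairs == [(1, 1), (2, 2), (3, 3), (4, 4), (5, 1), (6, 2)]
```

该对应关系与界面704至界面725的显示顺序一致:预览框逐帧前进,窗口每4帧循环一次。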
在一些实施例中,在第二预览界面中,第一视频及第二视频的分辨率小于目标视频的分辨率。
为了不影响目标视频的显示,第二预览界面内的第一窗口及第二窗口的尺寸均小于预览框的尺寸。基于电子设备是逐帧显示视频内的视频图像的,相邻两帧间的视频图像内容相似性极高,且第一窗口及第二窗口的显示空间较小,可以降低第一窗口及第二窗口显示的视频的分辨率,即减少第一视频及第二视频的分辨率,也就是减少第一视频及第二视频的显示细节部分。鉴于第一窗口及第二窗口的显示空间较小,即使减少第一视频及第二视频的分辨率,用户也几乎体验不到区别,且可以降低电子设备的资源损耗,提高电子设备的处理速度。因此,电子设备可以根据第一窗口及第二窗口的尺寸调节第一视频及第二视频的分辨率,例如,预览框内显示的目标视频的分辨率为1080*720,电子设备可以将第一窗口内显示的第一视频及第二窗口内显示的第二视频的分辨率调整为352*288。
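降低分辨率的一种简化做法是最近邻降采样,如下Python草图所示(仅为示意,实际实现通常由图形库完成):

```python
def downscale(image, dst_w, dst_h):
    """最近邻降采样:image 为按行存储的二维像素列表,
    返回 dst_h 行 dst_w 列的缩小图像(有效像素减少)。"""
    src_h, src_w = len(image), len(image[0])
    return [
        [image[y * src_h // dst_h][x * src_w // dst_w] for x in range(dst_w)]
        for y in range(dst_h)
    ]

# 以 4x4 图像缩为 2x2 为例
img = [[r * 4 + c for c in range(4)] for r in range(4)]
small = downscale(img, 2, 2)
# small == [[0, 2], [8, 10]]
```

降采样后图像的像素总量减少,对应正文中"减少有效像素以降低分辨率"的处理。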
在一些实施例中,参考图5中(1)所示,手机显示如图5中(1)所示的界面501,在界面501中包含有预览框502,第一窗口503及第二窗口504。在预览框502内显示有目标视频203,在第一窗口503内显示有第一视频,第二窗口504内显示有第二视频。电子设备在显示第一视频及第二视频时,可以先降低第一视频及第二视频的分辨率,在降低了第一视频及第二视频的分辨率后,在第一窗口503中显示第一视频,在第二窗口504中显示第二视频。
需要说明的是,电子设备对分辨率的调整可以是根据实际需求预先设置调整的值,本申请对此不作限制。
需要说明的是,视频的分辨率是其内包含的视频图像的分辨率,视频图像的分辨率是视频图像的宽和高像素值。视频图像分辨率是用于度量视频图像内数据量的一个参数,通常表示为每英寸像素。视频图像A的分辨率为3200*180,是指它在横向和纵向上的有效像素,显示区域的尺寸较小时,每英寸像素值较高,看起来清晰;显示区域的尺寸较大时,由于没有那么多有效像素填充显示区域,有效像素的每英寸像素值下降,显示时就模糊了。在本申请实施例中,电子设备可以通过减少第一视频及第二视频内的视频图像的有效像素达到降低第一视频及第二视频的分辨率的目的。
需要说明的是,电子设备调整第一视频及第二视频的分辨率,可以是通过调整第一采样视频及第二采样视频的分辨率来实现。当然,也可以是直接通过对第三视频进行分辨率的调整,来调整第一视频及第二视频的分辨率,如图8所示。还可以是对第一视频及第二视频直接进行分辨率的调整,本申请对此不作限制。
在一些实施例中,第一窗口显示第一视频的帧率及第二窗口显示第二视频的帧率与预览框显示目标视频的帧率相同。
为了方便实现,可以将预览框显示目标视频的帧率设置为与第一窗口显示第一视频及第二窗口显示第二视频的帧率相等。即为,预览框每秒显示目标视频的视频图像的帧数与第一窗口每秒显示第一视频的视频图像的帧数、第二窗口每秒显示第二视频的视频图像的帧数相等。例如,预览框每秒显示30帧目标视频的视频图像,第一窗口每秒也显示30帧第一视频的视频图像、第二窗口每秒也显示30帧第二视频的视频图像。也就是说,预览框内的图像刷新频率与第一窗口及第二窗口内的图像刷新频率相同,参考图7a及图7b所示。这样一来,电子设备可以将采样的第一采样视频通过第一滤镜进行渲染处理,形成第一视频,并在第一窗口中显示;将采样的第二采样视频通过第二滤镜进行渲染处理,形成第二视频,并在第二窗口中显示,实现简单。
或者,由于第一窗口及第二窗口的显示尺寸小于预览框的显示尺寸,在第一视频及第二视频的帧率与目标视频的帧率相同时,由于视频的播放速度较快,第一窗口及第二窗口的显示尺寸较小,导致用户不容易清楚的观看到第一窗口内显示的第一视频,第二窗口内显示的第二视频。为了方便用户清楚的观看,可以降低第一窗口显示第一视频的帧率,第二窗口显示第二视频的帧率。即为减少第一窗口每秒显示第一视频的视频图像的帧数,第二窗口每秒显示第二视频的视频图像的帧数。
在一些实施例中,预览框显示目标视频的帧率是第一窗口显示第一视频及第二窗口显示第二视频的帧率的3倍。
在一些实施例中,参考图9所示,手机显示如图9中(1)所示的界面901。在界面901中包含有预览框902,第一窗口903及第二窗口904。在预览框902中显示目标视频的第1帧视频图像905,在第一窗口903中显示第一视频的第1帧视频图像906,在第二窗口904中显示第二视频的第1帧视频图像907。在下一时刻,手机显示如图9中(2)所示的界面908。在界面908中包含有预览框902,第一窗口903及第二窗口904。在预览框902中显示目标视频的第2帧视频图像909,在第一窗口903中显示第一视频的第1帧视频图像906,在第二窗口904中显示第二视频的第1帧视频图像907。在下一时刻,手机显示如图9中(3)所示的界面910。在界面910中包含有预览框902,第一窗口903及第二窗口904。在预览框902中显示目标视频的第3帧视频图像911,在第一窗口903中显示第一视频的第1帧视频图像906,在第二窗口904中显示第二视频的第1帧视频图像907。在下一时刻,手机显示如图9中(4)所示的界面912。在界面912中包含有预览框902,第一窗口903及第二窗口904。在预览框902中显示目标视频的第4帧视频图像913,在第一窗口903中显示第一视频的第2帧视频图像914,在第二窗口904中显示第二视频的第2帧视频图像915。
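图9所示"预览框帧率为窗口帧率3倍"的对应关系可以用如下代码示意(函数为说明而设,并非实际实现):

```python
def preview_frame(tick: int) -> int:
    return tick + 1  # 预览框每个时刻前进一帧

def window_frame(tick: int) -> int:
    """窗口帧率为预览框的 1/3:每 3 个时刻才前进一帧。"""
    return tick // 3 + 1

pairs = [(preview_frame(t), window_frame(t)) for t in range(4)]
# pairs == [(1, 1), (2, 1), (3, 1), (4, 2)]
```

即预览框显示第1至3帧期间,窗口停留在第1帧;预览框显示第4帧时,窗口才前进到第2帧,与界面901至界面912的显示顺序一致。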
或者,第一采样视频及第二采样视频均是从目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:
对目标视频文件分别进行两次解码得到两个第三视频,在一个第三视频中采样m帧视频图像形成第一采样视频,并在另一个第三视频中采样m帧视频图像形成第二采样视频。
在本申请实施例中,由于需查看目标视频文件解码后的视频采用每种滤镜类型的渲染效果,为了便于实现,可以针对每种滤镜类型均进行一次目标视频文件的解码处理,得到多个第三视频。在每个第三视频中采样m帧视频图像形成相应的采样视频,进而使用每种类型的滤镜分别对采样视频进行渲染处理,得到多个视频,并分别在对应的窗口内显示。例如,电子设备中包含有两种类型的滤镜,则电子设备可以对目标视频文件分别进行两次解码得到两个第三视频,在一个第三视频中采样m帧视频图像形成第一采样视频,并在另一个第三视频中采样m帧视频图像形成第二采样视频。电子设备对第一采样视频采用第一滤镜进行渲染处理,得到第一视频,并在第一窗口进行显示。对第二采样视频采用第二滤镜进行渲染处理,得到第二视频,并在第二窗口进行显示,如图10所示。
在一些实施例中,第二预览界面内还包括显示尺寸小于预览框的显示尺寸的进度显示框,进度显示框内显示有第四视频内的视频图像。
其中,第四视频与目标视频相同,进度显示框内包含有进度控件用于控制预览框内显示的目标视频的视频图像,进度控件在进度显示框内对应的第四视频的视频图像即为预览框内显示的目标视频的视频图像。这样,用户可以通过调整进度控件在进度显示框内对应的第四视频的视频图像,进而调整预览框内显示的目标视频的视频图像。
在本申请实施例中,第二预览界面还包括有进度显示框。在进度显示框内显示有第四视频内的视频图像。由于预览框内显示目标视频,用户无法控制目标视频的播放内容,为了方便用户调整预览框内显示的目标视频的视频图像,增加了进度显示框。用户可以通过调整进度显示框中进度控件对应的第四视频的视频图像,进而调整预览框内显示的目标视频的视频图像。
为了不影响预览框的显示效果,进度显示框的显示尺寸小于预览框的显示尺寸。
在一些实施例中,第四视频的分辨率小于目标视频的分辨率。
基于电子设备是逐帧显示视频内的视频图像的,相邻两帧间的视频图像内容相似性极高,且进度显示框的显示空间较小,可以降低进度显示框显示的视频的分辨率,即为可以减少第四视频的分辨率,也就是说,减少第四视频的显示细节部分,鉴于进度显示框的显示空间较小,即使减少第四视频的分辨率,对于用户几乎体验不到区别,且可以降低电子设备的资源损耗,提高电子设备的处理速度。因此,电子设备可以根据进度显示框的显示空间,调整第四视频的分辨率。
在一些实施例中,第一窗口、第二窗口及进度显示框的显示尺寸相同,可以使得显示界面整齐化,为用户提供更好的视觉效果。在第一窗口、第二窗口及进度显示框的显示尺寸相同时,第一视频的分辨率、第二视频的分辨率及第四视频的分辨率相同。为了降低电子设备的资源消耗,电子设备可以在对目标视频文件进行一次解码得到第三视频后,将第三视频进行降分辨率处理,将降分辨率处理后的第三视频一路作为第四视频传输至进度显示框内显示,另一路作为采样视频,进行相应的滤镜渲染处理,分别在第一窗口及第二窗口显示。
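上述"一次解码、一次降分辨率、两路使用"的处理可以草拟为如下Python代码(帧用字符串占位,接口为本文假设):

```python
def split_paths(decoded_frames, step=3):
    """解码后的第三视频先统一降分辨率,
    一路作为第四视频送进度显示框,另一路采样后供滤镜渲染。"""
    low_res = [f + "@low" for f in decoded_frames]  # 占位:降分辨率处理
    fourth_video = low_res                          # 进度显示框一路
    sampled = low_res[::step]                       # 滤镜渲染一路
    return fourth_video, sampled

fourth, sampled = split_paths([f"f{i}" for i in range(1, 7)])
# fourth 共 6 帧;sampled == ['f1@low', 'f4@low']
```

降分辨率只做一次,两路复用同一结果,从而降低资源消耗。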
在一些实施例中,手机显示如图11中(1)所示的界面1101。界面1101中包含有预览框1102,第一窗口1103,第二窗口1104及进度显示框1105。在预览框1102中显示目标视频的第1帧视频图像1106,在第一窗口1103中显示第一视频的第1帧视频图像1107,在第二窗口1104中显示第二视频的第1帧视频图像1108,在进度显示框1105中显示第四视频的视频图像,且进度控件1109对应第四视频的第1帧视频图像1110。在下一时刻,手机显示如图11中(2)所示的界面1111。在界面1111中包含有预览框1102,第一窗口1103,第二窗口1104及进度显示框1105。在预览框1102中显示目标视频的第2帧视频图像1112,在第一窗口1103中显示第一视频的第2帧视频图像1113,在第二窗口1104中显示第二视频的第2帧视频图像1114,在进度显示框1105中显示第四视频的视频图像,且进度控件1109对应第四视频的第2帧视频图像1115。
参考图12所示为本发明实施例提供的另一种视频处理方法的流程示意图。该方法应用在电子设备中。如图12所示,所述方法包括:
步骤S1201、接收目标视频的编辑操作。
具体可参考步骤S401在此不再赘述。
步骤S1202、响应于目标视频的编辑操作,显示第一预览界面。
其中,第一预览界面内包含有预览框;预览框内显示有目标视频;目标视频是目标视频文件解码得到的视频。
具体可参考步骤S402在此不再赘述。
步骤S1203、接收对第一预览界面的第一操作。
具体可参考步骤S403在此不再赘述。
步骤S1204、响应于第一操作,显示第二预览界面。
其中,第二预览界面内包含有预览框、第一窗口及第二窗口。在第一时刻,预览框内显示目标视频,第一窗口显示第一视频的第i帧视频图像,第二窗口显示第二视频的第i帧视频图像,第一视频为采用第一滤镜对第一采样视频进行渲染处理后的视频,其内包含m帧视频图像,第二视频为采用第二滤镜对第二采样视频进行渲染处理后的视频,其内包含m帧视频图像,所述第一采样视频及所述第二采样视频均是从目标视频文件解码后的视频中采样m帧视频图像形成的视频,i为大于0,且小于m的整数;m为大于1的整数。
在第二时刻,预览框内显示目标视频,第一窗口显示第一视频的第i+1帧视频图像,第二窗口显示第二视频的第i+1帧视频图像。
具体可参考步骤S404在此不再赘述。
步骤S1205、接收对第二预览界面的第二操作。
其中,第二操作用于指示用户选择的目标滤镜。
在本申请实施例中,若用户需要使用目标滤镜,则可以在第二预览界面中选择目标滤镜,且向电子设备发送第二操作。
步骤S1206、响应于所述第二操作,显示第三预览界面。
其中,第三预览界面内包含有预览框、第一窗口及第二窗口。预览框内显示第五视频,第一窗口显示第一视频,第二窗口显示第二视频,第五视频为采用目标滤镜对目标视频进行渲染处理后的视频。
在本申请实施例中,电子设备在接收到第二操作后,通过第二操作可以获知用户选择的目标滤镜,此时,电子设备可以将目标视频采用目标滤镜进行渲染处理,得到第五视频,将第五视频在预览框中显示,以便用户观看。
在一些实施例中,参考图7a中(1)所示,若用户需要对目标视频添加滤镜效果,则可以对滤镜控件703进行操作。响应于用户操作滤镜控件703,手机显示如图13中(1)所示的界面1301。界面1301中包含有预览框1302,第一窗口1303,第二窗口1304,播放控件1305,及进度显示框1306。其中,预览框1302中显示有目标视频,此时,预览框1302内仅显示目标视频的第1帧视频图像1307。第一窗口1303中显示有第一视频的第1帧视频图像1308,第二窗口1304中显示有第二视频的第1帧视频图像1309。进度显示框1306中包含有进度控件1310,进度显示框1306显示有第四视频的视频图像,且进度控件1310对应第四视频的第1帧视频图像1311。假设用户选择了第一滤镜,响应于用户选择第一窗口1303的操作,手机显示如图13中(2)所示的界面1312。界面1312中包含有预览框1302,第一窗口1303,第二窗口1304,播放控件1305,及进度显示框1306。其中,预览框1302中显示有第五视频,此时,预览框1302内仅显示第五视频的第1帧视频图像1313。第一窗口1303中显示有第一视频的第2帧视频图像1314,第二窗口1304中显示有第二视频的第2帧视频图像1315。进度显示框1306显示有第四视频的视频图像,且进度控件1310对应第四视频的第1帧视频图像1311。第五视频是采用第一滤镜对目标视频进行渲染处理的视频。响应于用户对播放控件1305的操作,手机显示如图13中(3)所示的界面1316,在界面1316中包含有预览框1302,第一窗口1303,第二窗口1304,进度显示框1306。其中,预览框1302内显示第五视频的第2帧视频图像1317。第一窗口1303中显示有第一视频的第3帧视频图像1318,第二窗口1304中显示有第二视频的第3帧视频图像1319。进度显示框1306显示有第四视频的视频图像,且进度控件1310对应第四视频的第2帧视频图像1320。
在视频编辑时,可以将目标视频文件解码后得到的m帧视频图像作为第一采样视频及第二采样视频,对第一采样视频进行第一滤镜渲染处理得到第一视频,对第二采样视频进行第二滤镜渲染处理得到第二视频,并在第一窗口显示第一视频,在第二窗口显示第二视频。在第一时刻,第一窗口内显示第一视频的第i帧视频图像,第二窗口内显示第二视频的第i帧视频图像;在第二时刻,第一窗口内显示第一视频的第i+1帧视频图像,第二窗口内显示第二视频的第i+1帧视频图像。这样可以对目标视频文件解码后的视频进行不同滤镜类型的渲染处理,并在对应的窗口内显示出,从而可以使用户直观地看出不同滤镜应用在目标视频文件解码后的视频上的不同,便于用户选择所需的编辑类型,提高了用户体验。
参见图14,为本申请实施例提供的一种电子设备的软件结构框图。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将安卓(Android)系统分为四层,从上至下分别为应用层、框架层、硬件抽象层和硬件层。
应用层(Application,App)可以包括一系列应用程序包。例如,该应用程序包可以包括相机应用。应用层又可以分为应用界面(user interface,UI)和应用逻辑。
参考图14所示,UI层包括相机、图库以及其它应用。
应用逻辑包括数据框架和相机管理。其中,数据框架包括数据获取模块,渲染处理模块,数据处理模块,视频解码模块。数据获取模块,用于获取目标视频文件。数据处理模块,用于控制不同滤镜渲染效果的视频在显示界面显示。渲染处理模块,用于对视频图像进行渲染处理。视频解码模块,用于对视频文件进行解码,获取视频。相机管理包括设备管理模块、Surface管理模块、会话管理模块等。在Android系统中,Surface对应一块屏幕缓冲区,用于保存当前窗口的像素数据。
框架层(Framework,FWK)为应用层的应用程序提供应用编程接口(application programming interface,API)和编程框架,包括一些预先定义的函数。在图14中,框架层包括相机访问接口(Camera2 API),Camera2 API是Android推出的一套访问摄像头设备的接口,其采用管道式的设计,使数据流从摄像头流向Surface。Camera2 API包括相机管理(CameraManager)和相机设备(CameraDevice)。CameraManager为Camera设备的管理类,通过该类对象可以查询设备的Camera设备信息,得到CameraDevice对象。CameraDevice提供了Camera设备相关的一系列固定参数,例如基础的设置和输出格式等。
硬件抽象层(HAL)是位于操作系统内核与硬件电路之间的接口层,其目的在于将硬件抽象化。它隐藏了特定平台的硬件接口细节,为操作系统提供虚拟硬件平台,使其具有硬件无关性,可在多种平台上进行移植。在图14中,HAL包括相机硬件抽象层(Camera HAL),Camera HAL包括设备(Device)1、设备(Device)2、设备(Device)3等。可理解,该Device1、Device2和Device3为抽象的设备。
硬件层(HardWare,HW)是位于操作系统最底层的硬件。在图14中,HW包括相机设备(CameraDevice)1、相机设备(CameraDevice)2、相机设备(CameraDevice)3等。其中,CameraDevice1、CameraDevice2和CameraDevice3可对应于电子设备上的多个摄像头。
参见图15,为本申请实施例提供的另一种视频处理方法流程示意图。在本申请实施例中,为了方便实现,视频解码模块需针对每种类型的滤镜解码出一路第三视频,至少两种类型的滤镜需解码出至少两个第三视频。在本申请实施例中,以电子设备中包含有2种类型的滤镜为例进行说明,电子设备中还可以包含3种及3种以上的滤镜类型,本申请对此不作限制。该方法可应用于图14所示的软件结构,其主要包括以下步骤。
S1501、电子设备的图库应用接收目标视频的编辑操作。
具体的,用户在需要对目标视频进行编辑时,可以向电子设备的图库应用发送目标视频的编辑操作。
S1502、电子设备的图库应用触发数据获取模块获取目标视频文件。
具体的,电子设备的图库应用接收到目标视频的编辑操作后,触发数据获取模块获取该编辑操作对应的目标视频文件。
S1503、电子设备的数据获取模块获取目标视频文件。
用户在需要对视频文件进行视频编辑时,可以向电子设备发送其选择的编辑模式选择操作。
S1504、电子设备的数据获取模块将获取的目标视频文件发送至视频解码模块。
S1505、视频解码模块对目标视频文件进行解码得到目标视频,将目标视频发送至显示界面的预览框进行显示。
S1506、电子设备的图库应用接收滤镜的启动操作。
S1507、电子设备的图库应用触发数据获取模块获取目标视频文件,向视频解码模块发送同时解码2次目标视频文件的指令,并向滤镜渲染模块发送滤镜渲染指令。
其中,滤镜渲染指令用于指示滤镜渲染模块采用其内的每种类型的滤镜对接收的视频分别进行渲染处理。
S1508、电子设备的数据获取模块获取目标视频文件。
需要说明的是,数据获取模块若在上述步骤1503获取了目标视频文件时将其缓存至存储单元中且未将其删除,则在此时,数据获取模块仅需在其存储单元中获取该目标视频文件。
S1509、电子设备的数据获取模块将目标视频文件传输至视频解码模块。
S1510、电子设备的视频解码模块同时解码2次目标视频文件,获取2个第三视频。
S1511、电子设备的视频解码模块将2个第三视频传输至数据处理模块。
S1512、电子设备的数据处理模块在一个第三视频中采样m帧视频图像形成第一采样视频,并在另一个第三视频中采样m帧视频图像形成第二采样视频,将第一采样视频及第二采样视频发送至滤镜渲染模块。
具体的,电子设备的数据处理模块在接收到2个第三视频后,针对每个第三视频进行m帧视频图像的采样,得到2路采样视频,即为第一采样视频及第二采样视频。
其中,m为大于0且不大于第三视频内包含的视频图像的总帧数的整数。
S1513、电子设备的滤镜渲染模块采用第一滤镜对第一采样视频进行渲染处理,得到第一视频,采用第二滤镜对第二采样视频进行渲染处理,得到第二视频。
S1514、电子设备的滤镜渲染模块将第一视频及第二视频发送至显示界面,以便在显示界面的第一窗口显示第一视频,在第二窗口显示第二视频。
参见图16,为本申请实施例提供的另一种视频处理方法流程示意图。在本申请实施例中,视频解码模块仅解码出一路第三视频。在本申请实施例中,以电子设备中包含有2种类型的滤镜为例进行说明,电子设备中还可以包含3种及3种以上的滤镜类型,本申请对此不作限制。该方法可应用于图14所示的软件结构,其主要包括以下步骤。
S1601、电子设备的图库应用接收目标视频的编辑操作。
具体的,用户在需要对目标视频进行编辑时,可以向电子设备的图库应用发送目标视频的编辑操作。
S1602、电子设备的图库应用触发数据获取模块获取目标视频文件。
具体的,电子设备的图库应用接收到目标视频的编辑操作后,触发数据获取模块获取该编辑操作对应的目标视频文件。
S1603、电子设备的数据获取模块获取目标视频文件。
用户在需要对视频文件进行视频编辑时,可以向电子设备发送其选择的编辑模式选择操作。
S1604、电子设备的数据获取模块将获取的目标视频文件发送至视频解码模块。
S1605、视频解码模块对目标视频文件进行解码得到目标视频,将目标视频发送至显示界面的预览框进行显示。
S1606、电子设备的图库应用接收滤镜的启动操作。
S1607、电子设备的图库应用触发数据获取模块获取目标视频文件,向视频解码模块发送解码一次目标视频文件的指令,并向滤镜渲染模块发送滤镜渲染指令。
S1608、电子设备的数据获取模块获取目标视频文件。
需要说明的是,数据获取模块若在上述步骤1603获取了目标视频文件时将其缓存至存储单元中且未将其删除,则在此时,数据获取模块仅需在其存储单元中获取该目标视频文件。
S1609、电子设备的数据获取模块将目标视频文件传输至视频解码模块。
S1610、电子设备的视频解码模块解码一次目标视频文件,获取一个第三视频。
S1611、电子设备的视频解码模块将该第三视频传输至数据处理模块。
需要说明的是,电子设备的数据处理模块在接收到第三视频后,由于显示界面内第一窗口及第二窗口的显示尺寸小于预览框的显示尺寸,因此电子设备可以对第三视频进行分辨率和/或帧率的调整,当然也可以不调整。若不调整,则直接执行步骤S1613,若调整则执行步骤S1612。
S1612、电子设备的数据处理模块调整第三视频的分辨率和/或帧率。
S1613、电子设备的数据处理模块在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频,并将第一采样视频及第二采样视频发送至滤镜渲染模块。
具体的,电子设备的数据处理模块在获取到第三视频后,可以在第三视频中采样m帧视频图像,将该m帧视频图像分别形成第一采样视频及第二采样视频。
其中,m为大于0且不大于第三视频内包含的视频图像的总帧数的整数。
S1614、电子设备的滤镜渲染模块采用第一滤镜对第一采样视频进行渲染处理,得到第一视频,采用第二滤镜对第二采样视频进行渲染处理,得到第二视频。
S1615、电子设备的滤镜渲染模块将第一视频及第二视频发送至显示界面,以便在显示界面的第一窗口显示第一视频,在第二窗口显示第二视频。
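图16的单次解码流程可以用如下简化类示意(各方法对应图中模块,均为本文假设的占位实现,并非真实接口):

```python
class VideoEditPipeline:
    def decode(self, video_file):
        """视频解码模块:对目标视频文件解码一次,得到第三视频。"""
        return [f"{video_file}:f{i}" for i in range(1, 7)]

    def sample(self, third_video, step=3):
        """数据处理模块:采样 m 帧,分别形成第一、第二采样视频。"""
        s = third_video[::step]
        return list(s), list(s)

    def render(self, s1, s2):
        """滤镜渲染模块:两路采样视频分别经第一、第二滤镜渲染。"""
        return [f + "+F1" for f in s1], [f + "+F2" for f in s2]

pipe = VideoEditPipeline()
third = pipe.decode("target.mp4")
v1, v2 = pipe.render(*pipe.sample(third))
# v1 == ['target.mp4:f1+F1', 'target.mp4:f4+F1']
```

即步骤S1610至S1614的"解码一次、采样一次、分路渲染"串联关系;最终v1、v2分别送第一窗口及第二窗口显示。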
这样可以对目标视频文件解码后的视频进行不同滤镜类型渲染处理,并在对应的窗口内显示出,从而可以使用户直观的看出不同滤镜应用在目标视频文件解码后的视频上的不同,便于用户选择所需的编辑类型,提高了用户体验。
与上述方法实施例相对应,本申请还提供了一种电子设备,该电子设备包括用于存储计算机程序指令的存储器和用于执行程序指令的处理器,其中,当该计算机程序指令被所述处理器执行时,触发所述电子设备执行上述方法实施例中的部分或全部步骤。
参见图17,为本申请实施例提供的一种电子设备的结构示意图。如图17所示,该电子设备1700可以包括:处理器1701、存储器1702及通信单元1703。这些组件通过一条或多条总线进行通信。本领域技术人员可以理解,图中示出的电子设备的结构并不构成对本发明实施例的限定,它既可以是总线形结构,也可以是星型结构,还可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
其中,所述通信单元1703,用于建立通信信道,从而使所述电子设备可以与其它设备进行通信,接收其他设备发送的用户数据或者向其他设备发送用户数据。
所述处理器1701,为电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器1702内的软件程序和/或模块,以及调用存储在存储器内的数据,以执行电子设备的各种功能和/或处理数据。所述处理器可以由集成电路(integrated circuit,IC)组成,例如可以由单颗封装的IC所组成,也可以由连接多颗相同功能或不同功能的封装IC而组成。举例来说,处理器1701可以仅包括中央处理器(central processing unit,CPU)。在本发明实施方式中,CPU可以是单运算核心,也可以包括多运算核心。
所述存储器1702,用于存储处理器1701的执行指令,存储器1702可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
当存储器1702中的执行指令由处理器1701执行时,使得电子设备1700能够执行图12所示实施例中的部分或全部步骤。
具体实现中,本申请还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,其中,在所述程序运行时控制所述计算机可读存储介质所在设备执行上述实施例中的部分或全部步骤。所述的存储介质可为磁碟、光盘、只读存储记忆体(英文:read-only memory,简称:ROM)或随机存储记忆体(英文:random access memory,简称:RAM)等。
具体实现中,本申请实施例还提供了一种计算机程序产品,所述计算机程序产品包含可执行指令,当所述可执行指令在计算机上执行时,使得计算机执行上述方法实施例中的部分或全部步骤。
本申请实施例中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示单独存在A、同时存在A和B、单独存在B的情况。其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项”及其类似表达,是指的这些项中的任意组合,包括单项或复数项的任意组合。例如,a,b和c中的至少一项可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本领域普通技术人员可以意识到,本文中公开的实施例中描述的各单元及算法步骤,能够以电子硬件、计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,任一功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,简称ROM)、随机存取存储器(random access memory,简称RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。本申请的保护范围应以所述权利要求的保护范围为准。
Claims (28)
- 一种视频处理方法,其特征在于,应用于电子设备,所述方法包括:接收目标视频的编辑操作;响应于目标视频的编辑操作,显示第一预览界面,所述第一预览界面内包含有预览框;所述预览框内显示有目标视频;所述目标视频是目标视频文件解码得到的视频;接收对所述第一预览界面的第一操作;响应于所述第一操作,显示第二预览界面,所述第二预览界面内包含有预览框、第一窗口及第二窗口;在第一时刻,所述预览框内显示所述目标视频,所述第一窗口显示第一视频的第i帧视频图像,所述第二窗口显示第二视频的第i帧视频图像,所述第一视频为采用第一滤镜对第一采样视频进行渲染处理后的视频,其内包含m帧视频图像,所述第二视频为采用第二滤镜对第二采样视频进行渲染处理后的视频,其内包含m帧视频图像,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频,i为大于0,且小于m的整数;m为大于1的整数;在第二时刻,所述预览框内显示所述目标视频,所述第一窗口显示第一视频的第i+1帧视频图像,所述第二窗口显示第二视频的第i+1帧视频图像。
- 根据权利要求1所述的方法,其特征在于,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:对所述目标视频文件进行一次解码得到第三视频,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频。
- 根据权利要求2所述的方法,其特征在于,m的值小于第三视频包含的视频图像的帧数。
- 根据权利要求3所述的方法,其特征在于,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频包括:在第三视频中,按照每3帧视频图像中采样1帧视频图像的方式,采样m帧视频图像分别形成第一采样视频及第二采样视频。
- 根据权利要求2所述的方法,其特征在于,所述第一视频及第二视频的分辨率小于所述目标视频的分辨率。
- 根据权利要求2所述的方法,其特征在于,所述第一窗口显示第一视频及所述第二窗口显示第二视频的帧率小于所述预览框显示目标视频的帧率。
- 根据权利要求1所述的方法,其特征在于,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:对所述目标视频文件分别进行两次解码得到两个第三视频,在一个第三视频中采样m帧视频图像形成第一采样视频,并在另一个第三视频中采样m帧视频图像形成第二采样视频。
- 根据权利要求1所述的方法,其特征在于,所述第二预览界面内还包括显示尺寸小于预览框的显示尺寸的进度显示框,所述进度显示框内显示有第四视频内的视频图像,所述第四视频与所述目标视频相同。
- 根据权利要求8所述的方法,其特征在于,所述第四视频的分辨率小于所述目标视频的分辨率。
- 根据权利要求1所述的方法,其特征在于,所述第一窗口及第二窗口的显示尺寸相同。
- 根据权利要求9所述的方法,其特征在于,所述第一窗口及第二窗口的显示尺寸小于所述预览框的显示尺寸。
- 根据权利要求1-11任一项所述的方法,其特征在于,所述第一窗口显示第一视频包括:所述第一窗口循环显示第一视频;所述第二窗口显示第二视频包括:所述第二窗口循环显示第二视频。
- 根据权利要求1-11任一项所述的方法,其特征在于,所述方法还包括:接收对所述第二预览界面的第二操作;所述第二操作用于指示用户选择的目标滤镜;响应于所述第二操作,显示第三预览界面,所述第三预览界面内包含有预览框、第一窗口及第二窗口;所述预览框内显示第五视频,所述第一窗口显示第一视频,所述第二窗口显示第二视频,所述第五视频为采用目标滤镜对所述目标视频进行渲染处理后的视频。
- 一种电子设备,其特征在于,包括用于存储计算机程序指令的存储器和用于执行程序指令的处理器,其中,当该计算机程序指令被所述处理器执行时,触发所述电子设备执行权利要求1-13任一项所述的方法。
- 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括存储的程序,其中,在所述程序运行时控制所述计算机可读存储介质所在设备执行权利要求1-13中任意一项所述的方法。
- 一种计算机程序产品,其特征在于,所述计算机程序产品包含可执行指令,当所述可执行指令在计算机上执行时,使得计算机执行权利要求1-13中任意一项所述的方法。
- 一种视频处理方法,其特征在于,应用于电子设备,所述方法包括:接收目标视频的编辑操作;响应于目标视频的编辑操作,显示第一预览界面,所述第一预览界面内包含有预览框;所述预览框内显示有目标视频;所述目标视频是目标视频文件解码得到的视频;接收对所述第一预览界面的第一操作;响应于所述第一操作,显示第二预览界面,所述第二预览界面内包含有预览框、第一窗口及第二窗口;在第一时刻,所述预览框内显示所述目标视频,所述第一窗口显示第一视频的第i帧视频图像,所述第二窗口显示第二视频的第i帧视频图像,所述第一视频为采用第一滤镜对第一采样视频进行渲染处理后的视频,其内包含m帧视频图像,所述第二视频为采用第二滤镜对第二采样视频进行渲染处理后的视频,其内包含m帧视频图像,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频,i为大于0,且小于m的整数;m为大于1的整数;在第二时刻,所述预览框内显示所述目标视频,所述第一窗口显示第一视频的第i+1帧视频图像,所述第二窗口显示第二视频的第i+1帧视频图像;其中,所述第一窗口显示第一视频及所述第二窗口显示第二视频的帧率小于所述预览框显示目标视频的帧率。
- 根据权利要求17所述的方法,其特征在于,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:对所述目标视频文件进行一次解码得到第三视频,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频。
- 根据权利要求18所述的方法,其特征在于,m的值小于第三视频包含的视频图像的帧数。
- 根据权利要求19所述的方法,其特征在于,在第三视频中采样m帧视频图像分别形成第一采样视频及第二采样视频包括:在第三视频中,按照每3帧视频图像中采样1帧视频图像的方式,采样m帧视频图像分别形成第一采样视频及第二采样视频。
- 根据权利要求18所述的方法,其特征在于,所述第一视频及第二视频的分辨率小于所述目标视频的分辨率。
- 根据权利要求17所述的方法,其特征在于,所述第一采样视频及所述第二采样视频均是从所述目标视频文件解码后的视频中采样m帧视频图像形成的视频包括:对所述目标视频文件分别进行两次解码得到两个第三视频,在一个第三视频中采样m帧视频图像形成第一采样视频,并在另一个第三视频中采样m帧视频图像形成第二采样视频。
- 根据权利要求17所述的方法,其特征在于,所述第二预览界面内还包括显示尺寸小于预览框的显示尺寸的进度显示框,所述进度显示框内显示有第四视频内的视频图像,所述第四视频与所述目标视频相同。
- 根据权利要求23所述的方法,其特征在于,所述第四视频的分辨率小于所述目标视频的分辨率。
- 根据权利要求17所述的方法,其特征在于,所述第一窗口及第二窗口的显示尺寸相同。
- 根据权利要求24所述的方法,其特征在于,所述第一窗口及第二窗口的显示尺寸小于所述预览框的显示尺寸。
- 根据权利要求17-26任一项所述的方法,其特征在于,所述第一窗口显示第一视频包括:所述第一窗口循环显示第一视频;所述第二窗口显示第二视频包括:所述第二窗口循环显示第二视频。
- 根据权利要求17-26任一项所述的方法,其特征在于,所述方法还包括:接收对所述第二预览界面的第二操作;所述第二操作用于指示用户选择的目标滤镜;响应于所述第二操作,显示第三预览界面,所述第三预览界面内包含有预览框、第一窗口及第二窗口;所述预览框内显示第五视频,所述第一窗口显示第一视频,所述第二窗口显示第二视频,所述第五视频为采用目标滤镜对所述目标视频进行渲染处理后的视频。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/002,799 US20240144976A1 (en) | 2021-09-10 | 2022-08-16 | Video processing method, device, storage medium, and program product |
EP22826302.6A EP4171046A4 (en) | 2021-09-10 | 2022-08-16 | VIDEO PROCESSING METHOD, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111062379.5 | 2021-09-10 | ||
CN202111062379.5A CN113747240B (zh) | 2021-09-10 | 2021-09-10 | 视频处理方法、设备和存储介质 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2023035882A1 true WO2023035882A1 (zh) | 2023-03-16 |
WO2023035882A9 WO2023035882A9 (zh) | 2023-06-22 |
Family
ID=78737981
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/112858 WO2023035882A1 (zh) | 2021-09-10 | 2022-08-16 | 视频处理方法、设备、存储介质和程序产品 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240144976A1 (zh) |
EP (1) | EP4171046A4 (zh) |
CN (1) | CN113747240B (zh) |
WO (1) | WO2023035882A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113747240B (zh) * | 2021-09-10 | 2023-04-07 | 荣耀终端有限公司 | 视频处理方法、设备和存储介质 |
CN115022696B (zh) * | 2022-04-18 | 2023-12-26 | 北京有竹居网络技术有限公司 | 视频预览方法、装置、可读介质及电子设备 |
CN116095413B (zh) * | 2022-05-30 | 2023-11-07 | 荣耀终端有限公司 | 视频处理方法及电子设备 |
CN117935716B (zh) * | 2024-03-14 | 2024-05-28 | 深圳市东陆科技有限公司 | 基于mcu的显示参数控制方法及系统 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003092706A (ja) * | 2001-09-18 | 2003-03-28 | Sony Corp | 効果付加装置、効果付加方法、及び効果付加プログラム |
CN102917270A (zh) * | 2011-08-04 | 2013-02-06 | 形山科技(深圳)有限公司 | 一种多视频动态预览方法、装置及系统 |
CN105323456A (zh) * | 2014-12-16 | 2016-02-10 | 维沃移动通信有限公司 | 用于拍摄装置的图像预览方法、图像拍摄装置 |
CN105357451A (zh) * | 2015-12-04 | 2016-02-24 | Tcl集团股份有限公司 | 基于滤镜特效的图像处理方法及装置 |
US20180176550A1 (en) * | 2016-12-15 | 2018-06-21 | Htc Corporation | Method, processing device, and computer system for video preview |
CN112165632A (zh) * | 2020-09-27 | 2021-01-01 | 北京字跳网络技术有限公司 | 视频处理方法、装置及设备 |
CN113691737A (zh) * | 2021-08-30 | 2021-11-23 | 荣耀终端有限公司 | 视频的拍摄方法、设备、存储介质和程序产品 |
CN113747240A (zh) * | 2021-09-10 | 2021-12-03 | 荣耀终端有限公司 | 视频处理方法、设备、存储介质和程序产品 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2766816A4 (en) * | 2011-10-10 | 2016-01-27 | Vivoom Inc | RESTITUTION AND ORIENTATION BASED ON A NETWORK OF VISUAL EFFECTS |
JP6455147B2 (ja) * | 2012-05-22 | 2019-01-23 | 株式会社ニコン | 電子カメラ、画像表示装置および画像表示プログラム |
KR102063915B1 (ko) * | 2013-03-14 | 2020-01-08 | 삼성전자주식회사 | 사용자 기기 및 그 동작 방법 |
CN105279161B (zh) * | 2014-06-10 | 2019-08-13 | 腾讯科技(深圳)有限公司 | 图片处理应用的滤镜排序方法和装置 |
US9626103B2 (en) * | 2014-06-19 | 2017-04-18 | BrightSky Labs, Inc. | Systems and methods for identifying media portions of interest |
KR20160146281A (ko) * | 2015-06-12 | 2016-12-21 | 삼성전자주식회사 | 전자 장치 및 전자 장치에서 이미지 표시 방법 |
CN106331502A (zh) * | 2016-09-27 | 2017-01-11 | 奇酷互联网络科技(深圳)有限公司 | 终端及其滤镜拍摄方法和装置 |
CN109309783A (zh) * | 2017-07-28 | 2019-02-05 | 益富可视精密工业(深圳)有限公司 | 电子装置及其滤镜拍摄方法 |
CN107864335B (zh) * | 2017-11-20 | 2020-06-12 | Oppo广东移动通信有限公司 | 图像预览方法、装置、计算机可读存储介质和电子设备 |
CN111083374B (zh) * | 2019-12-27 | 2021-09-28 | 维沃移动通信有限公司 | 滤镜添加方法及电子设备 |
CN111885298B (zh) * | 2020-06-19 | 2022-05-17 | 维沃移动通信有限公司 | 图像处理方法及装置 |
CN112954210B (zh) * | 2021-02-08 | 2023-04-18 | 维沃移动通信(杭州)有限公司 | 拍照方法、装置、电子设备及介质 |
CN113194255A (zh) * | 2021-04-29 | 2021-07-30 | 南京维沃软件技术有限公司 | 拍摄方法、装置和电子设备 |
Also Published As
Publication number | Publication date |
---|---|
EP4171046A1 (en) | 2023-04-26 |
CN113747240A (zh) | 2021-12-03 |
US20240144976A1 (en) | 2024-05-02 |
WO2023035882A9 (zh) | 2023-06-22 |
CN113747240B (zh) | 2023-04-07 |
EP4171046A4 (en) | 2024-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023035882A1 (zh) | 视频处理方法、设备、存储介质和程序产品 | |
US10735798B2 (en) | Video broadcast system and a method of disseminating video content | |
US8421819B2 (en) | Pillarboxing correction | |
US20170171274A1 (en) | Method and electronic device for synchronously playing multiple-cameras video | |
US6930687B2 (en) | Method of displaying a digital image | |
US8860716B2 (en) | 3D image processing method and portable 3D display apparatus implementing the same | |
US20090327893A1 (en) | Coordinated video presentation methods and apparatus | |
WO2021031850A1 (zh) | 图像处理的方法、装置、电子设备及存储介质 | |
KR20210082232A (ko) | 실시간 비디오 특수 효과 시스템 및 방법 | |
CN106713942B (zh) | 视频处理方法和装置 | |
CN113691737B (zh) | 视频的拍摄方法、设备、存储介质 | |
CN114296949A (zh) | 一种虚拟现实设备及高清晰度截屏方法 | |
EP3684048B1 (en) | A method for presentation of images | |
CN112004100B (zh) | 将多路音视频源集合成单路音视频源的驱动方法 | |
JP2002351438A (ja) | 映像監視システム | |
KR20140146592A (ko) | 컬러 그레이딩 미리 보기 방법 및 장치 | |
CN115002335B (zh) | 视频处理方法、装置、电子设备和计算机可读存储介质 | |
CN116095365A (zh) | 特效处理方法、装置,电子设备和存储介质 | |
CN111221444A (zh) | 分屏特效处理方法、装置、电子设备和存储介质 | |
CN113453069B (zh) | 一种显示设备及缩略图生成方法 | |
CN115706853A (zh) | 视频处理方法、装置、电子设备和存储介质 | |
JP2004325941A (ja) | 描画処理装置、描画処理方法および描画処理プログラム、並びにそれらを備えた電子会議システム | |
CN116847147A (zh) | 特效视频确定方法、装置、电子设备及存储介质 | |
CN110225177B (zh) | 一种界面调节方法、计算机存储介质及终端设备 | |
CN112584084B (zh) | 一种视频播放方法、装置、计算机设备和存储介质 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| WWE | Wipo information: entry into national phase | Ref document number: 18002799; Country of ref document: US
| ENP | Entry into the national phase | Ref document number: 2022826302; Country of ref document: EP; Effective date: 20221227
| NENP | Non-entry into the national phase | Ref country code: DE